Isaac Newton once wrote, "If I have seen further, it is by standing on the shoulders of giants." LLMs and agentic coding tools are the giants now roaming our software engineering domain. Whether they actually improve software engineering is still strongly debated, with solid arguments on both sides; curious readers will find plenty of examples of successes, failures, and mixed results in blogs and online discussions.
Our engineering team has been using agentic coding tools for the last six months to streamline day-to-day software engineering tasks such as code review, root-cause analysis of production incidents, writing unit tests, and onboarding new team members. We have improved our sprint velocity, written better code with higher test coverage, and resolved production incidents more quickly. Here is a collection of the tips we have found useful, along with use cases that showcase how agentic coding helps in writing and maintaining software. The Claude Code commands used in these use cases can be found in this GitHub repo.
Tips to help with Agentic Coding
If you do not have a good grasp of the language and the framework in use, you will likely not be effective or efficient with the agentic coding tools.
Always review all changes made by the agent; you are the owner of the code.
Direct the agent with very specific and detailed instructions/prompts.
Create a feedback loop for the agentic tool: Analyze → make changes → write tests to verify the changes.
For complex tasks and changes, ask the agent to think hard and justify its solution.
Watch out for the cost of long-running tasks (consider using a cheaper model).
Use case 1: Reviewing Pull Requests
The agent was asked to review a pull request containing the following snippet, which serializes a DataFrame into gzipped JSON lines and uploads the result to S3:
import gzip
from json import dumps  # assuming the standard-library JSON encoder

def convert_data(df: DataFrame) -> bytes:
    # DataFrame here is the project's own class exposing _data and _columns.
    # Serialize each row as a JSON line, then gzip the accumulated buffer.
    buffer = bytearray(b"")
    for row in df._data:
        row_dump = dumps(dict(zip(df._columns, row)))
        buffer.extend(f"{row_dump}\n".encode("utf-8"))
    return gzip.compress(bytes(buffer))
session = aioboto3.Session()
async with session.client(
    "s3",
    region_name="ap-east-1",
    config=get_aioboto_config(),
) as s3_client:
    await s3_client.put_object(
        Body=convert_data(self),
        Bucket=bucket,
        Key=key,
    )
Agent recommendations
Reuse the session and possibly use a connection pooling approach (a sketch of this pattern follows the recommendations below).
Improve the performance of the convert_data function by eliminating 1) multiple memory allocations, 2) repeated per-row encoding, and 3) buffer extension and manipulation.
Add a short-circuit for empty data. The revised function:
def convert_data(df: DataFrame) -> bytes:
    # Short-circuit for empty data.
    if not df._data:
        return gzip.compress(b"")
    columns = df._columns
    # Build all JSON lines first, then join, encode, and compress once.
    json_lines = [dumps(dict(zip(columns, row))) for row in df._data]
    res = "\n".join(json_lines) + "\n"
    return gzip.compress(res.encode("utf-8"))
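For the first recommendation, a minimal sketch of reusing a single aioboto3 client across uploads could look like the following. The S3Uploader wrapper, its method names, and the default region are illustrative assumptions; only the aioboto3 session/client calls come from the snippet above.

from contextlib import AsyncExitStack

import aioboto3


class S3Uploader:
    # Hypothetical wrapper: opens one S3 client and reuses it (and its underlying
    # connection pool) for every upload, instead of creating a session per call.
    def __init__(self, region_name: str = "ap-east-1") -> None:
        self._session = aioboto3.Session()
        self._exit_stack = AsyncExitStack()
        self._region_name = region_name
        self._client = None

    async def __aenter__(self) -> "S3Uploader":
        # Enter the client context once and keep it open for the object's lifetime.
        self._client = await self._exit_stack.enter_async_context(
            self._session.client("s3", region_name=self._region_name)
        )
        return self

    async def __aexit__(self, *exc) -> None:
        await self._exit_stack.aclose()

    async def put(self, body: bytes, bucket: str, key: str) -> None:
        await self._client.put_object(Body=body, Bucket=bucket, Key=key)

Callers would create one S3Uploader for the lifetime of the service and call put() per upload, instead of building a new session and client for every request.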
Use case 2: Root-cause analysis of a production incident
We had been experiencing what looked like a resource leak in our heavily used Java web service running on EKS in AWS; the symptom was connection timeouts from the service to its storage services. The web service is built with Spring and an ORM. By enabling spring.datasource.hikari.leak-detection-threshold in the deployment, we were able to collect a detailed stack trace of the error in production.
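For reference, this is a standard HikariCP setting exposed through Spring Boot; a minimal sketch, assuming an application.properties file and an illustrative 30-second threshold (HikariCP takes the value in milliseconds):

# Illustrative value only; not the threshold used in our deployment.
spring.datasource.hikari.leak-detection-threshold=30000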
Exceptions
LOGGING_FORMAT: json
caller_class_name: com.zaxxer.hikari.pool.ProxyLeakTask
caller_file_name: ProxyLeakTask.java
caller_line_number: 84
caller_method_name: run
level: WARN
level_value: 30000
logger_name: com.zaxxer.hikari.pool.ProxyLeakTask
message: Connection leak detection triggered for org.postgresql.jdbc.PgConnection@123456c on thread http-nio-8080-exec-5, stack trace follows
stack_trace: java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128)
at org.hibernate.engine.jdbc.connections.internal.ConnectionProviderImpl.getConnection(ConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:38)
...
Agent recommendations
Once the exception details and the stack trace were fed into Claude Code (Opus 4.1), it fixed the issue:
Identified the root cause of the issue: transactions were being used around S3 read/write operations, which kept database connections checked out
Made the code change, scoping transactions to database access only and removing them around S3 access, and added unit tests to verify the change
Ran the unit tests and verified its changes
Cost
Total cost: $65.09
Total duration (API): 25m 28.1s
Total duration (wall): 17h 32m 53.9s
Total code changes: 1012 lines added, 36 lines removed
Usage by model:
claude-3-5-haiku: 70.5k input, 3.8k output, 0 cache read, 0 cache write
claude-opus-4-1: 1.2k input, 48.6k output, 23.4m cache read, 1.3m cache write
claude-sonnet: 168 input, 7.7k output, 2.1m cache read, 164.0k cache write