Working with AI agents is quickly becoming the de facto standard in software engineering. Not only because they make us faster, but because they let us try more things: more experiments, more iterations, more ways to explore an idea before lunch. That part is real, and I like it.
Still, I think there is a distinction that matters more every month: the model accelerates you, but you decide the direction. That is the line that separates using AI as leverage from using AI as a crutch.
The hard part is finding balance.
We live in a world of instant answers and instant results. Having that kind of power at your fingertips is amazing, but it is also a double-edged sword. If you never question the agent, if you never challenge its output, if you never dig beneath the surface and ask why it is doing what it is doing, then sooner or later you start to drift. Not because AI agents are bad, but because you stopped building your own understanding.
And that is why taste matters even more now.
By taste, I do not mean aesthetics; I mean judgment. The ability to look at two solutions that both "work" and still feel that one of them is cleaner, safer, more scalable, or more likely to survive contact with reality. We have always needed that. What is changing is that thinking hard is quietly becoming a rare skill. That sounds absurd to me, because thinking hard should be normal. But today it is very easy to avoid it. We can always ask for the answer; we can always ship something that passes. But then what happens when the system scales, when things break in production, or when performance suddenly matters?
That is the part I refuse to outsource.
So this is the way I try to use AI agents: not just to go faster, but to go in the right direction. I have always been curious by default. I was the kid who took apart the RC car just to understand it, and if I was lucky, I managed to put it back together again.
Engineering feels the same to me. I do not just want the output, I want the reasoning. I want to understand what is happening and why, and that requires the thing modern tooling often tries to remove: time.
In that sense, an AI agent can feel like having a distinguished engineer with you 24/7. You can ask questions until the concept finally clicks. Early in your career, you dream about the engineer who can just jump in, fix the problem, and close the ticket that has been open for three sprints. Later, you realize that is not the kind of help that really makes you better.
You do not want someone who takes over your keyboard. You want the mentor who helps you learn, sharpen your judgment, and build taste.
That is the core of how I use agents: I want them to work for me, not instead of me. So I spend a lot of time going back and forth with them, peer-reviewing code, asking for explanations, asking for trade-offs, asking "what would you do if...", not just asking for code and moving on.
Issue-solving guides with AI agents
One of the best places to learn real engineering is still the wild world of free and open source software. It is everywhere, it supports most of what we build, and it is full of problems that are actually worth solving. I honestly believe contributing to open source is a kind of moral duty. We all benefit from OSS in one way or another, and I have a lot of respect for the people who keep that ecosystem alive.
The problem is that jumping into a large project is not always easy.
Most of the easy issues are already gone. The ones that remain usually require context, architecture knowledge, and time. When you are new to a project and you do not know the internals yet, it can feel intimidating very quickly.
I have spent many nights scrolling through GitHub issues trying to find something I could realistically solve: something useful for the community, but also something that would teach me. I love learning by fixing. And if you do not understand how something works, you cannot really fix it. You can patch it, maybe. But that is not the same thing.
This is one of the places where agents have been genuinely useful for me.
I built a small custom skill for myself. Nothing fancy, mostly markdown templates and a few helpers. It helps me search through repositories I like, filter issues, and sort them by what I am in the mood to do. Sometimes I want a small fix so I can understand the codebase. Sometimes I already know the codebase and I want to go deeper: maybe a performance issue, maybe a subtle bug, maybe a race condition that actually deserves focused thinking.
When I want to contribute to a project, I run that skill and the agent automatically:
- finds issues in the repository related to what I want to work on
- maps the repository at a high level
- creates a markdown issue-solving guide for a specific issue in my Obsidian vault
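The issue-finding step is the easiest part to sketch. This is not my actual skill, just a minimal illustration of the filtering and sorting idea: the field names (`title`, `labels`, `comments`) mirror what the GitHub REST API returns for an issue, but the sample data and the mood-based scoring heuristic are made up for this example.

```python
# Toy sketch of "filter issues and sort them by what I am in the mood to do".
# In practice the issue list would come from the GitHub API; here it is hardcoded.

def score_issue(issue, mood):
    """Rank an issue for a given mood: 'small' favors starter labels,
    'deep' favors performance/bug labels plus active discussion."""
    labels = {label.lower() for label in issue["labels"]}
    if mood == "small":
        return 2 * len(labels & {"good first issue", "documentation"})
    if mood == "deep":
        return 2 * len(labels & {"performance", "bug"}) + issue["comments"] // 5
    return 0

issues = [
    {"title": "Fix typo in README", "labels": ["documentation", "good first issue"], "comments": 1},
    {"title": "Query planner slow on large joins", "labels": ["performance", "bug"], "comments": 23},
    {"title": "Flaky test under parallel runs", "labels": ["bug"], "comments": 8},
]

# Sort deepest-first when I feel like focused thinking.
for issue in sorted(issues, key=lambda i: score_issue(i, "deep"), reverse=True):
    print(issue["title"])
```

The real skill is fuzzier than a scoring function, of course, but the shape is the same: narrow the list down to issues that match both the project and my energy level that night.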
What is an issue-solving guide?
It is not "here is the solution."
It is a markdown note, created directly in my Obsidian vault, that explains:
- what the issue really is, in plain language
- what parts of the codebase are relevant
- what concepts or theory I need to understand first
- where I should start reading
- what to test
- what to watch out for
- what a good fix should feel like
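The note itself is just those sections rendered into markdown. Here is a rough sketch of that generation step; the section list matches the bullets above, but the function name, the template, and the output path are all hypothetical (in practice the file would land in my Obsidian vault).

```python
# Sketch of the guide-generation step: render the guide sections into a
# markdown note. The agent fills in the notes; unfilled sections keep a
# placeholder so I know what to figure out while reading the code.
from pathlib import Path

SECTIONS = [
    "What the issue really is",
    "Relevant parts of the codebase",
    "Concepts to understand first",
    "Where to start reading",
    "What to test",
    "What to watch out for",
    "What a good fix should feel like",
]

def render_guide(issue_title: str, notes: dict) -> str:
    lines = [f"# Issue guide: {issue_title}", ""]
    for section in SECTIONS:
        lines.append(f"## {section}")
        lines.append(notes.get(section, "_to be filled in while reading_"))
        lines.append("")
    return "\n".join(lines)

guide = render_guide(
    "Flaky test under parallel runs",
    {"What to test": "Run the suite with parallelism enabled and a fixed seed."},
)
Path("issue-guide.md").write_text(guide)  # hypothetical path; really the vault
```

Nothing in there solves the issue for me; it just gives the struggle a structure.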
That last point matters to me a lot. A good fix is not only correct. It usually has a shape to it. It respects the codebase. It does not create weird side effects. It does not solve one problem by planting three more.
So the guide gives me hints in a guided way, almost like a tutorial mixed with a mentor's notes. I still do the work myself, step by step. I still get stuck. I still have to think hard. And if I am unsure, I can ask the agent to challenge my direction or confirm whether I am missing something important.
For some people, this will sound like an inefficient use of AI, because yes: I could ask the agent to fix the issue in a few minutes and open a PR. And if your goal is purely shipping, that is a valid approach.
But that is not my goal here. I am not doing this for productivity alone. I am doing it because I like thinking hard. I genuinely believe that thinking hard, and struggling a bit (or a bunch), is still one of the most reliable ways to build real knowledge.
If the only objective was speed, then sure: let the agent solve it, run the tests, and ship it. But in this context I am optimizing for understanding.
The agent helps me avoid dead ends while still leaving the hard part to me. It is like having that distinguished engineer we talked about earlier, the one you can ask:
- where should I start in this codebase?
- what should I read before touching anything?
- what are the trade-offs between these two approaches?
- what is the real risk of this change?
- how do I test that I did not break something subtle?
That is the kind of help that makes you stronger without making you passive. And honestly, I love those nights when I am stuck, thinking, breaking things, trying again, and then suddenly everything clicks.
That feeling is still one of the best parts of engineering.
Takeaway
So in this era of agents, that is my invitation: use them in a way that helps you without weakening you.
Because yes, for me, asking AI to solve things you do not understand can be a form of self-sabotage. You might ship faster today, but you are borrowing against your future skills.
If you already understand the problem and you just want speed, go ahead. Let the agent help you write the code. That is smart. But if you do not understand what is happening, slow down. Ask why. Challenge the output. Build the mental model. Spend time thinking hard again.
You might get a headache the first few days. Later, you realize something important: speed is temporary, understanding compounds.
P.S.
This is how I understand software engineering.
For me, it is about depth, mental models, and building taste over time. It is about enjoying the hard parts and not outsourcing the thinking.
That said, I completely understand that not every context is the same. Product teams often need to ship fast. Startups need validation. Sometimes speed is the strategy. In those situations, using AI mainly for acceleration makes perfect sense.
I am not saying one approach is morally superior. I am saying that, as an engineer, I choose to optimize for understanding first, because in the long run, that is what compounds for me.