February 19, 2026 · 6 min 13 sec read

How to Think Hard in the Agentic Era

Nowadays, working with AI agents has become almost the de facto standard in software engineering. Not only because they boost productivity, but because they unlock more: more experiments, more learning, more ways to explore ideas quickly.

Still, I think there’s a crucial distinction whose importance grows exponentially: the model accelerates you, but you decide the direction. That’s the line that, IMHO, separates AI adopters from AI users.

The hard part is finding balance.

We live in a world of speed, instant answers, instant results. Having that power at your fingertips is amazing, but it’s also a double-edged sword. If you don’t question the agent, if you don’t challenge it, if you don’t dig beneath the surface and ask why it’s doing what it’s doing, you will become outdated. Not because the agent is bad, but because you stopped building your own understanding.

Taste has become more important, but hasn’t it always been? We’ve always needed judgment. We’ve always needed the ability to look at two solutions and feel, deep down, which one is cleaner and more solid. What I do think is changing is that thinking hard is becoming a rare skill, and that sounds ridiculous to me, because thinking hard should be normal. But today it’s easy to avoid it. We can always ask for the answer. We can always ship something that “works.” But then what? What happens when you need to scale, when things break, or when performance actually matters?

So today I want to share how I use AI agents in a way that matches my personality: not just to go faster, but to go in the right direction. I’m curious by default. I’ve always been the kind of person who wants to see what’s under the hood. I was the kid who took apart the RC car just to understand it… and sometimes I even managed to put it back together and make it work again.

And engineering is the same for me. I don’t just want the output, I want the reasoning. I want to understand what’s happening and why, and that requires what society often resists: time.

In that sense, an AI agent can feel like having a distinguished engineer with you 24/7. You can ask questions endlessly until the concept finally clicks. Early in your career, you dream of the engineer who just spits out solutions and helps you close the ticket that has been open for three sprints. Later you learn that’s not what makes you strong (mentally, at least).

You don’t want someone who takes over your keyboard. You want the mentor who helps you learn and build taste.

That’s the core of how I use agents: I want them to work for me, not instead of me. So I spend a lot of time going back and forth with them, asking for explanations, asking for trade-offs, asking “what would you do if…”, not just generating code to fix a bug quickly.

Issue-solving guides with AI agents

One of the best places to learn real, interesting engineering is in the wild: free and open source software. It’s everywhere, it supports most of what we build, and it’s full of problems that are actually worth solving. I honestly believe contributing to open source is a kind of moral duty. We all benefit from OSS in one way or another, and I genuinely respect the open source community.

But truth be told, it’s not always easy to jump into a big open source project.

Most of the easy issues are already fixed. The ones left require context, architecture understanding, and time. And when you’re new to a project, or you don’t know the internals yet, it can feel intimidating.

I’ve spent countless nights scrolling through GitHub issues, trying to find something I could solve: something useful for the community, but also something that would teach me. I love learning by fixing, and if you don’t know how something works, you can’t truly fix it. You can only patch it and hope.

This is one of the places where agents have been genuinely valuable for me.

I built a small custom skill (nothing fancy, mostly markdown templates and a few helpers) that helps me search through repositories I like, filter issues, and sort them by what I’m in the mood to do. Sometimes I want small fixes to understand the codebase. Sometimes I already know the codebase and I want to go deeper: maybe a performance issue, maybe a tricky bug, maybe a race condition that requires real thinking.

When I want to contribute to a project, I run my custom skill and the agent automatically:

  • finds issues in the repo related to what I want to do
  • understands the repository at a high level
  • creates a markdown issue-solving guide for a specific issue in my Obsidian vault
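To make the first step concrete, here’s a rough sketch of what the issue-finding part could look like. This is not the actual skill (which is mostly markdown prompts the agent follows); it’s a hypothetical Python helper, and the function name, label filters, and the “mood” knob are all illustrative. The only real thing it relies on is GitHub’s public issue search API.

```python
# issue_finder.py: hypothetical helper for listing candidate issues in a repo.
# The only real dependency is GitHub's public search API; everything else is illustrative.
import os
import requests

SEARCH_URL = "https://api.github.com/search/issues"

def find_issues(repo, labels, mood="small"):
    """Return open, unassigned issues in `repo` that carry every label in `labels`.

    `mood` only changes the ordering in this sketch: "small" favors recently
    updated issues (quick wins to learn a codebase), "deep" favors heavily
    discussed ones (the tricky bugs and race conditions).
    """
    query = f"repo:{repo} is:issue is:open no:assignee " + " ".join(
        f'label:"{label}"' for label in labels
    )
    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")  # optional, avoids tight rate limits
    if token:
        headers["Authorization"] = f"Bearer {token}"

    resp = requests.get(SEARCH_URL, params={"q": query, "per_page": 30}, headers=headers)
    resp.raise_for_status()
    issues = resp.json()["items"]

    # "Mood" is just a sort order here.
    key = (lambda i: i["updated_at"]) if mood == "small" else (lambda i: i["comments"])
    return sorted(issues, key=key, reverse=True)

if __name__ == "__main__":
    # "owner/repo" is a placeholder: point it at a project you care about.
    for issue in find_issues("owner/repo", ["bug", "help wanted"], mood="deep")[:5]:
        print(f'#{issue["number"]}  ({issue["comments"]} comments)  {issue["title"]}')
```

In practice the agent runs the search itself; the point of sketching it is that “filter issues and sort them by what I’m in the mood to do” is a tiny bit of plumbing. The real value is in what happens with the results next.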

What’s an issue-solving guide?

It’s not “here is the solution.”

It’s a markdown note, created directly in my Obsidian vault, that explains:

  • what the issue actually is (in plain language)
  • what parts of the codebase are relevant
  • what concepts or theory I need to understand first (with a checklist to make sure I went through all of them)
  • where to start reading
  • what to test
  • what to watch out for
  • what a good fix should feel like

It gives me hints in a guided way, kind of like a tutorial mixed with a mentor’s notes, so I can still do the work myself step by step. I can still struggle. I can still think. And if I get stuck, I can ask the agent to confirm whether I’m on the right path.


For some people, this will sound like a “bad use of AI,” because yes: I could ask the agent to fix the issue in minutes and open a PR. And if your goal is purely shipping, that’s a valid approach.

But that’s not my goal here. I’m not doing this for productivity. I’m doing it because I like thinking hard. I genuinely believe thinking hard (and struggling a bit) is the most bulletproof way to gain knowledge. So I’m doing it for myself.

If the only objective was speed, then sure: let the agent solve it, run the tests, and ship it. But here I’m optimizing for understanding.

The agent helps me avoid dead ends while still letting me do the hard part. It’s like having the distinguished engineer we mentioned earlier, the one you can ask:

  • where should I start in this codebase?
  • what should I read before touching anything?
  • what are the performance trade-offs between these two approaches?
  • what’s the risk of this change?
  • how do I test that I didn’t break something subtle?

That’s the kind of help that makes you stronger without making you lazy. And I love those nights where I’m stuck, I’m thinking, I’m breaking things, nothing works… and then suddenly it clicks.

That feeling is still the best part of engineering.

Takeaway

So in this era of agents, that’s my invitation: use them in a way that benefits you without sabotaging you.

Because yes, for me, asking AI to solve things you don’t understand can be a form of self-sabotage. You might ship faster today, but you’re borrowing against your future skills.

If you already understand the problem and you just want speed, go ahead: let the agent help you write the code. That’s smart. But if you don’t understand what’s happening, slow down, ask why, challenge the output, build the mental model, and spend time thinking hard again.

You might get a headache for the first few days. But later you’ll realize something important: speed is temporary; understanding compounds.


P.S.

This is how I understand software engineering.

For me, it’s about depth, mental models, and building taste over time. It’s about enjoying the hard parts and not outsourcing the thinking too quickly. That said, I completely understand that not every context is the same. Product teams often need to ship fast. Startups need validation. Sometimes speed is the strategy. And in those scenarios, using AI primarily for acceleration makes total sense.

I’m not saying one approach is morally superior. I’m just saying that, as an engineer, I choose to optimize for understanding first, because in the long run, that’s what compounds for me.