I remember my father telling me to look things up in the encyclopedia myself. He said I'd remember them better if it cost me a bit of effort. He was right.
That small friction of walking to the shelf, flipping through pages, and actually finding the answer yourself: that's where learning happens. Not in the answer itself, but in the hunt for it.
Learning, thinking
Learning has been a topic in my life for as long as I can remember. At school, I wondered why it seemed to come easier to some classmates. At university, I questioned whether the teaching actually worked or if I was just ticking semesters off.
Now, with large language models everywhere, the landscape has changed again. AI can explain, summarize, generate, and even "reason." It can make you feel like you're learning, even when you're just scrolling through some "predicted" understanding.
That's the trap: the false sense of thinking and the false learning feeling.
When answers come too easily, curiosity dies. The hard parts start to fade: the digging, the debugging, the small moments of friction where understanding actually forms.
The best problems are still the ones that make you forget to eat lunch because your brain is fully alive, wrestling with the unknown.
That's intrinsic motivation. The hunger not for completion, but for comprehension.
Artificial Intelligence
I believe in AI. I default to it. It has already changed the game, and there's a new normal.
But I believe even more in agency: the will to know why things work, not just that they do.
We're at a strange point right now. The model writes, builds, tests, and ships. It answers with perfect confidence, fluent and certain. But fluency isn't understanding, and certainty isn't truth.
You don't need to know how every piece works to stay sharp (unless you are curious about it), but you do need the will to question what the model gives you. You need to pull at the seams of its reasoning until you see the logic underneath. It's not about knowing every detail. It's about refusing to take its word as final.
Why this output? What's missing?
Use AI to amplify your thinking, not replace it.
The model accelerates you, but you decide the direction.
Trace the reasoning, refine the prompt, build an intuition around what it tends to miss. Be curious about understanding what it did, not just grateful that it did something.
That's how you stay in control. That's how you stay an engineer.
The important thing is not to stop questioning. Curiosity has its own reason for existing.
— Albert Einstein
Yes, default to AI. But let that be the start, not the end.
Because in the age of infinite assistance, actually thinking hard is rebellion.
And that thinking is what moves the world. Or at least, it moves mine.