You Have to Adapt
The debate around AI in development has seemingly split into two very vocal camps, and I don't find either particularly helpful. Here's where I've landed after adopting AI fully across real projects.
The idea that it's either all-in on AI, vibe coding everything into existence, or a wholesale rejection of it is a false choice. It's an argument between two extremes that conveniently ignores the middle ground where most working developers actually sit. AI assistance is a tool, the same as any other in a developer's kit, and like any tool it has contexts in which it's useful and contexts in which it isn't.
From curious to committed
Before I started contracting, I was curious about AI but wary. Having reviewed PRs made with Copilot, I'd seen the spaghetti mess it can produce. That had put a damper on my uptake: I'd used it occasionally, prodded at it, found it interesting, but hadn't committed to making it part of how I actually work. That changed quickly when I went independent about six months ago. Contracting demands output, and the ramp-up time on new projects is real - new codebases, new stacks, new expectations, all with the clock running. Within weeks of starting, I'd gone from occasional use to full adoption.
The shift wasn't gradual. It was a deliberate decision driven by a practical problem: I needed to deliver at pace across unfamiliar territory, and AI gave me a way to do that without cutting corners on quality. The results were immediate. My output increased significantly, and I found myself able to take on work that I'd have previously needed a longer runway for.
The blocker that disappeared
For years, my main frustration was picking up new languages. I could design the system, I knew what I wanted to build, but the time investment to become comfortable with unfamiliar syntax and idioms was significant. Projects got shelved because I couldn't justify weeks of ramp-up for work that needed output now. It was a genuine bottleneck, and one that experience alone couldn't fully solve because the landscape keeps moving.
AI changed that. Not by writing the code and handing it over, but by collapsing the feedback loop. I can scope out a project, define the constraints and the stack, then work alongside an AI agent, learning the syntax as I go and querying differences with languages I already know. It's closer to pair programming with a very patient colleague who happens to know every language than it is to "vibe coding".
Languages that would have taken weeks to feel productive in now take days, and the understanding sticks because I'm actively working through problems rather than passively reading documentation.
The trade-off is real
There are always trade-offs, and I'd be dishonest if I said this one had no cost. It's worth being upfront about: my line-by-line awareness of what the code is doing has decreased. When an agent produces fifty lines of working code, I'm not absorbing every detail the way I would if I'd written each line myself. That's a genuine loss, and one I'm conscious of.
But what's increased is arguably more valuable at this stage of my career. My awareness of global functionality - how the pieces fit together, how data flows through the system, where the boundaries are - has sharpened because I'm spending more time thinking at that level rather than getting bogged down in syntax. My understanding of code structure has improved for the same reason: when you're reviewing and directing rather than typing every character, you naturally focus on architecture and patterns. And my test coverage has gone up substantially, partly because AI is very good at generating tests and partly because the time I've saved on implementation gives me room to actually write them properly.
The trade-off isn't "AI makes you worse". It's that the skills you exercise shift, and you need to be aware of which ones are atrophying so you can address it deliberately.
Experience as the filter
Here's the thing that makes this work for me, and it's the bit that the vibe coding debate consistently misses: I already know how to code. Close to two decades of building software means I know structures, algorithms, architecture, systems. I know the trade-offs involved in choosing one approach over another, or can get to grips with them quickly because I've seen enough similar decisions play out. That knowledge doesn't disappear because an agent wrote the implementation.
This is the lens I view the entire process through. When AI produces code, I'm not accepting it at face value - I'm filtering it against everything I know about how software should be built. Does this structure make sense for the project? Will this scale? Is this introducing complexity that isn't warranted? Are there edge cases the agent hasn't considered? These aren't questions you can ask if you don't already have the experience to know what the answers should look like.
Without that foundation, I'd be fully in vibe coding territory - accepting output because it runs, not because it's right. The difference between using AI as a senior developer and using it without that background is the difference between directing a build and hoping for the best. The tool is the same, but what you bring to it determines whether the result is robust or fragile.
This is also what offsets the line-by-line knowledge tax. Yes, I'm less familiar with the granular detail of every function the agent writes. But I can look at a module and tell you whether it belongs, whether it's doing too much, whether the boundaries are in the right place, and whether it'll cause problems six months from now. That's the skill set that matters when you're responsible for the outcome, not just the output.
The numbers tell the same story
The data on AI adoption over the past two years paints a clear picture of how fast this has moved. Stack Overflow's 2024 Developer Survey found that 76% of developers were using or planning to use AI tools. Just a year later, their 2025 survey pushed that to 84%, with 47% using AI tools daily. By January 2026, JetBrains' AI Pulse survey of over 10,000 developers found that 90% were regularly using at least one AI tool at work, with 74% on specialised developer tools rather than just chatbots.
But here's the telling part: while adoption has surged, trust hasn't kept pace. Stack Overflow found that 46% of developers don't trust the accuracy of AI output in 2025, up from 31% the year before. More developers are using these tools, and fewer of them trust what comes out. That's not a contradiction - it's exactly what you'd expect when adoption outpaces experience. The developers who trust the output least are often the ones best placed to use it well, because they know enough to verify rather than assume.
Where it falls down
AI is not good at everything, and pretending otherwise does a disservice to the conversation. It struggles with nuanced architectural decisions that require understanding of business context, team capability, and long-term maintenance burden. It can produce code that works but is structurally wrong for the project it's being dropped into. It has a tendency to over-engineer, to reach for abstractions and patterns that aren't warranted, and to generate a volume of code that creates its own maintenance overhead.
I've had agents confidently produce solutions that passed tests but would have been a nightmare to maintain, or that introduced dependencies the project didn't need, or that solved a problem in a way that was technically correct but completely at odds with the existing codebase. Catching this requires experience, and that's the part of the job that AI can't replace. Recent benchmarks back this up - while AI-enabled coding is faster, actual delivery of quality features can be slower due to the increase in code review time and logic errors.
The developers who will struggle are the ones who skip building that experience because the tool made it feel unnecessary. The ones who will thrive are the ones who use the tool to build experience faster, which is a subtle but important difference.
Adapting is the job
Technology has always required adaptation. The developers I've worked with over close to two decades who stayed relevant weren't the ones who picked a stack and defended it forever - they were the ones who evaluated new tools honestly, adopted what worked, and discarded what didn't. AI is just the latest in a long line of shifts that reward pragmatism over dogma.
My approach is straightforward: use AI where it genuinely helps, maintain the skills to work without it, and make the choice deliberately rather than by habit. It's not a revolutionary position, but it's a practical one, and in my experience the practical positions tend to age better than the ideological ones.
If you're navigating how AI fits into your development process, or you're building a team and want to get the balance right, I've spent close to two decades in engineering leadership, including as CTO through a successful acquisition. Get in touch.