There are some experienced developers who caution against using AI while developing software, basically as a rule. The core of their argument is that if you use AI to help you write code or make decisions, your abilities will not progress, or will even regress. One such developer I know uses this analogy: if you copy someone else’s homework, do you actually learn anything? According to him, that’s what using generative AI tools such as ChatGPT, Copilot, or Grok while coding amounts to.
I think that’s a misreading of the situation. That analogy assumes that the developers who use AI tools (76% of developers, according to a 2024 Stack Overflow survey) are blindly copy-pasting code with no regard for how it actually works, leaving their repositories as patchworks of potentially buggy or misaligned code while their actual software engineering abilities atrophy.
That doesn’t make sense—neither logically, nor based on my experience as a developer.
Firstly, it doesn’t make sense in terms of incentives. Developers have a natural desire to grow their knowledge and abilities. If their skills were to atrophy, their chances of professional advancement (promotions, new contracts, and so on) would atrophy too, because the decline would inevitably show up in the quality of their work. If generative AI really had such an effect, developers would choose not to use it, and that is overwhelmingly not the case.
Secondly, regardless of whether a snippet of code was generated by AI or by a developer, it is still the developer who is responsible for integrating that snippet into a wider context (i.e. into the surrounding features or wider product). Doing so requires assessing if and how a suggestion by AI makes sense, which is an active thinking exercise, and frequently also a learning experience. And, because AI can generate code snippets faster than you can, using AI increases the rate of such learning experiences, as less time is spent manually writing out the simple stuff that you’ve written out robotically (or copy-pasted) 1000 times before.
Additionally, any developer will tell you that it’s frequently necessary to adapt, amend, or reject suggestions provided by AI. In fact, it’s common knowledge at this point, and not just among developers, that ChatGPT and its siblings can get things wrong: really, really wrong. Developers know this intimately; most of us have a horror story from learning it for the first time. (Mine involves losing an entire night’s sleep configuring a cloud deployment that ChatGPT had reassuringly informed me would be totally suitable for my unique use case. It wasn’t.) Every developer who doesn’t want to shoot themselves in the foot knows to carefully scrutinise AI’s suggestions.
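To make that concrete, here is a small, invented example of the kind of review that happens constantly. The function names and the scenario are hypothetical, not taken from any real suggestion; it is only a sketch of what scrutinising a suggestion looks like in practice.

```python
# Hypothetical AI-suggested helper, pasted as-is. It runs, and it does
# remove duplicates, but it silently discards the input ordering, which
# (in this invented scenario) the calling code depends on.
def dedupe_suggested(items):
    return list(set(items))  # set() does not preserve order


# What the developer ships after reviewing the suggestion: same intent,
# but order-preserving, which is the behaviour the callers actually need.
def dedupe(items):
    seen = set()
    unique = []
    for item in items:
        if item not in seen:
            seen.add(item)
            unique.append(item)
    return unique
```

Spotting that difference, deciding whether it matters in context, and amending the code accordingly is exactly the kind of active thinking the homework analogy ignores.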
So it’s quite a bit different from copying someone’s homework.
I believe the reality is more like this: developers use AI as a tool and a resource to solve the complex problems that face them at work. Using these tools helps them work more efficiently, and it expands their knowledge of the programming languages and other technologies they are using to solve those problems. For these reasons, developer use of LLMs benefits the developer just as much as it benefits the wider business in which their work sits.
With these points in mind, I think the anti-LLM, coding-purist argument reduces to the following contention: if developers spend less time writing the mechanical code that generative AI can automate, their ability to recall and produce syntax by hand may decline (though, importantly, their understanding of what the code does most likely will not). That’s true, but it’s not a problem, for two reasons:
- Automation is nothing new. IDEs (integrated development environments) have had automation features like IntelliSense since long before LLMs were around, and these already reduced the amount of code written out by hand.
- Unless you’re preparing for some kind of special exam, syntax recall just doesn’t matter anymore. The world is changing: just as the need to write machine code disappeared once it was abstracted away, the need to write the more mundane, syntactically mechanical code in today’s programming languages is being obviated by automation tools.
So if the goal is personal mastery of the syntax of a programming language, from the ground up, then indeed LLMs will reduce our expertise in that regard. But if the goal is delivering high-quality software products (with high-quality code) on an efficient timeframe in service of a business goal, then LLMs are our friend.
So why the resistance, from such intelligent and experienced people no less?
I think that the experience itself may explain the resistance. Some developers have spent decades refining and perfecting their coding abilities. Their mastery of a specialised programming domain has become a key part of their identity, and the crux of their high value in the industry. LLMs have blown that apart, eliminating knowledge monopolies. Not only has the barrier to entry for various tools and technologies been lowered; in some cases, a less experienced developer equipped with an AI companion may now provide more value to a team or company than a more experienced developer without one. Seeing a new generation of developers deliver advanced features without going through the same learning process you did can make it feel like your hard work has been devalued.
It’s not just in software engineering that AI is eroding knowledge monopolies. The same phenomenon is visible in other knowledge-driven professions, such as medicine: many doctors are cynical and critical of patients who have the audacity to research medical matters themselves, or to consume media such as Huberman Lab, a podcast that seeks to provide science-based health information at no cost to the listener.
What it means to be a developer is changing. It might be somewhat sad that people will no longer write so much code themselves, but keep in mind that today’s most popular programming languages are themselves relatively recent inventions. They displaced old ways of doing things, changed the technology landscape, and opened up new opportunities for everyone. LLMs are simply doing the same thing today, and to keep up, we all have to adapt.