When code is no longer only human (because of AI): the new philosophical issues surrounding the use of chatbots for programming
- Graziano Stefanelli
- Jul 7
- 8 min read

Programming meets delegation: what remains of the programmer’s craft?
In the last two years, the arrival of AI chatbots capable of generating functional code on demand has swept through online developer communities, sparking a new season of debates that oscillate between enthusiastic curiosity and existential anxiety. On forums like Reddit, where discussions often run deeper, questions have emerged that go beyond mere technical functionality, touching on the very meaning of creative work, the nature of authority in software, and the fate of craftsmanship in coding.
A programmer who copies and pastes a block generated by ChatGPT—is that person truly “writing code”? And if the result works, does the source even matter? The delegation of ever-growing portions of work—from tests to the most basic routines, but increasingly also to more “thoughtful” segments—leads one to wonder where human responsibility starts and ends, and what value remains in the developer’s skill, intuition, or even personal taste.
On the most-followed threads of r/Programming and r/ExperiencedDevs, there are accounts of entire teams entrusting AI with the generation of tests, documentation, and even complete services, leaving senior developers with the task of "overseeing" or reviewing the code. Some recount gradually losing fluency with traditional patterns now automated by AI, while others defend the need to keep honing manual skills to avoid becoming mere "prompt directors." Alongside these accounts, memes and jokes about the anxiety of becoming obsolete are multiplying, but so are new narratives that describe computer science as an evolving discipline in which creativity shifts toward the design and integration of technologies rather than the material writing of lines of code.
The figure of the author: who can sign off on software in a world of LLMs?
In the past, authorship of a function, a library, or an entire application was clear: it could be attributed to a person or team, with their expertise, ideas, and unmistakable style. Today, the massive insertion of Large Language Models into the workflow upends everything: a well-written prompt can generate hundreds of lines of code in seconds, perhaps further refined with Copilot or Claude.
So who is the real author? The programmer who orchestrates prompts, edits the output, and decides what to keep? The model itself, as an artificial “intelligence” capable of surprises? Or the collective of developers whose data trained the AI? This dilemma, often dismissed as academic, comes back forcefully in real cases: academic papers where GPT-4o code predominates, startups releasing tools almost entirely “assembled” from AI output, company policies requiring disclosure or even banning generated code for compliance reasons.
Situations in which the chain of authorship becomes muddled are multiplying, as happens in open source projects on GitHub that monitor the number of "Copilot commits," or in academic contexts where many conferences and journals now require disclosure of AI usage in code drafting. Cases of rejection or requests for deep revision due to overly "automated" contributions are on the rise, and companies are developing policies to delimit the scope of individual responsibility. Even in legal and patent discussions, new issues are emerging: if a solution is generated by an AI, who truly holds the rights to it, or the responsibility for its effects?
The epistemology of code: knowing what it does and why
Another almost Kantian philosophical issue concerns the transparency and comprehensibility of the software produced. Historically, every programmer is called upon to “understand” what they write: they know the logic, anticipate behaviors, and take responsibility for every line. With AI, the process transforms: the human supervises, checks, sometimes “accepts” without truly understanding. The relationship between programmer and code becomes more like that between architect and industrial supplier: not everything is under one’s control anymore, not everything can be explained in a linear way.
But is “opaque” code, even if perfectly functional, still reliable? Engineering culture demands control, traceability, and the explanation of every step. However, the rise of “blind trust” in AI output presents a radical challenge to technical knowledge: can we still speak of knowledge, or are we entering the realm of pure acceptance of the result, as if by magic?
On code review forums, a growing difficulty in deeply verifying chatbot output is emerging: the code generated often appears clean and coherent, but can hide pitfalls hard to spot without exhaustive testing. Some developers recount hours spent not so much fixing trivial errors as deciphering the machine’s design choices, sometimes inspired by inaccessible training data. In more regulated sectors, such as banking or healthcare, this phenomenon has already led to the emergence of new professional figures tasked with “AI code auditing,” with strict procedures demanding detailed documentation and human explanations for every line generated automatically.
The ethics of responsibility: bugs, damage, and the temptation of abdication
If a routine generated by a chatbot causes damage, who is responsible? On forums, it’s not uncommon to find developers discussing—with an ironic but worried tone—cases where using AI code led to vulnerabilities, data loss, or malfunctions. What, then, distinguishes a “human” error from an “automated” one? Does AI become a shield for superficiality, or does it require even more attention and verification from the human?
Here the discussion becomes deeply ethical: the temptation to abdicate responsibility is strong, but companies, institutions, courts (and the community itself) cannot accept a blank delegation. Today, more than ever, the programmer is called to the role of curator, reviewer, and guarantor—a profession shifting from production to oversight, but no less serious or responsible. Some see this as a loss of agency, others as a maturation of the role.
Discussions on Reddit, especially in r/technology and r/cscareerquestions, provide examples of serious bugs caused by AI routines that were not thoroughly reviewed. There are stories of companies that suffered financial losses due to errors that escaped human oversight, as well as internal policies requiring developers to document every use of AI precisely to avoid shifting blame onto an impersonal "algorithm." Some describe the frustration of having to "sign off" on code largely written by a machine, while others embrace the new role of supervisor, believing that human critical judgment is more central than ever.
The training problem: “prompt-based” generations and the risk of superficiality
Another question lighting up Reddit and similar platforms is that of new generations of developers: is a divide forming between those who learned to program “from scratch,” making mistakes, iterating, and those who build only via prompts, never really delving into the deep structure of languages, algorithms, and architectures? Seniors fear that automating the “boring” parts will leave young people without the critical foundations of the craft, unable to diagnose errors, truly optimize, or innovate beyond patterns learned from AI.
On the other hand, some argue that democratizing coding is an opportunity: access widens, the technical barrier drops, creativity can be freed from the constraints of syntax. It’s a tension reminiscent of old disputes between craftsmen and industrialists, classicists and modernists: every innovation brings both losses and gains, and the balance remains open.
Just scroll through the megathreads of r/learnprogramming or r/AskAcademia to see heated debates: on one side, young people recounting how they started writing full apps in a few weeks thanks to ChatGPT; on the other, professionals warning about the danger of "building castles on sand" without a real understanding of the fundamentals. There are anecdotes about students passing university exams with AI code but then struggling to solve nonstandard problems, and reports from companies finding it hard to hire developers able to resolve complex bugs on their own.
The risk of standardization and the disappearance of personal style
Many software philosophers emphasize how AI-generated code tends to be “neutral,” homogeneous, lacking in distinctive traits. The risk is not only aesthetic: in a world where solutions all look alike—because they derive from common corpora—variety, inventiveness, and even the “voice” of the author are lost. Standardization can facilitate maintenance, but risks flattening the discipline, turning programming from an art into mere procedure.
Some see in this trend a new era of “bureaucratization” of code, others the foreshadowing of a society where AI generates and validates almost all software, leaving humans only with governance and high-level choices. What will become of the joy of discovery, the satisfaction of solving a difficult problem with a new approach? This is a question that resonates, in its own way, in all the densest Reddit threads and on sector boards.
In recent months, there has been an increase in reports of open source and commercial projects exhibiting a kind of stylistic "uniformity," even when created by very different teams. There are testimonies from developers who can now "spot at a glance" code generated by ChatGPT or Copilot, due to its repetitive structure and lack of personality. At the same time, some celebrate standardization as a tool for global collaboration and accessibility, while others mourn the loss of the "craft" creativity that used to characterize projects, believing that software's richness also comes from the diversity of styles and solutions.
The “new” philosophy of programming: creativity, collaboration, and the future of work
Ultimately, the massive presence of AI chatbots in code writing forces a rethinking of many philosophical categories related to the programmer’s craft. We are shifting from a model of “solitary craftsmanship” to one of collaboration between humans and machines, where value moves from manual skills to design, from execution to process planning.
It’s a shift that evokes great philosophical themes: the tension between control and delegation, between explicit and tacit knowledge, between technique and creativity. The human is no longer the only actor capable of writing code, but remains the only one able to ask radical questions, to imagine new uses, to interrogate the machine about the direction to take.
In the most sophisticated discussions on specialist subreddits, contrasting visions emerge: on one side, those who fear a loss of “ownership” and meaning in work; on the other, those experimenting with unprecedented forms of human-machine co-creation, where AI is both tool and collaborator. Some threads gather examples from hackathons and competitions where human code blends with AI output, producing unpredictable results. There are even theories proposing a new ethics of “transspecies” collaboration, where creativity is no longer a purely human prerogative but a distributed process among different agents, each with its unique contribution.
Toward new responsibility and a new aesthetic
Today’s debate suggests that the real challenge is neither to “resist” AI nor to delegate everything, but to learn how to integrate generative tools into a reflective practice, capable of recognizing limits and possibilities, of keeping ethics and the search for meaning alive even as productivity explodes. From this perspective, philosophy is not a luxury, but a necessity: it serves to orient, to give human depth to the craft of programming in a world where the human is no longer alone.
Thus, in endless online discussions, a new terrain of debate emerges: that between efficiency and authenticity, between responsibility and abdication, between innovation and memory of the past. And in the end, perhaps the most important question remains the same as ever: what place should we reserve for the human, when machines learn to write even our dreams in machine language?
Among the perspectives most cited in forums and specialist publications is the need to define new rules of responsibility, new metrics of quality, and new spaces for individual expression even within a deeply automated environment. We are already witnessing the birth of “manifestos” asserting the centrality of the human in orchestrating systems, but also the emergence of an aesthetic sensibility that values transparency, diversity, and collaboration more than an obsession with pure productive efficiency. In this direction, philosophy can represent not only a tool of resistance but a compass to navigate the complexity and ambiguity of this new season of software.
_______
DATA STUDIOS