Part IV: Vibe Coding and the Paradox of Democratization
By Bhanu Nallagonda, Cofounder, Ogha Technologies
April ‘26
One of the most culturally significant trends of 2025 was the mainstreaming of “Vibe Coding”, a trend that accelerated phenomenally in the first quarter of 2026. This phenomenon fundamentally altered the relationship between humans and software creation. What was not possible by the end of 2025, namely getting fully working code as output in one go, became a reality just two months later, by February of 2026.
Vibe Coding – Definition and its Evolution
Coined by AI researcher Andrej Karpathy, “vibe coding” refers to a development style where the human provides the intent (the “vibe”) in natural language and the AI handles the implementation details (syntax, boilerplate, libraries).
Tools like Cursor, Lovable, Bolt and Replit became the standard environments for this workflow, with OpenAI’s models and Anthropic’s Claude again at the frontier in providing the brains. Initially, pure vibe coding was used for “throwaway” weekend projects and rapid prototyping. It then evolved into Architectural Vibe Coding: using AI agents to build and maintain complex, production-grade systems where the human’s primary role is system design and orchestration. Essentially, it is specification-driven development with multi-agent orchestration – an architect agent, a coder agent, an SRE (Site Reliability Engineer) agent and so on. An auditor agent or AI reviewer, such as CodeRabbit or Anthropic’s Claude Code Review, is used to find “AI slop” before the generated code is merged. Claude Code Review uses a parallel agentic architecture, dispatching a specialized pod of agents to review simultaneously: a verification agent acts as lead auditor, reviewing the findings of the specialized agents, each of which focuses on logic, edge cases, security vulnerabilities, dependency integrity and so on. CodeRabbit uses a hybrid architecture that combines a deterministic workflow with two targeted agentic loops rather than a swarm. Recently CodeRabbit added a ‘Fix All Issues with AI Agents’ feature, which generates a single structured prompt that can be handed off to any external agent, such as Claude Code or Cursor, to execute fixes in parallel.
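The fan-out-then-verify pattern described above can be sketched in a few lines. This is a deliberately toy illustration, not any vendor’s actual architecture: the “agents” here are plain Python functions standing in for model calls, and the finding strings and trigger heuristics are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def logic_reviewer(diff):
    # Stands in for an agent prompted to hunt for logic bugs.
    return ["possible off-by-one in index loop"] if "range(len(" in diff else []

def security_reviewer(diff):
    # Stands in for an agent prompted to hunt for injection risks.
    return ["string-built SQL query"] if 'f"SELECT' in diff else []

def dependency_reviewer(diff):
    # Stands in for an agent checking dependency integrity.
    return ["unpinned dependency install"] if "pip install" in diff else []

def review(diff):
    """Dispatch the specialist reviewers in parallel, then have the
    'lead auditor' step merge and deduplicate their findings."""
    specialists = [logic_reviewer, security_reviewer, dependency_reviewer]
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(lambda agent: agent(diff), specialists))
    return sorted({f for batch in findings for f in batch})

diff = 'for i in range(len(rows)): q = f"SELECT * FROM t WHERE id={i}"'
print(review(diff))
# ['possible off-by-one in index loop', 'string-built SQL query']
```

In a real system each specialist would be a separate model invocation over the pull-request diff, and the lead auditor would itself be a model judging which findings are worth surfacing.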
Democratization of Coding
AI makes this so easy: it decouples the logic from the language, its syntax and the drudgery of typing line by line, removing the high barrier of technical fluency. If anyone can clearly specify in natural language what the software needs to achieve, the system will build it for them, giving rise to the Citizen Architect. Any domain expert, be it a doctor, a scientist, a student or a teacher, an end user or anyone with a clear idea, can get that idea implemented in days, if not hours, using AI.
The viability and cost break-even points of many projects across the world will shift, and a huge amount of pent-up development could be taken up by enterprises. Technical debt accumulated over decades can be repaid using AI, and legacy code bases can be ported to modern languages and architectures far more easily.
The advantages of this ability to generate code at will are clear: anyone with an idea can generate code without being a developer, development itself is democratized, and small, lean teams can accomplish a lot in much less time with a lot less money. Yet while AI-generated code often works (and sometimes does not, or does not work exactly as intended!), it has its quirks too, hallucinations among them. Let me illustrate that with a simple, trivial example.
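The kind of innocuous failure discussed below, a hallucinated non-existent database table, can be sketched minimally. The schema and table names here are invented for illustration: the “generated” query references a table the model made up, so the code reads plausibly and fails only at runtime.

```python
import sqlite3

# Set up the schema the way the spec actually defined it: one table, `users`.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The "generated" query below references `user_profiles`, a table the
# model invented; only `users` exists.
try:
    conn.execute("SELECT * FROM user_profiles WHERE id = 1")
except sqlite3.OperationalError as err:
    error = str(err)
    print(error)  # no such table: user_profiles
```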


Here the issue is very innocuous, but the flow of thinking, and thus the range of possibilities, is illustrative. The model can, for example, hallucinate a non-existent database table. Unfortunately, I cannot post more complex or substantive examples here. As the functionality and size of the code increase, the complexity of the issues rises exponentially with it, beyond human comprehension given the pace at which the code is generated.
So, How Much Code Can AI Generate?
With AI generating code at the speed of tokens, we could soon have a world full of code. So how much code can a gigawatt data centre generate?
Back-of-the-envelope calculations indicate that a mere gigawatt can generate many times more code than all of the world’s developers combined, equivalent to the output of half a billion developers! The assumptions behind this number can be challenged, but it gives a sense of the magnitude of the volume we are talking about. Models optimized for code can generate 150 to 200 tokens per watt-hour, while developers produce about 50-100 lines of code per day. People can of course debate these ever-changing numbers on the machine side, while the human rate of output is stagnant. And this AI-generated code comes at a tiny fraction of the human cost.
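The arithmetic behind the half-a-billion figure can be reconstructed roughly as follows. The tokens-per-watt-hour and lines-per-developer figures come from the text; the ~100 tokens consumed per delivered line is my own assumption, meant to cover reasoning tokens, retries and discarded drafts, so treat the result as an order-of-magnitude sketch rather than a derivation of the author’s exact number.

```python
# Back-of-the-envelope: code output of a 1 GW data centre vs. human developers.
WATTS = 1e9                 # 1 GW, run continuously
HOURS_PER_DAY = 24
TOKENS_PER_WH = 175         # midpoint of the 150-200 tokens/Wh range
TOKENS_PER_LINE = 100       # ASSUMED overhead per delivered line of code
DEV_LINES_PER_DAY = 75      # midpoint of the 50-100 lines/day range

wh_per_day = WATTS * HOURS_PER_DAY                   # 2.4e10 Wh per day
tokens_per_day = wh_per_day * TOKENS_PER_WH          # ~4.2e12 tokens per day
lines_per_day = tokens_per_day / TOKENS_PER_LINE     # ~4.2e10 lines per day
dev_equivalents = lines_per_day / DEV_LINES_PER_DAY  # ~5.6e8

print(f"{dev_equivalents:.1e} developer-equivalents")  # 5.6e+08
```

Under these assumptions one gigawatt lands at roughly half a billion developer-equivalents; halving the token overhead or doubling developer output moves the figure by the same factor, which is why the claim should be read as a magnitude, not a measurement.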
Half a billion is many times more than all the developers on earth, and there is no dearth of them: about 31 million as of today.
China has the largest pool with 7 million, rising to 9.4 million if students and hobbyists are included.
The numbers for India are 5.8 million, and 17 million including students and hobbyists. For Europe, it is 6.1 million to 10 million. The US has about 4.4 million. However, the kind of work they carry out differs a lot by region: the US leads product development, Chinese developers predominantly serve the local ecosystem, Indian developers mostly serve the world, and Europe has more senior developers. And that half a billion is for just one gigawatt! It does take 3-4 years to put together a gigawatt of data centre capacity, and many gigawatts of data centres are in the making, though not all are for generating code. However, Anthropic’s Economic Index reports do indicate that about 40-60% of Claude.ai and first-party API usage falls under the dominant computer-and-mathematical category, i.e. primarily code generation, debugging and architecture, with all other use cases diffused and fragmented after that, struggling to break the 10% barrier. In markets like India, code-related tasks account for over 50% of all Claude usage, acting as the “anchor” use case that supports the technology’s economic viability.
The Glut of Code
What has this ability led to? An immediate and huge code overload, a glut of code everywhere, with enterprises scrambling to deal with it.
The visible brunt is being borne by the code repositories of the world – GitHub and GitLab! They no longer host just human-generated code but have become primary execution environments and repositories for autonomous AI agents. In early 2026, GitHub is processing 14 billion commits per year, a staggering 14-fold increase from pre-AI levels. As of March 2026, over 50% of all code published to GitHub is AI-generated, and that share is climbing rapidly. Pull requests (PRs) initiated by AI agents (like Claude Code and GitHub’s own Coding Agent) surged from 4 million in late 2025 to over 17 million per month by mid-2026, a many-fold increase in just a few months. There has been a 4x increase in code cloning and duplication, putting immense pressure on GitHub’s deduplication algorithms and storage clusters. AI agents can trigger hundreds of commits and CI/CD runs in a single hour, as opposed to prolific developers pushing code 5-10 times a day.
This has driven an aggressive shift at these companies from the traditional seat-based pricing model to hybrid consumption models.
GitHub’s original architecture was built for human-scale interaction, not for vibe coders or swarms of agents. GitHub is also currently in the final, high-pressure stages of migrating its massive legacy backend from its own servers to Microsoft Azure. The Agentic Flood hit exactly during this transition, leading to cascading failures and outages. Major outages doubled to 8-10 incidents in Q1 2026, roughly twice the rate of previous years, with 4 major incidents in March alone. There have been service degradations in which AI agents could not talk to the platform, stalling CI/CD pipelines globally.
GitLab shares many of the same underlying, AI-factory-driven issues and pressures as GitHub. It saw several 20-30 minute blips in March (with 99.58% uptime) as it launched the Duo Agent Platform. Unlike GitHub’s massive multi-hour crashes, GitLab’s issues were more “granular” – specific services like Security Dashboards or MR (Merge Request, GitLab’s name for a pull request) reviews failing while the core Git service remained up. While GitHub is focused on handling the raw volume of the “Agentic Flood”, GitLab is positioning itself as the “Intelligent Orchestration” platform, focused on the cost and security of that flood. There is interdependency too: many enterprise pipelines and mirror repos cross both platforms, so a crash on one bottlenecks the other.
Then there was the trend of “TokenMaxxing”, with the likes of OpenAI and Meta rewarding their engineers for high token burn, the highest reported usage being 281B tokens, costing over a million dollars in spend. Meta has recently shuttered this “Claudeonomics” leaderboard.
The Paradoxes of Vibe Coding
What are the dichotomies and paradoxes lurking here? When the models generate code at that pace, humans hardly even read the generated code, let alone write it in the first place.
Thus, the role of the developer has shifted from “writer of code” to “architect of vibes” and “auditor of logic”. The value of a developer is no longer their knowledge of syntax, but their taste, architectural vision, ability to specify clearly, make the right design decisions and spot a subtle bug in a thousand lines of machine-generated code, i.e. better, even superhuman, debugging skills.
Let us contrast this with writing a book versus reading one. We all know that writing a book takes a few to many months (ok, that was without AI!) while reading it takes only a few hours, depending of course on the size of the book. Reading code, as opposed to writing it, is a fundamentally different process, and needless to say much harder than reading a book.
Reading a book is primarily a linguistic activity, while reading code is a logical and spatial problem-solving task. The cognitive load is different, and in fact research shows that reading a book and reading code happen in different parts of the brain – the first activates the brain’s language network, while the latter activates the Multiple Demand network, the same area used for math, puzzles and complex logic, and hardly uses the language areas at all (though ironically the models that generate code so prolifically are still called LLMs).
“Code is read much more often than it is written”, a maxim attributed to Guido van Rossum, the creator of Python, highlights why Python emphasizes readability. Informal sayings in the industry put reading at 10x harder than writing; in reality it depends on many factors, but suffice it to say that it is definitely harder to read and grasp code than to write it in the first place.

The Paradox of Expertise: The Blackbox Problem
While touted as a way for non-technical people to build software, reality revealed a paradox: vibe coding requires more engineering expertise, not less, to be done safely.
- The Comprehension Gap: When an AI generates 90% of the code, or even 100% of it, the human developer often loses track of how the system works. This leads to “Dark Debt”: hidden complexity and bugs that no one understands and that only surface months later. A project might work perfectly for the demo, but when a specific edge case arises in production, the “vibe coder” has no idea where to look in the thousands of lines of machine-generated code. Of course, this is being addressed with AI code reviewers, but the underlying lack of human understanding and comprehension remains. The key paradox here is the loss of interpretability.
- Security Risks: Reports found that 45% of AI-generated code in 2025 introduced vulnerabilities, such as SQL injection flaws or hardcoded secrets. Without a senior engineer’s eye to audit the AI’s output, “vibe coded” apps often became security nightmares. As I write this, this too is being addressed with better AI code and security auditors, and Anthropic’s Mythos rollout is staggered, given the implications.
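To make the SQL injection class of flaw concrete, here is a minimal, invented example of the injectable pattern such reports describe, alongside the parameterized form a senior engineer would insist on. The schema and function names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The injectable pattern: user input interpolated straight into SQL.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice', 'admin')] - leaks every row
print(find_user_safe(payload))    # [] - no user is literally named that
```

The unsafe version compiles, passes the happy-path demo, and fails only when an attacker supplies the crafted input, which is exactly why such flaws slip past a vibe coder’s review.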
- The Productivity Illusion: Studies showed that while AI sped up simple tasks, it actually slowed experienced developers down on complex tasks (by ~19%), because the time saved writing code was lost debugging subtle AI hallucinations. While AI can generate many times the code of human developers, the end-to-end productivity improvements are more modest in percentage terms, as the bottlenecks in the organization shift elsewhere.
The Paradox of Sovereignty: Distributed vs. Centralized Power
True democratization must imply decentralized power. Vibe coding does the opposite: it leaves the vibe-giver dependent on massive model providers. Power has moved from elite coders, millions of them, to GPU-rich corporations and their investors. It is all about Tokenomics.
The Paradox of Semantic Debt
In traditional software, technical debt is messy spaghetti or legacy code. With vibe coding, we see the rise of “Semantic Debt”. Because the user is coding via vibes, that is, imprecise natural language, the specifications are vague, yet the generated code often contains cruft, edge-case behaviour and vulnerabilities that the user never intended and is never aware of. As it becomes infinitely easier to generate infinite amounts of code, the result is software that is only “mostly” correct: a sea of semi-broken, fragile systems that are harder to audit than to build properly in the first place.
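A toy illustration of semantic debt, entirely invented for this purpose: the vibe-level spec says “remove duplicate entries”, and the generated code does exactly that, while silently discarding input order, an edge of the spec the user never stated and may never notice until production.

```python
def dedupe_vibed(items):
    # Plausible generated code for the vibe "remove duplicate entries".
    # Correct on the demo data... except it discards input order, an
    # unstated part of the user's actual intent.
    return list(set(items))

def dedupe_intended(items):
    # What the user implicitly wanted: order-preserving deduplication.
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

events = ["login", "click", "login", "purchase"]
print(dedupe_intended(events))  # ['login', 'click', 'purchase']
print(dedupe_vibed(events))     # same elements, arbitrary order
```

Both functions satisfy the literal specification; only one satisfies the intent. Multiply that gap by thousands of generated lines and you have debt no one ever consciously took on.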
The Paradox of Value: Commoditization of the Developer and Software
If everyone is a developer, paradoxically the value of being a developer, or of the software itself, approaches zero. Ironically, vibe coding actually empowers the senior architect more than the novice. Instead of levelling the playing field, it force-multiplies the power of those who understand system design: a novice can become a better hobbyist, while an elite expert or a small group can become a one-person or lean conglomerate. Entry-level developers used to learn over time and graduate into experienced, higher-skilled designers and architects. With AI and the pressure to shore up the bottom line, entry-level jobs will be hard to come by, and there is already some statistical evidence for this. As a result, this pipeline of software engineering leadership breaks, and new ones will need to be established to produce the industry’s elite. Meanwhile, as models are trained rapidly on synthetic data, ouroboric loops can result in model collapse, a pitfall the frontier labs are fighting.
So vibe coding can generate a significant amount of technical debt if rushed, while at the same time repaying traditional, accumulated debt in certain areas, such as modernizing legacy code faster or enabling enterprise technical and architectural overhauls hitherto not possible. And while it lowers the barriers for startups, or anyone with an idea, to rush out an implementation at a fraction of the cost and time, precisely the same thing increases competition across the landscape, with many more turning their ideas into products and entering the market. Copyright for code may lose its significance, and open source may take on a new meaning and purpose.
Part V: The Changing Landscape of IT Services and Jobs Forced by AI follows.