Denksaft

by Jan Wedel

Trojan horse

The Trojan Horse

Software engineering is currently standing at a precipice. On one side, we see the promise of infinite productivity and the end of tedious "janitorial" work. On the other, we see a slow erosion of the very principles that make software reliable and secure up until the point when software developers are obsolete - all driven by bets on future profits.

We are being handed a gift - a powerful, autonomous engine that can build products in minutes. But as the saying goes, "Beware of Greeks bearing gifts."

The "Sugar High"

Only a few months ago, most of the code produced by AI assistants was objectively poor and required significant manual cleanup. We used it primarily for tedious "janitorial" work - tasks anyone could have performed, such as "rename all these test cases using a new naming pattern."

With the rise of more complex tools like Claude Code and Copilot CLI, the landscape has changed. These tools allow for better problem structuring by passing tasks to specialized sub-agents - such as a "Bug Analyzer," "Software Engineer," or "Database Expert" - that follow written guidelines, utilize specific skills, and work together as an agentic team.

Recently, we faced a complex database performance issue that several seniors couldn’t solve, at least not within an hour of brainstorming and trial and error. I started an empty DB container on my machine, handed over the credentials, and instructed the agent to populate the database with the schema and data, find a solution, and measure the gains. It crunched away while I focused on other tasks. Eventually, it returned with a working solution, proven by measurements and an explain plan. We reviewed it, and it performed perfectly.

Should we now all stop coding, as Jensen Huang suggests, or at least adopt an AI-native workflow, as McKinsey advises? Does a Junior developer now "outperform" an experienced Senior, as Andrew Ng claims?

When you don't buy the "we won't need developers" hypothesis, we must invest in educating Juniors so that, ten years from now, we still have people capable of developing software. I already have nightmares of being called out of retirement to fix a legacy system I once built, simply because no one else - including the AI - understands how the software works anymore. This is a "knowledge debt" that will increase over time and requires payment at some point in the future unless it can be covered by enough markdown documentation is added to the AGENTS.md.

The Death of Best Practices

Over decades, the field of software engineering has developed best practices and patterns that prevent bugs and ensure software remains readable and maintainable.

Companies like Anthropic advocate for "AI-native" development, suggesting you can implement a product without writing a single line of code. In fact, they claim that nearly 100% of the Claude Code CLI was developed with AI.

How does that work in practice? Following the recent leak of the Claude Code source code, we now know it doesn't hold up well-at least not compared to traditional standards.

Despite the irony of people now using Claude Code to translate its own source code into Python and Rust, the original code is abysmal. It features 5,000 lines of code in a single convoluted file with endless if/else branches, poorly named variables, and methods with 30 parameters.

When we say a Junior "outperforms" a Senior, we might agree that a Junior with AI can produce more lines of code. But as any experienced developer knows, code is a burden. Ideally, you want the smallest amount of maintainable code possible to solve a problem.

One could argue that you no longer need to read code when you have LLMs to do it for you, or that you can simply treat code as a "black box" to be thrown away and rewritten by AI whenever needed.

The Black Box Dilemma

When we treat software as a black box, we lose visibility into its internal state. As software becomes more complex, it becomes exponentially harder to "prompt" an LLM to make changes without breaking existing logic.

Historically, programming has been defined as the "precise, unambiguous specification of what the computer is to do". So, when you want to tell the computer unambigously what to do, you need to program.

While non-deterministic output might be acceptable for throwaway proofs-of-concept, it is a massive business risk for production systems, obviously depending on the value of the system itself. You are betting that AI-generated software won't end in a catastrophic failure - like deleting a production database or leaking user data and corporate IP. (Hello, McKinsey: how is AI-native working for you lately?)

Security is inherently difficult, and humans make mistakes constantly. Nothing is ever 100% secure, but relying on generated code that no one understands increases the risk significantly. This risk quadruples when LLMs run as part of your system, handling user-generated input. As we know from the fundamental design of LLMs, it is currently impossible to e.g. fully prevent prompt injection.

Blame it on the Vibe

McKinsey writes:

"AI agents make it possible to run software development like a two-shift digital factory... The next day, the human team resumes the day shift by reviewing the output of the night."

Everyone I know hates PR reviews. Most are neither willing nor capable of performing a truly thorough review. This is why smart people long ago came up with the idea of continuous review through pair programming.

I enjoy coding. It is a distinct activity from problem-solving, and I find it fulfilling. If I couldn't code at my day job anymore, I would still do it for fun. Now McKinsey - the "AI-native experts" - suggests my job should consist of writing prompts and reviewing thousands of lines of agent-produced code every morning? No, thank you. I’d rather go into woodworking.

I can envision an agent I can talk to rather than just type at - one that could join a Teams call with other developers. Sometimes, I just want to yell "Stop!" at the agent the moment I spot a problem. While solutions like this may be coming, a bigger issue remains: accountability.

Who is responsible for a failure that affects user data, money, or health? Companies will likely shift the blame onto the individuals who were tasked with reviewing code they didn't - and perhaps couldn't - understand. We must establish regulations that hold companies financially accountable with high fines, rather than blaming the workforce. If a company lawyer asks, "How can we minimize the risk?" and the answer is "We can't," many of these "AI-native" ideas will quickly be buried.

Unsurprisingly, big AI firms recognize this threat and are actively trying to influence legislation to protect themselves while preventing competitors from entering the market.

Follow the Money

Big Tech is driven by profit. The days of "Don't be evil" are long gone.

When OpenAI or Anthropic talk about transformation, they are selling tokens. When OpenAI buys OpenClaw, they aren't doing it for the common good. They are securing a cash-generating machine. Running prompts triggered by a "heartbeat" all day long keeps the cash register ringing.

When Jensen Huang, as the CEO of the world’s primary AI hardware provider, talks about transformation, he has an obvious interest in selling more GPUs.

When Andrew Ng speaks, you must remember that his AI Fund is a $175-million investment vehicle. Their fortunes rely on these claims coming true.

The AI economy currently resembles a closed loop of capital. Global Finance Magazine (Feb 2026) describes "Circular Financing," where Nvidia invests in OpenAI, which uses that capital to buy GPUs from Nvidia and compute from Oracle.

These firms are burning venture capital to keep API costs artificially low-the "Amazon Strategy." With Oracle raising $50 billion for infrastructure, we must ask: what is the actual cost of a line of code once the subsidies vanish and the market consolidates? Once you have laid off everyone who actually understands software, you cannot go back. You are locked in.

Many analysts agree that we are currently in an AI bubble. While the technology is here to stay, the staggering amount of capital being gambled on it is likely unsustainable, and widespread bankruptcies may be inevitable as the market corrects. No one knows exactly how the dust will settle, but a shift may be on the horizon. Models are becoming increasingly more efficient, potentially reaching a point where they can run locally on your own machine. This could eliminate the need for Oracle’s data centers and perhaps even Nvidia’s high-end chips. As the gap between proprietary and open-source models narrows, we may eventually see cloud-hosted giants like OpenAI and Anthropic rendered obsolete by localized, specialized and maybe open-source AI.

Conclusion

We live in times that are simultaneously exciting and terrifying. AI can be a powerful tool, but it is also a sophisticated trap - a Trojan Horse. If we aren't careful, we will trade our creative agency, structural integrity, and digital independence for a "digital factory" model where companies pay Big Tech instead of their "human resources", and individuals are held accountable for code they can no longer understand. All the while, we are following the advice of people whose bank accounts depend on us being convinced.


Jan Wedel's DEV Profile