
The Renaissance Thesis: Why I'm Betting AI Expands Software Engineering
A historically grounded argument from someone who grew up with Pokémon Yellow and now ships with Claude
Growing Up in the Middle of It
I was six years old when I got my first Game Boy and a copy of Pokémon Yellow. That game was made by about twenty people at Game Freak, on hardware so limited they couldn't fit a save feature without a battery-backed SRAM chip. The original Pokémon Red & Green took six years to produce and nearly bankrupted the studio [1].
A decade later, Rails proved small teams could ship fast. Convention over configuration meant less boilerplate, more product. Then came the iPhone. Suddenly everyone needed a mobile app, and Apple's SDK abstracted away the hardware. You didn't need to understand ARM chips to ship to millions of pockets. A few years after that, AWS abstracted the server room. Instagram served millions of users with just 13 employees because they didn't have to rack their own hardware.
Each decade made the last one unrecognizable. In less than 30 years, we went from Pokémon Yellow to photorealistic open worlds and AI-generated images nearly indistinguishable from reality. Food arriving at your door minutes after ordering, 0s and 1s flowing through space and time. And every single leap came with an underlying abstraction that made it possible: compilers, frameworks, platforms, engines, cloud infrastructure.
Now AI is abstracting away the code itself.
I think we're entering a renaissance
I think the demand for software will expand faster than AI can satisfy it. I think the engineers who survive will do more interesting work than the engineers who came before. And I think the historical pattern will hold again: every major productivity leap has expanded the profession, not contracted it.
But I'm not naive. The transition will be brutal for some. Entry-level engineers are getting squeezed out right now, and that's a genuine crisis that deserves more than a hand-wave.
This is my attempt to lay out the case: why I think expansion is more likely than contraction, who's actually at risk, and what has to go right for this to be a renaissance rather than us fighting for jobs at McDonald's.
The Data That Scares Everyone
Overall programmer employment in the U.S. dropped 27.5% between 2023 and 2025, to its lowest level since 1980 [2]. Stanford's Digital Economy Lab found that employment for the youngest software developers fell 20% from late 2022 to mid-2025 [3]. Early-career workers (ages 22-25) in AI-exposed occupations experienced a 13% relative decline even after controlling for firm-level factors [3].
The layoff numbers tell the same story: roughly 265,000 tech workers laid off in 2023, 151,000 in 2024, and 246,000 in 2025 [4]. McKinsey's November 2025 survey of nearly 2,000 business leaders found that 30% of companies are planning AI-related workforce reductions in 2026 [5]. Companies have moved from talking about AI job cuts as a future possibility to treating them as a near-term reality.
*Chart: Early-career impact. Employment among young workers (ages 22-25) in AI-exposed occupations since late 2022. Source: Stanford Digital Economy Lab, Brynjolfsson, Chandar, Chen (2025).*
*Chart: Tech layoffs by year. Global tech workforce reductions, in thousands. Sources: Layoffs.fyi; McKinsey Global Institute (2025).*
A note on timing: We're less than three years into the generative AI era. The Stanford data captures early effects: employment shifts that showed up in payroll data by mid-2025. Technological disruptions typically take 5-10 years to fully manifest in labor markets. We're watching this unfold in real time, which means the picture is incomplete.
But these early numbers exist within a pattern that has repeated across decades of technological progress.
History's Repeating Pattern
I could write thousands of words on the history of programming languages, but here's what matters:
FORTRAN (1954): Skeptics said it was inefficient. They were right: early compiled programs were slower than hand-tuned assembly. But programmers could write code 500% faster [6], and what once took weeks could be done in hours [7]. The number of programmers exploded.
C and UNIX (1972): Dennis Ritchie's language didn't eliminate assembly programmers. It elevated what was possible [8]. The result was operating systems that now run virtually every device on earth.
Mainframes → PCs: When computing moved to desktops, mainframe specialists predicted chaos. Instead, new categories emerged: desktop support, network administration, database management. Today, 71% of Fortune 500 companies still run on IBM mainframes [9].
Cloud Computing: When AWS launched, skeptics predicted infrastructure engineers would become obsolete. Instead, cloud computing birthed entirely new disciplines: cloud architects, DevOps engineers, SREs. It spawned an ecosystem of startups that couldn't have existed before.
The Bureau of Labor Statistics tells a nuanced story: software developer jobs are projected to grow 15% between 2024 and 2034, far faster than the average across all occupations [10]. But "computer programmer" jobs (pure coding roles) are projected to decline 6% in the same period [11]. The distinction matters: design and judgment work is expanding; routine coding work is contracting.
Important caveat: These BLS projections were finalized before generative AI coding tools went mainstream. ChatGPT launched in November 2022; GitHub Copilot went GA in June 2022. The full effects of AI on employment take years to manifest in labor statistics, so the 2024-2034 projections may not fully account for the disruption we're seeing in real time.
The pattern is consistent: fear, then expansion. The question is whether AI breaks it, or accelerates it.
*Chart: Software developers, the growth story. BLS "Software Developers" category (design + coding roles), in thousands. Sources: U.S. Census Bureau (1970-1990); BLS Occupational Outlook Handbook (2024-2034 projections).*
A note on BLS categories: The Bureau of Labor Statistics tracks two distinct occupations. "Software Developers" (1.9M jobs) focuses on design, architecture, and problem-solving. This category is growing. "Computer Programmers" (121k jobs) focuses on pure coding implementation. This category has been declining for years, and AI accelerated that decline dramatically:
*Chart: The other side. BLS "Computer Programmers" category (coding-focused roles), in thousands. Source: U.S. Bureau of Labor Statistics Occupational Outlook Handbook.*
Note: BLS tracks "Computer Programmers" (SOC 15-1251) separately from "Software Developers" (design + coding). The programmer category peaked during the dot-com boom, then declined due to: (1) role evolution toward "developer" titles, (2) offshoring in the 2000s-2010s, and (3) AI automation from 2022 onward.
The 27.5% drop in programmer jobs is real and significant. But it's happening in a category that was already shrinking, its work absorbed into the broader "developer" role for years. The story isn't "all software jobs are declining." It's "routine coding is being automated while design work expands."
Why This Time Might Be Different
The historical pattern is encouraging, but there's a catch: small teams winning doesn't mean more people are employed. It might mean fewer people doing more. Previous productivity leaps automated execution.
AI is different. Let me be direct about this: AI can pass technical interviews. FORTRAN couldn't. Cloud computing couldn't. No previous abstraction could plausibly sit in a coding interview and produce working solutions to novel problems. If you're not at least a little unsettled by that, you're not paying attention.
So why am I still betting on expansion?
The Automation Paradox. Research on automation consistently shows a counterintuitive pattern: automating 80% of a task often increases demand for humans who handle the remaining 20%. ATMs didn't eliminate bank tellers; they made branches cheaper to operate, banks opened more branches, and teller employment grew for decades after ATM deployment. Spreadsheets didn't eliminate accountants; they made financial analysis faster, companies demanded more analysis, and accounting employment expanded.
The mechanism: automation reduces cost, reduced cost increases demand, increased demand creates new work at the edges that automation can't reach. The question is whether AI breaks this pattern or accelerates it.
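To make that mechanism concrete, here's a toy model in TypeScript. Every number in it is a hypothetical assumption, not an empirical estimate: the point is only that if demand for software is elastic enough, total human hours can rise even as the human share of each unit of work shrinks.

```typescript
// Toy model of the automation paradox. All constants are illustrative
// assumptions, not measured values.
const COST_BEFORE = 100;        // relative cost of one "unit" of software
const COST_AFTER = 25;          // after AI automates ~75% of the effort
const HUMAN_SHARE_AFTER = 0.2;  // the 20% of each unit still needing humans
const ELASTICITY = -1.8;        // assumed price elasticity of software demand

// Constant-elasticity demand: quantity scales with priceRatio ^ elasticity.
const demandMultiplier = Math.pow(COST_AFTER / COST_BEFORE, ELASTICITY);

// Baseline: 1 unit of demand, 1 human-hour per unit.
const humanHoursBefore = 1.0;
const humanHoursAfter = demandMultiplier * HUMAN_SHARE_AFTER;

console.log(`Demand: ${demandMultiplier.toFixed(1)}x`);     // ~12.1x
console.log(`Human hours: ${humanHoursAfter.toFixed(2)}x`); // ~2.43x vs 1.0x
// At ELASTICITY = -1.0 the math flips: demand 4x, human hours 0.8x.
```

The ATM and spreadsheet stories are cases where the elasticity term won. The whole bet compresses into that one parameter: whether demand for software behaves the same way.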
Klarna: The AI Replacement Experiment
A real-time case study in the hype cycle: Klarna claimed its AI agent was doing the work of 853 employees [14], then reversed course and began hiring humans back into customer service [15] [16]. The lesson: AI handles volume, humans handle complexity.
What AI still can't do: navigate the meeting where stakeholders disagree, answer the 3 AM page, testify after a breach, or know what not to build. AI writes code; it can't own outcomes.
The bet isn't that AI is weak. It's that the remaining 20% expands faster than AI can absorb it.
The Apprenticeship Crisis
This is the strongest argument against optimism.
Software engineering has always reproduced itself through apprenticeship. Juniors do routine work under senior supervision. They write the boilerplate, fix the simple bugs, and gradually absorb judgment through iteration. Over years, juniors become seniors.
If AI handles the routine work, what's left for juniors to wrestle with?
The numbers I cited earlier aren't abstract. They represent real people being locked out at the entry point. The 20% drop in early-career employment is already happening.
*Diagram: The apprenticeship pipeline. How software engineers have always learned, and where AI disrupts it.*
The question: Do we redesign the ladder, or watch a generation fall through the gap?
A Historical Parallel: When calculators entered classrooms, critics predicted the death of mathematical thinking. Why learn arithmetic if a machine does it? Instead, math education shifted. Less time on computation, more time on problem formulation, estimation, and knowing when an answer looks wrong. The skill moved up the abstraction stack. The same pattern could apply here: less time on syntax, more time on architecture, requirements analysis, and recognizing when AI output is subtly broken.
AI-augmented apprenticeship: Juniors generate with AI, seniors teach them why it's wrong. Learning shifts from writing code to evaluating it. More problem variety, less boilerplate repetition.
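As a hypothetical sketch of what that evaluation skill looks like in practice, here's the kind of plausible, compiling code an assistant might hand a junior, annotated with the review notes a senior would teach. The function and its bugs are invented for illustration:

```typescript
// Hypothetical AI-generated helper: deduplicate users by email.
// It compiles and passes a happy-path test, but hides two judgment bugs.

interface User {
  email: string;
  updatedAt: number; // epoch milliseconds
}

function dedupeUsers(users: User[]): User[] {
  const seen = new Map<string, User>();
  for (const user of users) {
    // Review note 1: "A@x.com" and "a@x.com" are treated as different
    // users, a silent-duplicate bug. The key should be
    // user.email.toLowerCase().
    if (!seen.has(user.email)) {
      // Review note 2: "first one wins" is an arbitrary policy nobody
      // chose. Should the most recently updated record win instead?
      // That's a product question the generator never asked.
      seen.set(user.email, user);
    }
  }
  return [...seen.values()];
}
```

Neither issue is a syntax error, and neither would fail a type check. Spotting them is exactly the judgment the apprenticeship is supposed to build.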
But why would companies bother?
Here's the problem: if AI can do junior work, what's the economic incentive to hire juniors? The honest answer is that many companies won't. They'll optimize for short-term cost savings and let someone else solve the problem.
Some will invest anyway: seniors are scarce, institutional knowledge can't be hired externally, and teams without juniors become brittle. Whether enough companies figure this out before the pipeline collapses is an open question.
The Productivity Paradox
Role compression sounds neutral, but I've done the math on my own work.
Same feature. Fraction of the overhead.
The work I automated wasn't the interesting part. It was the CRUD, the boilerplate, the stuff that exists purely so we can have meetings about it.
That's not the 55% improvement controlled studies show [12]. That's skipping the theatrical production we've constructed around writing software.
But aggregate data tells a more complex story:
*Chart: The productivity paradox. Lab results vs. real-world outcomes tell different stories: controlled studies report large speedups [12], a METR field study of experienced open-source developers found slowdowns [21], and surveys suggest we're using tools we don't fully trust [13]. Sources: arXiv:2302.06590, arXiv:2507.09089, JetBrains State of Developer Ecosystem 2025.*
If one senior engineer with AI can do what three engineers and a Scrum Master did before, that's a net reduction of three positions. The productivity gains flow to me and the company. Everyone else bears the cost.
So how does this math net positive?
I'll be honest: in the short term, it might not. If you're measuring headcount over the next 2-3 years, displacement will likely outpace creation. The productivity gains are immediate; the demand expansion takes time to materialize.
But here's the pattern I keep coming back to: every previous productivity leap created roles that didn't exist before. "DevOps engineer" wasn't a job title before cloud computing. "Mobile developer" didn't exist before smartphones. "Data scientist" emerged from the intersection of cheap storage and machine learning. We couldn't have predicted these roles before the enabling technology existed.
I suspect AI will create roles we can't name yet. AI integration engineer? Prompt architect? Scrum AI Manager? AI Product Scrum RAG Alignment On All Verticals Manager? System reliability engineer for AI pipelines? Some of these already exist in other forms. Others haven't emerged yet.
The honest position is this: I don't know if headcount grows. What I believe is that the work grows, the influence grows, and the value created per engineer grows. Whether that translates to more jobs or fewer, better-paid jobs is genuinely uncertain. But I'd rather be an engineer in a field with expanding influence than one in a field that's contracting, even if the headcount story is complicated.
From Typing to Thinking
The JetBrains 2025 State of Developer Ecosystem confirms the shift: 85% of developers now use AI tools regularly [13], and their roles are evolving toward "curators, reviewers, integrators and problem-solvers."
*Chart: From typing to thinking. The skills that matter are shifting: one column declining in value, the other growing in importance. Based on observed workflow changes and industry practitioner feedback.*
The bottleneck shifted: code generation is cheap. Knowing what to build is the new constraint.
This is the hill I will die on: judgment stays human.
At least for now.
The Hidden Costs
The productivity gains may be borrowing from the future.
AI makes it easy to produce code without understanding it. This creates risks that productivity metrics don't capture.
*Chart: The data on AI code quality. Early research paints a sobering picture. Sources: GitClear, Qodo, Google DORA Report (2024-2025).*
"I don't think I have ever seen so much technical debt being created in such a short period of time during my 35-year career in technology."
— Kin Lane, API Evangelist
The productivity gains reported in controlled studies may be net negative once maintenance costs and debugging time are factored in. We're generating code faster than we can understand it. That bill comes due eventually.
Where Demand Will Grow
If the pattern holds, cheaper software production should unlock new demand. This is speculative, but the economics are clear.
The cost differential matters. Custom software development traditionally costs $50-500/hour depending on location and expertise. Offshore development reduced that to $15-50/hour but introduced coordination costs, time zone friction, and quality variance. No-code tools reduced costs further but hit capability ceilings quickly.
AI changes the equation differently. It doesn't just reduce hourly rates; it compresses the number of hours. A feature that took 40 hours now takes 10. That's a 75% cost reduction on top of whatever labor arbitrage you were already doing. For the first time, custom software becomes viable for problems that couldn't justify even offshore rates.
Cost was always the constraint. AI drops it.
A concrete example: A regional healthcare network has 47 different intake forms across 12 clinics, each with slightly different fields because they were created by different administrators over 20 years. A consulting firm quoted $400K to build a unified system. That's a non-starter for a network running on thin margins. But an AI-augmented engineer could audit those forms, design a unified schema, and build the integration layer in weeks instead of months. The project becomes viable at $60K. Multiply that by every mid-sized healthcare network, every county government, every agricultural co-op that's been limping along on spreadsheets.
This is Jevons Paradox applied to software: when you make something dramatically more efficient, you don't use less of it. You use more. Coal efficiency didn't reduce coal consumption; it made coal viable for more applications, and total consumption exploded. The same logic suggests AI won't reduce software production. It will make software viable for problems that couldn't justify development before.
The caveat: this expansion may not require proportionally more humans. Some of that demand will be met by AI directly, or by fewer engineers doing more. The field's influence may grow even if headcount doesn't scale linearly. But even the pessimistic scenarios suggest the work expands; the workforce may just grow more slowly than the work does.
Who's at Risk
Every technological shift creates winners and losers.
The Shifting Landscape
What's valued in software engineering has always evolved. The question is whether you're evolving with it.
Valued Skills by Era
- 1980s-1990s: syntax mastery, memory management, low-level systems, manual debugging
- 2000s: framework proficiency, OOP design patterns, relational databases, web standards
- 2010s: JavaScript ecosystem, cloud & DevOps, mobile development, microservices
- 2020s: AI-assisted development, system thinking, domain expertise, knowing why (not just how)
*Chart: Where different roles stand today.*
The pattern: Every era devalued the previous era's "hard skills" while creating demand for higher-level thinking. The engineers who thrived were those who climbed the abstraction ladder.
The lower rungs of the abstraction ladder face compression. Some will climb. Some organizations will extend a hand. Others won't.
*Chart: The uneven distribution. The costs and benefits aren't landing on the same people: who bears the cost vs. who benefits.*
The people bearing the costs aren't less talented. They're just positioned where AI pressure is highest.
If the role shifts from "write code" to "review AI output," does that feel like engineering or QA? The engineers I know who use AI well aren't just reviewing output. They're working at a higher level of abstraction, tackling problems they couldn't have attempted before. But I'm watching closely.
The Strongest Case Against Me
I'm not interested in winning an argument. I'm interested in being right. Here are the strongest counterarguments I've encountered:
"Software demand is approaching saturation."
Healthcare still runs on fax machines. Local governments use software from 2003. The "software is eating the world" thesis is maybe 30% complete. Cost was always the constraint. AI drops it.
"AI will follow the bottleneck and automate judgment too."
"AI will improve faster than expected" has been the prediction for 60 years, and the timeline keeps slipping. I'm not betting against improvement. I'm betting that demand expands faster than AI's ability to fully automate creation.
"You're overfitting to historical patterns."
Fair. But name a cognitive profession that permanently contracted due to productivity tools. I've looked, and I can't find one.
What I'd Actually Bet Money On
Predictions are cheap. Here's what I'd put real money on:
- 90%+ of developers using AI tools daily
- Entry-level hiring down 30-40% from 2022
- Major tech company restructures around 'AI-augmented teams'
- Discourse still: 'AI replacing programmers any day now'
- Total developer employment higher than 2024
- 'AI Integration Engineer' is a recognized role
- Fortune 500 catastrophic failure from AI code nobody understood
- Universities still teaching data structures the same way
- Software engineering looks as different from 2024 as 2024 from 1994
- 'Should I learn to code?' cycles through 3+ more moral panics
- Nostalgic think pieces about 'when we wrote code by hand'
- Tesla Full Self-Driving still in beta
The timeline question:
AI capability has been on something like an exponential. GPT-3 to GPT-4 to Claude 3.5 to whatever comes next. Each generation meaningfully more capable than the last. My "judgment stays human" bet has a shelf life.
Here's my honest over/under: I think human judgment remains the bottleneck for at least 5-7 more years (until ~2031-2033). Not because AI won't improve, but because the gap between "AI can write code" and "AI can be held accountable for production systems" is wider than it appears. Accountability, liability, and trust are social problems, not just technical ones.
After that? I genuinely don't know. If AI achieves robust reasoning across novel domains, maintains context over long time horizons, and can be meaningfully held responsible for outcomes... then yes, the game changes entirely. But I'm skeptical that happens before 2030.
If I'm wrong and it happens faster, I'll update. That's the point of putting a timeline on it.
The Bet
I'm betting on the renaissance.
Not because I'm certain. Certainty is for people who haven't paid attention to history. But every productivity leap I've studied has expanded this field, and I don't see compelling evidence that AI breaks the pattern.
I could be wrong. If I'm wrong, I want to have written something that was honest about the risks, not something that cheerled us off a cliff. And if I'm right, I want juniors reading this in 2030 to know that some of us saw the path forward and tried to articulate it.
And if I'm really wrong? If AI genuinely automates judgment, architecture, and everything else I'm betting stays human? Then we have much bigger problems than software engineering jobs. Lawyers, doctors, analysts, executives. The whole knowledge economy restructures. At that point:
Anything that can be automated, will be.
~ Erikstotle
I don't think that's the world we're heading into. But if it is, this article is the least of anyone's concerns.
What I'm doing about it:
I'm learning AI tools deeply, not just superficially. I'm shifting my focus from execution to judgment, from "how to build" to "what to build and why." I'm mentoring wherever I can offer value, because the pipeline problem is real and someone has to help solve it. And I'm writing this because articulating the bet forces me to think it through.
The transition will be painful for some. The distribution of costs will be unfair. But on the other side, I believe we'll find a profession that's larger, more interesting, and more impactful than what came before.
That's the bet.
References
[1] Lava Cut Content. "Satoshi Tajiri Talks Red & Green's Development." https://lavacutcontent.com/satoshi-tajiri-pokedex-interview/
[2] IEEE Spectrum. "AI Shifts Expectations for Entry Level Jobs." https://spectrum.ieee.org/ai-effect-entry-level-jobs
[3] Brynjolfsson, Chandar, Chen. "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence." Stanford Digital Economy Lab (2025). https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/
[4] Layoffs.fyi. "Tech Layoff Tracker." https://layoffs.fyi/
[5] McKinsey Global Institute. "Agents, robots, and us: Skill partnerships in the age of AI." https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai
[6] IBM. "FORTRAN." https://www.ibm.com/history/fortran
[7] ibiblio. "A brief history of FORTRAN/Fortran." https://www.ibiblio.org/pub/languages/fortran/ch1-1.html
[8] Wikipedia. "Dennis Ritchie." https://en.wikipedia.org/wiki/Dennis_Ritchie
[9] Precisely. "Mainframe Statistics: 9 That May Surprise You." https://www.precisely.com/blog/mainframe/9-mainframe-statistics
[10] U.S. Bureau of Labor Statistics. "Software Developers, Quality Assurance Analysts, and Testers: Occupational Outlook Handbook." https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm
[11] U.S. Bureau of Labor Statistics. "Computer Programmers: Occupational Outlook Handbook." https://www.bls.gov/ooh/computer-and-information-technology/computer-programmers.htm
[12] Peng et al. "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot." arXiv:2302.06590 (2023). https://arxiv.org/abs/2302.06590
[13] JetBrains. "The State of Developer Ecosystem 2025." https://blog.jetbrains.com/research/2025/10/state-of-developer-ecosystem-2025/
[14] CX Dive. "Klarna says its AI agent is doing the work of 853 employees." https://www.customerexperiencedive.com/news/klarna-says-ai-agent-work-853-employees/805987/
[15] Tech.co. "Klarna Reverses AI Customer Service Replacement." https://tech.co/news/klarna-reverses-ai-overhaul
[16] Fortune. "As Klarna flips from AI-first to hiring people again, a new landmark survey reveals most AI projects fail to deliver." https://fortune.com/2025/05/09/klarna-ai-humans-return-on-investment/
[17] InfoQ. "AI-Generated Code Creates New Wave of Technical Debt, Report Finds." https://www.infoq.com/news/2025/11/ai-code-technical-debt/
[18] Qodo. "State of AI Code Quality in 2025." https://www.qodo.ai/reports/state-of-ai-code-quality/
[19] Google. "DORA State of DevOps Report 2024." https://dora.dev/research/2024/dora-report/
[20] Harness. "State of Software Delivery 2025." https://www.harness.io/state-of-software-delivery
[21] METR. "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." arXiv:2507.09089 (2025). https://arxiv.org/abs/2507.09089