3 Reasons Dohmke is Wrong: A Dev's 2025 AI Manifesto
GitHub's CEO Thomas Dohmke has a bold vision for developers, but is it the right one? A senior dev breaks down 3 key areas where his perspective misses the mark.
Alex Carter
Senior software engineer with over a decade of experience building and scaling applications.
We’ve all seen the headlines, the conference keynotes, and the relentless marketing push: "AI will 10x developer productivity!" and "The future of coding is here!" At the center of this whirlwind is GitHub CEO Thomas Dohmke, a charismatic leader with a bold vision for a world where developers are augmented by ever-smarter machines. And to be fair, tools like GitHub Copilot are genuinely impressive. They can spit out boilerplate in seconds and solve trivial problems almost instantly.
But I spend my days in the trenches: navigating legacy code, debugging cryptic errors, and collaborating with a team of real, complex humans. And I can't shake a nagging feeling that Dohmke's vision, while shiny and optimistic, is disconnected from the ground truth of software development. It seems to prioritize the metrics that sell products over the principles that build great, lasting software. I believe this perspective, if left unchallenged, could lead us down a path that’s detrimental to our craft.
So, let's hit pause on the hype train and talk about three critical areas where I think Dohmke’s vision for the future of development misses the mark.
Reason 1: The Illusion of Productivity: Speed vs. Quality
The primary selling point of AI coding assistants is speed. Dohmke frequently cites studies showing massive gains in how quickly developers can complete tasks with Copilot. And on a surface level, this is true. I can ask Copilot to generate a function to parse a CSV file, and it will do it in a blink. It feels like magic.
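For concreteness, here is the kind of helper an assistant will typically hand back for a prompt like "write a function to parse a CSV file." This is my own hypothetical sketch, not actual Copilot output, but it captures the pattern: short, confident, and perfectly fine on the happy path.

```python
# Hypothetical example of the kind of CSV helper an AI assistant produces.
# It reads cleanly and works on simple, well-behaved files.
def parse_csv(path):
    rows = []
    with open(path) as f:
        for line in f:
            # Split each line on commas and collect the fields.
            rows.append(line.strip().split(","))
    return rows
```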
The problem is that software development isn't a typing contest. The hardest part of our job isn't writing code; it's thinking. It's understanding the business requirements, designing a resilient architecture, considering edge cases, and anticipating future maintenance needs. AI, in its current form, helps with none of that. It’s a powerful autocomplete, not a thinking partner.
This focus on speed over substance creates a dangerous illusion of productivity. We're generating more code, faster than ever. But is it the right code? Is it secure? Is it maintainable? Often, the answer is a resounding "maybe."
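Here's a concrete, hypothetical illustration of that "maybe": feed the sketch above a row containing a quoted field, something real CSV files do constantly, and it silently returns the wrong data. Python's standard csv module gets it right.

```python
import csv
import io

sample = 'name,notes\nAda,"loves math, hates meetings"\n'

# The naive split-based approach from the helper above:
naive = [line.split(",") for line in sample.strip().splitlines()]
print(naive[1])   # ['Ada', '"loves math', ' hates meetings"'] -- the quoted field is silently split

# The standard library handles quoted fields correctly:
proper = list(csv.reader(io.StringIO(sample)))
print(proper[1])  # ['Ada', 'loves math, hates meetings']
```

Nothing crashes and no warning is raised; you just get bad data, which is precisely the kind of defect that sails through a quick review.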
"I've spent more time in the last six months reviewing subtly broken Copilot-generated code than I ever spent fixing junior dev mistakes. The AI writes code that looks plausible but has a fundamental misunderstanding of the context. It's death by a thousand cuts for code quality."
I call this dynamic the "Copilot Crutch," and it's especially concerning for junior developers. Learning to code involves struggling, making mistakes, and building mental models from the ground up. By offloading the "easy" parts to an AI, we risk creating a generation of developers who can prompt a solution but can't reason about why it works or how to fix it when it inevitably breaks in a new and exciting way.
Reason 2: The Misguided Metric: Redefining "Developer Productivity"
This leads directly to my second point: Dohmke's vision seems to operate on a very narrow, tool-centric definition of "developer productivity." In the world of GitHub's dashboards, productivity often gets implicitly measured by things the platform can see: pull requests opened, lines of code committed, comments written, and merges completed. These are activities, not outcomes.
As any experienced developer will tell you, our most productive days are often the ones where we write no code at all. A productive day might be spent whiteboarding an architecture that simplifies the system, deleting thousands of lines of dead code, or having a 30-minute conversation with a product manager that clarifies a requirement and prevents weeks of wasted work. None of this shows up on a GitHub activity chart.
The discrepancy between the corporate view and the developer reality is stark. Let's break it down:
| Metric | The Implied "Dohmke/GitHub" View | A Developer's Reality |
| --- | --- | --- |
| Code Volume | More code committed is a sign of progress. | The best code is often the code you don't write. Simplicity and deletion are virtues. |
| Speed | Faster task completion is the ultimate goal. | Time spent in deep thought to prevent future bugs is more valuable than fast, buggy code. |
| Collaboration | Measured by PR comments and review interactions. | Includes offline discussions, pair programming, and mentoring that reduces the need for lengthy PR debates. |
| Maintenance | An afterthought; a separate cycle of work. | A primary consideration during initial development. Writing maintainable code is productivity. |
By optimizing for what can be measured easily, we risk devaluing the most crucial, human-centric aspects of our work. True productivity isn't about the volume of your output; it's about the value of your outcome. It’s about reducing cognitive load, creating clarity, and building systems that are a joy, not a terror, to work on six months from now.
Reason 3: The Open Source Paradox: Who Really Benefits?
GitHub is, without a doubt, the center of the open-source universe. Dohmke and Microsoft (GitHub's parent company) rightly celebrate their role as stewards of this vibrant community. But this is where the biggest and most troubling contradiction in Dohmke's vision lies.
GitHub Copilot, the flagship product embodying their AI-driven future, is a proprietary, closed-source tool. Its model was trained on a massive corpus of public code from GitHub itself—code written by millions of developers under a variety of open-source licenses. This has created what I call the "Open Source Paradox": a for-profit product is being built on the back of free, community-driven labor, and the value is being centralized into one corporate entity.
Many open-source licenses, like the GPL, were specifically designed to ensure that derivative works remained open. Copilot's existence arguably challenges the spirit, if not the letter, of these licenses. This has not gone unnoticed, leading to class-action lawsuits and a palpable sense of unease within the OSS community.
The core issue is one of value extraction. Is the relationship symbiotic, or is it extractive? Does providing a powerful tool justify training it on community data without explicit, granular consent or compensation? Dohmke's narrative focuses on the benefits to the individual developer, but it sidesteps the systemic impact on the ecosystem. If the ultimate end-game of contributing to open source is to provide training data for a proprietary product you then have to pay for, what does that do to the incentive to share code openly in the first place? It risks turning a collaborative community into a simple resource to be mined.
A Call for a More Human-Centric Future
Let me be clear: this is not an anti-AI or anti-GitHub post. I use GitHub every single day, and I'm genuinely excited about the potential of AI to augment our abilities. But I'm a developer first. My loyalty is to the craft of building good software, not to any single tool or corporate vision.
Thomas Dohmke's vision is compelling, but it's incomplete. It needs to be balanced with the on-the-ground realities of our profession.
- We need to prioritize code quality and maintainability over the raw speed of generation.
- We need a more holistic and human-centric definition of productivity that values thinking, collaboration, and simplicity.
- We need to ensure the open-source ecosystem remains a collaborative community, not just a data farm for proprietary AI.
The future of development isn't just about better tools; it's about better thinking. It's about empowering developers, not just instrumenting them. As we navigate this new AI-powered landscape, it's crucial that we, the developers, keep our voices in the conversation and advocate for a future that serves the craft, not just the corporation.
What does 'productivity' mean to you? Is the current AI trajectory helping or hurting your team's code quality? I'd love to hear your thoughts in the comments.