At Microsoft Build, GitHub CEO Thomas Dohmke sat down with Forward Future’s Matthew Berman to reflect on how AI is fundamentally reshaping software development.
From doubting GPT’s early code completion to open-sourcing Copilot and redefining what “programming” means, Dohmke offers a candid, future-focused view of where the craft of engineering is headed, and why the future of software may not look like software at all.
00:00 – The First Time He Saw GPT Code
00:56 – 25% of Code Written by Copilot
03:53 – Tab Completion as a Breakthrough
06:43 – Learning to Code with AI
09:40 – Why We Still Need to Teach Programming
11:08 – The Coding Agent Era Begins
15:21 – GitHub Copilot Goes Open Source
20:00 – What Developers Might Build Next
23:01 – Deterministic vs. Generative Code
26:46 – The Operating System Disappears
29:12 – Personalized Software > Off-the-Shelf
31:34 – What Coding Agents Can (and Can’t) Do
36:48 – The Limits of No-Code and Agent UX
40:52 – Will It All Converge—or Fragment?
43:47 – Memory and the Future of Agent Governance
44:17 – Will AI Replace Developers?
“I doubted it. I thought it would never work. And then it worked—and it felt like magic.”
Before GitHub Copilot launched, Dohmke couldn’t believe a model without a compiler could generate clean, syntactically accurate code. But by the summer of 2020, GPT-3 and Codex proved it could—and that realization changed everything.
“The first telemetry came back and said: Copilot is writing 25% of the code. We didn’t believe it.”
Early GitHub experiments with Copilot showed it wasn’t just assisting—it was generating a meaningful chunk of production code. Feedback from developers was so strong that GitHub launched a public technical preview in June 2021, which quickly grew to over a million users.
“Looking back, tab completion feels obvious. But it wasn’t. That interaction lowered the learning curve dramatically.”
The choice to make Copilot’s first interaction tab completion built on decades of IDE behavior. But combining it with a generative model was a leap: it created a flow state where developers could keep building, modifying, and shipping—all without leaving their editor.
“We believe the flow state for developers is magical. If we can keep you in the IDE, you're unstoppable.”
Dohmke believes modern coding mirrors how many of us actually learned: copying from tutorials, modifying examples, and trial-and-error experimentation. Copilot accelerates that loop—especially for kids and beginners.
“We don’t stop teaching music just because we don’t become musicians. Kids should still learn to code.”
Even in an agent-powered world, Dohmke argues that teaching programming is essential for literacy. If engineers are to review, verify, and understand what agents build, they need foundational knowledge—even if AI does the heavy lifting.
“You describe the feature, and it modifies your codebase and opens a pull request. That’s the coding agent.”
Copilot’s latest “agent mode” does more than complete snippets. It reads your repo, makes changes, and proposes them via pull requests. Developers now act as reviewers, not just authors—raising the bar for trust, security, and code comprehension.
“This was the natural next step. The JavaScript was already reverse-engineered. Now the community can truly build on it.”
Dohmke announced that GitHub Copilot for VS Code is now open source under the MIT license. The goal: allow developers to fork, extend, and integrate Copilot however they like—with full Microsoft support.
“We can't cover every migration path, every language. That’s where the community comes in.”
Whether it’s supporting COBOL-to-Java migration, improving agent rollback UX, or building domain-specific copilots, Dohmke hopes open-sourcing Copilot will catalyze experimentation across the developer ecosystem.
“The code is deterministic. But the prompt is not. That’s the new craft—navigating both layers.”
Developers must learn to jump between structured logic and fuzzy language. Natural language specs might vary by interpretation, but the resulting code must be precise. That’s the dual fluency engineers will need.
“In the future, the operating system might be invisible. Your primary interface could be an agent.”
Dohmke envisions a world where traditional software disappears behind chat agents that complete tasks—like ordering food or planning a trip—without users ever seeing an app.
“Why create an account on a parenting app when you can generate a custom allowance tracker for your kid?”
The rise of just-in-time, personalized software could replace generic SaaS. Instead of signing up for services, you’ll prompt an agent to build what you need in the moment.
“Vibe coding is great for prototypes. But real projects still need security, tests, governance.”
Dohmke draws a clear line: fast, iterative development (vibe coding) is powerful, but most real-world software demands structure. The future is agentic DevOps—where agents handle bugs and boilerplate, while humans shape direction and review.
“There won’t be one agent. But they’ll connect—your personal, work, and task-based agents will talk to each other.”
Dohmke sees a future of interconnected agents—each specialized for a different context, but operating over shared protocols like A2A (Agent2Agent) or MCP (Model Context Protocol). The key will be governing memory and identity across those boundaries.
“Some jobs will go away. Others will be created. But Copilot makes it easier than ever to become a developer.”
GitHub’s mission is to expand access. AI lowers the barrier to entry—especially for kids, non-English speakers, or anyone without formal training. More people building = more ideas realized.
00:00:00
Matthew Berman: What did you think when you first saw the GPT of the time?
Thomas Dohmke: I thought it didn’t work. It wouldn’t work. I doubted it—and then it worked. And it is like magic.
Matthew: Did you immediately see that software development was going to change forever?
Thomas: Absolutely. 100%.
Matthew: What are your thoughts on coding agents? Taking your hand almost off the steering wheel a little bit?
Thomas: It still actually forces you to keep your hands on the steering wheel. It writes 25% of the code, and that knowledge—that was part of your work—goes with the company.
00:00:28
Matthew: Thomas, thank you so much for joining me, for chatting with me. Really excited.
Thomas: Yeah, yeah—excited about talking about the future of software, the future of coding agents.
Matthew: The first question I have for you is: you were leading the development of GitHub Copilot. This was before the most recent generative AI wave—this was where you would start to type a line of code, hit Tab, and it would complete it for you. When I first saw it, it was completely mind-blowing.
What did you think when you first saw the GPT of the time? When you first started to see code completion—what was that feeling like?
00:00:53
Thomas: I thought it didn’t work. It wouldn’t work. I thought it would never work.
I had done a lot of compiler work when I went to university in the late ’90s and early 2000s, so I understand how a compiler works. I couldn’t believe that the model was able to keep Python’s syntax distinct from Ruby’s or JavaScript’s. I thought it would just mangle it—put parentheses in the wrong place, or a semicolon where you don’t need one.
When I first saw, in summer 2020, how GPT-3—and then Codex, which was the first coding model from OpenAI—could actually write proper code when you prompted it with something like “Give me a prime number detection” or “Sorting algorithm in a certain language,” it would write the entire method in correct syntax. Even though it didn’t have a compiler, right? It didn’t have that tool-calling step we have now with agent mode in the coding agent.
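For illustration, here is a representative reconstruction in Python (not the actual 2020 output) of the kind of complete, syntactically valid method such a prompt would produce:

```python
# Representative reconstruction of a Codex-era completion for the prompt
# "give me a prime number detection" -- illustrative, not the real output.
def is_prime(n: int) -> bool:
    """Return True if n is prime, using trial division up to sqrt(n)."""
    if n < 2:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True
```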
00:01:56
Matthew: Yeah. So when you and your team first rolled it out—I remember when it came out, and I’ll say it again, it really blew my mind. The other engineers on my team were like, “You have to do this. This is going to change everything.”
Did you immediately see that software development was going to change forever?
Thomas: We saw it a little bit before that.
At GitHub, we have a staff ship for every feature—we ship it first to all the Hubbers, all the GitHub employees, to try it out. We did the same with Copilot about three or four months before the public preview announcement in June 2021.
The feedback was phenomenal. We tracked how much code it was writing, and the first telemetry came back. The team came into a weekly review and said, “It’s writing 25% of the code in files where it’s enabled.”
We didn’t believe that number. We sent them back to double-check if the measurement was accurate. They came back and said, “Yep, it’s actually true.” And it’s even higher now.
It was different across languages, which gave us a kind of validation. For languages like Python, it was better. For others, like C or C++, it was worse—which makes sense when you think about the history of those languages. C is very dependent on import or include statements, and things like header files.
00:03:22
The other big signal was the Net Promoter Score—the feedback from developers using it. I think it was 72, somewhere in that range. And NPS (the percentage of promoters minus the percentage of detractors) goes from -100 to +100, so that’s really high for a preview product—especially something you’re inserting into what we call the “inner loop,” the work you do locally on your laptop.
A lot of developers, including myself, are peculiar about things like their color scheme and shortcut configuration. That’s a big part of why VS Code has been so successful—you have all these customization options through settings and extensions. Everyone has their own dev environment.
Now, bringing something into that and saying, “You have Copilot, and it will predict the next 10 lines of code”—and then you have to read, parse it, and decide whether to accept—it felt like it might be perceived more negatively than it actually was.
In fact, the response was overwhelmingly positive. And as you described, as soon as we shipped the private preview and then the public preview, within a short amount of time, we had over a million users on Copilot. And many had that moment: “I doubted it… and then it worked… and it was like magic.”
00:04:19
Thomas Dohmke: There were so many tweets back then (posts on X now) about that kind of experience and framing.
Matthew Berman: I want to talk about the user experience. Looking at it now, tab completion seems so obvious—but it wasn’t there before. It wasn’t a thing. And it makes the learning curve so low.
How did you arrive at tab completion being the first interaction point between your AI coding assistant and a developer?
00:05:22
Thomas: It was a thing in the sense that in Visual Studio and Visual Studio Code, there was IntelliSense, which used information about the programming language and its libraries to tell you what methods exist in a class, and so on. There’d be a little dropdown showing, okay, these three options are available.
A lot of other IDEs have a similar feature. Like if you’re coding an iPhone app in Xcode, it helps you write those long Swift methods and shows the parameters.
So some of that was already learned behavior. If you don’t know the exact spelling, as long as you can remember the first three or four characters, then autocompletion—IntelliSense—would help you.
Editors like TextMate and Sublime Text also had smart autocompletions. They’d look at your local file, extract methods you’d already written, and predict those for you. So you could write a method and then, when calling it, it would autocomplete based on the declaration.
00:06:16
So we already had that learned behavior from developers. Autocompletion has been around for maybe 20 years. I remember using TextMate for the first time, and it had smart autocompletion.
The other piece is that developers—unless they have a photographic memory—will always reach a point where they need to look something up, whether it’s in documentation, a browser, Stack Overflow, Reddit, blog posts, or even developer conference videos like this one from Build. Or GitHub repos and open-source libraries.
They’re trying to figure out how to solve a problem: how to access an API, make rounded corners, encode a string—things like that. And then they’ll take a code snippet from somewhere, paste it into their editor, and modify it to work with their variable names and the right version of the language or library.
00:07:12
There was always a learned behavior of taking code from somewhere else and making it work for your scenario. In fact, I’d argue that most of us learned coding like that—trial and error. You took something from a “hello world” tutorial or a how-to article, and modified it into your first application.
Matthew: Right.
Thomas: So you’ve got autocompletion as a feature in the editor, and this learned behavior of adapting imperfect code. Now you combine that with a large language model—Codex in 2020—which also wasn’t perfect and had hallucinations. Models today still do.
But it was good enough to shortcut the process of looking it up elsewhere—and it kept you in the flow state.
We believe that flow state for software developers is something magical. You have this idea you want to build. You’ve got limited time and energy. As long as we can keep you in the IDE, building, with the code flowing, Tab to complete, modify, compile, and keep going—that’s the best moment for many developers.
Especially when you run it and it works—and you can say, even just to yourself, “Look what I created… with nothing but my hands and a little thinking and ideation.”
00:08:10
Matthew Berman: You mentioned learning to program—taking pieces and just kind of playing around with them. So I want to talk to you about programming education. I know that’s something near and dear to your heart—me as well.
I would say the most important skill set I ever learned was how to code. I could take my ideas and build anything.
Two-plus years ago, if you'd asked me what I should teach my children—I have two kids, seven and two—I would’ve said, “I want to teach them to code.” But I’m not so sure anymore.
I still think there’s a lot of value in systems thinking, but give me your thoughts. Are you still a proponent of teaching core programming languages to kids?
00:09:08
Thomas Dohmke: Absolutely. 100%.
I think kids should learn this because software is such an important part of our lives today. We have these devices in our pockets, on our wrists—they all run software. But it goes beyond that.
Our cars today are mostly defined by software—and maybe a battery, if you drive an electric vehicle. Our houses are dominated by software. Travel is dominated by software. And, of course, most professional life—even disciplines like farming or law enforcement—use software in their day-to-day work.
So I think it’s similar to math, physics, or chemistry. You may not use those subjects again directly, but you learn them to understand the world.
00:10:09
The same is true of computer science. Having the ability to read code, understand binary logic and Boolean operations—that's a fundamental skill everyone should have, even if they don’t become computer scientists. Just like taking music in school doesn’t mean you’re going to go on stage and sing.
(Definitely true for me—I don’t know about you!)
But that doesn’t mean I shouldn’t have had those music lessons. So I think that's number one: you can’t argue that software, computers, and technology won’t play a massive role in the next decade—or the next hundred years.
And number two: today, here at Build, we announced the coding agent for GitHub Copilot—and it’s magical.
00:11:08
Thomas Dohmke: You give it a task or an issue—a description of what you want to build—and it takes your existing codebase, your repository, and figures out how to implement that issue in the codebase.
It’s not just building something from scratch. We’ve seen those demos—I’ve done them myself, like building a Snake game. But this goes further. It takes an existing codebase, takes your description, modifies that codebase, and creates what we call a pull request to show how it would implement the feature.
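To make that workflow concrete, here is a minimal, hypothetical sketch of handing such a task to the agent by filing an issue through GitHub’s REST API. The repository, the issue text, and the idea that assigning “Copilot” triggers the agent are illustrative assumptions, not a documented invocation:

```python
# Hypothetical sketch: file an issue describing a feature and assign it
# to the coding agent. Repo, text, and the "Copilot" assignee are
# illustrative assumptions.
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/octocat/demo-app/issues",  # placeholder repo
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "title": "Add CSV export to the reports page",
        "body": "Users should be able to download the current report as a "
                "CSV file. Follow the existing export patterns in the repo.",
        "assignees": ["Copilot"],  # assumption: the agent picks up assigned issues
    },
    timeout=30,
)
resp.raise_for_status()
print("Issue filed:", resp.json()["html_url"])  # the agent answers with a pull request
```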
Now the role of the engineer is to verify what the agent has done. To read the changes.
How do I do that if I no longer understand what the agent is doing? How do I validate that what it did aligns with my goals, with my business, with the trust my customers place in us?
Because the danger, obviously, is that the agent creates insecure code. And now I have a security incident and have to go to the press and say, “We lost customer data.”
00:12:02
Hypothetically, of course—but you can imagine these scenarios. The agent does something, it feels good in the moment because it worked fast, but because you didn’t understand what it actually did, you’ve damaged your business.
So I think it’s fundamental—for any business that creates software—to understand what these agents are building, and how to leverage them responsibly to create competitive advantage.
Matthew Berman: So maybe it’s not just about learning the core languages and syntax and logical operators. Correct me if I’m wrong, but now it feels like learning how to use AI in the process is just as important. Learning the craft—right?
00:12:29
Thomas: Exactly. You have to learn the craft, and you have to evolve the craft.
In software development, if you’ve been in the game for 20 years, you know that how we built software then is very different from how it’s built today.
Twenty years ago, open source was still doubted by many enterprises. They worried about security, compliance, IP—who’s going to maintain the project if the creator disappears?
Today, every company uses open source in their stack. From the OS to container management, from the editor to hundreds—if not thousands—of libraries in the backend and frontend.
00:13:03
Thomas Dohmke: Twenty years ago, we didn’t really have the concept of a full-stack engineer. There were separate disciplines—databases, backend, frontend, Windows application development. I think this conference was called PDC (the Professional Developers Conference) back then, and it was probably a lot more focused on Windows than it is now. That played a much bigger role.
Windows developers had to understand the architecture of the PC—how much memory was available, what methods were in the kernel. Today, most developers building for the web never even think about the hardware layer. They just assume they can boot a bigger virtual machine if needed.
So software development has evolved, and developers have to evolve with it. They have to refine their craft, stay current. Understand what’s coming next.
Using models, integrating them, understanding them, testing them, aligning them with customer expectations—that’s all part of the modern full-stack engineer’s job now.
00:14:26
It’s no longer just frontend and backend—it’s frontend, backend, and models.
And it’s not just one model. At GitHub and Microsoft, we don’t believe in a future where there’s only one model. There will be many models for different use cases.
Code completion, for example—you want a fast model with low latency. For agents, latency isn’t as important, because agents take more time and need to call tools. So the model needs strong tool-calling capabilities.
There are going to be many applications that use multiple models at once.
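A toy sketch of that routing idea, with invented model names and a made-up Task type; this is not GitHub’s actual selection logic, just the shape of the trade-off:

```python
# Toy model router: pick a model by task profile. Model names and the
# Task type are invented for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str              # "completion" or "agent"
    needs_tools: bool = False

def pick_model(task: Task) -> str:
    if task.kind == "completion":
        return "small-fast-model"    # code completion: latency matters most
    if task.needs_tools:
        return "tool-calling-model"  # agents: strong tool calling matters most
    return "general-model"           # everything else

print(pick_model(Task(kind="completion")))               # small-fast-model
print(pick_model(Task(kind="agent", needs_tools=True)))  # tool-calling-model
```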
00:14:53
Matthew Berman: Okay, so you mentioned open source, multiple models, and Microsoft’s evolution around open source—especially under Satya. You also made a major announcement today, and as soon as I heard it, I tweeted it: GitHub Copilot is open source.
What was the thinking behind that? Why did you do it? And what can we expect from an open-source project as powerful as this?
00:15:21
Thomas Dohmke: We’re really excited about the announcement today—making Copilot within VS Code open source.
It follows VS Code’s long history of being an open-source editor. In fact, just last month—April 2025—VS Code turned 10 years old. Satya said in the keynote there have been over 100 VS Code releases in those 10 years—roughly one every month. There were only a few months the team didn’t ship a release.
And the VS Code team truly operates like an open-source project. They do all their planning in public. The VS Code repo shows the roadmap, release notes, even blog posts are written as markdown files in the repo.
00:16:25
So in the past months, we looked at Copilot and realized: we’ve come far enough—from basic autocompletion, to chat, to voice, to agent mode, to multi-model choice, to offering a free tier. And if you want to nurture open-source development, you can’t ask someone to buy a $20 plan first just to contribute.
So we felt we had all the ingredients to make Copilot open source and integrate it into the VS Code project, continuing our commitment to give back to the developer ecosystem that has supported us for the last 10 years.
00:17:27
The other piece is that the client code for Copilot in VS Code offers a great learning opportunity for other people building AI software. Whether they’re building a competitor, or want to bring Copilot-like features into their favorite IDE, or want to integrate it into something totally different within the dev tools stack—they can now do that under the MIT license.
00:18:32
That provides business value for us. Copilot is built on Azure AI Foundry, GitHub APIs, Microsoft APIs. Those are the parts we care most about. The front-end logic? That’s already been reverse-engineered. The system prompt is known. The VS Code extension is just JavaScript—it’s not hard to unpack.
So for us, open-sourcing was the natural next step. And we’re excited to see people fork it, contribute to it, and build on it—with full support from Microsoft.
00:20:00
Matthew: What are you most excited to see external developers build into GitHub Copilot—features Microsoft hasn’t been able to prioritize?
Thomas: Well, I think we’ve prioritized a lot—this year alone, we’re close to 100 changelogs just for Copilot.
But one obvious opportunity is model integration. We announced multi-model choice at GitHub Universe. Our team can only integrate so many models at once—we have to go through evaluations, responsible AI testing, red-teaming.
So now, developers can bring in their own models. We already support BYOK (bring your own keys) with providers like OpenRouter and Ollama. But if someone wants Copilot to work with another model, they can now do all the testing themselves.
00:21:33
There are also a ton of startups entering the space. The industry isn’t zero-sum. Disruption is always coming.
We also announced app migration for Java and .NET apps. But there are tons of other programming languages. Someone could take Copilot and extend agent mode to migrate COBOL to Java, or Perl to Python—there’s a lot of tech debt out there that we can’t cover ourselves.
Then there are UX ideas—like better rollback capabilities in agent mode, or smaller interface improvements. Developers have their own vision of how things should work. Now they can implement and contribute those ideas directly.
00:23:01
Matthew: Let’s stay on the topic of the future. The line between deterministic code (traditional software) and non-deterministic, generated parts of the application—that’s getting blurry. What do you think about that? And where is software architecture headed?
Thomas: Code itself is hopefully always deterministic—it’s an abstraction over CPU instructions. But prompts are not. You can input the same prompt into the same model and get different results.
So part of the craft of modern software engineering is learning to jump between abstraction layers: the non-deterministic (natural language prompting) and the deterministic (compiled code).
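A toy illustration of those two layers, where generate is a hypothetical stand-in for an LLM client call rather than any real API:

```python
# The deterministic layer vs. the non-deterministic layer, in miniature.
import random

def total(prices: list[float]) -> float:
    # Deterministic: the same input yields the same output, every time.
    return round(sum(prices), 2)

def generate(prompt: str, temperature: float = 0.8) -> str:
    # Non-deterministic: with nonzero temperature, the same prompt can
    # yield a different completion on every call. (Hypothetical stand-in
    # for a real LLM client; random.choice mimics model sampling.)
    completions = [
        "def total(prices): return round(sum(prices), 2)",
        "total = lambda prices: round(sum(prices), 2)",
        "import math\ndef total(prices): return math.fsum(prices)",
    ]
    return random.choice(completions)

assert total([1.10, 2.20]) == 3.30                           # always holds
print(generate("write a function that totals some prices"))  # varies per run
```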
00:23:57
Even in teams, there’s always some mismatch. As CEO, I describe a feature, and three months later the team builds something slightly different—because humans interpret language differently.
The same is true when you prompt a model. You can brainstorm with it first—have it help you write a markdown spec, break the work into smaller steps, and then feed those steps into agent mode. That’s engineering: breaking complex problems into parts.
Eventually, the complexity is low enough that the model produces what you expect. But you, as an engineer, have to know when to use the model, when to do it yourself, when the prompt is too ambiguous, and when it’s ready to hand off.
00:26:17
Matthew: Understood. It’s all deterministic in the end, but maybe there’s a future where the entire OS is generated on the fly. Do you think that’s possible? And if so, on what kind of timeline?
Thomas: There will always be a kernel that sits on top of the CPU. But I can imagine a world where you no longer think about what OS you’re running.
People already don’t care about the file system or terminal on their iPhone—they care about features and UI. So I can see an agent becoming the primary interface.
Like Iron Man’s J.A.R.V.I.S.—a chat assistant that uses software and tools for you. You say, “Order my usual,” and it knows what you want, charges the right card, and your food shows up.
00:28:16
Of course, we still need a physical delivery mechanism. But yes, I can see that world—where just-in-time, personalized apps get generated for specific scenarios and then disappear.
For example, I have kids. Let’s say I want to track their allowance. I could use a service, but then I need accounts, permissions, parental controls... it’s a pain.
Instead, I could just use Copilot to generate a micro-app just for me, my partner, and the kids. Input allowance, track spending, and that’s it. It’s custom, disposable, and private.
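As a flavor of that kind of disposable micro-app, here is a minimal sketch of an allowance tracker an agent might generate; the file name and function names are invented for illustration:

```python
# Minimal, disposable allowance tracker: one family, one local JSON file.
# Everything here (file name, functions) is an invented illustration.
import json
from pathlib import Path

LEDGER = Path("allowance.json")

def load() -> dict:
    return json.loads(LEDGER.read_text()) if LEDGER.exists() else {}

def save(ledger: dict) -> None:
    LEDGER.write_text(json.dumps(ledger, indent=2))

def deposit(kid: str, amount: float) -> None:
    ledger = load()
    ledger[kid] = round(ledger.get(kid, 0.0) + amount, 2)
    save(ledger)

def spend(kid: str, amount: float) -> None:
    deposit(kid, -amount)

def balance(kid: str) -> float:
    return load().get(kid, 0.0)

if __name__ == "__main__":
    deposit("Alice", 5.00)  # weekly allowance
    spend("Alice", 1.50)    # snack money
    print(f"Alice's balance: ${balance('Alice'):.2f}")
```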
00:30:09
There are so many life scenarios like that—where personalized software makes more sense than off-the-shelf solutions.
Matthew: I’m definitely building that with my kid. Great learning exercise too.
Thomas: Exactly. The magic is that they can keep going—even if you run out of time. It’s all natural language, across any language. Kids are naturally curious, and tools like Copilot let them explore without hitting the same walls we did growing up.
00:31:04
When I was learning to code, no one in my house could help. I had books and magazines. No internet.
Today, if you have a smartphone and a connection, you can ask Copilot questions and evolve your skills with less frustration.
Matthew: Let’s switch gears. GitHub announced coding agents, which kind of enable “vibe coding.” What are your thoughts on that? On taking your hands partially off the steering wheel and working more with AI to get results?
00:32:01
Thomas: I don’t think it’s about taking your hands off the wheel. It’s more like activating driver assistance. And even then, most systems today still require you to keep your hands on the wheel—they’re not perfect.
Same thing here. Coding agents aren’t perfect. Most software projects involve working on someone else’s code. Even in startups, by year two, new engineers are working on an existing codebase. You’re not starting fresh.
Even looking at my own code from a year ago—I’ve learned so much since then that I’d rewrite it differently today. Let alone code from five or ten years ago.
00:33:29
Vibe coding won’t replace the craft—it supports it. There are two reasons this movement is happening:
First, software has always been about turning an idea into a product as fast as possible. But often, by the end of the day, you’ve just gotten the environment set up.
Vibe coding shortcuts all that. You describe the idea, and the agent gives you something fast—something you can build on.
00:34:49
The second reason is about real software projects. You still need security, quality, efficiency, testing. You still need to work within team standards and business constraints.
That’s where coding agents shine. They propose code through pull requests, you review it like you would with a teammate, and CI/CD handles the rest. That’s not vibe coding—that’s agentic DevOps.
So now you can vibe locally, and offload all the boring stuff—bug fixes, test cases, security checks—to agents.
00:36:18
Matthew: People on my team who’ve never written code are building real apps now. But there’s still a threshold—after a certain number of lines, the AI starts to break down.
What are the biggest improvements we need to make coding agents truly scalable?
Thomas: Low-code and no-code have existed for a long time—even before AI. IT professionals and consumers could use templates to build apps.
AI has expanded that dramatically. But at a certain point, agents still hit a wall.
You need a systems-level understanding—how to scale from 100 to 10,000 users, how to handle auth, how to build for compliance.
00:38:30
Most companies have an endless backlog. I do too. Not literally endless, but longer than what we’ll ever ship in my lifetime.
If agents could just burn down the compliance backlog, I’d consider that a huge win. And that’s before we even get to new features—features that require deep discussion, planning, naming.
So yes, agents will help us ship more. But we’ll always have more to build.
00:39:51
Matthew: I’m so glad you said that. I’m optimistic about AI too. With unlimited intelligence, we don’t replace people—we solve more problems. Like Jevons paradox for software: the more you can build, the more you will build.
We’ve seen many flavors now—tab completion, VS Code plugins, hands-off agents. Do these converge into one interface? Or will it always be fragmented?
Thomas: Both.
Developers love the idea of a unified interface, but in reality, the landscape is fragmented—because of business incentives, platforms, legacy systems.
You’ll have different agents for different domains: your car, your work, your personal life.
00:41:57
But those agents will connect. Just like signing in with GitHub lets you link services together, you’ll have a personalized agent, a work agent, a travel agent—all with protocols to share information when needed.
00:43:21
Ideally, all of them speak the same language-based UI so you don’t have to switch context. Because life isn’t cleanly separated between work and personal.
Matthew: That’s really interesting—memory as the ultimate governance layer. Maybe another agent sits on top, deciding what’s personal, what’s company property.
A lot of people are anxious about being replaced. What would you say to folks nervous about the future of knowledge work?
00:44:51
Thomas: There are jobs where models will outperform humans—like real-time translation, for example. You and I could be speaking different languages, and with AirPods, we’d hear each other in our own voice and language.
So yes, some jobs will be automated.
But AI will also create new ones. Copilot opens up software development to anyone on the planet. You no longer need to know English or have someone in your family who can help. You just try building, and iterate via prompts.
00:46:20
We’ll see new companies, new industries, and new kinds of work we couldn’t have imagined 20 years ago.
Even inside Microsoft, the tester role used to be a dedicated job. Now it’s gone—replaced by automated tests. But many of those testers became engineers or PMs.
00:47:14
That’s the confidence I can give you: when tech replaces a role, we’ve always found new, better opportunities. And I believe the same will be true with AI agents.
Matthew: Thomas, thank you so much. This was fun.
Thomas: It’s been a pleasure. Thank you.
Coding agents are shifting software development from authorship to orchestration.
GitHub Copilot is now open source, giving the community full ability to extend and remix.
Agentic DevOps will define the next era of software: fast prototyping, secure pipelines, human review.
Just-in-time, personal apps may replace traditional SaaS tools for many use cases.
AI will not replace engineers—but it will massively increase what they’re capable of building.
Enjoyed this conversation?
For more interviews on the future of AI, agents, and software, follow @forwardfuture and subscribe to our YouTube channel.
Nick Wentz: I’ve spent the last decade+ building and scaling technology companies—sometimes as a founder, other times leading marketing. These days, I advise early-stage startups and mentor aspiring founders. But my main focus is Forward Future, where we’re on a mission to make AI work for every human.