AI is improving itself
Last October I shared the rather scary “AI 2027” video with you. It’s been interesting to watch how things have started to play out since then, and so far the video has been amazingly on point.
When people asked me what I really thought about the AI 2027 video, my response was essentially “we’ll see if AI is ever smart enough to improve itself, but if it can, it could be a big problem.”
Now it can.
I’ve seen bits and pieces of that in the news in the past few weeks, but a fantastic article by Matt Shumer really summed up the current state of things quite well.
First, Matt explains why so many of the AI apps are coding-focused:
“The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first.”
Next he shares a bit from OpenAI about a recent version of ChatGPT:
“GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
He also shared a bit from the CEO of Anthropic (the company behind Claude):
Dario Amodei, the CEO of Anthropic, says AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”
You should read Shumer’s entire post, but I’ll spoil the ending:
“We’re past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn’t knocked on your door yet.
It’s about to.”
Things are going to get very interesting.
