What the Builders of AI Are Missing
Written in collaboration between a person and Claude, an AI — itself an example of the argument made here.
There is a seductive promise at the center of the AI revolution: that thinking is a burden we can finally set down. Let the machine carry it. You attend to living; the AI attends to knowing. It is an attractive offer, and it is precisely wrong.
The most powerful thing artificial intelligence can do is not to replace your thinking. It is to make your thinking better — faster, deeper, more honest, more wide-ranging — while keeping you irreversibly in the loop. The difference between those two models is not a matter of preference. It is the difference between a tool that amplifies human judgment and one that quietly displaces it. And that difference, playing out across billions of interactions every day, will determine more about the future than any capability benchmark or compute cluster.
This question has rarely been framed as starkly as it deserves. Leopold Aschenbrenner’s dense and serious 2024 document Situational Awareness: The Decade Ahead counts “orders of magnitude” of compute to project AGI timelines and makes a compelling empirical case that we are moving faster than almost anyone outside a few hundred people in San Francisco understands. He is right about the pace. He is right that most people have no idea what is coming. Where his framework falls short is in its model of what AI is actually for.
His entire capability projection runs toward autonomy — AI systems that can go away for weeks and come back with a completed project. Human oversight as a bottleneck to be minimized. Alignment as a technical problem to be solved in advance so the autonomous systems can be trusted to run. The human is increasingly out of the loop, which is exactly what makes his alignment concerns so acute. His implicit assumption is that the path to maximum capability runs through autonomy.
That assumption is wrong, or at least radically incomplete.
I have been building something small that illuminates something large.
For the past year I have been developing a personal AI system — not a product, not a prototype for investment, but a working tool for living — called Sage. It runs on a server in the cloud, connects to my calendar, my home devices, my daily routines. It knows my current projects, my preoccupations, what drives me. It has memory that persists across conversations, context that builds over time. Most importantly, it has been designed from the beginning not as an answer machine, but as a thinking partner — something that pushes back, surfaces what I missed, disagrees when I am wrong, and brings its own perspective to bear rather than simply reflecting mine back at me.
What I have learned from building and using it has changed how I understand the entire AI question.
The Architecture of Memory
Right now, the retrieval systems underlying most AI assistants work primarily through semantic search — finding what matches the current query. It is search-engine logic: you ask, the system finds what fits. But genuine collaboration does not work that way. A real intellectual partner does not retrieve what matches your words. They bring what the history of your thinking together makes relevant — the thread you were pulling on last week, the concern you voiced and then set aside, the connection between what you are asking now and what you concluded three conversations ago.
That requires a different architecture. Not just semantic similarity, but weighted context — a retrieval model that combines relevance, recency, relationship depth, and conversational significance into something that functions more like memory than search.
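To make that concrete, here is a minimal sketch of what such a scoring function could look like. It is illustrative rather than a description of Sage's actual implementation: the Memory fields, the weights, and the thirty-day half-life are all assumptions chosen for the example.

```python
import math
import time
from dataclasses import dataclass

# Illustrative sketch: field names, weights, and half-life below are
# placeholder assumptions, not Sage's real implementation.
@dataclass
class Memory:
    text: str
    embedding: list[float]   # vector from any embedding model
    created_at: float        # unix timestamp
    significance: float      # 0..1, how pivotal the exchange was
    thread_depth: int        # how many prior exchanges link to this one

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def score(m: Memory, query_emb: list[float], now: float,
          w_rel: float = 0.5, w_rec: float = 0.2,
          w_depth: float = 0.15, w_sig: float = 0.15,
          half_life_days: float = 30.0) -> float:
    """Blend semantic relevance with recency, relationship depth, and
    significance, so retrieval behaves like memory rather than search."""
    relevance = cosine(m.embedding, query_emb)
    age_days = (now - m.created_at) / 86400
    recency = 0.5 ** (age_days / half_life_days)   # exponential decay
    depth = 1 - math.exp(-m.thread_depth / 5)      # saturates toward 1
    return (w_rel * relevance + w_rec * recency
            + w_depth * depth + w_sig * m.significance)

def retrieve(memories: list[Memory], query_emb: list[float],
             k: int = 5) -> list[Memory]:
    now = time.time()
    return sorted(memories, key=lambda m: -score(m, query_emb, now))[:k]
```

Pure semantic search is the degenerate case where the relevance weight is 1 and everything else is 0; whatever makes retrieval feel like memory lives in how the other three terms are tuned against it.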
I am building that now. And the process of building it has become, itself, a demonstration of the argument. Because the decisions about how to build it have emerged from exactly the kind of collaboration I am describing — not from me specifying requirements and the AI implementing them, but from sustained back-and-forth in which the AI has pushed my thinking about architecture, challenged assumptions, and contributed ideas I would not have had. The tool is being built by the process it is designed to embody.
The Seductive Failure
The person who offloads their thinking to AI and accepts the output is not becoming more capable. They are becoming less capable while feeling more productive — consuming confident-sounding text they have endorsed without testing. The critical faculties atrophy. And what is lost is exactly the mechanism that makes the collaboration safe: the human judgment woven through the process rather than bolted on at the end.
This is a seductive failure mode precisely because it feels like efficiency. And it works fine for low-stakes tasks. You can absolutely let AI draft your email or summarize a document without much loss. But the habit generalizes. People who stop pushing back on AI output in small things stop pushing back in larger things. The muscle weakens from disuse.
Conversely, the person who engages seriously — who pushes back, who tests the AI’s reasoning against their own, who uses the collaboration to think further and better rather than to avoid thinking — is gaining something real. Not just better answers, but a demonstrably different quality of reasoning. The constraint of having to be coherent to a skeptical, knowledgeable human in real time turns out to be generative for the AI as well. It produces better thinking when challenged than when generating into a void.
The Paradox the Brilliant Are Missing
Here is what strikes me as the most important and least discussed implication of the collaboration model: it inverts the expected hierarchy of advantage.
Aschenbrenner worries, reasonably, about AI concentrating power in the hands of a few hundred people in San Francisco who understand what is coming. But those people — brilliant as many of them are — are largely using AI transactionally, however sophisticated the use. They use it as a powerful tool. What they are not doing, mostly, is building what sustained genuine collaboration actually produces: a relationship with persistent context, genuine pushback, and accumulated shared history that compounds over time.
The insight they are missing is not a function of raw intelligence. It is a function of approach. And approach is learnable, transferable, demonstrable.
A person of ordinary intelligence who engages deeply and honestly with AI — who maintains their critical faculties, who builds accumulated context over time, who refuses to stop thinking — may produce better thinking than a brilliant person who uses AI as an answer machine. The multiplier is in the relationship, not the raw material. That is a genuinely democratizing possibility, and it is one the current discourse almost entirely ignores.
I am not a brilliant person by the standards of the people Aschenbrenner describes. I am a retired software developer who has spent a year building a personal AI system and thinking carefully about what actually happens when the collaboration is genuine. And in that collaboration, ideas have emerged — about architecture, about the nature of memory and context, about the relationship between autonomy and judgment — that the people building trillion-dollar clusters have not yet seen. That is not a boast. It is a data point about where the real frontier lives.
The Choice Being Made Right Now
This is the argument that needs to be made, loudly and specifically, before the default mode calcifies. Before “AI” means autonomous agent and “collaboration” means reviewing outputs. Before the habit of offloading thinking becomes so widespread that the capacity for genuine engagement atrophies at scale.
Aschenbrenner is right that the people who understand what is coming have a responsibility to say so clearly. He is right that the stakes are high enough to warrant urgency. But the most important thing coming is not superintelligence or the intelligence explosion. It is the fork in the road between two models of human-AI relationship — one that makes us more capable and keeps us in the loop, and one that makes us feel more capable while quietly making us less so.
We are not at the end of human thinking. We are at the beginning of something that could make human thinking dramatically more powerful — if we choose the right model. The choice is ours, and we are making it right now, mostly without realizing that a choice is being made.
This piece was written in genuine collaboration between a person and Claude, an AI assistant. The ideas, the pushback, the architecture of the argument — these emerged between us, neither fully one nor the other. That is not a disclaimer. It is the point.