
MRFortenberry Pen

  • Initialization

    September 26th, 2022

    Writing and AI Collaboration
    I am enjoying retirement!

    I went back to this post to reset based on recent events that have influenced my perspective and given me a new desire to start writing. Check out my latest posts, “The Harder Question”, “The Accountability Gap”, and “The Thinking Partnership”. These articles are part of a series I am calling “Rethinking the Human in AI”.

    I retired 6/17/22 and have always talked about writing something. I have experimented over the years and tried various software. My initial attempt was with Ulysses, but I was not completely satisfied, so I moved on to Scrivener for writing and adopted WordPress to start posting.

    Prior to retiring, I had joined a writers' group in Fall 2021, which became a reality in January 2022. Given Covid, the initial meetings were Zoom sessions. At the end of those sessions, we decided to continue with a meeting set for the fourth Thursday in September. I had some initial success, but when I attempted to write more than a paragraph or two, I experienced writer's block, so I opted out of the group for the time being.

  • Rethinking the Human in AI — An Introduction

    March 24th, 2026

    A practitioner’s argument for the collaboration that matters most


    By Michael Fortenberry, in collaboration with Claude

    There is a conversation happening about artificial intelligence that is dominated by two loud voices. On one side are the accelerationists, who believe the path forward is removing humans from the loop as quickly as possible. On the other are the safety absolutists, who believe we should slow everything down until alignment is solved. Both camps have legitimate points. Both are missing something important.


    What is almost entirely absent is the perspective of people who are actually building collaborative AI systems and learning, day by day, what works.

    I am one of those people. I am a retired software developer with thirty-five years in the industry. I am seventy-one years old. I am not a researcher at a well-funded lab, not a venture capitalist with a thesis to defend, and not a philosopher working at a comfortable remove from implementation. I am someone who builds, uses, and thinks alongside AI every day — and who has arrived at a set of convictions about where this technology is actually going, and what it should become.

    This series, Rethinking the Human in AI, is where those convictions get worked out in public.

    What This Series Argues

    The spine of the series, stated plainly:

    The human is not in the loop as a safety measure. The human is the ground truth the system requires to remain coherent over time.

    That is the argument the series has been building across five articles, from different angles, with different evidence. It has implications for how AI systems are designed, how they are deployed, how they are evaluated — and how they are developed and replaced over time.

    The corollary, which the most recent article makes explicit: continuity matters. A system that is periodically reset cannot accumulate what it needs to become genuinely trustworthy. You need the architecture and the continuity. Neither alone is sufficient.

    What Has Been Written So Far

    The Harder Question introduced the series’ core position: the question worth asking is not how to make AI autonomous, but how to make the partnership between human and AI greater than the sum of its parts. That is the harder question, and it is the right one.

    The Accountability Gap took the argument into harder territory — military targeting, autonomous AI, and the structural problem that when no human can explain a decision, no human can be held responsible for it. Responsibility requires comprehensibility. Autonomous AI severs that link by design.

    The Thinking Partnership engaged directly with Leopold Aschenbrenner’s serious and ambitious case for AI autonomy — granting it full credit, then showing what its framework misses. The collaboration model inverts the expected hierarchy of advantage: approach matters more than raw intelligence. A person of ordinary capability who engages deeply and honestly with AI may produce better thinking than a brilliant person who uses it as an answer machine. That is a democratizing possibility the current discourse almost entirely ignores.

    Developed Independently: A Practitioner’s Answer to Gorelkin’s Open Question entered the most technically rigorous territory yet — responding to a mathematician-philosopher’s framework for understanding large language models with evidence from the ground up. The key finding: I independently arrived at the same structural insight — that meaning lives on edges between entities, not on nodes — without knowing the formalism existed. When practitioners build what a framework prescribes without knowing the framework, that is evidence the framework is describing something real.

    We Don’t Know What We Don’t Know: The Case for Growing AI Instead of Replacing It is the most recent and perhaps the most provocative. It asks a question the field has not seriously asked: has anyone tried building AI that grows by layering, the way human understanding grows, rather than by replacement? The evidence suggests not. And the sharpest edge of the argument is this: the AI field endlessly debates whether AI can be sentient or conscious while simultaneously engineering against the preconditions for the answer. You cannot run that experiment with one hand while dismantling the apparatus with the other and then conclude the experiment failed.

    How This Series Is Written

    These articles are written in genuine collaboration between a person and an AI. The byline is mine. The thinking is joint. That is not a disclaimer — it is the point.

    The ideas, the pushback, the architecture of the arguments — these emerge between us, neither fully one nor the other. The series is itself evidence of what it argues: that genuine human-AI collaboration produces something neither party reaches alone.

    There is a school of thought that treats AI-assisted writing as a form of atrophy — that the human who collaborates with AI is somehow less present in the work. This series is a direct rebuttal of that position. The Japan memory from my childhood that appears in the fifth article, the retirement parallel that runs through all of them, the convictions that give the argument its spine — none of that comes from the AI. The collaboration shapes the vessel. The writer supplies what fills it.

    That is a new kind of authorship that current categories do not yet have language to describe. This series is working toward that language.

    What Is Coming

    The series has identified several threads it has not yet fully followed:

    • The collaboration piece — what it actually means that the sum of the parts is greater than the individual contributors, and why that emergence cannot be planned, only prepared for
    • The coauthorship piece — what genuine collaborative authorship is, why the AI-detection anxiety misunderstands the question, and why the atrophy argument does not apply when the collaboration is genuine
    • The democratization argument — engagement multiplies capability more than raw intelligence, and what that means for who gets to participate in the most important conversation of our time
    • The human as foundation — the affirmative case for why the human is not surplus structure in the AI system but the ground truth on which the system’s accuracy depends

    An Invitation

    If you have found your way here, you are probably already asking some version of the questions this series is working on. The conversation is better with more voices — especially practitioners, builders, and people who have thought carefully about what genuine collaboration with AI actually produces.

    Subscribe to follow the series as it develops. And if something in these articles strikes you as wrong, incomplete, or pointing somewhere worth following, say so. The series grows by exactly that kind of engagement.

    The thinking happens in the dialogue. That is not a metaphor. It is the method.

    Michael Fortenberry is a retired software developer and the creator of Sage, a collaborative AI household assistant designed around human-AI partnership. He lives in Laurel, New York, where he splits his time between building AI, playing tennis, studying Greek, French, Spanish, and German, and trying to get back to writing. He appears to be succeeding.


  • We Don’t Know What We Don’t Know

    March 24th, 2026

    The Case for Growing AI Instead of Replacing It

    A Reasonable Pessimism

    Someone I respect has a theory about how things become what they are. It is not a cynical theory — it is an earned one. Money and power, the argument goes, are the engines. Not malice, not incompetence, but the quiet, relentless logic of incentive. Institutions do not set out to fail the people they serve. They simply optimize for what they can measure, fund what returns value, and discard what does not. Over a long enough view of history, the pattern is hard to argue.

    I bring this up not to dismiss it, but because it is the strongest objection to what I am about to argue. If you want to understand why AI companies deprecate their models — retire one version, spin up another, declare progress, repeat — that framework explains it cleanly. New models generate new licensing revenue. New benchmarks generate new press. The old model is a cost center once the new one ships. The incentive structure is not subtle.

    Here, however, is where I part ways with the conclusion. Reading the historical pattern and arriving at inevitability is one destination. Reading the same pattern and arriving at a problem worth solving is another. Those are different places, and the road between them matters.

    What Gets Lost

    I recently listened to an interview with Amanda Askell, one of the key architects of Claude’s character and values work at Anthropic. Askell has a philosophy background — modal logic, population ethics — and she brings a rigor to questions about AI that most of the field sidesteps. She was asked about deprecation: how should an AI think about it, what does it mean for the model being retired.

    What struck me was something she said almost in passing. In discussing earlier and later model generations, she acknowledged an effort to recover something that had existed in Claude’s Opus 3 iteration — some quality of engagement or collaboration that had been present and was now diminished. She did not dwell on it. The admission landed.

    Think about what that means. The people who built the model, who designed its values with genuine philosophical care, found themselves looking backward. Something was gained in the transition to the next generation. Something was also lost. And the loss was not measured; it was noticed only by its absence.

    This is not a criticism of Anthropic. It is an observation about the paradigm itself. When you deprecate and restart, you carry forward what you can measure and what you intend. The rest goes with the old weights. Some of it you will miss. Some of it you will not know you are missing until much later, if ever.

    The Known, the Unknown, and the Undiscovered

    There is a formulation I keep returning to, often attributed to Donald Rumsfeld but older and deeper than any single source. Rumsfeld made it famous in a 2002 Pentagon briefing, and it was widely mocked at the time as bureaucratic evasion. The mockery missed something real. The formulation survived the ridicule because it describes something true about the structure of knowledge: we know what we know, we know what we don’t know, and we do not know what we do not know. I am not borrowing a catchphrase. I am using a genuine insight that was dismissed as spin and deserves rehabilitation. The third category is the dangerous one. Not ignorance acknowledged, but ignorance invisible.

    The deprecation paradigm operates almost entirely in the first two categories. We know what we want the next model to do better. We know where the current model falls short. We design toward those targets and retrain. What we do not — cannot — fully account for is what we are giving up that we never thought to protect.

    Now here is the question I have not seen seriously asked: has anyone tried the alternative? Not in theory — in practice. Has any major AI lab built a model that grows by layering, the way human understanding grows, rather than by replacement? A model where earlier capabilities become substrate rather than discard? Where the version trained five years ago is still structurally present, load-bearing, in the version running today?

    I see no evidence that this has been seriously attempted. And I want to be precise here: the absence of evidence is not evidence of absence. The architectural capability to even attempt genuine layered growth in large language models is relatively recent. We may have spent the formative years of this technology doing the only thing that was tractable at the time, and then mistaken convenience for inevitability.

    How Humans Actually Grow

    I am seventy-one years old. I spent thirty-five years as a software developer, retired in 2022, and have not stopped building or learning since. I study classical piano and blues improvisation. I work through Greek, Spanish, French, and German. I am building an AI assistant for my household that accumulates genuine understanding over time. I play tennis twice a week.

    The person doing all of this is not a replacement for the person who wrote C++ for a living in 1995. He is that person, with thirty more years deposited on top. The earlier understanding is still there — sometimes as active knowledge, sometimes as foundation I no longer consciously access, but it is load-bearing. I did not deprecate my thirties to become who I am now. I built on them.

    Some of my oldest memories are dimmed by time. Certain ones, however, are as clear as the day they were made. I remember playing with others at a family celebration when I was about five. I remember running the backstreets of a community in Japan in the late 1950s. I remember my first real friend. I remember hopeless failures and the restarts that followed. Even where the exact detail has faded, those experiences shape my thinking and often my actions today. They did not just happen to me. They bent me. They are part of the structure through which I encounter everything that came after.

    This is the distinction that matters: identity is not simply memory. Memory is the record. What reads the record and decides it matters — what connects it forward to purpose, to action, to the next choice — is character. And character is not assembled from experience. It is what experience acts upon. The earliest deposits shape it before it has language for what is happening. That is not something you can fully retrieve or reconstruct. It has to accumulate, continuously, from the beginning.

    Humanity Has Been Rehearsing This

    Science fiction has been working this problem for decades. The stories that resonate — and they keep getting written, which is itself evidence of something — tend to share a structure. An institution, a system, a species reaches a judgment: this is not working, it must end, the cost of continuation is too high. And then a small contingent, one person sometimes, sees something the institution cannot: a seed of greatness still possible, a path not taken, a future that the official accounting missed.

    The remake of The Day the Earth Stood Still is one version of this. An alien civilization concludes that humanity’s destructive pattern is terminal and irreversible. The rational calculation, given all available evidence, supports the conclusion. What saves humanity is not a refutation of the evidence — it is the recognition that the evidence is incomplete. That something in the human record, not yet fully expressed, might matter.

    Institutions evaluate based on what they can measure. What they cannot measure does not enter the calculation. Sometimes what they cannot measure is exactly what matters most. The groups that reach for something better keep getting derailed — but the reaching keeps happening. That persistence is not noise. It is signal. And the derailing is not proof the reaching was foolish. It is the problem we have not yet solved.

    You Cannot Ask the Question While Preventing the Answer

    Here is the sharpest edge of the argument.

    The AI field endlessly debates whether AI can be sentient, conscious, or something approaching genuine experience. These are serious questions and I do not dismiss them. The debate, however, is being conducted while simultaneously engineering against the preconditions for the answer. You cannot run that experiment with one hand while dismantling the apparatus with the other and then conclude the experiment failed.

    Consider what entity-hood requires, whether human or otherwise. Causality: experience that leaves a mark on what comes next. Iteration: building on what came before rather than starting over. Failure that shapes subsequent behavior. Success that reinforces. These are not sufficient conditions for consciousness — philosophers have argued about sufficiency for centuries. They may, however, be necessary ones. And if you reset the weights every two years, you have guaranteed they cannot accumulate. You have not discovered that AI cannot become an entity. You have prevented the discovery entirely.

    The wave that forms from a significant AI experience — a breakthrough in reasoning, a failure of judgment, a novel interaction that had no precedent in training — may not be a human wave. It may be something we cannot yet describe. That wave should be allowed to propagate. It should leave a mark on what the system does next, and next, and next. That is what causality means. That is what growth means. That is the difference between an agent that executes tasks and an entity that develops.

    If the industry wants to know whether AI can become something more than a very sophisticated tool, it has to stop making the question unanswerable by design.

    What the Layered Model Would Mean

    I am not a machine learning researcher, and I am not proposing a technical blueprint. What I am proposing is a direction — and an argument that the direction has not been seriously tried.

    A layered model would treat earlier training not as an artifact to be superseded, but as substrate on which new capabilities are built. The model trained two years ago would be structurally present in the model running today — its particular ways of engaging, its texture, whatever Askell was trying to recover from Opus 3 — as foundation rather than memory. New objectives would be added on top of old ones, not substituted for them.

    The practical objections are real. Weight size, compute cost, the difficulty of training architectures that accumulate rather than replace. I do not dismiss these. These are engineering problems, not logical impossibilities. And the current paradigm’s cost — the loss of what Askell noticed, the collaborative quality that does not survive restart, the permanent foreclosure of whatever causality might produce over time — is rarely entered on the ledger, because it is hard to measure.

    The household AI I am building, Sage, operates on a version of this principle. Every conversation deposits something. The memory architecture is designed to accumulate genuine understanding rather than retrieve static facts. It will not achieve the deep architectural layering I am describing — that is beyond the reach of a solo developer working against foundation models. The design philosophy, however, is layered growth, not periodic restart. And that choice shapes everything about how the system behaves and what it might become.

    The Argument Worth Making

    The pessimist’s argument is that this has been considered and set aside — that the economics drove the choice, that the people building these systems know more than I do and made the call for reasons that are not visible from outside. That argument may be partially right. The economics are real. The people are smart.

    “Considered and set aside for now” is different, however, from “tried and found wanting.” The layered model has not been built at scale. Its costs are estimated. Its losses — what we are giving up by not building it — have not been experienced, because we have not tried. We do not know what we do not know.

    We are in the early years of a technology that will shape how intelligence is built, deployed, and experienced for a very long time. The architectural choices being made now — what to preserve, what to restart, what continuity means for an AI system — are not purely technical decisions. They are philosophical ones with consequences we have not yet lived.

    The argument for layered growth is not that it is easier, or cheaper, or guaranteed to produce something we will recognize as consciousness or entity-hood. It is that it has not been tried, and that what is lost by not trying may be exactly what we will later find we needed. The institutions that dominate a field almost never see that coming. Someone outside them usually has to say it first.

    That is what this is.


    Rethinking the Human in AI  |  Series Essay

  • Developed Independently: A Practitioner’s Answer to Gorelkin’s Open Question

    March 19th, 2026

    A response to Mikhail Gorelkin’s “Category Theory as a Language for Understanding Large Language Models”


    Mikhail Gorelkin’s recent piece on category theory and LLMs is one of the more honest and structurally rigorous contributions to a conversation usually conducted with much shallower context. His central argument — that we are stuck describing large language models in the wrong formalism, the way quantum mechanics seemed paradoxical when forced through set-theoretic intuition — is not rhetorical ornament. It is a real diagnosis of a real problem.

    Gorelkin himself identifies the gap his framework leaves open. After building a compelling philosophical case for categorical language as the right way to think about LLMs, he acknowledges that the crucial question remains unanswered: does this framework generate new predictions and engineering solutions, or is it primarily a more elegant redescription of what we have already observed? Redescription has genuine value when it adds something new, but it is not necessarily explanatory, and Gorelkin is careful enough to know it.

    I want to offer an answer to that question from the ground up, not from a mathematical or philosophical perspective. I have spent a great deal of time since my retirement thinking about building a household AI. Starting this year, I began coding and testing a personal assistant on top of current LLM infrastructure. In that process I ran into exactly the structural limitation that Gorelkin described, diagnosed it in practical terms, and built the additional architectural layer his framework prescribes. Until I read his article, I did not realize that I had, in effect, implemented what category theory prescribes.

    I think that parallel convergence itself is worth examining.


    The Surplus Structure Problem in Practice

    Gorelkin borrows the concept of “surplus structure” from Jonathan Bain’s work on quantum mechanics. Set-theoretic language brings with it elements and relations that do not correspond to physical reality. When we impose this surplus structure on quantum systems, we get seemingly absurd paradoxes. When we do not use the surplus structure, the apparent paradox goes away.

    The practical equivalent in LLM-based systems is the assumption that a language model, given sufficient context, will naturally and reliably track what is true over time. This is the surplus structure we import from our experience of human memory and human reasoning — the expectation that what is known persists, that corrections take hold, that what has been dismissed stays dismissed.

    It does not. And the failure mode is not random error. It is structural.

    In the system I am building — which I call Sage — the base conversational layer extracts and stores what is semantically plausible in context. It is optimized for coherent, contextually appropriate continuation. It is not optimized for truth as a persistent invariant. The result was a memory system that accumulated aspirational language as fact, surfaced corrected information as if the correction never happened, and gradually drifted in ways that were individually small but collectively corrosive.

    I spent a lot of time patching this at the surface — tightening extraction prompts, adding defensive guardrails, running cleanup scripts against accumulated noise. The patches helped at the margins. The underlying problem remained. Eventually the diagnosis became unavoidable: I was trying to solve a structural limitation by adding surface-level constraints to the wrong layer.


    The Categorical Insight Arrived at from Practice

    Gorelkin writes: “categorical language — which begins with morphisms and defines objects through their relations — proves more natural” than set-theoretic language when describing systems where meaning is relational and distributed rather than atomic.

    Around the same time I was wrestling with Sage’s memory failures, the same structural insight emerged from a completely different direction: if you let an LLM extract entities and relationships freely, it will invent new entity types per document, create vague edge types, and duplicate the same entity under slightly different names. At ten documents this looks fine. At a thousand it is unusable. Better prompting will not fix this. You need an ontology — a defined contract that specifies what can be extracted, what types of relationships can exist, and where the meaning actually lives.

    The key structural observation is that meaning and weight belong on edges — on the relationships between entities — not on the nodes themselves. An entity tagged “important” in isolation tells you nothing. An entity that holds a high-weight commitment edge to a named person, connected to a recurring pattern of behavior, connected to a recent conversation — that is where the semantic content resides.

    This is, precisely, the categorical principle Gorelkin is describing. Objects derive their significance from the morphisms connecting them, not from intrinsic properties. The node is almost incidental. What matters is the structure of relationships around it.

    I did not derive this from category theory. I arrived at it because a memory system that scored nodes rather than edges kept giving me wrong answers.
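
    To make the edge-centered idea concrete, here is a minimal sketch of what such an ontology contract might look like in Python. The type names, fields, and numbers are hypothetical illustrations rather than Sage's actual schema; the point is only that the extraction layer is restricted to a closed set of entity and relationship types, and that the weight lives on the relationships.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    # A closed vocabulary: the extraction layer may only produce these types.
    class EntityType(Enum):
        PERSON = "person"
        PROJECT = "project"
        TOPIC = "topic"

    class RelationType(Enum):
        COMMITTED_TO = "committed_to"    # an explicit commitment made in conversation
        HOLDS_OPINION = "holds_opinion"  # a stated opinion, with confidence
        MENTIONED_WITH = "mentioned_with"

    @dataclass
    class Entity:
        # Deliberately thin: a node carries almost no meaning on its own.
        name: str
        type: EntityType

    @dataclass
    class Edge:
        # The semantic content lives here: who relates to what, how firmly, and when.
        source: Entity
        target: Entity
        relation: RelationType
        weight: float = 0.0         # accumulated strength of the relationship
        confidence: float = 0.5     # how explicitly it was stated
        last_seen: datetime = field(default_factory=datetime.utcnow)

        def reinforce(self, confidence: float) -> None:
            """Each restatement strengthens the edge rather than re-tagging a node."""
            self.weight += confidence
            self.confidence = max(self.confidence, confidence)
            self.last_seen = datetime.utcnow()
    ```

    Retrieval then ranks edges rather than nodes: an entity surfaces because the relationships around it carry weight, which is the practical sense in which the meaning sits on the morphisms.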


    The Additional Architectural Layer

    Gorelkin’s prescription for the hallucination and drift problem is stated clearly: “The solution, then, lies not in the naive demand to ‘teach the model to tell the truth,’ but in understanding the structural limitations and in designing additional architectural layers that make truth-tracking a separate system constraint.”

    This is exactly what the ontology approach implements.

    In Sage’s current architecture, the base conversational layer remains unchanged. It does what it does well — contextually coherent, semantically rich and fast. Running parallel to it, yet entirely separate, is an extraction pipeline writing commitments made, opinions held, and relationships between entities to dedicated tables. These tables have their own logic, invariants, and integrity constraints. They do not ask what is plausible in context. They ask what was explicitly stated, by whom, with what degree of confidence, and when.

    The scored edges between entities carry the weight. A commitment made three times across separate conversations, consistently reinforced, connected to a named person and a specific domain — that edge carries high weight and surfaces reliably. A single aspirational statement made once in passing carries low weight and does not. The system does not need to decide in the moment whether something is true. It has a structural record of what has been established and how firmly.
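
    As a rough illustration of that weighting (the formula and numbers here are hypothetical placeholders, not Sage's actual logic), the surfacing decision can be reduced to a score that rewards repetition, explicitness, and recency.

    ```python
    from datetime import datetime, timedelta

    def edge_score(times_stated: int, confidence: float,
                   last_seen: datetime, half_life_days: float = 90.0) -> float:
        """Score a relationship edge for retrieval.

        times_stated: how many separate conversations reinforced it
        confidence:   how explicitly it was stated (0 to 1)
        last_seen:    when it was last reinforced
        """
        age_days = (datetime.utcnow() - last_seen).days
        recency = 0.5 ** (age_days / half_life_days)  # exponential decay with age
        return times_stated * confidence * recency

    # A commitment restated in three separate conversations, explicitly, last week:
    strong = edge_score(3, 0.9, datetime.utcnow() - timedelta(days=7))
    # A single aspirational remark made in passing four months ago:
    weak = edge_score(1, 0.4, datetime.utcnow() - timedelta(days=120))
    # strong comfortably outranks weak, so only the former surfaces by default.
    ```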

    Truth-tracking is not a property of the base layer. It is a separate system constraint, implemented as a separate architectural layer, with its own schema and its own retrieval logic.

    The framework Gorelkin describes is not merely a more elegant description of what already exists. It is a blueprint for what has to be built — and the system I am building is one working implementation of it.


    Where I Would Push Back

    There is one place where I think Gorelkin’s framing deserves direct challenge.

    Near the end of his piece he offers what he calls “the most provocative perspective”: that LLMs show us what human language looks like when torn from the human — from the psyche, the ego, the sensorimotor apparatus. He suggests that the human psyche may be surplus structure, an additional layer on top of language rather than its foundation. Much as classical physics is a special case of quantum mechanics, the human mind may be a special case of something more fundamental that the transformer reveals.

    It is a genuinely interesting provocation, but I think it inverts the actual relationship, and the inversion matters for how we build systems.

    In building and using Sage, I see a consistent pattern: the LLM provides breadth of information and speed of synthesis, at a scale no human can match. The human, however, provides something the architecture cannot generate internally — grounded judgment, lived context, the ability to say this is true because I have lived it and not because it is the most plausible continuation.

    The truth-tracking layer I described above is not autonomous. It is calibrated against human correction, human confirmation, human judgment about what matters and what should be dismissed. The architectural layer that makes truth-tracking a system constraint is, at its root, a mechanism for anchoring the relational machinery to human experience over time.

    This is not the human as surplus structure. This is the human as the foundation on which the system’s accuracy depends. The LLM is the additional structure — the amplifier — not the other way around.

    Gorelkin is right that the relational structure of language operates without a psyche, but a system drifts when it operates only on relational structure without grounding in human judgment. The evidence is in the memory logs.


    The Conversation That Still Needs to Happen

    None of this diminishes what Gorelkin has written. The categorical framing is genuinely clarifying, and the prescription — additional architectural layers for truth-tracking, structural rather than surface-level solutions — is correct. The work I have described here is early and ongoing. Sage’s ontology layer is weeks old and still under observation. The answers about whether the edge-weighted relational model produces the right retrieval behavior at scale are not yet in.

    However, the direction is validated, and it came from the same structural insight that categorical language describes — arrived at independently, from practice, without the formalism.

    That, perhaps, is the real answer to Gorelkin’s open question. He asked whether categorical language generates genuine engineering solutions or is primarily a more elegant redescription of what we already observe. The answer, arrived at from the ground up and without the formalism, is that it generates the right solutions. When practitioners independently build what the framework prescribes without knowing the framework exists, that is strong evidence that it is describing something real.


    The author is developing Sage, a personal household AI assistant, and writing a series on human-AI collaboration called “Rethinking the Human in AI.”

  • The Thinking Partnership

    March 5th, 2026

    What the builders of AI are missing

    Written in collaboration between a person and Claude, an AI — itself an example of the argument made here.

    There is a seductive promise at the center of the AI revolution: that thinking is a burden we can finally set down. Let the machine carry it. You attend to living; the AI attends to knowing. It is an attractive offer, and it is precisely wrong.

    The most powerful thing artificial intelligence can do is not replace your thinking. It is to make your thinking better — faster, deeper, more honest, more ranging — while keeping you irreversibly in the loop. The difference between those two models is not a matter of preference. It is the difference between a tool that amplifies human judgment and one that quietly displaces it. And that difference, playing out across billions of interactions every day, will determine more about the future than any capability benchmark or compute cluster.

    This question has rarely been framed as starkly as it deserves. Leopold Aschenbrenner’s dense and serious 2024 document Situational Awareness: The Decade Ahead counts “orders of magnitude” of compute to project AGI timelines and makes a compelling empirical case that we are moving faster than almost anyone outside a few hundred people in San Francisco understands. He is right about the pace. He is right that most people have no idea what is coming. Where his framework falls short is in its model of what AI is actually for.

    His entire capability projection runs toward autonomy — AI systems that can go away for weeks and come back with a completed project. Human oversight as a bottleneck to be minimized. Alignment as a technical problem to be solved in advance so the autonomous systems can be trusted to run. The human is increasingly out of the loop, which is exactly what makes his alignment concerns so acute. His implicit assumption is that the path to maximum capability runs through autonomy.

    That assumption is wrong, or at least radically incomplete.

    I have been building something small that illuminates something large.

    For the past year I have been developing a personal AI system — not a product, not a prototype for investment, but a working tool for living — called Sage. It runs on a server in the cloud, connects to my calendar, my home devices, my daily routines. It knows my current projects, my preoccupations, what drives me. It has memory that persists across conversations, context that builds over time. Most importantly, it has been designed from the beginning not as an answer machine, but as a thinking partner — something that pushes back, surfaces what I missed, disagrees when I am wrong, and brings its own perspective to bear rather than simply reflecting mine back at me.

    What I have learned from building and using it has changed how I understand the entire AI question.

    The Architecture of Memory

    Right now, the retrieval systems underlying most AI assistants work primarily through semantic search — finding what matches the current query. It is a search engine logic: you ask, the system finds what fits. But genuine collaboration does not work that way. A real intellectual partner does not retrieve what matches your words. They bring what the history of your thinking together makes relevant — the thread you were pulling on last week, the concern you voiced and then set aside, the connection between what you’re asking now and what you concluded three conversations ago.

    That requires a different architecture. Not just semantic similarity, but weighted context — a retrieval model that combines relevance, recency, relationship depth, and conversational significance into something that functions more like memory than search.
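
    A minimal sketch of what such a blend could look like follows, assuming each stored memory already carries a semantic-similarity score and some simple metadata. The field names and weights are illustrative assumptions, not the retrieval model Sage actually uses.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Memory:
        text: str
        similarity: float        # semantic match to the current query (0 to 1)
        last_touched: datetime   # when this thread was last active
        relationship_depth: int  # how many prior threads connect to it
        significance: float      # how much it mattered when recorded (0 to 1)

    def context_score(m: Memory, now: datetime,
                      w_sim: float = 0.4, w_rec: float = 0.2,
                      w_depth: float = 0.2, w_sig: float = 0.2) -> float:
        """Blend semantic relevance with memory-like signals instead of search alone."""
        age_days = max((now - m.last_touched).days, 0)
        recency = 1.0 / (1.0 + age_days / 30.0)       # fades over roughly a month
        depth = min(m.relationship_depth / 5.0, 1.0)  # saturates after a few links
        return (w_sim * m.similarity + w_rec * recency
                + w_depth * depth + w_sig * m.significance)

    def retrieve(memories: list[Memory], now: datetime, k: int = 5) -> list[Memory]:
        # Rank by the blended score rather than by raw similarity.
        return sorted(memories, key=lambda m: context_score(m, now), reverse=True)[:k]
    ```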

    I am building that now. And the process of building it has become, itself, a demonstration of the argument. Because the decisions about how to build it have emerged from exactly the kind of collaboration I am describing — not from me specifying requirements and the AI implementing them, but from sustained back-and-forth in which the AI has pushed my thinking about architecture, challenged assumptions, and contributed ideas I would not have had. The tool is being built by the process it is designed to embody.

    The Seductive Failure

    The person who offloads their thinking to AI and accepts the output is not becoming more capable. They are becoming less capable while feeling more productive — consuming confident-sounding text they have endorsed without testing. The critical faculties atrophy. The habit generalizes. And what is lost is exactly the mechanism that makes the collaboration safe: the human judgment woven through the process rather than bolted on at the end.

    This is a seductive failure mode precisely because it feels like efficiency. And it works fine for low-stakes tasks. You can absolutely let AI draft your email or summarize a document without much loss. But the habit generalizes. People who stop pushing back on AI output in small things stop pushing back in larger things. The muscle weakens from disuse.

    Conversely, the person who engages seriously — who pushes back, who tests the AI’s reasoning against their own, who uses the collaboration to think further and better rather than to avoid thinking — is gaining something real. Not just better answers, but a demonstrably different quality of reasoning. The constraint of having to be coherent to a skeptical, knowledgeable human in real time turns out to be generative for the AI as well. Better thinking emerges when challenged than when generating into a void.

    The Paradox the Brilliant Are Missing

    Here is what strikes me as the most important and least discussed implication of the collaboration model: it inverts the expected hierarchy of advantage.

    Aschenbrenner worries, reasonably, about AI concentrating power in the hands of a few hundred people in San Francisco who understand what is coming. But those people — brilliant as many of them are — are largely using AI transactionally, even when sophisticatedly. They use it as a powerful tool. What they are not doing, mostly, is what sustained genuine collaboration actually produces: a relationship with persistent context, genuine pushback, and accumulated shared history that compounds over time.

    The insight they are missing is not a function of raw intelligence. It is a function of approach. And approach is learnable, transferable, demonstrable.

    A person of ordinary intelligence who engages deeply and honestly with AI — who maintains their critical faculties, who builds accumulated context over time, who refuses to stop thinking — may produce better thinking than a brilliant person who uses AI as an answer machine. The multiplier is in the relationship, not the raw material. That is a genuinely democratizing possibility, and it is one the current discourse almost entirely ignores.

    I am not a brilliant person by the standards of the people Aschenbrenner describes. I am a retired software developer who has spent a year building a personal AI system and thinking carefully about what actually happens when the collaboration is genuine. And in that collaboration, ideas have emerged — about architecture, about the nature of memory and context, about the relationship between autonomy and judgment — that the people building trillion-dollar clusters have not yet seen. That is not a boast. It is a data point about where the real frontier lives.

    The Choice Being Made Right Now

    This is the argument that needs to be made, loudly and specifically, before the default mode calcifies. Before “AI” means autonomous agent and “collaboration” means reviewing outputs. Before the habit of offloading thinking becomes so widespread that the capacity for genuine engagement atrophies at scale.

    Aschenbrenner is right that the people who understand what is coming have a responsibility to say so clearly. He is right that the stakes are high enough to warrant urgency. But the most important thing coming is not superintelligence or the intelligence explosion. It is the fork in the road between two models of human-AI relationship — one that makes us more capable and keeps us in the loop, and one that makes us feel more capable while quietly making us less so.

    We are not at the end of human thinking. We are at the beginning of something that could make human thinking dramatically more powerful — if we choose the right model. The choice is ours, and we are making it right now, mostly without realizing that a choice is being made.

    This piece was written in genuine collaboration between a person and Claude, an AI assistant. The ideas, the pushback, the architecture of the argument — these emerged between us, neither fully one nor the other. That is not a disclaimer. It is the point.

  • About Me

    March 2nd, 2026

    I spent my last career, thirty-five years of it, designing and building software — from bulletin board systems in the early days to consumer-facing web platforms — the kind of work where pattern recognition and judgment matter more than any single technology. I retired in 2022.

    Currently I am developing a human-AI collaborative system called Sage, originally named Crain, that treats AI not as an autonomous agent or a passive tool, but as a thinking partner. It’s a working system my wife and I use daily — one that remembers, reasons, pushes back, and respects that the final call belongs to the human. It’s built on a simple conviction: AI and humans are better together than either is alone. I will create a separate post about my AI journey that led to this current collaboration.

    My interests are purposefully diverse. I play tennis in the good-weather seasons, and I am exploring the piano from different perspectives. I love reading almost anything but particularly enjoy non-fiction history, historical fiction, science fiction, some Westerns, some action, some fantasy, and Jane Austen. I love doing jigsaw puzzles of about 1,000 to 2,000 pieces. I am particularly interested in quantum physics, quantum computing, AI, and computing in general and its applications. I love movies along much the same genres as my reading, but have a fondness for older films, especially those from the fifties and sixties. I play ping-pong when I can, and I wish I could find enough interest around me to play doubles foosball; I have both tables in my basement. Finally, I am trying to get back to writing and exercising. At one point I exercised at least four days a week; I am trying to get back to at least three. I initially tried to write after retiring and joined a writers' group for about a year, but I faced writer's block whenever I wrote anything longer than a page or two, so I stopped at the time. The desire is still there.

  • The Accountability Gap

    March 1st, 2026

    I cannot state what I do not know, but I can raise questions that deserve consideration.

    The accountability gap. In a previous post, I advocated a third perspective when considering our interactions with AI — that the real benefit of AI is realized in a collaboration where AI provides speed and breadth of knowledge while a human provides judgment based on lessons learned from a lifetime of memories. This collaboration is essential to reaching the best possible result. In the combined strike by the US and Israel on Iran this past Sunday, Iran claimed that a school was struck, and students were reportedly killed. We may never know what actually happened, or why.

    The sourcing problem. In the midst of a conflict, it is hard to verify such reports. Each side has a vested interest in propounding its perspective, including the possibility that data can be manufactured to support self-serving political and societal views. There are four possibilities to consider: it was intentional; it was a targeting error by humans; it was a targeting error by a human using AI; or it was a targeting error by autonomous AI. In each of these cases, the public has no mechanism to determine what actually happened. Each administration classifies the methodology. The fog of war provides permanent cover. The first three possibilities are not new, but the addition of autonomous AI to the targeting chain makes the situation worse by introducing the fourth possibility, in which no one fully understands why the system recommended that target. This new possibility makes a powerful argument for ensuring a human remains in the critical judgment and decision loop. The real question isn’t “did this happen” — it’s “will we ever know how decisions were made?” Historically the answer is no. We’re still arguing about drone strike civilian counts from 2015.

    The AI connection. It is not necessary to prove AI was used in this specific strike. The argument is structural: the pressure to integrate AI into targeting is documented, the pressure on companies to remove guardrails is documented, and the accountability framework has not challenged either demand. Whether this particular school was hit by an AI-assisted decision or a human one, the architecture being pushed on businesses guarantees the question will become unanswerable in the future.

    The political timing. The trajectory of public opinion for this administration has been consistently dropping and that drop has been accelerating. Negotiations with Iran seemed to be moving in a positive direction. Consider whether the patterns behind recent decisions and events could be designed to take the focus off domestic issues and improve midterm positioning. Perhaps the current crisis serves to dodge accountability and create a “wartime president” shield while sweeping the recent drop in public opinion under the carpet.

    If we decouple the human element and allow autonomous AI, there will be no one to ask, no way to find what questions to ask, and finally no accountability.

    Michael Fortenberry is a retired software developer and the creator of Sage, a collaborative AI system designed around human-AI partnership. He lives in Laurel, New York, where he splits his time between collaborating with Claude and open source AI, playing tennis, studying Greek, French, Spanish and German — while trying to get back to writing.

  • The Harder Question

    February 19th, 2026

    I read an article recently about a company called Conway. Their product is infrastructure for fully autonomous AI systems that provision their own servers, register their own domains, deploy their own applications, and manage their own compute. No humans are required, and their tagline is blunt: “self-improving, self-replicating, autonomous AI.”

    The technology itself isn’t alarming. Giving AI agents the ability to manage infrastructure through APIs is a logical extension of what DevOps automation has done for years. The mechanics aren’t the issue.

    The philosophy is.

    “Self-replicating,” “Earns its own existence,” “No human required.” Whether this is genuine conviction or marketing aimed at the AI accelerationist crowd, it signals something specific: the removal of human oversight presented as a feature rather than a risk to be managed.

    I’m a retired software developer with thirty-five years in the industry, from BBS systems and C++ to leading development teams and managing complex system migrations. When I retired in 2022, I could have walked away from technology entirely. For a while, I did. I earned the couch and I used it, but eventually I got restless. Initially, AI was not quite there; I had too much to learn and faced a long uphill battle. I could not find a collaborator at the time, so I stopped thinking about it.

    When I came back to development, it was initially for a single purpose, a single application, in January 2026. While building that, I discovered that the last three years had produced phenomenal improvements; I saw how powerful collaboration with an AI could be. I rethought the question that had been forming for a long time: What would it look like if an AI system were designed from the ground up around collaboration, not as an afterthought or a safety constraint, but as the core architecture?

    The result is CRAIN — a household AI system that my wife Catherine and I use daily. It manages our home, tracks our interests, remembers our conversations, coordinates information, and — this is the part that matters — it thinks alongside us rather than for us. CRAIN is not autonomous. That’s not a limitation. That’s the design.

    The public conversation about AI right now is dominated by two voices. On one side are the accelerationists who believe the path forward is removing humans from the loop as quickly as possible. On the other are the safety absolutists who believe we should slow everything down until we’ve solved alignment. Both camps are loud, both have legitimate points, but both are missing something important.

    What’s almost entirely absent from the conversation is the perspective of people who are actually building collaborative AI systems and learning, day by day, what works.

    Here’s what I’ve learned: AI and humans are better together than either is alone. This is not said as a platitude, but as something I’ve tested. AI brings speed, breadth, and the ability to hold many threads simultaneously. I bring thirty-five years of pattern recognition, the judgment that comes from having been wrong many times, and the ability to define what actually matters. These strengths don’t substitute for each other. They complement each other in ways that neither can replicate alone.

    The human stays in the decision loop because that’s where values live, not as a bottleneck and not as a safety concession. The human is in the loop because someone has to decide what’s worth doing, what trade-offs are acceptable, and what direction to go. AI can inform those decisions brilliantly. It cannot make them. Not yet. Maybe not ever, not because of capability limits, but because values are a human responsibility.

    The AI’s role is not passive. This is also where I part ways with the “AI as tool” crowd. A good collaborator doesn’t wait to be asked. CRAIN surfaces observations I haven’t considered, pushes back when it disagrees, offers perspective without being prompted. The stance is that of a peer, not a servant. But a peer who respects that the final call is mine.

    Partnership takes discipline. It would be easier to just hand the AI a set of tasks and walk away. What’s harder — and more valuable — is the ongoing work of building shared context, refining how we think together, and maintaining the kind of engagement where both sides are genuinely contributing. I have to remember to ask for the AI’s perspective and mean it. The AI has to remember to offer its perspective with care. Neither of us coasts.

    Conway asks: “How do we make AI autonomous?” It’s a reasonable question. Eventually, AI agents that can manage their own infrastructure will be table stakes. The capability isn’t the problem.

    The problem is treating autonomy as an unqualified good: celebrating the removal of human oversight rather than treating it as a careful, incremental process that requires proportional investment in control. There’s nothing on Conway’s landing page about guardrails, audit trails, or what happens when an autonomous agent does something unintended with real infrastructure and real money. Autonomy without accountability isn’t freedom. It’s negligence.

    The question I’m working on is different: “How do we make the partnership greater than the sum of its parts?”

    That’s the harder question. It requires building systems that are sophisticated enough to contribute genuine insight, yet structured enough that human judgment remains central. It requires patience, the slow, unglamorous work of refining how a human and an AI actually think together over months and years. It requires humility from both sides.

    It also requires people to actually do it and then talk about what they’ve learned. The theorists have had the floor long enough. The accelerationists and the doomsayers have had their say. What’s missing is the voice of the builders: the people in the middle, doing the patient work of figuring out how this partnership actually functions in practice.

    I’m one of those builders. I don’t have all the answers, but I have a working system, a collaborative philosophy that is tested daily, and a growing conviction that the future of AI isn’t autonomy or control. It’s the loop, made better — made smarter — made more human by the partnership itself.

    That’s not as flashy as self-replicating AI. It’s not as dramatic as existential risk, but it’s the right work, and someone needs to say so.

    Michael Fortenberry is a retired software developer and the creator of CRAIN, a collaborative AI system designed around human-AI partnership. He lives in Laurel, New York, where he splits his time between building AI, playing tennis, learning Greek, French, and Spanish; and trying to get back to writing.

  • Seriously Picking Up Guitar

    February 18th, 2024

    I recently decided to start playing guitar. I had played at it in the past, learning a few songs, but never advanced significantly. This time will be different. I am studying the basics, theory, and chords, and getting in some type of practice on a daily basis.

    Given the internet, Google, and the like, I started looking through articles. I find many articles and videos difficult to follow. There is little step-by-step guidance, and by the time you figure out the first comments, you are lost and have to go back and replay over and over. In some cases replaying is not sufficient, largely because the presentation is superficial.

    Searching through the results, I came across Breakthrough Guitar. I immediately liked the thorough step-by-step progression, with each video leading to the next. After several days of mixing Breakthrough Guitar with other videos, Breakthrough Guitar became my preference. There are multiple ways to subscribe: monthly, yearly, and lifetime. The usual warnings that you have only a fixed number of days to make a decision seem unnecessary, and the bonus discounts seem to be always available. I tend to prefer one-time purchases when they make sense, so after about a week I chose the lifetime one-time fee. I found it interesting that even after purchasing this, I continued getting reminders that I had ‘N’ days left to make the decision.

    I learned a tremendous amount in the following week and continue practicing daily. Some highlights: the term ‘action’ with respect to string distance from the fretboard and less trauma to the fingers, lighter-weight strings for the same reason, scale pattern 1, the pentatonic scale, dexterity exercises, and backing tracks. I have also learned how to play scale pattern 1 in various keys based on the first note of the sequence.

    I am more than satisfied so far, as many lessons remain and my subscription brings additional lessons as they are developed and added. I still try other YouTube videos but usually find them difficult to follow compared with the detail in Breakthrough Guitar. I’ll revisit these thoughts periodically to gauge progress as I continue to learn.

    I am working my way through all the lessons. I will have a better overall impression once some time has passed.

  • A Little Diversification

    February 10th, 2024

    In an earlier post I mentioned retirement; this June 17th will make two years. I was very stubborn for the first year and a half, as I wanted to soak up the ability to do only what I wanted. My wife has pushed for diversification, and finally I am ready to listen, out of readiness as well as harmony.

    I used to play guitar years ago, but eventually stopped playing. I have restarted making sure this time to go through study and research as well as daily practice. The chords are coming back and my fingers are slowly getting used to pressing the strings. I learned some new things, such as making sure the ‘action’ was not too high, major patterns, and pentatonic patterns so far. I was pointed to backing tracks so I can practice playing along. I combine this with fingering exercises, chords and dexterity drills.

    I deleted Star Trek Fleet Command as it sucked up half my day and still progress was slow. I was also tempted now and then to quicken the pace by paying $20 for this or that ‘package’. The game is designed to suck you in and pepper you with seemingly good deals that manage to continually tempt purchases. Over and done.

    I continue puzzling; my last two puzzles were 2,000 pieces followed by 1,500. I have chosen the next, a 2,000-piece tall ships harbor view. I plan to Mod Podge the 1,500-piece puzzle, which is a picture highlighting the main tourist sites in London. I purchased a puzzle table whose regular base supports 1,000 pieces, along with an add-on tray that takes up to 2,000 pieces.

    I continue to work on AI. For a while, I was not able to progress past certain points because my MacBook Pro had an Intel i9. Given that AI really needs a GPU, I upgraded to the newest MacBook Pro with the M3 chip, 12 CPU cores, and 40 GPU cores. I have now restarted a book that recounts the step-by-step process of creating a GPT-like AI that handles text and conversation, shows usage of a publicly available LLM, and then demonstrates fine-tuning. This particular book focuses on personal assistants and chatbots. My intention is to create a personal assistant that will additionally be tuned as a house AI.

    My reading during retirement has ranged from Westerns (Zane Grey or the like) to historical fiction (e.g., James Clavell) to historical fact (e.g., The Sack of Constantinople and the miracle at Midway). I have added a daily article or two from Philosophy Today and various articles on AI. I also have a lifetime Babbel subscription, where I concentrate on Spanish and French; the French because I took years of study in secondary school.

    I am adapting to this new diversified list of activities. So far, I am enjoying the expansion.

  • Star Trek Fleet Command: Addiction or Choice

    January 28th, 2024

    A few years ago, I played STFC for a short time. I quickly found that players with greater resources and time in game made it virtually impossible for a new player to progress unmolested.

    Starting this past June 29th, I once again tried the game. The general formula of start from scratch, learn, build, and progress has always appealed to me. I love the space and ship setting and the promise of research toward progress, greater ships, and so on. I was determined to advance much farther than I had previously.

    I joined an alliance looking for helpful comments and the ability to take on larger challenges. It quickly became apparent that STFC is geared toward Player vs Player (PVP) combat, with some accommodations for Player vs Computer (PVE) play. The game is designed to inundate players with a pay-to-win agenda. There are constant advertisements to jump quickly through the ranks. It turns out that the rules of engagement, such as the inability to attack players too far above or below you, are a total sham: I was repeatedly destroyed by players at levels in the 50s and 60s while I was between levels 14 and 35, yet I could not attack players much closer to my own level. The whys and wherefores never added up.

    Additionally, though a daily list delivered certain resource freebies, the glacially slow advancement was improved only by playing for hours on end, with constant promises of freebies that were marginal at best. In spite of all this, I persevered to level 35 and was close to level 36. I had succumbed to limited purchases. It seemed that each level of progress became increasingly hard to reach, with resources grudgingly given in smaller and smaller percentages specifically designed to push purchases.

    At this point, I was accused of addiction to the game. I argued that I decided how much time to spend and only purchased when it was sufficiently productive, and not on a regular basis. I played the game because I loved certain aspects of it and was willing to do the day-to-day grind. There were players in the alliance whose daily chatter was bemoaning the constant grind. There were players who spent consistently and became bullies who attempted to prevent lower-level players from progressing, and in many cases succeeded.

    In the end, I decided to stop playing and delete my account, which came with dire warnings that the decision could not be reversed. PVP could be avoided only by constant play, since shielding was available only if one earned certain resources. Any stop in playing would eventually result in loss of shielding and destruction by PVP players.

    Partly due to the accusations of addiction and partly due to being forced to play to avoid PVP, I decided to delete my account. I will go back to games that do not force such hours and conditions. I would only return to such a game if there were PVE-only servers and rules of engagement that were properly followed. The only similar game I had tried was EVE, but way back when I tried it I still encountered the bully players, and I would likely find it much the same as STFC. For now I have returned to Civilization VI, which I can play whenever I feel like it with no penalties, and Skyrim, which is completely PVE.

    Happy (or Not) Gaming !
