I cannot state what I do not know, but I can raise questions that deserve consideration.
The accountability gap. In a previous post, I advocated a third perspective on our interactions with AI: that the real benefit of AI is realized through collaboration, where AI provides speed and breadth of knowledge while a human provides judgement drawn from a lifetime of lessons learned. That collaboration is essential to reaching the best possible result. In the combined strike by the US and Israel on Iran this past Sunday, Iran claimed that a school was struck and that students were killed. We may never know what actually happened, or why.
The sourcing problem. In the midst of a conflict, such reports are hard to verify. Each side has a vested interest in propounding its own perspective, including the possibility that data is manufactured to support self-serving political and societal views. There are four possibilities to consider: the strike was intentional; it was a targeting error by humans; it was a targeting error by a human using AI; or it was a targeting error by autonomous AI. In every case, the public has no mechanism for determining what actually happened. Each administration classifies the methodology, and the fog of war provides permanent cover. The first three possibilities are not new, but adding autonomous AI to the targeting chain makes matters worse by introducing the fourth, in which no one fully understands why the system recommended that target. That possibility is a powerful argument for keeping a human in the critical judgement and decision loop. The real question isn’t “did this happen”; it’s “will we ever know how the decisions were made?” Historically, the answer is no. We are still arguing about civilian casualty counts from drone strikes in 2015.
The AI connection. It is not necessary to prove that AI was used in this specific strike. The argument is structural: the pressure to integrate AI into targeting is documented, the pressure on companies to remove guardrails is documented, and no accountability framework has challenged either demand. Whether this particular school was hit by an AI-assisted decision or a human one, the architecture being pushed on businesses guarantees that the question will become unanswerable in the future.
The political timing. Public approval of this administration has been falling steadily, and the decline has been accelerating. Negotiations with Iran seemed to be moving in a positive direction. Consider whether the patterns behind recent decisions and events could be designed to shift focus away from domestic issues and improve midterm positioning. Perhaps the current crisis serves to dodge accountability and create a “wartime president” shield while sweeping the administration’s sinking approval under the carpet.
If we decouple the human element and allow autonomous AI, there will be no one to ask, no way to know which questions to ask, and, in the end, no accountability.
Michael Fortenberry is a retired software developer and the creator of Sage, a collaborative AI system designed around human-AI partnership. He lives in Laurel, New York, where he splits his time between collaborating with Claude and open-source AI, playing tennis, and studying Greek, French, Spanish, and German, while trying to get back to writing.