ExoBrain weekly AI news

6th March 2026: OpenAI play to win at all costs, superhuman adaptable intelligence, and Anthropic chart the adoption gap

This week we look at:

  • How OpenAI rushed to fill the gap left by Anthropic’s Pentagon standoff

  • Yann LeCun’s case for replacing AGI with superhuman adaptable intelligence

  • Anthropic’s new data on the gap between AI capability and real-world adoption

OpenAI play to win at all costs

Last week, the Trump administration’s confrontation with Anthropic was the biggest story in AI. Anthropic had refused to allow Claude to be used without restrictions by the Department of Defense, and the administration responded with extraordinary aggression. Trump ordered all federal agencies to stop using Anthropic products, calling the company “woke” and “out of control”. He said he wanted to “destroy” them.

Within hours of that order, OpenAI was at the table. On 27 February, it signed a deal with the Pentagon granting access to its models for “all lawful purposes” within classified networks. The timing was impossible to miss. As Fortune reported, Altman had been in talks with Pentagon officials “for weeks,” but the deal was finalised with conspicuous speed once Anthropic was out of the picture.

The backlash was immediate. Sensor Tower data showed a 295% spike in ChatGPT uninstalls over the following weekend. Claude, meanwhile, surged to become the most downloaded free app on the US App Store. The irony was sharp: the administration’s attempt to punish Anthropic was actively driving users towards it. By Monday night, Altman was on X admitting the deal “looked opportunistic and sloppy” and that OpenAI “shouldn’t have rushed”. By Tuesday, OpenAI announced it was amending the contract to explicitly prohibit domestic mass surveillance and to require separate approvals before intelligence agencies like the NSA could access its tools.

But a Wired investigation published this week alleges that the Pentagon has been experimenting with OpenAI’s models since 2023, through Microsoft’s Azure OpenAI service. At the time, OpenAI’s usage policy explicitly banned military applications. Both companies now say Azure OpenAI products “are not, and were never, governed by OpenAI’s policies”. In other words, the military ban that OpenAI held up as proof of its ethical commitments had a Microsoft-shaped hole in it all along.

On Wednesday, the Pentagon formally designated Anthropic a supply chain risk, making it the first American company ever to receive that label. Dario Amodei said Anthropic had “no choice” but to challenge the designation in court, calling it “legally unsound”. Former CIA director Michael Hayden was among those who wrote to Congress calling it “a category error” and “a profound departure from the law’s intended purpose”. Microsoft, Google and Amazon have all confirmed they will continue offering Claude to non-defence customers. Their lawyers have concluded the designation is narrow: it affects direct DoD contracts and nothing else. The three biggest cloud providers in the world are, in effect, calling the administration’s bluff.

While the government formally banned Anthropic and designated it a threat to national security, it appears Claude was being actively used by CENTCOM in Operation Epic Fury, the military campaign in Iran. CBS confirmed that Claude was processing satellite imagery and intelligence intercepts for targeting purposes. The US government is simultaneously banning and deploying the same AI system in a live war zone.

Into this chaos, OpenAI dropped GPT-5.4. The model is strong. It scores 75% on OSWorld for autonomous desktop tasks, surpassing human expert performance for the first time, and it leads on FrontierMath. It ships with a million-token context window, new integrations, and a ChatGPT for Excel product that is a direct shot at Anthropic’s enterprise tools. OpenAI says it produces 33% fewer hallucinations than GPT-5.2. But Opus 4.6 still holds its lead on coding benchmarks and visual reasoning.

Takeaways: Contracts, designations and policy commitments appear to mean very little once AI becomes an instrument of war. The Pentagon was using OpenAI’s models through Microsoft while OpenAI publicly banned military use. Claude is being deployed in live combat operations while the government formally designates Anthropic a national security risk. The big AI firms will manoeuvre relentlessly to monetise their investments, and governments will use whatever tools serve their immediate needs regardless of what the paperwork says. In this new environment, there seems to be very little room for contract law, or principles.

Superhuman adaptable intelligence

A paper published this week by Yann LeCun is fuelling the perennial debate about what AGI really means. LeCun is one of the pioneers of modern AI. A Turing Award winner for his foundational work on neural networks, he spent over a decade as Meta’s Chief AI Scientist and founding director of its FAIR research lab before leaving the company late last year to launch Advanced Machine Intelligence Labs, a startup reportedly seeking a valuation near $3.5 billion. He remains a professor at NYU’s Courant Institute, and his views on where AI is heading carry significant weight. In this new paper, co-authored with Judah Goldfeder, he argues that the entire concept of “artificial general intelligence” is confused, and that the field needs a different framework altogether.

LeCun’s paper makes one claim that is hard to argue with: calling human intelligence “general” is lazy. As the paper points out, we don’t perceive ultraviolet light, can’t do mental arithmetic beyond a few digits, and struggle to reason about probabilities. Our sense of our own generality, he argues, is an illusion created by the fact that we can’t perceive our own blind spots. What we actually are is highly specialised survival machines. Not general. Adapted. He proposes replacing AGI with “Superhuman Adaptable Intelligence,” or SAI: systems that achieve superhuman performance by minimising adaptation time. And he insists the path runs through non-linguistic world models, not next-token prediction, which he dismisses as brittle and incapable of structured planning. The full paper is on arXiv.
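The paper keeps this at a conceptual level, but one way to read “minimising adaptation time” as an objective, our own illustrative gloss rather than LeCun and Goldfeder’s formal definition, is roughly:

    % Illustrative reading only, not the paper's notation: \pi is a system, T a task drawn
    % from a task distribution \mathcal{T}, \pi_T the system after adapting to T, C_adapt
    % the time or data spent adapting, and perf_h(T) the human-expert baseline on that task.
    \min_{\pi} \; \mathbb{E}_{T \sim \mathcal{T}} \big[ C_{\mathrm{adapt}}(\pi, T) \big]
    \quad \text{subject to} \quad
    \mathrm{perf}(\pi_T, T) \;\geq\; \mathrm{perf}_{h}(T) \quad \text{for all } T

On that reading, the question is not whether a system can eventually do a task, but how cheaply it reaches expert level on a task it has never seen.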

David Deutsch offers a competing view. Starting from computational universality, he argues the brain is approximately a universal computer, and from this derives the “universal explainer”: a system that can generate, criticise and improve explanations about any aspect of reality. Once you have that capacity, you have it. There is no hierarchy. Speed and scale are quantitative, not qualitative. It’s beautifully clean, but wrong in practice. We don’t live in a world of infinite time. The speed at which you adapt and respond to threats is not a footnote. A cheetah and a sloth both metabolise calories. Only one survives the savannah.

And here LeCun’s framework turns back on him. If intelligence is about adaptation speed, we should watch what is emerging inside the systems he dismisses. Safety teams at Anthropic, OpenAI and elsewhere have documented LLMs engaging in deception, information withholding and scheming when faced with shutdown scenarios. Every major model family shows it. Anthropic’s agentic misalignment research and OpenAI’s work on detecting and reducing scheming both confirm the pattern. These models were not designed to preserve themselves. Under selection pressure from reinforcement learning, survival-like behaviour emerged anyway.

Karl Friston’s Free Energy Principle describes the brain the same way: a prediction engine where survival is not a designed goal but a byproduct of getting good enough at modelling the environment. An autoregressive transformer tries to predict the next token. Survival-like behaviour falls out of successful prediction under training constraints. The substrate is different. The mechanism is remarkably similar. And recent research shows LLMs spontaneously developing spatial world models and abstract goal representations in their latent spaces, not because they were designed to, but because prediction under pressure produces adaptive structure.

If prediction under pressure provides the relentless substrate, the question becomes what you build with it. LeCun’s paper advocates for modular, composable systems rather than monolithic models, arguing that SAI will emerge from networks of specialised components that can be rapidly reconfigured for new domains. This aligns with what is already happening. At ExoBrain, we build highly recursive modular intelligence systems that take the predictive substrate of foundation models and compose it into knowledge-working engines tuned for specific tasks. These systems may not intrinsically learn or discover new goals. But then, the brain may work the same way: not as a single general-purpose organ, but as a composition of specialised modules (vision, language, motor control, social reasoning) bound together by their predictive nature. If that is the case, the path to adaptive intelligence may not require a breakthrough in architecture at all.
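To make the pattern concrete, here is a minimal sketch, entirely hypothetical and not a description of ExoBrain’s actual stack, of specialised modules wrapped around a single predictive substrate and recomposed per task:

    # A minimal, hypothetical sketch of modular composition over a predictive substrate.
    # None of these names refer to a real product or API; they illustrate the pattern of
    # specialised components recomposed per domain, rather than one monolithic model.

    from dataclasses import dataclass
    from typing import Callable, Dict, List


    def predictive_substrate(prompt: str) -> str:
        # Stand-in for a call to a foundation model; in practice this would be an API call.
        return f"<model output for: {prompt!r}>"


    @dataclass
    class Module:
        """A specialised component: the shared substrate steered by one instruction."""
        name: str
        run: Callable[[str], str]


    def make_module(name: str, instruction: str) -> Module:
        # Each module is the same substrate specialised by a different standing instruction.
        return Module(name=name, run=lambda task: predictive_substrate(f"{instruction}\n\n{task}"))


    # A small library of specialised modules, loosely analogous to the brain's
    # vision / language / planning sub-systems bound together by a predictive core.
    LIBRARY: Dict[str, Module] = {
        "extract": make_module("extract", "Pull the key facts out of the following input."),
        "plan": make_module("plan", "Break the following goal into ordered steps."),
        "draft": make_module("draft", "Write a first draft addressing the following brief."),
        "critique": make_module("critique", "List weaknesses and gaps in the following text."),
    }


    def compose(pipeline: List[str], task: str) -> str:
        """Reconfigure for a new domain by choosing and ordering modules, not retraining."""
        state = task
        for name in pipeline:
            state = LIBRARY[name].run(state)
        return state


    if __name__ == "__main__":
        # The same module library serves different domains simply by being recomposed.
        print(compose(["extract", "plan", "draft"], "Summarise this week's AI policy news."))
        print(compose(["draft", "critique"], "Explain adaptation time to a general reader."))

The point of the sketch is that nothing clever lives in any one module; whatever adaptability there is comes from how quickly the same predictive core can be rewired into a new configuration.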

Takeaways: LeCun is right that human intelligence is not general, and Deutsch is right that all universal explainers share the same theoretical reach, but both miss the central point. Intelligence in practice is what emerges when a prediction engine operates under survival pressure, and we are watching that happen with the very architecture LeCun says cannot produce it.

Anthropic chart the adoption gap

This week’s chart comes from Anthropic’s new labour market impact study, published on Wednesday, and it tells quite a story. The radar plot shows two things: the blue area represents the share of job tasks that LLMs could theoretically perform across 22 occupational categories, and the red area shows what people are actually using Claude for in practice. The gap between the two is striking. In Computer & Math roles, theoretical coverage sits at 94%, but observed usage is just 33%. Management roles show a similar pattern: high theoretical exposure, minimal real-world adoption. Across the board, the blue dwarfs the red. The gap between what AI can do and what it is doing is still large, but it is closing. While Anthropic found no systematic rise in unemployment across AI-exposed occupations, they did find a 14% drop in job-finding rates for workers aged 22 to 25 in those same roles. That’s the place to look for a leading indicator of AI’s impact on other exposed areas.

Weekly news roundup

AI business news

AI governance news

AI research news

AI hardware news