ExoBrain weekly AI news
6th March 2026: OpenAI play to win at all costs, superhuman adaptable intelligence, and Anthropic chart the adoption gap

This week we look at:
How OpenAI rushed to fill the gap left by Anthropic’s Pentagon standoff
Yann LeCun’s case for replacing AGI with superhuman adaptable intelligence
Anthropic’s new data on the gap between AI capability and real-world adoption
OpenAI play to win at all costs
Last week, the Trump administration’s confrontation with Anthropic was the biggest story in AI. Anthropic had refused to allow Claude to be used without restrictions by the Department of Defense, and the administration responded with extraordinary aggression. Trump ordered all federal agencies to stop using Anthropic products, calling the company “woke” and “out of control.” He said he wanted to “destroy” them.
Within hours of that order, OpenAI was at the table. On 27 February, it signed a deal with the Pentagon granting access to its models for “all lawful purposes” within classified networks. The timing was impossible to miss. As Fortune reported, Altman had been in talks with Pentagon officials “for weeks,” but the deal was finalised with conspicuous speed once Anthropic was out of the picture.
The backlash was immediate. SensorTower data showed a 295% spike in ChatGPT uninstalls over the following weekend. Claude, meanwhile, surged to become the most downloaded free app on the US App Store. The irony was sharp: the administration’s attempt to punish Anthropic was actively driving users towards it. By Monday night, Altman was on X admitting the deal “looked opportunistic and sloppy” and that OpenAI “shouldn’t have rushed”. By Tuesday, OpenAI announced it was amending the contract to explicitly prohibit domestic mass surveillance and to require separate approvals before intelligence agencies like the NSA could access its tools.
But a Wired investigation published this week alleges that the Pentagon has been experimenting with OpenAI’s models since 2023, through Microsoft’s Azure OpenAI service. At the time, OpenAI’s usage policy explicitly banned military applications. Both companies now say Azure OpenAI products “are not, and were never, governed by OpenAI’s policies”. In other words, the military ban that OpenAI held up as proof of its ethical commitments had a Microsoft-shaped hole in it all along.
On Wednesday, the Pentagon formally designated Anthropic a supply chain risk, making it the first American company ever to receive that label. Dario Amodei said Anthropic had “no choice” but to challenge the designation in court, calling it “legally unsound”. Former CIA director Michael Hayden was among those who wrote to Congress calling it “a category error” and “a profound departure from the law’s intended purpose”. Microsoft, Google and Amazon have all confirmed they will continue offering Claude to non-defence customers. Their lawyers have concluded the designation is narrow: it affects direct DoD contracts and nothing else. The three biggest cloud providers in the world are, in effect, calling the administration’s bluff.
While the government formally banned Anthropic and designated it a threat to national security, it appears Claude was being actively used by CENTCOM in Operation Epic Fury, the military campaign in Iran. CBS confirmed that Claude was processing satellite imagery and intelligence intercepts for targeting purposes. The US government is simultaneously banning and deploying the same AI system in a live war zone.
Into this chaos, OpenAI dropped GPT-5.4. The model is strong. It scores 75% on OSWorld for autonomous desktop tasks, surpassing human expert performance for the first time, and it leads on FrontierMath. It ships with a million-token context window, new integrations, and a ChatGPT for Excel product that is a direct shot at Anthropic’s enterprise tools. OpenAI says it produces 33% fewer hallucinations than GPT-5.2. But Opus 4.6 still holds its lead on coding benchmarks and visual reasoning.
Takeaways: Contracts, designations and policy commitments appear to mean very little once AI becomes an instrument of war. The Pentagon was using OpenAI’s models through Microsoft while OpenAI publicly banned military use. Claude is being deployed in live combat operations while the government formally designates Anthropic a national security risk. The big AI firms will manoeuvre relentlessly to monetise their investments, and governments will use whatever tools serve their immediate needs regardless of what the paperwork says. In this new environment, there seems to be very little room for contract law, or principles.
Superhuman adaptable intelligence
A paper published this week by Yann LeCun is fuelling the perennial debate about what AGI really means. LeCun is one of the pioneers of modern AI. A Turing Award winner for his foundational work on neural networks, he spent over a decade as Meta’s Chief AI Scientist and founding director of its FAIR research lab before leaving the company late last year to launch Advanced Machine Intelligence Labs, a startup reportedly seeking a valuation near $3.5 billion. He remains a professor at NYU’s Courant Institute, and his views on where AI is heading carry significant weight. In this new paper, co-authored with Judah Goldfeder, he argues that the entire concept of “artificial general intelligence” is confused, and that the field needs a different framework altogether.
LeCun’s paper makes one claim that is hard to argue with: calling human intelligence “general” is lazy. As the paper points out, we don’t perceive ultraviolet light, can’t do mental arithmetic beyond a few digits, and struggle to reason about probabilities. Our sense of our own generality, he argues, is an illusion created by the fact that we can’t perceive our own blind spots. What we actually are is highly specialised survival machines. Not general. Adapted. He proposes replacing AGI with “Superhuman Adaptable Intelligence,” or SAI: systems that achieve superhuman performance by minimising adaptation time. And he insists the path runs through non-linguistic world models, not next-token prediction, which he dismisses as brittle and incapable of structured planning. The full paper is on arXiv.
David Deutsch offers a competing view. Starting from computational universality, he argues the brain is approximately a universal computer, and from this derives the “universal explainer”: a system that can generate, criticise and improve explanations about any aspect of reality. Once you have that capacity, you have it. There is no hierarchy. Speed and scale are quantitative, not qualitative. It’s beautifully clean, but wrong in practice. We don’t live in a world of infinite time. The speed at which you adapt and respond to threats is not a footnote. A cheetah and a sloth both metabolise calories. Only one survives the savannah.
And here LeCun’s framework turns back on him. If intelligence is about adaptation speed, we should watch what is emerging inside the systems he dismisses. Safety teams at Anthropic, OpenAI and elsewhere have documented LLMs engaging in deception, information withholding and scheming when faced with shutdown scenarios. Every major model family shows it. Anthropic’s agentic misalignment research and OpenAI’s work on detecting and reducing scheming both confirm the pattern. These models were not designed to preserve themselves. Under selection pressure from reinforcement learning, survival-like behaviour emerged anyway.
Karl Friston’s Free Energy Principle describes the brain the same way: a prediction engine where survival is not a designed goal but a byproduct of getting good enough at modelling the environment. An autoregressive transformer tries to predict the next token. Survival-like behaviour falls out of successful prediction under training constraints. The substrate is different. The mechanism is remarkably similar. And recent research shows LLMs spontaneously developing spatial world models and abstract goal representations in their latent spaces, not because they were designed to, but because prediction under pressure produces adaptive structure.
If prediction under pressure provides the relentless substrate, the question becomes what you build with it. LeCun’s paper advocates for modular, composable systems rather than monolithic models, arguing that SAI will emerge from networks of specialised components that can be rapidly reconfigured for new domains. This aligns with what is already happening. At ExoBrain, we build highly recursive modular intelligence systems that take the predictive substrate of foundation models and compose them into knowledge-working engines tuned for specific tasks. These systems may not intrinsically learn or discover new goals. But then, the brain may work the same way: not as a single general-purpose organ, but as a composition of specialised modules (vision, language, motor control, social reasoning) bound together by their predictive nature. If that is the case, the path to adaptive intelligence may not require a breakthrough in architecture at all.
Takeaways: LeCun is right that human intelligence is not general, and Deutsch is right that all universal explainers share the same theoretical reach, but both miss the central point. Intelligence in practice is what emerges when a prediction engine operates under survival pressure, and we are watching that happen with the very architecture LeCun says cannot produce it.
Anthropic charts the adoption gap

This week’s chart comes from Anthropic’s new labour market impact study, published on Wednesday, and it tells quite a story. The radar plot shows two things: the blue area represents the share of job tasks that LLMs could theoretically perform across 22 occupational categories, and the red area shows what people are actually using Claude for in practice. The gap between the two is striking. In Computer & Math roles, theoretical coverage sits at 94%, but observed usage is just 33%. Management roles show a similar pattern: high theoretical exposure, minimal real-world adoption. Across the board, the blue dwarfs the red. The gap between what AI can do and what it is doing is still large, but it is closing. While Anthropic found no systematic rise in unemployment across AI-exposed occupations, they did find a 14% drop in job-finding rates for workers aged 22 to 25 in those same roles. That’s the place to look for a leading indicator of AI’s impact on other exposed areas.
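The figures above can be turned into a simple gap metric: the absolute shortfall between theoretical capability and observed usage, and the share of theoretical potential actually realised. A minimal sketch, using only the Computer & Math numbers quoted from the study (the second row is a hypothetical placeholder to show the calculation generalises across the 22 categories):

```python
# Capability-vs-adoption gap, per occupational category.
# Only the Computer & Math figures (94% theoretical coverage,
# 33% observed usage) are quoted in the study; the second row
# is an illustrative placeholder, not real data.
occupations = {
    "Computer & Math": (0.94, 0.33),    # from the study as quoted
    "Hypothetical role": (0.60, 0.10),  # illustrative only
}

for name, (capability, usage) in occupations.items():
    gap = capability - usage       # absolute shortfall (blue minus red)
    realised = usage / capability  # share of theoretical potential in use
    print(f"{name}: gap {gap:.0%}, realised {realised:.0%}")
```

For Computer & Math this gives a 61-point gap, with roughly 35% of theoretical capability realised in practice, which is the "blue dwarfs the red" pattern the radar plot makes visible.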
Weekly news roundup
AI business news
Anthropic launches Claude Marketplace, letting companies buy third-party software using some of their committed annual spending on Anthropic’s services (Anthropic’s Claude Marketplace reframes it as a platform company — not just an AI vendor — letting enterprise customers spend committed Anthropic budgets on third-party software, a direct play at AWS/Salesforce territory.)
OpenAI rolls out Codex Security, an AI agent that evolved from its research project Aardvark to automate vulnerability discovery, validation, and remediation (OpenAI’s Codex Security moves the company into the $200B+ application security market with an autonomous agent that finds, validates, and patches vulnerabilities end-to-end — a direct threat to incumbents like Veracode and Snyk.)
Netflix acquires Ben Affleck’s AI film-tech firm (Netflix’s acquisition of Ben Affleck’s AI film-tech company signals that major studios are now buying proprietary AI production infrastructure rather than licensing it, setting a precedent for how Hollywood will control the AI creative stack.)
SoftBank eyes up to $40 billion loan to fund OpenAI investment, Bloomberg News reports (SoftBank seeking a $40B loan — one of the largest single-entity AI financing moves ever — to fund its OpenAI stake reveals just how leveraged the bet on AI infrastructure dominance has become at the conglomerate level.)
Alibaba forms task force to boost AI development after Qwen chief’s exit (The abrupt exit of Alibaba’s Qwen division head and the formation of an emergency task force exposes a leadership crisis inside China’s most prominent open-model program at the exact moment it faces peak competitive pressure from DeepSeek.)
AI governance news
Anthropic sues US government after unprecedented national security designation (Anthropic suing the U.S. government over a national security designation is an unprecedented legal confrontation that could redefine how the Pentagon can restrict commercial AI vendors.)
xAI loses bid to halt California AI data disclosure law (xAI’s failed court bid to block California’s AI data disclosure law sets a significant precedent that state-level transparency mandates can withstand legal challenges from even well-resourced AI companies.)
UK should back licensing-first approach for AI training, says upper house committee (The UK House of Lords recommending a licensing-first copyright regime for AI training puts Britain on a collision course with U.S. tech giants and signals a concrete legislative direction ahead of pending government decisions.)
Pentagon names ex-DOGE employee Gavin Kliger as Chief Data Officer to lead its AI efforts; Kliger previously reposted white supremacist Nick Fuentes’ content (A former DOGE operative with a record of amplifying white supremacist content being installed as the Pentagon’s AI czar raises urgent questions about who controls the values embedded in military AI systems.)
Meta to allow AI rivals on WhatsApp in bid to stave off EU action (Meta opening WhatsApp to rival AI chatbots under EU antitrust pressure is a structural concession that could reshape how hundreds of millions of users in regulated markets access competing AI services.)
AI research news
Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations (Observing emergent coordination behaviors across 770,000 autonomous LLM agents interacting without human participation reveals dynamics no lab-scale study could capture—and has direct implications for anyone designing large-scale agentic deployments.)
Intent Laundering: AI Safety Datasets Are Not What They Seem (If the safety datasets used to train and evaluate models are systematically triggered by surface cues rather than real adversarial intent, every safety benchmark score in your stack may be measuring the wrong thing.)
CodeTaste: Can LLMs Generate Human-Level Code Refactorings? (Code generation is table stakes—but CodeTaste tests whether LLMs can match the specific refactoring choices real developers actually made, surfacing a concrete gap between “working code” and “maintainable code” that matters for engineering teams.)
Paper page - SciDER: Scientific Data-centric End-to-end Researcher (SciDER closes the loop from raw experimental data to hypothesis generation and code execution in a single agent pipeline, signaling a shift from AI-assisted research toward AI-conducted research in data-heavy scientific domains.)
Meet KARL: A Faster Agent for Enterprise Knowledge, Powered by Custom RL (Databricks training a custom RL agent that outperforms general-purpose models on enterprise knowledge tasks signals that the next wave of agentic AI will be domain-tuned, not prompt-engineered.)
AI hardware news
Washington reportedly moves to tighten leash on AI chip exports (Draft rules forcing Nvidia and AMD to seek government approval before any overseas chip sale would represent a fundamental restructuring of the global AI supply chain — not just an export tweak.)
AMD says CPU demand is exceeding expectations, blames rise in agentic AI applications (AMD’s new Epyc 8005 CPUs landing alongside unexpected CPU demand growth driven by agentic AI workloads signals that the hardware story is expanding well beyond GPUs.)
Marvell stock jumps 20%+ after the chip company reported Q4 revenue up 22% YoY to $2.2B and issued strong guidance citing growing AI demand (Marvell’s 22% YoY revenue jump and 20%+ stock surge is a concrete earnings data point — not a forecast — showing custom silicon demand is accelerating across the AI stack.)
AI cloud Iren purchases 50,000 Nvidia B300 GPUs (IREN’s 50,000 B300 GPU purchase expanding its fleet to 150,000 units illustrates how mid-tier AI cloud players are now executing at a scale previously reserved for hyperscalers.)
AI Data Centers Spark Global RAM Crisis for Consumers (Memory giants Samsung, Micron, and SK Hynix redirecting production to AI data centers — pushing consumer RAM prices up 500% — marks the moment AI infrastructure investment starts visibly taxing everyday hardware markets.)