ExoBrain weekly AI news

28th March 2025: Gemini raises the bar, an insult to art itself, and tracing the thoughts of LLMs

Welcome to our weekly email newsletter, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week:

  • Google's Gemini 2.5 Pro and its bid for AI dominance

  • AI-generated Ghibli-style imagery raising questions of creative ownership

  • Anthropic's latest research into how Claude actually "thinks"

Gemini raises the bar

Coming unexpectedly soon after the release of Gemini 2.0, Google released Gemini 2.5 Pro under its "experimental" banner this week. It was a low-key arrival, with the firm continuing to ship new versions frequently and to avoid the controversies of its past high-profile launches, but it seems the changes are far from incremental. Google appears to have reclaimed a leading position in raw model capability: by most accounts, and by some margin, this is now the most powerful AI model available.

This is a reasoning model that thinks before it responds. Building upon previous Gemini versions, it also maintains native multimodality, handling text, audio, images, video, and code. It features a very large context window, starting at one million tokens with plans to expand, and can output up to 65,000 tokens, meaning it's suited to working with the largest code bases.
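For developers wanting to kick the tyres, here's a minimal sketch using Google's google-generativeai Python SDK. We're assuming the experimental model identifier gemini-2.5-pro-exp-03-25 seen at launch (it may well change as the release matures), and the input file is a hypothetical placeholder:

```python
# Minimal sketch using Google's google-generativeai SDK. The model id
# "gemini-2.5-pro-exp-03-25" is the experimental name at launch and may change.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# The ~1M-token context window means a whole repository can, in principle,
# be passed in a single prompt (here a hypothetical pre-concatenated dump).
with open("codebase_dump.txt") as f:
    codebase = f.read()

response = model.generate_content(
    "Summarise the architecture of this codebase:\n\n" + codebase
)
print(response.text)
```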

Google will hope this release vindicates its deep investments in custom hardware (TPUs), software, and the focused efforts of the reorganised and consolidated Google DeepMind research team. The emphasis on advanced reasoning, coding, and multimodal performance across a massive context window points to highly integrated strengths that will put pressure on the other labs to respond.

But having the 'best' model is only part of the story. The crucial question now is: how do these increasingly powerful models deliver real-world value? There seem to be three main paths:

Chatbots: While 2.5 Pro will power formidable chatbots, this market is crowded. ChatGPT dominates, with others like Claude filling a range of niches. It will remain hard for Google's Gemini app to gain significant share here, and simple chat massively underutilises the model's advanced capabilities.

Google Products: Integrating AI into Search, Workspace (Gmail, Docs), and other Google services holds potential. However, attempts so far, like adding piecemeal chat features or AI summaries, haven't been transformative. Simply "bolting on" AI to classical software is not the best way to harness new power.

Developers: Enabling developers to build new products and accelerate the adoption of agents using these models is key. Google has somewhat improved its developer tooling, but Gemini 2.5 Pro's current "experimental" status means usage limits, no pricing details yet, and limited global availability. It's not ready for production, and it will be several months before we know its actual impact.

Takeaways: Looking ahead, most see the real AI potential in sophisticated agents capable of handling long and complex tasks. At ExoBrain our focus is on building and deploying this new digital workforce. While Gemini 2.5 Pro appears to be the most powerful engine right now, availability and platform limitations will constrain its potential. If this is a high watermark for Google, the model will not be remembered, as others will soon surpass it. But if this release frequency is maintained, pricing is competitive, and developers are fully catered for (perhaps even furnished with new agent-building tools like those from OpenAI), Google could finally start to flex its muscles and dominate. To build transformative agents, developers need faster access to robust, globally available models and better tools. As some voice concerns about demand for the huge expansion in datacentre capacity and compute, models like 2.5 Pro show what's possible, but the demand won't be there until they are truly ready to power the agentic AI revolution.

An insult to art itself

This may look like the warm palettes and soft, detailed linework synonymous with the renowned Japanese animation house Studio Ghibli, but the troubling image is the work of the US Government armed with GPT-4o.

In response to Google last week, OpenAI launched native image generation in ChatGPT, finally unlocking the ‘omni’ capability first demonstrated last year. In the launch stream, Sam Altman and team showed how the model could re-style a photo in an anime aesthetic. This quickly went viral, with numerous examples flooding social media timelines. No use was off-limits, with the White House getting in on the act, generating this Ghibli-style image of a sobbing ICE detainee and posting it to X.

Essentially the work of a beloved creative studio is now being reproduced and remixed at scale, on demand, for any purpose, and without consent. Ghibli founder Hayao Miyazaki is well known for his past views on AI-generated art, calling it an “insult to art itself”. Miyazaki’s animation style, once the product of thousands of hours of hand-drawn effort, can now be applied in seconds. The textures, composition rules, and narrative signals of his work can be reproduced in any context.

To many artists and studios, this new wave of AI mimicry will feel more like appropriation: not the recreation of specific copyrighted material so much as the exploitation of creative identity. Unlike traditional fan art, where homage is filtered through the hand of a human, these AI-generated images operate as high-fidelity impersonations. Google and OpenAI’s model outputs are not reinterpretations; they are stylistic clones. Legal questions aside, the cultural question is whether ‘style’ should be protected in the same way a ‘work’, such as a text or film, might be. Do we owe something to the creators who spent decades evolving a visual language? And what happens when every house style, Ghibli, Aardman, Pixar, Laika, is copied, commercialised, or memed without attribution, and for political and manipulative ends?

The Ghibli moment highlights how generative AI collapses the boundary between inspiration and imitation. It also raises the stakes for platforms and AI companies: in amplifying this kind of mimicry, they’re not just distributing tools, they’re shaping cultural norms.

Takeaways: Widely available native image generation enables deep mimicry that will be commercially and socially impactful. As these tools grow in use, the question of what counts as creative ownership will need to move beyond copyright law. For creators, this may be the beginning of a world where their influence is everywhere, but their control is nowhere.

Tracing the thoughts of LLMs

This week, Anthropic released more research helping us to peek into the minds of AI models. Their "circuit tracing" method works like a brain scanner for LLMs, revealing how the likes of Claude actually think.

The findings challenge what we thought we knew. Rather than simply predicting one word at a time in sequence, Claude plans ahead. When writing poetry, it chooses the rhyming word first, then builds the line backward. It also seems to use a common language of thought rather than sticking to one human language or another.

Meanwhile, Claude uses a rather human approach to maths. Asked to add 36 and 59, it runs two parallel processes: one that estimates "about 90ish" and another that focuses on the last digit of the answer. Intriguingly, if you ask Claude how it does the calculation, it gives the standard textbook answer. Clearly there is an internal thinking process it cannot articulate.
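As a thought experiment, that two-path process can be caricatured in a few lines of Python. This is purely our illustration of the idea as reported, not Anthropic's method and nothing like Claude's real circuitry: a fuzzy magnitude estimate and a precise last-digit check combine to pin down the exact answer.

```python
# Toy caricature of the two parallel paths described in the research —
# an illustration of the idea, not Claude's actual mechanism.
import random

def precise_last_digit(a: int, b: int) -> int:
    """Precise path: track only the final digit of the sum."""
    return (a + b) % 10

def rough_estimate(a: int, b: int) -> int:
    """Approximate path: a fuzzy sense of magnitude ("about 90ish"),
    modelled here as the true sum perturbed by a small error."""
    return a + b + random.randint(-4, 4)

def add_via_two_paths(a: int, b: int) -> int:
    """Combine the paths: the unique value near the rough estimate
    whose last digit agrees with the precise path."""
    digit = precise_last_digit(a, b)
    estimate = rough_estimate(a, b)
    # Any 10-wide window contains exactly one number per last digit,
    # and the true sum always falls inside this one.
    for candidate in range(estimate - 5, estimate + 5):
        if candidate % 10 == digit:
            return candidate
    raise AssertionError("unreachable: the window always contains a match")

print(add_via_two_paths(36, 59))  # 95
```

Neither path alone gets there: the estimate lacks precision and the digit lacks magnitude, but together they fix the answer, which is roughly the interplay the circuit tracing reveals.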

The research also explains why AI sometimes makes things up. Claude has a refusal circuit that is active by default and is only deactivated when the model believes it can answer. When that circuit misfires, switching off despite the model lacking the knowledge, it generates information that is likely to be false. For users and AI engineers, these insights could help build more reliable AI systems with fewer errors and better safety measures.
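A toy sketch of that gating, as we understand the reported finding; the entity list and misfire flag here are illustrative stand-ins, not features of the actual model:

```python
# Toy sketch of the reported refusal gating. Entities and logic are
# illustrative stand-ins, not Claude's actual features or circuitry.
KNOWN_ENTITIES = {"Michael Jordan", "Paris", "photosynthesis"}

def respond(subject: str, recognition_misfires: bool = False) -> str:
    # Default state: the refusal circuit is active.
    refuse = True

    # A "known entity" signal deactivates the refusal circuit.
    if subject in KNOWN_ENTITIES or recognition_misfires:
        refuse = False

    if refuse:
        return "I'm not sure I have reliable information about that."
    # If the signal fired in error, generation proceeds anyway and the
    # output is a confident confabulation.
    return f"Certainly! Here is what I know about {subject}: ..."

print(respond("Michael Batkin"))                             # refuses
print(respond("Michael Batkin", recognition_misfires=True))  # hallucinates
```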

Takeaways: Anthropic are doing incredible work to increase our understanding of AI models. This peek beneath the hood will be essential for creating trustworthy agents and making AI more powerful and more transparent.

Weekly news roundup

This week's news reflects intensifying competition in AI hardware and chips, significant advances in AI research methodologies, and growing tensions around AI governance and regulation globally.

AI business news

AI governance news

AI research news

AI hardware news