
The AI Gatekeepers


Hi everyone, I hope your week has been nice so far. This week, I ended up writing a lot about sovereignty, a word that means a lot of different things. So I’m tackling it from different angles. Feel free to share this email with a friend; it really helps 💌 Sign up here.

It took me a while to understand what Anthropic actually meant when it said the release of Claude Mythos would be limited to a select group of companies and organizations for national security reasons.

That sounds reasonable. Cybersecurity is inherently dual-use. The same model that can find a critical vulnerability can help exploit it. But the distribution method quickly becomes political.

There’s a wider point, though. This precedent is going to greatly affect Anthropic’s (and OpenAI’s) distribution strategy going forward. And it’s concerning for European companies and developers.

As a reminder, with Project Glasswing, Anthropic gave early access to a small group of large technology and security organizations, including Apple, Google, Microsoft, CrowdStrike and JPMorganChase. The company says Mythos has already found thousands of zero-day vulnerabilities.

Axios reported that the NSA is also using Mythos Preview. The agency isn’t just using Mythos to ensure that there are no security vulnerabilities in its systems. Mythos is “being used more widely within the department,” as Axios put it.

And yet, Europe was mostly left out of the Mythos loop, as Next rightly pointed out. While the U.K.’s AI Security Institute tested the model, cybersecurity agencies on the continent appear to have had little or no hands-on access.

Meanwhile, the gatekeeping already looks inconsistent. Per Bloomberg, unauthorized users have even started playing with Mythos. As my former colleague Zack Whittaker put it: “Anthropic spent weeks claiming that it couldn't publicly release its Mythos AI model because of its alleged offensive hacking capabilities and… some AI nerds from Discord just found it and accessed it.” (Subscribe to Zack’s This Week in Cybersecurity newsletter, it’s good.)

To be fair to Anthropic, it isn’t the only AI gatekeeper in town. OpenAI has adopted a similar strategy with the more permissive version of GPT-5.4, called GPT-5.4-Cyber. It has created the “Trusted Access for Cyber (TAC) program,” a fancy way of saying that the company gets to decide who deserves access to GPT-5.4-Cyber and who doesn’t.

Both companies are building verification flows, partner programs and government relationships around models that are too useful to hide and too risky to simply ship.

And just like pharmaceutical companies put a long list of side effects on medication labels, this new distribution strategy is also a way to shift the blame down to partners for potential misuse.

But what is going to happen when security vendors, banks and critical software companies have fixed all the vulnerabilities identified by Mythos? Is Anthropic going to release it publicly as an API endpoint or expand the “preview release” to a larger group of American companies?

Speaking of AI sovereignty, Forbes’ Iain Martin (hi Iain!) wrote an interesting profile of Mistral last week with a useful update. Mistral is no longer positioning itself as the company that will compete with OpenAI and Anthropic on benchmarks.

Instead, the company’s pitch is built around control and sovereignty: European-built, open-weight models that can run in your own cloud, with data that stays in the right geography.

An early-stage VC recently told me that sovereignty is often plan B for a startup narrative. Plan A is trying to build a world-class company. Plan B is when you start saying “but we’re European!” It’s a great pitch for customers in Europe. But American or Asian companies don’t care about European sovereignty. So you gain something and you lose something.

In Mistral’s case, that strategy is working out pretty well. According to Forbes, Mistral generated $200 million in revenue in 2025 and is “on track to start making around $80 million monthly by December” (what a weird metric, I could also say that I’m on track to generate $79 million monthly, just below Mistral). The company is now relying heavily on forward-deployed engineers working with large corporate clients.

So Mistral is a real business, even if some Silicon Valley people will dismiss it as a very expensive systems integrator with a French accent.

Ok, for real this time, 2026 might be the year of the Linux Desktop.

France’s DINUM announced that it plans to move some government computers from Microsoft Windows to Linux. I know, don’t try to fix something that isn’t broken…

But, as Tariq Krim writes in his newsletter, governments and companies should prepare for a (technological) decoupling between the U.S. and Europe because, given the current geopolitical landscape, it seems like a real possibility.

The implementation details are more interesting than the announcement. The French government’s Sécurix project is publicly available on GitHub. It is based on NixOS, supports hardware keys like YubiKeys, and is designed so administrators can write configurations once and roll them out to thousands of computers so that everybody gets the same thing.
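If you’ve never seen NixOS, the appeal is that an entire machine is described in one declarative file, and rebuilding from that file always produces the same system. Here’s a minimal sketch of what such a workstation module could look like, using standard NixOS options. To be clear, this is illustrative only, not the actual Sécurix configuration:

```nix
# Hypothetical workstation module (not the real Sécurix config).
# Every machine built from this file converges to the same system,
# which is what makes fleet-wide rollouts tractable.
{ config, pkgs, ... }:
{
  # Same application set on every workstation.
  environment.systemPackages = with pkgs; [ firefox libreoffice ];

  # Hardware-key (e.g. YubiKey) login via PAM/U2F,
  # plus the smartcard daemon.
  security.pam.u2f.enable = true;
  services.pcscd.enable = true;

  # Unattended upgrades: push a new configuration and
  # every machine in the fleet converges to it.
  system.autoUpgrade.enable = true;
}
```

The workflow this enables is roughly what DINUM is describing: an administrator changes the file in a Git repository, a new system is built from it, and thousands of machines pick it up, with no per-machine drift to debug.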

This is not “let’s install Ubuntu on a few laptops and hope everyone enjoys LibreOffice.” It is an attempt to make secure, reproducible government workstations boring enough to deploy at scale. And I’m sure other European countries will pay attention to this implementation.

The funny thing about Cursor is that it started as an IDE story and is now turning into an infrastructure story.

On Tuesday, SpaceX announced plans to acquire Cursor for $60 billion later this year. If SpaceX doesn’t exercise its right to acquire Cursor, it will have to pay a $10 billion breakup fee.

But before we dive deeper into the deal: SpaceX, the rocket company? If you missed previous episodes, Elon Musk’s SpaceX acquired Elon Musk’s xAI. (And xAI, in turn, owns X.) So it’s all the same company now.

It’s a highly unusual deal for a highly unusual situation. Cursor was a trailblazer in AI-assisted coding with its fork of VS Code. What made Cursor particularly successful is that users could switch between AI models from Anthropic, OpenAI, Google and more — the best model was always just one switch away.

In other words, Cursor made a bet on the application layer. AI coding was going to be a product problem, not an AI model problem. When the big AI labs turned their attention to agentic coding, Cursor became less relevant in the AI conversation.

The company knew it had to train its own AI models and redesign its product around agentic coding. And this pivot seems to be well underway. Cursor’s Composer seems like a capable agentic coding model. But it’s not the best one. And if Cursor wants to stay in the race with OpenAI and Anthropic, it needs compute, and it needs it now.

Cursor’s announcement of the deal with SpaceX is surprisingly short (100 words!) but right on point. “Cursor is partnering with SpaceX to accelerate our model training efforts,” it reads. “Each step up in compute has translated to meaningfully more capable models. We’ve wanted to push our training efforts much further, but we’ve been bottlenecked by compute. With this partnership, our team will leverage xAI’s Colossus infrastructure to dramatically scale up the intelligence of our models.”

(It’s a great lesson in how you should communicate when the news is so big that people will dissect every word that you publish on your blog. In that case, shorter is better. Deliver the important message and hit publish.)

As for SpaceX/xAI, the company has tons of GPUs, and Musk wants to take it public this year. It needs to prove that xAI’s Colossus data center was built for a good reason. Meanwhile, xAI doesn’t have any agentic coding product, while Cursor is already installed on millions of laptops. It’s a GPU + distribution story.

That is an uncomfortable lesson for Europe. Many European AI startups have chosen to stay model-agnostic, building on top of U.S. labs’ models to focus on the application layer. (Or they develop models without any plans to build data centers.) The sovereign compute question is no longer just for foundation-model labs. It is coming for every serious AI application company, too.

As it stands, if the next $100 billion AI companies are decided by who controls GPUs and distribution, European companies are set up to lose.

But I’m always optimistic, so I know that somewhere in Europe, there’s someone currently working on the next big thing. They just created the Git repository for this new project and it’s going to be magnificent ✨

Have a good day ☀️
Romain