Aleph Alpha’s ambitions
Hi everyone, I hope you’re enjoying a long weekend if May 1st is a holiday where you live (sorry, British friends, but I believe you get Monday off instead?). Feel free to share this email with a friend; it really helps 💌 Sign up here.
What can we learn from Aleph Alpha’s trajectory?
In the race to build Europe’s leading large language model (LLM) company, it initially looked like a two-horse race: France had Mistral, Germany had Aleph Alpha.
Of course, it’s a bit more complicated than that, as DeepMind was originally a British company but has been part of Google since 2014. Similarly, Meta/Facebook has always had a large AI research team in Paris with talent that worked on early Llama models, but Facebook isn’t exactly a European startup.
But if you only look at independent AI labs based in Europe, Mistral and Aleph Alpha stood out from the crowd pretty quickly. While Mistral managed to raise capital at a regular pace and is still releasing new models every other month (side note: Mistral just announced Mistral Medium 3.5), Aleph Alpha has had a completely different trajectory.
After raising a $500 million Series B round in 2023, the company stopped work on its own LLMs. It chose to focus on AI integration for large corporate clients and governments.
In hindsight, Aleph Alpha made three linked bets:
- LLMs would become a commodity very quickly.
- Enterprise AI integration looked like the clearest path to revenue.
- Controlled deployments with on-premise AI models would win over API-first models from OpenAI, Anthropic and others, in part due to customer control over data.
Things didn’t exactly pan out as expected. First, since 2024, there’s been a lot of innovation in the LLM space. Sure, open-weight models now lag behind state-of-the-art models by just a few months. But Anthropic and OpenAI are still iterating at a rapid pace. I still believe that LLMs will become a commodity at some point. But when?
Second, Aleph Alpha likely found real revenue with large corporate and public-sector clients, including SAP and Bosch. But that is not where most of the value has accrued so far. The companies benefiting the most from AI are not system integrators. Chipmakers (Nvidia and its suppliers), hyperscalers (AWS, Microsoft Azure and Google Cloud) and large AI labs combining great AI models with great products (OpenAI and Anthropic) are all growing much more rapidly.
It’s been nearly three years since Aleph Alpha’s latest funding round. The German startup wasn’t in a strong position to raise another $500 million to pursue the same strategy.
Hence, last week’s deal with Cohere, the Canadian AI firm. Both companies announced plans to merge. However, the new company will be called Cohere, Cohere’s CEO Aidan Gomez will remain as CEO and “Cohere is going to remain Canadian headquartered and owned,” according to Gomez. So it really sounds like Cohere is absorbing Aleph Alpha more than anything.
Interestingly, a German company played an instrumental role in structuring the deal. That company is Schwarz Group, the parent company behind Lidl and Kaufland.
My theory is that Schwarz Group looked at the margins of its supermarkets (low single digits) and compared them with the margins of AWS (a 35% operating margin, per Amazon’s most recent earnings!). They realized they were in the wrong business…
So Schwarz Group wants to become a cloud company and is “investing” $600 million in Cohere. It expects Cohere to use Schwarz Group’s cloud infrastructure going forward, which should help when it comes to recouping its investment in Cohere. In other words, part of the investment is free rent in exchange for a stake in Cohere.
As for Aleph Alpha, it may end up as a cautionary tale for other mid-sized AI companies. If you’re not pushing the frontier, it’s hard to capture the upside. And if you don’t own the infrastructure, it’s even harder to capture the margins.

The quiet European winner
I mentioned Nvidia and its suppliers as some of the companies that benefited the most from AI in the piece above. Let me expand on that a bit because one of the biggest AI winners does not sell GPUs, cloud credits, or foundation models. It is ASML.
Based in the Netherlands, ASML’s extreme ultraviolet lithography machines are among the most sophisticated pieces of industrial equipment in the world. As this excellent Works in Progress profile from last week explains, these machines use lasers, tin droplets, plasma, and ultra-precise mirrors to print the tiny patterns that make advanced chips possible.
If you prefer a long-form video instead of a long read, check out Veritasium’s YouTube video on ASML.
That sounds abstract until you connect the dots. Nvidia’s success depends heavily on TSMC’s manufacturing prowess. TSMC’s leading-edge processes depend, in turn, on ASML’s continued lithography innovation. That’s how you get tens of billions of transistors packed onto a single chip.
And that’s also why ASML is now worth roughly €470 billion. It is not an AI company in the usual sense, but it is one of the companies benefiting the most from AI.

The billion dollar bet on reinforcement learning
I’m sure you’ve seen some headlines about Ineffable Intelligence, a London-based AI lab that raised a $1.1 billion seed round. The company was founded by David Silver, the former Google DeepMind researcher behind AlphaGo, the AI program that plays the board game Go, and its successors AlphaGo Zero and AlphaZero. Silver has also said that any money he makes from his Ineffable equity will go to charities, which adds an interesting footnote to an already unusual company formation.
But I want to spend a minute on Ineffable Intelligence’s technical bet: reinforcement learning. In simple terms, reinforcement learning is when an AI system learns by doing. It takes actions in an environment, gets feedback, and gradually improves its strategy. It is less about copying human examples and more about discovering what works through trial and error.
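That loop — act, get feedback, improve — can be sketched in a few lines. Here is a minimal, hypothetical illustration using a multi-armed bandit, one of the simplest reinforcement learning settings (it is in no way Ineffable Intelligence’s actual method): the agent starts knowing nothing about which of three actions pays off and learns purely from rewards.

```python
import random

random.seed(42)

# Hidden from the agent: each action pays off with a different probability.
PAYOFF_PROBS = [0.2, 0.5, 0.8]

estimates = [0.0, 0.0, 0.0]  # the agent's learned value of each action
counts = [0, 0, 0]           # how often each action has been tried
EPSILON = 0.1                # exploration rate

def choose_action():
    # Mostly exploit the best-known action, but sometimes explore at random.
    if random.random() < EPSILON:
        return random.randrange(3)
    return max(range(3), key=lambda a: estimates[a])

for _ in range(5000):
    action = choose_action()
    # Feedback from the environment: a reward of 1 or 0.
    reward = 1.0 if random.random() < PAYOFF_PROBS[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(3), key=lambda a: estimates[a])
print("learned estimates:", [round(e, 2) for e in estimates])
print("best action found:", best)  # should converge to the 0.8 arm
```

No human examples are involved anywhere: the agent discovers the best action by trying things and keeping score, which is the core idea behind everything from AlphaGo’s self-play to modern RL pipelines.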
That is what made AlphaGo (and more specifically AlphaGo Zero, which taught itself entirely without data from human games) so interesting, as my Go-playing friend Matthieu once told me. It did not just imitate human Go players. It found moves that humans had not seriously considered.
If you’ve paid attention to AI model training techniques, you know that big AI labs like Anthropic and OpenAI already use reinforcement learning. In most current frontier LLMs, reinforcement learning is a post-training step. The model first learns from huge amounts of human-generated text, code and synthetic data. Then reinforcement learning is used to make it more helpful, safer, better at reasoning, or better at following preferences.
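To make the “post-training step” concrete, here is a toy, hypothetical miniature of the idea (not any lab’s actual pipeline): a “policy” picks one of three canned responses, a stand-in reward signal scores the pick, and a REINFORCE-style update nudges the policy toward higher-scoring responses. Real post-training operates on token sequences with enormous models, but the spirit of the update is the same.

```python
import math
import random

random.seed(0)

REWARDS = [0.1, 0.9, 0.4]  # stand-in "reward model" score per response
logits = [0.0, 0.0, 0.0]   # policy parameters, start with no preference
LR = 0.1                   # learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

baseline = 0.0  # running average of rewards, reduces update variance
for _ in range(3000):
    probs = softmax(logits)
    # Sample a response from the current policy.
    r = random.random()
    action = len(probs) - 1
    for i, p in enumerate(probs):
        r -= p
        if r < 0:
            action = i
            break
    reward = REWARDS[action]
    baseline += 0.01 * (reward - baseline)
    advantage = reward - baseline
    # REINFORCE: gradient of log-prob of the chosen action is
    # (one-hot indicator) minus (current probabilities).
    for i in range(3):
        grad = (1.0 if i == action else 0.0) - probs[i]
        logits[i] += LR * advantage * grad

final = softmax(logits)
print("final policy:", [round(p, 2) for p in final])
```

After training, the policy concentrates on the highest-reward response, even though it was never shown a “correct” answer, only scores. Swap the canned responses for generated text and the score table for a learned reward model, and you have the rough shape of reinforcement learning as a fine-tuning step.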
Ineffable Intelligence’s pitch is more radical. It wants to use reinforcement learning from the very beginning and ditch training data sets altogether. Much like a baby learning a new language from scratch, the system would start without any “pre-loaded information” and learn purely from experience in a specific environment.
The bet is that the next AI breakthrough will not come from reading more of the internet, but from building better worlds for machines to learn in.
Have a good day ☀️
Romain