The European Union urgently needs its own major player in the field of artificial intelligence. Its best chance may lie with the German startup Aleph Alpha.
Europe wants a homegrown answer to OpenAI. Politicians in the bloc are tired of trying to police U.S. tech companies from afar, and many are backing Jonas Andrulis, a relaxed German with a neatly trimmed goatee, in the hope that he can help Europe build its own generative AI.
In Europe’s tech bubble, two of the most talked-about artificial intelligence (AI) companies are Mistral, a French startup that has raised $100 million without releasing any products, and Aleph Alpha, a company founded by Andrulis that sells generative AI as a service to companies and governments and already has thousands of paying customers.
Industry doubters wonder whether Aleph Alpha can truly compete with the likes of Google and OpenAI, whose ChatGPT sparked the current boom in generative AI. Even so, many in the European Union are pinning their hopes on Aleph Alpha to challenge U.S. dominance in what many see as a game-changing scientific breakthrough. The bloc has a long history of disagreements with U.S. tech giants over data privacy and security, and the election of Donald Trump struck some Europeans as evidence of how far their values have diverged from those of their counterparts in Washington, DC. Others simply refuse to sit on the sidelines while such a massive financial opportunity is at stake.
Although Andrulis insists Aleph Alpha is not a “nationalist project” (it employs many Americans), the CEO seems at ease leading Europe’s charge. A theme close to his heart is encouraging Europe to “make a contribution beyond the cookie banner.”
Andrulis, now 41, worked on AI for three years at Apple before leaving in 2019 to explore the technology’s potential outside a large organization. He founded Aleph Alpha in Heidelberg, a city in southwestern Germany, where the company set to work developing large language models (LLMs), a form of AI that recognizes patterns in human language in order to generate its own text or analyze massive amounts of documents. Andrulis has hinted that a new funding round will be announced in the coming weeks, one that would dwarf the $27 million Aleph Alpha raised in its previous round two years earlier.
Aleph Alpha’s clients, which include banks and government agencies, are already using its LLMs to generate new financial reports, summarize hundreds of pages of documents, and build chatbots that are experts in the inner workings of a given business. “I think a good rule of thumb is whatever you could teach an intern, our technology can do,” Andrulis says. The difficulty, he says, lies in making the AI adaptable enough that the companies deploying it feel they have some say in how it operates. “If you’re a large international bank and you want to have a chatbot that is very insulting and sarcastic, I think you should have every right.”
Andrulis, however, sees LLMs as merely a stepping stone. What his team is really building, he claims, is “artificial general intelligence,” or AGI: a system meant to emulate human intelligence and be useful in a wide variety of contexts, and one that many companies working in generative AI regard as their ultimate goal.
Jörg Bienert, CEO of the German AI Association, says Aleph Alpha’s 10,000 business and government customers prove the company can compete with the field’s emerging giants, or at least coexist with them. The demand, he says, shows it is worthwhile to build and sell such models in Germany, “especially when it comes to governmental institutions that clearly want to have a solution that is developed and hosted in Europe.”
To meet the needs of clients in highly regulated sectors, such as government and security, Aleph Alpha opened its first data center in Berlin last year. Concern about sending sensitive data overseas is one reason Bienert considers it crucial to advance European AI; another, he says, is making sure European languages are not left out of advances in the technology.
Because Aleph Alpha’s training data included the European Parliament’s vast repository of multilingual public documents, the company’s model can already communicate in German, French, Spanish, Italian, and English. But the pitch is not just that the company’s AI speaks European languages. Aleph Alpha also emphasizes transparency about how its model reaches its answers, part of an effort to combat the tendency of AI systems to “hallucinate,” or confidently share incorrect information.
Andrulis is eager to show off how Aleph Alpha’s model can explain its reasoning. As a demonstration, he asks it about the protagonist of H. P. Lovecraft’s short story “The Terrible Old Man” and receives this description: “The terrible old man is described as exceedingly feeble, physically and mentally.”
Andrulis shows me how he can investigate the AI’s reasoning behind that statement by selecting individual words in the sentence. When he selects “mentally,” the AI takes him to the passage in the story that inspired the word choice. He says the feature works for visual media too: when the AI describes a picture of the sun setting over Heidelberg, the user can select the word “sunset” and watch the AI at work once more, this time drawing a box over the part of the image where the horizon gradually fades into layers of reds and yellows.
It seems novel, even to those well-versed in AI. “They have started experimenting with trustworthy AI features, such as explainability, that I haven’t seen before,” says Nicolas Moës, director of European AI governance at the Future Society think tank.
Moës thinks such features could become more widespread once the European Union passes its AI Act, sweeping legislation that is expected to include transparency requirements. The German AI Association and other business groups worry that overly broad and onerous rules will discourage startups from innovating and instead force them to spend their energy on compliance. Moës argues otherwise: stricter rules, he says, could help European AI companies build better products and establish a kind of quality standard, much as tight regulation has done for other European industries. “The whole testing process is the reason why German cars are seen as better,” he says.
Impressive as the explainability features are, however, questions remain about whether Aleph Alpha’s core technology is advanced enough to carry Europe’s hopes of building an AI giant.
“Anyone who has interacted with a wide range of language models notices that this is not the best model out there,” says Moës.
Matthias Plappert, a former OpenAI researcher turned AI consultant in Berlin, says Aleph Alpha does not outperform its American competitors on the standardized benchmarks companies use to demonstrate the effectiveness of new AI models. Support for the company, he argues, stems from the public’s desire to finally crown a European champion, and in his view its superiority over rivals has been exaggerated.
Many Europeans, however, continue to insist on having a serious challenger, and not only for economic reasons. People in the EU’s artificial intelligence sector maintain that European businesses are more likely than their American competitors to take concerns about privacy and discrimination seriously.
Ask Europeans why they refuse to rely on American-made AI, and the vague term “European values” inevitably comes up. “There is no guarantee that what US [companies] build will be a good representation of our values,” Andrulis says. Asked to explain the phrase, Aleph Alpha’s chief points to the backlash after a photo of Michelangelo’s famous David sculpture was removed from Facebook in 2017 (Facebook told WIRED that its policy now allows paintings and sculptures depicting nudity). “The fact that we [could not] post Michelangelo’s David on Facebook due to nudity, this would not be European values,” he says.
Deciding how those values should be built into AI, however, is not his responsibility, he says. “My role is to build technology that is excellent and that’s transparent and that’s controllable.”