Europe’s Mythos problem is really a security problem | FOMO Daily
Europe’s Mythos problem is really a security problem
Joachim Nagel’s call for wider access to Anthropic’s Mythos is really a warning about uneven cyber defence, concentrated AI power, and the risk of leaving key institutions outside the defensive perimeter. Europe’s answer will need to combine access, regulation, infrastructure, and real operational readiness rather than relying on any one of them alone.
On April 21, 2026, Bundesbank president Joachim Nagel used a speech in Rome to make a point that matters far beyond one model launch. He said Mythos appears capable of quickly identifying and exploiting security vulnerabilities in financial institutions’ software, warned that it could be used both to strengthen digital security and to exploit weaknesses for malicious purposes, and argued that all relevant institutions should have access to such technology to avoid competitive distortions. That is the heart of this story. This is not just a warning about a flashy frontier model. It is a warning about an uneven security landscape, where a few institutions may get a defensive head start while the rest of the system is left exposed. Reuters’ reporting on the speech carried the same message: Nagel wants banking authorities to prevent misuse while also keeping the playing field even. Taken together, that reads less like a call for mass public release and more like a call for controlled, fair, institutional access.
Why Mythos has rattled banks
The concern is easy to understand once you look at what Anthropic itself says Mythos Preview can do. In its April 7 research note, Anthropic said the model was capable of identifying and exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed to do so. The company also said it had identified thousands of additional high- and critical-severity vulnerabilities and that Mythos had written exploits in hours that expert penetration testers said could have taken weeks to develop. Anthropic’s broader argument is that powerful models will probably help defenders more than attackers in the long run, but that the transition period could be rough and dangerous if releases are handled badly. That is an important point. A model like this does not just make cyber work faster. It changes who can find bugs, how quickly they can chain vulnerabilities together, and how small the time window becomes between discovery and exploitation. For a banking sector still running plenty of legacy systems, that is not a theoretical issue. It is a live operational risk.
Why limited access changes the market
Anthropic has not released Mythos openly. It launched the model through Project Glasswing, which Anthropic describes as an initiative to secure critical software by giving early access to organisations responsible for infrastructure billions of people depend on. The launch partners include major technology and security companies along with JPMorganChase, and Anthropic says it also extended access to more than 40 additional organisations that build or maintain critical software infrastructure. Reuters reported that large U.S. banks moved early into testing while many others were still trying to catch up, and that Anthropic was planning to provide access to European banks soon, with timelines described as days or weeks rather than months. That limited rollout makes sense from a safety perspective, but it also creates a new kind of competitive asymmetry. If some firms can see what the model sees before others can, they may get earlier visibility into hidden weaknesses, earlier chances to patch, and earlier experience learning how frontier cyber models behave in the real world. What this really means is that access itself becomes part of resilience. In that environment, Nagel’s complaint starts to look practical rather than political.
There is also a deeper systemic reason central bankers care. The European Central Bank warned in 2024 that if AI becomes widely used in finance while supplier concentration stays high, institution-level risks can scale into system-level risks. The ECB pointed to supplier concentration, cyber risk, overreliance, herding, market correlation, and single-point-of-failure problems as the kinds of pressures that can emerge when many institutions depend on the same or similar tools. Nagel made a closely related point in Rome. He said AI could create financial stability risks through supplier concentration, herding behaviour, increased market correlation, and cyber and operational risks when similar models are used in critical processes. He also warned that if many banks rely on the same AI providers or similar underlying models, their credit risk assessments could become too closely aligned. That is why the Mythos debate should not be framed as a simple argument over access to a clever product. It is a debate about whether the next layer of financial infrastructure will be both concentrated and unevenly distributed. That combination is exactly where regulators start to worry.
Europe is not only behind on access
Europe’s problem is not just that some U.S. institutions got access first. The bigger issue is that Europe is still trying to close several AI gaps at once. In the same speech, Nagel said U.S.-based institutions produced 40 notable AI models in 2024, compared with 15 in China and only 3 in Europe. He also said U.S. private AI investment reached $109.1 billion, compared with $19.4 billion in Europe. At the same time, Europe is not standing still. The European Commission’s AI Continent agenda says InvestAI is meant to mobilise €200 billion, including €20 billion for AI gigafactories, while the broader plan also points to AI factories, data access, skills, and support infrastructure. The Commission’s newer Apply AI Strategy says the goal is not only regulation but stronger competitiveness, sector adoption, and technological sovereignty. What this really means is that Europe already knows its challenge is structural. It needs more compute, more deployment capacity, more talent, more practical adoption, and more confidence that its rules can support growth instead of smothering it. The Mythos debate simply throws that challenge into sharper focus.
Regulation helps, but it does not solve the whole problem
Europe’s regulatory framework is more mature than most, but regulation alone will not protect a banking sector from a model that can find and exploit software flaws at machine speed. The AI Act entered into force on August 1, 2024 and is being phased in through August 2, 2027. According to the Commission’s AI Continent Action Plan, the general provisions and prohibitions began applying on February 2, 2025, the rules on general-purpose AI models apply from August 2, 2025, and the wider rules covering high-risk systems, transparency, and measures supporting innovation take effect on August 2, 2026. The same plan says national AI regulatory sandboxes should be operational by August 2026 and that the AI Office is meant to provide more practical compliance help. That is useful and necessary. But compliance frameworks move on one timeline and adversarial capability moves on another. A bank can be fully committed to governance and still be blind to a vulnerability a frontier cyber model can see in hours. So the best reading of Nagel’s position is not that Europe needs less regulation. It is that Europe needs regulation plus credible defensive capability plus fair institutional access to the tools shaping the threat environment.
Adoption is rising, but the hard part is still ahead
There is some good news for Europe in the background. Eurostat said that in 2025, 20% of EU enterprises with at least 10 employees used AI technologies, up from 13.5% in 2024. That tells us adoption is no longer a fringe story. It is becoming normal business infrastructure. But adoption statistics can be misleading if they hide the gap between ordinary enterprise AI use and frontier capability. Using AI for text analysis, content generation, or workflow automation is one thing. Operating at the edge of cyber offence and defence is something else entirely. That is where the Mythos argument bites. Europe can show healthy adoption rates and still find that the most strategically important layer of AI remains dominated by a small number of foreign firms and a small number of early-access partners. In other words, more companies using AI does not automatically mean Europe is secure, sovereign, or resilient in the parts of the stack that matter most under stress. This is where things change. The question stops being, “Are firms using AI?” and becomes, “Who controls the most consequential AI capabilities, and who gets to test them before they reshape the security environment?”
Infrastructure may still decide the outcome
There is another point in Nagel’s speech that deserves more attention. He said AI is not only the “steam engine of the mind” but also comes with a real electricity bill, and he warned that infrastructure constraints may end up mattering as much as capital. He pointed out that in Dublin and Frankfurt, the time needed to supply power to new data centres can run from three to five years. The Commission’s own planning reflects the same reality, with AI factories, supercomputing expansion, and computing access placed near the centre of Europe’s AI strategy. This matters because secure access to frontier models is not just about permissioning and policy. It is also about whether a region can build and sustain the compute, power, cloud, and testing environment needed to develop, evaluate, and deploy them responsibly. Europe may be right to demand a fairer share of access to critical models like Mythos, but if it does not also strengthen its infrastructure base, it will keep negotiating from a weaker position. Power, compute, and institutional readiness are not side issues anymore. They are part of AI sovereignty.
What a sensible response looks like now
A sensible European response would avoid two bad extremes. The first would be panic, where policymakers treat every frontier model as an argument for shutting the door. The second would be naïveté, where access is treated as the same thing as readiness. Europe probably needs a middle path: structured access for supervised institutions, shared red-team and evaluation programmes, faster disclosure and patch coordination, clear accountability rules for deployers and model providers, and stronger support for European infrastructure, testing facilities, and sector-specific AI capability. Anthropic’s own logic around Project Glasswing supports part of this, because the company explicitly says the model was initially limited so defenders could begin securing important systems before similar capabilities become broadly available. Nagel’s intervention supports the rest, because he is arguing that the relevant institutions exposed to the risk should not be left outside that defensive circle. That combination points to a workable principle. Do not throw the gates open blindly, but do not let a handful of firms become the only ones who can see the future of cyber risk early enough to respond.
The bigger lesson from the Mythos debate
The deeper lesson here is that the next phase of AI policy will not be fought only over innovation versus safety. It will also be fought over timing, access, and distribution. William Gibson’s old line that the future is already here, just not evenly distributed, opened Nagel’s speech for a reason. Mythos has made that unevenness very concrete. A frontier model with serious cyber capability exists now. It is not broadly available. It is being tested by a limited group. Regulators are worried. Banks are scrambling. Europe is trying to build an AI continent while also making sure it does not become a second-tier user of systems developed and selectively shared elsewhere. That is why this story matters. It is not just about Anthropic, or one German central banker, or one April news cycle. It is about a new rule of the AI age: when capability arrives unevenly, resilience does too. And in finance, uneven resilience is never a small problem.