An AI Model So Good at Hacking It Can't Be Released—And Governments Want In
Anthropic has built an AI called Mythos that experts say may have an unprecedented ability to find and exploit software vulnerabilities—and the company has decided it is too dangerous to release publicly. Instead, access is limited to roughly 50 tech companies, including Microsoft, Apple, and Amazon Web Services. Now the Trump administration is pushing for government access, raising a thorny question: who gets the keys to an automated hacking tool?
Bottom Line
An AI capable of automated hacking is now in the hands of about 50 private companies, governments are lobbying for access, and no public framework exists to govern who may use it or how. This isn't a hypothetical future risk—it's a live policy question being answered right now, mostly behind closed doors. The precedent set here will shape whether advanced AI capabilities are treated like nuclear materials (tightly controlled, internationally regulated) or like software (widely distributed, lightly governed).