Anthropic Restricts AI Model After Finding Thousands of Vulnerabilities

Anthropic has built an AI tool that can uncover serious vulnerabilities in computer systems, but the company is withholding it from public release. Access is restricted to a small group of partners, who use it to secure major systems.

Claude Mythos Preview & Project Glasswing

The new model, called Claude Mythos Preview, is part of an initiative dubbed Project Glasswing. Anthropic has partnered with Apple and Nvidia on the effort, which aims to make web infrastructure safer. The company has committed $100 million in credits to the work and has donated $4 million to open-source groups whose software underpins its code.

A Model Beyond Existing Benchmarks

Anthropic says it did not set out to build a hacking machine. The model's skill at finding bugs emerged as a byproduct of training it to reason, and the company reports that it surpasses every benchmark it has previously run.

The model uncovered a bug in OpenBSD that had gone unnoticed for 27 years, and it broke into FreeBSD through a 17-year-old flaw without any human assistance. Nicholas said the model finds more bugs in weeks than he had seen across his entire career.

Why Keep It Private?

Anthropic is keeping the tool private because of the danger it poses: in the wrong hands, the same capabilities could put users' security at risk. Chinese hackers recently used AI to attack thirty organizations at once, and the company has spoken with US officials about keeping people's data safe.

Addressing Open‑Source Security Gaps

Most modern applications run on open-source code. Jim argues that the people who write that code need more support, and Anthropic has donated $2.5 million to help keep that software secure.

The Road Ahead

Anthropic says it intends to make the tool available to everyone once stronger safeguards are in place. The company will next test the Claude Opus model to make sure things work as expected. Other companies, including OpenAI, take a similar staged approach when releasing powerful tools; staying vigilant, the company argues, is what will keep the world safe from misuse of such technology.