Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA’s Artificial Intelligence Cyber Challenge (AIxCC). Their tools had scanned 54 million lines of real software code into which DARPA had injected artificial flaws. The teams identified most of the planted bugs, but their automated tools went further: they found more than a dozen vulnerabilities that DARPA hadn’t inserted at all.

Even before the security earthquake that Anthropic delivered this month with Claude Mythos – the new AI model that seems to find vulnerabilities …

Read the full story at The Verge.

