Anthropic is investigating a claim that a small group of people gained access to its Claude Mythos model, a cyber-security tool the AI firm says is too powerful to release to the public.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” the company said in a statement.

The statement was in response to a Bloomberg report that users in a private forum managed to access the model without the normal permissions.

There is deep unease about Mythos' capabilities – though the UK's top cyber official has said advanced AI tools could be a "net positive" if the technology were secured against misuse.

There is currently no suggestion that malicious actors have managed to get hold of the model, and Anthropic says it does not have evidence its systems are affected.

But the report of access by unauthorised users raises questions about the ability of large AI companies to stop their advanced AI models from getting into the wrong hands.

This was “most likely through misuse of access rather than a classic hack,” according to Raluca Saceanu, chief executive of cyber-security company Smarttech247.

Anthropic has released the Mythos model to some tech and financial companies to help them secure their systems, given the model's reported ability to exploit vulnerabilities.
