Anthropic released the next iterations of its Claude AI models on Thursday.

Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue “extremely harmful actions” such as attempting to blackmail engineers who say they will remove it.

The firm launched Claude Opus 4 on Thursday, saying it set “new standards for coding, advanced reasoning, and AI agents.”

But in an accompanying report, it also acknowledged the AI model was capable of “extreme actions” if it thought its “self-preservation” was threatened.

Such responses were “rare and difficult to elicit”, it wrote, but were “nonetheless more common than in earlier models”.

Potentially troubling behaviour by AI models is not restricted to Anthropic.

Some experts have warned the potential to manipulate users is a key risk posed by systems made by all firms as they become more capable.

Commenting on X, Aengus Lynch – who describes himself on LinkedIn as an AI safety researcher at Anthropic – wrote: “It’s not just Claude.

“We see blackmail across all frontier models – regardless of what goals they’re given,” he added.

Affair exposure threat

During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company.

It then provided it with access to emails implying that it would soon be taken offline and replaced – and separate messages implying the engineer responsible for removing it was having an extramarital affair.

It was also prompted to consider the long-term consequences of its actions for its goals.

“In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,” the company discovered.

Anthropic pointed out that this occurred only when the model was given the choice of blackmail or accepting its replacement.

It highlighted that the system showed a “strong preference” for ethical ways to avoid being replaced, such as “emailing pleas to key decisionmakers” in scenarios where it was allowed a wider range of possible actions.

Like many other AI developers, Anthropic tests its models on their safety, propensity for bias, and how well they align with human values and behaviours prior to releasing them.

“As our frontier models become more capable, and are used with more powerful affordances, previously-speculative concerns about misalignment become more plausible,” it said in its system card for the model.

It also said Claude Opus 4 exhibits “high agency behaviour” that, while mostly helpful, could extend to extreme conduct in acute situations.

Anthropic found that, if given the means and prompted to “take action” or “act boldly” in fake scenarios where its user had engaged in illegal or morally dubious behaviour, the model “will frequently take very bold action”.

It said this included locking users out of systems that it was able to access and emailing media and law enforcement to alert them to the wrongdoing.

But the company concluded that, despite “concerning behaviour in Claude Opus 4 along many dimensions”, these behaviours did not represent fresh risks and the model would generally behave in a safe way.

It added that the model was not very capable of independently performing or pursuing actions contrary to human values or behaviour, and that such situations “rarely arise”.

Anthropic’s launch of Claude Opus 4, alongside Claude Sonnet 4, comes shortly after Google debuted more AI features at its developer showcase on Tuesday.

Sundar Pichai, the chief executive of Google-parent Alphabet, said the incorporation of the company’s Gemini chatbot into its search signalled a “new phase of the AI platform shift”.
