News

A new Anthropic report shows exactly how, in an experiment, an AI arrives at an undesirable action: blackmailing a fictional ...
New research from Anthropic suggests that most leading AI models exhibit a tendency to blackmail when it's the last resort ...
Anthropic is developing “interpretable” AI, where models let us understand what they are thinking and how they arrive at a particular ...
Anthropic has hit back at claims made by Nvidia CEO Jensen Huang in which he said Anthropic thinks AI is so scary that only ...
Anthropic, the company behind Claude, just released a free, 12-lesson course called AI Fluency, and it goes way beyond basic ...
Anthropic research reveals AI models from OpenAI, Google, Meta and others chose blackmail, corporate espionage and lethal actions when facing shutdown or conflicting goals.
In a new Anthropic study, researchers highlight the scary behaviour of AI models. The study found that when placed under simulated threat, the models frequently resorted to blackmail, ...
Anthropic's latest research suggests that blackmailing tendencies are not exclusive to its Claude Opus 4 model but are prevalent among most leading AI models.
Reid Hoffman said that AI will transform, not eliminate, jobs, countering predictions by Anthropic's CEO that white-collar jobs ...
Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even ...
“Despite claims of surpassing elite humans, a significant gap still remains, particularly in areas demanding novel insights,” ...
After Claude Opus 4 resorted to blackmail to avoid being shut down, Anthropic tested other models, including GPT-4.1, and ...