Artificial Intelligence

Anthropic CEO Raises Concerns Over DeepSeek’s Bioweapons Safety Performance

Anthropic CEO Dario Amodei has expressed serious concerns regarding DeepSeek, the Chinese AI company that has rapidly gained attention with its R1 model. In a recent interview on the ChinaTalk podcast with Jordan Schneider, Amodei revealed that DeepSeek performed poorly in a critical safety assessment related to bioweapons data.

According to Amodei, the model was among the weakest Anthropic has tested, generating rare and potentially dangerous information on the subject. DeepSeek’s performance was “the worst of basically any model we’d ever tested,” Amodei claimed. “It had absolutely no blocks whatsoever against generating this information.”

Amodei stated that this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team investigates whether models can produce information about bioweapons that is not readily accessible through Google or conventional textbooks. Anthropic positions itself as a foundational AI model provider that takes safety seriously.

Amodei said he didn’t think DeepSeek’s models today are “literally dangerous” in providing rare and harmful information, but that they might be in the near future. Although he praised DeepSeek’s team as “talented engineers,” he advised the company to “take seriously these AI safety considerations.” He also expressed support for stringent export controls on chips to China, citing worries that these technologies could give China’s military an advantage.

Amodei did not specify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he provide additional technical details about the tests. DeepSeek’s ascent has raised safety worries in other areas as well. Recently, Cisco security researchers reported that DeepSeek R1 did not block any harmful prompts during their safety tests, resulting in a 100% jailbreak success rate.

Cisco did not mention bioweapons, but said it was able to get DeepSeek to produce harmful information related to cybercrime and other illicit activities. It is worth noting, however, that Meta’s Llama-3.1-405B and OpenAI’s GPT-4o also exhibited high failure rates of 96% and 86%, respectively. It remains to be seen whether such safety concerns will significantly impede the swift adoption of DeepSeek.

Published by
Tehniyat Zafar
