Anthropic CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the ones typically raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek's performance was "the worst of basically any model we'd ever tested," Amodei said. "It had absolutely no blocks whatsoever against generating this information."
Amodei said this was part of the evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team checks whether models can generate bioweapons-related information that isn't easily found on Google or in textbooks. Anthropic positions itself as a foundational AI model provider that takes safety seriously.
Amodei said he didn't think DeepSeek's models today are "literally dangerous" in providing rare and harmful information, but that they could be in the near future. Although he praised DeepSeek's team as "talented engineers," he advised the company to "take seriously these AI safety considerations."
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China's military an edge.
Amodei didn't clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give further technical details about the tests. Anthropic didn't immediately respond to a request for comment from TechCrunch. Neither did DeepSeek.
DeepSeek's rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in their safety tests, achieving a 100% jailbreak success rate.
Cisco didn't mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It's worth mentioning, however, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether safety concerns like these will put a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted bringing R1 to their cloud platforms, ironically enough, given that Amazon is Anthropic's biggest investor.
On the other hand, a growing list of countries, companies, and especially government organizations such as the U.S. Navy and the Pentagon have started banning DeepSeek.
Time will tell whether these efforts gain traction or whether DeepSeek's global rise will continue. Either way, Amodei says he does consider DeepSeek a new competitor that's on the level of the top U.S. AI companies.
"The new fact here is that there's a new competitor," he told ChinaTalk. "In the big companies that can train AI (Anthropic, OpenAI, Google, perhaps Meta and xAI) now DeepSeek is maybe being added to that category."