In a new report, a California-based policy group co-led by AI pioneer Fei-Fei Li suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.
The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom felt that SB 1047 missed the mark, he acknowledged last year that legislators needed a more extensive assessment of AI risks.
In the report, Li, along with co-authors Jennifer Chayes, dean of UC Berkeley's College of Computing, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, argue in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may require laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for stronger standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.
Li et al. write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. They also argue, however, that AI policy should not only address current risks but anticipate future consequences that could occur without sufficient safeguards.
"For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could and would cause extensive harm," the report states. "If those who speculate about the most extreme risks are right, and we are uncertain if they will be, then the stakes and costs for inaction on frontier AI at this current moment are extremely high."
The report recommends a two-pronged strategy to boost transparency in AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, such as internal safety testing, the report says, while also being required to submit testing claims for third-party verification.
While the report, the final version of which is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused researcher at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."
The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety advocates, whose agenda lost ground over the last year.