The author of California's SB 1047, the country's most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California State Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company's AI systems could pose a "critical risk" to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources they need to develop AI that benefits the public.
Wiener's last AI safety bill, California's SB 1047, was one of the most controversial AI legislative efforts of 2024. SB 1047 aimed to prevent the possibility of large AI models causing catastrophic events, such as loss of life or cyberattacks costing more than $500 million in damages. However, Governor Gavin Newsom ultimately vetoed the bill in September.
The debate over Wiener's last bill quickly turned ugly in 2024. Some Silicon Valley leaders said SB 1047 would harm America's competitive edge in the global AI race, and claimed the bill was inspired by unrealistic fears that AI systems could bring about science-fiction-style doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists engaged in a "propaganda campaign" against his bill, pointing in part to Y Combinator's claim that SB 1047 would send startup founders to jail, a claim experts argued was misleading.
SB 53 essentially takes the least controversial parts of SB 1047 – such as whistleblower protections and the establishment of a CalCompute cluster – and repackages them in a new AI bill.
Notably, Wiener is not backing away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a "critical risk." The bill defines critical risk as a "foreseeable or material risk that a developer's development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property."
SB 53 bars developers of frontier AI models – likely including OpenAI, Anthropic, and xAI, among others – from retaliating against employees who disclose concerning information to California's Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to build out a public cloud computing cluster. The group would consist of University of California representatives, as well as other public and private researchers. It would make recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it's very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California's legislative bodies before it reaches Governor Newsom's desk. State lawmakers will surely be waiting to see how Silicon Valley reacts to SB 53.
However, 2025 could be a tougher year to pass AI safety bills than 2024. California passed 18 AI-related bills in 2024, but it now seems the AI doom movement has lost ground.
Vice President J.D. Vance signaled at the Paris AI Action Summit that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster established by SB 53 could surely be seen as advancing AI progress, it's unclear how legislative efforts around existential AI risk will fare in 2025.