Update: On Thursday, August 15, California’s Appropriations Committee passed SB 1047 with significant amendments that change the bill. You can read about them here.
Outside of science fiction, there is no real-world precedent for AI systems causing mass harm or being used in large-scale cyberattacks. But some lawmakers want to put safeguards in place before such scenarios become a reality. California’s SB 1047 tries to stop AI-caused disasters before they happen, and it is headed for a final vote in the state legislature in August.
What would SB 1047 do?
SB 1047 aims to prevent large AI models from being used to cause “critical harms” to humanity. The bill gives examples of such harms: a bad actor using an AI model to create a weapon that results in mass casualties, or to orchestrate a cyberattack that causes extensive damage. Developers, meaning the companies that build these models, would be responsible for implementing safety protocols sufficient to prevent such outcomes.
What models and companies are subject to these rules?
SB 1047’s rules apply only to the world’s largest AI models: those that cost at least $100 million to train and use at least 10^26 floating-point operations of compute during training. Those thresholds could be adjusted over time. Very few companies today have AI models that meet those requirements, but tech giants such as OpenAI, Google, and Microsoft are expected to cross them soon.
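For a rough sense of how narrow that definition is, here is a minimal sketch of the covered-model test as described above. The function name and input fields are hypothetical illustrations, not language from the bill, and the bill’s actual legal definitions are more detailed than this simple check.

```python
# Illustrative sketch of SB 1047's covered-model thresholds as described above.
# Names and structure are hypothetical; the bill's legal definitions are more nuanced.

TRAINING_COST_THRESHOLD_USD = 100_000_000     # at least $100 million in training cost
TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26       # at least 10^26 floating-point operations

def is_covered_model(training_cost_usd: float, training_compute_flops: float) -> bool:
    """Return True if a model meets both thresholds described in the article."""
    return (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
            and training_compute_flops >= TRAINING_COMPUTE_THRESHOLD_FLOPS)

# Example: a hypothetical frontier-scale training run
print(is_covered_model(training_cost_usd=150_000_000, training_compute_flops=2e26))  # True
```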
The bill also requires developers to put safety protocols in place to prevent misuse of covered models. That includes an “emergency stop” capability that can shut the model down, testing procedures that address the risks the model poses, and an annual third-party audit of the developer’s safety practices.
Who would enforce it, and how?

Enforcement would fall to a new California agency, the Frontier Model Division (FMD), overseen by a board with representatives from the AI industry, the open source community, and academia. Every AI model that meets the bill’s thresholds would need to be individually certified with the FMD, and developers would have to submit an annual assessment of their models’ risks. If a developer fails to comply, the state attorney general could bring a civil action, exposing the developer to significant penalties.
What do proponents say?
Proponents of SB 1047 argue that the bill is crucial for preventing potential AI-related disasters and protecting citizens. Influential figures in the AI community, such as Geoffrey Hinton and Yoshua Bengio, have expressed support for the bill, emphasizing the importance of mitigating risks associated with AI technology.
Some critics have raised concerns about potential conflicts of interest among the bill’s backers, but supporters maintain that SB 1047’s aim is simply to ensure that powerful AI models are developed safely and responsibly.
What do opponents say?
On the opposing side, a range of Silicon Valley players, including venture capitalists, tech giants, and AI researchers, have come out against SB 1047. Their criticisms range from the compliance burden the bill would place on startups to fears that it would chill innovation and research.

Opponents argue that the bill could stifle technological advancement and impose unnecessary restrictions on AI development, with consequences for the broader AI ecosystem and industry as a whole.
What happens next?
With amendments in place and opinions sharply divided, SB 1047 now heads to the California State Assembly floor for a final vote. If it passes the legislature, the bill will land on Governor Gavin Newsom’s desk, where he will decide whether to sign it into law.