While real-world harm caused by AI systems remains largely the stuff of science fiction, lawmakers are pushing for safeguards before such scenarios materialize. California’s SB 1047 aims to prevent disasters stemming from AI misuse before they happen, with a final state senate vote scheduled for later this month.
Despite its noble intent, SB 1047 has drawn backlash from across Silicon Valley, including major tech companies, investors, and industry groups. Among the many AI bills currently working their way through legislatures, this one stands out for how contentious it has become.
What does SB 1047 do?
SB 1047 has one primary goal: to prevent the misuse of large AI models that could lead to catastrophic consequences for humanity.
It defines “critical harms” as scenarios in which an AI model is used to create a weapon capable of causing mass casualties or to orchestrate a cyberattack causing at least $500 million in damages. The responsibility lies with developers to build safety measures into these models so that such outcomes are avoided.
What models and companies are subject to these rules?
The rules outlined in SB 1047 target only the largest AI models: those that cost at least $100 million to train and use roughly 10^26 floating-point operations of compute during training. Few companies currently meet these criteria, but tech giants like OpenAI, Google, and Microsoft are likely to cross those thresholds in the near future.
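To make that threshold concrete, here is a minimal illustrative sketch of the coverage test in Python. The constant and function names are hypothetical, and the bill’s actual statutory definitions are more detailed than this two-number check.

```python
# Illustrative sketch of SB 1047's "covered model" size test as described above.
# Names and simplified logic are hypothetical, not taken from the bill's text.

FLOP_THRESHOLD = 1e26               # training compute floor (~10^26 operations)
COST_THRESHOLD_USD = 100_000_000    # $100 million training-cost floor


def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model meets both of the bill's size thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD_USD


# A hypothetical mid-sized training run would fall outside the bill's scope...
print(is_covered_model(3e25, 40_000_000))    # False
# ...while a frontier-scale run would be covered.
print(is_covered_model(2e26, 300_000_000))   # True
```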
Open-source models and their derivatives also fall within the bill’s scope, with developers mandated to implement safety protocols, conduct regular testing, and engage third-party auditors to assess their AI safety practices annually.
The aim is to provide a “reasonable assurance” that these protocols can mitigate critical harms, recognizing the impossibility of absolute certainty in this context.
Who would enforce it, and how?
Enforcement of SB 1047 would be overseen by a new California agency, the Frontier Model Division (FMD), which would be responsible for certifying public AI models meeting the bill’s thresholds.
The agency would be governed by a five-person board with representatives from the AI industry, academia, and the open-source community. Each year, developers would have to submit a certification to the FMD assessing the risks posed by their AI models, the effectiveness of their safety protocols, and their compliance with SB 1047.
Non-compliance would be enforced through civil actions brought by the state attorney general, with penalties scaling with the cost of training the models in question.
What do proponents say?
Proponents of SB 1047, led by its author, California State Senator Scott Wiener, frame the bill as a precaution: putting safeguards in place before a serious AI-related disaster occurs rather than after.
Prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, have voiced support for the bill, arguing that guardrails against catastrophic misuse are needed as models grow more capable.
What do opponents say?
Conversely, much of Silicon Valley, from venture investors to startup founders and some AI researchers, has come out against SB 1047.
Opponents argue that the bill would stifle innovation and research, impose heavy compliance burdens on developers, particularly startups and open-source projects, and slow the growth of California’s AI ecosystem.
What happens next?
Following deliberations on proposed amendments, SB 1047 faces a decisive vote in the California State Senate. If it passes, the bill will head to Governor Gavin Newsom’s desk, where he can sign it into law or veto it.
If enacted, SB 1047 would establish a new regulatory framework for large-scale AI development in California, though legal challenges are widely expected as opponents continue to press their case against the bill.