The California Senate has passed a controversial bill, SB 1047, aimed at preventing AI disasters, and the legislation now heads to Governor Gavin Newsom. The decision poses a dilemma for Newsom, who must weigh the potential dangers of powerful AI systems against the state’s booming AI industry. He has until September 30 to either sign SB 1047 into law or veto it.
Introduced by State Senator Scott Wiener, SB 1047 seeks to mitigate the risks associated with very large AI models, focusing on preventing catastrophic events like loss of life or cyberattacks causing over $500 million in damages.
Although few AI models currently meet the criteria outlined in the bill, SB 1047 is forward-thinking and aims to address future AI developments rather than existing issues.
If signed into law, SB 1047 would hold AI model developers accountable for harms caused by their technology, an approach comparable to holding gun manufacturers responsible for mass shootings. It would also empower California’s attorney general to impose hefty penalties on AI companies whose technology leads to catastrophic incidents. Additionally, AI models covered under the bill would be required to include a “kill switch” allowing emergency shutdown if they are deemed dangerous.
Newsom’s decision could significantly shape the AI industry in America. Here is a closer look at the possible outcomes for SB 1047:
Why Newsom might sign it
Scott Wiener argues that increased liability in Silicon Valley is necessary to prevent future technological catastrophes. Newsom may feel compelled to take decisive action on AI regulation and hold prominent tech companies accountable.
Some AI executives, such as Elon Musk and former Microsoft chief AI officer Sophia Velastegui, have expressed cautious optimism about SB 1047, recognizing the need for responsible AI practices.
The startup Anthropic, while not officially endorsing SB 1047, has contributed suggestions for the bill’s improvement. Their input influenced revisions that ensure AI companies can only be held liable after their models cause harm, not preemptively.
Why Newsom might veto it
Despite these potential benefits, intense industry opposition to SB 1047 may prompt Newsom to veto the bill. Industry leaders argue that it would shift liability for AI harms from the applications built on top of models to the underlying models themselves, threatening innovation in California’s AI sector.
Notable figures, including former House Speaker Nancy Pelosi and prominent AI researchers, have urged Newsom to reject SB 1047, emphasizing its potential negative impact on AI innovation and the broader tech economy.
The U.S. Chamber of Commerce has also advocated for a veto, citing AI’s pivotal role in economic growth and the risk of stifling innovation with increased regulation.
If SB 1047 becomes law
Should Newsom approve SB 1047, significant changes will follow in the coming years. By 2025, tech companies would need to provide safety reports for their AI models, with the attorney general granted authority to intervene if necessary.
In 2026, a regulatory board for AI models would be established to oversee compliance and safety practices within the industry. Developers would be required to engage auditors for safety assessments, while the attorney general could start legal action against companies responsible for catastrophic events linked to AI technology.
If SB 1047 gets vetoed
In the event of a veto, federal regulators may take the lead on AI regulation, as desired by OpenAI and other industry stakeholders. This shift could result in a more gradual approach to regulating AI models at the national level.
Recent agreements between OpenAI and the U.S. AI Safety Institute suggest a collaborative effort to establish federal standards for AI models, underscoring the importance of national coordination in setting regulatory frameworks.
While vetoing SB 1047 may delay immediate regulation, it could pave the way for federal agencies to work alongside tech companies in developing responsible AI practices in the long run.