California’s contentious AI safety bill moves closer to becoming law

The California State Assembly has passed a hotly debated AI safety bill that could establish the nation’s most stringent regulations on AI, setting the stage for a contentious decision by Governor Gavin Newsom.

The legislation, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), proposes rigorous testing and accountability measures for AI developers, particularly those creating large and complex models. The bill, if enacted into law, would require AI companies to test their systems for safety before releasing them to the public.

The State Senate, which has already passed the legislation, will vote again on the amended bill on August 31, before it goes to Newsom, who will have until September 30 to sign or veto it.

“With this vote, the Assembly has taken the truly historic step of working proactively to ensure an exciting new technology protects the public interest as it advances,” Senator and co-author of the bill Scott Wiener said in a statement.

Key provisions of the bill

Authored by Democratic State Senator Scott Wiener, the bill mandates that companies developing advanced AI models — specifically those costing over $100 million to create or requiring substantial computing power — test those models for significant risks before release. This includes preventing the misuse of AI for tasks such as launching cyberattacks or developing biological weapons.

The bill also requires developers to implement a “kill switch” to deactivate models that pose a threat and to undergo third-party audits to verify their safety practices.

If AI technologies are found to be used in harmful ways and companies have not conducted the required testing, the bill empowers the state attorney general to file lawsuits. The legislation aims to protect the public from potential AI-related hazards but also seeks to balance innovation with safety.

Though the bill aims to impose rigorous safety standards on AI developers, it has drawn sharp criticism and sparked a broader controversy about the future of AI regulation in the state.

While proponents argue that the legislation is necessary to protect the public and prevent the dangerous misuse of AI, critics claim that the AI bill goes too far and could stifle innovation. They warn that the stringent requirements might drive AI developers out of California, making the state less competitive in the fast-evolving tech landscape.

Tech industry pushback

The tech industry has reacted strongly against SB 1047. More than 74% of the companies that have weighed in on the bill oppose it. Major players including Google and Meta have voiced their opposition, fearing that the bill could create an unfriendly regulatory environment and hinder AI advancements.

OpenAI, known for its popular ChatGPT platform, has argued that AI regulation “should be handled at the federal level” to ensure a uniform approach across states, rather than through state-specific laws that could lead to a patchwork of regulations. In an open letter to Senator Wiener, OpenAI chief strategy officer Jason Kwon said the AI bill would “stifle innovation,” and companies would “leave California.”

Wiener, responding to the letter, said this argument “makes no sense.”

“This tired argument — which the tech industry also made when California passed its data privacy law, with that fear never materializing — makes no sense given that SB 1047 is not limited to companies headquartered in California. Rather, the bill applies to companies doing business in California. As a result, locating outside of California does not avoid compliance with the bill,” Wiener wrote in his response.

SB 1047 has also drawn criticism from key industry figures, including Dr. Fei-Fei Li, the co-director of the Stanford Institute for Human-Centered Artificial Intelligence and often referred to as the “godmother of AI.”

In an article published in Fortune earlier this month, Li expressed concerns that the bill’s penalties and restrictions could have unintended consequences that stifle innovation.

She argued that SB 1047 “will harm our emerging AI ecosystem,” particularly affecting sectors that are already at a disadvantage compared to major tech companies, such as the public sector, academia, and smaller tech firms.

Even some AI researchers and developers who support the idea of regulation have criticized the bill. Andrew Ng, a prominent AI entrepreneur who founded and led the Google Brain project, has called the bill “anti-open source” and “anti-innovation” in an X post, arguing that it targets the broad development of AI technology rather than focusing on specific harmful applications.

Opposition has come not only from tech leaders but also from prominent Democratic lawmakers. Former House Speaker Nancy Pelosi, who represents San Francisco, has been particularly vocal in her opposition, labeling the bill as “well-intentioned but ill-informed.” She and Representatives Ro Khanna and Zoe Lofgren argue that the bill could harm California’s tech industry by imposing burdensome regulations that may deter AI development.

In an open letter, the group of lawmakers expressed concerns that the bill could jeopardize open-source AI models, which rely on publicly available code and are considered vital for advancing AI technologies. They argue that such regulations could discourage innovation and put California at a disadvantage compared to other states and countries that are more welcoming to AI development.

Industry groups have also launched a major campaign against the bill, including a website that generates letters for people to send to California lawmakers urging them to vote against the legislation.

Despite the significant pushback, SB 1047 has found supporters both in the legislature and within the tech community. The bill passed the California Assembly with a 41-9 vote and is expected to clear the Senate again before reaching Governor Newsom’s desk.

Supporters like Tesla CEO Elon Musk and AI startup Anthropic have praised the bill for taking a proactive stance on AI safety, emphasizing the need for guardrails to prevent potential misuse of powerful AI technologies.

Senator Wiener, for his part, has defended the legislation, arguing that it does not stifle innovation but aims to ensure that AI development proceeds responsibly. “Innovation and safety are not mutually exclusive,” Wiener said in the statement, stressing that the bill is designed to increase public trust in AI technologies by holding companies accountable to safety standards they have already adopted.

What’s next for the California AI bill?

As the bill moves closer to becoming law, Governor Newsom faces a pivotal decision that could have wide-ranging implications for the tech industry and AI regulation nationwide.

With Congress showing little progress on federal AI legislation, California’s actions could set a precedent for other states considering how to manage the rapid development of AI technologies.
