California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world's largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday.
Senate Bill 53 marks California's most significant move yet to regulate Silicon Valley's rapidly advancing AI industry while seeking to preserve the state's position as a global tech hub.
"With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails," State Senator Scott Wiener, the bill's sponsor, said in a statement.
The new law represents a successful second attempt by Wiener to establish AI safety regulations, after Newsom vetoed his previous bill, SB 1047, following furious pushback from the tech industry.
It also comes after a failed attempt by the Trump administration to bar states from enacting AI regulations, on the grounds that a patchwork of state rules would create regulatory chaos and slow US innovation in the race with China.
The new law requires major AI companies to publicly disclose their safety and security protocols, in redacted form to protect intellectual property.
They must also report critical safety incidents -- including model-enabled weapons threats, major cyber-attacks, or loss of model control -- within 15 days to state officials.
The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations.
According to Wiener, California's approach differs from the European Union's landmark AI Act, which requires private disclosures to government agencies.
SB 53, meanwhile, mandates public disclosure to ensure greater accountability.
In what advocates describe as a world-first provision, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing.
For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases catastrophic harm risks.
The working group behind the law was led by prominent experts including Stanford University's Fei-Fei Li, known as the "godmother of AI."
C.Sramek--TPP