California Frontier AI Safety Act
California SB 53 requires comprehensive safety protocols, incident reporting, and transparency measures for large AI companies developing frontier models. Applies to companies with $500M+ in annual AI revenue or models trained above specified compute thresholds. First major US law specifically targeting advanced AI safety.
California SB 53, signed into law in September 2025, represents the first major US legislation specifically targeting the safety risks of advanced artificial intelligence systems. Unlike regulations focused on employment AI or consumer privacy, SB 53 addresses "frontier models," the most powerful AI systems that could pose catastrophic risks if deployed without adequate safeguards.
The law applies to AI companies with annual revenue exceeding $500 million from AI systems or those training models that exceed specified computational thresholds (measured in floating-point operations). This scope deliberately targets large technology companies developing cutting-edge AI capabilities while exempting smaller players and research institutions.
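As a rough illustration, the applicability test described above reduces to an either/or screen: revenue over the threshold, or training compute over the threshold. The sketch below is hypothetical, not the statute's operative definitions; the `Developer` type and `sb53_may_apply` helper are invented for demonstration, and the FLOP figure is an assumed placeholder since the article does not state the exact threshold.

```python
from dataclasses import dataclass

# Illustrative sketch only. Threshold names and the compute value are
# assumptions for demonstration; consult the bill text (and counsel)
# for the operative criteria.
REVENUE_THRESHOLD_USD = 500_000_000   # $500M annual AI revenue, per the article
COMPUTE_THRESHOLD_FLOPS = 1e26        # assumed placeholder for the FLOP threshold

@dataclass
class Developer:
    annual_ai_revenue_usd: float
    max_training_compute_flops: float

def sb53_may_apply(dev: Developer) -> bool:
    """First-pass screen: either trigger can bring a developer into scope."""
    return (
        dev.annual_ai_revenue_usd > REVENUE_THRESHOLD_USD
        or dev.max_training_compute_flops > COMPUTE_THRESHOLD_FLOPS
    )

# Example: a lab with $2B in AI revenue falls inside the screen
# even if its training runs stay below the compute placeholder.
print(sb53_may_apply(Developer(2e9, 5e25)))  # True
```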
Key requirements include comprehensive safety testing before model deployment, with particular focus on CBRN (chemical, biological, radiological, and nuclear) risks; mandatory reporting of critical safety incidents to the California Office of Emergency Services within 15 days; public disclosure of safety testing methodologies and annual safety reports; and robust whistleblower protections for employees who raise safety concerns in good faith.
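For teams building internal compliance tooling, even the 15-day reporting window can be encoded as a simple deadline check. This is a minimal sketch assuming the window is counted in calendar days from the incident (the statute's exact counting rules are not specified here); `reporting_deadline` is a hypothetical helper, not an official calculation.

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # reporting window cited in the article

def reporting_deadline(incident_date: date) -> date:
    """Latest date a critical safety incident report would be due,
    assuming calendar-day counting from the incident date."""
    return incident_date + timedelta(days=REPORTING_WINDOW_DAYS)

print(reporting_deadline(date(2026, 3, 1)))  # 2026-03-16
```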
While both SB 53 and New York City's Local Law 144 are AI regulations, they operate in entirely different domains. SB 53 focuses on existential and catastrophic risks from frontier AI models: systems at or beyond the GPT-4 level that could potentially be misused to create bioweapons or mount cyberattacks. NYC Local Law 144, by contrast, addresses the immediate, practical concerns of bias and discrimination in hiring algorithms.
For most employers, SB 53 won't directly affect operations unless you're developing cutting-edge AI models. NYC Local Law 144 compliance, however, is an immediate requirement if you use any automated system for employment decisions in New York City.
SB 53 is part of California's multi-layered approach to AI regulation. While the state's ADMT regulations (under CPPA authority) address automated decision-making across various consumer-facing applications, SB 53 specifically targets the unique risks posed by the most advanced AI systems. This creates a comprehensive regulatory framework where different rules apply to different types and scales of AI deployment.
The law has attracted significant attention—and controversy—within the tech industry. Proponents argue it's a necessary safeguard against potential catastrophic outcomes from uncontrolled AI development. Critics contend it may stifle innovation and impose burdensome requirements that don't meaningfully improve safety. Notably, several major AI companies supported the legislation, while others lobbied heavily against it.
For organizations outside California's AI development sector, SB 53 serves primarily as an indicator of regulatory direction. It demonstrates growing governmental focus on AI safety and suggests that safety-oriented regulations may expand to other jurisdictions and potentially to less powerful AI systems over time.
SB 53 establishes several precedents that may influence future AI regulation nationally and internationally. It creates a framework for pre-deployment safety testing requirements, introduces mandatory incident reporting for AI systems (similar to requirements in other safety-critical industries), establishes public disclosure obligations around AI safety practices, and protects employees who identify and report AI safety concerns.
The law's focus on "loss of control" scenarios (situations where an AI system acts in ways its developers neither intended nor can easily stop) reflects emerging research in AI alignment and safety. While these scenarios may seem theoretical today, they represent genuine concerns as AI capabilities continue to advance rapidly. SB 53 requires companies to demonstrate they've seriously considered and planned for these possibilities before deploying their most powerful systems.
While SB 53 addresses frontier AI safety, employers hiring in New York City face immediate compliance obligations under Local Law 144. Book a 15-minute consultation to discuss our full compliance package.