California Senate Bill 1047: Regulating Advanced AI Models
California Senate Bill 1047 has sparked intense debate in the tech world. Known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, the bill aims to regulate advanced AI models and to ensure they are developed and deployed safely.
The bill targets AI models that require major investment, specifically those costing over $100 million to train. Developers must conduct safety assessments and certify that their models do not enable hazardous capabilities, and they must comply with annual audits and safety standards. Regulatory oversight would come from a new Frontier Model Division within the Department of Technology. This division would monitor compliance and could impose civil penalties for violations, reportedly up to 10% of a model's training costs for a first violation and up to 30% for subsequent ones.
Supporters believe that bills like SB 1047 are necessary to prevent potential harms from advanced AI. Critics, however, argue that the bill could stifle innovation and concentrate power among a few large tech companies. They also consider the bill's language vague, raising concerns about compliance and liability for developers. Opponents, including tech companies and AI researchers, object that the bill regulates AI models themselves rather than their applications, and fear it could place unnecessary burdens on startups and open-source projects, slowing progress in California.
Today, OpenAI whistleblowers William Saunders and Daniel Kokotajlo, both of whom left the company over safety concerns, published a response explaining their position. They argue that developing frontier models without adequate safety measures poses foreseeable risks of catastrophic harm to society, and they question OpenAI's commitment to safety. The letter states that OpenAI has not always deployed its systems safely and has fired employees who raised security concerns.
Anthropic, another AI firm, also wrote a letter about SB 1047. The company acknowledges that the bill addresses real and serious concerns but worries that rigid rules might struggle to keep pace with rapid AI advances, suggesting instead that regulation should be adaptable as the field evolves. Anthropic also stresses that transparency in safety and security practices is crucial, arguing that the public and lawmakers need ways to verify that companies actually adhere to their safety plans.
Both letters highlight the challenges of regulating a rapidly advancing field like AI. While some see regulation as necessary, others worry it could stifle innovation. The debate around SB 1047 continues as industry figures and lawmakers weigh the pros and cons of this consequential bill.