
Zooming in on AI – #3: California SB 1047 – The potential new frontier of more stringent AI regulation?

Published: September 9, 2024
  • Cybersecurity protections: Before training any covered AI model, developers are required to implement administrative, technical, and physical cybersecurity measures to prevent unauthorized access, misuse, or modification. This includes developing the capacity for a full shutdown of the model if necessary, and ensuring safeguards against advanced persistent threats and other malicious actors.
  • Full shutdown procedures: Developers must establish and document the conditions under which a “full shutdown” of the model or its derivatives would be enacted to prevent potential harm. This includes considering the impact of a shutdown on critical infrastructure.
  • Compliance and third-party auditing requirements: Beginning January 1, 2026, developers of covered AI models must conduct annual third-party audits of their safety and security protocols. Developers are also required to publish redacted versions of their safety and security protocols and the results of their audits, and submit full versions of their audits to the California Attorney General upon request. Additionally, developers must submit annual compliance statements, signed by a senior corporate officer, detailing any risks and measures taken to prevent critical harm.
  • Incident reporting: Any AI safety incidents involving covered models must be reported to the California Attorney General within 72 hours of the developer becoming aware of the incident. The report should detail the nature of the incident and the steps taken to address the risks associated with it.
  • Coexistence with federal contracts and preemption: The Act does not apply to products or services to the extent that its requirements would strictly conflict with contracts with federal government entities. The Act’s provisions do not supersede existing federal laws and may be adjusted or supplemented based on federal regulations or evolving technological standards. If any part of the Act is held invalid, the remaining provisions remain enforceable.
  • Guidance and best practices: Developers are encouraged to follow industry best practices and consider guidance from organizations such as the U.S. Artificial Intelligence Safety Institute and the National Institute of Standards and Technology.
  • Civil penalties and enforcement actions: The Act grants the Attorney General authority to initiate civil actions for violations, including:
    • Penalties for violations: Fines scale with the severity of the violation:
      1. For violations causing death, bodily harm, property damage, theft, or an imminent threat to public safety, fines are capped at 10% of the cost of the computing power used to train the AI model (calculated at average market prices at the time of training) for a first offense, rising to 30% for subsequent violations (see the illustrative calculation following this list); and
      2. Additional penalties are prescribed for violations related to labor laws, safety protocols, and other specific sections of the Act.
    • Injunctive relief and monetary damages: Courts may issue injunctions, award compensatory and punitive damages, and grant attorney fees and costs to enforce the Act’s provisions.
    • Contractual limitations on liability: Any contract or agreement that attempts to waive, limit, or shift liability for violations is deemed void. Courts are empowered to impose joint and several liability on affiliated entities that attempt to limit or avoid liability through corporate structuring.
  • Assessment of developer conduct: In determining whether a developer exercised reasonable care, regulators may consider the quality and implementation of the developer’s safety and security protocols, the thoroughness of risk management practices, and comparisons to industry standards.
  • Whistleblower protections: The Act protects employees of AI developers and their contractors/subcontractors who disclose information to the Attorney General or Labor Commissioner regarding non-compliance with safety standards or risks of critical harm. The Act prohibits retaliation against whistleblowers and mandates clear communication of employee rights. Additionally, developers must establish an internal process for employees to report violations anonymously.
  • Public disclosure and transparency: The Attorney General and Labor Commissioner may release complaints or summaries thereof if doing so serves the public interest, with sensitive information redacted to protect public safety and privacy.
  • Creation of the Board of Frontier Models: The Act establishes the Board of Frontier Models within the Government Operations Agency, which will regulate AI models posing significant public safety risks:
    • The Board consists of nine members, including experts from AI safety, cybersecurity, and other fields. Members are appointed by the Governor, Senate, and Assembly.
    • The Board will oversee the establishment of thresholds for defining AI models subject to regulation, auditing requirements, and guidance for preventing critical harms.
  • Establishment of CalCompute: The Act proposes the creation of CalCompute, a public cloud computing cluster designed to foster safe, ethical, and equitable AI development. CalCompute is intended to:
    • Support research and innovation in AI and expand access to computational resources;
    • Be established within the University of California system, if feasible, with funding options including private donations; and
    • Operate under a framework outlined in the Act for its creation and operation, covering governance structure, funding, and equitable access parameters.
  • Public access and confidentiality: While the Act imposes some limitations on public access to safety protocols and auditors' reports to protect proprietary information and public safety, it is designed to balance transparency with the need for confidentiality.
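To make the enforcement provisions above concrete, the following is a minimal sketch of the penalty-cap arithmetic described in the Act: a cap of 10% of the cost of the compute used to train the model (at average market prices at the time of training) for a first qualifying violation, and 30% for subsequent ones. The dollar figure and the helper function below are illustrative assumptions for this article, not values or terms taken from the bill, and actual penalties would be set by a court within these caps.

```python
# Illustrative sketch of the SB 1047 penalty caps described above.
# All numbers are hypothetical examples, not figures from the bill.

def penalty_cap(training_compute_cost: float, prior_violations: int) -> float:
    """Maximum fine: 10% of the training-compute cost for a first
    qualifying violation, 30% for any subsequent violation."""
    rate = 0.10 if prior_violations == 0 else 0.30
    return rate * training_compute_cost

# Hypothetical model whose training compute cost $150M at average
# market prices at the time of training.
cost = 150_000_000

print(f"First violation cap:      ${penalty_cap(cost, 0):,.0f}")  # $15,000,000
print(f"Subsequent violation cap: ${penalty_cap(cost, 1):,.0f}")  # $45,000,000
```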
If enacted, this detailed regulatory framework is intended to hold AI technologies developed and deployed in California to high standards of safety, accountability, and ethical practice, while also promoting innovation and equitable access to technological resources.