
California proposes the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act to regulate AI

California State Senator Wiener has proposed SB 1047 to regulate the risks associated with AI. The bill, if enacted, would require AI developers to determine whether an AI model qualifies for a "positive safety determination," which would be defined to mean that the developer reasonably believes the model does not have a hazardous capability, and does not come close to having one when accounting for a margin of safety and post-training modifications.

The bill would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act to, among other things, require a developer of a covered model, as defined, to determine whether it can make a positive safety determination with respect to that model before initiating its training, as specified. The bill would also create a "Frontier Model Division" within the state Department of Technology. In addition, the bill, if enacted, would impose certain requirements on model developers when a model does not meet the requirements for a positive safety determination. These requirements include:

  • Implementing a security protocol (including a separate safety and security protocol meeting certain specifications)
  • Implementing reasonable safeguards to prevent users from using the hazardous capabilities of the AI model
  • Refraining from making use of the model if there is an unacceptable risk of use of the hazardous capabilities

While the bill has a long way to go before it is enacted into law, model developers may wish to consider the policies and procedures it proposes in order to minimize risk associated with the use of AI.

