News

The California Report on Frontier Policy outlines new plans for AI guardrails that could be implemented as state law.
A long-awaited report says companies are “simply inadequate at fully understanding” risks and harms.
In 2023, California Senator Scott Wiener sponsored a first-of-its-kind bill, SB 1047, which would have required that large-scale AI developers implement rigorous safety testing and mitigation for ...
Without proper safeguards, AI could facilitate nuclear and biological threats, among other risks, according to a report commissioned by ...
Scott Wiener called for developers of major AI models to adopt stringent safety measures, including auditing, reporting, and ...
New York has a new AI safety bill that tries to regulate frontier AI models from OpenAI, Google, and Anthropic.
Notable Republicans back Trump’s "Big Beautiful Bill," with a provision to block state AI laws, but some in the GOP are ...
Unlike SB 1047, the New York bill’s provisions are directed at developers, not individual models. It doesn’t require developers to include kill switches in their models.
AB 3211 is moving through the California legislature, just as SB 1047 did, so it has a real chance of becoming law. This bill ...
If an AI is powerful enough to make beneficial scientific discoveries, it's also capable of being used for harm, OpenAI says.