Committee backs nonbinding resolution urging AI firms to adopt safe harbors for researchers and protect whistleblowers
Summary
The Assembly Science, Innovation and Technology Committee released Assembly Resolution 158 after testimony recommending safe‑harbor provisions for AI safety researchers and voluntary whistleblower protections for employees who raise concerns about AI risks.
A nonbinding Assembly resolution urging generative‑AI companies to voluntarily commit to protecting employees who raise risk‑related concerns advanced out of the Assembly Science, Innovation and Technology Committee after testimony from a Princeton University researcher.
Sayash Kapoor, a computer science PhD candidate at Princeton's Center for Information Technology Policy (speaking in his personal capacity), told the committee that the terms of service and technical filters many generative‑AI companies use can block third‑party safety research, because researchers must sometimes test use cases the companies prohibit. Kapoor argued that a narrowly drawn safe harbor for good‑faith research — modeled on cybersecurity norms such as limited disclosure windows and a commitment not to disrupt services — would allow independent evaluation of how well companies' safeguards work and whether their systems could be misused for misinformation, cybersecurity, or biosecurity harms.
Kapoor also urged whistleblower protections for employees, saying internal commitments by companies are meaningful only if employees can report noncompliance without fear of reprisal. He said the resolution's approach is less onerous than some proposed regulatory frameworks and pointed to California's recent debate over Senate Bill 1047 as a contrast to more prescriptive statutory regimes.
A motion to amend and release the resolution carried on a roll‑call vote. One committee member raised a concern about how the voluntary commitments in the resolution's subsections could be enforced absent a regulatory structure; the chair responded that the measure is intended to encourage voluntary action rather than impose regulatory constraints.
Why it matters: Supporters say safe harbors and whistleblower protections would enable independent safety testing and greater transparency from AI firms while avoiding heavy-handed rules that might chill innovation. The committee's action sends a state‑level signal in a policy area where federal and state approaches are evolving.
