Committee hears testimony on AI bill that targets 'high‑risk' deployers and extends task force

Senate Environment, Energy, and Technology Committee · January 27, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

SB 5284 would require deployers of high‑risk AI systems to adopt risk‑management frameworks, conduct impact assessments, and disclose AI use; stakeholders including insurers, hospitals, banks, and tech companies urged careful drafting or carve‑outs to avoid conflicts with existing regulation and technically infeasible requirements.

The Senate Environment, Energy, and Technology Committee heard public testimony on Senate Bill 5284, a bill that targets "high‑risk" artificial intelligence systems, requires risk‑management programs for deployers, mandates consumer disclosures in some circumstances, and extends the state's AI Task Force.

Committee staff summarized the bill: beginning July 1, 2027, deployers of high‑risk AI systems would be required to use industry‑standard methods (for example, NIST frameworks) to mitigate foreseeable algorithmic discrimination, to perform impact assessments that disclose purposes, inputs, and outputs, and to notify the Attorney General if a system causes algorithmic discrimination. The bill also requires government agencies that use AI systems to disclose to consumers when they interact with an AI system, and it extends the AI Task Force with a new workplace subgroup.

Senator Elias, the bill sponsor, said the bill focuses on deployers — the entities that put systems into use — and aims to concentrate limited regulatory resources on consequential decisions such as criminal justice, education access, employment and health care. "If a tool is used that could be discriminatory ... the person who deploys the tool is going to be responsible," Elias said, framing a risk‑based approach.

Witnesses presented a mix of support and technical concern. The National Association of Mutual Insurance Companies said the bill, as written, could create tension with insurance law and Office of the Insurance Commissioner practices and suggested removing property and casualty insurance from the bill to avoid conflicts. The Washington State Hospital Association urged targeted amendments to ensure HIPAA‑covered entities are not treated as high risk when using AI for administrative or clinical decision support that does not determine care. Financial institutions (banks and credit unions) asked for recognition that they already operate under comprehensive regulatory frameworks and proposed compliance pathways aligned with existing oversight.

Technology industry groups (WTIA, TechNet, Microsoft) said they supported the bill's goals but warned of technical and definitional challenges that have complicated implementation elsewhere, notably Colorado, and urged more time for drafting and coordination. Individual witnesses, including a high school student who described harm from AI‑generated images, urged strong protections and disclosure.

Committee staff clarified enforcement: while earlier testimony referenced the Consumer Protection Act, staff later confirmed that enforcement authority would be routed through the Attorney General's office. The committee closed the public hearing on SB 5284 and adjourned; no committee vote was recorded on the bill during the session.