NCSL policy associate briefs Arizona committee on surge of state AI laws, deepfake limits and government uses

Arizona House Advanced Artificial Intelligence and Innovation Committee · March 5, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Adam Cook Hook of the National Conference of State Legislatures told the Arizona House Advanced AI & Innovation Committee that states introduced more than 1,200 AI bills in 2025, highlighted government uses such as chatbots and wildfire forecasting, and warned that courts have narrowed some deepfake laws over First and Fourteenth Amendment concerns.

Adam Cook Hook, a policy associate with the National Conference of State Legislatures’ Elections and Redistricting Program, briefed the Arizona House Advanced Artificial Intelligence and Innovation Committee on how state legislatures are approaching artificial intelligence policy and government use.

Hook told committee members that in the 2025 legislative session all 50 states plus Puerto Rico, the U.S. Virgin Islands and the District of Columbia introduced AI legislation and that NCSL tracked “over 1,200 bills” across topics including elections, deepfakes, health care and workforce training. He said roughly 45 states adopted or enacted AI measures in that period and that private‑sector–focused bills were the single largest category.

The presenter outlined three strands of state activity: formal study bodies and task forces (he said at least six legislatures have committees or subcommittees on AI and at least 10 have task forces or similar bodies); state efforts to inventory and assess AI systems in government (including impact assessments in states such as Connecticut, Maryland, Virginia and Idaho); and procurement and pilot projects that incorporate AI into state operations. Hook cited examples including Hawaii funding a wildfire‑forecasting program, an Arkansas working group reviewing unemployment‑insurance fraud pilots, and Ohio’s Reg Explorer tool for streamlining administrative code review.

Hook described recent enactments that take different approaches. He summarized Colorado’s comprehensive law (requiring risk management and impact assessments for high‑risk systems), Utah’s Artificial Intelligence Policy Act (which imposes disclosure obligations for covered entities), California’s 2025 SB 53 (targeting transparency and safety for large AI models) and Texas’s HB 149 (which includes disclosure requirements and bans certain discriminatory government uses). He said these laws also often include provisions to foster innovation, such as regulatory sandboxes and public‑private coordination.

On deepfakes and political messaging, Hook said 26 states have enacted laws regulating AI in political messaging, most commonly by requiring disclosure labels or spoken notices. He noted that Texas and Minnesota impose timing prohibitions before elections — 30 days in Texas and 90 days in Minnesota, as cited in his presentation — while many other states rely on disclosure requirements.

Hook warned about litigation risks: federal courts recently enjoined some deepfake disclosure statutes after finding they swept too broadly, reaching parody and satire, a point he attributed to cases he referenced by name during the briefing. He also cited a copyright decision involving Anthropic and noted that questions of copyright and provenance, privacy and data protection, transparency, liability, and bias are central considerations for legislators.

Committee members asked follow‑up questions about commercialized likeness protections and how to treat nonconsensual synthetic intimate imagery. One member said California has a law extending post‑mortem rights for certain digital replicas and mentioned Tennessee’s so‑called Elvis Act; Hook said California has introduced much of the relevant legislation but offered to follow up with specific citations. On enforcement for nonconsensual intimate images, Hook said many states are creating private causes of action rather than new criminal penalties and offered to provide more detailed tracking from NCSL’s specialists.

The presenter also described practical government deployments of AI. He explained retrieval‑augmented generation (RAG) chatbots — systems that answer questions by retrieving passages from a defined document corpus and grounding the model's response in them, rather than relying solely on the model's training data — and gave the Montana Ethics Commission’s chatbot (drawing on roughly 70,000 documents) and Ohio’s EVA, a tool to help election administrators interpret procedures, as examples. Hook emphasized that these systems can be quick to deploy but require lengthy quality checks to ensure accuracy.
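The retrieval step Hook described can be sketched in a few lines. The example below is purely illustrative — the documents and the simple term-overlap scoring are hypothetical stand-ins (production systems like the ones he cited typically use vector embeddings over tens of thousands of real documents); it shows only the core RAG idea of selecting the most relevant passage from a fixed corpus to ground a model's answer.

```python
# Minimal sketch of the retrieval step in a RAG chatbot: the answer is
# grounded in a fixed corpus searched at question time. The corpus text
# here is hypothetical.
from collections import Counter

corpus = {
    "doc1": "Campaign finance reports must be filed quarterly with the commission.",
    "doc2": "Lobbyists must register within five days of beginning lobbying activity.",
    "doc3": "Gifts to public officials above a set value must be disclosed.",
}

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def score(query, doc):
    # Simple term-overlap score; real systems use embedding similarity.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query, k=1):
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# The top-ranked passage would be passed to a language model as context.
print(retrieve("When do lobbyists have to register?"))  # ['doc2']
```

The quality checks Hook mentioned correspond to verifying that retrieval consistently surfaces the right passages, since the model's answer is only as accurate as the context it receives.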

Hook closed by pointing committee members to NCSL’s artificial intelligence policy toolkit and the organization’s legislation database and said he would follow up on the committee’s more detailed requests.

The committee took no formal action; the chair thanked Hook, encouraged members to use NCSL resources, and adjourned.