Cato scholar warns Arizona against broad AI bans, urges narrow remedies and education

Arizona Legislature Select Committee on Election Integrity and Florida-style Voting Systems · November 14, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

David Inserra of the Cato Institute told an Arizona select committee that broad prohibitions or sweeping disclosure mandates could chill protected speech and recommended relying on existing fraud and harassment laws, targeted remedies, and education.

PHOENIX — At a legislative hearing on artificial intelligence and elections, Cato Institute fellow David Inserra urged caution about broad AI regulation, stressing the First Amendment and the risk of unintended speech‑chilling effects.

Inserra told the select committee that defining AI for regulatory purposes is difficult and that laws requiring blanket disclosure of AI use could sweep in benign or constitutionally protected uses, including satire and political parody. "Rather than restricting political speech in the name of stopping deepfakes, we must continue to protect free expression with our laws while encouraging a culture of free expression that adapts to new technologies," he said.

Inserra recommended that policymakers focus on narrow, evidence‑based remedies — enforcing existing fraud, harassment, and consumer‑protection laws where appropriate, improving digital literacy, and supporting voluntary industry standards — rather than adopting broad prohibitions that may be vulnerable to legal challenge.

His testimony drew no immediate votes but framed the central tension the committee must balance: protecting elections and voters from demonstrable harm while avoiding laws that could curb protected political speech.