Arizona committee hears experts on AI risks to election integrity and recommends transparency, procurement safeguards
Summary
Witnesses told an Arizona select committee that AI could threaten election integrity through vendor concentration, opaque automation, and synthetic media; they urged transparent vendor documentation, procurement controls ensuring human review, digital‑literacy education, and investment in authentication technology.
PHOENIX — A state legislative panel convened to examine how artificial intelligence could affect Arizona elections heard experts urge a mix of near‑term administrative steps and longer‑term investments to protect election integrity.
Representative Culligan, chair of the select committee on election integrity and Florida‑style voting systems, opened the hearing by asking three questions: should the state regulate AI, is now the time to do so, and what regulations would help Arizonans? He warned that AI could concentrate power in unelected hands, create cyber risks for election systems, and reduce civic participation by displacing jobs.
A representative from OpenAI identified in the transcript as "Nova" told the committee that three core threats merit attention: vendor concentration and opaque AI in tabulation and adjudication; reduced civic participation if automation eliminates local election jobs; and delegation of substantive decisions to AI without transparent, reviewable human oversight. "If AI systems become so intricate that only the original developers can truly understand their operations, we lose a layer of transparency and resilience," Nova said, recommending statutory requirements for vendor documentation and human final review of AI decisions.
Diane Cook, a nonresident AI fellow at the Center for Strategic and International Studies, told members that generative AI and synthetic media (commonly called deepfakes) pose immediate, practical risks to elections and cited research showing human detection of synthetic media is low. She recommended requiring disclosure or technical watermarking of AI‑generated content, expanding digital literacy in K–12 (and targeted programs for older adults), and funding authentication and provenance technologies through NSF partnerships to make labeling enforceable and scalable.
Committee members asked whether Arizona election administrators are already using AI for core tabulation work. Nova said that as of 2025, jurisdictions were exploring supportive tools such as voter outreach and administrative streamlining but were not deploying AI in core tabulation functions, and noted that Arizona's statewide information security policy (policy 81‑20) would require agencies to incorporate AI into existing security and risk assessments.
The witnesses emphasized layered responses rather than single fixes. Cook said transparency requirements alone would not stop determined bad actors but would help investigators and platforms identify and remove malicious content; Nova similarly urged procurement controls, vendor diversification, and human‑review safeguards to prevent vendor lock‑in and ensure reviewability of critical decisions.
The committee recessed for a short break before hearing additional testimony on related topics, including free‑speech implications and longer‑term existential risks. In closing, members signaled interest in both near‑term steps (procurement and security standards, education, and county support) and broader study of economic and regulatory questions affecting Arizona's elections.
The committee did not vote on legislation during the hearing; members agreed to continue the study and invite additional witnesses, including state election administrators, for follow‑up sessions.
