
Legislators and experts debate AI risks, deepfakes and possible state regulatory approaches

July 29, 2025 | Science, Technology & Telecommunications, Interim, Committees, Legislative, New Mexico


This article was created by AI to summarize key points discussed. AI makes mistakes, so for full details and context, please refer to the video of the full meeting.

A legislative briefing on artificial intelligence before the Science, Technology & Telecommunications Committee examined generative AI's technical limits and public-policy options, with experts urging transparency about training data and clearer responsibilities for developers whose models are used in consequential decisions.

Representative Christine Chandler introduced the panel and said legislators must balance support for beneficial AI uses with protection of citizens from its hazards. "What's our role?" Chandler asked the committee, framing the briefing around consumer protection and public safety.

Dr. Melissa Warr, an assistant professor of educational design and learning technology at New Mexico State University, illustrated how generative models "replicate past patterns" and can produce confidently stated but incorrect outputs. Using classroom and image-generator examples, Warr showed that small contextual changes (for example, noting a student's stated hobby) can change automated feedback and perpetuate socioeconomic or cultural bias in model responses.

Dr. Chris Moore, a computer scientist who has worked in New Mexico's research community, said AI offers important societal benefits, from captioning to predictive tools, but that some deployments require guardrails. "If an AI is marketed as being accurate for particular purposes, how well does it perform?" Moore asked, arguing for independent testing and disclosure of the data used to train models, especially where decisions affect housing, employment, health care or education.

Panelists and legislators discussed recent and proposed laws. Representative Chandler described House Bill 60, a measure modeled on Colorado's law, which would require developers to perform bias and risk assessments and provide notice when a deployed tool is used in a "consequential decision." Chandler said HB 60 had advanced through House committees but stalled on the Senate floor; she also referenced a late-filed House Bill 30 addressing "sensitive deepfakes." Committee discussion noted that New Mexico previously enacted House Bill 182, which addresses certain deceptive or nonconsensual materials.

Moore summarized state responses elsewhere: at least 40 states have considered or adopted legislation on deepfakes and AI; Texas requires disclosure when personal data such as location is collected; Utah has an AI policy office and guidance that, for example, recommends against advertising within mental-health chatbots; Illinois restricts the use of AI in hiring unless a bias audit is completed; and Colorado requires impact assessments for AI used in consequential contexts.

Committee members raised implementation questions: where enforcement and oversight should sit, whether the attorney general's office is the right venue for enforcement, and how to design rules so they remain useful as technology evolves. Senator Michael Padilla and others suggested a cross-agency advisory or working group and warned against political capture or an overly narrow single-administration approach.

Lawmakers also discussed narrower, near-term steps: requiring clear, prominent consumer disclosure when an interaction involves an AI system (for example, labeling political ads or chatbot companions), data-minimization and access/deletion rights for consumers, and targeted protections for minors and mental-health use cases. Representative Lujan cited a prior legislative proposal to treat voice and visual likeness as personal property and noted that tech-sector uses may implicate intellectual property and privacy law.

No bills were passed during the hearing; legislators and experts urged further technical work, suggested targeted regulatory guardrails and recommended convening cross-disciplinary advisory groups to refine legislative language for subsequent sessions.

Legislators said they will continue work during the interim to narrow priorities (consumer notice, deepfake protections, privacy/data access, and independent audits for consequential uses) and to consider whether statutory standards or agency rulemaking best balance stability and nimbleness in a fast-changing field.

