Citizen Portal

Experts and industry urge Congress to set federal AI rules to avoid patchwork of state laws

September 18, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Witnesses at a House Oversight subcommittee hearing urged Congress to create a federal, risk‑based framework for artificial intelligence, citing a confusing state-by-state regulatory patchwork and urging preemption or a temporary pause on state enforcement to protect startups and consumers.

Chairwoman Mace convened a House Oversight and Reform subcommittee hearing where industry and policy experts urged Congress to adopt a federal, risk‑based framework for artificial intelligence to avoid a “patchwork” of state laws that, they said, hamper startups and create inconsistent consumer protections.

Kinsey Fabrizio, president of the Consumer Technology Association, told the subcommittee that the current state landscape is a barrier to innovation for small companies. “For a startup or a small business, navigating this patchwork is crippling,” Fabrizio said, arguing that Congress should consider a temporary preemption of state and local AI rules to give a single federal framework time to take shape. Fabrizio said CTA represents more than 1,200 companies — “more than 80% are startups, small and mid-sized businesses” — and that the association supports a federal privacy law alongside any AI legislation.

The testimony reflected a split over the proper pace and locus of regulation. Dr. Nicol Turner Lee, senior fellow for Governance Studies and director of the Center for Technology Innovation at the Brookings Institution, warned that a long moratorium on state action could “threaten states’ rights and the public interest.” Turner Lee said states have already moved in the absence of federal rules — “since January, over 100 measures across 38 states have been enacted into law” — and that state attorneys general are issuing guidance to protect consumers. She urged Congress to preserve the ability of federal agencies to enforce consumer protections and to maintain strong oversight over deceptive and unfair AI uses.

Ranking Member Brown framed the policy debate around workers and fairness, saying AI presents both opportunity and risk for communities already vulnerable to automation. Brown pressed witnesses on steps to ensure retraining and equitable workforce development if AI adoption accelerates.

Both industry and policy witnesses urged a risk-based, technology-neutral federal approach that would set clear obligations for high-risk uses while avoiding conflicting requirements for lower-risk innovation. Fabrizio explicitly asked Congress to “adopt a 10-year pause on enforcement of state and local AI laws,” arguing that the pause would give Congress time to craft a preemptive federal framework. Turner Lee countered that consumer-protection and human-oversight requirements — including independent audits and disclosure rules — should not be sacrificed in a rush to innovate.

The committee did not take legislative action at the hearing. Members asked witnesses to submit written materials and follow‑up responses, and the subcommittee chair closed by giving members five legislative days to submit additional materials for the record.

Why this matters: Without a unified federal standard, companies face a complex compliance burden and consumers may receive uneven protections across states. The witnesses’ testimony frames a choice for Congress between rapid federal action to harmonize rules and continued state experimentation aimed at immediate consumer protection.