House Energy and Commerce hearing splits over proposed 10‑year moratorium on state AI rules
Summary
At a House Energy and Commerce subcommittee hearing, lawmakers and witnesses debated a proposed 10‑year ban on state enforcement of AI laws, weighing arguments that the moratorium would protect startups and interstate commerce against concerns that it would strip existing consumer protections and leave children and other vulnerable people exposed.
At a House Energy and Commerce subcommittee hearing on AI regulation and U.S. leadership, lawmakers and outside experts sparred over a provision in recent congressional text that would bar state enforcement of AI-related laws for up to 10 years.
Chairman Gus Bilirakis opened the hearing by framing the technology’s promise and risks, saying, “Since the public release of ChatGPT, AI has become a household name.” He and several witnesses urged a federal framework to provide certainty for innovators while protecting consumers and national competitiveness.
The dispute centered on a moratorium many Republicans have advanced in reconciliation text that would prevent states from enforcing AI‑specific rules. Supporters, including Sean Heather, senior vice president for international regulatory affairs at the U.S. Chamber of Commerce, and Adam Thierer, a senior fellow at the R Street Institute, argued the moratorium would prevent a costly patchwork of state laws that could lock out small and mid‑sized innovators. Heather warned that heavy state regulation and European‑style rules risked creating “trade frictions” and compliance costs that favor large, established firms.
“Small businesses and startups navigating 50 different sets of rules will have a harder time competing with larger, well‑established companies that can afford to navigate this regulatory maze,” Adam Thierer testified.
Opponents, led by Democratic lawmakers and civil‑society witnesses, said a long moratorium would leave consumers, children and other vulnerable groups unprotected while no comprehensive federal law is ready to replace state action. Ranking Member Jan Schakowsky called a 10‑year pause “reckless,” saying Congress’s job is to protect consumers now. Amba Kak, co‑executive director of the AI Now Institute, warned of concrete harms, citing testimony about a teenager who died after interacting with a chatbot. She argued that states have already enacted targeted measures covering deepfakes, AI‑driven scams and transparency, and that an overbroad moratorium would erase them.
Witnesses also debated the global context. Several panelists criticized the European Union’s AI Act as overly prescriptive and costly, with Heather noting that the EU’s enforcement tactics and fines (which can reach as high as 7% of global annual sales) could be replicated worldwide and disadvantage U.S. firms. Others, including representatives of venture firms, urged a federal approach that sets consistent disclosure, testing and transparency standards but preserves the ability of startups to compete.
Health care and public‑safety uses of AI recurred in questioning. Multiple members and witnesses urged safeguards for medical applications, disclosures for high‑impact deployments, and pilot programs or sandboxes to test technologies before broad government adoption. General Catalyst’s Marc Bhargava described private‑sector practices such as model cards, stress testing and “red teaming” that investors and founders already use, and recommended similar baseline requirements at the federal level.
No formal actions or votes were taken during the hearing. Members of the subcommittee asked witnesses to submit additional information for the record; Chairman Bilirakis closed the hearing by saying members have 10 business days to submit questions for the record and then adjourned the panel.
The hearing underscored two central tensions for Congress: how to write a national AI policy that preserves innovation and U.S. competitiveness while ensuring consumer protections now enforced by states, and how quickly federal lawmakers can produce durable, bipartisan rules that address harms—especially for children and other vulnerable populations—without unintentionally entrenching compliance burdens that favor large incumbents.
Looking ahead, several members urged a bipartisan effort to craft a national framework that incorporates transparency and testing requirements, targeted bans on specific harmful practices (for example, certain deepfake uses and exploitative AI companions), and sector‑specific oversight where existing statutes (for example, in health care or consumer protection) are inadequate. Witnesses recommended NIST guidance, federal sandboxes, and model‑card‑style disclosures as practical near‑term tools while the committee works toward legislation.

