House Judiciary subcommittee debates federal preemption of state AI laws
Summary
Witnesses and members clashed over whether Congress should preempt state AI rules or preserve states' ability to regulate; witnesses recommended a mix of federal standards for development and state authority over deployment and consumer protection.
At a House Judiciary subcommittee hearing on Sept. 18, 2025, members and witnesses debated whether Congress should preempt state laws regulating artificial intelligence, with arguments centering on national security and a single national market versus states' role as laboratories of consumer protection.
The hearing featured testimony from Dr. David Bray, Kevin Frazier, Adam Thayer, and Professor Neil Richards and included repeated references from members to recent state bills such as California's SB 53 and AB 1046 and to families who have brought lawsuits alleging harms from AI chatbots.
The question at the center of the hearing was whether divergent state rules would create a "patchwork" that unduly burdens AI development and market deployment, or whether preemption would sweep away important state protections and common-law remedies. "Training frontier AI models... places regulation of training frontier AI models squarely in the authority of the national government," Representative Johnson quoted from witness testimony about the federal role in development. Dr. David Bray argued for a "light-touch policy framework" that distinguishes among AI methods and recommends updating domain-specific laws rather than imposing sweeping new rules.
Witnesses urging federal coordination said inconsistent state requirements could impose high compliance costs on startups and could force labs to change nationwide practices to comply with a single state. "If states begin to enact proposals that are going to impact how AI models are trained and developed, that's necessarily going to bleed into other states," Kevin Frazier said, arguing the founders' constitutional principles support national authority over development when national security and interstate commerce are implicated.
Others warned against broad federal preemption. "Denying states the ability to regulate novel technology issues going forward would be a grievous and avoidable error," Professor Neil Richards testified, saying state laws and common-law remedies have historically protected consumers and helped build digital trust. Richards cited state privacy, breach-notification and consumer-protection laws as examples of state-level action that fostered trust in digital markets.
Members on both sides described real-world harms and ongoing litigation. Representative Johnson recounted recent Senate testimony he said he had seen from parents affected by AI, naming families including Kristen Bridal, Juliana Arnold and Megan Garcia who have pursued common-law claims or other suits against AI companies. Representative Johnson said those cases illustrate why common-law remedies matter. "Common law is the foundation of American law...that allows the law to adjust to changed circumstances like the advent of technological revolutions such as artificial intelligence," Professor Richards added during questioning.
Several members and witnesses recommended that Congress act to provide a national framework while preserving state authority in deployment and consumer-protection domains. Proposals discussed included: (1) a focused federal role preempting state regulation of AI model training and frontier development where national security or interstate commerce is implicated; (2) leaving states free to regulate AI deployment in sectors such as education, health care and employment; and (3) giving standards bodies such as the National Institute of Standards and Technology (NIST) and the Center for AI Standards and Innovation (CASI) a larger coordinating role to reduce definitional inconsistencies.
Witnesses and members raised specific policy tools: whistleblower protections, incident reporting and information-sharing requirements, and strengthening domain-specific statutes such as HIPAA for health data and existing consumer-protection laws rather than drafting a single comprehensive technocratic regime mirroring the EU. Adam Thayer urged Congress to act promptly to formulate a clear national policy framework and said NIST and CASI could shepherd a federal-standards approach.
Chairman Issa and several members also urged reconstituting a bipartisan House task force and promised continued oversight and legislation. Multiple members said they oppose a moratorium that would nullify current state protections without replacing them. Representative Lofgren and other members emphasized California's role as a leading innovation hub but said that states and the federal government each have distinct roles.
The hearing closed without formal votes; members asked the witnesses questions and entered statements and letters into the record, including letters from governors and civil-rights groups opposing a blanket moratorium on state AI laws.
The subcommittee left open next steps: lawmakers signaled plans for additional hearings, discussion of federal legislation drafted to preempt narrowly where necessary, and use of NIST/CASI to harmonize technical definitions and reporting standards. For now, the dispute remains over the proper balance between a national approach to AI development and state-level authority to protect consumers and preserve common-law remedies.