Experts tell Senate HELP committee unchecked AI development could threaten humanity
Summary
At a Senate Committee on Health, Education, Labor, and Pensions hearing, a researcher and a Beijing-based expert warned that unregulated development of advanced AI could pose existential risks; one researcher said published work suggests the probability of catastrophic outcomes may exceed the often-cited 10–20% range.
Members of the Senate Committee on Health, Education, Labor, and Pensions pressed experts on whether rapidly advancing artificial intelligence could pose an existential threat to humanity, with witnesses warning the risks are real and not fully understood.
A researcher who testified during the panel said external estimates of a 10–20% chance of catastrophic AI outcomes, often cited publicly by figures such as Geoffrey Hinton, are not an exaggeration. "They're not exaggerating it," the researcher said, adding that a paper his team recently published at the NeurIPS conference suggests the risk could be "a lot higher than 20%" if development proceeds without regulation. He described company efforts to build what he called "superintelligence," systems capable of outperforming humans across many tasks and of controlling robots and other machines, and warned that unleashing vastly smarter-than-human agents at scale could have catastrophic consequences.
A second expert, identified in the hearing transcript as "Doctor Zhang" and participating remotely from Beijing, told the committee there is currently no scientific evidence or practical method to ensure superintelligence can be made safe. "Without scientific evidence of how to secure ourselves, it's really dangerous to do this, for the way that we are doing for the current AI," the expert said. The chair summarized that line of testimony; later in the transcript the same remote witness is referred to as "Doctor Tsai," an apparent naming inconsistency in the record.
Committee members cited public estimates by researchers including Geoffrey Hinton and asked witnesses to assess whether those probabilities are credible and whether policymakers and industry have tools to reduce existential risk. The witnesses emphasized uncertainty about control methods and the scale of effort by some companies to push toward general or superintelligent systems.
The exchange in the excerpted portion of the hearing focused on the scope of the risk and the absence of proven containment or safety techniques, rather than on specific legislative proposals. No motions or votes are recorded in this excerpt.
The committee's questioning underscored a central tension for policymakers: publicized risk estimates have prompted urgent calls for regulatory guardrails, while experts say proven technical solutions to make advanced AI safe do not yet exist. The hearing continued beyond the excerpt provided.

