Expert witness warns unregulated AI could pose catastrophic risk
Summary
A witness and two doctors testified that rapidly developing, unregulated artificial intelligence could produce catastrophic outcomes, saying that safety methods for hypothetical "superintelligence" do not yet exist and that extinction-risk estimates may exceed 20% if development continues unchecked.
A witness testifying before the hearing warned that rapidly developing, unregulated artificial intelligence could lead to catastrophic outcomes for humanity.
At the hearing, the witness said, “No. They’re not exaggerating it,” in response to whether experts such as Nobel laureate Geoffrey Hinton were overstating extinction risks. The witness added, “I think it’s likely to be a lot higher than 20% risk that we basically end civilization as we know it,” citing a recent paper his team published at the NeurIPS conference as the basis for that judgment.
The Chair framed the exchange by asking whether the rapid and uncontrolled development of AI poses a substantial threat to the human race, and whether the 10–20% extinction-risk figure cited by some researchers might be an exaggeration. The witness responded that, based on recent research, those estimates were not overstatements, and that companies openly aiming for "superintelligence" (systems that could outperform humans at many tasks and control robots) increase the potential for systemic harm if left unconstrained.
Doctor Zhang, introduced by the Chair as participating from Beijing, said the world currently lacks “scientific evidence and [a] practical way to keep superintelligence safe enough” and warned that the global environment is not prepared to treat superintelligence as a controllable tool. Doctor Zhang added later in the exchange, “Without scientific evidence of how to secure ourselves, it’s really dangerous to do this, for the way that we are doing for the current AI.”
Doctor Tsai twice affirmed the core concern when asked directly by the Chair, indicating agreement that the risk of losing control over such systems is real. The transcript records Doctor Tsai's brief confirmations but includes no further elaboration in the provided excerpt.
The witnesses’ testimony emphasized two linked points: that some researchers’ public estimates of existential risk from AI may understate the danger, and that the scientific and practical methods needed to guarantee safety against hypothetical superintelligent systems are not yet established. The hearing participants did not record a formal motion or vote on policy in the provided excerpt.
The discussion closed with the witnesses reiterating caution about continuing current development approaches without validated safety mechanisms; no legislative action or regulatory decision was documented in this transcript excerpt.