House Energy and Commerce hearing showcases benefits and risks of AI in health care
Summary
A House Energy and Commerce subcommittee hearing brought industry leaders, clinicians and scholars to Washington to discuss real-world uses of artificial intelligence in health care, including clinical tools, price‑transparency products and risks from unregulated mental‑health chatbots and automated prior‑authorization systems.
At a House Energy and Commerce subcommittee hearing, industry founders, practicing clinicians and academic experts outlined how artificial intelligence is already being used across the U.S. health system — and where lawmakers say federal guardrails are needed.
The witnesses described concrete clinical gains from platforms that triage strokes and other time‑sensitive conditions, insurer and plan tools that surface missed preventive care, and consumer products that attempt to show upfront prices. At the same time, experts and members warned about a “foundational trust deficit” in health care AI, unregulated direct‑to‑consumer chatbots that can produce harmful advice, and pilot programs that would let AI play a central role in coverage decisions.
“By combining this infrastructure with AI, we can offer a vastly better health care experience for every American regardless of location, insurance, or background,” said TJ Parker, founder of General Medicine, describing the company’s use of large language models to parse insurer benefit documents and produce upfront prices. Andrew Toy, chief executive of Clover Health, and Dr. Andrew Ibrahim of Viz.ai described clinical results their companies attribute to AI tools: earlier detection of disease, higher screening rates and shorter time to treatment for strokes.
Stanford law and health‑policy professor Michelle Mello told the panel that adoption of clinical AI has been limited by a lack of trust and uneven institutional governance. “There is a foundational trust deficit,” she said, and recommended institutional vetting, developer disclosures (model cards) and more independent performance research. The American Psychological Association’s Dr. C. Vaile Wright urged particular caution for mental‑health applications and for children, saying AI should “augment, not replace the clinical judgment and therapeutic relationship.”
Members of both parties used the hearing to press for policy steps: some urged modernization of FDA review and clearer reimbursement pathways to encourage adoption and monitoring; others pressed CMS and Congress to keep AI from being used to issue inappropriate denials of medically necessary care.
Committee members also placed the hearing in a broader political context: multiple lawmakers criticized recent actions at HHS and the CDC, arguing that agency instability increases the need for congressional oversight of both AI and public‑health institutions.
Looking forward, witnesses recommended a mix of actions: (1) clearer institutional governance and disclosure by developers, (2) independent post‑market performance studies, (3) reimbursement changes to help smaller hospitals adopt and monitor tools, and (4) targeted safeguards for high‑risk use cases such as mental‑health chatbots and automated prior authorization.