State lawmakers hear experts on AI in health care: promise for efficiency, concerns over costs, evaluation and workforce involvement
Summary
A joint Pennsylvania House hearing heard three experts who said AI could improve diagnosis and reduce clinician burnout, but who warned it can also raise costs and widen a rural digital divide, and said it requires stronger evaluation, vendor transparency and frontline involvement.
Harrisburg — Lawmakers from Pennsylvania’s Communications & Technology and Health committees on Thursday heard experts outline both the promise and pitfalls of artificial intelligence in health care, including potential efficiency gains and new costs, limited real‑world evidence and workforce concerns.
Dr. Hannah Nepras, a health‑care economist, told legislators that AI can reduce administrative burdens — for example, automating call centers and prior‑authorization responses — and that ambient scribe tools may reduce clinician burnout. "AI has the potential to simplify administrative tasks in health care," she said, while cautioning that new tools often increase spending by expanding the treated population or by prompting follow‑up for incidental findings.
Nepras highlighted four ways AI could raise costs: expanding the population receiving treatment (for instance, broader diabetic retinopathy screening); surfacing clinically insignificant findings that trigger follow‑up care; new billable codes and reimbursement for AI services; and AI‑enabled increases in billing intensity. She noted that about 1,500 FDA‑approved AI medical devices exist but Medicare reimburses for fewer than 5 percent of them, and that high‑quality evidence on spending effects remains scarce.
Paige Nong, an assistant professor of health policy and management at the University of Minnesota, said adoption is already widespread. "In 2024, about 71 percent of hospitals were using predictive AI," she said, and "almost a third" had deployed some generative AI. Nong warned of a digital divide: larger, higher‑margin systems are likelier to implement and evaluate tools than rural and critical‑access hospitals, which often lack data‑science capacity.
Nong urged improved transparency from vendors and called for model cards and investment in local evaluation capacity so hospitals can judge whether tools perform well for their patient populations. "With generative AI, it's important to make sure the tool is not hallucinating," she said, citing the risk that false information could be embedded in clinical notes and propagate errors in care.
Dr. Peter Lazis, a clinical and industrial psychologist and visiting scholar at Penn State, focused on workforce impacts. He said many AI tools are developed without sufficient frontline input, which can produce tools that increase documentation burdens or are resisted by clinicians. "The real question is who decides where and how it is used and who benefits?" Lazis asked, urging "cogenerated" or human‑centered development that involves nurses, doctors and other frontline staff.
Committee members pressed the witnesses on evaluation, liability and infrastructure. Representative Joanne Stair asked who would assume liability when AI produces inaccurate information; Nepras said the legal framework is still nascent and that insurers will likely pass higher costs on to providers while courts and statutes sort out responsibility. Other representatives raised concerns about insurers using AI to deny coverage and about hospitals using AI to alter billing practices.
Witnesses pointed to state approaches to limit insurer reliance on automated AI decisions. Nepras said several states now require a human reviewer when AI is used to deny coverage or make prior‑authorization determinations, ensuring a human remains "in the loop."
The witnesses and legislators converged on policy priorities that would make AI safer and more useful: stronger vendor transparency (model cards or standardized disclosures), public and private investment in local evaluation capacity, clearer guidance on acceptable administrative uses, and policies that require frontline participation in tool design and deployment.
The committees took no formal action. The chairs said a second panel with additional stakeholders would be scheduled, and recommended continued state‑level work while urging federal guidance. The hearing ended with agreement to keep exploring legislative options that balance innovation with patient safety, workforce protections and equitable access to AI's benefits.
