Joint Economic Committee hears bipartisan push to use AI to reduce federal waste, fraud and abuse
Summary
Witnesses and members at a Joint Economic Committee hearing urged wider use of data analytics and AI to detect improper payments and fraud across federal programs, while warning that data quality, human oversight and legal/privacy limits must be addressed before broad deployment.
Rep. David Schweikert, chairman of the Joint Economic Committee, opened a hearing where witnesses from the Government Accountability Office, academia and the inspectors general community urged Congress and federal agencies to use data analytics and artificial intelligence to reduce improper payments and fraud across federal programs.
The issue is large in scale: “The federal government reported an estimated $162 billion in payment errors or improper payments during fiscal year 2024, and that is almost certainly an underestimate,” said Dr. Sterling Thomas, chief scientist at the Government Accountability Office (GAO), during his opening testimony.
Why it matters: Witnesses told members that improper payments and fraud are most concentrated in large benefit and health programs, and that analytics techniques already established in the private sector — anomaly detection, natural language processing, graph analytics and robotic process automation — could speed service to legitimate beneficiaries and flag suspicious activity earlier.
Testimony and proposals
Dr. Thomas said the GAO supports a cautious, staged approach: improve traditional anti‑fraud methods now, apply AI where data are reliable, and build workforce capacity. “Before pouring data science on a problem, we need solid, reliable, ground truth data and a human in the loop to ensure data reliability and application of this technology,” he testified.
Dr. Brian Miller, a hospitalist and professor at Johns Hopkins University, focused on health programs that account for large flows of federal money. He urged automation for eligibility and coding to reduce improper payments and administrative burden. “We should actually look to automate benefits eligibility. We might have to have human review first, followed by an auditing period, and then a move towards fully autonomous review,” Miller said, describing redetermination work underway after COVID‑era continuous coverage policies.
Andrew Canarsa, executive director of the Council of the Inspectors General on Integrity and Efficiency (CIGIE), described the inspectors general community’s push for a permanent, centralized analytics capability. He and other witnesses pointed to the Pandemic Analytics Center of Excellence (PACE) — created under the Pandemic Response Accountability Committee (PRAC) — as a working model and urged Congress to sustain and expand it beyond the pandemic work that produced recoveries and risk models.
Numbers and examples cited
- GAO: $162 billion estimated improper payments in fiscal 2024 (GAO testimony).
- Dr. Miller: Medicare and Medicaid comprise “over $1.5 trillion in annual spending.”
- GAO example: a rules‑based pre‑screening of PPP loans identified about $4.7 billion in loans to ineligible recipients (cited by GAO as an instance of effective, lower‑complexity tools).
- CIGIE: the inspector general community collectively reported roughly $71 billion in audit and investigative accomplishments in fiscal 2024 and said PACE helped recover more than $165 million in a pension‑plan review (CIGIE testimony).
Committee discussion and cautions
Members from both parties pressed witnesses on implementation details, data sharing, privacy protections and the risk of false positives that could delay payments to legitimate beneficiaries. Several members noted recent, high‑visibility efforts by private entities to search for improper payments (referred to repeatedly in the hearing as “Doge”), and asked whether those outside efforts had gone through the same testing and governance standards the witnesses recommended for federal deployments. Witnesses said they had not been asked to complete an assessment of those private efforts and that GAO and the IG community are still reviewing them.
Multiple witnesses and members emphasized practical limits and prerequisites for reliable AI use: validated, “gold standard” datasets for training; role‑based access and privacy controls; human analysts to investigate algorithmic flags; and agency ownership of risks and decisions. Dr. Thomas said agencies must pair analytics with trained staff and follow GAO’s AI accountability framework.
No formal committee action or votes occurred at the hearing. Members and witnesses identified follow‑on steps that would require legislative or administrative action, including extending or replacing PRAC/PACE authorities, clarifying data‑sharing authorities (for example, renewal of Social Security Administration death‑data sharing with Treasury’s Do Not Pay), and implementing OMB AI guidance.
What’s next
Committee members repeatedly returned to a central tension: many proven private‑sector analytic tools exist, but federal adoption depends on clarified legal authorities, sustained funding for centralized analytics and trained personnel inside agencies to operate and oversee new systems. Witnesses urged gradual, monitored deployment that prioritizes high‑quality data and human oversight rather than rapid, unvetted rollouts.
Members said they would consider legislative options, further hearings and continued oversight to track agency implementation.