Argonne researchers: digital twins, AI and the Aurora supercomputer are reshaping scientific experiments

Argonne National Laboratory OutLoud Lecture Series · March 11, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Panelists described how Argonne integrates experimental imaging and high-performance computing to build digital twins, how AI accelerates reconstruction and discovery, and what validation and guardrails researchers are developing to ensure trustworthy results.

Researchers at Argonne said the laboratory is rapidly exploring AI and digital-twin methods to accelerate scientific discovery while actively researching validation approaches and guardrails.

"We're at, like, warp speed on trying to tackle that," Catherine Riley, director of science at the Argonne Leadership Computing Facility, said when asked how quickly a government lab can adopt AI to advance research. Panelists stressed that using AI appropriately requires combining methods and teamwork.

Stefan described concrete validation practices used at the Advanced Photon Source (APS): comparing AI reconstructions against conventional reconstructions and splitting datasets to cross-check results. "You simply compare them... use only half the data for one method and the other half for the other," he said, describing a practical way to build confidence in AI-generated results.
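The split-data cross-check can be sketched in a few lines. This is a hypothetical illustration, not Argonne code: the two reconstruction functions are stand-ins (a plain mean for the conventional method, a trimmed mean for the AI surrogate), and small disagreement between the two halves is taken as a confidence signal.

```python
import random

def conventional_reconstruction(samples):
    # Stand-in for a conventional reconstruction: the plain mean.
    return sum(samples) / len(samples)

def ai_reconstruction(samples):
    # Hypothetical stand-in for an AI-based method: a trimmed mean
    # that discards the two most extreme measurements.
    s = sorted(samples)
    trimmed = s[1:-1] if len(s) > 2 else s
    return sum(trimmed) / len(trimmed)

def cross_check(measurements):
    """Use half the data for one method and the other half for the
    other; small disagreement builds confidence in both."""
    mid = len(measurements) // 2
    a = conventional_reconstruction(measurements[:mid])
    b = ai_reconstruction(measurements[mid:])
    return a, b, abs(a - b)

random.seed(0)
# Simulated noisy measurements of a true value of 5.0.
data = [5.0 + random.gauss(0, 0.1) for _ in range(200)]
a, b, disagreement = cross_check(data)
print(f"conventional={a:.3f}  ai={b:.3f}  |diff|={disagreement:.3f}")
```

In a real facility setting the comparison would use a domain-appropriate metric (for example, per-pixel error for image reconstructions) and an agreement threshold decided before looking at the results.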

Connie Pfeiffer described early-stage digital-twin efforts that combine imaging from the Center for Nanoscale Materials (CNM) and the APS with ALCF models so scientists can "practice" experiments virtually before consuming scarce facility time. Panelists emphasized that digital twins become trustworthy through iterative data collection, constrained tests and gradual expansion of their application domains.
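The iterate-then-expand pattern the panelists describe can be illustrated with a toy calibration loop: a twin with one unknown parameter is repeatedly nudged toward agreement with measurements taken on a small, constrained set of settings. Everything here is hypothetical, a one-parameter linear instrument model rather than an actual Argonne digital twin.

```python
def twin_predict(setting, gain):
    # Toy twin: assumes the instrument responds linearly to a setting.
    return gain * setting

def calibrate(observations, gain=1.0, lr=0.1, rounds=50):
    # Gradient descent on squared prediction error: each measured
    # point nudges the twin's gain toward agreement with reality.
    for _ in range(rounds):
        for setting, measured in observations:
            error = measured - twin_predict(setting, gain)
            gain += lr * error * setting
    return gain

# "Real" instrument with gain 2.5, sampled only on a constrained
# domain of three settings before the twin is trusted more broadly.
observations = [(s, 2.5 * s) for s in (1.0, 2.0, 3.0)]
gain = calibrate(observations)
print(f"calibrated gain: {gain:.4f}")
```

Trust then expands gradually: predictions at new settings are checked against fresh measurements before the twin's domain is widened.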

The panel also noted institutional efforts to move more quickly, including participation in DOE initiatives such as the Genesis Mission to streamline research pathways, and in interagency collaborations. Speakers repeatedly framed AI as a research tool that promises speed but needs rules and benchmarks defining when its outputs are reliable enough for scientific conclusions.

Audience questions focused on how to qualify AI answers and how representative a digital twin must be before scientists act on its outputs. Speakers recommended a cautious, evidence-based rollout in constrained settings with active verification.

The panel concluded that integrating experimental facilities, automation (such as Polybot) and high-performance computing promises to change scientific workflows, but that community-wide validation and standards remain a work in progress.