FR Stat tool shifts fingerprint reports toward probabilistic "strength-of-evidence," prompts debate on courtroom interpretation

Panel on forensic evidentiary statistics (FR Stat) · February 13, 2026


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video. If you notice any errors, please let us know so we can fix them.

Summary

Developers and forensic practitioners debated FR Stat, a likelihood-ratio approach that has appeared in lab reports for nearly a year, focusing on how judges and juries will interpret large numeric results and technical disclaimers, on practitioner training needs, and on the model's limitations.

A panel of forensic practitioners and researchers debated the rollout and courtroom implications of FR Stat, a statistical tool intended to replace categorical identifications in friction ridge (fingerprint) reports with a numeric "strength-of-evidence" value. The lab behind FR Stat has been issuing results in investigative reports for about 11 months, and cases are only now beginning to reach the courts.

"There's a big number on the top and then there's a disclaimer at the bottom," said a participant about slide 22 of Henry Swofford's presentation, warning that non-experts may treat the headline value as a definitive identification rather than a probabilistic measure. Henry Swofford, who described the FR Stat approach during the session, said the reports provide both a strength-of-evidence number and a technical note explaining interpretation. "This is the result of this given comparison," he said, noting that an illustrative value "(about) 96,000 ... is much more probable to observe this result among common sources versus different sources," and that policy criteria (for example, results greater than 10) are supplied to indicate when an association should be considered "positive."

Developers acknowledged trade-offs. Swofford called the FR Stat model "extremely simplistic," saying it summarizes high-dimensional latent-print data into a single value and does not account for level 1 detail (overall ridge-flow pattern) or level 3 detail (fine features such as pores and ridge edges). He framed that omission as "unquantified extra value" not captured by the statistic but said the approach usefully discriminates between same- and different-source prints while the community works toward richer models.
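Swofford's description, a high-dimensional comparison collapsed into one value whose distribution is modeled for same- and different-source pairs, matches the general shape of a score-based likelihood-ratio system. The sketch below illustrates that idea with hypothetical scores and density choices; it is not FR Stat's actual model, data, or parameters:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical similarity scores from validation comparisons;
# a higher score means stronger correspondence between two prints.
rng = np.random.default_rng(0)
same_source_scores = rng.normal(loc=7.0, scale=1.0, size=2000)  # mated pairs
diff_source_scores = rng.normal(loc=3.0, scale=1.2, size=2000)  # non-mated pairs

# Fit a density to each score population. The "extremely simplistic"
# point applies here: all latent-print detail has already been
# collapsed into one similarity score before the statistics begin.
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def likelihood_ratio(score: float) -> float:
    """LR = density of the score among same-source pairs divided
    by its density among different-source pairs."""
    return float(f_same(score)[0]) / float(f_diff(score)[0])

case_score = 8.2  # hypothetical score for the casework comparison
print(f"LR = {likelihood_ratio(case_score):,.0f}")
# A policy criterion such as LR > 10 would then decide whether the
# association is reported as "positive."
```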

Panelists discussed how courts and juries might react. Chris, a panelist who addressed admissibility and litigation strategy, said scientific validity must come first: "It is critical that jurors understand whatever conclusions are being proffered by an expert witness. But more critical is that the opinion is scientifically defensible." He and others urged transparency — sharing bench notes, data, and the models used — to blunt cross-examination and to make expert choices defensible in adversarial settings.

On courtroom defense tactics, panelists recommended practices labs can use to reduce vulnerability. Running multiple plausible models (for example, three) and reporting a defended range of results can show that conclusions are not an artifact of a single model choice, the panel said. Swofford added that the lab intentionally chose conservative model forms during foundational validation and that the team would make the software and supporting materials available free to other laboratories for interim use and review.
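The multi-model practice the panel recommended can be illustrated by fitting several plausible density families to the same validation scores and reporting the span of likelihood ratios they yield. The models and data here are hypothetical placeholders, continuing the sketch above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
same = rng.normal(7.0, 1.0, 2000)  # hypothetical mated-pair scores
diff = rng.normal(3.0, 1.2, 2000)  # hypothetical non-mated scores

def lr_under(fit, pdf, score):
    """Fit one density family to each score population and return
    the likelihood ratio at the casework score."""
    return pdf(score, *fit(same)) / pdf(score, *fit(diff))

# Three plausible model forms for the same data.
models = {
    "normal":    (stats.norm.fit, stats.norm.pdf),
    "student-t": (stats.t.fit, stats.t.pdf),
    "kde":       (lambda d: (stats.gaussian_kde(d),),
                  lambda s, k: float(k(s)[0])),
}

case_score = 8.2
lrs = {name: lr_under(fit, pdf, case_score)
       for name, (fit, pdf) in models.items()}
print(f"Defended range: {min(lrs.values()):,.0f} to {max(lrs.values()):,.0f}")
# If the conclusion holds across all three models, it is harder to
# attack as an artifact of a single model choice.
```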

The panel also flagged implementation and workforce issues. Swofford described outreach sessions (SALT training for special agents, and TCAP/DCAP sessions for prosecutors and defense counsel) conducted before and after launch to prepare justice-system partners. He said the lab has not yet run a formal field-wide survey of practitioners' experience but continues targeted engagement.

Where adoption is headed remains unsettled. The panel agreed FR Stat represents a cultural transition for a field long accustomed to categorical identifications, and speakers cautioned that adoption will be incremental. Swofford said widespread practice change in related disciplines is unlikely to be complete within two years.

The session closed with a technical exchange about modeling choices and a reminder that probabilistic reporting does not eliminate the need for foundational validation and clear, comprehensible communication to juries. Several panelists urged the community to focus research on model convergence and on documenting the range of inferences different reasonable models produce before presenting likelihood ratios to fact-finders.

The panel did not record any formal votes or policy actions during the session; instead, it focused on scientific foundations, courtroom strategy, training, and release of software to enable peer review and uptake.