
Latent‑print examiner urges more research, oversight and bias mitigation after wave of forensic reports

Technical briefing (speaker presentation) · February 13, 2026


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Thomas, a latent‑print examiner at the U.S. Army Criminal Investigation Laboratory, outlined 10 recurring trends in latent‑print examination since 2009 — including needs for stronger research, independent oversight, QA/QC, bias mitigation, validation of ACE‑V, and better automation interoperability — and urged the field to treat published critiques as a to‑do list.

Thomas, a latent‑print examiner at the United States Army Criminal Investigation Laboratory (USACIL), told a technical audience that a steady stream of post‑2009 forensic reports has produced a clear set of priorities for the latent‑print community.

“Number 1 is that the reports exist at all,” Thomas said, pointing to multiple critiques issued since 2009 and warning that the field can no longer treat individual reports as outliers. He framed his presentation as a review of 10 thematic trends the community should address going forward.

Why it matters: Forensic fingerprint identification has long played a central role in criminal prosecutions. Thomas said recurring external reviews — including the National Academy of Sciences’ 2009 study, later human‑factors work, the 2016 PCAST review, and more recent AAAS‑style critiques — have flagged gaps that bear on courtroom testimony, scientific reliability and public confidence.

Among the 10 trends Thomas highlighted were:

- Research support: Thomas said many historical claims lacked robust empirical backing and urged greater participation in research to make examiners’ conclusions more data‑driven and scientifically grounded. “We’re getting a little bit more data driven,” he said, but added that more work is needed to connect studies to casework.

- Independent oversight and standards: He argued that self‑regulation has produced agency disparities and called for harmonized standards and funding to support adoption across laboratories.

- Quality assurance and accreditation: Thomas described broader uptake of QA/QC programs, proficiency testing and accreditation since 2009, shifting practice away from informal assurances toward documented systems.

- Code of ethics: He recommended exploring formalized ethical codes for examiners similar to those used in medicine.

- Methodology and validation (ACE‑V): Thomas criticized vagueness in how the ACE‑V process is defined and urged clearer validation of methods, asking whether procedures are repeatable, reliable and demonstrably accurate.

- Bias mitigation: Calling bias “the dirty word,” he said human factors — fatigue, personal relationships or other influences — are real and require explicit mitigation strategies built into QA/QC.

- Automation and database use (AFIS): Thomas noted increased reliance on automated fingerprint identification systems and databases for both operational searching and research; he warned that interoperability issues between jurisdictions constrain those benefits.

- Articulating limitations and error rates: He urged examiners to be upfront in reports and testimony about the limits of the science and about how published error‑rate studies apply (or do not) to a given case.

- Identification challenges: On identification specifically, Thomas said that both the practice of declaring identifications or exclusions and the statistical underpinnings of those conclusions can be problematic, and that both deserve careful reconsideration.
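To make the error-rate point above concrete, here is a minimal sketch of how a false-positive rate and its uncertainty might be estimated from a black-box study. The counts are hypothetical (not taken from any report Thomas cited); the interval calculation uses the standard Wilson score method, which illustrates why a single published rate carries uncertainty that complicates applying it to an individual case.

```python
import math

def rate_with_wilson_ci(errors, trials, z=1.96):
    """Point estimate and ~95% Wilson score interval for an error rate.

    errors: number of erroneous conclusions observed in the study
    trials: total number of comparisons performed
    z:      normal quantile (1.96 for a 95% interval)
    """
    p = errors / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return p, max(0.0, center - half), center + half

# Hypothetical study: 6 false positives in 3,000 comparisons.
p, lo, hi = rate_with_wilson_ci(6, 3000)
print(f"rate={p:.4f}, 95% CI=({lo:.4f}, {hi:.4f})")
```

Even with a point estimate of 0.2%, the interval spans a severalfold range, which is one reason examiners are urged to explain how, and whether, such study-level figures transfer to the comparison at hand.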

Thomas repeatedly framed the list as practical steps rather than abstract criticism. “Maybe we can kinda take this as a to‑do list,” he said, calling for engagement between practitioners and researchers, funding to support standards work, and concrete QA/QC practices to mitigate bias.

The presentation did not propose a single, binding reform or a specific regulatory change; rather, Thomas urged the community to treat repeated external critiques as evidence that incremental reforms are necessary. He closed by saying the community should keep addressing the issues rather than “letting sleeping dogs lie.”

Next steps: Thomas’ remarks functioned as a call to the forensic community — practitioners, laboratory managers and researchers — to prioritize research participation, clearer validation practices, formalized QA/QC and ethics standards, and improved interoperability of automated systems. No formal votes or actions were recorded in the presentation.