NIST researchers outline limits of likelihood ratios and the challenge of communicating uncertainty in forensic evidence
Summary
NIST presenters described measurement variability across top laboratories, argued that experts can supply likelihood ratios but not jurors' prior odds, and called for demonstrable methods to communicate uncertainty so fact-finders can give appropriate weight to forensic findings.
Two researchers from the National Institute of Standards and Technology (NIST) on Tuesday outlined technical and practical challenges in using probabilistic tools to communicate forensic evidence to jurors, saying experts can supply likelihood ratios but cannot supply the priors jurors must use in a Bayesian calculation.
The presenters warned that measurement variability across laboratories can be substantial. Speaker 2, a NIST researcher, gave the example of cholesterol in blood serum, where “the known amount is 2.2 milligrams per gram,” yet top metrology labs reported differing values and uncertainty ranges. He said those differences show the practical limits of measurement precision and the need to assess whether a measurement is fit for a particular legal application.
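The fitness-for-purpose question can be made concrete with a simple coverage check. The sketch below uses invented lab values (only the 2.2 mg/g reference comes from the talk; the lab names, values, and uncertainties are hypothetical): each lab reports a value with an uncertainty half-width, and we check whether its interval contains the known amount.

```python
# Illustrative sketch: hypothetical lab reports, not the actual NIST data.
reference = 2.2  # known cholesterol amount, mg per gram (from the talk)

# (value, uncertainty half-width) pairs -- invented for illustration only
lab_reports = {"lab_A": (2.18, 0.03), "lab_B": (2.26, 0.02), "lab_C": (2.21, 0.05)}

def covers_reference(value: float, uncertainty: float, ref: float) -> bool:
    """True if the interval value +/- uncertainty contains the reference."""
    return abs(value - ref) <= uncertainty

for lab, (value, u) in lab_reports.items():
    status = "covers" if covers_reference(value, u, reference) else "misses"
    print(f"{lab}: {value} +/- {u} {status} the reference {reference}")
```

A lab whose interval misses the reference is either biased or understating its uncertainty; either way, the reported number alone does not tell a court how much weight the measurement deserves.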
Why it matters: courtroom decisions can hinge on how fact-finders weigh forensic evidence. "In an ideal system ... there would be no wrongful convictions or no false acquittal," Speaker 2 said, framing the trade-off between minimizing wrongful convictions and minimizing false acquittals as a question of resources and method choice. He emphasized that the communication of findings to the trier of fact is as important as the factual accuracy of laboratory work.
On probabilistic reporting, the presenters contrasted categorical expert conclusions (identification, exclusion, inconclusive) with probabilistic approaches such as likelihood ratios (LR). Speaker 2 explained that while an expert can calculate a likelihood ratio that measures how strongly evidence favors one explanation over another, the LR alone does not produce a posterior probability without a prior. "Uncertainty is a personal matter. It's not the uncertainty, but your uncertainty," he said, quoting statistician Dennis Lindley, to underline that jurors must supply their own prior probabilities for a full Bayesian calculation.
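The division of labor described here, expert supplies the LR, juror supplies the prior, is just the odds form of Bayes' rule. A minimal sketch with hypothetical numbers (the LR of 1000 and the priors are illustrative, not from the talk):

```python
# Odds form of Bayes' rule: posterior odds = LR * prior odds.
# The expert can report the LR; the prior must come from the fact-finder,
# which is the gap the presenters highlighted.

def posterior_probability(prior_prob: float, likelihood_ratio: float) -> float:
    """Combine a prior probability with a likelihood ratio via Bayes' rule."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# The same LR of 1000 yields very different conclusions under different priors:
for prior in (0.5, 0.01, 0.0001):
    print(f"prior={prior:<8} posterior={posterior_probability(prior, 1000):.4f}")
```

With a prior of 0.5 the posterior is near certainty, while with a prior of 1 in 10,000 the same LR leaves the posterior below 10 percent, which is why the LR alone cannot settle the question for a juror.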
The presenters also cautioned about implementation: some practitioners treat LR outputs as if they are true Bayesian ratios without sufficient calibration or validation, while others take steps to validate numerical methods before presenting them. They urged more demonstrable, descriptive ways to present evidence that could be tested for their effect on reducing decision errors.
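One simple form of the calibration check implied here: among cases where a system reports LR near some value x, truly same-source cases should occur about x times as often as different-source cases. A toy sketch with synthetic counts (invented for illustration; real validation uses richer metrics over many ground-truth cases):

```python
# Toy calibration check on synthetic (reported_LR, truly_same_source) pairs.
from collections import defaultdict

scores = [(10.0, True)] * 9 + [(10.0, False)] * 1 + \
         [(0.1, True)] * 1 + [(0.1, False)] * 9

by_lr = defaultdict(lambda: [0, 0])  # reported LR -> [same-source, different-source]
for lr, same in scores:
    by_lr[lr][0 if same else 1] += 1

for lr, (same, diff) in sorted(by_lr.items()):
    observed_odds = same / diff
    print(f"reported LR={lr:>5}: observed same/different odds {observed_odds:.3f}")
```

In this synthetic data the observed odds (9.0 and about 0.111) roughly match the reported LRs, which is what calibration demands; a system whose observed odds diverge from its reported LRs is overstating or understating the evidence.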
The talk noted an additional practical difficulty: the judicial system offers limited opportunities to learn from mistakes because confirmed reversals or proven wrongful decisions are rare. That scarcity of feedback, the presenters said, reduces the ability to evaluate which communication methods actually lead to better outcomes.
The presenters concluded that LR is a logically coherent framework for individual decision making but that practical obstacles—how jurors would form priors, how to calibrate LR methods, and how to present uncertainty in demonstrable ways—remain unresolved. No formal recommendations or policy changes were adopted during the presentation.
