
Presenter outlines FRSTAT software to quantify fingerprint similarity, emphasizes limits

Presentation (unidentified speaker) · February 13, 2026


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

An unidentified presenter described FRSTAT, a tool that measures similarity between fingerprint impressions and reports how often similar scores occur in same-source versus different-source datasets. The presenter stressed that the tool provides similarity metrics, not source probabilities; described a reporting threshold of ≥10 and supporting validation results; and said the software is available to U.S. government and research entities under NDA.

An unidentified presenter detailed a simple software program called FRSTAT that measures the similarity between two fingerprint impressions and reports where that similarity falls in empirically observed distributions.

The presenter framed the work as a response to repeated critiques that fingerprint and other pattern-evidence disciplines function as a "black box." He said the tool aims to produce "measurements that I can see" and that can be shown to investigators, juries and courts rather than relying solely on a single examiner's experience.

FRSTAT, which the presenter called "friction ridge statistical interpretation software," operates after an examiner visually examines and annotates corresponding features. The system reads the analyst's annotated features (not the raw fingerprint image), aligns the configurations on a coordinate plane, pairs features by combinatorial optimization, measures Euclidean distances and angular differences, applies dynamic weightings, and aggregates the values into a single global similarity statistic.
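The pipeline described above can be sketched in miniature. Everything below is illustrative: the feature values, the cost weights, the brute-force pairing, and the final inverse transform are assumptions standing in for FRSTAT's actual (unpublished) internals.

```python
import math
from itertools import permutations

# Hypothetical annotated features: (x, y, angle_in_radians).
# These coordinates are invented for illustration, not real minutiae data.
latent   = [(10.0, 12.0, 0.30), (25.0, 40.0, 1.10), (33.0,  8.0, 2.00)]
exemplar = [(11.0, 13.5, 0.35), (24.0, 41.0, 1.05), (34.5,  7.0, 2.10)]

def feature_cost(a, b, w_dist=1.0, w_angle=5.0):
    """Weighted cost mixing Euclidean distance and angular difference.
    The weights here are arbitrary stand-ins for FRSTAT's dynamic weightings."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    da = abs(a[2] - b[2])
    da = min(da, 2 * math.pi - da)  # wrap angular difference into [0, pi]
    return w_dist * d + w_angle * da

def best_pairing_cost(feats_a, feats_b):
    """Combinatorial optimization by brute force: try every one-to-one
    pairing and keep the lowest total cost (fine for small feature sets)."""
    return min(
        sum(feature_cost(a, b) for a, b in zip(feats_a, perm))
        for perm in permutations(feats_b)
    )

total_cost = best_pairing_cost(latent, exemplar)
# Collapse the aggregate dissimilarity into one "global similarity" number.
# This simple inverse transform is a placeholder for FRSTAT's aggregation.
similarity = 1.0 / (1.0 + total_cost)
```

For realistic feature counts an exhaustive permutation search would be replaced by an assignment-problem solver, but the structure (pair, measure, weight, aggregate) is the same one the presenter described.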

The presenter showed empirical histograms comparing similarity scores observed in prints known to come from the same source versus different sources across varying numbers of annotated features. In a case example he cited a global similarity statistic of 54.334 and said that score sits well within the distribution of known same-source impressions and far from the distribution of different-source impressions.

He reported two supporting proportions from validation: roughly 49% of same-source scores were lower than the example score, while only about 0.005% (a proportion of 0.00005) of different-source scores would be at least as similar. Combining the two tail proportions, he said, produced a value the presentation characterized as "about 96,425" times more probable among same-source comparisons than different-source ones; he explicitly cautioned that FRSTAT "is not Bayesian" and "cannot be put into a posterior probability." The presenter repeated that the software yields a similarity statistic, not the probability that a named individual is the source of the print.
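The tail-proportion logic can be sketched with synthetic data. The distribution shapes and sample sizes below are assumptions made for illustration; they are not FRSTAT's validation datasets and will not reproduce the figures cited in the talk.

```python
import random

random.seed(0)
# Synthetic stand-ins for validation score distributions; the normal
# parameters and sample sizes are illustrative assumptions only.
same_source = [random.gauss(55, 8) for _ in range(100_000)]
diff_source = [random.gauss(20, 10) for _ in range(100_000)]

score = 54.334  # the example global similarity statistic cited in the talk

# Proportion of known same-source scores that fall below the observed score...
p_same_below = sum(s < score for s in same_source) / len(same_source)
# ...and proportion of different-source scores that are at least as similar.
p_diff_at_least = sum(s >= score for s in diff_source) / len(diff_source)

# Ratio of the two tails: how much more often such a score arises among
# same-source comparisons than different-source ones. This is a descriptive
# ratio, not a posterior probability that any named person made the print.
ratio = (1 - p_same_below) / max(p_diff_at_least, 1e-12)
```

The guard in the denominator reflects a practical point the talk implies: when no different-source score in the validation set reaches the observed similarity, the empirical tail is zero and the ratio is bounded only by the dataset's size.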

As a reporting policy, the presenter said the team adopted a threshold: results equal to or greater than 10 are classified as a "positive association" between two impressions. He said this threshold emerged from validation, where a specificity rate of around 99% was observed at a ratio of 10, but he noted that the 99% figure is an observed rate from that validation rather than a bound from a confidence interval.
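That reporting policy reduces to a single comparison. In this sketch the threshold value comes from the talk, but the below-threshold label is a hypothetical name, not the presenter's wording.

```python
REPORTING_THRESHOLD = 10.0  # policy value stated in the presentation

def report_label(ratio: float) -> str:
    """Apply the stated reporting policy: ratios at or above 10 are reported
    as a 'positive association'. The negative label below is an assumed
    placeholder; the talk did not specify the below-threshold wording."""
    if ratio >= REPORTING_THRESHOLD:
        return "positive association"
    return "no positive association"
```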

The presenter listed operational limitations and safeguards: FRSTAT depends on user annotations (it does not verify the accuracy of annotations or see raw images), may not account for poor photography or all impression artifacts, and is designed to be conservative. He recommended using FRSTAT only after an examiner has visually concluded that two impressions may share a common source and after independent verification; he also recommended strict policies, procedures and quality assurance to limit cognitive bias.

Regarding distribution, the presenter said the team plans to transition FRSTAT to the commercial marketplace but is offering it free of charge in the interim to U.S.-based federal, state and local government entities and to U.S.-based research organizations that request access for evaluation. Access would be transferred under a nondisclosure agreement intended to preserve future commercialization options.

The presenter concluded by inviting interested parties to request materials by email. No formal policy vote or court ruling was recorded in the presentation itself.