FBI lab tests 3D firearm-scanning tools, recalls evidence after virtual review yields new ID

FBI Laboratory presentation (Firearm and Toolmarks Unit) · February 17, 2026


Summary

Eric Smith of the FBI Laboratory described how the lab has integrated 3D scanning and virtual comparison into firearm/toolmark work, reporting thousands of virtual comparisons, few errors overall, and one case where a virtual review led to evidence being recalled for reanalysis.

Eric Smith, a physical scientist and technical leader at the FBI Laboratory’s Firearm and Toolmarks Unit, described onstage how the lab has incorporated 3D imaging and virtual comparison into firearm and toolmark examinations and how those tools have changed casework review.

"I have 20 minutes," Smith said as he opened the presentation, then summarized a multi‑phase research plan to quantify similarity between tool marks and to test whether scanned images can reproduce—or improve on—traditional examiner conclusions. He told the audience the lab focused its demonstration on the Cadre Forensics Top Match 3D system and referenced an RTI market overview of available instruments.

Smith said the lab adopted a rigorous validation approach aligned with ISO/IEC 17025 accreditation expectations: planned validation activities, written work instructions, competency testing, and modified reporting language that spells out instrument limitations. "You have to be a planned activity," he said of the validation work, referencing the accreditation standard and emphasizing the need for qualified personnel and management support.

On performance, Smith reported that the lab has run thousands of virtual comparisons in its research program. He gave specific figures: more than 3,400 total virtual comparisons, including 956 drawn from evidence casework. In one set of proficiency-style tests on the Cadre system, the lab performed hundreds of comparisons and recorded no false positives, though it did record at least one false elimination that prompted further review. "There were no false positives, but we did have a false elimination," Smith said.

To probe limits, the lab scanned archived proficiency tests (some dating to 2003) and used consecutively manufactured barrels and breech faces to create highly similar non‑match comparisons. Smith showed side‑by‑side Cadre renderings that revealed richer breech‑face topography than optical microscopy in some views, but he warned of a key shortcoming: that particular instrument in its current configuration could not always capture firing‑pin impressions deeply enough, so examiners sometimes had to rely solely on breech‑face marks.

When a virtual comparison produced a different result from prior casework (a comparison originally judged inconclusive that became an identification during virtual review), Smith said the lab recalled and resubmitted the evidence for additional analysis. "We recalled that evidence," he said, describing a follow‑up review in which additional examiners agreed with the updated identification during blind retesting.

Smith also outlined operational rules the lab follows when using virtual systems: algorithm candidate outputs are blocked during proficiency testing so examiners are not biased; input and sample‑entry procedures are standardized as appendices to SOPs; and examiners who obtain inconclusive results on virtual comparisons must return to light comparison microscopy because handling the physical item provides additional surfaces and context. "If I'm inconclusive there, I have to go back to the light comparison microscopy," Smith said.

He flagged other limitations the lab must manage: lacquer or sealants on cartridge cases can be scanned along with the tool‑mark topography and create interference; evidence can carry environmental damage; and the instrument's measurement limits mean it cannot capture physical characteristics like color or texture. Smith closed by showing the lab's results and conclusions forms (marked LCM for light‑comparison microscopy and VCM for virtual comparison) and by demonstrating a new Cadre gel‑tray device that can batch‑scan samples and be operated remotely; once files are scanned, Smith said, only the software is needed to compare images.

The lab’s next steps, as described by Smith, are continued blind verification of casework, competency testing for operators, updated reporting language to capture limitations, and careful adherence to accreditation and quality assurance steps while deploying remote scanning capability.