Audio synchronization methods can resolve timing disputes in shootings, but geometry and recording limits matter

Webinar presentation · February 17, 2026


Summary

Using waveform correlation, audio fingerprinting and spectrographic methods, analysts can align multiple unsynchronized recordings to resolve timing disputes — Maher demonstrated with measured inter-shot intervals and a supersonic-bullet case — but cautioned about clip, codec and geometric limits.

Dr. Rob Maher, professor of electrical and computer engineering at Montana State University, told webinar attendees that aligning multiple user-generated recordings can help answer timing questions in investigations — but success depends on geometry, recorder placement and the properties of codecs and devices.

He reviewed four alignment strategies his team has examined: waveform correlation for closely spaced recorders, audio fingerprinting that breaks recordings into short pieces for matching, spectrographic similarity using short-time Fourier transforms, and spectrographic correlation. "Probably the most basic technique is referred to as waveform correlation," he said, adding that other methods may work better when devices differ in position and character.
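The waveform-correlation idea can be sketched in a few lines: cross-correlate two recordings and take the lag at the correlation peak as the time offset between them. This is a minimal illustration, not Maher's implementation; the synthetic signals, sample rate and function name are invented for the demo.

```python
import numpy as np

def align_by_correlation(ref, other, sample_rate):
    """Estimate the time offset of `other` relative to `ref` by
    cross-correlation (a minimal sketch of waveform correlation)."""
    # Full cross-correlation; the peak index gives the lag (in samples)
    # at which the two waveforms line up best.
    corr = np.correlate(other, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag / sample_rate  # offset in seconds

# Synthetic demo: the same impulse-like event captured by two recorders,
# the second delayed by 100 samples (12.5 ms at an assumed 8 kHz rate).
fs = 8000
event = np.hanning(64)
ref = np.zeros(1024)
other = np.zeros(1024)
ref[200:264] = event
other[300:364] = event
print(align_by_correlation(ref, other, fs))  # ~0.0125 s
```

In practice this works best for closely spaced recorders, as Maher noted; widely separated or dissimilar devices tend to need the fingerprinting or spectrographic approaches instead.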

Case example: Maher presented a three-recorder gunfire incident with measured inter-shot intervals of 292.9 milliseconds (bystander), 320.9 milliseconds (police cruiser) and 374.2 milliseconds (officer body camera). Because the body camera was closer to firearm 2 and the bystander closer to firearm 1, the differing intervals led Maher to conclude: "That is indicating from this timing that firearm number 2 is discharged first." He emphasized that analysts must consider each recorder's frame of reference and relative distances rather than assuming identical timing across devices.
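The frame-of-reference point can be made concrete with a toy model: the interval a recorder observes between two shots is the true firing interval plus the difference in acoustic travel times from the two firearms. All positions, the firing interval, and the speed of sound below are hypothetical, chosen only to reproduce the qualitative pattern in Maher's case (longer interval near firearm 2, shorter near firearm 1).

```python
C = 343.0  # speed of sound in m/s at roughly 20 C (assumption)

def observed_interval(true_interval_s, d_first_m, d_second_m, c=C):
    """Interval a recorder measures between two muzzle blasts when the
    'first' firearm discharges true_interval_s before the second.
    d_first_m / d_second_m: recorder's distances to each firearm."""
    return true_interval_s + (d_second_m - d_first_m) / c

# Suppose firearm 2 fires 0.330 s before firearm 1 (hypothetical values).
# A recorder near firearm 2 (5 m) and far from firearm 1 (20 m) hears a
# LONGER interval; one with the distances swapped hears a SHORTER one.
near_firearm_2 = observed_interval(0.330, d_first_m=5.0, d_second_m=20.0)
near_firearm_1 = observed_interval(0.330, d_first_m=20.0, d_second_m=5.0)
print(f"{near_firearm_2:.4f} s vs {near_firearm_1:.4f} s")
```

The sign of the geometric correction flips between the two recorders, which is why the measured intervals (374.2 ms near firearm 2, 292.9 ms near firearm 1) can indicate firing order once distances are accounted for.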

Supersonic‑bullet example: Maher described a May 11, 2022 case involving the killing of journalist Shireen Abu Akleh and noted that supersonic bullets produce a distinct shock wave (a "crack") separate from the muzzle blast. Measuring the time between the crack and the muzzle blast, and applying estimates for bullet speed and speed of sound at ambient temperature, can help estimate shooter distance and orientation.
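A simplified straight-line version of the crack-to-blast calculation: if the bullet travels nearly directly toward the microphone at speed v over distance D, the shock wave arrives after roughly D/v and the muzzle blast after D/c, so the gap dt = D(1/c - 1/v) yields D = dt / (1/c - 1/v). This is an illustrative model under stated assumptions, not Maher's exact method; real cases require the bullet's trajectory angle, deceleration, and ambient temperature.

```python
def shooter_distance(dt_s, bullet_speed_mps, sound_speed_mps=343.0):
    """Estimate shooter distance from the crack-to-muzzle-blast gap,
    assuming the bullet travels straight toward the microphone at a
    constant supersonic speed (simplifying assumptions)."""
    if bullet_speed_mps <= sound_speed_mps:
        raise ValueError("model requires a supersonic bullet")
    return dt_s / (1.0 / sound_speed_mps - 1.0 / bullet_speed_mps)

# e.g. a 120 ms crack-to-blast gap with a ~900 m/s bullet (hypothetical
# numbers, not from the Abu Akleh case) gives roughly 66.5 m.
print(round(shooter_distance(0.120, 900.0), 1))
```

Because dt depends on both speeds, the sensitivity to the assumed bullet velocity and temperature is exactly why Maher stressed quantifying uncertainty in such estimates.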

Limits and cautions: Maher warned that loud transients can clip or trigger automatic gain control, complicating onset detection, and that perceptual audio coders (MP3, AAC) and live-stream packet loss introduce timing uncertainty. "I would encourage people to be a little hesitant to draw very precise conclusions when they're dealing with audio that's been through a perceptual coder or ... through a cell phone channel," he said.

Bottom line: Multiple recordings and careful geometric reconstruction can often answer which device heard an event first and, in some cases, which firearm fired first, but examiners must quantify uncertainty and explain limitations to investigators and courts.