Panel cautions that AI voiceovers can undermine trust; teams opted for human narrators for public materials
Summary
Presenters said AI-generated voices and avatars can be efficient but often "sound like AI"; for public-facing content they decided a human voiceover preserved credibility and emotional connection.
Panelists described technical and ethical limits of AI voice technology and why some projects chose human narration for public-facing materials.
Jillian Beach, communications and community engagement manager at Legal Aid of the Bluegrass, said their testing showed AI voices could meet technical needs but still "sounded like AI," which undercut trust. "We made a deliberate choice to step back from AI, and we replaced that with a human voiceover artist," she said, adding that they hired a former legal-aid client who brought credibility and emotional resonance to public-facing clips.
Adam Mustovsky said text-to-voice tools such as ElevenLabs can be useful for large-scale, internal audio needs, but that producing polished public narration required time-consuming sentence-by-sentence editing. He said the effort was "much more time consuming than the tried and true hiring of a voice actor" for highly polished material.
Speakers urged organizations to adopt a risk-based approach: use automated audio for accessibility or emergency messaging, where speed matters; use human narration for materials meant to build trust or for high-stakes legal explanations; and always keep a human reviewer in the loop.

