During a recent government meeting, discussion centered on growing concerns about synthetic media and disinformation, particularly their impact on marginalized communities. Mr. Gregory highlighted that deepfakes and AI-generated disinformation are already being produced in Spanish and other non-English languages, and warned that technology companies have not invested adequately in ensuring their systems function effectively across diverse languages.
Gregory emphasized that current systems disproportionately expose non-English-speaking communities to the risks of synthetic media, a concern his organization has been working to address for the past five years. In response to these challenges, legislation known as the Listos Act was introduced to increase investment in multilingual large language models and to ensure that AI transparency measures protect the most vulnerable populations.
The meeting also touched on the varied applications of AI, from financial decision-making to medical diagnostics, and on the need for government to leverage AI to improve constituents' access to services. Ms. Espinal contributed by underscoring the importance of transparency in AI interactions, particularly in sectors where decisions can significantly affect individuals' lives, such as access to public benefits or employment.
She advocated for transparency and oversight requirements tailored to the specific use cases of AI systems, suggesting that companies conduct impact assessments to identify and mitigate the risks their technologies pose. This approach aims to ensure that AI systems do not inadvertently exacerbate existing inequalities or create new vulnerabilities for already at-risk communities.