Lawmakers push for AI transparency to protect vulnerable communities

September 13, 2023 | Senate Committee on Commerce, Science, and Transportation


This article was created by AI, summarizing key points discussed. AI makes mistakes, so for full details and context, please refer to the video of the full meeting. Please report any errors so we can fix them.

During a recent government meeting, discussions centered on growing concerns surrounding synthetic media and disinformation, particularly their impact on marginalized communities. Mr. Gregory highlighted that deepfakes and AI-generated disinformation are already being produced in Spanish and other non-English languages, raising alarms that technology companies have underinvested in ensuring these systems function effectively across diverse languages.

Gregory emphasized that current systems disproportionately expose non-English-speaking communities to the risks associated with synthetic media, a concern his organization has been addressing for the past five years. In response to these challenges, legislation known as the Listos Act was introduced, aimed at increasing investment in multilingual large language models and ensuring that AI transparency measures protect the most vulnerable populations.

The meeting also touched on the varied applications of AI, from financial decision-making to medical diagnostics, and the necessity for government to leverage AI to enhance access to services for constituents. Ms. Espinal contributed to the discussion by underscoring the importance of transparency in AI interactions, particularly in sectors where decisions can significantly affect individuals' lives, such as access to public benefits or employment.

She advocated for tailored transparency and oversight requirements based on the specific use cases of AI systems, suggesting that companies should conduct impact assessments to identify and mitigate risks associated with their technologies. This approach aims to ensure that AI systems do not inadvertently exacerbate existing inequalities or create new vulnerabilities for already at-risk communities.
