Several senators urged federal action to protect individuals and creators from harmful AI‑generated content, including nonconsensual sexually explicit deepfakes and unauthorized use of copyrighted material to train models.
Why it matters: Deepfakes and unlicensed uses of copyrighted content can harm victims, undercut creators' rights and expose platforms and AI vendors to legal and reputational risk. Senators asked OSTP how executive and legislative tools can protect citizens while respecting First Amendment limits.
Sen. Amy Klobuchar discussed bipartisan legislation, including the Take It Down Act, which would compel platforms to remove nonconsensual explicit AI imagery within 48 hours, and called for a labeling regime for altered content that remains constitutionally protected speech. She also raised copyright and fair‑use concerns for creators whose work is used to train models; several senators described worry among authors, performers and trainers about unauthorized replication.
OSTP's Michael Kratsios said the administration would work "directionally" with Congress and that takedown rules like the Take It Down Act are appropriate examples of where a statutory remedy is possible. He emphasized education and transparency so the public can recognize AI‑generated content, and he supported standards for labeling and independent evaluation as part of an overall approach.
Senators requested that OSTP and agencies coordinate with platforms, rights holders and civil‑liberties groups to craft workable takedown, notice and labeling rules and to explain how they would be enforced. Kratsios pledged to continue that work with Congress and partner agencies.