Senate Judiciary Committee hears SB 247 to criminalize AI‑generated child sexual abuse material; committee delays action
Summary
Sponsor Sen. Jesse Kiel told the committee SB 247 would treat AI‑generated obscene images of minors the same as non‑generated CSAM, apply the Miller obscenity test and set equal penalties; child‑welfare groups urged passage while at least one expert warned of First Amendment litigation risk. The committee set the bill aside for further review and possible language changes.
Senator Jesse Kiel, sponsor of Senate Bill 247, told the Alaska Senate Judiciary Committee on April 15 that the bill would create parity between artificially generated child sexual abuse material and non‑generated child sexual abuse material, apply the U.S. Supreme Court’s Miller obscenity test to generated images and videos, and set penalties for generated material equal to existing CSAM laws.
"These are images of children who are being subjected to [sexual] assault," Kiel said, and with the rapid growth of AI image‑generation and modification tools, he warned many such images "will be indistinguishable from actual video" unless the law is updated. Kiel said the bill includes protections so that tech employees and 'search‑and‑destroy' teams that collect or possess generated material only to remove it or report it would not be prosecuted.
The committee heard invited testimony from Trevor Storrs, president and CEO of the Alaska Children’s Trust, who described rapidly growing national reports of AI‑related CSAM and urged the committee to act. Storrs cited National Center for Missing and Exploited Children figures presented to the committee — 4,700 reports in 2023, about 67,000 in 2024 and more than 400,000 in the first half of 2025 — and warned that generated imagery contributes to sextortion and other real‑world harms.
Advocates from the Alaska Network on Domestic Violence and Sexual Assault also supported the bill. Executive Director Brenda Stanfill told the committee that even entirely fictitious AI‑generated images "fuel and normalize the sexualization of children" and can increase demand for real abuse material.
Tech sector representatives signaled conditional support but asked the committee to add clarifying language. Rose Feliciano of TechNet said the association supports the measure but requested explicit protections to ensure Internet service providers and platform teams that identify and route CSAM to law enforcement are not treated as distributors when they transmit evidence (emails or texts) to authorities.
A member of the public, Remi Spring, urged caution and opposed the bill’s current wording, arguing that U.S. Supreme Court precedent — including Stanley v. Georgia and Ashcroft v. Free Speech Coalition, cited by Spring — limits the ability of states to criminalize private possession of certain obscene materials and that sweeping language could invite litigation. Spring asked the sponsor to focus enforcement on distribution and commercial conduct rather than private possession of generated images that do not depict identifiable minors.
Kiel told the committee the bill does not change Alaska’s mandatory‑reporting statutes and said he would research whether possession or distribution of generated material triggers reporting duties. He also said he would review proposed language from TechNet and consult the Department of Law to avoid creating a loophole for bad actors while protecting good‑faith network defenders.
After testimony and questions, the committee set SB 247 aside for further review so the sponsor and staff can refine language and confirm protections for platform reporting and industry "white hat" activity. No formal vote on the bill’s merits was recorded at the hearing.
What happens next: The sponsor said he is discussing TechNet’s proposed language with Department of Law staff; the committee will take up the bill again after those consultations and further drafting.
