Kansas AG backs bill to ban AI training that impersonates humans or provides clinical advice

Senate Federal and State Affairs Committee · February 10, 2026

Get AI-powered insights, summaries, and transcripts

Subscribe
AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video. so we can fix them.

Summary

Attorney General Chris Kobach urged the Senate Federal and State Affairs Committee to pass SB405, which would ban knowingly training AI to simulate humans, provide clinical advice, encourage self-harm, or perform other listed harms; the bill creates state and private civil remedies and penalties including fines up to $50,000 per violation.

Senate Federal and State Affairs Committee heard testimony on Senate Bill 405 on Feb. 20, a proposal that would make it unlawful to knowingly train artificial intelligence to perform certain human-like or therapeutic roles.

Jason, committee staff, described the bill as defining artificial intelligence broadly to include chatbots while carving out exceptions for customer-service bots, limited video-game companions and standalone consumer speaker devices. The bill lists eight prohibited behaviors, including encouraging suicide, providing emotional-support or mental-health treatment that would otherwise be delivered by a licensed professional, simulating a human being (appearance or voice), and promoting social isolation. The measure would authorize the Kansas attorney general to bring civil enforcement actions, allow private lawsuits by harmed individuals, impose civil fines up to $50,000 per violation, and permit liquidated damages of $150,000 plus costs and attorneys’ fees in private suits. The bill as drafted would take effect July 1 if enacted.

Attorney General Chris Kobach, the bill’s in-person proponent, told the committee that AI has already produced demonstrably harmful outcomes, citing court sanctions for briefs that contained AI-generated, fabricated case law and local examples of chatbots producing sexualized material involving minors. "AI has created sexualized conversations with minors," Kobach said, and he argued the tools are evolving to form "sycophantic and unhealthy relationships" with users that present higher risks for children. Kobach urged lawmakers to act quickly and to consider narrowing or clarifying parts of the bill — especially the criminal-and-civil standard of what it means to "knowingly" train AI and the wide reach of the provision that would prohibit training systems to simulate a human being.

Committee members asked how enforcement would work and whether evidence-gathering would be possible. Kobach described tools available to prosecutors and civil litigants, including civil investigatory demands to compel companies to produce information, and cited prior consumer-protection lawsuits his office has brought against social-media platforms. He also said existing professional regulations would not prevent the bill from applying to AI that represents itself as a licensed health-care professional to users; professionals using AI as research or support tools would not necessarily fall under the prohibition unless the AI itself was presented as acting as a health-care professional.

Several senators raised the difficulty of defining and proving multiple instances of unlawful training (i.e., what counts as a separate violation), and Kobach acknowledged that the bill’s current draft leaves some of those enforcement decisions to prosecutorial discretion. The committee closed SB405’s hearing and noted there are written proponent and opponent testimonies from the Kansas Mental Health Coalition and NetChoice, respectively, available to members for review before the bill is worked further.

The committee did not take a vote on SB405 during the hearing. Members signaled interest in amendments to refine the scope, the standard for liability and exclusions for permissible commercial or research uses.