Board explores ad hoc committee to study artificial intelligence in psychology; members flag generative AI and augmented reality

5857862 · September 29, 2025


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Board members discussed forming an ad hoc committee to analyze artificial intelligence issues affecting psychology practice, including generative AI, AI‑driven chatbots, augmented reality used in therapy, and AI's role in assessments. Members agreed the board should monitor proposed state task forces and may formalize an internal workgroup.

Board members raised artificial intelligence as a growing issue for regulation and oversight (sunset issue 13). At a recent subcommittee meeting, members discussed forming an ad hoc committee to study AI’s implications for practice, professional standards, and consumer protection.

Members asked that the board's sunset response note the ongoing discussion and reference a proposed state task force related to AI. Board member Cervantes and others said the board should watch developments in ASPPB and state legislation and consider convening a formal committee to analyze AI risks and possible regulatory responses.

Members and staff discussed specific technology examples to include in the board’s response: AI‑driven chatbots, generative AI tools, and augmented‑reality (AR) technologies that are already being used in exposure therapy and other therapeutic modalities. Board member Kasuga emphasized that the board should address both psychotherapy and assessment uses of AI, including diagnostic tools and automated scoring, and recommended that the sunset response list generative AI and augmented reality as examples.

No formal action was taken during the meeting to create a committee; staff said they would speak with the chair about next steps. The board included language in the sunset report indicating that it is monitoring the issue and may pursue further study.

Why it matters: rapid adoption of AI and AR in clinical tools raises questions about accuracy, client safety, informed consent, recordkeeping, and supervision of automated decision support. The board's decision to monitor the issue and potentially create a workgroup signals that it may later propose practice guidance or rulemaking.