Oregon’s chief privacy officer outlines AI guardrails: human in the loop, data classification, and training
Summary
Nick Blosser, Oregon’s chief privacy officer and AI strategist, briefed the committee on the state’s privacy work and AI guidance, which focuses on prohibiting regulated data in generative AI tools, requiring human oversight of AI outputs, maintaining an approved-tools list, and building training and governance frameworks.
Nick Blosser, Oregon’s chief privacy officer and AI strategist, told the committee that privacy recommendations from prior work (including 2021 legislation) and an AI Advisory Council informed a set of guiding principles and five recommended executive actions. "We issued a final recommended action plan in February," he said, listing governance, privacy, security, architecture and workforce needs.
Blosser described interim guidance for state employees and agencies. He said the current guidance prohibits using restricted or regulated data with generative AI tools and requires a human to remain in the loop for AI-generated content. "In no case should an AI-generated content just be let loose," he said, adding that agencies must document processes for human review and that many other AI uses must go through security and strategy review.
He also described what the state offers today: secure Copilot chat access for all employees, a more powerful M365 Copilot available to about 400 users, and other tools such as GitHub Copilot and Copilot Studio for building agents within a secure environment. On training, Blosser pointed to a Responsible AI for Public Professionals course produced by the nonprofit InnovateUS; about 1,000 employees have completed it and another 1,000 have begun.
Blosser and committee members discussed risks including accuracy and hallucination. He said the primary risk today is accuracy: AI systems can "hallucinate" and make up information, so mitigation techniques, user training, and guardrails are needed to reduce those errors and build public trust.
Committee members pressed Blosser on whether the state’s work would extend to customer-facing services and how to ensure equitable, accurate service for vulnerable populations. He said the immediate focus is safe internal use cases that support employees, while longer-term policies and risk-assessment tools are developed to enable secure public-facing uses.
