Commission reviews AI governance, NIST guidance and local policy options
Summary
Commissioners and staff discussed principles and risk management for the town's use of artificial intelligence, reviewing NIST guidance, examples from other cities, and practical governance steps. The conversation covered the need for policy guardrails and potential local approaches, ranging from permissive experimentation with oversight to centralized approval or a temporary moratorium.
Staff described three broad local approaches: permissive experimentation paired with individual accountability (Boston-style), centralized control and approval of AI tools (Seattle-style), and a temporary moratorium until more guidance is available (an approach the presenter associated with Maine). The presenter recommended establishing governance before widespread deployment and sketched a crawl-walk-run approach that begins with policy drafts and review teams, legal review, and an internal oversight committee to vet high-risk uses.
The commission discussed common risks of AI systems in local government, including data-quality and data-poisoning risks when training models, reduced transparency in large pretrained models, privacy concerns from aggregated unstructured data lakes, gaps in testing and reproducibility, and the operational risk of shadow IT if staff use consumer AI tools without approval.
Staff and commissioners emphasized enabling legitimate departmental uses while preventing unvetted deployments. The IT director said the town is drafting an AI policy that will include examples and guardrails, and described plans for staff training, citation requirements when staff use AI, and an internal review process for higher-risk deployments. No formal vote or action was taken; commissioners asked staff to return with draft policies for review.
