Policy committee discusses draft AI policy; task force recommendations and legal review requested

Summary

Central York SD's policy committee reviewed a draft artificial intelligence policy and discussed a teacher‑led task force, definitions, legal vetting, and liability concerns around data loss; the committee did not forward the AI policy to the board, instead requesting further counsel review and refinement.

Committee members discussed a draft artificial intelligence (AI) policy and the work of a teacher‑led AI task force, but did not send the policy to the full board during this meeting. The committee asked legal counsel to review liability language and to help refine implementation provisions.

Dr. Aiken summarized the task force's work: the district formed an AI task force to develop a vision and operating procedures and to craft a policy that balances ethical use, transparency, and data security. The draft policy under review draws on external samples the committee examined and on a draft from a technology policy council; the committee noted that teachers and building leaders contributed sample language.

The policy’s stated purpose is to ensure “ethical and responsible use of AI technologies to enhance teaching, learning and administrative processes while safeguarding student privacy and data security.” The authority section explicitly references federal laws and standards such as FERPA, IDEA, ADA, CIPA, and COPPA, along with a long list of board policies on acceptable use, student records, non‑discrimination, and data security. Committee members suggested adding an explicit reference to the student handbook so that student use of AI is governed by handbook rules.

Several drafting questions arose. Members recommended that the formal definition of AI end where the technical description ends (at human intervention) and that operational disclaimers — for example, statements that the district does not guarantee accuracy of third‑party AI tools or that it is not responsible for loss of information — be moved to implementation or procedures sections. The committee discussed whether the district can disclaim responsibility for data lost or misappropriated when staff or students use third‑party AI tools, and whether the district creates additional liability by whitelisting or recommending particular tools.

Legal vetting was flagged as essential. Committee members said Saxton and Stump (legal counsel, as referenced in the meeting) should review liability disclaimers and confirm whether the district’s suggested language appropriately limits exposure when the district does not develop an AI tool but facilitates its use. Members also discussed maintaining a vetted “whitelist” of approved tools and the need to protect student and financial data.

The committee did not approve the AI policy for first read at this meeting; members agreed to continue refining the draft, seek counsel guidance on liability and data‑handling language, and add implementation guidance in administrative procedures rather than embedding operational details in the policy itself.