Committee members discussed the implications of generative AI for academic integrity, AI detection, and the future of assessment, and raised concerns about wearables and emerging implantable devices.
Several members said that policing student work with detectors will not scale as generative models and agentic AI evolve. Committee members urged shifting pedagogy and assessment design toward transferable, discipline‑specific tasks and process documentation rather than relying on AI detection alone.
Audience and committee participants identified additional concerns: the speed of model improvements, vendor procurement and data‑security implications, equity (differences in student access to advanced tools or wearables), and occupational readiness if learners use AI to complete tasks without developing foundational skills.
Discussion participants also flagged risk areas for institutional procurement and compliance: vendors adding AI features to existing contracts, the need for institutional certification and security review when systems process student or protected data, and the potential for wearables (hearables, AI‑enabled glasses, wrist devices, or future implantable interfaces) to compromise exam integrity or capture sensitive biometric data.
Committee members recommended sustained activity on the topic, including recurring agenda items, coordination with the Texas Department of Information Resources and state procurement/compliance offices, and opportunities for institutions to share procurement lessons and collective approaches to vendor certification and training.
No policy changes or votes resulted from the discussion; members agreed to continue AI topics on future agendas and explore speaker invitations and working groups.