UT presenter urges practical, secure use of AI tools to ease financial aid workloads
Summary
Maria Serna of UT Austin told the Financial Aid Advisory Committee that AI tools such as Microsoft Copilot and ChatGPT can speed routine tasks, improve knowledge transfer and help with student communications — but she warned institutions to protect FERPA data, train staff and continuously monitor outputs.
Maria Serna, associate director at the University of Texas at Austin, told the Higher Education Coordinating Board's Financial Aid Advisory Committee on Dec. 11 that artificial intelligence can deliver real efficiency gains in financial aid offices — but only when used with clear processes and safeguards.
Serna demonstrated use cases her office has deployed, including summarizing long email threads, converting meeting transcripts into step-by-step workflows, generating executive summaries from team channels and building a web-based agent that answers routine student questions. "AI is only as good as the person using it," Serna said, urging careful prompt design, training and oversight.
She recommended three practical steps before adopting AI: assess and map existing workflows to find time-consuming tasks, pilot tools for clearly defined use cases (for example, knowledge transfer and routine reporting), and train staff so the technology augments rather than replaces institutional expertise. Serna also shared sample prompts and a short live demonstration showing how specificity in a prompt improves results.
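The specificity point from the demonstration can be illustrated with a simple prompt template that assembles explicit fields (role, task, source, constraints, output format) instead of a one-line request. The fields and wording below are illustrative assumptions, not the sample prompts Serna shared:

```python
# Illustrative only: contrasts a vague prompt with a specific one built
# from labeled parts. None of this reflects UT's actual prompts.

def build_prompt(role: str, task: str, source: str, constraints: str, fmt: str) -> str:
    """Assemble a specific prompt from explicit, labeled components."""
    return (
        f"You are {role}. {task} "
        f"Use only the following source material: {source} "
        f"Constraints: {constraints} "
        f"Format the answer as {fmt}."
    )

vague = "Summarize this email thread."

specific = build_prompt(
    role="a financial aid office assistant",
    task="Summarize the email thread below for a new staff member.",
    source="<pasted email thread>",
    constraints="Do not include student names or ID numbers.",
    fmt="five bullet points with action items flagged",
)
```

The vague version leaves the model to guess the audience, scope and format; the structured version pins all three down, which is the kind of difference the live demonstration reportedly showed.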
Serna cautioned about data protection and FERPA compliance when integrating tools such as Microsoft Copilot or cloud-based large language models. She said UT uses enterprise controls, labeling and access restrictions to ensure confidential records are not inadvertently exposed, and urged institutions to consult IT and legal teams before wide deployment. On privacy, she said, "Data is not used to train large language models" in their enterprise configuration and stressed the need for encryption and for marking PHI/FERPA data carefully.
On staffing concerns, Serna said AI adoption should be paired with upskilling: "Being ethical in use of AI" means creating transparent processes, accountability and continuous monitoring to avoid bias or incorrect outputs. She gave examples of staff-time savings, such as cutting the preparation of training materials from several days to a one-day turnaround, and suggested that modest pilots, such as chatbots that pull only from an institution's website or SharePoint, can yield student-facing benefits while limiting risk.
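A chatbot of the limited scope Serna described, one that answers only from an institution's own published pages and declines everything else, can be approximated with a plain retrieval step and a refusal threshold. A minimal sketch, assuming a hypothetical FAQ corpus, scoring rule and threshold (none of which are UT's actual implementation):

```python
# Minimal sketch of a scope-limited Q&A bot: it answers only from a fixed
# corpus of institution-published FAQ text and declines anything outside it.
# All FAQ content, the scoring rule and the threshold are hypothetical.

STOPWORDS = {"the", "a", "an", "is", "are", "do", "i", "my", "how", "what", "to", "of", "for"}

# Hypothetical FAQ snippets, standing in for pages pulled from the
# institution's website or SharePoint site.
FAQ = {
    "fafsa deadline": "The priority FAFSA deadline for the upcoming year is January 15.",
    "verification documents": "Submit requested verification documents through the student portal.",
    "loan disbursement": "Loans disburse no earlier than ten days before the term starts.",
}

def tokens(text: str) -> set:
    """Lowercase word set with punctuation stripped and common stopwords removed."""
    words = (w.strip("?.,!:;").lower() for w in text.split())
    return {w for w in words if w.isalpha() and w not in STOPWORDS}

def answer(question: str, threshold: float = 0.3) -> str:
    """Return the best-matching FAQ answer, or decline if nothing matches well enough."""
    q = tokens(question)
    best_key, best_score = None, 0.0
    for key, text in FAQ.items():
        corpus = tokens(key) | tokens(text)
        # Score = fraction of question words covered by this FAQ entry.
        score = len(q & corpus) / len(q) if q else 0.0
        if score > best_score:
            best_key, best_score = key, score
    if best_key is None or best_score < threshold:
        return "I can only answer questions covered on our financial aid pages."
    return FAQ[best_key]
```

Because the bot can only surface text already published on the institution's pages, the failure mode is a refusal rather than a fabricated answer, which is what makes this style of pilot lower-risk for student-facing use.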
The committee responded positively: members praised the practical examples and said they were looking for guidance on implementation and governance. Serna offered to share materials and invited committee members to contact her with follow-up questions.
The presentation concluded with an invitation to explore continuing-education offerings (Copilot/ChatGPT workshops) and an emphasis on measured, accountable adoption: pilot, train, monitor, and iterate.
