Task force splits over an 'AI index'; DOE assessment staff urge rigor while others want a quick signaling tool

State Board of Elementary and Secondary Education (BESE) · December 16, 2025


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Members debated adopting an AI 'index' used by other states. Department of Education assessment staff warned the instrument lacks technical validity and recommended careful selection or development; other members urged a faster signaling approach to attract funding. The task force established six subcommittees and named chairs to produce deliverables.

A lengthy exchange at the BESE task force meeting centered on whether the group should adopt an existing AI 'index' (an instrument used in other states) as a baseline measure or instead invest time to build a validated tool.

Thomas Blackburn, assistant superintendent for assessment, accountability and analytics at the Department of Education, told the task force he found no public evidence of validity or reliability for one proposed instrument that members had discussed. He said the questions appeared poorly targeted to either children or adults and that the department had requested but not received a technical report describing how the instrument’s cut scores or psychometrics were developed. "We will be very cautious about selecting an instrument," Blackburn said.

Other members argued for a pragmatic approach. Several participants described the instrument as a signaling tool that raises awareness among funders and partners and enables early benchmarking comparisons with states such as Utah and North Carolina. One member warned that "perfect is gonna be an impediment to progress," arguing that a subjective, comparability-oriented tool could help Louisiana attract attention and funding while a more rigorous instrument is developed.

Points of contention: assessment staff warned that a poorly designed survey could produce misleading categories and induce survey fatigue; they urged sampling, university partnerships, and performance tasks to validate any measure. Supporters said a timely baseline, even if imperfect, would let the state demonstrate engagement and compete for partnership and funding opportunities.

Outcome and next steps: the task force did not adopt a single instrument at this meeting. Instead, members agreed to pursue a dual approach: (1) subcommittees will research and recommend metrics and possible instruments, and (2) the Department of Education will evaluate candidate tools for validity and reliability, working with university partners where appropriate. The task force created six subcommittees and named chairs responsible for the deliverables, asking for draft recommendations in January–February.

Representative quote: "The instrument that we received... we have not received any validity or reliability and permission on. We requested it. We did not receive it," Thomas Blackburn said of the sample index under review.

The takeaway: the task force recognized the need for a baseline but split on how to obtain it. Some members favor an expedient signaling approach to mobilize funding and partnerships, while DOE assessment staff insisted on empirical validation before statewide deployment.