Srinivas Mukhamala, chief executive of a New Mexico-based cybersecurity company, told the Science, Technology and Telecommunications Committee that state government and other institutions must treat artificial intelligence as a cybersecurity and governance problem and adopt continuous discovery, detection, validation and remediation for AI systems.
Mukhamala said New Mexico firms and agencies already use AI widely and that legislation and policy must balance consumer protections, security and innovation. He described a framework his group calls Navigate for assessing state AI bills and said his team scored New Mexico roughly 63 points on a 100-point scale when comparing state AI statutes and guidance.
“Discover, detect, validate, remediate” was Mukhamala’s core prescription: discover where AI is used across agencies and vendors; detect weaknesses and vulnerabilities introduced by AI; validate defenses and the model’s behavior; and remediate issues continuously whenever models or weights change. He said AI requires continuous security posture monitoring, not periodic audits.
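The continuous cycle Mukhamala described can be sketched as a simple loop. This is an illustrative sketch only, not any vendor's implementation; every class, function and check below is hypothetical.

```python
# Hypothetical sketch of a "discover, detect, validate, remediate" cycle.
# Names and checks are illustrative placeholders, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    model_version: str
    findings: list = field(default_factory=list)

def discover(inventory_sources):
    """Enumerate AI systems in use across agencies and vendors."""
    return [AISystem(name, version) for name, version in inventory_sources]

def detect(system):
    """Flag weaknesses introduced by the AI system (placeholder check)."""
    issues = []
    if system.model_version.endswith("-unreviewed"):
        issues.append("model version not security-reviewed")
    return issues

def validate(system, issues):
    """Confirm defenses and expected model behavior; keep confirmed issues."""
    # Placeholder: a real validator would exercise the model and its defenses.
    return [issue for issue in issues if issue]

def remediate(system, issues):
    """Record confirmed issues; report whether the system is clean."""
    system.findings.extend(issues)
    return len(issues) == 0

def posture_cycle(inventory_sources):
    """One pass of the cycle; rerun whenever models or weights change."""
    results = {}
    for system in discover(inventory_sources):
        issues = validate(system, detect(system))
        results[system.name] = remediate(system, issues)
    return results
```

The point of the loop structure is the one Mukhamala made: because `posture_cycle` is cheap to rerun, it can be triggered on every model or weight change rather than on a periodic audit schedule.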
Mukhamala also urged early, broad education in AI literacy, saying AI should not be treated as a late-stage upskill but introduced “as early as we start teaching them how to speak.” He argued for targeted statewide use cases — small, high-value projects selected so the state can work backward to ensure data, expertise and labeling are available before large deployments.
On energy and infrastructure, Mukhamala warned that AI inference and large model workloads multiply energy needs. He said modern data centers and the chips that power AI are improving in efficiency but that demand for 24/7 power, large water use for some cooling systems and other resource questions make data-center siting and sustainability complicated policy choices.
Mukhamala described commercial activity at his firm: about 70 full-time employees in New Mexico, 11 focused on AI research now and a plan to expand to approximately 25 AI researchers; a product to test models for safety and security planned for commercial release on Oct. 1; and testimony on AI governance delivered to the U.S. Congress and at Stanford. He said he works with New Mexico Tech, NMSU, UNM and other institutions on a nascent consortium and that he had recently discussed state House Bill 60 with legislators.
Committee members asked whether the state’s minimum cybersecurity standards are being applied to political subdivisions and whether the judiciary and other independent entities are adhering to them. Raja Sambandan, the state chief information security officer, told the panel the state uses NIST Special Publication 800-53 at the moderate baseline and that the state cybersecurity strategic plan and standards have been communicated publicly and to the branches of government. Sambandan said the plan is reviewed annually and reported to the Legislature in October.
Senators and representatives pressed Mukhamala on workforce disruption, which he said has already begun in software development and repetitive white-collar tasks such as call centers. He recommended that the state incentivize training through workforce-development programs and partner with higher education to seed workforce pipelines.
Mukhamala warned that, if unchecked, uneven access to AI could create “civil unrest between the haves and the have-nots,” and urged policy that makes AI accessible and affordable for broad populations. He said state leaders should convene experts to pick several high-value use cases, secure the data and expertise needed, and apply continuous security validation rather than leap directly into large deployments.