AI Amplifies Cyber Threats; Panel at Columbus Forum Urges Inventory, Guardrails and Resiliency

Columbus Metropolitan Club Forum · February 4, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Experts at a Columbus Metropolitan Club forum warned that AI both improves defense and amplifies cyberattacks, urging organizations to inventory AI use, adopt policies and run resiliency exercises; COTA cited a 2022 incident that illustrated risks to transit operations.

Columbus — At a Columbus Metropolitan Club forum titled “Cybersecurity in the Age of AI,” a panel of public‑sector and industry experts on Wednesday urged organizations to treat AI both as a defensive force‑multiplier and as an amplifier of new cyber threats, and to prioritize governance, inventory and business‑continuity planning.

Padma Sastry, adjunct faculty at The Ohio State University’s College of Engineering and the session moderator, framed the discussion with two scenarios: an AI system that automatically isolates a fast‑moving compromise and an AI‑assisted environment that can normalize an attacker’s activity so it goes unnoticed. “Now imagine the same AI that was our defender now is also our offender,” Sastry said, summarizing the central tension facing security teams.

Kirk Harrath, who chaired Cyber Ohio and served as cybersecurity strategic adviser to Ohio Governor Mike DeWine until his recent retirement, described steps the state took after a third‑party NIST assessment to raise its cybersecurity maturity. “We took a lot of the routine, the mundane processes out and really enabled people to be decision makers based upon the data they were getting from the tools,” Harrath said. Automation and vendor partnerships, he noted, helped the state move from lower to much higher maturity in about four years, while an enterprise data‑governance program and an AI council established guardrails around AI use.

Sophia Moore, chief innovation and technology officer for the Central Ohio Transit Authority (COTA), urged public agencies to consider that everyday service infrastructure is increasingly connected and therefore more exposed. “Each one of those vehicles is a connected vehicle,” Moore said, noting COTA operates more than 300 vehicles with routers, cameras and Wi‑Fi. She also told the audience that COTA experienced a cyber incident in 2022 that illustrated how attacks can disrupt operations for essential workers.

Michael Wyatt, who leads Deloitte’s cybersecurity practice for state, local and higher‑education clients, urged organizations to balance investment across prevention, detection, response and recovery. “Drill the well before you need the water,” he said, recommending tabletop exercises and full‑organization response planning that includes legal, communications and executive leadership as well as IT.

Panelists offered a set of practical, immediate steps: take an inventory of how AI is used across the organization (including shadow AI), classify the data accessible to those tools, adopt acceptable‑use policies, build security requirements into procurement, run tabletop and disaster‑recovery drills, and segment networks and keep critical backups offline to reduce ransomware risk. Harrath emphasized the need to know “who has access to it” and to put guardrails around models and data.

The audience asked about vendor responsibility and vendor differences; panelists pointed attendees to standards such as the NIST Cybersecurity Framework, the NIST AI Risk Management Framework and the Center for Internet Security benchmarks as useful guides. On quantum threats, the panel said NIST is developing post‑quantum cryptographic standards and advised organizations to inventory where encryption is used today to prepare for eventual migration.

Panelists also warned of what Michael Wyatt called a “lethal trifecta”: AI with access to private internal data, models ingesting contaminated public content (data poisoning), and AI that can communicate externally. Together, he said, those conditions create maximum risk unless AI systems are segregated and constrained.

The hour concluded with a wide‑ranging audience Q&A, including a question about lessons from the City of Columbus cyber incident; the panel emphasized preparedness and transparency but did not provide operational details. Toni Bell closed by thanking sponsors and hosts and urging attendees to continue implementing the practical recommendations discussed.

No formal votes or policy changes were taken at the forum; the event was a public discussion aimed at practical guidance for organizations wrestling with AI‑enabled cyber risk.