Experts tell Minnesota Senate one-third of jobs face ‘high’ generative-AI exposure; researchers urge reskilling and worker safeguards
Summary
Researchers told the Minnesota Senate Labor Committee that roughly one-third of Minnesota workers—more than 800,000 people—are in roles where generative AI could perform half or more of their tasks. They recommended state-funded reskilling grants, greater worker voice in implementation, limits on AI decision-making and protections against workplace surveillance.
Researchers presenting to the Minnesota Senate Labor Committee said Tuesday that generative artificial intelligence is already capable of performing substantial portions of many professional jobs and that Minnesota’s occupational mix makes the state especially exposed.
"When you account for the capabilities of generative AI, the number basically doubles. So 33% of all working Minnesotans, so that is more than 800,000 people, are now in highly exposed roles," Manjeet Rege, a faculty member and department chair for software engineering and data science at the University of St. Thomas, told the committee.
The report, produced by North Star Policy Action and presented by Aaron Rosenthal, the think tank’s research director, and Rege, combined task-level benchmarks from academic groups (including analyses associated with Oxford and Wharton), O*NET occupational task data, and Minnesota DEED employment counts to estimate local exposure. Presenters said their analysis treated a job as "highly exposed" when generative AI could carry out roughly 50% or more of its tasks.
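The classification step the presenters described can be sketched in a few lines: score each occupation by the share of its tasks an AI benchmark rates as performable, then weight flagged occupations by employment counts. The occupations, employment figures, and per-task flags below are invented placeholders for illustration, not the report's actual data.

```python
# Illustrative sketch of the exposure methodology described above:
# score each occupation by the share of its O*NET-style tasks rated
# AI-capable, flag it "highly exposed" at the report's ~50% threshold,
# then tally employment (DEED-style counts) in flagged occupations.
# All numbers here are made-up placeholders, not the report's data.

HIGH_EXPOSURE_THRESHOLD = 0.50  # "highly exposed": AI can do >= 50% of tasks

def exposure_share(task_flags):
    """Fraction of an occupation's tasks rated AI-capable (True/False)."""
    return sum(task_flags) / len(task_flags)

def summarize(occupations):
    """Return (workers in highly exposed roles, total workers, share)."""
    total = sum(emp for _, emp, _ in occupations)
    exposed = sum(
        emp for _, emp, tasks in occupations
        if exposure_share(tasks) >= HIGH_EXPOSURE_THRESHOLD
    )
    return exposed, total, exposed / total

# (occupation, employment count, per-task AI-capability flags)
sample = [
    ("Software Developers", 50_000, [True, True, True, False]),        # 75%
    ("Market Research Analysts", 20_000, [True, True, False, False]),  # 50%
    ("Landscaping Workers", 30_000, [False, False, False, True]),      # 25%
]

exposed, total, share = summarize(sample)
print(exposed, total, share)  # 70000 100000 0.7
```

In this toy sample, 70,000 of 100,000 workers land in highly exposed occupations; the report applied the same logic to statewide O*NET and DEED data.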
The researchers described three ways work can change as AI is adopted: augmentation (AI assists a worker), transformation (job definitions shift), and displacement (positions are cut). They said the state has an outsized share of high-exposure jobs—software development, market research, finance and insurance, and other professional and technical services—which places Minnesota near the top of Midwestern states for generative-AI exposure.
"We need to make sure that workers have the capacity to exercise voice in AI implementation," Rosenthal said, listing worker participation in implementation, protections against intrusive digital surveillance, prohibitions on AI making final hiring or firing decisions, and state-supported retraining as the report’s central policy proposals.
The presenters flagged several quantitative findings from their analysis: a roughly 33% overall high-exposure rate (more than 800,000 workers), about 110,000 workers in a "very high" exposure category (around 75% of tasks), and a cited recent 13% decline in entry-level hiring in AI-exposed software roles. They also noted the study’s benchmarks were based on OpenAI models current at the time of analysis and that model capability growth (which they described as a roughly seven-month "half-life") means the estimates are likely conservative.
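The "half-life" remark can be made concrete with a back-of-the-envelope calculation: if model capability roughly doubles every seven months, a benchmark taken against a year-old model already understates what current models can do. The doubling model below is one interpretation of the presenters' framing, not a method from the report.

```python
# Back-of-the-envelope reading of the presenters' "seven-month half-life":
# if capability doubles every ~7 months, a benchmark taken t months ago
# understates current capability by a factor of 2 ** (t / 7).
DOUBLING_MONTHS = 7

def capability_multiplier(months_since_benchmark):
    """Estimated growth in capability since the benchmark was run."""
    return 2 ** (months_since_benchmark / DOUBLING_MONTHS)

print(capability_multiplier(14))  # 4.0 — two doublings in 14 months
```

On this reading, exposure estimates tied to a fixed set of models drift toward understatement within months, which is why the presenters called their numbers conservative.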
Committee members asked about policy models and data. Senator Dornick asked which state bills Minnesota should emulate and how the researchers gathered and dated their data. Rosenthal said some forthcoming House bills will address surveillance and decision-making and that displacement protections have often been enacted on a job-by-job basis in other states; he repeated the report’s recommendation for advance notice, incentives for employer retraining, and targeted state grants.
Senator Gubak asked whether generative AI can create new jobs and how steep the retraining curve would be. Rege and Rosenthal said new roles will likely emerge but that many current workers are not well positioned to move directly into those jobs without substantial retraining, stressing the need for state-funded reskilling pathways and stronger K–12 computer science and AI education.
The presenters compared Minnesota to states such as Illinois and Texas, which they said have passed more worker-related AI measures, often tailored to particular professions. Rege also recommended regular fairness audits, transparency about AI use in hiring and monitoring, and mechanisms to protect workers caught off guard by rapid AI-driven changes.
Rege offered an interactive dashboard and the full report for committee members to explore; the presenters said they would provide a link to committee staff and the chair said staff would distribute the materials. The committee adjourned without formal votes.
What this means: the committee received data and concrete policy recommendations but took no formal action during the hearing. The presenters urged the legislature to consider targeted reskilling grants, limits on automated decision-making in employment, protections from AI-driven surveillance, and requirements that workers have meaningful input into AI adoption at their workplaces.
Sources and limitations: The presenters said their methodology maps generative-AI task-capability benchmarks to O*NET task lists and Minnesota DEED employment counts; they noted the benchmark models and job-capability mappings change rapidly and described the presented numbers as conservative estimates tied to a specific set of models.
