Unions and tech experts back bill to limit workplace surveillance and require AI impact reviews
Summary
Labor‑aligned witnesses and the AFL‑CIO Tech Institute urged the committee to adopt SB 1484, which would limit electronic monitoring, require disclosure and impact assessments for high‑risk AI, mandate human oversight, and create a private right of action so workers can challenge automated decisions affecting hiring, pay or discipline.
Several labor organizations, the AFL‑CIO Tech Institute and civil‑liberties experts told the committee they support Senate Bill 1484 to curb harmful workplace uses of artificial intelligence and electronic surveillance.
What the bill would do
- Limit intrusive electronic monitoring and require employers to notify employees where workplace monitoring is used; monitoring must be narrowly tailored to legitimate business purposes.
- Require impact assessments for “high‑risk” AI systems that make consequential employment decisions (hiring, firing, disciplinary actions, pay reductions) and require meaningful human oversight of those systems.
- Create notice and an appeals process for employees affected by automated adverse decisions, and preserve workers’ rights to correct data used by AI.
- Several witnesses urged strengthening the bill by removing broad carve‑outs, clarifying definitions (particularly “meaningful human oversight”) and adding a private right of action so workers can seek relief in court.
Key testimony
- Ed Hawthorne, Connecticut AFL‑CIO president, urged mandatory impact assessments, meaningful human oversight and making the topic a mandatory subject of collective bargaining so workers and unions can negotiate uses of AI at work.
- The AFL‑CIO Tech Institute and the Center for Democracy and Technology recommended tightening language to avoid loopholes (for example, vague exemptions from impact assessments) and suggested phased implementation and technical assistance for smaller employers.
- Witnesses warned that poorly designed AI can discriminate, reduce job quality and threaten employee privacy; several speakers urged that the legislation require developers and deployers to identify training data, testing and bias‑mitigation measures.
Questions from legislators
- Committee members asked whether routine uses such as job postings or ad targeting would trigger an impact assessment; witnesses said the focus should be on consequential decisions (hiring, firing, pay, discipline) rather than benign advertising placement.
- Lawmakers asked about cost: one expert estimated that an initial independent impact assessment for a first‑of‑a‑kind deployment could run to the upper five figures for complex systems, but that costs generally decline as audit firms develop standard procedures and companies reuse prior assessments.
Ending and next steps
- Witnesses asked the committee to consider adding a private right of action and to strengthen the definition of “human oversight” so it is not reduced to a rubber‑stamp review.
- Committee members signalled openness to amendments and requested additional technical input from experts.
Bottom line: The bill would create new procedural protections for employees where automated systems make consequential employment decisions; experts suggested targeted, technology‑aware amendments to close loopholes and to help small employers comply.

