House Education and Labor Subcommittee Hears Industry and Labor Witnesses on AI to Improve Worker Safety
Summary
At a House Education and Labor Subcommittee hearing, industry and labor witnesses said AI can reduce workplace injuries but warned of privacy risks, surveillance-driven pressure, and the need for stronger agency capacity, human oversight, and worker input.
The House Education and Labor Committee's Subcommittee on Workforce Protections convened to examine how artificial intelligence and other advanced technologies are being used to keep workers safe and healthy. Chairman Wahlberg opened the hearing by saying the session would explore "how these technologies work" and the questions they raise about validation, privacy, and human oversight.
Industry and labor witnesses described use cases ranging from in-cab video and driver monitoring to wearables, predictive maintenance and AI-assisted construction site checks. Johan Land, senior vice president for product and engineering, safety and AI at Samsara, said the company installs cameras and sensors that run local models in vehicles and analyze data in the cloud to detect unsafe situations and spot patterns. "Across Samsara customers, fleets using AI safety technology see an aggregated 37% reduction in crashes within 6 months," Land said, presenting that figure as a company result.
Eric Hopland, CEO of the National Association of Wholesaler-Distributors, described four classes of AI used in distribution centers—environmental ("disembodied") scanning, predictive analytics, human-centered wearables and automated retrieval systems—and urged flexibility so local leaders and workforces can choose appropriate approaches. Jeff Bukowitz, president and CEO of the Mason Contractors Association of America, described an industry-built system called "George" that he said photographs job sites to verify bracing plans and PPE compliance in real time.
Douglas Parker, senior adviser at the National Employment Law Project and a former OSHA official, said the technology can reduce injuries but cautioned that many current tools focus on changing worker behavior rather than removing hazards. "Safety-purposed AI should not be used to discipline employees," Parker said, adding that algorithmic management can create physical and psychosocial hazards by increasing pressure and surveillance.
Members pressed witnesses on human oversight, affordability and the role of federal and state agencies. Ranking Member Omar urged Congress to fully fund the National Institute for Occupational Safety and Health (NIOSH), strengthen OSHA, and support state plan agencies; she also referenced pending bills brought to the committee's attention, including a Warehouse Workers Protection Act and the Empowering App-Based Workers Act, as legislative approaches to shape AI's workplace role.
The hearing included several contested exchanges about surveillance and privacy. Representative Casar asked whether in-cab cameras could record drivers continuously and capture conversations after the vehicle stops; Land replied that recording depends on installation and customer settings and said extended recording is "theoretically possible," though he said he was not aware of customers using the technology that way. Members flagged the potential for employer misuse, such as tracking union activity or health information, if guardrails are not put in place.
The hearing concluded without votes. Members and witnesses repeatedly emphasized that AI safety gains are most likely when technologies are paired with human judgment, worker involvement, transparency, auditing standards and stronger public-agency capacity to regulate and certify safety tools. Chairman Wahlberg thanked the witnesses and adjourned the subcommittee.

