Brookhaven Lab tests ‘Vision,’ a voice‑controlled AI that runs experiments by translating speech into code
Summary
Researchers at Brookhaven National Laboratory unveiled Vision, the Virtual Scientific Companion, a voice‑controlled assistant that converts spoken commands into computer code to operate instruments, analyze data and visualize results; researchers have already used it to run the first voice‑controlled synchrotron x‑ray scattering experiments.
An unidentified narrator described a prototype assistant at Brookhaven National Laboratory called the Virtual Scientific Companion, or Vision, that lets researchers control experiments by voice. "What if running a scientific experiment was as simple as asking your phone for the weather forecast?" the narrator asked, framing the project's ambition.
The narrator said Vision uses large language model technology similar to popular AI chatbots but is adapted for laboratory workflows. According to the recording, the system "translates verbal commands into computer code that can run experiments, analyze data, or visualize results," a capability the presenter identified as the feature that distinguishes Vision from general chatbots.
The presentation included operational examples aimed at illustrating typical user instructions: researchers can tell the system to "take a measurement every minute" or "increase the temperature," and the narrator said the AI companion, when tailored to a scientific instrument, carries out those tasks. The recording also said researchers have already used Vision to perform the first voice‑controlled synchrotron x‑ray scattering experiments and that they are "eager to expand to other sophisticated instrumentation in the future."
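The commands quoted above hint at the kind of translation step involved. As a rough illustration only: Vision itself reportedly uses a large language model to generate executable code, but the toy keyword parser below sketches the underlying idea of mapping a spoken instruction to a structured action an instrument‑control layer could run. All names and structures here are hypothetical, not Brookhaven's actual interface.

```python
import re

# Hypothetical seconds-per-unit table for interval parsing (illustration only).
UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600}

def parse_command(text: str) -> dict:
    """Map a spoken command to a structured instruction (toy sketch;
    the real Vision system uses an LLM to generate code instead)."""
    text = text.lower().strip()
    # e.g. "take a measurement every minute" / "every 5 seconds"
    m = re.search(r"every (?:(\d+)\s*)?(second|minute|hour)s?", text)
    if "measurement" in text and m:
        n = int(m.group(1)) if m.group(1) else 1
        return {"action": "measure", "interval_s": n * UNIT_SECONDS[m.group(2)]}
    if "increase the temperature" in text:
        return {"action": "set_temperature", "direction": "up"}
    return {"action": "unknown", "raw": text}

print(parse_command("Take a measurement every minute"))
# → {'action': 'measure', 'interval_s': 60}
```

A keyword parser like this only covers fixed phrasings; the appeal of an LLM‑based assistant such as Vision is that it can handle open‑ended natural language rather than a hand‑written grammar.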
The narrator framed Vision's benefit as reducing routine, time‑consuming tasks, saying the assistant "serves as a valuable collaborator, enabling scientists to spend time on what matters most: discovery." The recording did not provide technical performance metrics, error rates, safety protocols, or details about personnel roles, and no additional speakers or external experts are cited in the transcript.
Researchers' next steps, as described in the recording, focus on expanding Vision to additional instruments; the presentation did not specify a timetable, funding sources, or regulatory or safety oversight measures.

