Mind-Reading Tech at UT Austin Raises Ethical Questions

Researchers at the University of Texas at Austin paired a brain scanner with an AI language model to glean the gist of a person’s thoughts, raising concerns about the ethical use of brain data.

(TNS) — Devices that can read our minds are a step closer to becoming reality.

Researchers at the University of Texas at Austin reported that they successfully used a brain scanner paired with an AI language model to glean the gist of a person’s thoughts and translate them into words. It marks the first noninvasive technique able to translate thoughts into continuous speech.

Their findings were published in Nature Neuroscience on May 1. The team used functional magnetic resonance imaging (fMRI), a noninvasive method of measuring brain activity, along with the predictive abilities of a large language model, to roughly translate brain recordings into sentences.

The three volunteers in the study each had their brain activity measured in an fMRI scanner. During the scanning sessions, each listened to a total of 16 hours’ worth of podcasts and radio shows. From the scan data, the researchers created a map of how each participant’s brain responded to language.
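To make that "map" concrete, here is a minimal sketch of an encoding model of the general kind the article describes: a regression that learns to predict fMRI voxel responses from language features. All of the data, sizes, and variable names below are hypothetical stand-ins; the study's actual pipeline is far more involved.

```python
# Conceptual sketch of an "encoding model": predict fMRI voxel responses
# from language features of the words a participant heard. All data here
# are synthetic stand-ins, not the study's real recordings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500   # fMRI volumes recorded while listening to stories (hypothetical)
n_features = 256     # hypothetical language-model embedding size
n_voxels = 1000      # brain locations measured by the scanner (hypothetical)

# Stand-ins for (a) language features of the heard words and
# (b) the measured brain responses at each scan time point.
story_features = rng.standard_normal((n_timepoints, n_features))
brain_responses = story_features @ rng.standard_normal((n_features, n_voxels))
brain_responses += 0.5 * rng.standard_normal((n_timepoints, n_voxels))

# The participant's "map": a regression from language features to voxel activity.
encoding_model = Ridge(alpha=1.0).fit(story_features, brain_responses)
print("Fit R^2 on the training stories:", encoding_model.score(story_features, brain_responses))
```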

Participants were then asked to imagine telling a story, which the decoder translated with some degree of accuracy. In the study’s most surprising finding, the decoder was able to convey the general storyline of silent movie clips the participants watched. The result suggests that the technique can capture not just verbalized thoughts, but also nonverbal ideas.
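The decoding step can be pictured as a guess-and-check loop: a language model proposes candidate word sequences, the encoding model predicts the brain activity each candidate would evoke, and the candidates whose predictions best match the recorded scan are kept. The toy sketch below illustrates that loop under those assumptions; every function and value is a placeholder, not the study's actual decoder.

```python
# Toy sketch of a guess-and-check decoding loop (a simple beam search):
# propose continuations, predict the brain response each would evoke,
# and keep the candidates that best match the recorded activity.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "dog", "ran", "home", "quickly", "and", "barked"]

def propose_continuations(prefix, k=3):
    # Stand-in for a large language model's next-word proposals.
    return [prefix + [word] for word in rng.choice(vocab, size=k, replace=False)]

def predict_response(candidate):
    # Stand-in for the encoding model: map a word sequence to a predicted
    # fMRI response vector (here, just a deterministic pseudo-random vector).
    seed = abs(hash(" ".join(candidate))) % (2**32)
    return np.random.default_rng(seed).standard_normal(50)

recorded_response = rng.standard_normal(50)   # stand-in for the real scan

beam = [[]]                                    # start with an empty transcript
for _ in range(6):                             # decode six words
    candidates = [c for prefix in beam for c in propose_continuations(prefix)]
    # Keep the candidates whose predicted response best matches the recording.
    candidates.sort(key=lambda c: np.linalg.norm(predict_response(c) - recorded_response))
    beam = candidates[:2]

print("Best guess:", " ".join(beam[0]))
```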

“I think we are decoding something that is deeper than language,” said Dr. Alexander Huth, a co-author of the study, at a press briefing.

BRAIN-READING TECHNOLOGY'S THREAT TO MENTAL PRIVACY

The technology’s modest but significant progress is raising ethical concerns.

“While this technology is in its infancy, it’s very important to regulate what brain data can and cannot be used for,” said Jerry Tang, another co-author of the study, in an interview with BBC Science Focus.

It remains to be seen whether the technology could one day account for the subtleties of language, like tone and context, or the unique ways different individuals process language. Even if those capacities are developed, another question is whether it would be ethical to apply the technology in, for example, law or health care. Though still a work in progress, brain-decoding devices could one day help people who have lost the ability to speak, such as stroke survivors.

Policymakers may not have long to ponder that question. The development of brain sensor technology from the likes of Meta and Snap means that the last bastion of privacy — the inside of our own noggins — could soon be violable.

“I’m not calling for panic, but the development of sophisticated, non-invasive technologies like this one seems to be closer on the horizon than we expected,” bioethicist Gabriel Lázaro-Muñoz at Harvard Medical School in Boston told Nature. “I think it’s a big wake-up call for policymakers and the public.”

©2023 Quartz Media Inc. Distributed by Tribune Content Agency, LLC.