To many, the idea of typing with your thoughts may sound like the stuff of sci-fi movies, but Facebook has been financing research on this endeavor for the last couple of years. And it’s getting somewhere.
This week, Facebook’s research division offered an update on its bold scientific ambition — an ambition that some may believe has the potential to extend Facebook’s sprawling social networking tentacles to the inside of our very minds. What do you mean you didn’t sign up for that?
The company’s plan for a non-invasive, wearable, brain-reading computer interface is making progress following recent research at the University of California, San Francisco (UCSF) on human volunteers. It could ultimately allow those who have lost the ability to vocalize words to instead communicate in real time through their thoughts, giving them a new lease on life.
An update on the work, published in Nature on Tuesday, reveals that the team of Facebook-backed engineers has been able to develop so-called “speech decoders” capable of understanding what a person wants to say through analysis of their brain signals.
“Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly using residual eye movements or muscle twitches to control a computer interface,” speech neuroscientist Eddie Chang, MD, who is working on the research, said in a release. “But in many cases, information needed to produce fluent speech is still there in their brains.” The technology being developed aims to let them express it.
Creating a brain-computer interface
To reach its goal of creating an effective and reliable brain-computer interface, Chang’s team, together with postdoctoral researcher David Moses, PhD, conducted research using electrodes implanted into the brains of three volunteers at the UCSF Epilepsy Center.
Experiments were geared toward developing a method of instantly identifying the volunteers’ spoken responses based solely on their brain activity. After much effort, the researchers reached a point where they could see — on a PC screen — a word or phrase derived from brain activity as the participant spoke it.
For now, the technology can recognize only a very limited number of words, but Moses said that in future studies “we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity.”
Chang said that for years, his lab “was mainly interested in fundamental questions about how brain circuits interpret and produce speech,” adding, “With the advances we’ve seen in the field over the past decade, it became clear that we might be able to leverage these discoveries to help patients with speech loss, which is one of the most devastating consequences of neurological damage,” whether through brainstem stroke, spinal cord injury, neurodegenerative disease, or some other condition.
Facebook AR glasses
Technology emerging from the research could also be incorporated into the AR glasses being developed by Facebook. Andrew Bosworth, Facebook vice president of AR/VR, tweeted on Tuesday that the research could help it to build “a non-invasive wearable device that lets people type just by imagining what they want to say,” adding, “Our progress shows real potential in how future inputs and interactions with AR glasses could one day look.”
The company said that its brain-reading technology is unlikely to be ready anytime soon, describing its current state as “bulky, slow, and unreliable,” but added that it would continue to work on it as it has “significant” potential.
Facebook isn’t the only high-profile tech firm to take an interest in the squishy gray matter inside our craniums. SpaceX and Tesla founder Elon Musk has also found the time to launch Neuralink, a company that aims to better understand and treat brain disorders, as well as to “preserve and enhance your own brain” to keep pace with artificial intelligence (A.I.).