Researchers at Stanford University and MIT have developed a brain-computer interface that can accurately interpret and give voice to what people are thinking.

Why It Hits: The brain-computer interface market is heating up, with companies like Neuralink, Merge Labs, and Synchron all attracting major funding. These startups primarily focus on interpreting thoughts to control devices, but giving voice to our “inner voice” could be a multi-billion-dollar game-changer.

Behind The Code: The researchers shared findings from their long-running clinical trial, BrainGate2, in the journal Cell.

Some highlights:

  • The study focused on participants who had lost the ability to speak due to conditions like ALS or stroke, implanting electrodes in their brains and using AI to decode their physically attempted speech.

  • The researchers were successful, but participants found physically attempting speech tiring, so the study shifted to see whether the system could decode words that participants simply thought: their inner voice.

  • Eventually, the system could interpret these words with almost 98% accuracy in some cases, allowing participants like Casey Harrell, who has ALS, to converse with friends and family using a deepfake of his own voice (a rough sketch of this decode-and-voice pipeline follows below).
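
For the technically curious, here’s the pipeline in very broad strokes: a decoder turns windows of neural activity into words, and a voice clone reads them aloud. The Python below is purely a toy sketch of that idea; the 128-dimensional features, the tiny vocabulary, and the `decode_step` and `speak` functions are our own illustrative stand-ins, not the study’s actual models.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

VOCAB = ["hello", "water", "help", "yes", "no"]  # tiny stand-in vocabulary
W = rng.normal(size=(len(VOCAB), 128))           # pretend "trained" decoder weights

def decode_step(features):
    """Map one window of neural features (128-dim here) to the likeliest word."""
    scores = W @ features                 # linear readout of the neural signal
    return VOCAB[int(np.argmax(scores))]  # pick the highest-scoring word

def speak(text, voice="participant_voice_clone"):
    """Stand-in for the text-to-speech stage that re-creates the user's own voice."""
    print(f"[{voice}] {text}")

# Pretend one window of brain activity just arrived from the implant:
speak(decode_step(rng.normal(size=128)))
```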

The Future: Of course, the tech raised questions about whether it could capture more words than participants actually intended to say out loud, since some people’s minds run on a constant monologue of thoughts, ideas, and observations. Spoiler alert: it did.

So, the researchers developed two solutions: blocking “inner speech” so that only physically attempted speech was decoded, and adding a “thought password” that let participants turn inner-voice reading on and off. For the password, they chose “Chitty Chitty Bang Bang,” and both approaches were successful.
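
Here’s a toy Python sketch of how a “thought password” gate could work in principle: decoded words pass through only after the unlock phrase is detected. The `gated_decoder` function and its word-stream input are hypothetical simplifications, not the study’s implementation.

```python
from collections import deque

# Unlock phrase reported in the study; everything else here is hypothetical.
PASSWORD = ("chitty", "chitty", "bang", "bang")

def gated_decoder(word_stream):
    """Yield decoded inner-speech words only while the gate is unlocked.

    `word_stream` stands in for words an upstream neural decoder has already
    produced; thinking the password toggles decoding on and off.
    """
    recent = deque(maxlen=len(PASSWORD))  # sliding window of the latest words
    unlocked = False
    for word in word_stream:
        recent.append(word.lower())
        if tuple(recent) == PASSWORD:
            unlocked = not unlocked       # the password flips the gate
            recent.clear()
            continue
        if unlocked:
            yield word                    # voiced only while the gate is open

# Private thoughts before the password stay private; words after it get voiced.
stream = ["private", "thought", "chitty", "chitty", "bang", "bang", "hello", "there"]
print(list(gated_decoder(stream)))        # -> ['hello', 'there']
```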

Prediction: One day, we may all be able to type on our phone by simply thinking at it.
