By Lauren Howells.

Researchers at MIT have developed a device that can ‘hear’ words that users are thinking, without them having to say the words out loud.

Known as AlterEgo, the device uses electrodes to pick up the tiny neuromuscular signals that occur in the wearer's face and jaw when they say words in their mind without vocalising them. These signals are undetectable to the human eye.
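As a rough illustration of how such signals might be picked out of an electrode stream, the sketch below applies a windowed root-mean-square threshold to flag bursts of muscle activity. This is a hypothetical simplification, not the team's actual pipeline; the window size, threshold, and sample values are invented for the example.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of electrode samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_activity(samples, window_size=4, threshold=0.5):
    """Return indices of windows whose RMS amplitude exceeds the threshold,
    a crude stand-in for spotting subvocal muscle activity."""
    active = []
    for i in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[i:i + window_size]) > threshold:
            active.append(i // window_size)
    return active

# Quiet baseline with one burst of 'subvocal' activity in the middle.
signal = [0.01, -0.02, 0.03, 0.01,   # window 0: rest
          0.9, -1.1, 1.0, -0.8,      # window 1: activity
          0.02, 0.01, -0.01, 0.02]   # window 2: rest
print(detect_activity(signal))  # [1]
```

A real system would of course work with multiple electrode channels and far richer features, but the same idea applies: the signals are tiny electrical fluctuations, so the first step is separating activity from rest.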

Receives sound without earphones

The IA (intelligence-augmentation) device, which is clipped around the user’s ear and curves around the jaw to the chin, also includes a pair of “bone-conduction headphones”, enabling wearers to receive information from the device without having to put anything inside their ears.

This means that the device can be used without interrupting conversations or interfering in any other way with what the wearer can hear in the “real world”.

MIT says that this enables users to “undetectably” ask questions and receive answers to “difficult computational problems”.

In one experiment, chess players used the device to silently report their opponents' moves and receive recommended responses from a computer.


Combining AI and humans as a 'second self'

According to MIT’s video, AlterEgo “aims to combine humans and computers, such that computing, the internet, and AI would weave into human personality as a ‘second self’”. Ultimately, the device could be used to communicate with ‘assistants’, such as Amazon’s Alexa, while appearing completely silent to anyone watching.

Arnav Kapur, a graduate student at the MIT Media Lab who led the device’s development, described the team’s initial inspiration:

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

Pattie Maes, Kapur’s thesis advisor and a professor of media arts and sciences, said:

“We basically can’t live without our cell phones, our digital devices. But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.”

Maes described how she, together with her students, had been experimenting with new “types of experience” that would enable people to benefit from all the services and knowledge that smartphones provide, in a way that would allow people to “remain in the present”.

High transcription accuracy

In a usability study conducted by the researchers in which 10 people spent around 15 minutes each customising the device to their own neurophysiology, the system had a transcription accuracy of around 92%. Google’s speech recognition is reported to be at around 95% word accuracy for English.

According to Kapur, the performance of the system should improve with “more training data”, which he says could be collected during its ordinary use.
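One way to picture how a short calibration session and ongoing use could both feed the recognizer is a toy nearest-centroid classifier: calibration stores a few labeled feature vectors per word, and every later correction or confirmed use could add another. This is a hypothetical sketch, not AlterEgo's actual algorithm; the class, the feature vectors, and the words are all invented for illustration.

```python
from collections import defaultdict

class SilentSpeechClassifier:
    """Toy nearest-centroid classifier over per-word feature vectors.

    Hypothetical stand-in for a silent-speech recognizer: each silently
    spoken word is assumed to yield a fixed-length feature vector
    derived from the electrode signals.
    """

    def __init__(self):
        self.examples = defaultdict(list)

    def add_example(self, word, features):
        """Calibration (or later use): store one labeled feature vector."""
        self.examples[word].append(features)

    def _centroid(self, vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def classify(self, features):
        """Return the word whose example centroid is nearest to the input."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        centroids = {w: self._centroid(vs) for w, vs in self.examples.items()}
        return min(centroids, key=lambda w: dist(centroids[w], features))

clf = SilentSpeechClassifier()
clf.add_example("yes", [0.9, 0.1])   # calibration samples
clf.add_example("yes", [1.0, 0.2])
clf.add_example("no", [0.1, 0.8])
print(clf.classify([0.85, 0.15]))  # yes
```

Each additional example nudges a word's centroid toward that user's typical signal, which is the intuition behind Kapur's point: ordinary use keeps supplying labeled data, so accuracy should keep improving after the initial 15-minute calibration.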

Potential uses for disabled and special ops

Uses for the device could be wide-ranging, with Thad Starner, a professor at Georgia Tech’s College of Computing, saying that it could potentially be used to communicate in a high-noise environment, such as the flight deck of an aircraft carrier or somewhere with lots of machinery.

“The other thing where this is extremely useful is special ops,” Starner says.

“There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks?”

Starner also says that AlterEgo could potentially help people whose disabilities prevent them from speaking normally.