In the 21st century, subvocalization technology is experiencing a renaissance: a group of MIT scientists led by Arnav Kapur is developing a headset for controlling a computer without voice. They have managed to almost completely eliminate the acoustic component of transmitting words from person to machine and back. This is not an add-on to existing systems but a fundamentally new interface called AlterEgo.
Unlike a throat microphone (laryngophone), this headset responds not to the mechanical vibration of the skin during speech but to the electrical activity of the muscles and nerves of the larynx, jaw, and tongue. The user does not need to actually open their mouth, draw breath, and pronounce the words; it is enough to "clearly and distinctly" imagine pronouncing the command. A set of electrodes picks up the impulses, and a neural network matches their pattern against a database to decode what the person "said".
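The matching step described above can be illustrated with a toy sketch. Everything here is hypothetical: the signals are synthetic, the command words and feature choice (per-channel RMS energy) are invented for illustration, and a simple nearest-neighbour lookup stands in for AlterEgo's actual trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 4   # the team's reduced electrode count
WINDOW = 250     # samples per decision window (assumed)

def features(signal):
    """Per-channel RMS energy: shape (4, WINDOW) -> (4,)."""
    return np.sqrt(np.mean(signal ** 2, axis=1))

# Toy template database: one prototype feature vector per command word.
templates = {
    "up":     np.array([0.9, 0.2, 0.1, 0.3]),
    "down":   np.array([0.1, 0.8, 0.3, 0.2]),
    "select": np.array([0.3, 0.3, 0.9, 0.1]),
}

def classify(signal):
    """Return the command whose template is nearest to the signal's features."""
    f = features(signal)
    return min(templates, key=lambda w: np.linalg.norm(f - templates[w]))

# Synthesize a noisy "up" utterance and decode it.
clean = templates["up"][:, None] * np.ones((N_CHANNELS, WINDOW))
noisy = clean + 0.05 * rng.standard_normal((N_CHANNELS, WINDOW))
print(classify(noisy))  # prints "up"
```

In practice the feature extraction and matching would both be learned from calibration data, but the structure is the same: signal in, feature pattern out, nearest known pattern wins.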
In the course of the research, Kapur's team reduced the number of sensors from fifteen to just four, determined the optimal sites for their placement, and developed algorithms for analyzing the subvocalization data. The computer's response is also delivered silently, through bone conduction via the bones of the skull. As a result, the experimental headset looks "incomplete", lacking the usual microphone and earpiece, and a dialogue with it resembles mind reading that cannot be overheard from outside.
The system's vocabulary is still tiny, but it is well suited to transmitting short codes, such as chess-move notation. After 15 minutes of individual calibration, volunteers made only 8% errors over 90 minutes of silent communication. This is encouraging enough that Kapur's team is already thinking about a new human-machine interface: why open the user's skull and implant some kind of transmitters to read thoughts when there is a simpler and more painless way?