Humanity is still far from universal brain-computer interfaces, but scientists are making good progress on individual tasks. We can collect a wealth of data about signals in the brain, but how exactly should they be interpreted? Last year, Canadian researchers developed an algorithm that reconstructs the faces people have seen from their brain signals. Recently, their American colleagues developed a method for decoding signals from the brain's auditory cortex into intelligible human speech.
The experiment studied how different people perceive information by ear, as observed through their brains' responses. Five patients were implanted with electrodes to read the signals, after which four performers took turns telling them the same story, word for word, but each with their own intonation, volume, timbre, and so on. The goal was to see how the brain reacts to the meaning of words rather than to their delivery. Computers recorded the signal signatures for analysis, after which the test phase began.
The subjects were then played words, phrases, and expressions from the story they had heard, in no particular order. The observers noted their reactions and collected the signatures of the brain signals in order to match them against the earlier recordings. All of this data, along with the patterns it revealed, was fed into a neural network to train it to draw parallels between words and signals. Then words that were not in the original story were added to the test: the neural network inferred them from indirect data and correctly reproduced their sound in 75% of cases.
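The core idea of the matching step can be illustrated with a toy sketch. This is purely hypothetical code, not the researchers' actual model or data: it fakes "signal signatures" as random vectors and decodes a new recording by finding the closest stored signature, which is the simplest possible stand-in for the pattern-matching the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and 16-dimensional "signal signatures" —
# invented placeholders for the recordings made during the story.
WORDS = ["story", "river", "house", "night"]
signatures = {w: rng.normal(size=16) for w in WORDS}

def record_trial(word, noise=0.3):
    """Simulate a noisy brain-signal recording evoked by hearing a word."""
    return signatures[word] + rng.normal(scale=noise, size=16)

def decode(trial):
    """Match a new recording to the closest stored signature."""
    return min(WORDS, key=lambda w: np.linalg.norm(trial - signatures[w]))

# Test phase: play words in random order and decode the brain's responses.
trials = [(w, record_trial(w)) for w in WORDS * 25]
hits = sum(decode(trial) == word for word, trial in trials)
print(f"decoded correctly: {hits}/{len(trials)}")
```

A real decoder would replace the nearest-neighbor lookup with a trained neural network, which is what lets it generalize to words that never appeared in the training story.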
Now the team is working on a new algorithm that would also recognize the meaning of what a person is thinking about. This is, if not quite mind-reading technology, then something close to it: at minimum, an interface for communicating with people who, due to physiological problems, cannot speak in the usual way.