A team of researchers at the University of California, San Francisco, led by neurosurgeon Edward Chang, successfully applied a new method of decoding the electrocorticogram: a recording of the electrical activity of the cerebral cortex made with electrodes placed directly on its surface.
In the study, four patients with epilepsy who already had electrodes implanted were asked to read and repeat a series of specific sentences aloud while the electrodes recorded their brain activity.
The recordings were then fed into a neural network, which mapped them onto specific speech-related patterns, such as the pronunciation of vowels and consonants or particular mouth movements.
A second neural network then decoded this information, gathered over 30-50 repetitions of the sentences, and attempted to determine what exactly the person had said based on the recorded impulses of their brain.
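The two-stage pipeline described above (a network that extracts speech-related features, followed by one that decodes them into sentences) can be sketched in a much-simplified form. The toy example below is my own illustration, not the study's actual architecture: it stands in for the real neural networks with an identity encoder and a nearest-template classifier, and it replaces real ECoG recordings with synthetic noisy data. It only shows why averaging over many repetitions helps the decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each sentence has a characteristic neural feature
# pattern; each repetition is a noisy version of that pattern.
n_sentences, n_channels, n_reps = 5, 16, 40
templates = rng.normal(size=(n_sentences, n_channels))  # stand-in speech features


def encode(ecog_window, weights):
    """Stage 1 (sketch): map raw ECoG samples to speech-related features."""
    return weights @ ecog_window


def decode(features, templates):
    """Stage 2 (sketch): pick the sentence whose template best matches."""
    dists = np.linalg.norm(templates - features, axis=1)
    return int(np.argmin(dists))


# Averaging the 30-50 repetitions per sentence suppresses recording noise,
# which is one reason repeated readings make decoding tractable.
weights = np.eye(n_channels)  # identity encoder keeps this toy example simple
correct = 0
for s in range(n_sentences):
    reps = templates[s] + rng.normal(scale=0.5, size=(n_reps, n_channels))
    avg_features = encode(reps.mean(axis=0), weights)
    correct += decode(avg_features, templates) == s

accuracy = correct / n_sentences
```

With the noise averaged down over 40 repetitions, the nearest-template decoder recovers every sentence in this synthetic setting; the actual study's accuracy figures come from far more sophisticated recurrent networks trained on real cortical data.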
The result exceeded all expectations: the system identified the spoken words with 97% accuracy. The new system can therefore serve as a benchmark for assessing AI-based recognition of brain signals. Its error rate of 3% is comparable, with a number of caveats, to that of professional human transcribers, whose error rate is about 5%.