Researchers at the Massachusetts Institute of Technology have presented an unusual development that expands the capabilities of intelligent systems. The AI needs only to listen to a short recording of a person's voice in order to determine, with high reliability, what that person looks like.
The system, named Speech2Face, is still a purely research project, and its distribution is limited to academic circles. Formally, it is another kind of generative neural network: through repeated training it learns to map voice data to parameters of a person's appearance, and its accuracy significantly exceeds that of random guessing.
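The published description suggests the core idea is a voice encoder trained to predict the appearance features of the same speaker. As a loose illustration of that training loop, the sketch below fits a model that regresses synthetic "face" feature vectors from synthetic "voice" feature vectors; the dimensions, the data, and the simple linear model are all toy assumptions for clarity, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions; the real system works on spectrograms
# and high-dimensional face embeddings from a pretrained network).
VOICE_DIM, FACE_DIM, N = 64, 16, 200

# Synthetic "dataset": voice features paired with face features that
# depend on them plus noise, standing in for real recordings.
true_map = rng.normal(size=(VOICE_DIM, FACE_DIM))
voices = rng.normal(size=(N, VOICE_DIM))
faces = voices @ true_map + 0.1 * rng.normal(size=(N, FACE_DIM))

# Voice encoder: a single linear layer trained to regress the face
# features of the same speaker (L2 loss, plain gradient descent).
W = np.zeros((VOICE_DIM, FACE_DIM))
lr = 0.1
for _ in range(500):
    pred = voices @ W
    grad = voices.T @ (pred - faces) / N
    W -= lr * grad

# After training, the predictions track the targets far better than
# the zero-knowledge baseline, mirroring the "better than random
# guessing" claim in the article.
residual = np.mean((voices @ W - faces) ** 2)
baseline = np.mean(faces ** 2)
print(residual < 0.1 * baseline)
```

The real system would replace the linear layer with a deep encoder and render an actual face image from the predicted features, but the supervised regression loop above captures the basic trial-and-error fitting the article alludes to.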
Exactly which algorithms the American researchers used is still not known for certain; what is available is the output of the AI itself, which reproduces the key features of the experiment participants' appearance with surprising accuracy. Judging by the images, the system determines from a voice not only a person's gender and age but also their skin color.
Most intriguing is the central question: why create such an AI at all? From the developers' description, it follows that they are currently focused on improving the algorithms and are wary that someone might exploit the "raw" technology for personal gain. At the same time, they need large datasets and extensive testing to improve representativeness, so they must take that risk and turn to the global scientific community for help. It is quite possible that Speech2Face will ultimately serve as a new tool for exposing deepfakes, or, conversely, for generating them.