A neural network has learned to generate realistic photos of people who do not exist

Some analysts say it will begin in 2019, others speak of the "near future", but they agree on one thing: fake "people" will flood the information space. We are talking about virtual models that look indistinguishable from real, living humans but do whatever a programmer tells them to. Nothing good will come of it, and just how convincing these fakes have become can be judged from NVIDIA's latest project.

NVIDIA's researchers used a GAN, a generative adversarial network, which is in fact a tandem of two networks that can operate without human supervision. The first network was given the task of drawing the most realistic human face it could, while the second was assigned the role of critic. After thousands of training passes, the GAN learned to produce photos that none of the people surveyed suspected were fake.
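To make the "artist versus critic" idea concrete, here is a minimal GAN training loop in PyTorch. It is only a conceptual sketch, not NVIDIA's actual model: the network sizes, image resolution, and the synthetic stand-in dataset are assumptions chosen to keep the example self-contained.

```python
# Minimal GAN sketch: one network generates, the other criticizes.
# Toy sizes and random stand-in data; not NVIDIA's architecture.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # assumed toy dimensions

# Generator: maps random noise to a fake "image"
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator ("critic"): scores how real an image looks
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(512, img_dim) * 2 - 1  # stand-in for a dataset of real photos

for step in range(1000):  # the "thousands of passes" mentioned above
    # Train the critic: label real images 1, generated images 0
    real = real_images[torch.randint(0, 512, (32,))]
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: try to make the critic call its fakes real
    g_loss = loss_fn(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

As the two losses push against each other, the generator gradually learns to produce samples the critic can no longer tell apart from the real data.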

The training data is an effectively unlimited set of photographs of real, living people, so the network has absorbed a wealth of small details and uses them in its output. It can paint hundreds of faces wearing glasses yet differing in hairstyle, skin texture, wrinkles and scars, and it can add signs of age, cultural and ethnic features, emotions, mood, and the effects of external factors, from wind-blown hair to an uneven tan. A minimal sketch of how this variation typically arises follows below.
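The article does not describe the mechanism, but in GANs this kind of variation usually comes from the random latent vector fed to the generator: different latents give different faces, and interpolating between two latents blends their attributes. The snippet below is a sketch under that assumption; the small untrained `G` merely stands in for a trained generator like the one in the previous example.

```python
# Latent-space sampling and interpolation: the usual source of attribute
# variation (hairstyle, age, glasses, ...) in GAN outputs. Sketch only;
# `G` is a placeholder, not a released NVIDIA model.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())  # pretend this is trained

z_a, z_b = torch.randn(latent_dim), torch.randn(latent_dim)  # two random "identities"

# Sweep from face A to face B: intermediate latents yield blended faces.
for t in torch.linspace(0.0, 1.0, steps=5):
    z = (1 - t) * z_a + t * z_b
    image = G(z)  # a flattened 28x28 "face" in this toy setup
    print(f"t={t.item():.2f}", image.shape)
```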

A year ago the same NVIDIA team ran a similar experiment, but back then the generated faces were too crude and the fakes were spotted immediately. Today the network works incomparably better and draws faces in high resolution. Nothing prevents someone from ordering it to invent, say, a non-existent illegitimate child of a celebrity and using the picture to stage a provocation; the family resemblance would be entirely convincing.