The OpenAI laboratory has created a neural network that it considers very dangerous.

The research laboratory OpenAI has released the full version of its GPT-2 neural network, designed to generate coherent text on arbitrary topics. The model was ready back in February, but the developers were so struck by what their brainchild produced that they were wary of releasing it into the world. Instead, they published several stripped-down versions of the AI to see how the Internet community would respond to them and, most importantly, how they would be used.

The GPT-2 neural network was trained on 8 million texts from the Internet and can quickly and accurately grasp the gist of a passage in order to draw conclusions and continue the text. A catchy headline, for example, is enough for it to write the body of a "sensational" news story that many readers would take for the truth. The model can handle literary devices and technical writing, compose poetry, and hold a conversation, producing detailed answers to questions.
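The core mechanism behind this kind of continuation is autoregressive generation: predict a likely next word from the text so far, append it, and repeat. The sketch below illustrates that loop with a toy bigram model; the model itself is a deliberately simplified stand-in, not GPT-2 or any OpenAI code.

```python
import random

def train_bigram(corpus):
    """Record, for each word, the words observed to follow it in the corpus."""
    model = {}
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def continue_text(model, prompt, length=5, seed=0):
    """Autoregressive loop: repeatedly sample a next word and append it."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # unseen word: nothing to predict, so stop
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(continue_text(model, "the cat", length=4))
```

GPT-2 replaces the bigram counts with a large transformer conditioned on the entire preceding context, which is what lets it stay on topic across whole paragraphs rather than just word pairs.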

What alarmed experts is how convincing GPT-2's texts look. The AI cannot lie in the literal sense and has no malicious intent, but it skillfully juggles words into weighty-sounding phrases. Of course, the neural network has its share of weaknesses: it cannot sustain a long plot and works only with short texts, and it may make a gross mistake by misinterpreting the name of an object it does not know.

In the end, the decision followed the principle of fighting fire with fire: instead of hiding GPT-2, the developers gave everyone full access to the AI so that anyone could test the neural network for themselves. The more people become familiar with it, the more knowledgeable, and therefore less vulnerable, they will become, and using AI for selfish purposes will no longer have such destructive consequences.