Scientists can't replicate AI research, and that's a serious problem

At a recent meeting of the Association for the Advancement of Artificial Intelligence (AAAI), researcher Odd Erik Gundersen presented a report whose core message is that the field is driving itself into a dangerous dead end. It turns out that most existing AI systems violate a fundamental principle of science: reproducibility (replication) of their own results. There are two reasons for this, and neither has an obvious solution yet.

Replication here means obtaining the same results from an AI system when it is given identical tasks. A user wants to be sure that the control system of their laptop, or of a nuclear reactor, is not only effective but also predictable. So far this holds only for very simple, stereotyped tasks, and deviations are already appearing in the behavior of real deployed systems.
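The idea can be illustrated with a minimal sketch (hypothetical code, not from Gundersen's report): a stochastic "training step" only replicates when its source of randomness is pinned to a fixed seed, which is exactly the kind of detail that goes missing when code is not published.

```python
import random

def train_step(seed=None):
    """Toy stand-in for a stochastic AI training run.

    Returns a "model score" built from random weight updates.
    Hypothetical example for illustration only.
    """
    rng = random.Random(seed)
    # Simulate stochastic training: accumulate ten random updates.
    return sum(rng.random() for _ in range(10))

# Without a fixed seed, two "identical" runs diverge:
a = train_step()
b = train_step()

# With a fixed seed, the run replicates exactly:
c = train_step(seed=42)
d = train_step(seed=42)
print(c == d)  # True: same seed, same task, same result
```

In practice, replicating a real system requires far more than a seed (exact data, library versions, hardware behavior), but the principle is the same: every hidden source of variation must be recorded and shared.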

The first reason: all modern AI systems learn continuously, changing how they operate, along with their tactics and strategy. Each instance becomes individual, so it must be deliberately retrained to reproduce behavior under new conditions. But that is extremely difficult because of the second reason: the source code and algorithms of almost all systems are kept closed by their developers.

Gundersen's report indicated that of the 400 AI systems presented to the community over the past two years, only 6% came with an algorithm that could be examined and studied. Fewer than a third of the programs made their intermediate data available, which makes debugging and tuning incredibly difficult. Gundersen acknowledges the right of AI developers to keep their intellectual work secret, but urges everyone to consider working together. Otherwise, the future of AI looks very hazy.