A group of mathematicians argued that unprovable mathematics is an insurmountable barrier for machine-learning algorithms. Now they have managed to prove it.

Not everything in the world is knowable. At the very least, this applies to artificial intelligence and machine-learning algorithms. At first glance, such a statement may sound like outright heresy in our age of progress - but, alas, it is the case. An international group of mathematicians and AI researchers has found that, despite the seemingly limitless potential of machine learning, even the most advanced algorithms are bound by mathematical constraints.

"The advantages of mathematics are sometimes built on the fact that … in simple terms, not everything is provable," write the researchers, led by computer scientist Shai Ben-David of the University of Waterloo. Machine learning, they argue, shares this fate.

How did they come to this conclusion? Mathematical limits of this kind are most often associated with the famous Austrian mathematician Kurt Gödel, who in the 1930s proved his incompleteness theorems - two results showing the limitations of formal arithmetic (and, by extension, of any formal system expressive enough to encode that arithmetic: 0 and 1, addition and multiplication, the natural numbers). The new research shows that machine learning is locked into the same kind of framework.

Right now, AI is quite literally limited by unprovable mathematics. In other words, artificial intelligence cannot solve a problem whose answer cannot be pinned down as "true" or "false" within the underlying formal system. Mathematician Amir Yehudayoff of the Technion - Israel Institute of Technology admitted, in an interview with the journal Nature, that this came as a surprise to the scientists. The study was built around a website scenario: an algorithm has to show targeted ads to the visitors who frequent the page most often, without knowing in advance which visitors will turn up. This is a so-called "estimating the maximum" (EMX) problem.
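The ad-targeting setup can be sketched informally in code. This is a toy illustration only, not the paper's construction (the undecidability there comes from infinite families of sets); the visitor names and candidate audience sets below are invented:

```python
# Toy sketch of the "estimating the maximum" (EMX) setting: from a sample
# of site visitors drawn from an unknown distribution, pick the candidate
# audience set that covers the largest share of the observed traffic.

def emx_by_sample(sample, candidate_sets):
    """Return the candidate set covering the largest fraction of the sample."""
    def coverage(s):
        return sum(1 for visitor in sample if visitor in s) / len(sample)
    return max(candidate_sets, key=coverage)

# Observed visits: visitor 'a' turns out to frequent the page most often.
sample = ['a', 'a', 'b', 'a', 'c', 'a', 'a', 'b', 'a', 'a']
candidates = [frozenset({'a'}), frozenset({'b', 'c'}), frozenset({'c'})]

print(sorted(emx_by_sample(sample, candidates)))  # prints ['a']
```

With a finite sample and finitely many candidate sets this is easy; the paper's point is that for certain infinite families, whether such a near-best set can be learned at all cannot be decided from the standard axioms.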

According to the researchers, the roots of the problem lie in the structure of the learning framework known as "probably approximately correct" learning, or PAC. The problem also turns out to be tied to the continuum hypothesis, a long-standing question about infinite sets. Like the statements at the heart of Gödel's theorems, the continuum hypothesis can be neither proved nor refuted within the standard axioms of mathematics - it falls outside the reach of a simple true/false verdict. Hypothetically, even the most perfect algorithm hits this dead end and cannot get out of it. Mathematicians acknowledge that unprovability is a burden machines will now have to bear. Lev Reyzin, who was not involved in the study, notes that such results "may be able to teach AI humility, even if it continues to revolutionize the world around us."
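For context on the PAC framework mentioned above, the classic guarantee for a *finite* hypothesis class can be stated as a simple sample-size formula. This is textbook background on PAC learning, not the EMX result itself:

```python
import math

# Standard PAC sample-complexity bound for a finite hypothesis class H in
# the realizable case: m >= (ln|H| + ln(1/delta)) / eps samples suffice
# for a consistent learner to be "probably (with probability 1 - delta)
# approximately (error at most eps) correct".

def pac_sample_size(num_hypotheses, eps, delta):
    """Samples sufficient to PAC-learn a finite class (realizable case)."""
    return math.ceil((math.log(num_hypotheses) + math.log(1 / delta)) / eps)

print(pac_sample_size(num_hypotheses=1000, eps=0.05, delta=0.01))  # prints 231
```

For finite (and more generally for well-behaved) classes, learnability is cleanly decided by bounds like this one; the surprise in the new work is that for the EMX generalization the answer can depend on the continuum hypothesis.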

Vasily Makarov