New Technology From IBM Has Made It Possible To Accelerate AI Training By 4 Times - - Alternative View

Anonymous

The computational efficiency of artificial intelligence is a double-edged sword. A neural network needs to train quickly, but the faster it trains, the more energy it consumes, to the point where training can become simply uneconomical. IBM, however, may offer a way out: the company has demonstrated new AI training methods that allow networks to learn several times faster at the same level of resource and energy cost.

To achieve these results, IBM moved away from conventional 32-bit and 16-bit computation, developing an 8-bit technique along with a new chip designed to run it.

All of these developments were presented at NeurIPS 2018 in Montreal, where the company's engineers described two advances. The first, on deep neural network training with 8-bit floating point numbers, describes how they reduced the arithmetic precision required by applications from 32 bits to 16 and then preserved model accuracy down at 8 bits. The researchers claim the technique speeds up the training of deep neural networks by 2-4 times compared with 16-bit systems. The second, on 8-bit in-memory multiplication with projected phase-change memory, presents a method that compensates for the low precision of analog AI chips, allowing them to consume 33 times less power than comparable digital AI systems.
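To give a feel for what 8-bit floating point means in practice, here is a minimal sketch that simulates rounding values to a toy 8-bit float format (1 sign bit, 5 exponent bits, 2 mantissa bits, a split IBM's paper discusses). This is an illustrative simulation of the number format only, not IBM's actual training kernels, and the helper name `quantize_fp8` is our own:

```python
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round each value to the nearest number representable in a toy
    8-bit float format: 1 sign bit, exp_bits exponent, man_bits mantissa.
    Illustrative only -- real FP8 hardware also handles rounding modes,
    overflow, and special values."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1          # IEEE-style exponent bias (15 here)
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(mag)
    nz = mag > 0
    # Exponent of each value, clamped to the representable range
    # (values below the minimum exponent behave like subnormals).
    e = np.clip(np.floor(np.log2(mag[nz])), 1 - bias, bias)
    # Spacing between adjacent representable values at that exponent.
    scale = 2.0 ** (e - man_bits)
    out[nz] = np.round(mag[nz] / scale) * scale
    return sign * out

# With only 2 mantissa bits, values near 1.0 snap to a grid of spacing 0.25:
vals = np.array([1.0, 1.2, -0.3])
print(quantize_fp8(vals))  # [ 1.      1.25   -0.3125]
```

The coarse spacing illustrates why training at 8 bits is hard: gradients smaller than the local grid spacing round away to neighboring values, which is why reduced-precision training schemes pair low-bit arithmetic with techniques such as careful accumulation and scaling.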

Vladimir Kuznetsov