Artificial Intelligence DeepMind Has Learned To Come Up With Photographs - Alternative View

The British company DeepMind, which became part of Google in 2014, is constantly working to improve artificial intelligence. In June 2018, its employees presented a neural network capable of reconstructing 3D scenes from 2D images. In October, the developers went further and created BigGAN, a neural network that generates images of nature, animals, and objects that are difficult to distinguish from real photographs.

As with other image-synthesis projects, the technology is based on a generative adversarial network (GAN). Recall that it consists of two parts: a generator and a discriminator. The generator creates images, and the discriminator evaluates how closely they resemble real sample images.
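
To make the generator/discriminator interplay concrete, here is a minimal sketch of an adversarial training step in PyTorch. It only illustrates the general principle described above, not DeepMind's BigGAN architecture; all layer sizes, learning rates, and names are illustrative assumptions.

```python
# Minimal GAN sketch (illustrative only, not BigGAN): the generator maps
# random noise to an image, the discriminator scores how "real" an image looks.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=128, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 512),
            nn.ReLU(),
            nn.Linear(512, img_pixels),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 1),  # single "realness" logit
        )

    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, 64 * 64 * 3) * 2 - 1  # stand-in for a real batch
noise = torch.randn(16, 128)

# Discriminator step: learn to separate real images from generated ones.
fake_images = gen(noise).detach()
d_loss = loss_fn(disc(real_images), torch.ones(16, 1)) + \
         loss_fn(disc(fake_images), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as "real".
g_loss = loss_fn(disc(gen(noise)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```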

“In this work, we wanted to blur the line between AI-generated images and photographs from the real world. We found that existing generation methods are sufficient for this,” the developers write.

Different sets of images were used to teach BigGAN to create pictures of butterflies, dogs, and food. Training was first based on the ImageNet database and then on the larger JFT-300M set of 300 million images divided into 18,000 categories.
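
Models of this kind generate a picture of a requested category by conditioning the generator on a class label in addition to the noise vector. The sketch below shows that general idea; the class count, layer sizes, and names are illustrative assumptions, not the actual BigGAN configuration.

```python
# Sketch of class-conditional generation: the generator receives random noise
# plus a learned embedding of the requested category (e.g. "butterfly"),
# so one network can produce butterflies, dogs, or food on demand.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=128, num_classes=1000, embed_dim=128,
                 img_pixels=64 * 64 * 3):
        super().__init__()
        self.class_embedding = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, img_pixels),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # Concatenate noise with the embedding of the requested class.
        cond = torch.cat([z, self.class_embedding(labels)], dim=1)
        return self.net(cond)

gen = ConditionalGenerator()
z = torch.randn(4, 128)
labels = torch.tensor([0, 1, 2, 3])  # hypothetical category indices
images = gen(z, labels)              # shape: (4, 64*64*3)
```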

Training BigGAN took two days on 128 of Google's Tensor Processing Units, chips designed specifically for machine learning.

Professors from Heriot-Watt University in Scotland also participated in the development of the neural network. The technology is described in detail in the paper “Large Scale GAN Training for High Fidelity Natural Image Synthesis”.

In September, researchers at Carnegie Mellon University used generative adversarial networks to build a system that transfers facial expressions from one person's face to another's.

Ramis Ganiev