Artificial Intelligence in 2019: Is the Terminator Already Here, or Not Yet? - Alternative View


There is a funny psychological phenomenon: repeat any word enough times and it eventually loses all meaning, turning into a wet rag, into phonetic nothing. For many of us, the phrase "artificial intelligence" lost its meaning long ago. AI is everywhere in technology right now, powering everything from TVs to toothbrushes, yet the label rarely means what it should. And it doesn't have to be that way.

Artificial intelligence: good or evil

While the phrase "artificial intelligence" is undeniably misused, the technology is doing more than ever, both good and bad. It is used in healthcare and in combat; it helps people write music and books; it evaluates your creditworthiness and improves the photos taken with your phone. In short, it makes decisions that affect your life, whether you like it or not.

It can be hard to cut through the hype with which tech companies and advertisers talk about AI. Take Oral-B's Genius X toothbrush, one of many devices unveiled at this year's CES that touted alleged AI capabilities. On closer inspection, it becomes clear that the brush simply gives you feedback on whether you are brushing your teeth for the right amount of time and in the right places. There are some clever sensors that can tell where the brush is in your mouth, but calling that artificial intelligence is bullshit, nothing more.

The hype breeds misunderstanding. The press can inflate and exaggerate any research, sticking the Terminator onto any vague AI story. This often leads to confusion about what artificial intelligence actually is. It can be a tricky topic for non-experts, and people often mistakenly equate modern AI with the version they know best: the sci-fi vision of a conscious computer many times smarter than humans. Experts call that particular image of AI artificial general intelligence, and if we ever manage to create something like it, it will be in the distant future. Until then, exaggerating the intelligence or abilities of AI systems helps no one.

It is much better to talk about "machine learning" than about artificial intelligence. Machine learning is a subfield of artificial intelligence that encompasses almost all of the techniques having the greatest impact on the world today (including what's called deep learning). The phrase has none of the mystique of "AI", but it is far more useful in explaining what the technology does.

How does machine learning work? Over the past few years you and I have had dozens of explanations to read, and the most important distinction I found lies right in the name: machine learning is everything that allows computers to learn on their own. But what that really means is a much bigger question.


Let's start with a problem. Say you want to create a program that can recognize cats. You could write it the old-fashioned way, programming explicit rules like "cats have pointy ears" and "cats are fluffy". But then what does the program do when you show it a picture of a tiger? Programming every rule would be time-consuming, and you would have to explain many slippery concepts such as fluffiness and mottling. Better to let the machine teach itself. So you give it a huge collection of cat pictures, and it combs through them to find its own patterns in what it sees. At first it connects the dots mostly at random, but you test it over and over, keeping the best versions. Over time it gets quite good at telling what is a cat and what is not.
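To make that loop concrete, here is a minimal sketch in Python with scikit-learn. It is an illustration under assumptions, not the method of any real product: synthetic 8x8 "images" stand in for the cat photos, and a hidden brightness pattern plays the role of pointy ears.

```python
# A minimal sketch of "let the machine teach itself" (illustrative
# assumptions: synthetic data stands in for real cat photos).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake dataset: 1000 flattened 8x8 "images". "Cats" (label 1) have a
# slightly brighter top half -- a pattern we never state as a rule.
n, side = 1000, 8
X = rng.normal(size=(n, side * side))
y = rng.integers(0, 2, size=n)
X[y == 1, : side * side // 2] += 1.0  # the hidden "pointy ears" signal

# Held-out test set: the "test it over and over" step that lets us
# keep the versions of the model that actually generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the model finds the pattern on its own
print("accuracy:", model.score(X_test, y_test))
```

Nowhere in this code is there a rule about ears or fur: the model recovers the pattern from examples alone, which is exactly what separates machine learning from hand-written rules.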

So far, this is all fairly predictable. In fact, you've probably read a similar explanation before - sorry about that. What matters more is something else: what are the side effects of training a decision-making system this way?


The biggest advantage of this method is the most obvious: you never have to explicitly program the system. Of course, you will work hard to improve how it processes data and to help it find smarter ways to extract information, but you never tell it what to look for. That means it can find patterns people might miss or never think to look for. And since all the program needs is data - ones and zeros - it can be trained to do all kinds of tasks, because the world is literally teeming with data. With the hammer of machine learning in hand, the digital world is full of nails ready to be hit.

But now consider the disadvantages. If you are not the one teaching the computer, how do you know how it makes its decisions? Machine learning systems cannot explain their reasoning, which means your algorithm might be working well for the wrong reasons. Likewise, since all the computer knows is the data you feed it, it can pick up biases, or it may only be good at narrow tasks that resemble the data it has seen before. It doesn't have the common sense you would expect from a person. You can create the world's best cat-recognition software, but it will never tell you that kittens cannot ride motorcycles, or that a cat is more likely to be named "Koschey the Immortal" or "Alexei Tolstoy".
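The "wrong reasons" failure is easy to demonstrate. The sketch below is my own illustration under assumptions, not an example from the article: a spurious cue (say, every training cat was photographed on a sofa) tracks the label perfectly during training, then flips at deployment, and accuracy collapses without the model ever being able to say why.

```python
# A sketch of "working well for the wrong reasons" (illustrative
# assumptions: feature 0 is a spurious cue, feature 1 the true signal).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
y = rng.integers(0, 2, size=n)

# Training data: the spurious cue is clean, the true signal is noisy,
# so the model quietly leans on the shortcut.
X_train = np.column_stack([
    y + rng.normal(scale=0.1, size=n),  # spurious cue (the "sofa")
    y + rng.normal(scale=1.0, size=n),  # true signal (the actual cat)
])
model = LogisticRegression(max_iter=1000).fit(X_train, y)
print("accuracy in training conditions:", model.score(X_train, y))

# Deployment data: the shortcut reverses while the true signal stays.
X_shifted = np.column_stack([
    (1 - y) + rng.normal(scale=0.1, size=n),  # sofa cue now misleads
    y + rng.normal(scale=1.0, size=n),
])
print("accuracy after the shift:", model.score(X_shifted, y))
print("confidence on a shifted example:",
      model.predict_proba(X_shifted[:1]).max())
```

The model's weights never announce which cue they rely on; the failure only shows up when the world stops resembling the training data.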

Teaching computers to learn on their own is a brilliant trick. And like all tricks, it involves deception. AI systems have intelligence, if you insist on calling it that. But it is not an organic mind, and it does not play by the same rules as humans. You might as well ask: how smart is a book? What experience is encoded in a frying pan?

Where are we now with our artificial intelligence? After years of headlines trumpeting the next big breakthrough (which has yet to arrive, though the headlines keep coming), some experts conclude that we have reached a plateau of sorts. But that doesn't mean progress has stopped. On the research side, there is a tremendous amount left to explore with the knowledge we already have, and on the product side we have only seen the tip of the algorithmic iceberg.

Kai-Fu Lee, a venture capitalist and former artificial intelligence researcher, describes the current moment as the "era of implementation" - the point at which technology begins to "spill out of the laboratory into the world." Benedict Evans compares machine learning to relational databases, which made fortunes in the '90s and changed entire industries, yet are so mundane that the comparison will bore you if your eyes are dazzled by the grandeur of cinematic AI. We are now at the stage where AI is about to become normal, habitual. Very soon, machine learning will be in everyone's life, and we will stop paying attention to it.

But so far this has not happened.

At the moment, artificial intelligence - machine learning - is still something new that often goes unexplained or under-examined. In the future, though, it will become so familiar and mundane that you will stop noticing it.

Ilya Khel