We Are Afraid of Artificial Intelligence for the "Wrong" Reasons - Alternative View


The fear of a robot apocalypse hides the real problems we face while we let algorithms run our lives. According to experts in the field of artificial intelligence, we are steadily moving towards a point after which we will no longer have to invent anything: artificial intelligence will do everything on its own, and machines will improve themselves exponentially. If this happens, what will become of us?

Over the past few years, many prominent figures, from Stephen Hawking to Elon Musk, have warned us that we should be extremely concerned about the possible dangerous outcomes of superintelligent artificial intelligence. And they back their words with action: Musk supports OpenAI, an organization developing AI intended to benefit humanity.


However, many consider these fears exaggerated. As Andrew Ng of Stanford University, who is also chief scientist at the Chinese internet giant Baidu, puts it, worrying about a machine uprising is like worrying about overpopulation on Mars.


But this, of course, does not mean that our growing dependence on AI carries no real risks. In fact, these risks are already here. As intelligent systems become more involved in everything from healthcare to criminal justice, there is a danger that important parts of our lives will be overlooked.

Moreover, AI can have unpleasant consequences if we are not prepared for them - for example, it could turn our attitude towards doctors sharply hostile.


A few words about artificial intelligence

In simple terms, artificial intelligence means machines that do things which usually require human mental effort: understanding natural language, recognizing faces in photographs, driving cars, and so on.

There is a difference between a mechanical manipulator on a production line that is programmed to repeat the same task, and one that independently learns to perform various tasks through trial and error.

How does AI help us?

The leading approach in AI today is machine learning, in which programs are trained to identify certain patterns in large amounts of data, such as picking out a face in an image or finding a winning move in the board game Go. This method can be applied to a wide variety of problems - for example, training computers to identify a specific pattern in medical images. DeepMind, an artificial intelligence company owned by Google, is developing software that learns to diagnose cancer and eye disease from patient scans. Others use machine learning to detect early signs of heart disease and Alzheimer's.
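As a minimal sketch of what "identifying patterns in data" means, the toy classifier below learns one "pattern" (an average feature vector) per class from labeled examples, then assigns new examples to the closest learned pattern. The features and labels are invented for illustration; this is not DeepMind's actual system.

```python
# A nearest-centroid classifier: machine learning reduced to its
# simplest form. "Training" computes the mean feature vector of each
# class; "prediction" picks the class whose mean is closest.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose learned pattern is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical "scan" features: (lesion size, brightness), both 0..1.
training_data = [
    ((0.9, 0.8), "abnormal"), ((0.8, 0.9), "abnormal"),
    ((0.1, 0.2), "normal"),   ((0.2, 0.1), "normal"),
]
model = train(training_data)
print(predict(model, (0.85, 0.75)))  # -> abnormal
```

Real systems use far richer models (deep neural networks over millions of examples), but the principle is the same: the rule is extracted from the data, not written by hand.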


Artificial intelligence is also already being used to analyze large amounts of molecular information in search of potential new drug candidates - a process that is extremely time-consuming for humans. Very soon, machine learning may become indispensable to medicine.

Artificial intelligence also helps us manage extremely complex systems like the global supply chain. The system at the heart of the Port Botany container terminal in Sydney manages tens of thousands of shipping containers, a fleet of automated vehicles and so on, completely without people. In the mining industry, optimization systems are increasingly used to plan and coordinate the movement of resources such as iron ore.

AI is at work everywhere you look, from finance to transportation: it flies planes, monitors the stock market, and protects your mail from spam. But this is just the beginning. As AI develops, it will become ever more capable and interesting.

What is the problem?

The biggest risk is not a future AI revolution but that we place too much trust in the intelligent systems we build. Remember, machine learning trains software to identify patterns in data; after training, it is turned loose on fresh, unseen data. But when a computer spits out an answer, we usually have no idea how it arrived at it.

There are obvious problems here. A system is only as good as the data it learns from. Take a system trained to determine which pneumonia patients are most likely to die, so that they can be admitted to the hospital first. Suppose it inadvertently classifies patients with bronchial asthma as low risk. That happens because people with both asthma and pneumonia normally go straight to intensive care, where the treatment they receive lowers their risk of death. Machine learning sees this as "asthma + pneumonia = lower risk of death."

As AI gains access to every area of your life, the risk that something unforeseen will go wrong grows with it. And since most of the data we feed to AI is imperfect, we should not expect perfect answers in most cases. We build artificial intelligence in our own image and likeness; most likely, it will be imperfect, just like us.

ILYA KHEL
