Artificial Intelligence Will Only Be Dangerous If Used Foolishly - Alternative View

Technology improves continuously over time, and humanity is carried along with it. Yet a certain fear surrounds technologies such as artificial intelligence (AI) and robotics, partly because of the way they are portrayed in science fiction. That fear is largely a fear of the unknown: by and large, humanity does not know what further improvements in artificial intelligence systems will bring.

The approaching technological singularity - one of the expected consequences of these improvements - is also heavily shaped by science fiction. The belief is that AI and intelligent machines will become so smart that they will rebel against their human masters and end the world as we know it. We do not know whether this will actually happen, although some institutions are actively working on the question.

But perhaps the most immediate fear of AI and automated systems comes from the expectation of job losses and the displacement of human workers by machines. Several studies suggest that increasing automation will reduce employment over the next 10-20 years.

One study predicts that machines will replace 47% of jobs in the United States. Another says 40% of jobs will disappear in Canada, and 850,000 in the UK. Meanwhile, 137 million workers in Southeast Asia risk losing their jobs to machines over the next 20 years. This trend will span a wide range of industries.

What are you really afraid of?

With all this in mind, are we justifiably afraid of AI?

In short, yes, there is something to worry about, but the worry is more about how we use AI, says ZDNet editor-in-chief Steve Ranger. "AI is a fast-growing and intriguing niche," Ranger writes, "but the problem runs much deeper."


Ranger warns that the industry's failure to get to grips with AI could trigger another AI winter. He writes:

"The lack of qualified employees who will use most of these technologies, as well as massively inflated expectations, can lead to a loss of trust."

What's more, there remains a danger that we treat AI as a magical answer to every question, neglecting the fact that AI and machine-learning algorithms are only as good as the data they are given. Ranger says that "it should be ensured that AI decisions are well understood," and he considers this the biggest AI threat: we are gradually losing our understanding of how and why AI reaches its decisions. We will have to ensure that we adhere to five basic principles of AI:

- responsibility (a human must be able to answer for the consequences of an AI system's actions);

- explainability (the ability to easily explain the decisions an AI makes to the people who work with it; a sketch follows this list);

- accuracy (the ability to trace errors back to their sources);

- verifiability (third parties should be able to easily audit the AI's behavior);

- fairness (AI should not reproduce human bias or discrimination).
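
To make the explainability principle concrete, here is a minimal sketch in Python using scikit-learn; the library, dataset, and model choice are illustrative assumptions of this example, not anything the article prescribes. It trains a deliberately shallow decision tree so that every decision can be printed as human-readable rules.

```python
# Illustrative sketch of "explainability": an inherently interpretable model
# whose decision rules can be read by the people who work with it.
# scikit-learn and the iris dataset are assumptions made for this example.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow tree keeps the learned rules short enough for a human to review.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# export_text renders the fitted tree as plain if/else rules, so every
# prediction can be traced back to explicit feature thresholds.
print(export_text(model, feature_names=list(data.feature_names)))
```

A large neural network would need extra tooling to reach the same transparency; the point is simply that the people affected by a system's decisions can see why it decided as it did.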

Ultimately, the greatest threat to humanity is not AI itself but how we work with it. We should worry less about artificial intelligence and machine learning and more about the shortcomings of human intelligence and our own ability to learn.

Monitor and observe

Fortunately, institutions that develop guidelines for AI research and development have already been established. One is the Partnership on AI, which includes heavyweights such as Amazon, Google, IBM, Facebook, Microsoft, and Apple. Another is the AI fund backed by the Knight Foundation.

The benefits of AI are undeniable, and we don't have to wait until 2047 and the singularity to find out how it will affect people's lives. Modern artificial intelligence systems should not be confused with Skynet or HAL 9000. Much of what we call AI today consists of neural networks and machine-learning algorithms running on ordinary devices. Used correctly, AI can help people keep their jobs, reduce traffic accidents, and improve medicine. Our fears should not be allowed to outweigh these benefits.

ILYA KHEL