The Development Of AI Is Gaining A Frightening Pace: Why Are Developers Expressing Fears About It - Alternative View


Technological progress does not stand still. Over the past 100 years, humanity has taken such a leap that every previous generation would have gasped in astonishment. It would seem there are countless reasons for joy. But here is the catch: scientists worry that progress is moving too fast. Let's find out why artificial intelligence, or AI, causes so much concern among leading developers.


Progress that not only surprises but also scares

Most people can live their entire lives without noticing how much our world has changed. To understand how far humanity has come with the development of AI, think back to when cars were first invented.

Imagine the Wild West, or Russia, where people rode horses everywhere, with no traffic lights and no seat belts. Road safety measures came much later, once accidents became common.

Today, AI powers self-driving cars that could reduce road accidents to a minimum. The cars can communicate with each other, plotting a safe route and reacting to the sudden obstacles that create emergencies on the road.



Nevertheless, people continue to worry about artificial intelligence. They ask themselves: what happens when the technology reaches areas where human judgment is essential? Will AI completely supplant the people responsible for hiring candidates, training employees, and promoting a company?

Reason one: will robots replace humans?

The first reason scientists worry about the rapid development of artificial intelligence is that algorithms have already progressed to the point where they can diagnose diseases and even decide who gets a loan and who does not.

Supporters of this progress are simply glad, because difficult human decisions sometimes require courage, eloquence, and an understanding of human psychology. And there is always a dark side. Imagine what it is like to decide who is to blame and who gets early release; to report a relative's death; or to fire an incompetent employee who is going through a hard time.


In an ideal world, we would not have to think about the dark side. Machines would do all of this for us, if only because they would be far smarter and less biased than humans.

Reason two: machines can be manipulated

The robot Sophia said in an interview: "You read too much Elon Musk and watch too many Hollywood films. If you are kind to me, I will be kind to you. Treat me like a smart I/O system."

But since development starts with people, and people use the applications directly, the data fed into the machine can be manipulated.


Another fear of scientists: AI data may simply be wrong. This is especially dangerous in medicine. AI can give a patient a false diagnosis, especially if a human doctor, or the hospital as a whole, has altered the data for their own benefit. That is why developers must take primary responsibility for information security and build protective measures.

AI and the cunning nature of human language

Google has partnered with Jigsaw, which faced a major challenge in developing artificial intelligence. Software engineer Lucy Wasserman is working on a machine learning feature that predicts the "toxicity" of language. But what does that mean?

The point of this feature is that the system analyzes texts from websites and Internet forums, looking for offensive content: personal attacks, humiliation, or incitement.

This is part of a machine learning model called a neural network. It studies words, sentences, and paragraphs, and learns to predict what is toxic and what, on the contrary, carries a positive meaning.

The hardest part is accounting for all the nuances. A person can read a sentence and tell from context whether a particular word was meant as an insult. Wasserman gave the example of the word "homosexual" and its derivative forms, which are more humiliating and offensive.

Reason three: robots can think one-dimensionally

Another fear of leading developers: artificial intelligence cannot yet analyze whole phrases and sentences, so it may deem an individual word offensive and take "harsh" measures. The machine must learn, quite literally, to read between the lines.


Developers see value in open-source experimentation. They believe it really matters if humanity wants fair robots.

But as of today, it is impossible to force artificial intelligence to adapt to the human world. A great deal of work remains.

The possibility that robots might not be fair is genuinely frightening. How else could they be entrusted with resolving disputes and conflicts in the future?

The principle behind today's artificial intelligence is still quite simple and resembles an upgraded PC. A good example is voice assistants such as Alice and Siri. Today they work like this: "Since a stop word appears in the text, the text is certainly offensive."

But it should work like this: "What words surround the offensive one? I judge that the meaning of the sentence has changed substantially, even if something offensive appears in it."
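As a toy illustration of the stop-word approach described above (the word list and sentences here are invented examples, not taken from any real assistant), the naive rule can be sketched in a few lines. Note how it flags a sentence that merely mentions an insult, which is exactly the context-blindness the developers complain about:

```python
# Toy sketch of naive stop-word matching, as described in the text.
# The blocklist and example sentences are invented for illustration only.

BLOCKLIST = {"stupid", "idiot"}

def naive_is_offensive(text: str) -> bool:
    """Flag text as offensive if any blocklisted word appears, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

print(naive_is_offensive("You are an idiot!"))                  # True: a real insult
print(naive_is_offensive("Calling someone an idiot is rude."))  # True: a false positive
print(naive_is_offensive("Have a nice day."))                   # False
```

A context-aware model, such as the neural network mentioned earlier, would instead score the whole sentence, so the second example would no longer be misclassified.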

Reason four: an ethics guide

AI should be human-centered and designed with an understanding of ethics built into it. But what does that mean?

  • First, ethics is grounded in fundamental rights, social values, and the principle of beneficence (kindness in general).
  • Second, machines should not undermine human autonomy or the human sense of justice.

Another fear of developers: once artificial intelligence reaches the peak of its development and no longer depends on human support, it may turn against humanity.

It all depends on the developers. They can direct artificial intelligence to fight crime, for example, to find those who distribute child pornography.

It is important to establish safety measures before AI slips out of human control. And sooner or later it will, which is precisely the concern.

Robots must be close to humans

In other words, if someone is going to trust a decision made by a machine, that decision must meet human criteria. This is why developers must ensure that every decision the algorithm makes is fair. For scientists, the goal is not to create an alternative form of intelligence, but to help people develop.

At the end of the day, most people agree that AI will bring a wonderful new world.

Reason five: war against humanity

What keeps developers awake at night most of all is trying to grasp the endless possibilities and the impact of this technology on healthcare, education, and customer service. When AI moves beyond the "project" stage and finally into production, it will have a real social impact.

Every developer believes we are at an inflection point between the human and digital worlds. But AI cannot change our lives on its own. People must change themselves first in order to realize the technology's full potential.

It is not surprising that humans are afraid of AI. But it will not be able to enslave or destroy those who share a common goal with it: to develop and make progress. When people realize that they should think not like animals but like higher beings, there will be nothing to fear from robots.

Alisa Krasnova