Is Artificial Intelligence Always Objective And Wants To Please A Person - Alternative View

How computers make strange decisions

Artificial intelligence is developing so rapidly that many are beginning to feel anxious: what if it will soon be able to experience emotions and make decisions of its own? Whether as a chatbot, an autonomous vehicle, or a decision-maker in business and science, artificial intelligence is penetrating all areas of our life and society. Machine brains are considered logical, unemotional, and impartial, but practice shows that they do not always decide with an open mind.

Artificial intelligence is indispensable wherever it is important to recognize patterns in large amounts of data and make decisions based on them. But in the future, AI systems could also be used in areas where our well-being and even our lives depend directly on their conclusions: in medicine, in hiring, in lending, and in the justice system.

How objective and morally sound are the decisions artificial intelligence makes? Can we really trust a machine brain with such potentially important decisions?

Artificial intelligence is used in all areas

Adaptive algorithms are already used in many areas of our daily life. Digital assistants understand voice commands and questions. On the Internet, web bots help maintain and moderate websites, and they are often difficult to distinguish from humans. AI systems are even involved in detecting fake news and manipulated images.


AI systems also do their job in industry, business, and even government. They optimize and control production processes, manage the logistics of container ports and large warehouses, and keep accounting records. Insurance companies and banks already use the first intelligent programs to check submitted applications or decide on payouts. At Ikea, Deutsche Telekom, and many other companies, as well as at UNICEF and in libraries, virtual assistants handle customer service.

Large amounts of data processed for training

All this is possible thanks to the ability of AI systems to quickly analyze huge amounts of data and identify patterns and correlations in them. For example, an algorithm can learn from countless photos of pets, each labeled with the animal's name. After a training period, the system can correctly classify completely new photographs of the desired species; it learns the distinguishing features on its own.
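
What such training looks like in practice can be illustrated with a short sketch. The following is only a minimal, hypothetical example in Python (using PyTorch and a made-up folder "data/pets" with one subfolder of photos per species); the real systems described here are far larger, but the principle of learning from labeled examples is the same.

```python
# A minimal, hypothetical sketch of supervised image classification.
# Assumes a made-up folder "data/pets" with one subfolder per species
# (data/pets/cat, data/pets/dog, ...). Illustrative only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subfolder name becomes a class label.
train_set = datasets.ImageFolder("data/pets", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# A small pretrained network; the final layer is replaced so that it
# predicts our pet species instead of the original 1000 categories.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # the "training period"
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# After training, the model can score photographs it has never seen.
```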

Many other AI systems use text as the basis for learning. They assess the context in which a word appears and study its semantic meaning, as well as its relationships to similar words and synonyms. The programs translate these semantic relationships into mathematical expressions and in this way learn even complex semantic contexts.
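
One common way to express such relationships mathematically is to represent each word as a vector learned from its contexts. The toy sketch below uses the word2vec technique (via the gensim library) on a tiny invented corpus; real systems train on vastly more text, and the exact numbers here are illustrative only.

```python
# A toy sketch of turning word contexts into vectors with word2vec.
# The corpus is invented and far too small for real use; it only
# illustrates the principle.
from gensim.models import Word2Vec

corpus = [
    ["the", "doctor", "examined", "the", "patient"],
    ["the", "physician", "examined", "the", "patient"],
    ["the", "doctor", "prescribed", "a", "treatment"],
    ["the", "physician", "prescribed", "a", "treatment"],
]

# Words that appear in similar contexts ("doctor", "physician")
# end up with similar vectors.
model = Word2Vec(corpus, vector_size=16, window=2, min_count=1,
                 epochs=200, seed=1, workers=1)

# With enough data, related words score high and unrelated ones lower.
print(model.wv.similarity("doctor", "physician"))
print(model.wv.similarity("doctor", "treatment"))
```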

It is only thanks to this ability that artificial intelligence can, for example, evaluate medical data and patient records and infer characteristic clinical pictures from them. On this basis, AI systems are already being used as diagnostic aids: they can recognize various types of cancer, signs of Alzheimer's disease, suicidal tendencies, and dangerous blood-sugar levels in diabetes.

But as attractive as it sounds, the growing use of AI has a big drawback. The further artificial intelligence penetrates practically all spheres of our society, the greater the potential consequences of its decisions. A person's financial or professional future, or even their life, can depend on its judgment.

AI decisions that can cost lives

If, for example, an AI's misdiagnosis in medicine goes unquestioned, the patient may receive the wrong treatment or even die. If the military uses an autonomous weapon system that makes wrong decisions, for example because the AI cannot distinguish a fleeing friend from an enemy, innocent people die.

AI algorithms are already being used in courts in the United States, where they predict the likelihood of recidivism for offenders. Their assessment often influences the court's decision and, consequently, the convicted person's sentence.

When AI decides on loans

Likewise, in finance and insurance, such a decision can determine whether a person gets a loan or not. These systems are usually trained on existing databases.

Here, too, the AI looks for patterns: which people usually pay back their loans and which do not? From these patterns it derives its own assessment procedures and makes the corresponding decisions. In practice, this can mean that people with the "wrong" place of residence or occupation are denied loans, no matter how willing and able to pay they actually are.
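
How such a proxy bias can creep in is easy to sketch. The example below trains a simple scoring model on synthetic, invented data in which repayment has historically correlated with a postal district; the model then scores two applicants with identical incomes differently, purely because of their address. This is a toy illustration, not any bank's actual system.

```python
# A toy sketch of proxy bias in a credit model, on synthetic data.
# Nothing here reflects any real bank's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(3.0, 0.8, n)      # monthly income, in thousands
district = rng.integers(0, 2, n)      # 0 or 1: a stand-in for the address

# Invented ground truth: repayment depends on income, but district 1
# has historically also correlated with more defaults.
z = (income - 2.8) / 0.4 - 1.0 * district
repaid = rng.random(n) < 1 / (1 + np.exp(-z))

X = np.column_stack([income, district])
model = LogisticRegression().fit(X, repaid)

# Two applicants with identical income but different addresses:
print(model.predict_proba([[3.0, 0]])[0, 1])  # higher repayment score
print(model.predict_proba([[3.0, 1]])[0, 1])  # lower, purely from the address
```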

With such a division of responsibility between humans and computers, a person becomes merely an assistant who executes the decisions of cyber brains. And this is already happening.

Who is responsible for mistakes?

Another problem in the development of AI: the more complex the decision-making principles become, the more difficult it is for humans to understand them. This also raises the question of who is responsible for mistakes.

Accidents involving cars driven on autopilot have become more frequent. Who is to blame when certain situations were never stored in the system's memory, or when the AI draws the wrong conclusions? Or perhaps it draws conclusions of its own? There are already such examples.

Game bots developed their own rules during play

For this study, the researchers let 100 smart bots interact with each other in a donation game. Each bot had to decide whether to donate something to a member of its own team or to a participant from outside the group.

Surprisingly, over time the bots increasingly excluded outsiders and donated only to their own team members.

The reason: the smart bots adapted their game strategy by copying the behavior of the comrades who had collected the most money and were the most successful. Groups of bots thus emerged that behaved in the same manner and consistently excluded unrelated participants. By constantly creating something that calculates, analyzes, and makes decisions faster than it can, humanity may one day lose control over a creation whose actions it can no longer keep track of.
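
The dynamic is simple enough to reproduce in miniature. The sketch below is a toy simulation with invented parameters, not the researchers' actual model: bots donate inside or outside their group, reciprocate in-group donations more readily, and occasionally copy the strategy of the most successful bot. In most runs, the average in-group bias drifts upward, even though no bot was programmed to be prejudiced.

```python
# A toy simulation of the group dynamic, NOT the researchers' model.
# All parameters are invented.
import random

random.seed(1)
N_GROUPS, PER_GROUP, ROUNDS = 10, 10, 5000
COST, BENEFIT = 1.0, 2.0

# Each bot: a group, an in-group bias (probability of donating to its
# own group), and an accumulated payoff.
bots = [{"group": g, "bias": random.random(), "payoff": 0.0}
        for g in range(N_GROUPS) for _ in range(PER_GROUP)]

def pick_recipient(donor):
    stay_in_group = random.random() < donor["bias"]
    pool = [b for b in bots if b is not donor
            and (b["group"] == donor["group"]) == stay_in_group]
    return random.choice(pool)

for t in range(ROUNDS):
    donor = random.choice(bots)
    recipient = pick_recipient(donor)
    donor["payoff"] -= COST
    recipient["payoff"] += BENEFIT
    # In-group donors are reciprocated in proportion to the recipient's
    # own bias; out-group donors in proportion to its openness.
    in_group = recipient["group"] == donor["group"]
    p_back = recipient["bias"] if in_group else 1.0 - recipient["bias"]
    if random.random() < p_back:
        recipient["payoff"] -= COST
        donor["payoff"] += BENEFIT
    # Every so often, a random bot imitates the current top earner.
    if t % 25 == 0:
        best = max(bots, key=lambda b: b["payoff"])
        imitator = random.choice(bots)
        imitator["bias"] = min(1.0, max(0.0,
                               best["bias"] + random.gauss(0, 0.05)))

mean_bias = sum(b["bias"] for b in bots) / len(bots)
print(f"mean in-group bias after {ROUNDS} rounds: {mean_bias:.2f}")
```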

According to the scientists, one thing is clear: artificial intelligence does not need biased human data to become prejudiced; having other machines around it is enough. Even that can give rise to the kind of group mind familiar from human communities. When using AI, therefore, people should keep the final decision for themselves.

Vasilik Svetlana
