Artificial Intelligence: How Smart Do We Need Machines to Be?

Artificial intelligence is already a fact of life in parts of finance and transportation, and as it spreads into other areas we increasingly want to be sure that we control it, and not the other way around. From 2001: A Space Odyssey to Blade Runner, from RoboCop to The Matrix, whenever people think about artificial intelligence they inevitably run into the dark fantasies of filmmakers.

Spike Jonze's latest film Her and Alex Garland's upcoming Ex Machina are both devoted to artificial intelligences living among us. The Turing test comes to the fore, and we still cannot pin down what really separates chips and code from flesh and blood.

These fears are shared by some famous names in Silicon Valley: last month Elon Musk, CEO of Tesla and SpaceX, described artificial intelligence as humanity's "biggest existential threat." What many of us, and perhaps Musk himself, fail to appreciate is that artificial intelligence is not some far-off technology that exists only in filmmakers' imaginations and the laboratories of computer scientists.

Many of our smartphones already use rudimentary artificial intelligence techniques to translate from one language to another or to answer our questions; in the games industry, AI is used to generate complex, ever-changing scenarios. And as Silicon Valley companies like Google and Facebook keep buying up AI firms and hiring experts, its reach will only grow.

So artificial intelligence isn't just the Spielberg film?

No complaints about the film, but the term "artificial intelligence" has a much longer history than the one Spielberg and Kubrick put on screen. It goes back to the birth of computing in the 1950s, when Alan Turing, just 14 years after defining the general-purpose computer, asked whether a machine could think.

Sixty-four years have passed, and the idea still occupies our minds, fills films and books, and is debated at symposia. And it has not strayed far from the rules Turing laid out in his 1950 paper Computing Machinery and Intelligence, in which he proposed the "imitation game" that we now know as the Turing test.

Connect a computer to a text terminal and let it converse with a judge alongside a real person. The essence of the test is whether, when asked to say which of the two interlocutors is the human, "the interrogator will decide wrongly as often as he does when the game is played between a man and a woman."
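
To make the setup concrete, here is a minimal sketch of the imitation game as a program. It is only an illustration: the Interlocutor interface, the Machine's canned reply and the run_imitation_game helper are invented for this example, not part of any standard test harness.

    # A minimal sketch of Turing's imitation game. Everything here is invented
    # for illustration; a real test involves extended free-form conversation.
    import random
    from typing import Protocol

    class Interlocutor(Protocol):
        def reply(self, message: str) -> str: ...

    class Human:
        def reply(self, message: str) -> str:
            # In the real game a person types an answer at a text terminal.
            return input(f"(human, please answer) {message}\n> ")

    class Machine:
        def reply(self, message: str) -> str:
            # Stand-in for a chatbot; a real entrant would generate text here.
            return "That is an interesting question. What do you think?"

    def run_imitation_game(questions, a: Interlocutor, b: Interlocutor) -> str:
        """The judge talks to A and B over text only, then guesses the human."""
        for q in questions:
            print("Judge:", q)
            print("  A:", a.reply(q))
            print("  B:", b.reply(q))
        return input("Which interlocutor was the human, A or B? > ")

    if __name__ == "__main__":
        contestants = [Human(), Machine()]
        random.shuffle(contestants)  # hide which label is the machine
        guess = run_imitation_game(["Where did you grow up?"], *contestants)
        print("The judge guessed:", guess)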

Turing believed that working out whether a machine could pass the test was more useful than the vague, philosophical question of whether it could think; the original question, he wrote, was "too meaningless to deserve discussion." He did, however, predict that by the end of the century "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Taken literally, he was not far wrong. Today you can often hear people call their computers "dumb" or say that the machine is "thinking." But even on a more serious definition of a thinking machine, the idea is closer to reality than many might believe.

Does AI already exist?

Sort of. We are still a long way from passing Turing's imitation game, despite reports to the contrary. In June, the chatbot Eugene Goostman managed to fool a third of the judges at a Turing test held in London into believing it was human.

But rather than thinking, Eugene relied on tricks and evasions. By posing as a 13-year-old boy for whom English was not a native language, the machine excused many of the illogical aspects of its behavior, including a poor sense of humor and jarring remarks, and it often steered the conversation off in another direction.

Most AI developers are trying to teach machines to process natural language, so that we can give commands in the language we already speak. It is something children begin to do before they take their first steps, and it is an extremely difficult task for a machine.

Consider a favorite sentence of AI researchers: "Time flies like an arrow; fruit flies like a banana." The second half can be read in two ways: either fruit flies, the insects, are fond of a banana, or fruit moves through the air the way a banana does. Breaking a sentence like that into its constituent parts can baffle even native English speakers, let alone an algorithm.
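
For a feel of why this is hard, the toy sketch below asks NLTK's chart parser to analyse the second half of that sentence. The tiny grammar is invented for this one example and is nothing like what production systems use; it exists only to show that even five words admit two structurally different readings (assuming the nltk package is installed).

    # Toy demonstration of syntactic ambiguity with NLTK's chart parser.
    # The grammar below is made up for this one sentence.
    import nltk

    grammar = nltk.CFG.fromstring("""
        S   -> NP VP
        NP  -> N | N N | Det N
        VP  -> V NP | V PP
        PP  -> P NP
        Det -> 'a'
        N   -> 'fruit' | 'flies' | 'banana'
        V   -> 'flies' | 'like'
        P   -> 'like'
    """)

    parser = nltk.ChartParser(grammar)

    # Two trees come back: one where "fruit flies" is a noun phrase that likes
    # a banana, and one where "fruit" flies in the manner of a banana.
    for tree in parser.parse("fruit flies like a banana".split()):
        print(tree)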

So AI struggles with language?

Not exactly. In fact, most of the time AI is not used for conversation at all. Many people know the term artificial intelligence not from science fiction or from Alan Turing but from video games, where "the AI" refers to computer-controlled opponents.

In a first-person shooter, for example, the AI drives the enemies' movements, letting them dodge, take aim, and fire at you in ways that are hard to predict. In racing games, the AI controls the rival cars. As a showcase for artificial intelligence, video games certainly leave a lot to be desired, but they are diamonds in the rough: simple rule-based systems that combine into something surprisingly complex.
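
To give a flavor of how such opponents are usually built, here is a minimal finite-state machine for a hypothetical shooter enemy. The states, thresholds and behaviors are invented for illustration; real game AI stacks many such rules on top of pathfinding, animation and much else.

    # A minimal finite-state machine for a hypothetical shooter enemy.
    # States and thresholds are invented for illustration only.
    from dataclasses import dataclass
    from enum import Enum, auto

    class State(Enum):
        PATROL = auto()
        ATTACK = auto()
        TAKE_COVER = auto()

    @dataclass
    class Enemy:
        state: State = State.PATROL
        health: int = 100

        def update(self, sees_player: bool, under_fire: bool) -> str:
            # A handful of simple transition rules...
            if self.health < 30 or under_fire:
                self.state = State.TAKE_COVER
            elif sees_player:
                self.state = State.ATTACK
            else:
                self.state = State.PATROL
            # ...each mapped to an equally simple behavior.
            return {
                State.PATROL: "walk the patrol route",
                State.ATTACK: "aim at the player and fire",
                State.TAKE_COVER: "dodge behind the nearest obstacle",
            }[self.state]

    enemy = Enemy()
    print(enemy.update(sees_player=True, under_fire=False))  # aim at the player and fire
    print(enemy.update(sees_player=True, under_fire=True))   # dodge behind the nearest obstacle

Each rule is trivial on its own; the apparent intelligence comes from many such agents and rules interacting at once.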

Take GTA V, where building a city with a life of its own means you can turn a corner and find a fire crew arguing with a driver who has run over their hose; or Dwarf Fortress, where dwarves live out their own algorithmically detailed lives underground. These emergent gameplay systems point to a different way AI can develop: not by trying to mimic a human, but by layering "good enough" heuristics that, at sufficient scale, add up to something else entirely.

So everyone is investing in AI to make better games?

No. Companies like Apple and Google are pouring money into AI in an effort to build virtual personal assistants such as Siri and Google Now.

This may be a long way from Turing's vision, but voice assistants essentially do the same hard work as a person: they have to listen to and understand spoken language, work out what information it contains, and then return a result, also in conversational form. They are not trying to trick us into believing they are human; that impression arises on its own. And since the computation happens in the cloud, the more they hear, the better they understand.
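
Conceptually, that chain of work looks something like the sketch below. Each function stands in for an entire subsystem (speech recognition, language understanding, fulfilment, speech synthesis); none of this is the actual API of Siri or Google Now.

    # Conceptual sketch of a voice-assistant pipeline. Every function is a
    # toy stand-in for a whole subsystem, not a real Siri or Google Now API.

    def transcribe(audio: bytes) -> str:
        # Speech recognition: turn recorded audio into text.
        return audio.decode("utf-8")  # toy stand-in

    def understand(text: str) -> dict:
        # Language understanding: extract an intent and its parameters.
        if "weather" in text:
            return {"intent": "weather", "city": text.rsplit(" ", 1)[-1]}
        return {"intent": "unknown"}

    def answer(request: dict) -> str:
        # Fulfilment, which in practice means calling services in the cloud.
        if request["intent"] == "weather":
            return f"Here is the forecast for {request['city']}."
        return "Sorry, I did not catch that."

    def speak(reply: str) -> bytes:
        # Speech synthesis: turn the reply back into audio.
        return reply.encode("utf-8")  # toy stand-in

    def handle_request(audio: bytes) -> bytes:
        # Listen, understand, decide, respond: the same chain of work a
        # human assistant performs, split into separate components.
        return speak(answer(understand(transcribe(audio))))

    print(handle_request(b"what is the weather in Paris"))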

However, leading AI research is focused not on replicating human understanding of the world but on surpassing it. IBM's Watson, for example, is best known as the computer that won Jeopardy! in 2011, using natural language understanding to answer the host's questions. But alongside understanding natural language, Watson can also read and digest huge amounts of unstructured data, and do it very quickly.

For Jeopardy!, it worked from 200 million pages of data, including the full text of Wikipedia. Watson's real goal is to scale up to the entire internet and give healthcare professionals a convenient way to work with it. After all, some scientists just want to save humanity.

Are we all going to die?

Maybe. The fear is that once a sufficiently general AI along the lines of Watson is created, its power will grow along with the processing power available to it. Moore's law predicts that computing power doubles roughly every 24 months, so, the argument goes, it is only a matter of time before an AI becomes smarter than its creators and can build an even more powerful AI, leading to exponential growth in its capabilities.
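
The arithmetic behind that worry is simple compounding. The snippet below just applies a doubling every 24 months to today's processing power; the time horizons are arbitrary, and nothing here says whether intelligence actually tracks raw computing power.

    # Compounding Moore's law: processing power doubling every 24 months.
    def relative_power(years: float, doubling_period: float = 2.0) -> float:
        return 2 ** (years / doubling_period)

    for years in (2, 10, 20, 30):
        print(f"after {years:2d} years: {relative_power(years):,.0f}x today's power")

Thirty years of doubling gives roughly a 32,768-fold increase, which is the kind of curve the argument leans on.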

But what would a superintelligent AI do with those capabilities? That depends entirely on how it is programmed. The problem is that it is very hard to program a highly intelligent computer in a way that does not accidentally end up destroying humanity.

Say you give your AI the task of making paperclips, and of making them as well as possible. Soon enough it will realize that paperclip production can be improved by improving the production line. What does it do next?

"For instance, it will want to make sure that people cannot switch it off, because then no paperclips would be produced," explains the Oxford philosopher Nick Bostrom. A paperclip AI, Bostrom says, "might get rid of humans straight away, because they are a threat. It will also want as many resources as possible, because they can be used to make paperclips. The atoms in human bodies, for example."

How do we deal with an AI like that?

Simply switching such an AI off is unlikely to work, according to theorists like Ray Kurzweil, a director of engineering at Google. People need to think not only about how to create an intelligent AI but also about the ethics of the matter, and to program the machine accordingly.

Even then, the wording itself is asking for trouble. A machine instructed to "make people happy" could solve that problem quite efficiently by implanting electrodes in people's brains. So when we ask artificial intelligence to tackle big philosophical problems, we need to make sure the machine understands what we mean by "good" and "bad."

So we just program in some ethics and everything will be fine?

Not really. Even if we manage to prevent the emergence of a malicious AI, there remains the question of how society adapts to the growing capabilities of artificial intelligence.

The Industrial Revolution automated many jobs that had previously relied on manual labor, and there is no doubt that it was a period of extraordinary growth in human well-being. But the upheaval of that era was unique in its circumstances, and there is no guarantee we will be so fortunate again.

What steam power did for physical labor, AI can do for mental labor. The first casualties are already appearing: there is no place for taxi dispatchers in a world of Hailo and Uber; the stockbroker's job has been transformed by the rise of high-frequency trading; and routine sports and news stories will soon be written by machines.

The real changes are only beginning. In November, Goldman Sachs led a $15 million funding round for Kensho, a financial data analysis service whose artificial intelligence techniques go beyond what the best human analyst can do: it can work through volumes of data that no person could hope to handle.

Kensho's analytics could be used by a high-frequency trading firm like Athena, the kind of company that profits from a millisecond's edge in the market, which is enough to make money when you are trading billions of dollars.

And once that kind of trading has moved the wider market, Kensho can supply its algorithms to a publisher like Forbes, replacing the work of its financial analysts. Most earnings write-ups follow the same template, and if the data arrives in a structured format, why waste a person's time?

On the whole, such changes could be for the good. If algorithms take over the work of millions of people, those people can move on to something better, working hours can fall, and we get one step closer to utopia.