Rise of the Machines Is Canceled: A Chapter from David Mindell

Chapter from a book published by the Polytechnic Museum

Humans and robots: where is the boundary between them, and what dangers does their closeness bring? Drawing on personal experience, numerous interviews and the latest research, David Mindell, one of the most respected scientists in the field, offers a behind-the-scenes look at the most innovative applications of robotics. Indicator.Ru publishes a chapter from his book "Rise of the Machines Is Canceled! Myths About Robotization".

Human-operated, remote, autonomous

Late at night, high above the Atlantic Ocean in the vast empty space between Brazil and Africa, a scheduled passenger airliner flew into bad weather. Ice clogged the small tubes in the aircraft's nose that measured its airspeed and fed the data to the computers flying the plane. The computers could have continued to fly without this information, but the program embedded in them made no provision for such a situation. The automatic flight-control system gave up and disconnected, handing control to the people sitting in the cockpit: Pierre-Cedric Bonin, 32, and David Robert, 37. Bonin and Robert, relaxed and a little tired, were caught off guard when they suddenly found that they would have to fly a huge airliner by hand, at high altitude, in bad weather, and at night. Even in more favorable conditions this would have been a difficult task, and one that pilots rarely face anymore. The captain, 58-year-old Marc Dubois, was not flying the plane at that moment; he was resting in the cabin, and the pilots had to spend precious time calling him back to the cockpit. Although the aircraft was in straight and level flight when the computers disconnected, the pilots struggled to make sense of the meager airspeed data. One of them pulled his control stick back; the other pushed his forward. The airliner continued in level flight for about a minute, and then it began to fall.

On 1 June 2009, Air France Flight 447 spiraled into the ocean, killing all 228 passengers and crew. It disappeared into the waves almost without a trace. In the tightly interconnected system of international airlines, it seemed inconceivable that a plane could simply vanish. A large-scale, coordinated search was organized, and within a few days floating debris from the plane was found on the ocean surface. Nevertheless, to find the bulk of the airliner's wreckage and the black boxes that could establish the cause of the tragedy, searchers had to comb a vast area of the ocean floor, and the work moved hopelessly slowly. Nearly two years later, at a depth of 3.2 km, almost at the very point where the airliner had struck the ocean surface, an autonomous underwater vehicle called the Remus 6000 glided silently through the darkness under the monstrous pressure of the water column. Moving a little faster than a person walks, the torpedo-shaped robot held a constant altitude of about 60 m above the bottom, the position in which its acoustic scanner produced the clearest images. The acoustic signal reached out about 800 m in every direction, and from the returning echoes the robot collected gigabytes of information.

The terrain was mountainous, and the ocean floor rose steeply in places. Despite its artificial intelligence, the robot occasionally struck the bottom, usually without consequences. Three of these robots worked in concert: while two searched underwater, the third sat on board the ship at the surface. Each such "pit stop" took three hours, during which the people tending the robot downloaded its data, recharged its batteries and loaded new search plans. On the ship, a team of twelve engineers from the Woods Hole Oceanographic Institution, led by Mike Purcell, who had pioneered the design and development of the search vehicles, worked twelve-hour shifts. They were as tightly drilled as a Formula 1 pit crew.

Each time a vehicle surfaced, it took the engineers about 45 minutes to download the information it had collected onto a computer, and another half hour to process it into a form they could quickly review on a monitor. French and German investigators and representatives of Air France peered over their shoulders. The work looked measured and methodical, but tension hung in the air: the stakes were high for French national pride, for the reputation of the manufacturer, Airbus, and for the safety of air travel everywhere.

Several previous expeditions had been unsuccessful. In France, Brazil and around the world, the victims' families were waiting for news. Interpreting sonar data requires careful analysis that cannot be entrusted entirely to a computer, so Purcell and his engineers relied on years of accumulated experience. On their monitors they studied the rocky bottom, kilometer after kilometer. This routine lasted five days, until the monotony broke: a field of debris appeared on the screen, and the team realized they had reached the crash site, a strong return from objects of artificial origin in the ocean desert. At least, so they assumed; they could not yet be sure. The engineers reprogrammed the vehicles to return to the area and run back and forth across it. This time the robots had to fly close enough, about 9 m above the bottom, for their cameras to take photographs by the light of their onboard lamps. When the vehicles brought the images back to the surface, the engineers and investigators saw the debris field and had their answer: they had found the wreckage of the airliner that had become a grave for hundreds of people. Soon another team returned to the scene of the tragedy with a different type of robot, a remotely operated underwater vehicle.

This was a heavy-duty vehicle designed specifically for work at depth, connected to the ship by a cable. Using the maps generated by the successful search, the ROV located the black boxes - the cockpit voice recorder and the flight data recorder - and lifted them to the surface. The record of the doomed pilots' final minutes was retrieved from the depths of the ocean, and investigators could now reconstruct the fateful confusion aboard the automated airliner. Then the underwater vehicle set about its grim mission: recovering the remains of the dead.

The crash of Air France Flight 447 and the operation to find its wreckage link modern automation and robotics in two extreme environments: at the edge of the stratosphere and in the depths of the sea. The aircraft fell into the ocean because of failures in the interaction between humans and automated systems. Its fragments were then found by people using remotely operated and autonomous robots.

Although the words "automated" and "autonomous" (in their most common meanings) imply that such systems operate on their own, in both cases the failure and the success came not from machines and people acting separately, but from machines and people acting together. Human pilots fought for the life of an aircraft that had been automated for greater safety and reliability; a web of interconnected ships, satellites and free-floating buoys helped locate the crash site; engineers processed the information returned by the robots and acted on it.

Automated and autonomous vehicles constantly returned to their creators - humans - for information, energy and direction. The tragedy of Air France Flight 447 made it clear that by constantly adapting and modifying our environment, we are remaking ourselves. How could pilots become so dependent on computers that they dropped a perfectly good airplane into the sea? What is the role of humans in transportation, exploration and military operations, when more and more tasks of primary importance seem to be performed by machines? The extreme view is that humans are close to "going out of use," that robots "literally need one software update" to become fully autonomous, as Scientific American recently wrote. On this view, robots are advancing: we meet them more and more often in familiar surroundings. Anxieties about the unknown and uncertain capabilities of artificial intelligence stem from the belief that we are on the cusp of "superintelligence." Our world is on the brink of change; indeed, it is already changing under the influence of robots and automation.

Suddenly new projects are appearing that embody old dreams of smart machines helping us do our jobs, easing physical labor and taking over routine tasks in everyday life. Robots that exist and work in close proximity to humans - physically, cognitively and emotionally - are becoming an ever broader and more promising research topic. Autonomy, the dream that robots will one day behave as fully independent entities, remains a source of inspiration, innovation and fear. The excitement comes from how raw the experiment still is: the precise forms of these technologies are far from settled, and their social, psychological and cognitive consequences are even less certain.

How will our robots change us? In whose image and likeness will we make them? What will be left of our traditional occupations - scientist, lawyer, doctor, soldier, manager, even driver and janitor - when these tasks are performed by machines? How are we going to live and work? We do not need to speculate: for the most part this future has already arrived, if not in everyday life, then in extreme environments, where we have been using robots and automation for decades. Humans cannot survive unaided in the upper layers of the atmosphere, in the depths of the ocean or in outer space. Because sending people into these dangerous environments is so demanding, robotics and automation were created and deployed there earlier than in the settings more familiar to us.

In extreme environments, the relationship between humans and robots is put to the test. The most innovative developments appear there, and there engineers have the greatest freedom to experiment. Despite the physical isolation, it is in these settings that the cognitive and social effects of such devices first became visible. With human lives, expensive equipment and critical missions at stake, autonomy must always be constrained by considerations of safety and reliability. In such conditions the bustle of everyday life temporarily recedes, and out of the surrounding darkness we can isolate fragmentary, ghostly allegories of human life in a world of technology. The social and technological processes in the cockpit of an airliner or inside a deep-sea vehicle are not fundamentally different from those in a factory, an office or a car. But in extreme conditions they show themselves more clearly and are therefore easier to understand.

Every airline flight is a story, just like every oceanographic expedition, space flight or military operation. Through these stories of specific people and machines, we can piece together the subtle dynamics at work. In extreme conditions we glimpse our near future, when such technologies may be introduced into fields such as road transport, health care and education. Devices controlled remotely by people or operating autonomously open up qualitatively new possibilities for interaction between people and machines, new forms of presence and new kinds of experience, while also drawing our attention to the dangers, ethical questions and unwanted consequences of living among smart machines. We see a future in which human presence and knowledge will matter more than ever, but in unusual and unfamiliar ways. And these machines are simply wonderful.

I am not the only person who has admired airplanes, spacecraft and submarines all my life. In fact, the heroes of the stories I tell below were guided not only by the search for practical benefit; they were also driven by a passion for new technologies. It is no coincidence that such stories have often been told in science fiction about people and machines. Stories of humans and machines interacting at the limits of their abilities are captivating and surprising, and they awaken hopes about who we might become. This enthusiasm is sometimes reflected in a naive faith in the promise of technology. But gradually such interest leads us to the central philosophical and humanistic questions: who are we? How are we connected to our work and to each other? How do our creations extend our experience? How can we live in this changing world? These questions come up on their own when you start talking to the people who create and operate robots and machines. I want to share information I received first-hand, from detailed interviews and from the latest research at the Massachusetts Institute of Technology and other organizations, in which robotics and automation are tested in the extreme conditions of the ocean depths, in flight (civil and military) and in space. This is not an imagined future but what is happening today: we will see how people control robots and receive information through autonomous devices, and we will examine how these interactions affect their work, their experience of life, and their skills and abilities.

Our story begins where I myself began: in the depths of the ocean. Twenty-five years ago, when I was an engineer developing embedded computers and instruments for deep-sea robots, I was amazed to find that this technology was changing oceanography, scientific methods and even the very nature of the oceanographer's profession in unpredictable ways. That realization led me to two parallel careers. As a scholar, I have studied the interactions between humans and machines, from the ironclad warships of the American Civil War to the computers and software that helped the Apollo astronauts land on the Moon.

As an engineer, I have fed the findings of this research back into current projects, developing robots and devices for use in close interaction with humans. In some of the stories I appear as a participant, in others as an observer, and in still others as both at once. Over years of accumulating experience, searching and talking with people, I became convinced that we must change the way we think about robots. Even the language we use to talk about them is largely borrowed from twentieth-century science fiction and has little to do with the technology of our time. Remotely piloted aircraft, for example, are called drones, as if they were mindless automatons, when in fact they are closely controlled by humans.

Robots are often presented (and sold) as fully autonomous agents, but even today's limited autonomy often exists only in the human imagination. The robots we use so widely and variously are hardly menacing automatons: they are embedded in social and technical networks just as we are. Below we will look at many examples of how we work together with our machines. It is all about the combinations. It is time to consider what functions modern robots actually perform, so that we can better understand our relationship with these often astonishingly skillful human creations. I offer a conclusion backed by empirical research: whatever robots do in the laboratory, in the real world, where human lives and real resources are at stake, we strive to limit their autonomy with a large number of required approvals and opportunities for human intervention.

I am not arguing about whether machines are intelligent, or whether they may one day become intelligent enough; rather, my contention is that such machines are never isolated from humans. Let us list three twentieth-century myths about robotics and automation. The first is the myth of linear progress: the idea that technology will move from direct human control to remote control, and then to fully autonomous robots. The words of Peter Singer, who frequently champions autonomous systems, capture the essence of this myth. He writes that "the ability of the people to maintain control over what is happening is being nullified both by those at the helm and directly by technology, and therefore people will soon be excluded from the control loop." But there is no reason to assume that evolution will follow this path, or that "technology itself," as Singer writes, will take us there. In fact, there is evidence that humans are coming into ever deeper contact with their machines.

We find again and again that people, the vehicles they operate remotely, and autonomous vehicles develop in parallel, influencing one another. Unmanned aerial vehicles, for example, would not be able to fly in US national airspace without corresponding changes to manned aircraft. Or take another example: new advances in robotic spacecraft servicing are reflected in the astronauts' work on the Hubble Space Telescope. The most advanced (and most complex) technologies are not those that work apart from people, but those that are most deeply embedded in the social system and respond most readily to what happens within it. The second is the substitution myth, the idea that machines will gradually take over all the tasks humans perform. This myth is a twentieth-century version of what I call the Iron Horse phenomenon.

At first people imagined that railways would do away with the need for horses, but trains turned out to be very poor horses. Railways came into their own when people learned to do entirely new things with them. Human-factors researchers and cognitive scientists argue that automation rarely just "mechanizes" a human task. Rather, it tends to make the task more complex, often increasing the workload (or redistributing it). Remotely piloted aircraft do not perform the same tasks as manned aircraft; they take on new functions. Remotely controlled robots on Mars do not replicate the work of field geologists; they and the people who work with them are learning to conduct field research in a new environment, through remote mechanisms.

Finally, there is the third myth, the myth of full autonomy: the utopian idea that robots, today or in the future, can act entirely on their own. Yes, automata can take on some tasks previously performed by humans, and they are indeed capable of acting independently for limited periods in response to changes in their environment. But machines that do not depend on human direction are useless machines. Only a stone can be truly autonomous (and even a stone was formed and placed where it lies by its environment). Automation changes the degree of human involvement in a machine's operation, but it does not eliminate the need for it. In any system, even a seemingly autonomous one, we can always find the interfaces through which a person directs its work and reads its data, and through which it becomes useful. To quote a recent report of the US Defense Science Board, "There are no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen, or marines."

To think in 21st century terms and to change our views on robotics, automation, and especially the newer idea of autonomy, we must understand how human intentions, plans and assumptions shape the machines people create. Every operator, in controlling a vehicle, interacts with designers and programmers whose presence remains embedded in the machine, even if only as structural elements or lines of code written many years earlier. Air France Flight 447's onboard computers could have continued to fly the plane with limited airspeed data, but humans had programmed them not to. Even when software takes actions that cannot be predicted, it behaves within the frameworks and constraints laid down by its creators. How a system was developed, by whom, and for what purposes determines its capabilities and the ways it interacts with the people who use it. My goal is to get away from these myths and to make sense of the concept of autonomy in the context of the 21st century.

Through the stories that follow, I intend to reshape the public conversation and to draw a conceptual map for a new era. To draw that map, when speaking about vehicles and robots in this book I will use the terms human-controlled, remote and autonomous. The first is an analogue of the not always apt word "manned"; in some cases "controlled" will therefore mean "controlled by a person inside the vehicle." These are, of course, the old and familiar kinds of craft - ships, airplanes, trains and automobiles - the machines in which people travel. Human-controlled systems are usually not considered robots at all, although they increasingly resemble robots with humans inside. Remote, shorthand for remotely operated vehicle, simply indicates where the operator is in relation to the vehicle. Even when the cognitive task of controlling a remote system is almost identical to the one performed by a physically present operator, great cultural significance is attached to the operator's presence or absence and to the risks that go with it.

The most striking example is waging war remotely, thousands of kilometers from the war zone - an experience completely different from that of an ordinary soldier. As a cognitive phenomenon, human presence is intertwined with its social aspect. Automation, too, is a twentieth-century idea, and it still reflects the mechanistic view that machines follow predetermined procedures step by step. The term "automated" is commonly used to describe the computers on board aircraft, although they run modern and rather complex algorithms. Autonomy is the more fashionable word these days and one of the top research priorities of the ever-shrinking US Department of Defense. Some researchers draw a sharp line between autonomy and automation, but in my view the difference is only that autonomy involves a wider degree of independent decision-making than simple feedback; in addition, the concept of "autonomy" gathers together many ideas borrowed from artificial intelligence and other disciplines. And, of course, the autonomy of individuals and groups is a source of constant controversy in politics, philosophy, medicine and sociology. This should come as no surprise, since technologists often borrow terms from the social sciences to describe their machines.

Even among designers, the term "autonomy" can have several different meanings. In spacecraft design, autonomy means onboard processing of the data required for the craft's operation (whether it is an orbiting automated station or a mobile robot), as distinct from tasks such as mission planning. At the Massachusetts Institute of Technology, where I teach, engineering courses on autonomy are mostly about "path planning": how to get from one point to another in a reasonable amount of time without crashing into anything. In other systems, autonomy is treated as analogous to intelligence: the ability to make the decisions a person would make in a given situation, or the ability to act in conditions its creators did not anticipate or foresee.
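For readers unfamiliar with the term, path planning can be reduced to a toy problem: find a route across a grid of free and blocked cells from a start cell to a goal cell. The sketch below is only an illustrative breadth-first search in Python on a made-up grid, not material from MIT's courses or from the book; real planners must also cope with continuous space, uncertainty and the vehicle's dynamics.

```python
from collections import deque

# Toy path planning: breadth-first search on a small occupancy grid.
# 0 = free cell, 1 = obstacle. The grid and the problem are invented for illustration.

GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan_path(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}                       # remembers how each cell was reached
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                            # reconstruct the route backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                                     # the goal cannot be reached

print(plan_path(GRID, start=(0, 0), goal=(4, 4)))
```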

Autonomous underwater vehicles are so named because they operate on their own, in contrast to remotely operated vehicles connected to a ship by long cables. Even so, the engineers who build such vehicles say they are semi-autonomous, since they only rarely operate without any contact with an operator. The term "autonomous" implies greater freedom of action; it describes how a vehicle is operated, and that can vary. A recent study proposes the term "increasing autonomy": in this way its authors emphasize that autonomy is relative, and that "complete" autonomy, meaning machines that need no information from a person at all, will always be out of reach.

In this book, the working definition of autonomy will be: human-designed means of transforming information from the environment into purposeful plans and actions. Wording matters, and it gives the debates a particular flavor, but we should not dwell on definitions. I will often rely on the language (sometimes imprecise) used by the people I work with. The point of this book is not definitions but descriptions of real work: how people use these systems in the real world to gain new experience, to explore, or even to fight and kill. What is really going on? If we pay attention to the lived experience of the designers and users of robots, much becomes clear. The word "drone," for example, hides the inherently human nature of these robots and attributes their dark sides to abstractions such as "technology" or "autonomy." When we examine the everyday work of Predator operators, we find that they do not wage war with automatic devices: humans still design, program and control the machines.
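To make that working definition concrete, here is a minimal sketch of a sense-plan-act loop, written in Python with invented stand-in functions; it illustrates the idea only and is not the software of any real vehicle. Note that the goal, the world model and the stopping rule are all authored by people in advance, which is exactly the point.

```python
# A purely illustrative sense-plan-act loop. The "sensor", "planner" and
# "actuator" below are invented stand-ins, not code from any real system.

def sense(position):
    # Stand-in for a sensor: here the "environment" is just the current position.
    return position

def plan(observation, goal):
    # Stand-in for a planner: step one unit along each axis toward the goal.
    step = lambda a, b: (b > a) - (b < a)           # yields -1, 0 or +1
    return (step(observation[0], goal[0]), step(observation[1], goal[1]))

def act(position, command):
    # Stand-in for actuation: apply the commanded step to the world.
    return (position[0] + command[0], position[1] + command[1])

def autonomy_loop(start, goal, max_steps=100):
    """Transform information from the environment into plans and actions."""
    position = start
    for _ in range(max_steps):
        observation = sense(position)               # information from the environment
        if observation == goal:
            return position                         # the purposeful plan is achieved
        command = plan(observation, goal)           # a plan, within designer-set limits
        position = act(position, command)           # action in the world
    raise RuntimeError("step limit reached: hand control back to a human")

print(autonomy_loop(start=(0, 0), goal=(3, 5)))     # -> (3, 5)
```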

There is a long-running debate about the ethics and politics of remote killing by drones with distant operators, and about the secrecy surrounding such devices operating in domestic US airspace. But these debates are about the nature, place and timing of human decisions, not about autonomous machines. So the question is not manned versus unmanned, nor human-controlled versus autonomous. The central questions of this book are: "Where are the people?", "Who are these people?", "What are they doing?", "When are they doing it?" Where are the people? (On a ship … in the air … inside the vehicles … or in an office?) The work of a Predator operator is akin to that of an airplane pilot: he monitors the state of the onboard systems, takes in information, makes decisions and acts on them. But his body is somewhere else, perhaps several thousand kilometers from the results of his work. That difference matters. The tasks are different. The risks are different, and so is the balance of power.

The human mind can travel to other places, other countries, other planets. Knowledge gained through the mind and the senses is different from knowledge gained through the body (which is where you eat, sleep, talk, defecate). We choose between these two ways of knowing depending on the situation, and the choice has consequences for everyone involved. Who are these people? (Pilots … engineers … scientists … unskilled workers … managers?) Change the technology, and both the task and the kind of specialist who performs it change. In fact, you change the entire pool of people capable of operating the system. It takes years of study and training to become a pilot, and the profession sits at the top of the personnel hierarchy. Does flying an aircraft remotely require the same skills and traits? From which social classes can the workforce be recruited?

The growth of automation on commercial aircraft has gone hand in hand with a broadening of the pilot population, both in industrialized countries and around the world. Is a researcher someone who travels into dangerous environments, or someone who sits at home in front of a computer? Do you have to love life aboard a ship to become an oceanographer? Can you explore Mars from a wheelchair? Who are these new pilots, researchers and scientists who work through remote access? What are they doing? (Flying … controlling … processing information … communicating?) Physical effort turns into the processing of visual information, and then into a cognitive task. What once required strength now requires attention, patience and quick reactions. Does the pilot keep his hands on the controls when he flies the plane, or does he key commands into the autopilot or flight computer to program the aircraft's path? What is the role of his judgment of the situation? What is the role of the engineer who programmed the onboard computer, or of the technician who set it up?

When do they do it? (In real time … with some delay … in advance, months or years before the mission?) The flight of an ordinary airplane takes place in real time: the person reacts immediately to events, and his actions have an immediate effect. In spaceflight, the vehicle may be on Mars (or approaching a distant asteroid), in which case it may take 20 minutes for a command to reach it and another 20 minutes for the operator to see that something has happened. Or we may say that a craft lands "automatically," when in reality we understand that it lands under the control of programmers who left their instructions months or years before the landing (although here we may have to adjust the very notion of "control"). Controlling an automated system can feel like interacting with a ghost. These simple questions draw our attention to how presence and action are being redistributed and realigned.
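The twenty-minute figure is simply the light-travel time of a radio signal over the Earth-Mars distance, which varies from roughly 55 to 400 million kilometres depending on where the two planets are in their orbits. As a rough worked example near the far end of that range:

\[
t = \frac{d}{c} \approx \frac{3.6 \times 10^{8}\ \text{km}}{3.0 \times 10^{5}\ \text{km/s}} = 1200\ \text{s} = 20\ \text{minutes (one way)}
\]

At closest approach the one-way delay falls to about three minutes, but it is never short enough for real-time, joystick-style control.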

The new forms of human presence and action are not trivial, and they are not equivalent to the old ones: the cultural identity of a pilot who risks his life over the battlefield differs from that of a person who controls a vehicle remotely from a ground station. But these changes can also be unexpected: a remote operator may feel more present on the battlefield than a pilot flying high above it. Scientific information about the Moon can be just as complete, or more so, when it is gathered by a remotely controlled vehicle rather than by a person standing on the surface, but the cultural experience of lunar exploration is then completely different. Let us replace the old-fashioned notions with rich, vivid pictures of how humans actually create and control robots and automated systems in the real world. The stories that follow are at once scientific, technical and humanistic.

We will see that human-controlled, remote and autonomous machines allow human presence and action to be shifted and redirected in time and space. The essence of this book is this: what matters is not the opposition between human-controlled and autonomous systems, but the questions "Where are the people?", "Who are these people?", "What are they doing, and when?" The last and most difficult questions will be: "How does human perception change?" and "Why does it matter?"