Unleash Artificial Intelligence: OpenAI, Elon Musk's Crazy Plan - Alternative View

Politicians and capitalists have a Friday afternoon tradition of dumping a lot of news to hide the bad news among them. So it was a bit odd that Elon Musk, the founder of Tesla, an electric vehicle manufacturer, and Sam Altman, president of the renowned tech incubator Y Combinator, unveiled their new AI-themed company at the very end of a week-long AI conference in Montreal last December.

There was a reason OpenAI was introduced at such a late hour. Not because nobody was watching, but because everyone was watching. When some of the most powerful companies in Silicon Valley sensed which way the wind was blowing, they began offering enormous sums to OpenAI's newly assembled AI talent to keep those geniuses at home. At the last minute, the offers, some made at the conference itself, grew so large that Musk and Altman had to delay the startup's announcement. “The sums reached the edge of insanity,” says Wojciech Zaremba, a scientist who joined OpenAI after internships at Google and Facebook and was among those who received major eleventh-hour offers.

How many dollars mark the edge of insanity? Two years ago, as the market for the latest machine learning technology began to heat up, Microsoft Research VP Peter Lee said that the cost of a top AI researcher had exceeded the cost of a top quarterback in the National Football League, and that was under normal circumstances, not when Silicon Valley's biggest entrepreneurs were trying to poach the best talent. Zaremba says that as OpenAI was coming together, he was offered two to three times his market value.

OpenAI did not match those offers. But it offered something different: the chance to do research focused exclusively on the future rather than on products and quarterly profits, and ultimately to share most, if not all, of that research with anyone who wants it. That's right: Musk, Altman, and company set out to give away what could be the most transformative technology of the 21st century, for free.

Zaremba says the borderline-insane offers actually put him off, despite his huge respect for companies like Google and Facebook. He felt the money was being deployed to prevent OpenAI's creation, which only reinforced his commitment to the generous startup. “I realized,” says Zaremba, “that OpenAI was the best place for me.”

Do you feel the irony at the heart of this story? While the world's largest tech companies try to retain their researchers the way NFL teams try to retain their star quarterbacks, the researchers themselves just want to share their insights. In the rarefied world of AI research, the brightest minds are not driven only by the next product cycle or profit margin, if they are driven by them at all. They want to make AI better, and you can't make AI better when you keep your latest breakthroughs to yourself.

OpenAI recently released its first batch of AI software: a toolkit for building artificial intelligence systems based on a technology called reinforcement learning, one of the key techniques behind AlphaGo, the Google AI that stunned the world by mastering the game of Go. With this toolkit, you can build systems that drive a new generation of robots, play Atari games, and, yes, beat the world's best at Go.
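The core idea of reinforcement learning, trying actions over and over and reinforcing whichever ones lead to reward, can be sketched in a few lines of self-contained Python. The toy "corridor" environment, reward scheme, and hyperparameters below are invented purely for illustration and are not part of OpenAI's toolkit:

```python
import random

# Toy environment: a corridor of 5 cells. The agent starts at cell 0
# and receives a reward of +1 only upon reaching cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-learning: play many episodes, tracking an estimated value Q[s][a]
# for each state-action pair, and reinforcing actions that lead to reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: usually take the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Nudge the value estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right in every non-goal cell.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)  # expected: [1, 1, 1, 1]
```

The same trial-and-reinforcement loop, scaled up with deep neural networks in place of the table `Q`, is what powers systems like AlphaGo and Atari-playing agents.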

But the toys are just the beginning. OpenAI is on a billion-dollar mission to push AI as far as it can go. In how the company came together, and in what it plans to do, you can see the next wave of innovation taking shape. We don't yet know whether OpenAI will be the main driver of these changes. But the forces behind this rather unusual startup's creation show that the next generation of AI will change not only the technology but also the way the technology is developed.

Artificial intelligence is everywhere

Silicon Valley has a passion for exaggeration, and bold-sounding claims should always be taken with a grain of salt. But in the field of artificial intelligence, change really is under way. Inside companies such as Google and Facebook, deep learning is already helping Internet services identify faces in photos, recognize voice commands on smartphones, and answer web searches. The same technology could solve many other problems in the future. It could help machines understand natural language, the kind we humans are used to speaking and writing. It could produce a new breed of robots, allowing machines not just to perform their tasks but to learn them on the fly. And some believe it could eventually give machines something like common sense, the ability to truly think like a person.

Along with such promise comes deep anxiety. Musk and Altman worry that if humans can build AI that does great things, they can build AI that does terrible things. They are not alone in this fear. And while it may seem counterintuitive, Musk and Altman also believe that the best way to fight malicious AI is not to restrict access to artificial intelligence but to expand it. That aspiration is what drew in a team of young, brilliant idealists.

OpenAI took shape one evening last summer in a private room at Silicon Valley's Rosewood Hotel, an upscale hotel that sits, quite literally, at the center of the venture capital world. Elon Musk was having dinner with Ilya Sutskever, who was then working on Google Brain, the company's project for developing deep neural networks: artificial intelligence systems that can learn to perform tasks, such as recognizing photos or drafting email replies, by analyzing massive amounts of digital data. Sutskever was one of the project's lead researchers.

Ilya Sutskever

Sam Altman, whose Y Combinator helped shape companies like Airbnb, Dropbox, and Coinbase, brokered the meeting, bringing together several AI researchers and Greg Brockman, a young but seasoned company builder who was formerly chief technology officer at the startup Stripe, another Y Combinator company. It was an eclectic group. But everyone shared one goal: to create a new kind of AI laboratory that would operate outside the control of Google, or anyone else. “The best thing I could imagine as a challenge,” Brockman says, “is to move humanity towards creating true AI along a safe path.”

Musk was there because of his long friendship with Altman - and because AI is critical to the future of his various ventures, and to the future as a whole. Tesla needs AI if it wants to build self-driving cars. SpaceX, another of Musk's companies, needs AI to take people into space and keep them safe and sound there. But Musk is also one of the first (and loudest) to warn that one day humanity could lose control of systems that are powerful enough to learn on their own.

There was a problem: many of the people skilled enough to solve these problems already worked at Google (or at Microsoft, Facebook, Baidu, and Twitter). And no one at that dinner was entirely sure these researchers could be lured to a new startup, even with Musk and Altman behind it. But at least one key player was ready to jump ship. “I understood the risks,” says Sutskever, “but I also knew it would be interesting to try.”

Break the circle

Encouraged by the conversations with Musk, Altman, and others at the Rosewood, Brockman soon set out to build the laboratory they all had in mind. Taking on the project full-time, he approached Yoshua Bengio, a University of Montreal scientist and one of the founding fathers of the deep learning movement. The field's two other pioneers, Geoff Hinton and Yann LeCun, work at Google and Facebook respectively, but Bengio remained committed to the academic world, somewhere beyond the reach of industry's tentacles. Bengio sketched out a list of the best researchers in the field, and over the next few weeks Brockman contacted everyone on it that he could reach, along with several others.

Many of these researchers liked the idea but were wary of making such a leap. In an attempt to break the circle, Brockman picked the ten researchers he wanted most and invited them to spend a Saturday of wine, snacks, and conversation at a Napa Valley winery. For Brockman, even the drive to Napa was a catalyst for the project. “It's an underrated way of bringing people together, especially when there's no way to rush getting where you're going,” he says. “You have to get there, but you also have to talk.” And once they reached the winery, that atmosphere remained. As Sutskever puts it, “the wine was secondary to the conversation.”

Greg Brockman

By the end of the day, Brockman had invited all ten researchers to join the lab and given them three weeks to think it over. By the deadline, nine of them were in. And they stayed in, despite big counteroffers from the giants of Silicon Valley. “I got a very attractive offer to stay, so the decision was not an easy one,” Sutskever says of Google, his former employer. “In the end, I decided to go to OpenAI, partly because of the very strong group of people and, of course, mostly because of its mission.”

The deep learning movement began in academia. Only recently have companies like Google, Microsoft, and Facebook moved into the field, as advances in raw computing power made deep neural networks a practical reality rather than a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of those companies' enormous resources, but they remained in close collaboration with the field's other theorists. Still, as LeCun explains, deep learning research requires a free flow of ideas. “When you do research in secret,” he says, “you fall behind.”

As a result, big companies now keep a great deal of their AI research to themselves. This is a major trump card, especially for Google, which has long kept the technology of its online empire secret. Not long ago, Google open-sourced the software engine that runs its neural networks. But plenty remains under lock and key in this race for the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don't want one or two large corporations to control the future of artificial intelligence.

Limits of openness

It all sounds great. But for all of OpenAI's idealism, the researchers may find themselves facing the same pressures that shaped their old jobs. Openness has its limits. And the long-term vision for AI isn't the only interest at play. OpenAI is not a charity. Musk's companies stand to benefit greatly from the startup's work, and so does Altman's Y Combinator. “There are certainly some competing objectives,” LeCun says. “It's a non-profit, but it has a close relationship with Y Combinator. And people will be paid to work in this industry.”

The lab does not pay the astronomical salaries that Google and Facebook offer AI researchers, Brockman says. But he says the lab “wants to pay them well,” and it is offering compensation in the form of stock options, first in Y Combinator and later, possibly, in SpaceX (which, unlike Tesla, is still a private company).

Nonetheless, Brockman insists that OpenAI will not give special treatment to its sister companies. OpenAI is a research initiative, not a consulting firm. Yet OpenAI's idealism has its limits. The company may not open-source everything it produces, although the ultimate goal remains to share most of it, whether as scientific papers or as Internet services. “Opening everything up isn't necessarily the best option. You need to nurture an idea, see where it leads, and then publish,” says Brockman. “We will produce a lot of open source code. But we'll also have a lot of stuff that isn't ready for release yet.”

Sutskever adds that OpenAI could even go as far as patenting some of its work. “We won't be patenting anything anytime soon,” Brockman says. “But we're open to changing tactics in the long term, if we think it will be better for the world.” For example, OpenAI could engage in pre-emptive patenting to discourage others from obtaining patents.

Some may see patents as a sign of a profit motive. But that, it seems, is the whole point of pre-emptive patenting: to keep the technology from being locked up by anyone else.

The superintelligence problem

When Musk and Altman unveiled OpenAI, they also cast the project as a way to neutralize the threat of a malicious superintelligence. Such a superintelligence could, of course, emerge from OpenAI's own work, but they insist that any threat would be mitigated by making the technology available to everyone. “We think it's far more likely that many AIs will stop the occasional bad actor,” Altman says.

But not everyone in the field believes in that outcome. Nick Bostrom, the Oxford philosopher who, like Musk, has warned of the dangers of AI, points out that if you share research without restriction, “bad actors” could grab it before anyone has made sure it is safe. “If you have a button that can do bad things to the world,” Bostrom says, “you probably don't want to give everyone access to it.” And if, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom asks, how would it differ from Google or Facebook?

He allows that OpenAI's non-profit status could change things, though it might not. The project's real strength, he says, is that it can act as a check on Google and Facebook. “It can reduce the likelihood that superintelligence will be monopolized. It can remove one of the possible reasons why some individuals or groups would have access to better AI than everyone else.”

In a recent paper, the philosopher argues that the main effect of an initiative like OpenAI, one that freely shares what it builds, is to accelerate the progress of artificial intelligence, at least in the short term. It may accelerate progress in the long run as well, provided the initiative maintains a higher level of transparency than would be commercially optimal.

“It may be that a philanthropically motivated R&D investor could seriously accelerate progress by pursuing open science policies,” he says.

Like Xerox PARC

In early January, nine of Brockman's researchers gathered at his apartment in San Francisco. The project was so new that they didn't even have whiteboards. (Imagine.) They bought a few the same day and got to work.

Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by attempting them over and over and tracking which approaches produce the best results. The other major challenge is so-called unsupervised learning: building machines that can truly learn on their own, without human assistance. Today, deep learning depends on carefully labeled data. To teach a neural network to recognize photos of cats, you must feed it a large number of examples, and those examples must be labeled as photos of cats. That labeling is done by hand, by people. Like many other researchers, OpenAI aims to build neural networks that can learn without carefully curated data.
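To see why hand-labeling matters, here is a deliberately tiny, self-contained sketch of supervised learning, using a nearest-centroid classifier. The "cat"/"dog" feature vectors below are invented for illustration; the point is that every training example must arrive with a human-supplied label before the machine can learn anything:

```python
# Supervised learning in miniature: the classifier only works because
# a human has labeled every training example as "cat" or "dog".
# (All feature values here are made up for illustration.)

def centroid(points):
    """Average a list of feature vectors component-wise."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# Labeled training set: (features, label). Features might be, say,
# ear-pointiness and whisker-length scores between 0 and 1.
training = [
    ((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"), ((0.95, 0.7), "cat"),
    ((0.2, 0.3), "dog"), ((0.3, 0.1), "dog"), ((0.1, 0.2), "dog"),
]

# "Training" here is just computing one centroid per label.
centroids = {
    label: centroid([f for f, l in training if l == label])
    for label in {"cat", "dog"}
}

def classify(features):
    """Assign the label whose centroid is nearest (squared distance)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(features, centroids[lbl])),
    )

print(classify((0.85, 0.75)))  # expected: cat
print(classify((0.15, 0.25)))  # expected: dog
```

Unsupervised learning, the harder goal the article describes, would mean discovering the cat/dog grouping from the feature vectors alone, with no labels provided at all.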

“If you have a really good method for unsupervised learning, machines can learn from all the knowledge on the Internet, like a person just looking around, or by reading books,” says Brockman.

He sees OpenAI as a modern incarnation of Xerox PARC, the storied technology research lab of the 1970s. Just as PARC's open, largely unfettered research gave rise to everything from graphical user interfaces and laser printing to object-oriented programming, Brockman and his team want to dive into what was once science fiction. PARC was owned by Xerox, but its research fed many other companies, including Apple, because people like Steve Jobs were receptive to it. Brockman wants everyone to be receptive to OpenAI's research.

Hoping to set that dynamic in motion, Brockman and company have recruited several other notable researchers, including Ian Goodfellow, another senior research scientist from the Google Brain team. Brockman says PARC essentially got a bunch of smart people together and waited to see what happened. “You need a shared vision without central control,” he says.

Relinquishing control is essential to the ideal. If you rally enough people around a collective goal, the end result will beat whatever you cook up in secret. But if AI becomes as powerful as promised, the equation changes. We will need assurance that the newfound AI adheres to the same egalitarian ideals that led to its creation in the first place. Musk, Altman, and Brockman place their faith in people, in the crowd. And if they turn out to be right, one day that crowd will not be entirely human.

ILYA KHEL