OpenAI Wants To Create An Omnipotent AI. But In A Good Way

One Saturday last month, five men aged 19 to 26 strode confidently through a cloud of purple smoke onto a stage in a converted auto showroom in San Francisco. They took seats at a row of computer keyboards to loud cheers from a crowd of several hundred. After ninety minutes of furious mouse clicking, the five wore rueful smiles and the applause turned consoling. Team OG, reigning world champions of Dota 2, the most lucrative competitive video game in the world, had lost two games in a row to a team of AI bots.


The result was remarkable, because complex video games are mathematically harder for machines than board games like chess or Go. And yet, leaning against a wall backstage, Sam Altman, CEO of OpenAI, the research organization that built the bots, felt mostly relief.

Artificial intelligence versus Dota players

“We were all quite nervous that morning; I put the odds at 60-40,” Altman says. He became CEO of OpenAI in March, after stepping down as president of the influential startup incubator Y Combinator. To succeed in his new role, Altman will need bots that do more than beat humans at video games; he will need bots that are better than humans at everything.

OpenAI's stated mission is to ensure that any future AI able to outperform humans at “most economically valuable work” benefits all of humanity. Such technology, called artificial general intelligence, "strong" AI, or AGI, is not even on the horizon yet, but OpenAI says that it and others are making rapid progress toward it. The organization has shown it can do research on par with the best in the world. It has also drawn accusations of fanning AI hype and of fixating on AGI and hypothetical AI threats.

Altman's plan is for OpenAI's research, and its provocations, to only pick up the pace. Formerly chairman of the organization's board, he took over as CEO after helping move the bulk of the nonprofit's staff into a new for-profit company, hoping to attract the billions in investment needed to shape the fate of AI and humanity. Altman says the big corporate labs at Alphabet and elsewhere should feel pressure from peers that aren't trying to maximize shareholder value. “I don't want to live in a world in which one company creates AGI, captures all of its value, and makes all the decisions,” he says.


At an MIT event in late 2014, Tesla CEO Elon Musk described AI research as an attempt at "summoning the demon." In the summer of 2015 he discussed the issue over dinner with Altman and several others and proposed creating a research lab, independent of the tech industry, that would steer AI in a positive direction. OpenAI was founded later that year, with Altman and Musk at its head. In early 2018, Musk left the board of directors, citing potential conflicts with his other roles.

In its short life, OpenAI has established itself as a serious venue for AI research. Ilya Sutskever, a co-founder of the organization who left Google's AI research group to join it, oversees a staff that includes former employees of Google, Facebook, Microsoft, and Intel. Their work on robotics and machine learning has appeared at the field's most prestigious conferences. The victory over Team OG in Dota 2 earned them respect in both the AI and gaming worlds.

OpenAI's conversion into a commercial company was driven by the sense that keeping up with giants like Alphabet would require access to ever-growing computing resources. In 2015, OpenAI said it had $1 billion pledged by Altman, Musk, LinkedIn co-founder Reid Hoffman, early Facebook investor Peter Thiel, and Amazon. Now Altman says one billion won't be enough. “The amount of money we need to make the mission succeed is much more than I originally thought,” he says.


IRS filings show that in 2017, the year OpenAI unveiled its first Dota bot, it spent $8 million on cloud computing. Since then its expenses have likely risen sharply. In 2018, OpenAI said that a predecessor of the system that beat OG harnessed more than 120,000 processor cores rented from Google's cloud division for weeks at a time. The champion-beating version trained for 10 months, racking up the equivalent of 45,000 years of Dota 2 played against itself. How much did that cost? Greg Brockman, OpenAI's CTO, says the project ran into "millions of dollars" but declines to give details.

Altman isn't sure OpenAI will keep relying on rented cloud computing; the organization remains open to buying, or even developing, its own AI hardware. It is closely watching the new chips being built by Google and a host of startups to squeeze more out of machine learning algorithms.

To raise the money needed for that future hardware, Altman has been pitching investors on a scheme that sounds wild even by Silicon Valley standards: invest in OpenAI, wait, and the company will return your investment a hundredfold, once it builds machines that outperform humans at most economically valuable work.

Altman says making that pitch has been "the most interesting fundraising experience of my life; it doesn't fit any other model." The keenest interest, he says, comes from very wealthy individuals intrigued by AI. Hoffman and the venture capital firm Khosla Ventures have invested in the new for-profit OpenAI; neither responded to requests for comment. No one is saying when to expect a return, and betting on OpenAI is not for the impatient: investors are told the wait may exceed the industry-standard ten years. “We tell them up front that they won't see any money for the next 10 years,” Altman says.

Despite its best intentions, OpenAI draws frequent criticism from leaders in the field. In February, the company revealed a language-processing system capable of generating surprisingly fluent text, and said the full version was too dangerous to release, warning that it could be misused to flood social networks with machine-written misinformation.

Several prominent AI researchers were annoyed, including Facebook's chief AI scientist Yann LeCun. In posts on Facebook, he defended the open publication of AI research and joked that, by the same logic, people should stop having babies, since they might one day generate fake news.

For some, the episode underscored that OpenAI's work isn't so different from that of other corporate and academic labs. "They're doing, give or take, the same research as everyone else, but they want to raise billions of dollars," says Zachary Lipton, a machine learning professor at Carnegie Mellon University. "The only way to do that is to bend the truth a little."

Altman concedes that OpenAI may have sounded the alarm too early, but argues that is better than sounding it too late. “The tech industry hasn't done a very good job of being proactive about potential abuse,” he says. OpenAI, he believes, has to demand that kind of self-censorship of itself.

After the defeat of OG last month, OpenAI staged a second exhibition meant to show a friendlier side of superhuman AI: Dota 2 experts and a few newcomers played on mixed teams alongside the bots.

The bots unlucky enough to be teamed with the newcomers mostly displayed a superhuman indifference to helping their struggling human partners. Instead, they focused on winning, following instincts honed over months of costly training.

Nobody knows how to make such narrowly specialized AI more flexible, or what ingredients would be needed to turn AGI from technological fantasy into reality. Even Altman is daunted by the scale of the problem. “There are days when I believe it's all going to happen, and days when it all looks like a pipe dream,” he says.

What do you think: will he succeed?

Ilya Khel