At OpenAI, one of the world’s leading artificial intelligence labs, the quest to build a machine that can understand or learn anything a human can began as a public service. But four years after its founding, things are changing.

The company’s charter declares that its “primary fiduciary duty is to humanity.” But the guiding principles of transparency, openness and collaboration are giving way to the demands of fundraising, according to an MIT Technology Review report that cited almost three dozen interviews with past and current employees, collaborators, friends and other experts in the field.

OpenAI wants to be the first to create something called artificial general intelligence, or AGI — essentially a machine that can think for itself. The possibility of machines thinking independently raises concerns that the technology may have unforeseen negative impacts and has led to calls for regulations governing AI, including from people in the tech industry.

OpenAI, whose $1 billion backing came from investors and entrepreneurs including Elon Musk, Peter Thiel and Sam Altman, now the company’s CEO, said when it was founded that it would operate as a nonprofit to “build value for everyone rather than shareholders.”

Because of the potential for abuse of such powerful technology, the goal was to share research, collaborate with other developers and reach AGI before anyone could monopolize it, then distribute the benefits evenly around the world.

Over time, the company’s public image has diverged from what goes on behind closed doors, the report said. Fierce competitiveness and mounting pressure to attract funding have chipped away at some of those founding ideals. In March 2019, OpenAI changed its structure by setting up a “capped profit” arm, a for-profit entity that caps investor returns at 100 times their investment. In July 2019, it announced a $1 billion investment from Microsoft.

Musk is gone, having parted ways with the company in February 2018 over disagreements about its direction. In March 2019, Altman became OpenAI’s CEO. And Altman’s 2020 vision for the lab, shared privately with employees, indicates that OpenAI needs to make money to do research, not the other way around, the report said.

Criticism mounted after the lab announced a language model called GPT-2 in February 2019 but said it was too dangerous to release. GPT-2 could generate full essays from a short sample of text. Amid the blowback, OpenAI announced a staged release and ultimately published the full model in November. Critics dismissed the move as a publicity stunt, and employees grew frustrated, leaving leadership worried the episode would hurt the lab’s influence and its ability to hire top talent, the report said.

  • In response to the MIT Technology Review article, Musk tweeted this week that “all orgs developing advanced AI should be regulated, including Tesla.”
  • OpenAI pointed to its charter when asked about heightened secrecy at the company: “We expect that safety and security concerns will reduce our traditional publishing in the future,” the charter states, “while increasing the importance of sharing safety, policy, and standards research.”
  • Sundar Pichai, CEO of Google and Alphabet, wrote about the importance of government oversight of AI last month in The Financial Times, saying “technology’s virtues aren’t guaranteed.”