DeepMind Says It Has Nothing To Do With Research Paper Saying AI Could End Humanity

Image: NurPhoto/Contributor via Getty Images

After a researcher who works at DeepMind – the artificial intelligence company owned by Alphabet, Google’s parent company – co-wrote a paper claiming that AI could one day wipe out humanity, DeepMind is distancing itself from the work.

The paper was recently published in the peer-reviewed journal AI Magazine, and was co-authored by researchers at the University of Oxford and by Marcus Hutter, an artificial intelligence researcher who works at DeepMind. The first line of Hutter’s website reads: “I am a Senior Researcher at Google DeepMind in London and Honorary Professor at the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra.” The paper, which currently lists Hutter’s affiliations as DeepMind and ANU, reasons through a thought experiment about the future of humanity alongside a superintelligent AI that operates on mechanisms similar to today’s machine learning programs, such as reward-seeking. It concluded that this scenario could escalate into a zero-sum game between humans and AI that would be “fatal” if humanity lost.

After Motherboard published an article about the paper titled “Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity,” the company decided to distance itself from the work and asked Motherboard to remove the mention of the company. In a statement to Motherboard, DeepMind said that the affiliation was listed in “error” and was being removed (it had not been at the time of writing), and that Hutter’s contribution was made solely under his university position.

“DeepMind was not involved in this work and the authors of the paper have requested corrections to reflect this,” a DeepMind spokesperson told Motherboard in an email. The spokesperson added that DeepMind researchers also hold university professorships and pursue academic research separate from their work at DeepMind, through their academic affiliations.

The spokesperson said that while DeepMind was not involved in the paper, the company invests effort in guarding against harmful uses of AI and thinks “deeply about safety, ethics and wider societal impacts of AI and research and develop AI models that are safe, effective and aligned with human values.”

DeepMind declined to say whether it agreed with the conclusions of the paper co-authored by Hutter.

Michael Cohen, one of the paper’s co-authors, asked Motherboard for a similar correction. It is Motherboard’s editorial policy not to correct a headline unless it contains a factual error.

Although the company says it is committed to AI safety and ethics, it has already shown that when criticism from people within the company gets a little too pointed – whether or not they have outside commitments – it is all too happy to cut and run.

For example, in 2020, prominent artificial intelligence researcher Timnit Gebru – who at the time held a position at Google – co-authored a paper on ethical considerations in large machine learning models. Google demanded that she remove her name from the paper and retract it, and ultimately fired her. Gebru’s ousting prompted Google employees to publish a blog post explaining the events that led to the dismissal, including that the paper had in fact been approved internally; it was only after it became public that Google decided it could not be associated with the work and demanded that her name be removed. When she refused, the company fired Gebru.

In response to Motherboard’s original article about the paper, Gebru tweeted that when she asked Google whether she could keep her name on the AI ethics paper that ultimately led to her firing, under an affiliation that wasn’t Google, she “was greeted with laughter.”

Margaret Mitchell, another AI ethicist who was fired from Google along with Gebru, tweeted that Google told them that, as long as they worked at the company, Google “had FULL say in what we published.”

Holding affiliations in both academia and the private sector is relatively normal, and it comes with its own thorny ethical concerns tied to the long history of corporations tapping academia to produce favorable research. What Google has shown is that it will exploit this fuzzy boundary to shed criticism when it suits the business.
