After blocking paper, Google parts ways with top artificial intelligence researcher, faces backlash




Former Google AI researcher Timnit Gebru speaks on stage during day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California.

Kimberly White | Getty Images

Google struggled on Thursday to limit the fallout from the departure of a prominent AI researcher after the internet group blocked the publication of a paper on an important AI ethics issue.

Timnit Gebru, who had been co-lead of Google’s AI ethics team, said on Twitter that she was fired after the paper was rejected.

Jeff Dean, Google’s head of artificial intelligence, defended the decision in an internal email to staff on Thursday, saying the paper “did not meet our requirements for publication.” He also described Gebru’s departure as a resignation, in response to Google’s refusal to agree to unspecified conditions she had set for remaining with the company.

The dispute threatened to shine a harsh light on Google’s handling of internal AI research that could harm its business, as well as on the company’s long-running struggles to diversify its workforce.

Before leaving, Gebru complained in an email to colleagues that there was “zero accountability” within Google over the company’s claims that it wanted to increase the proportion of women in its ranks. The email, first published on Platformer, also described the decision to block her paper as part of a process of “silencing marginalized voices.”

One person who worked closely with Gebru said there had been tensions with Google management in the past over her activism in pushing for greater diversity. But the immediate cause of her departure was the company’s decision not to allow the publication of a research paper she had co-authored, this person added.

The paper examined the potential for bias in large-scale language models, one of the hottest new fields of natural language research. Systems such as OpenAI’s GPT-3 and Google’s BERT attempt to predict the next word in any phrase or sentence, a method that has been used to produce surprisingly effective automated writing and that Google uses to better understand complex search queries.
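As a rough illustration of the underlying task, the sketch below uses the open-source Hugging Face transformers library and the publicly released bert-base-uncased checkpoint to score candidate words for a masked position in a sentence. It is an assumption-laden stand-in for the proprietary systems mentioned above, not a reproduction of them or of the blocked paper.

```python
# A minimal sketch of masked-word prediction, assuming the Hugging Face
# "transformers" library and the public "bert-base-uncased" checkpoint
# (an illustrative stand-in, not Google's production system).
from transformers import pipeline

# Build a fill-mask pipeline backed by a pretrained BERT model.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate tokens for the [MASK] slot, which is the
# missing-word prediction task these language models are trained on.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```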

Language models are trained on vast amounts of text, usually drawn from the internet, which has raised warnings that they could regurgitate racial and other biases contained in the underlying training material.
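One common way such bias is probed, sketched below under the same assumptions as above (the public bert-base-uncased model accessed through the transformers library), is to compare a model’s top completions for prompts that differ only in a single word. Any skewed associations in the output reflect patterns absorbed from the training text; the example is illustrative and not drawn from the blocked paper.

```python
# A hedged illustration of probing a pretrained model for skewed associations,
# again assuming the Hugging Face "transformers" library and the public
# "bert-base-uncased" checkpoint; the outputs depend entirely on that model's
# training data and are not results from the blocked paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Identical prompts except for the gendered word: differences in the
# top-ranked completions hint at associations learned from the training text.
for prompt in ("The man worked as a [MASK].", "The woman worked as a [MASK]."):
    completions = [p["token_str"] for p in fill_mask(prompt)]
    print(prompt, "->", completions)
```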

“From the outside, it looks like someone at Google decided this was bad for their interests,” said Emily Bender, a professor of computational linguistics at the University of Washington and a co-author of the paper.

“Academic freedom is very important; there are risks when [research] takes place in places that [don’t] have that academic freedom,” giving companies or governments the power to “shut down” research they do not approve of, she added.

Bender said the authors had hoped to update the paper with newer research in time for it to be accepted at the conference to which it had been submitted. But she added that it was normal for such work to be overtaken by newer research, given how quickly work in fields like this moves forward. “In the research literature, no paper is perfect.”

Julien Cornebise, a former artificial intelligence researcher at DeepMind, the London-based artificial intelligence group owned by Google’s parent, Alphabet, said the controversy “shows the risks of having AI and machine-learning research concentrated in the hands of a few powerful players in the sector, since it allows censorship of the field by deciding what gets published or not.”

He added that Gebru was “extremely talented: we need researchers of her caliber, unfiltered, on these issues.” Gebru did not immediately respond to requests for comment.

Dean said the paper, written with three other Google researchers, as well as external contributors, “did not account for recent research to mitigate” the risk of bias. He added that the paper “talked about the environmental impact of large models, but ignored subsequent research showing much greater efficiencies.”

© 2020 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied or modified in any way.
