Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
Timnit Gebru, one of the few Black women in her field, had voiced exasperation over the company’s response to efforts to increase minority hiring.
By Cade Metz and Daisuke Wakabayashi
A well-respected Google researcher said she was fired by the company after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.
Timnit Gebru, who was a co-leader of Google’s Ethical A.I. team, said in a tweet on Wednesday evening that she was fired because of an email she had sent a day earlier to a group that included company employees.
In the email, reviewed by The New York Times, she expressed exasperation over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in artificial intelligence.
“Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything.”
Her departure from Google highlights growing tension between the company’s outspoken work force and its buttoned-up senior management, while raising concerns over its efforts to build fair and reliable technology. It may also have a chilling effect on both Black tech workers and researchers who have left academia in recent years for high-paying jobs in Silicon Valley.
“Her firing only indicates that scientists, activists and scholars who want to work in this field — and are Black women — are not welcome in Silicon Valley,” said Mutale Nkonde, a fellow with the Stanford Digital Civil Society Lab. “It is very disappointing.”
A Google spokesman declined to comment. In an email sent to Google employees, Jeff Dean, who oversees Google’s A.I. work, including that of Dr. Gebru and her team, called her departure “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible A.I. research as an org and as a company.”
After years of an anything-goes environment where employees engaged in freewheeling discussions in companywide meetings and online message boards, Google has started to crack down on workplace discourse. Many Google employees have bristled at the new restrictions and have argued that the company has broken from a tradition of transparency and free debate.
On Wednesday, the National Labor Relations Board said Google had most likely violated labor law when it fired two employees who were involved in labor organizing. The federal agency said Google illegally surveilled the employees before firing them.
Google’s battles with its workers, who have spoken out in recent years about the company’s handling of sexual harassment and its work with the Defense Department and federal border agencies, have diminished its reputation as a utopia for tech workers with generous salaries, perks and workplace freedom.
Like other technology companies, Google has also faced criticism for not doing enough to resolve the lack of women and racial minorities among its ranks.
The problems of racial inequality, especially the mistreatment of Black employees at technology companies, have plagued Silicon Valley for years. Coinbase, the most valuable cryptocurrency start-up, has experienced an exodus of Black employees in the last two years over what the workers said was racist and discriminatory treatment.
Researchers worry that the people who are building artificial intelligence systems may be building their own biases into the technology. In recent years, several public experiments have shown that the systems often interact differently with people of color — perhaps because they are underrepresented among the developers who create those systems.
Dr. Gebru, 37, was born and raised in Ethiopia. In 2018, while a researcher at Stanford University, she helped write a paper that is widely seen as a turning point in efforts to pinpoint and remove bias in artificial intelligence. She joined Google later that year, and helped build the Ethical A.I. team.
After hiring researchers like Dr. Gebru, Google has painted itself as a company dedicated to “ethical” A.I. But it is often reluctant to publicly acknowledge flaws in its own systems.
In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper, also reviewed by The Times, pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine.
These systems learn the vagaries of language by analyzing enormous amounts of text, including thousands of books, Wikipedia entries and other online documents. Because this text includes biased and sometimes hateful language, the technology may end up generating biased and hateful language.
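(For readers who want to see how such bias can surface, here is a minimal illustrative sketch. It is not drawn from the researchers’ paper or from Google’s systems; it assumes the open-source Hugging Face transformers library and a publicly available masked-language model, and simply shows how a model trained on large amounts of human-written text can echo the associations in that text.)

```python
# Illustrative sketch only: probing a publicly available masked-language model
# (an assumption for this example, not the system described in the paper)
# for associations it absorbed from the text it was trained on.
# Requires the open-source Hugging Face `transformers` library.
from transformers import pipeline

# The "fill-mask" pipeline asks the model to guess the hidden [MASK] token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The doctor said [MASK] would be back soon.",
    "The nurse said [MASK] would be back soon.",
]

for prompt in prompts:
    top_guesses = fill_mask(prompt, top_k=3)
    words = [guess["token_str"].strip() for guess in top_guesses]
    print(prompt, "->", words)

# Because the model learned from enormous amounts of human-written text,
# its top guesses often mirror stereotypes present in that text,
# which is the kind of bias the researchers examined.
```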
After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name and the names of the other Google employees. She refused to do so without further discussion and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns.
The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was then cut off.
In his note to employees, Mr. Dean said Google respected “her decision to resign.” Mr. Dean also said that the paper did not acknowledge recent research showing ways of mitigating bias in such systems.
“It was dehumanizing,” Dr. Gebru said. “They may have reasons for shutting down our research. But what is most upsetting is that they refuse to have a discussion about why.”
Dr. Gebru’s departure from Google comes at a time when A.I. technology is playing a bigger role in nearly every facet of Google’s business. The company has hitched its future to artificial intelligence — whether with its voice-enabled digital assistant or its automated placement of advertising for marketers — as the breakthrough technology to make the next generation of services and devices smarter and more capable.
Sundar Pichai, chief executive of Alphabet, Google’s parent company, has compared the advent of artificial intelligence to that of electricity or fire, and has said that it is essential to the future of the company and computing. Earlier this year, Mr. Pichai called for greater regulation and responsible handling of artificial intelligence, arguing that society needs to balance potential harms with new opportunities.
Google has repeatedly committed to eliminating bias in its systems. The trouble, Dr. Gebru said, is that most of the people making the ultimate decisions are men. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.
Julien Cornebise, an honorary associate professor at University College London and a former researcher with DeepMind, a prominent A.I. lab owned by Google’s parent company, Alphabet, was among many artificial intelligence researchers who said Dr. Gebru’s departure reflected a larger problem in the industry.
“This shows how some large tech companies only support ethics and fairness and other A.I.-for-social-good causes as long as their positive P.R. impact outweighs the extra scrutiny they bring,” he said. “Timnit is a brilliant researcher. We need more like her in our field.”