By John P. Desmond, AI Trends Editor
The churn among AI researchers at Google continues with the departure of Samy Bengio, a Canadian computer scientist known for cofounding Google Brain, who had been leading a large group of researchers working in machine learning.
The impending departure was announced on April 6 in an account from Bloomberg. It follows the departure of several colleagues who questioned how papers are reviewed and diversity practices at Google. Bengio's last day will be April 28; he joined Google in 2007.
The resignation was preceded by the departures of former Google ethical AI co-leads Timnit Gebru (see AI Trends, Dec. 10, 2020) and Margaret Mitchell (reported on Feb. 19), both of whom had reported to Bengio.
Later in February, Google reorganized the research unit and placed the remaining Ethical AI group members under Dr. Marian Croak, a move that reduced Bengio’s responsibilities.
Bengio has authored some 250 scientific papers on neural networks, machine learning, deep learning, statistics, computer vision and natural language processing, according to DBLP, the computer science bibliography website.
“While I am looking forward to my next challenge, there’s no doubt that leaving this wonderful team is really difficult,” Bengio wrote in the email announcing his resignation, according to Bloomberg. He did not refer to Gebru, Mitchell, or the disagreements that led to their departures. Google declined to comment for Bloomberg.
Tributes for Samy Bengio
“The resignation of Samy Bengio is a big loss for Google,” tweeted El Mahdi El Mhamdi, a scientist at Google Brain who said Bengio helped build “one of the most fundamental research groups in the industry since Bell Labs, and also one of the most profitable ones.”
“I learned so much with all of you, in terms of machine learning research of course, but also on how difficult yet important it is to organize a large team of researchers to promote long term ambitious research, exploration, rigor, diversity, and inclusion,” Bengio stated in his email.
According to a Reuters report, Andrew Ng, an early Brain member who now runs the software startup Landing AI, said Bengio "has been instrumental to moving forward AI technology and ethics." Another founding member, Jeff Dean, now oversees Google's thousands of researchers.
Google Brain researcher Sara Hooker in a tweet described Bengio’s departure as “a huge loss for Google.”
In February, Google let go staff scientist Margaret Mitchell, alleging she had transferred electronic files out of the company. Gebru's departure followed a dispute over a paper she had submitted to a conference on ethical concerns around large language models. Mitchell has said she tried "to raise concerns about race and gender inequity, and speak up about Google's problematic firing of Dr. Gebru," Reuters reported. Gebru has said the company wanted to suppress her criticism of its products. Google has said it accepted her offer to resign.
Bengio had defended the pair, who co-led a team of about a dozen people researching ethical issues related to AI software. In December, Bengio wrote on Facebook that he was stunned that Gebru, whom he was managing, was removed from the company without him being consulted, Reuters reported.
Nicolas Le Roux, a Google Brain researcher, told Reuters that Bengio had devoted himself to making the research organization more inclusive and “created a space where everyone felt welcome.”
Mitchell joined Google in November 2016 after a stint at Microsoft Corp.’s research lab where she worked on the company’s Seeing AI project, a technology to help blind users “visualize” the world around them that was heavily promoted by Chief Executive Officer Satya Nadella. At Google, she founded the Ethical AI team in 2017 and worked on projects including a way to explain what machine-learning models do and their limitations, and how to make machine-learning datasets more accountable and transparent, according to an account in Business Maverick.
Big Tech Companies Framing Conversation About Ethical AI
Google’s PR meltdown around ethical AI is a reminder of the extent to which a handful of giant companies—Big Tech—are able to direct the conversation around ethical AI, suggested a recent account in Fast Company. The discussion is being framed as high stakes, with AI underpinning many important automated systems today, from credit scoring and criminal sentencing to healthcare access and whether one gets a job interview.
Harms the models can cause when deployed in the real world are apparent in discriminatory hiring systems, racial profiling platforms targeting minority ethnic groups, and predictive-policing dashboards that risk being racist. Several lawsuits have been filed by Black men who say they were falsely arrested after being misidentified by facial recognition technology used by law enforcement.
A handful of giant companies determine which ideas get financial support, and decide who gets to be in the room to create and critique the technology.
The experiences of Gebru and Mitchell at Google demonstrate that it’s not clear whether in-house AI ethics researchers have much clout in what their employers are developing. Some observers suggest that Big Tech’s investments in AI ethics are PR moves. “This is bigger than just Timnit,” stated Safiya Noble, professor at UCLA and the cofounder and co-director of the Center for Critical Internet Inquiry. “This is about an industry broadly that is predicated upon extraction and exploitation and that does everything it can to obfuscate that.”
Questions about the diversity of the AI ethics "deciders" are also being raised. A new analysis of the 30 top organizations that work on responsible AI (including Stanford HAI, AI Now, Data & Society, and Partnership on AI) showed that of the 94 people leading the institutions, three are Black and 24 are women. The analysis was conducted by Women in AI Ethics, headed by Mia Shah-Dand, a former Google community group manager.
Some suggest the limited diversity leads to a disconnect between research and the communities impacted by AI. AI ethics researchers focus on technical ways of taking bias out of algorithms and achieving mathematical notions of fairness. “It became a computer-science-y problem area instead of something that’s connected and rooted in the world,” stated Emily Bender, a professor of linguistics at University of Washington and a coauthor with Gebru for “On the Dangers of Stochastic Parrots,” the paper that led to issues at Google.
Dr. Marian Croak is the New Ethics Leader at Google Research
In an interview with a coworker published recently on the Google blog, Marian Croak outlined how she plans to approach her new responsibilities.
Dr. Croak is an engineer who worked for many years at AT&T Labs before moving to Google about seven years ago. She is credited as a developer of Voice over IP and has earned over 200 patents. At Google, she has concentrated on service expansion into emerging markets. In one example, she led the deployment of Wi-Fi across the railway system in India, dealing with extreme weather and high population density.
“This field, the field of responsible AI and ethics, is new,” Croak stated. “Most institutions have only developed principles, and they’re very high-level, abstract principles, in the last five years. There’s a lot of dissension, a lot of conflict in terms of trying to standardize on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use? There’s quite a lot of conflict right now within the field, and it can be polarizing at times. And what I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now, so we can truly advance this field.”