Ex-Googler Timnit Gebru Starts Her Own AI Research Center
ONE YEAR AGO Google artificial intelligence researcher Timnit Gebru tweeted, “I was fired” and ignited a controversy over the freedom of employees to question the impact of their company’s technology. On Thursday, she launched a new research institute to ask the questions about responsible use of artificial intelligence that Gebru says Google and other tech companies won’t.
“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry.
Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies. Google has said she resigned and was not fired, but acknowledged that it later fired Margaret Mitchell, another researcher who with Gebru co-led a team researching ethical AI. The company placed new checks on the topics its researchers can explore. Google spokesperson Jason Freidenfelds declined to comment but directed WIRED to a recent report on the company's work on AI governance, which said Google has published more than 500 papers on "responsible innovation" since 2018.
The fallout at Google highlighted the inherent conflicts in tech companies sponsoring or employing researchers to study the implications of technology they seek to profit from. Earlier this year, organizers of a leading conference on technology and society canceled Google’s sponsorship of the event. Gebru says DAIR will be freer to question the potential downsides of AI and will be unencumbered by the academic politics and pressure to publish that she says can complicate university research.
DAIR will also work on demonstrating uses for AI unlikely to be developed elsewhere, Gebru says, aiming to inspire others to take the technology in new directions. One such project is creating a public data set of aerial imagery of South Africa to examine how the legacy of apartheid is still etched into land use. A preliminary analysis of the images found that in a densely populated region once restricted to non-white people, where many poor people still live, most vacant land developed between 2011 and 2017 was converted into wealthy residential neighborhoods.
A paper on that project will mark DAIR’s formal debut in academic AI research later this month at NeurIPS, the world’s most prominent AI conference. DAIR’s first research fellow, Raesetje Sefala, who is based in South Africa, is lead author of the paper, which includes outside researchers.
Safiya Noble, a UCLA scholar who serves on DAIR’s advisory board, recently launched a nonprofit of her own, Equity Engine, to support the ambitions of Black women. She is joined on the board by Ciira wa Maina, a lecturer at Dedan Kimathi University of Technology in Nyeri, Kenya.
DAIR joins a recent flowering of organizations and research taking a broader, more critical view of AI technology. New nonprofits and university centers have sprung up to study and critique AI’s effects on the world, such as NYU’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives. Some researchers inside AI labs also study the impacts and proper use of algorithms, and scholars from other fields such as law and sociology have turned their own critical eyes on AI.
Despite those shifts, Baobao Zhang, an assistant professor at Syracuse University, says the US public still seems to broadly trust tech companies to guide development of AI.