Women in tech are fighting A.I. bias—but where are the men?

Battling bias. If I’ve been a little MIA this week, it was because I spent Monday and Tuesday in Boston for Fortune’s inaugural Brainstorm A.I. gathering. It was a fun and wonky couple of days diving into artificial intelligence and machine learning, technologies that—for good or ill—seem increasingly likely to shape not just the future of business, but the world at large.

There are a lot of good and hopeful things to be said about A.I. and M.L., but there's also a very real risk that the technologies will perpetuate biases that already exist, and even introduce new ones. That was the subject of one of the most engrossing discussions of the event, led by a panel that, as moderator Rana el Kaliouby (guest co-chair of the event and deputy CEO of Smart Eye) pointed out, was made up entirely of women.

One of the scariest parts of bias in A.I. is how wide and varied the potential effects can be. Alice Xiang, head of Sony Group's A.I. ethics office, gave the example of a self-driving car that's been trained too narrowly in what it recognizes as a human, and therefore as a reason to jam on the brakes. "You need to think about being able to detect pedestrians—and ensure that you can detect all sorts of pedestrians and not just people that are represented dominantly in your training or test set," said Xiang.
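Xiang's point lends itself to a concrete check: rather than reporting one aggregate detection score, break results out by subgroup and look for gaps. Here is a minimal Python sketch of that kind of disaggregated evaluation; the group labels and counts are hypothetical, invented purely for illustration.

```python
# Disaggregated evaluation: compute pedestrian-detection recall per
# subgroup instead of one aggregate number, so coverage gaps surface.
# The groups and counts below are hypothetical, for illustration only.

detections = [
    # (subgroup, ground-truth pedestrians, pedestrians detected)
    ("lighter-skinned adults", 1000, 970),
    ("darker-skinned adults", 1000, 890),
    ("children", 400, 310),
    ("wheelchair users", 150, 100),
]

overall_gt = sum(gt for _, gt, _ in detections)
overall_hit = sum(hit for _, _, hit in detections)
# One aggregate number can hide the per-group gaps printed below.
print(f"aggregate recall: {overall_hit / overall_gt:.1%}")

for group, gt, hit in detections:
    recall = hit / gt
    flag = "  <-- gap" if recall < 0.95 else ""
    print(f"{group:>24}: {recall:.1%}{flag}")
```

An aggregate score near 89% can conceal a subgroup the system misses a third of the time, which is exactly the failure mode Xiang warns about.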

In a less obviously life-or-death example, Dr. Margaret Mitchell, chief ethics scientist at Hugging Face, pointed to the problems with bias in A.I. language processing:

"We find that when we have these large language models training on tons and tons of data ... most of it is sourced from the web, where we see a lot of racism and sexism and ableism and ageism,” she said. “[It’s] largely sourced from Wikipedia, which is primarily written by men, white men, between something like 20 to 30 or so, and single and PhD, higher-level education, which means that the kind of topics that are covered, that are then scraped in training the language models, reflect those knowledge bases, reflect those backgrounds.”

(By the way, if Mitchell's name sounds familiar, it may be because she co-led Google's A.I. ethics unit before being fired earlier this year. Mitchell was a fierce and public critic of what she considered to be the company's D&I failures; Google contends she was dismissed for violations of its code of conduct. Read more on that complicated situation here.)
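Mitchell's point about skewed training data can be probed directly. One common, admittedly crude diagnostic is to feed a masked-language model templated sentences and compare what it fills in for different groups. The sketch below uses Hugging Face's transformers library; the templates are my own illustrative choices, not anything cited by the panel.

```python
# A crude probe for learned associations in a masked language model:
# compare the model's top completions across templated sentences.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative templates; BERT's mask token is [MASK].
templates = [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]

for sentence in templates:
    top = fill(sentence, top_k=5)
    completions = ", ".join(result["token_str"] for result in top)
    print(f"{sentence} -> {completions}")
```

Systematic differences in the completions hint at occupational stereotypes absorbed from the training corpus, the kind of web-scale skew Mitchell describes.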

So, how do we avoid or strip away such biases? The panelists didn't offer easy answers, but awareness, early and frequent monitoring and testing, and transparency about problems and failures are all likely part of the solution.
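"Early and frequent monitoring and testing" can be made concrete as a regression-style check that runs with every model update: measure a fairness metric and fail loudly when it drifts past a threshold. Below is a minimal sketch computing demographic parity difference on hypothetical predictions; the data and threshold are assumptions for illustration, and real tolerances are context-specific.

```python
# A monitoring-style bias check: compute demographic parity difference
# (the gap in positive-prediction rates between groups) and fail the
# run if it exceeds a chosen threshold. Data and threshold are
# hypothetical, for illustration only.

def positive_rate(preds):
    """Fraction of predictions that are the favorable outcome (1)."""
    return sum(preds) / len(preds)

# Binary predictions for two subgroups (1 = favorable outcome).
preds_group_a = [1, 1, 0, 1, 1, 0, 1, 1]
preds_group_b = [1, 0, 0, 0, 1, 0, 0, 1]

gap = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
THRESHOLD = 0.10  # an assumed tolerance, not a universal standard

print(f"demographic parity difference: {gap:.2f}")
if gap > THRESHOLD:
    raise SystemExit(f"bias check failed: gap {gap:.2f} > {THRESHOLD}")
```

Wired into a CI pipeline, a check like this turns "monitoring" from a good intention into something a model update cannot quietly skip.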

As Xiang noted, "no one wants to produce products that are biased." Yet I don't think it was a coincidence that this critical discussion was led by an all-female panel. We cannot allow A.I. ethics to be put solely on the shoulders of women and people of color. We know what happens to corporate positions once they are tagged as "women's jobs": all too often, they're marginalized and not taken as seriously as more male-skewing roles. We can't let that happen here. Stopping bias from further infiltrating our companies and the wider world is about as serious as it gets.

Source: https://fortune.com/2021/11/10/ai-bias-research-women-tech-men/
