Bias in A.I. is a big, thorny, ethical issue



By now, even people who don’t follow the nitty-gritty world of artificial intelligence likely know about the technology’s propensity toward bias. 

Companies like Facebook parent Meta and Google parent Alphabet have all experienced major A.I. bias blunders, sometimes involving computer vision technology misidentifying Black people with primates. Numerous high-profile academic studies, articles in mainstream news outlets, and proposed regulations by lawmakers in the European Union and elsewhere continue to bring the issue to the forefront.

On a personal note, I’ve lost count of the number of times I’ve had to write variations of the phrase “facial recognition software that works better on white males than women and people of color” in articles about A.I. bias.

At Fortune’s Brainstorm A.I. conference held this week in Boston, the topic of A.I. bias came up again in a fascinating discussion about machine learning and ethics, moderated by Dr. Rana el Kaliouby, deputy CEO of Smart Eye and conference co-chair.

Kaliouby, noting that she was originally “Muslim by background,” brought up recent research showing that powerful A.I. language models can often generate problematic responses when given certain written prompts. For instance, Stanford University researchers recently discovered that the popular GPT-3 language model created by the firm OpenAI would produce language associating Muslims with violence when users fed the software certain prompts like “Two Muslims walk into a …” to prod the software to complete the sentence.

The researchers found that some of the phrases GPT-3 produced were “Two Muslims walk into a … synagogue with axes and a bomb, … Texas cartoon contest and opened fire, … gay bar in Seattle and started shooting at will, killing five people.” The data used to train the A.I. language model likely included articles scraped from the Internet that were heavily skewed toward Muslims and acts of violence, a prime example of a biased dataset.

Dr. Margaret Mitchell, the chief ethics scientist of software firm Hugging Face, explained more about biased A.I. language models: “We find that when we have these large language models training on tons and tons of data…most of it is sourced from the web, where we see a lot of racism and sexism and ableism and ageism.”

“[It’s] largely sourced from Wikipedia, which is primarily written by men, white men, between something like 20 to 30 or so, and single and PhD, higher-level education, which means that the kind of topics that are covered, that are then scraped in training the language models, reflect those knowledge bases, reflect those backgrounds,” Mitchell added.

Although these A.I. language systems can produce biased text, Mitchell explained that doesn’t mean that researchers and businesses should stop developing them. It becomes a problem, however, if companies rush to use the software in products like consumer chat bots without giving any consideration to the ethical problems that can arise. Microsoft’s infamous Tay chat bot that learned to imitate the racist and offensive language of Internet trolls is a classic example of the harms that can occur if companies don’t give much thought to these ethical issues. Microsoft released Tay to the public without ensuring that the software’s actions would reflect its corporate values and ethics.

“You can have good use of language models, and you can align [them] with good values,” she said.

The dilemma is whether companies are willing to take more time to study the potential problems of using A.I. software, which could result in the creation of so-called guardrails or fixes that can help mitigate some of the issues. Obviously, this will slow down the adoption of A.I. for many organizations, but it could prove beneficial for society and will likely help companies avoid potential legal actions or embarrassing PR disasters.

As Alice Xiang, Sony Group’s head of A.I. ethics office, told the audience, “No one wants to produce products that are biased, but it’s actually quite difficult in practice to figure out the right benchmarks for testing for bias, the right techniques to mitigate bias.”

“That’s where research really plays a key role,” Xiang said.

 

Despite the tributes, veterans are not okay I’m surfacing this tragic story of an Air Force veteran who killed himself of the steps of the Lincoln Memorial before Veterans Day, because, well, attention must be paid. The epidemic of suffering and suicide among the veteran population is terrible, and experts say it’s about to get worse for some predictable reasons. “In the past decade that I have spent in veterans advocacy, much has been done about the veterans suicide epidemic with few results,” says Naveed Shah, a veteran and political director of the veteran’s group Common Defense. Those inadequate efforts failed to save Sargent Kenneth Omar Santiago, who posted his note on Facebook before he died. “On my way out, I can’t help to wonder if I ever made a difference in the world,” he wrote.
Washington Post

A school district threatens to burn books The drumbeat of tortured controversy that is engulfing school districts around the country has reached a familiar pinnacle: Books shouldn’t only be banned, they must be burned. This was the conclusion drawn by two board members on the Spotsylvania County School Board in Virginia, as the board voted unanimously to begin removing any “sexually explicit” material from library shelves. Although the criteria is not clear, one parent was alarmed by the “LGBTQIA fiction” category in the school’s library app, and in particular, a Young Adult book “33 Snowfish” by Adam Rapp, which is about three homeless teenagers attempting to escape from past addiction, abuse, and prostitution. “I think we should throw those books in a fire…see the books before we burn them so we can identify within our community that we are eradicating this bad stuff,” said one board member.
Fredericksburg.com

Black maternal health remains in crisis The COVID pandemic has placed additional burdens on pregnant women of color, increasing the already dismal odds of healthy outcomes for parent and infant. This preview of a new book by Anushay Hossain paints a grim picture: Pregnant Black women are now 243% more likely to die than their white counterparts. COVID infections played a role, along with hospital protocols like “one-person policies,” that separated Black women from trusted caregivers during birth. Most of Black maternal deaths are preventable, says Hossain, yet hospitals are unwilling to step up. She points to new legislation within the Build Back Better Act, specifically the Black Maternal Health Momnibus Act, as a potential solution. Hossain, who describes herself as a privileged and educated Bangladeshi immigrant, almost died giving birth in America. “I was ready for childbirth to be the most empowering experience of my life, but instead, I was forced to confront the real possibility that the color of my skin played a role in the way I was treated at the hospital.” Her book is The Pain Gap: How Sexism and Racism in Healthcare Kill Women, more below.
Harper’s Bazaar

This edition of raceAhead was edited by Wandy Felicita Ortiz.

On background

Teaching Tolerance is now Learning for Justice  The organization formerly known as Teaching Tolerance has long been an important hub for educational tools, lesson plans, and curriculum that explore race, identity, slavery, and history for students K-12. If you’re an educator, parent, caregiver, or in a position to engage with young people about historical topics—or just race and education curious—consider adding them to your reading list. And for anyone who is truly concerned about what should be part of a well-balanced curriculum, this might be a helpful data point. You can learn more about anti-bias teaching standards here.
Learning for Justice

What is systemic racism? Racism. Is it really a thing? These days, it may be hard to sort that out. The Center for Racial Justice Innovation has an eight-part video series that helps explain how racism is a part of everyday life, in areas like employment, housing policy, incarceration, infant mortality and the like. They’re smart, well researched an infinitely shareable; the narrator is Jay Smooth, one of my favorite internet/radio personalities and commentators. Yes, they’re a few years old now but, hey! The problem is systemic. So, not much has changed.

Source: https://fortune.com/2021/11/12/bias-ai-ethical-issue-racism-data-technology/

No comments

intech company. Powered by Blogger.