Hello Robot, Why Overly-Realistic AI Is Bad

Am I talking to a software robot or a human? It’s a question many of us will have posed at some point when interacting with a website chatbot and its chirpy ‘Hi! Can I help?’ pop-up message and discussion box.

Some of us deliberately try to second-guess these bots to work out whether we are talking to a machine or a person. We do this because we hope that this knowledge will let us assess more accurately how much help we are likely to get - and so, perhaps, gauge how much effort we should put into explaining our customer issues or requests.

Turing & overly-realistic AI

Right now, it is not necessarily that difficult to know whether you are speaking to an Artificial Intelligence (AI) engine. The Turing test was, of course, devised to assess the point at which people can no longer tell the difference between a human and a machine. The AI software robot ‘bots’ said to have passed the Turing test so far have been given quirky personality traits to mask the difficulty that even the best AI engines have with free-flowing human conversation.

So are these quirky, human-like, overly-realistic AI bots a good thing or a bad thing?

Amanda Curry, a PhD student at the National Robotarium at the UK’s Heriot-Watt University (now a postdoctoral researcher based in Italy), explains that some AIs are now set up with personality traits that try to build a relationship with the user. Curry warns that this can lead to privacy issues: people willingly give away more information than they intend, unaware of what a company might do with that information.

“This has inherent risks when people have systems in their home interacting with the wider family, including children. When people are more relaxed, they tend to reveal more personal information which can be a risk associated with what we have categorized as an 'overly realistic AI' - because the interaction and language used is more natural and flowing,” said Curry.

AI bots need gender-neutrality

The average user would be forgiven for thinking that AI bots are mostly built around principles of gender neutrality, but in fact many are created with an almost receptionist/secretary-like female persona. Whether a user’s unconscious bias also comes into play here is open to debate, but the National Robotarium team, led by Professor Verena Rieser, confirms that some AI engines have been subjected to abuse and even sexual harassment.

A 2019 UNESCO Artificial Intelligence report demonstrated how today's AIs can 'reflect, reinforce and spread gender bias'. With a whole generation of young people interacting with these AI engines daily, this is a growing area of concern. How an AI should respond to abuse is another challenging area for software application development professionals.

Curry explains that about 5% of the interactions recorded with an experimental chatbot during the National Robotarium’s entry to the Amazon Alexa Prize challenge could be classified as abusive.

Although that figure sounds pretty high in and of itself, the proportion is even higher for AI bots like Kuki, where around 30% of interactions are abusive... so the concern here is that if these behaviors become regular and normalized at the AI bot level, they could spill over into online interactions between humans.

AI has carbon footprint implications

“The environmental impacts of AIs are rarely talked about - when an individual asks an AI assistant to switch on the light (as you would do with another member of your household), that command must be sent all the way to the company to be processed, involving potentially massive cloud computing inputs/outputs and connections. When that command does get sent back to your AI assistant, it uses a lot of energy and produces a lot of carbon,” highlighted Curry.

The more realistic our AIs become, the more likely it is that we will be asking them to do more - so, asks Curry, does that mean we will be increasing our carbon footprint unintentionally as a result?

Given all these negative forces then, what does Curry think makes a good AI chatbot?

“The most useful chatbots are perhaps not as sophisticated as some of the overly-realistic variety on the face of it; instead, they are more functional and transactional. The ones that are slightly more realistic are not necessarily as sophisticated behind the scenes. Often there is no real Machine Learning (ML) going on behind them; it is more rule-based and actually not half as smart as people might think,” said Curry.
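To make that distinction concrete, here is a minimal sketch of the kind of rule-based bot Curry describes - purely illustrative, with made-up rules rather than code from any real system. Incoming messages are matched against an ordered list of patterns and answered with canned responses; no machine learning is involved.

```python
import re

# Ordered (pattern, response) rules: the first match wins.
# All rules and wording here are illustrative assumptions,
# not taken from any real production chatbot.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hi! Can I help?"),
    (re.compile(r"\b(refund|return)\b", re.I),
     "I can help with returns. What is your order number?"),
    (re.compile(r"\b(human|agent|person)\b", re.I),
     "Let me connect you to a human agent."),
]

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("Hello there"))                  # greeting rule fires
    print(reply("I want a refund"))              # returns rule fires
    print(reply("What's your favorite color?"))  # no rule matches: fallback
```

A handful of patterns and a fallback line can feel surprisingly conversational in a narrow customer-service setting, which is exactly why such bots are often mistaken for something smarter.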

At the National Robotarium, the team’s goal is not to create something that is indistinguishable from a human, but rather something that is very natural and easy to interact with. The academics say they have noticed that when people know that it's a system they are talking to, like an Amazon Alexa/Echo, they don't try to have the same conversations that they would with a human.

“Once people know it is a chatbot, I think they are less interested in its life and what it likes to do for a living, what its favorite color might be etc. They see it more as a spoken dialogue interface for something like a search engine i.e. another ‘gate’ to the Internet, so to speak,” said Curry.

AI times are a-changing

Curry believes that humans don’t always want systems that are completely indistinguishable from humans. When we know we’re talking to a software robot, we don’t want idle chit-chat and thoughts on the weather.

“Initially our bot was designed to replicate a conversation in a bar, maybe talking about politics or movies. But then we felt people wanted to consume information, so we made that design decision. So, if they were talking about their favorite actor, they wanted the fun facts about that actor and their movies or other performances, rather than opinions and wider thoughts,” concluded Curry.

In April 2021, the European Union (EU) proposed rules that would require companies to disclose when an individual is speaking to an AI software bot - and similar regulations are already in place in California for bots.

The proposed EU AI regulation states that people should be able to trust what AI has to offer. According to the EU, proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide. The plan outlines the policy changes and investment needed to strengthen the development of human-centric, sustainable, secure, inclusive and trustworthy AI.
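As a purely hypothetical sketch of what complying with such a disclosure rule could look like in code (the class name, wording and structure here are assumptions, not anything specified by the EU proposal or the California law), a bot could simply front-load a declaration before its first reply:

```python
# Hypothetical sketch only: the disclosure wording and class design
# are illustrative assumptions, not legal or regulatory text.
DISCLOSURE = "Please note: you are chatting with an automated assistant, not a human."

class DisclosingBot:
    """Wraps a bot's reply logic so its first reply always discloses bot status."""

    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, message: str) -> str:
        answer = self._answer(message)
        if not self.disclosed:
            self.disclosed = True
            return f"{DISCLOSURE}\n{answer}"
        return answer

    def _answer(self, message: str) -> str:
        # Placeholder for the bot's real response logic (rules, ML, etc.).
        return "How can I help you today?"

if __name__ == "__main__":
    bot = DisclosingBot()
    print(bot.reply("Hi"))      # first reply includes the disclosure
    print(bot.reply("Thanks"))  # later replies omit it
```

The point of the pattern is that disclosure happens once, up front, regardless of what the underlying response logic does afterwards.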

It turns out that AI’s role with robots may be to make them more robot-like and not more human-like after all. Yes, okay, we still like hardware robots to have fingers, arms and probably a couple of blinking eyeballs, but for the most part, don’t get too human please - okay computer?
