The Relationship Between AI And Human Rights

AI exists to make our lives simpler, our actions faster, our knowledge more usable and our decision-making more assured. On these fronts, AI has done a fine job and continues to do so in both personal and professional contexts. Despite this, you will come across countless concerns about the 'ethical issues' posed by the technology. Pay closer attention, and you will realize that most of these issues stem from human negligence or ignorance.

It goes without saying that the relationship between AI and human rights can only be as good as we humans enable it to be. AI-powered systems act on the basis of how competently they have been built and trained. Executing those two tasks ethically can go a long way toward ensuring that AI tools and applications do not violate human rights.

Using AI for humanitarian action requires weighing several different factors during the research and development phases of AI algorithms. Here, we will see how the right policies and practices, coupled with the right personnel, can resolve the so-called 'problems' AI poses to human rights:

AI and the Right to Privacy

At the risk of stating the obvious, data is the most vital cog in the AI machine. The availability of data is key to an AI model's learning and adaptability in the long run. The quality and quantity of information an organization gathers to train its AI algorithms are paramount, and often the main differentiators between market leaders and laggards. As we know, all that data brings in truckloads of money for such businesses. Data is one of the biggest instruments of power today, and organizations need to know how to monetize it to stay relevant and successful.

On the flip side, the biggest organizations in the world are regularly accused of sharing their users' data with third-party players for various reasons. Some of them even end up paying massive penalties for doing so. It is natural for users to believe that their right to privacy has been taken away from them in today's day and age.

So, organizations need to implement measures that keep their AI-powered systems from violating their users' privacy.

a)   Facilitating Data Anonymization

Data anonymization can resolve many of your users' privacy concerns. Additionally, since most of the world's data privacy laws regulate how businesses treat the personally identifiable information of their users, anonymization can enable your organization to adhere to those laws too.

Anonymization comes with its own set of issues, though. For starters, permanently anonymized data won't be of much use to organizations. AI-powered businesses measure the quality of gathered data by how accurate and information-rich it is; in other words, the richer a dataset, the more identifying detail it tends to contain. So, businesses will need to decide when and how to anonymize user data.
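As a concrete illustration, here is a minimal sketch of the kind of preprocessing involved, assuming a simple tabular record. The field names, salt handling and banding scheme are all hypothetical, and note that a salted hash is strictly pseudonymization rather than irreversible anonymization:

```python
import hashlib

# Hypothetical salt; in practice this would be a managed secret.
SALT = "replace-with-a-secret-salt"

def anonymize(record: dict) -> dict:
    """Pseudonymize direct identifiers and generalize quasi-identifiers."""
    return {
        # A one-way hash replaces the raw user ID (pseudonymization).
        "user_key": hashlib.sha256((SALT + record["user_id"]).encode()).hexdigest(),
        # Exact age is generalized into a coarse band (a step toward k-anonymity).
        "age_band": f"{(record['age'] // 10) * 10}s",
        # Keep only the coarsest part of the address.
        "region": record["address"].split(",")[-1].strip(),
        # Non-identifying features pass through for model training.
        "purchases": record["purchases"],
    }

print(anonymize({
    "user_id": "u-1001",
    "age": 34,
    "address": "12 High St, London",
    "purchases": 7,
}))
```

The tension the paragraph above describes shows up directly in such code: every generalization step makes the record safer but also less informative for the model.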

More worryingly, potentially malicious actors may find ways around anonymization barricades, as The New York Times has reported.

b)  Building Systems That Can Keep User Data Secure

The hardware and software used to store and process data need to be built with data privacy and regulatory compliance as priorities. Most compliance issues arise when the staff entrusted with privacy protection and data security do not understand the technology involved in building secure systems, so experts must be present during the building process. It is advisable for organizations to involve their privacy compliance lawyers during the research and development phases of their AI models.

Experts involved in such processes can be entrusted with designing data privacy safeguards and running periodic compliance audits.
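As one piece of such a system, here is a minimal sketch of encrypting user records at rest with the third-party Python `cryptography` package. Key handling is deliberately simplified; in a real system the key would come from a secrets manager rather than being generated inline:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Simplification: in production, load this key from a secrets manager.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt the record before it ever touches disk; store only the token.
record = b'{"user_id": "u-1001", "email": "jane@example.com"}'
token = cipher.encrypt(record)

# Authorized reads decrypt on demand; a tampered token raises an error.
assert cipher.decrypt(token) == record
```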

c)   Providing Users the Right to Be Forgotten

Data protection regulations such as the EU's GDPR and California's CCPA make it mandatory for organizations to offer their users a 'refusal to consent' option, meaning that users can choose what information to withhold from service providers, which boosts user privacy.
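A sketch of what honoring that choice might look like in code, with hypothetical field names: anything a user has not consented to share is dropped before it reaches a downstream AI pipeline.

```python
def apply_consent(record: dict, consents: dict) -> dict:
    """Keep only the fields the user has explicitly opted in to sharing."""
    return {field: value for field, value in record.items()
            if consents.get(field, False)}  # default: withhold

profile = {"email": "jane@example.com", "location": "London", "age": 34}
consents = {"email": True, "location": False, "age": True}
print(apply_consent(profile, consents))  # {'email': 'jane@example.com', 'age': 34}
```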

One of the most significant privacy-related rights is the right to be forgotten. Under this rule, organizations must erase the records of customers who no longer use their services. In effect, this prevents businesses from using old customer data for other, potentially dangerous purposes.

Implementing this in AI models can be tricky, as there are few concrete ways to make algorithms 'unlearn' something. To counter this, the ability to delete personal information must be designed into systems from the earliest design phase.
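One hedged sketch of what 'designing deletion in' could mean: key every stored sample by user, so a single erasure request removes all of that user's data and flags any model trained on it for retraining. The class and method names here are illustrative.

```python
class TrainingStore:
    """Toy training-data store with erasure designed in from the start."""

    def __init__(self) -> None:
        self._records: dict[str, list] = {}  # user_id -> that user's samples
        self.needs_retraining = False

    def add(self, user_id: str, sample: dict) -> None:
        self._records.setdefault(user_id, []).append(sample)

    def erase(self, user_id: str) -> None:
        """Honor a right-to-be-forgotten request."""
        if self._records.pop(user_id, None) is not None:
            # A model cannot simply 'unlearn' one user, so any model
            # trained on the removed data must be rebuilt without it.
            self.needs_retraining = True

store = TrainingStore()
store.add("u-1001", {"clicks": 12})
store.erase("u-1001")
print(store.needs_retraining)  # True
```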

The employment of AI for humanitarian action, at least when it comes to the right to privacy, depends heavily on global regulations and on how far organizations are willing to go to make their AI systems privacy-oriented.

AI and the Right to Equality

Biased AI is not exactly news at this point. The scarcity of data, or, quite simply, the lack of diversity in datasets, is among the primary reasons discriminatory AI gets created. To overcome such problems and uphold the right to equality in AI applications, the following measures can be taken:

a)   Fostering Inclusivity in Model Training

In fields such as healthcare and banking, there have been numerous instances of people of different races or ethnicities being discriminated against by automated AI systems. As we know, such discrimination results from AI datasets lacking diversity and inclusivity. To resolve this problem, the test subjects chosen for facial recognition AI tests must come from every part of society, spanning all ages, genders, ethnicities and races, to widen the scope of AI models as far as possible. Algorithms created this way, by involving a diverse range of people in the development process, will rarely, if ever, produce racist or discriminatory predictions.
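One way to operationalize this is to audit a training set's demographic balance before any model is trained. The sketch below flags underrepresented groups; the attribute name, group labels and 10% threshold are illustrative.

```python
from collections import Counter

def representation_report(samples: list, attribute: str,
                          min_share: float = 0.10) -> dict:
    """Flag groups that fall below a minimum share of the dataset."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    return {group: {"share": round(n / total, 3),
                    "underrepresented": n / total < min_share}
            for group, n in counts.items()}

# Illustrative, deliberately skewed dataset of face-image metadata.
faces = ([{"ethnicity": "group_a"}] * 80
         + [{"ethnicity": "group_b"}] * 15
         + [{"ethnicity": "group_c"}] * 5)
print(representation_report(faces, "ethnicity"))
```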

b)  Employing a Multi-Disciplinary Approach

This approach brings together individuals from various occupational backgrounds to discuss their take on AI systems. According to Antony Cook, Microsoft's associate general counsel for Corporate, External and Legal Affairs in Asia, such an approach lets AI scientists and developers weigh everybody's viewpoints and create the most inclusive, unbiased AI applications they can.

The debate regarding AI ethics can include individuals from tech firms, the public sector and NGOs too.

As we know, equality is a quintessential human right. Removing one-dimensional perspectives from AI training and development can reduce the bias in AI algorithms and eventually allow AI to be included in humanitarian action across several fields.

AI and the Right to Information

The right to information is considered a human right in many parts of the world. However, it can be violated if individuals are constantly fed lies that shroud the truth. AI can be used to detect and eliminate such misinformation from the internet. Here's how:

a)   Using NLP to Detect and Eliminate Misinformation

Natural language processing (NLP) is a technology for processing and generating text. News organizations and governments can use NLP to perform widespread searches for information sources online, and combine it with other technologies to identify the primary source of any claim and assess its validity. By doing so, NLP can help organizations identify and flag not just misinformation but also hate speech, political bias, clickbait and fraudulent claims.
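As a toy illustration of the text side of this, the sketch below trains a TF-IDF plus logistic-regression classifier with scikit-learn to flag clickbait-style claims. The four training examples are fabricated for the demo; a production system would rely on large labeled corpora and source cross-checking rather than a handful of phrases.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny fabricated training set: 1 = suspect text, 0 = legitimate text.
texts = [
    "Miracle cure doctors don't want you to know about",
    "Central bank raises interest rates by 0.25 points",
    "Shocking secret the government is hiding from you",
    "Parliament passes the annual budget bill",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["One weird trick they don't want you to see"]))
```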

However, NLP is a language-based tool that can only handle misinformation in text form. What about misinformation in video form, like deepfakes?

b)  Using Computer Vision and Machine Learning to Bust Deepfakes

Deepfakes feel startlingly real and are among the main sources of misinformation that need to be dealt with effectively. Models built with computer vision and machine learning can be trained to spot a deepfake by closely monitoring the following aspects of a video:

- Head and Face Gestures

- Unnatural Eye Movement

- Pixel Artifacts

AI tools can be trained to detect irregularities or peculiarities in these three aspects, and more, to deduce whether a given video is a deepfake.
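As a rough sketch of just the third signal, pixel artifacts, the code below scores frames by the energy left over after a blur, a crude stand-in for the learned detectors real systems use. The threshold and the box-blur trick are purely illustrative.

```python
import numpy as np

def artifact_score(frame: np.ndarray) -> float:
    """Mean high-frequency residual: frame minus a 3x3 box-blurred copy."""
    blurred = sum(np.roll(np.roll(frame, dx, axis=0), dy, axis=1)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0
    return float(np.abs(frame - blurred).mean())

def looks_synthetic(frames: list, threshold: float = 12.0) -> bool:
    """Flag a clip whose average residual exceeds an illustrative threshold."""
    return float(np.mean([artifact_score(f) for f in frames])) > threshold

# Stand-in 'video': eight 64x64 grayscale frames of random noise.
frames = [np.random.rand(64, 64) * 255 for _ in range(8)]
print(looks_synthetic(frames))
```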

AI can become man's best non-canine friend. Before that can happen, though, the developers, analysts and scientists who create AI systems will need to ensure that those systems never violate human rights. If this is achieved, AI's contribution to humanitarian action can be immense in the future.


Source: https://www.forbes.com/sites/naveenjoshi/2021/10/20/the-relationship-between-ai-and-human-rights/?sh=2225fc90e86c
