Ethics By Design: Steps To Prepare For AI Rules Changes

The EU’s AI Act promises to mitigate the harmful use of machine intelligence but will require deeper education and evangelization to ensure more transparent and ethical use of AI, notes Ursula Morgenstern, Cognizant’s President of Global Growth Markets.

With new regulations proposed, AI ethics — like data privacy — has become a top priority for companies. As heads of state from European Union member nations begin to take up discussion of the EU’s proposed Artificial Intelligence (AI) Act, companies are exploring what it will take to adhere to one of the first major policy initiatives focused on harmful AI.

The answer is clear: Compliance will require businesses to educate and evangelize across their organizations. While it will likely take two years for these new rules to come into effect, it's not too soon to begin preparing.

Ethics by design is a different way of thinking for companies. In addition to considering profit-driven outcomes, companies will now need to assess the harm and impact of their practices and provide oversight to manage AI’s risks.

The AI Act sets the stage for change. Proposed in April 2021, the legislation aims to mitigate the harmful use of AI, facilitate its transparent and ethical use, and keep machine intelligence under human control. The regulations would outlaw four AI practices that cause physical or psychological harm: social scoring, dark-pattern AI, manipulation and real-time biometric identification systems.

Equally important to companies are the act's proposed penalties. Fines for noncompliance are significantly higher than those under the EU's General Data Protection Regulation (GDPR): up to 30 million euros or 6% of annual revenue, whichever is higher. By contrast, the GDPR caps fines at 20 million euros or 4% of revenue.
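Because both regimes peg the maximum fine to whichever is higher, the flat cap or the revenue percentage, exposure scales with company size. A minimal sketch of that arithmetic in Python (the function and the revenue figure are illustrative, not drawn from either regulation's text):

    def max_fine(flat_cap_eur, pct_of_revenue, annual_revenue_eur):
        """Both GDPR and the proposed AI Act cap fines at whichever is
        higher: a flat amount or a share of annual revenue."""
        return max(flat_cap_eur, pct_of_revenue * annual_revenue_eur)

    # Illustrative only: a company with 2 billion euros in annual revenue.
    revenue = 2_000_000_000
    ai_act_cap = max_fine(30_000_000, 0.06, revenue)  # 120 million euros
    gdpr_cap = max_fine(20_000_000, 0.04, revenue)    # 80 million euros
    print(f"AI Act: {ai_act_cap:,.0f} EUR vs. GDPR: {gdpr_cap:,.0f} EUR")

Note that both percentage thresholds overtake the flat caps at exactly 500 million euros of revenue, so for large companies it is the jump from 4% to 6%, not the flat caps, that drives the increased exposure.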

Learning the Lessons of Data Privacy’s Rise to Prominence

Beyond the penalties, we see strong parallels between the proposed AI Act and GDPR. When it took effect in 2018, GDPR elevated privacy to priority status for organizations, and the AI Act will trigger a similar lift for AI ethics. The good news is that many of the processes companies implemented for GDPR compliance can serve as the foundation for adopting responsible AI, such as the privacy impact assessments used when sensitive data is processed.

But there are key differences between data privacy and AI ethics, and they make navigating this new territory more uncertain. For one thing, data privacy is about minimizing the data needed for an intended purpose; AI thrives on mining as much of it as possible.

For another, while there is general agreement on which personal details need to be protected, AI ethics involves complex issues such as bias and fairness that are more nuanced and subjective. Most chatbots won't fall into the high-risk category because they are narrowly defined and task-oriented (think help desks), but the exceptions show the complexities inherent in AI ethics. For example, a chatbot that helps consumers find social services could be classified as high risk because of its potential to affect access and outcomes, especially for historically oppressed populations. It comes down to the likely harm to the individual.

Often overlooked in discussions of the AI Act is that the proposed regulations apply not just to personal details but also to the datasets, models and algorithms used in business decisions. The coverage of nonpersonal data has huge ramifications for businesses. For example, the AI Act's restrictions, if enacted, would apply to the data banks used to assess the risk of commercial loans. The data in these cases typically relates to the type of business, its location and other details such as local crime rates. No personal data is involved, yet because the risk assessment is an AI-driven business decision, it falls under the act's effort to avoid the kind of unintended AI bias and real-world harm seen in home mortgage lending.
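To see how nonpersonal inputs can still drive a regulated decision, consider a deliberately simplified sketch. The feature names, weights and scoring logic below are hypothetical, invented for illustration rather than taken from any real lender, but every input is nonpersonal and the output is exactly the kind of AI-driven business decision the act covers:

    from dataclasses import dataclass

    @dataclass
    class LoanApplication:
        # Every field is nonpersonal: nothing identifies an individual.
        business_type: str       # e.g., "restaurant" or "retail"
        postcode: str            # location of the business
        local_crime_rate: float  # incidents per 1,000 residents

    # Hypothetical weights standing in for a trained model's parameters.
    SECTOR_RISK = {"restaurant": 0.6, "retail": 0.4, "software": 0.2}

    def commercial_loan_risk(app):
        """Toy risk score in [0, 1]. No personal data is processed, yet
        the business decision it feeds would still fall under the act."""
        sector = SECTOR_RISK.get(app.business_type, 0.5)
        # Location-linked features such as crime rate are exactly where
        # unintended bias against certain neighborhoods can creep in.
        location = min(app.local_crime_rate / 50, 1.0)
        return 0.7 * sector + 0.3 * location

    score = commercial_loan_risk(LoanApplication("restaurant", "10115", 12.0))
    print(f"risk score: {score:.2f}")  # 0.49

The bias pathway is visible even in this toy version: postcode and crime-rate features can act as proxies for a neighborhood's demographics, which is why scoping the act to nonpersonal datasets matters.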

Similarly, AI’s use in functional chatbots related to business operation flows, such as directing employees to HR forms, would also be covered by the AI Act. While unlikely to fall into the high-risk category, functional chatbots will require additional justification to demonstrate that the processes aren’t intrusive.

Preparing for the AI Act: Educate and Evangelize

Adhering to the proposed regulations will require change, yet we see significant opportunities for companies. More prescriptive regulations will help companies understand how to implement AI safely and show where the opportunities and ROI lie, which is especially useful in regulated industries such as healthcare, banking and brokerage.

Protecting corporate AI investments will entail changes in culture, hierarchy and governance to ensure the precise risk management the AI Act requires. It’s a balancing act of protecting people and profits. We recommend organizations take the following steps to get started:

  • Expand responsibility for AI. By seating more people at the table for AI initiatives, organizations can deepen their understanding of AI’s human factors before they create models. Responsible AI’s success depends on a host of participants that extends beyond data scientists all the way to the “C” level. To ensure transparency in AI decision-making processes, key contributors should include social scientists, ethnographers, industry and governance experts, and design thinking researchers who understand how people engage with computers. For example, we’re now partnering with a governmental entity to build responsible AI. By starting with a set of foundational principles, the agency’s cross-functional team is drawing on multiple points of view to better understand AI and create improved oversight.
  • Establish and empower new roles. Responsible AI is markedly different from compliance; it's not about spot checks. Creating AI efforts that adhere to the proposed AI Act requires new roles. For example, data ethicists are set to emerge as important influencers within organizations. The job description will include conducting risk assessments and ensuring AI-related regulatory compliance. But more important to the success of the position, and to AI initiatives overall, will be where data ethicists and AI teams sit within the organizational hierarchy. Ensuring they're empowered to make decisions is key. AI ethics is much bigger than just understanding algorithms; buy-in from executive leadership is paramount.
  • Prepare to play in the sandbox. Among the AI Act’s provisions is the establishment of “regulatory sandboxes” similar to ones used in the fintech industry. The idea behind the measure is to create a window in which companies can develop, test and validate innovative AI systems before taking them to market. Companies remain responsible for risks that arise in the development and testing phase. Sandboxes are useful for encouraging innovation in high-risk or challenging situations and tackling thorny problems with AI. With their emphasis on exploring an application’s potential for harm, sandboxes also signal one of the biggest mindset shifts that the AI Act imposes on companies.

Two years might feel like a long way out, but it’s a comparatively short window of time when it comes to reshaping organizations. Although the details of the AI Act will likely continue to evolve before enactment, regulation is coming. To avoid discovering that your models aren’t compliant, the time to act is now. 

Source: https://www.forbes.com
