
What to do when AI behaves badly

Artificial Intelligence (AI) is fast developing into a ubiquitous technology, with applications across all aspects of business and society.

Yet as the use of AI becomes more prevalent, the number of cases where its application is implicated in ethical scandals rises.

A prominent example is the 2018 Cambridge Analytica scandal that plunged Facebook into crisis. But behind the Facebook scandal lies an increasingly common phenomenon: as more companies adopt AI to increase the efficiency and effectiveness of their products and services, they expose themselves to new and potentially damaging controversies associated with its use. When AI systems violate social norms and values, organisations are at great risk: a single event can cause lasting damage to their reputation.

To fully realise the promise of AI, it is essential to better understand the ethical problems of AI and to prevent them from happening. This is where our research comes in. We aimed to answer two questions: First, what are the ethical problems associated with AI? And second, how can we prevent or mitigate them?

Identifying AI ethical failure: privacy, bias, explainability

In our research, we collected and analysed 106 cases involving AI controversy; we identified the root causes of stakeholder concerns and the reputational issues that arose. We then reviewed the organisational response strategies with a view to setting out three steps on how organisations should respond to an AI failure in order to safeguard their reputation.

1) The most common reputational impact of AI ethical failure derives from intrusion of privacy, which accounts for half of our cases. There are two related, yet distinct, failures embedded here: consent to use the data at all, and consent to use the data for the intended purpose. For example, DeepMind, the AI company acquired by Google, accessed data from 1.6 million patients in a London hospital trust to develop its healthcare app, Streams. However, neither the trust nor DeepMind had explicitly told those patients that their information would be used to develop the app.

2) The second most common reputational impact of AI ethical failure is algorithmic bias, which accounts for 30% of our cases. It refers to predictions that systematically disadvantage (or even exclude) one group, based on personal identifiers such as race, gender, sexual orientation, age or socio-economic background. Biased AI predictions can become a significant threat to fairness in society, especially when attached to institutional decision-making. For example, the Apple Card launched in 2019 offered larger credit lines to men than to women, with – in one reported case – a male tech entrepreneur being given a credit limit twenty times that of his wife despite her having the higher credit score.
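Bias of this kind can be surfaced with a simple group-level check on a model's decisions. The sketch below is illustrative only: the data, group labels and the 0.8 threshold (the "four-fifths rule" convention used in US employment-selection guidance) are assumptions, not part of our research.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# Data, group labels, and the 0.8 threshold are illustrative assumptions.

def disparate_impact(decisions, groups, favourable=1):
    """Ratio of favourable-outcome rates: lowest group rate / highest group rate."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == favourable) / len(outcomes)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical credit decisions (1 = approved) for two groups.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

ratio = disparate_impact(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f}")  # a ratio below 0.8 flags potential bias
```

A check like this only detects a disparity; deciding whether the disparity is justified still requires human judgement about the decision context.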

3) The third reputational impact of AI failure arises from the problem of explainability, which accounts for 14% of our cases. Here AI is often described as a ‘black box’: people are not able to explain the decision that the AI algorithm has reached. The criticisms – or concerns – stem from the fact that people are usually only informed of the final decisions made by AI, whether that be loan grants, university admissions or insurance prices, but have no idea how or why those decisions were made. Key examples include embedding AI in medical image analysis, as well as using AI to guide autonomous vehicles. The ability to understand the decisions that these AI systems make is under increasing scrutiny, especially when ethical trade-offs are involved.
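One common family of remedies is model-agnostic explanation, which probes a black-box model from the outside rather than opening it up. The sketch below is a hedged illustration (the scoring model, feature names and inputs are all hypothetical): it estimates a crude per-feature sensitivity by nudging one input at a time and observing how the output moves.

```python
# Minimal sketch of probing a black-box model's sensitivity to each input.
# The scoring function below is a hypothetical stand-in for any opaque model.

def black_box_score(income, debt, age):
    # Hypothetical opaque credit-scoring model.
    return 0.6 * income - 0.3 * debt + 0.1 * age

def sensitivities(model, inputs, delta=1.0):
    """Change in output when each input is nudged by `delta`, others held fixed."""
    base = model(*inputs)
    names = ["income", "debt", "age"]
    result = {}
    for i, name in enumerate(names):
        perturbed = list(inputs)
        perturbed[i] += delta
        result[name] = model(*perturbed) - base
    return result

print(sensitivities(black_box_score, (50.0, 20.0, 40.0)))
# income nudges the score up most; debt pulls it down
```

Real-world tools build on the same idea with more rigour (for example, permutation importance over a whole dataset), but even this simple probe makes the point: an explanation can be produced without access to the model's internals.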

Looking across all 106 AI failure cases, the common theme is the integrity of the data used by the AI system. AI systems work best when they have access to lots of data. Organisations therefore face a significant temptation to acquire and use all the data they can access, irrespective of whether users have consented (‘data creep’), or to use consented data for purposes beyond those customers explicitly agreed to (‘scope creep’). In either case, the firm violates the privacy rights of the customer, by using data it was never given consent to use, or by using it for a purpose the consent did not cover.
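The distinction between the two failure modes can be made concrete as a data-governance gate: before any use, verify both that consent exists and that it covers the purpose at hand. A minimal sketch follows; the consent-record structure, subject identifiers and purposes are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a consent gate distinguishing 'data creep' from 'scope creep'.
# The record structure, identifiers, and purposes are illustrative assumptions.

consent_records = {
    "patient_42": {"direct_care"},  # purposes this subject has consented to
    # "patient_77" has given no consent at all
}

def check_use(subject, purpose):
    purposes = consent_records.get(subject)
    if purposes is None:
        return "data creep: no consent to use this data"
    if purpose not in purposes:
        return "scope creep: consent does not cover this purpose"
    return "ok"

print(check_use("patient_42", "direct_care"))      # ok
print(check_use("patient_42", "app_development"))  # scope creep
print(check_use("patient_77", "direct_care"))      # data creep
```

Running a check like this at every point of data use would have flagged the Streams case above as scope creep: the data existed legitimately for care, but not for app development.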

Defining the solution for AI ethical failure: Introducing capAI

Having identified and quantified AI ethical failure, we then developed an ethics-based audit procedure intended to help avoid these failures in the future. We called it capAI (conformity assessment procedure for AI systems) as a response to the draft EU Artificial Intelligence Act (AIA) that explicitly sets out a conformity assessment mandate for AI systems.

AI in its many varieties is meant to benefit humanity and the environment. It is an extremely powerful technology, but it can be risky.

CapAI is a governance tool that ensures and demonstrates that the development and operation of an AI system are trustworthy. We define trustworthy as being legally compliant, technically robust and ethically sound.

Specifically, capAI adopts a process view of AI systems by defining and reviewing current practices across the five stages of the AI life cycle: design, development, evaluation, operation, and retirement. CapAI enables technology providers and users to develop an ethical assessment at each stage of the AI life cycle and to check adherence to the core requirements for AI systems set out in the AIA.

Through this process, capAI can produce three types of assessment outputs.

1) An internal review protocol (IRP), which provides organisations with a tool for quality assurance and risk management. The IRP fulfils the compliance requirements for a conformity assessment and technical documentation under the AIA. It follows the development stages of the AI system’s lifecycle, and assesses the organisation’s awareness, performance and resources in place to prevent, respond to and rectify potential failures. The IRP is designed to act as a document with restricted access. However, like accounting data, it may be disclosed in a legal context to support business-to-business contractual arrangements or as evidence when responding to legal challenges related to the AI system audited.

2) A summary datasheet (SDS) to be submitted to the EU’s future public database on high-risk AI systems in operation. The SDS is a high-level summary of the AI system’s purpose, functionality and performance that fulfils the public registration requirements, as stated in the AIA.

3) An external scorecard (ESC), which can optionally be made available to customers and other stakeholders of the AI system. The ESC is generated from the IRP and summarises relevant information about the AI system along four key dimensions: purpose, values, data and governance. It is a public reference document that should be made available to all counterparties concerned.

We hope that capAI will become a standard process for all AI systems and help prevent the kinds of ethical problems they have caused.

Together, the internal review protocol and external scorecard provide a comprehensive audit that allows organisations to demonstrate to all stakeholders that their AI system conforms with the EU’s Artificial Intelligence Act.

In the US, an amendment to the 2019 Algorithmic Accountability Act was proposed in February this year. This revised legislation would mandate an impact assessment, and capAI can also be used to address that future requirement.

This article is based on two pieces of research from Saïd Business School:

The Reputational Risks of AI by Matthias Holweg, Rupert Younger and Yuni Wen. This research project was supported by Oxford University’s Centre for Corporate Reputation.

capAI – A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act by Luciano Floridi, Matthias Holweg, Mariarosaria Taddeo, Javier Amaya Silva, Jakob Mökander and Yuni Wen. This research project was conducted jointly by Oxford University's Saïd Business School and the Oxford Internet Institute.

Matthias Holweg is Director of Saïd Business School’s Oxford Artificial Intelligence Programme, and Luciano Floridi, Professor of Philosophy and Ethics of Information at the Oxford Internet Institute, is a contributor to the programme.