Explaining AI

What’s new?

The Information Commissioner’s Office (ICO) in the United Kingdom issued its first draft regulatory guidance on the use of AI (artificial intelligence). One part of the guidance advises organizations to “make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.” The guidance applies to decisions that use personal information and have legal or similarly significant effects.

What does it mean?

Many methods in artificial intelligence use large databases to generate a mathematical model that fits the data well. The model is not built up from the knowledge of human experts but rather by finding mathematical patterns in the database. The mathematics can be very complicated because it captures patterns that are not obvious even to the humans who were involved in generating the data. The resulting model can then be used to make a prediction about a case not included in the original database.

For example, artificial intelligence could use a large database of a bank’s past decisions on whether to grant loans to create a mathematical model that would, with great accuracy, duplicate those decisions. The model could then be used by a loan officer who inputs data on a current applicant to generate a recommendation on whether to grant or deny the loan. In most cases the human being (the loan officer, in this case) could still make a decision different from the one recommended by the model, but in the future the decision could be fully automated, with no human intervention.
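
To make that concrete, here is a minimal sketch in Python (using pandas and scikit-learn) of how such a model might be fitted to past loan decisions and then used to score a new applicant. The file name, column names, and applicant figures are invented for illustration; they do not come from the guidance or from any real bank.

    # A minimal, hypothetical sketch: fit a model to past loan decisions,
    # then score a new applicant. All names and values are invented.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    past = pd.read_csv("past_loans.csv")               # one row per past application
    features = ["credit_score", "collateral_value", "income", "years_in_business"]
    X, y = past[features], past["loan_granted"]        # y: 1 = granted, 0 = denied

    model = LogisticRegression(max_iter=1000).fit(X, y)

    applicant = pd.DataFrame([{"credit_score": 640, "collateral_value": 20000,
                               "income": 52000, "years_in_business": 3}])
    print(model.predict_proba(applicant)[0, 1])        # estimated probability of "grant"

Note that the model simply reproduces the statistical patterns in past decisions; nothing in it records why those decisions were made.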

The mathematical models used in such an approach to artificial intelligence are often quite sophisticated and complicated. The result is a model so opaque that it is difficult to explain its predictions in the traditional sense. While a bank might previously have said, “we denied your application for a loan because of your bad credit rating, the lack of collateral, and the poor forecast for growth in your line of business,” with an AI model the bank might only be able to say that the model generated a low score. Some fear that the mathematics may also be capturing biases in past decisions, for example denying loans to racial minorities that would be granted to other applicants.
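
Continuing the hypothetical sketch above, the contrast can be shown in a few lines: a simple linear model can be read off, very roughly, as per-factor contributions, while a more complex model hands back only a score. The per-factor reading below is a crude stand-in for traditional “reasons,” not a method prescribed by the guidance.

    # Continues the hypothetical sketch above (reuses model, applicant, features, X, y).
    # A linear model's coefficients give rough per-factor contributions;
    # a more complex model returns only a score, with no built-in reasons.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    contributions = pd.Series(model.coef_[0] * applicant.iloc[0].values,
                              index=features).sort_values()
    print(contributions)          # the most negative factors push toward denial

    black_box = GradientBoostingClassifier().fit(X, y)
    print(black_box.predict_proba(applicant)[0, 1])    # just a number, no reasons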

Considerations of fairness and transparency suggest that someone denied a loan should be able to receive an explanation of the decision. Regulators are therefore pushing for (1) transparency, so that the person knows that a model was used to deny the loan, and (2) an explanation of the decision.

Issues raised by these requirements include defining what counts as an adequate explanation (not simply “the computer said so”) and deciding who is accountable for a decision (the loan officer cannot say “the computer made me do it”). Without an understandable explanation, a person denied a loan cannot appeal, cannot correct incorrect data that drove the decision, and cannot improve the factors that matter so that a future application is not denied.

The proposed guidance describes several types of explanations: rationale explanation (the reasons for the decision), responsibility explanation (who was involved), data explanation (what data was used to train the AI), fairness explanation (what steps were taken to eliminate bias and ensure equity), safety and performance explanation (steps taken to ensure the accuracy, reliability, and security of decisions), and impact explanation (the broader impact of the AI system on society).

What does it mean for you?

The guidance described above is only proposed and applies only to the United Kingdom. However, it indicates a possible trend in other countries, and AI applications used by your organization may need to meet similar requirements in the future. Using an AI application that cannot give meaningful explanations may open your organization to legal challenges alleging bias.

More importantly, though, you should consider your customers’ and clients’ need to trust that your organization will not treat them capriciously. Even without an AI element you may not be able to be completely open about the basis for decisions, for example if you need to protect competitive secrets, but starting from the premise that decisions should be explained to your customers is part of a customer focus for your organization.

Where can you learn more?

The three parts of the ICO report are available here: https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-on-explaining-ai-decisions-guidance/

The 34-page “Part 1: The basics of explaining AI” is very readable and could be the focus of a discussion in your organization about the principles you want to adopt concerning AI. “Part 2: Explaining AI in practice” is 108 pages and gives more concrete guidance on deciding what type of explanation to provide. Finally, “Part 3: What explaining AI means for your organization” covers organizational roles, policies and procedures, and documentation in 23 pages. While the three parts are oriented toward organizations (rather, organisations) in the United Kingdom, much of the advice applies in any country.

Source: New Scientist, issue 3259, December 7-13, 2019, page 10, by Adam Vaughan.

Businesses and other organisations could face multimillion-pound fines if they are unable to explain decisions made by artificial intelligence, under plans put forward by the UK’s data watchdog today.

The Information Commissioner’s Office (ICO) said its new guidance was vital because the UK is at a tipping point where many firms are using AI to inform decisions for the first time. This could include human resources departments using machine learning to shortlist job applicants based on analysis of their CVs. The regulator says it is the first in the world to put forward rules on explaining choices taken by AI.

About two-thirds of UK financial services companies are using AI to make decisions, including insurance firms using it to manage claims, and a survey shows that about half of the UK public are concerned about algorithms making decisions humans would usually explain. AI researchers are already being called on to do more to unpack the “black box” nature of how machine learning arrives at results.

Simon McDougall of the ICO says: “This is purely about explainability. It does touch on the whole issue of black box explainability, but it’s really driving at what rights do people have to an explanation. How do you make an explanation about an AI decision transparent, fair, understandable and accountable to the individual?”

The guidance, which is out for consultation today, tells organisations how to communicate explanations to people in a form they will understand. Failure to do so could, in extreme cases, result in a fine of up to 4 per cent of a company’s global turnover, under the EU’s data protection law.

Not having enough money or time to explain AI decisions won’t be an acceptable excuse, says McDougall. “They have to be accountable for their actions. If they don’t have the resources to properly think through how they are going to use AI to make decisions, then they should be reflecting on whether they should be using it at all.” He also hopes the step will result in firms that buy in AI systems, rather than building their own, asking more questions about how they work.

Produced in conjunction with the Alan Turing Institute, the guidance is expected to take effect in 2020.
