How Explainable AI helps organizations make better decisions

Artificial Intelligence (AI) is gaining an ever firmer foothold in society. Driven by the promise that it will help maximize the value of data, more and more organizations are implementing AI in their operations. When AI becomes too complex, though, applications can appear to go entirely their own way. This has led to a call for ‘Explainable AI’: a way to understand and explain the outcomes of algorithms. In practice, however, this often turns out to be more difficult than expected.

An article by Daan Mocking, Data Scientist.

Date: 21 Apr 2020

Artificial Intelligence (AI) applications help people make more informed choices, yet there is no full guarantee that all applications will always yield the right results. Several AI applications, for instance, have made decisions that revealed severe racial bias, affecting a beauty pageant, the British police, and large sections of the international employment market, to name but a few examples. The only way to prevent such undesirable results in practice is to understand why an algorithm reaches the conclusions it does, and this is where Explainable AI comes in.

Mapping artificial reasoning

Explainable AI, or XAI for short, is a collective term for a range of AI techniques intended to map how other AI systems arrive at their results, what reasoning they use, and which factors influence them. This makes it easier to follow the reasoning of an algorithm and to assess whether its outcomes are safe and valid. Mapping the factors that influence the outcome of an algorithm therefore increases its reliability, and in some cases the law requires that decisions made by AI applications can be explained. The GDPR, for instance, stipulates that it must be possible to demonstrate how automated decisions with legal or similarly significant effects were made. By imposing this requirement, the EU seeks to prevent AI from dismissing people or rejecting mortgage applications without human intervention or further explanation. Partly thanks to the GDPR, this approach is gaining ever wider support and interest is growing rapidly.
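
To make the idea of mapping influential factors a little more concrete, the sketch below applies one widely used, model-agnostic technique, permutation feature importance, to an invented classifier. The data set and feature names are purely illustrative assumptions and are not tied to any application mentioned in this article.

```python
# A minimal sketch, assuming a hypothetical tabular data set with invented
# feature names. It shows how a model-agnostic XAI technique can rank which
# input factors drive a model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "age", "postcode_region", "years_employed", "open_loans"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a "black box" model whose behaviour we want to map.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the score drops:
# the features whose permutation hurts performance most are the ones
# that influence the model's outcomes most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```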

The limitations of Explainable AI

It can be debated where Explainable AI adds the most value. In some applications, for instance, knowing exactly how a result was derived is simply not that relevant: the result itself is the added value. Take Spotify’s recommender system: the fact that you like the music of the artist it suggests is functionally more important than understanding how it works. On the other hand, you could argue that if we simply accept an AI outcome as is, without exploring why and how the AI application got there, we ultimately end up none the wiser.

There are also cases in which mapping the decision-making process of an AI algorithm is virtually impossible in practice. Some AI applications use models so complex, such as deep learning, that figuring out the model’s reasoning becomes incredibly difficult. And even if its reasoning could be deduced, even the people who built the algorithm would most likely find it impossible to understand fully. It is therefore necessary to consider carefully which cases would benefit most from Explainable AI. Whenever wrong outcomes would have a major impact, Explainable AI is a valuable addition, especially if the outcome affects people.

An example: a medical dilemma

In 2015, a deep learning algorithm was applied to a patient database containing information on approximately 700,000 people in order to discover patterns. The next step was to ascertain what the algorithm had learned by analyzing the data of all these patients. Among other things, it proved to be excellent at predicting exactly when psychiatric patients would have a schizophrenic episode. This marked a breakthrough for these patients, as the predictions can help treating physicians anticipate episodes and administer medication before their onset. This is, admittedly, a huge step forward, but as long as the journey the algorithm took to get to its results remains a mystery, it will be impossible to determine the actual cause of those episodes. As such, our understanding of the disease and its inner workings has not progressed, and we are no closer to potentially preventing it.

People are in charge

One could argue that the use of algorithms that cannot be scrutinized means that decisions are made whose underlying motives are not always clear. Decision-makers choosing to use an algorithm, however, are often unaware of this shortcoming: they fail to anticipate that algorithms may teach themselves a structurally undesirable correlation. Spotify recommending a less-than-ideal song is not a big deal, but if a bank were to use AI to accept or reject loan applications, it would want to know the underlying reasoning in order to prevent discrimination, for example. The future will tell how this insight will impact applications that are currently being used to find connections that we humans cannot fully comprehend. In any case, even if the underlying reasoning behind an algorithmic decision cannot be uncovered, people will always be in charge of what is done with the outcome. Although it is not always possible to prevent complex systems from making undesirable links to generate outcomes, Explainable AI gives users the tools they need to identify undesirable results and intervene in the decision-making process. Besides, understanding the process is a great way to discover and repair design flaws in algorithms.
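
As a hedged illustration of how such an undesirable correlation might be surfaced, the sketch below uses a simple global surrogate: a small decision tree trained to imitate a black-box model, whose rules a human can then read. The loan-style feature names and data are invented assumptions for the example, not a description of any real bank’s system.

```python
# A minimal sketch, assuming a hypothetical loan-approval setting.
# A small, interpretable decision tree is fitted to a black-box model's own
# predictions, so its rules approximate how the black box decides.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "debt_ratio", "postcode_region", "age", "years_employed"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=1)

# The opaque model a bank might use to accept or reject applications.
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# Train the surrogate on the black box's predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules approximating the black box; a prominent split on a
# proxy feature such as postcode_region would be a signal to investigate.
print(export_text(surrogate, feature_names=feature_names))
```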

Fair, Accountable & Transparent AI

In an interview, Professor Marc Salomon of the University of Amsterdam shares his thoughts on artificial intelligence (AI) and recent developments in this field: "Algorithmic decision-making is forcing companies to reconsider their ethical compass. This is not a new development. However, society’s desire for transparent decision-making has, in recent years, become more pronounced. In my opinion, this is partly due to the popularity of AI, but it’s also simply a sign of the times."


In a nutshell

AI applications have enormous potential added value. However, anyone who wishes to work with AI must realize that a certain level of knowledge is required in order to come to reliable and ethical results. Explainable AI can contribute to such results, but there are functional limits to the pursuit of algorithmic transparency. Increasing awareness of the strengths and limitations of Explainable AI, however, is a big step in the right direction.
