Moshe Kranc, our Chief Technology Officer, always has his eye on the next wave of potential issues surrounding emerging technologies such as artificial intelligence (AI). In the following Q&A, we ask Moshe about explainable AI and how it can address the mistrust created by the often opaque ways in which AI works.
What is explainable AI?
To put it simply, explainable AI means that the recommendations proposed by an AI-based system can be justified to a human being.
The issue we face today is that many AI algorithms, e.g., deep learning, base their recommendations on patterns they discern in large volumes of training data. While these patterns may work well at making recommendations, in many cases, they are based on statistics rather than on any human-understandable logic. Explainable AI adds transparency to this process.
Can you provide an example?
Consider decision tree algorithms, where the training data is used to construct a logical tree with clearly defined criteria. A classic example is a decision tree that predicts whether Joe will play tennis today, given the weather outlook, humidity, and wind.
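To make that concrete, here is a minimal sketch of such a tree using scikit-learn; the "play tennis" data and its encoding are invented purely for illustration. The point is that the trained model can be printed as plain rules a person can read and question.

```python
# A minimal sketch of an interpretable decision tree, assuming scikit-learn.
# The toy "play tennis" data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["outlook", "humidity", "wind"]
# Encoding: outlook (0=sunny, 1=overcast, 2=rain), humidity (0=normal, 1=high),
# wind (0=weak, 1=strong); label is 1 if Joe played tennis that day.
X = [
    [0, 1, 0], [0, 1, 1], [1, 1, 0], [2, 1, 0], [2, 0, 0],
    [2, 0, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0], [2, 0, 0],
]
y = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned branches as plain if/then rules,
# which a person can read, sanity-check, and challenge.
print(export_text(tree, feature_names=feature_names))
```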
This kind of algorithm, while imperfect, is certainly understandable to a person. Now, contrast that with a deep learning algorithm, which trains layers of neurons by adjusting the weights of the connections between them.
This algorithm may be highly accurate, but it is nearly impossible for a human being to understand what each neuron represents or why the weight of its connection to a neuron in the next layer is set high or low.
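For contrast, here is a similar hedged sketch using a small neural network (scikit-learn's MLPClassifier) on the same invented toy data: the only artifacts the trained model exposes are weight matrices, which do not translate into human-readable rules.

```python
# A minimal sketch of the opaque case: a small neural network trained on the
# same invented toy data. Assumes scikit-learn; nothing here is a real dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([
    [0, 1, 0], [0, 1, 1], [1, 1, 0], [2, 1, 0], [2, 0, 0],
    [2, 0, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0], [2, 0, 0],
])
y = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])

mlp = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000,
                    random_state=0).fit(X, y)

# coefs_ holds one weight matrix per layer; none of these numbers corresponds
# to a rule a person can read, even if the model's predictions are accurate.
for layer, weights in enumerate(mlp.coefs_):
    print(f"layer {layer} weights, shape {weights.shape}:")
    print(np.round(weights, 2))
```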
Fortunately, there have been some recent technological advances that can help humans understand opaque statistical algorithms like deep learning, e.g., Local Interpretable Model-Agnostic Explanations (LIME).
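As a rough illustration of where a tool like LIME fits in, the sketch below reuses the toy data and the opaque mlp model from above and asks the lime package (assumed to be installed) to explain one individual prediction by fitting a simple local surrogate model around it.

```python
# A minimal sketch of LIME explaining a single prediction of the black-box
# model above; assumes the `lime` package and reuses X and mlp from the
# previous sketch.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X,
    feature_names=["outlook", "humidity", "wind"],
    class_names=["no tennis", "tennis"],
    mode="classification",
)

# LIME perturbs the instance, watches how the black-box predictions change,
# and fits a simple local model whose weights serve as the explanation.
explanation = explainer.explain_instance(X[0], mlp.predict_proba, num_features=3)
print(explanation.as_list())  # ranked feature contributions for this one case
```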
Why is this important for our clients?
At the end of the day, people are responsible for the decisions that are made. If something goes wrong, "I just did what the AI algorithm recommended" will not be a very convincing defense. So, if we're going to base mission-critical decisions on AI algorithms, we need to understand why the algorithm recommended what it did and what its underlying logic was.
Not only does this build trust in the system, but it also helps flag inaccurate or otherwise problematic recommendations.
What kind of issues can explainable AI help avoid?
For one, the algorithm may be improperly tuned, e.g., overfit to the training data. Another major problem is biased training data. For example, the data used to decide whether to sell life insurance to a potential customer may have been collected only from higher-income neighborhoods. Along similar lines, the recommendation itself could be biased or unethical; a salary recommendation based purely on historical wage data, for instance, would perpetuate unfair pay for women. Explainable AI provides transparency into the decision, making it possible to catch such issues.
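One simple, hedged sketch of the kind of check this transparency enables: once you can see which inputs drive a decision, you can compare outcomes across groups. The column names and numbers below are hypothetical.

```python
# A minimal sketch of a group-wise outcome check; the data and column names
# are hypothetical, invented only to illustrate the idea.
import pandas as pd

applications = pd.DataFrame({
    "neighborhood_income": ["high", "high", "high", "low", "low", "low"],
    "approved":            [1,      1,      1,      0,     0,     1],
})

# If approval rates diverge sharply between groups that should be treated
# alike, the training data or the model deserves a closer look.
print(applications.groupby("neighborhood_income")["approved"].mean())
```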
What are some potential business use cases for explainable AI applications?
The use cases are many – explainable AI can be applied to anything that affects people's lives and could be tainted by bias. This could range from determining admissions to a training program or university, to deciding how much insurance coverage to offer someone, to whether to issue someone a credit card or a loan based on demographics.