Beyond Explainability – XAI, Research Areas + TDA

It is becoming well understood that in order to make Artificial Intelligence broadly useful, it is critical that humans can interact with and have confidence in the algorithms that are being used. This observation has led to the development of the notion of explainable AI (sometimes called XAI), which was initially a vaguely defined concept requiring explanations (of some type) for the algorithms being used. My colleague Gurjeet Singh has refined this notion considerably in his post The Trust Challenge – Why Explainable AI is Not Enough. He makes the point that much of what passes for explainable AI is simply a description of the algorithms being used to solve a particular problem, and observes that such transparency, while useful to a degree, does not nearly cover what is needed to make AI interact with humans in a truly productive manner. He then focuses on one aspect of this requirement: an output from an AI algorithm should be justifiable. For example, an algorithm that predicts revenue based on macroeconomic indicators should have a description of the features that drive the prediction in human terms, such as “U.S. stock market indices” or “short-term interest rates”. The requirements are actually broader than that, however. In addition to the justifiability of the output, it is also important that the functioning of the algorithm and the nature of the inputs be understandable. The reason is that problems will ultimately arise for AI algorithms that require the ability to diagnose and intervene in the functioning of the algorithm. We will call this capability accessibility.

In this post, I’ll lay out three directions of research and development that are moving us toward justifiability and accessibility.

Better Outputs

In his book Exploratory Data Analysis, the statistician John Tukey argued that making sense of data requires outputs that are much more clearly interpretable by humans than lists of numbers or summary statistics such as means and variances. This area includes many of the familiar business intelligence constructs, such as histograms, box plots, and scatterplots. Also included are standard regression methods, which approximate the data by a small set of equations, as well as clustering methods, which produce a partition or a dendrogram and are useful in developing taxonomies for the data. Topological data analysis (TDA), which produces network models that act as similarity maps for the data, is another very useful method, and it can be used in various ways to systematically and automatically generate explanations that characterize subgroups of the data.
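
As a small illustration of the kind of output Tukey had in mind, here is a minimal sketch in Python (using scipy and matplotlib, with synthetic toy data standing in for a real data set) that produces a dendrogram from hierarchical clustering:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster import hierarchy

# Toy data: two well-separated Gaussian blobs stand in for a real data set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)),
               rng.normal(5, 1, (20, 5))])

# Agglomerative clustering: the linkage matrix encodes the full merge tree.
Z = hierarchy.linkage(X, method="ward")

# The dendrogram is the human-readable output: a taxonomy of the data.
hierarchy.dendrogram(Z)
plt.title("Hierarchical clustering of toy data")
plt.show()
```

The point of the picture is not the clustering itself but its legibility: a human can read group structure and merge order directly off the tree, which is exactly what a bare list of cluster labels does not offer.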

The main point is that in addition to working on the accuracy of the algorithms, we need to take seriously the development of better and more informative structures that allow humans to understand the data. To be clear, this is not just a matter of UI; it is the generation of new mathematical and statistical structures, together with the elucidation of their properties. Creating such structures will enable us to make the algorithms justifiable.

Better Inputs

In the study of databases of images, one typically starts with each image encoded as a pixel vector, i.e. a vector with one gray-scale value for each pixel. Each pixel is therefore a feature, and convolutional neural net technology has demonstrated remarkable success on various classification problems for such image databases, although it has also been shown that the methods are vulnerable to adversarial “attacks”. If one can identify features that are not simply individual pixels but instead capture some of the cues that humans use in performing the classification, then the algorithm becomes less vulnerable to adversarial attacks, and as a byproduct the speed with which it operates should also improve. To give a sense of how this might work, consider the MNIST data set of handwritten digits. Humans distinguish the digit 8 from the digit 9 by recognizing that the 8 has two loops while the 9 has only one. Similarly, the number of ends, crossings, and corner points are cues that people use to distinguish among 1, 2, and 4. If one can compute such features, one should include them in the calculation from the beginning. There is an area within TDA called persistent homology which concerns itself exactly with the computation of such features. A great example of this kind of application is the work of Guowei Wei and collaborators in the area of drug discovery.
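
To make the loop-counting idea concrete, here is a minimal sketch in Python using scipy. A real pipeline would compute such features robustly with cubical persistent homology (libraries such as GUDHI support this); the complement-components trick below is a simple stand-in that captures the same invariant, the number of loops (the first Betti number):

```python
import numpy as np
from scipy import ndimage

def count_loops(image, ink_threshold=128):
    """Count the loops (1-cycles) in a digit image such as an MNIST sample.

    The number of loops in the ink region equals the number of connected
    components of the background minus one (the unbounded outside region):
    2 for a typical '8', 1 for a '9' or '0', 0 for a '1'.
    """
    ink = image >= ink_threshold          # binarize: True where there is ink
    # Label background components with 4-connectivity (scipy's default),
    # which pairs correctly with 8-connected ink regions.
    _, n_background = ndimage.label(~ink)
    return n_background - 1

# The loop count then becomes one extra, human-interpretable feature
# supplied to the classifier alongside the raw pixel values.
```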

Because such features are more understandable to humans, working with this kind of input improves accessibility and provides the right ingredients for making good justifications of results.

Algorithms as Data Sources

When algorithms have a clear theoretical motivation, it is often possible to understand what the algorithm is doing in human terms. In many situations, though, one can name and describe the steps taken by the algorithm but cannot understand how and why it does what it does. This is a problem, since it limits the degree of trust humans can have in the algorithm and makes it difficult to diagnose and ultimately correct problems with it. This is particularly true for neural network technology. In such a case, it is useful to treat the algorithm itself as a problem in data analysis. The states of the machine at various stages of the calculation can be regarded as a data set, and an appropriately justifiable method of data analysis can give insight into the action of the algorithm.
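
As a sketch of what treating the algorithm as a data source can look like in practice, the snippet below (Python with PyTorch; the network and the input batch are placeholders, not a specific model from the text) records the activations of one internal layer across a batch of inputs, yielding a matrix whose rows can be analyzed like any other data set:

```python
import torch
import torchvision.models as models

# Stand-in network; in practice one would load a trained checkpoint.
model = models.resnet18(weights=None)
model.eval()

# Record the output of one internal layer via a forward hook.
states = []
def capture(module, inputs, output):
    # Each input contributes one point: the flattened activation vector.
    states.append(output.detach().flatten(start_dim=1))

handle = model.layer2.register_forward_hook(capture)

batch = torch.randn(16, 3, 224, 224)  # placeholder for real input images
with torch.no_grad():
    model(batch)
handle.remove()

# A 16 x d matrix: the "state of the machine" at this stage, regarded as
# a data set ready for any justifiable analysis method, TDA included.
activations = torch.cat(states)
```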

Preliminary results indicate that TDA, applied to the weights and activations in convolutional neural nets, can give a great deal of insight into their inner workings and can contribute to improving their performance.
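
The following sketch shows one plausible version of this construction (the patch normalization and the use of the ripser package are illustrative assumptions, not the specific pipeline behind those preliminary results): the weights of a convolutional layer are turned into a point cloud, whose persistent homology can then be computed.

```python
import numpy as np
import torch.nn as nn
from ripser import ripser  # one of several persistent homology packages

# A small untrained layer stands in for the network under study.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)

# Treat each 3x3 spatial slice of each filter as a point in R^9.
w = conv.weight.detach().numpy()   # shape (64, 3, 3, 3)
points = w.reshape(-1, 9)

# Mean-center and normalize, so only the "shape" of each patch matters.
points = points - points.mean(axis=1, keepdims=True)
norms = np.linalg.norm(points, axis=1, keepdims=True)
mask = norms[:, 0] > 1e-8
points = points[mask] / norms[mask]

# Persistence diagrams of the weight point cloud: long-lived H0 and H1
# features indicate clustering and circular structure among the filters.
diagrams = ripser(points, maxdim=1)["dgms"]
```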

In summary, we have pointed to some directions that we believe will lead to better explanations and justifications of the results of AI algorithms, moving past the insufficient explainable AI (XAI) in use today. The area is enormously broad, and the sooner we address the problems head-on, the sooner AI can be fully accepted as a critical tool.