Explainable Artificial Intelligence: A Main Foundation in Human-centered AI

Posted: March 30th, 2022 | Filed under: Artificial Intelligence, Human-centered explainable AI

Human-centered explainable AI (HCXAI) is an approach that puts the human at the center of technology design. It develops a holistic understanding of “who” the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.

Explainable AI (XAI) refers to artificial intelligence (and particularly machine learning) techniques that can provide human-understandable justifications for their output behavior. Much of the previous and current work on explainable AI has focused on interpretability, which can be viewed as a property of machine-learned models that dictates the degree to which a human user (AI expert or non-expert) can come to conclusions about the performance of the model given specific inputs.

An important distinction between interpretability and explanation generation is that explanation does not necessarily elucidate precisely how a model works; it aims instead to provide useful information for practitioners and users in an accessible manner. The challenges of designing and evaluating “black-box” AI systems depend crucially on “who” the human in the loop is. Understanding the “who” is crucial because it governs what the explanation requirements are. It also scopes how the data are collected, what data can be collected, and the most effective way of describing the “why” behind an action.

Explainable AI (XAI) techniques can be applied to black-box AI models in order to obtain post-hoc explanations, based on the information those models grant. For Prof. Dr. Corcho, rule extraction belongs to the group of post-hoc XAI techniques. These techniques are applied over an already trained ML model (generally a black-box one) in order to explain the decision frontier it has inferred from the input features to obtain the predictions. Rule extraction techniques are further differentiated into two subgroups: model-specific and model-agnostic. Model-specific techniques generate the rules based on specific information from the trained model, while model-agnostic ones only use the input and output information of the trained model, and hence can be applied to any model. Post-hoc XAI techniques in general are also differentiated depending on whether they provide local explanations (explanations for a particular data point) or global ones (explanations for the whole model). Most rule extraction techniques have the advantage of providing both kinds of explanation at the same time, as the sketch below illustrates.
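To make this concrete, here is a minimal sketch (in Python, assuming scikit-learn) of a model-agnostic, post-hoc rule extraction approach: a shallow decision tree is fitted as a global surrogate of an already trained black-box classifier, using only the black box’s inputs and predicted outputs. The dataset, model choices, and parameters are illustrative assumptions, not a prescribed method.

```python
# Sketch of global surrogate rule extraction (model-agnostic, post-hoc).
# Assumes scikit-learn; the dataset and black-box model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The "black box" to be explained; any fitted classifier would do.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Model-agnostic step: the surrogate only sees the inputs and the black box's
# predicted outputs, never the internals of the random forest.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Global explanation: the whole rule set inferred by the surrogate tree.
print(export_text(surrogate, feature_names=list(X.columns)))

# Local explanation: the single decision path followed by one data point.
path = surrogate.decision_path(X.iloc[[0]])
print("Nodes visited for the first instance:", path.indices.tolist())
```

The depth limit keeps the extracted rules short, and therefore more comprehensible, at the cost of some fidelity to the black box.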

The researchers Carvalho, Pereira, and Cardoso have defined a taxonomy of properties that should be considered in the individual explanations generated by XAI techniques:

  • Accuracy: It relates to how well the explanations can be used to predict the model’s output on unseen data. 
  • Fidelity: It refers to how well the explanations approximate the underlying model. The explanations have high fidelity if their predictions are consistently similar to the ones obtained from the black-box model (see the sketch after this list).
  • Consistency: It refers to the similarity of the explanations obtained from two different models trained on the same input data set. Consistency is high when the explanations obtained from the two models are similar. However, low consistency is not necessarily a bad result, since the two models may be extracting different yet valid patterns from the same data set due to the ‘‘Rashomon effect’’ (seemingly contradictory information is in fact telling the same story from different perspectives). 
  • Stability: It measures how similar the explanations obtained are for similar data points. As opposed to consistency, stability measures the similarity of explanations using the same underlying model. 
  • Comprehensibility: This metric relates to how well a human will understand the explanation. It is therefore very difficult to define mathematically, since it is affected by many subjective elements of human perception such as context, background, and prior knowledge. However, there are some objective elements that can be considered in order to measure ‘‘comprehensibility’’, such as whether the explanations are based on the original features (or on synthetic ones generated from them), the length of the explanations (how many features they include), or the number of explanations generated (e.g. in the case of global explanations). 
  • Certainty: It refers to whether or not the explanations include the certainty of the model about the prediction (e.g. a confidence score). 
  • Importance: Some XAI methods that use features for their explanations include a weight associated with the relative importance of each of those features. 
  • Novelty: Some explanations may include whether the data point to be explained comes from a region of the feature space that is far away from the distribution of the training data. This is important to consider in many cases, since the explanation may not be reliable when the data point to be explained is very different from those used to generate the explanations. 
  • Representativeness: It measures how many instances are covered by the explanation. Explanations can range from explaining a whole model (e.g. the weights in a linear regression) to only being able to explain one data point.
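As a rough illustration of how two of these properties can be quantified for a surrogate explainer such as the decision tree sketched above, the Python functions below compute fidelity (agreement with the black-box predictions on unseen data) and a simple stability score (Jaccard similarity between the surrogate decision paths of a data point and its perturbed neighbours). The perturbation scheme and the similarity measure are assumptions made for the sake of the example, not standard definitions.

```python
# Sketches of fidelity and stability for a tree-based surrogate explainer.
# `black_box` and `surrogate` are fitted models as in the previous example.
import numpy as np
from sklearn.metrics import accuracy_score

def fidelity(black_box, surrogate, X_unseen):
    """Fraction of unseen points on which the surrogate reproduces the
    black-box prediction (high fidelity = faithful explanations)."""
    return accuracy_score(black_box.predict(X_unseen),
                          surrogate.predict(X_unseen))

def stability(surrogate, x, noise_scale=0.01, n_neighbours=20, seed=0):
    """Mean Jaccard similarity between the decision path of `x` (a 1-D
    numpy array) and the decision paths of slightly perturbed copies of it."""
    rng = np.random.default_rng(seed)
    base = set(surrogate.decision_path(x.reshape(1, -1)).indices)
    scores = []
    for _ in range(n_neighbours):
        neighbour = x + rng.normal(scale=noise_scale * (np.abs(x) + 1e-12))
        path = set(surrogate.decision_path(neighbour.reshape(1, -1)).indices)
        scores.append(len(base & path) / len(base | path))
    return float(np.mean(scores))
```

Consistency could be measured analogously by comparing the explanations extracted from two different models trained on the same data set.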

In the realm of psychology there are three kinds of views of explanations: 

  • The formal-logical view: an explanation is like a deductive proof, given some propositions.
  • The ontological view: events – state of affairs – explain other events.
  • The pragmatic view: an explanation needs to be understandable by the ‘‘demander’’. 

Explanations that are sound from a formal-logical or ontological view, but leave the demander in the dark, are not considered good explanations. For example, a very long chain of logical steps or events (e.g. hundreds) without any additional structure can hardly be considered a good explanation for a person, simply because he or she will lose track. 

On top of this, the level of explanation refers to whether the explanation is given at a high level or at a more detailed level. The right level depends on the knowledge and the needs of the demander: he or she may be satisfied with some parts of the explanation remaining at the higher level, while other parts need to be at a more detailed level. The kind of explanation refers to notions such as causal explanations and mechanistic explanations. Causal explanations provide the causal relationship between events without explaining how it comes about: a kind of ‘‘why’’ question. For instance, smoking causes cancer. A mechanistic explanation would explain the mechanism whereby smoking causes cancer: a kind of ‘‘how’’ question.

As said, a satisfactory explanation does not exist by itself, but depends on the demander’s need. In the context of machine learning algorithms, several typical demanders of explainable algorithms can be distinguished: 

  • Domain experts: those are the ‘‘professional’’ users of the model, such as medical doctors who have a need to understand the workings of the model before they can accept and use the model.
  • Regulators, external and internal auditors: like the domain experts, those demanders need to understand the workings of the model in order to certify its compliance with company policies or existing laws and regulations. 
  • Practitioners: professionals who use the model in the field: they take users’ input, apply the model, and subsequently communicate the result to the users, for instance in loan applications. 
  • Redress authorities: the competent authority designated to verify that an algorithmic decision for a specific case complies with the existing laws and regulations. 
  • Users: people to whom the algorithms are applied and who need an explanation of the result. 
  • Data scientists, developers: technical people who develop or reuse the models and need to understand the inner workings in detail.

 
Summing up, for explainable AI to be effective, the final consumers (people) of the explanations need to be duly considered when designing HCXAI systems. AI systems are only truly regarded as “working” when their operation can be narrated in intentional vocabulary, using words whose meaning goes beyond the mathematical structures. When an AI system “works” in this broader sense, it is clearly a discursive construction, not just a mathematical fact, and the discursive construction succeeds only if the community assents.

