Configuring explainability

You can configure explainability to reveal which features contribute to the model's predicted outcome for a transaction and to predict what changes would result in a different outcome.

You can configure local explanations to analyze the impact of features on specific model transactions and global explanations to analyze the general factors that influence model outcomes.

You can configure explainability manually or you can run a custom notebook to generate an explainability archive.

After you run the notebook, you can upload the archive that it generates to specify the parameters for your evaluation. If you do not provide training data, you must generate the archive to configure explainability.

When you configure explainability manually, you must set parameters to determine the type of explanations that you want to generate. You can also choose to specify controllable features and enable language support.

Configure parameters

To configure parameters, you must specify the explanation methods that you want to use. You must choose either SHAP (Shapley Additive explanations) or LIME (Local Interpretable Model-Agnostic explanations) as the local explanation method. If you use SHAP as the local explanation method, you must specify the number of perturbations that the model generates for each local explanation and select an option for using background data. Background data is used to determine the influence of features on outcomes for explanations. If you use LIME as the local explanation method, you need to specify only the number of perturbations.
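
The same settings have close analogs in the open-source shap and lime Python packages. The following sketch is only an illustration of those analogs, not the product's internal implementation: the model, data, and feature names are hypothetical, the number of perturbations corresponds to the nsamples and num_samples arguments, and the background data is the dataset that is passed to the SHAP explainer.

```python
# Illustrative sketch only: how "number of perturbations" and "background data"
# map onto the open-source shap and lime libraries. The model, data, and
# feature names are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

background_df = pd.DataFrame(np.random.rand(100, 4),
                             columns=["age", "income", "debt", "tenure"])
labels = np.random.randint(0, 2, 100)
model = RandomForestClassifier().fit(background_df, labels)
transaction = background_df.iloc[[0]]                  # one model transaction

# SHAP local explanation: background data anchors the feature influence, and
# nsamples plays the role of the "number of perturbations" setting.
predict_pos = lambda X: model.predict_proba(X)[:, 1]   # positive-class probability
shap_explainer = shap.KernelExplainer(predict_pos, shap.sample(background_df, 50))
shap_contributions = shap_explainer.shap_values(transaction, nsamples=1000)
print(dict(zip(background_df.columns, shap_contributions[0])))

# LIME local explanation: only the number of perturbations (num_samples)
# needs to be specified.
lime_explainer = LimeTabularExplainer(background_df.values,
                                      feature_names=list(background_df.columns),
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(background_df.values[0],
                                           model.predict_proba,
                                           num_samples=1000)
print(lime_exp.as_list())
```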

If you enable the Global explanation parameter, you can also choose to use SHAP (Shapley Additive explanations) or LIME (Local Interpretable Model-Agnostic explanations) as the global explanation method. To configure the global explanation method, you must specify the sample size of model transactions that is used to generate ongoing explanations and a schedule that determines when the explanations are generated. You must also specify a global explanation stability threshold and select an option that specifies how a baseline global explanation is generated. These settings are used to calculate the global explanation stability metric.

Figure: Explainability parameters displayed with global explanation enabled and LIME selected as the global and local explanation method.
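
A global explanation can be thought of as an aggregation of local explanations over a sample of transactions. The following hedged sketch illustrates that idea with the open-source shap package; the model, data, sample size, and aggregation are assumptions for illustration, not the product's exact calculation or schedule.

```python
# Illustrative sketch: aggregating local SHAP values over a sample of
# transactions into a global feature ranking. Model, data, and sample size
# are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

data_df = pd.DataFrame(np.random.rand(200, 4),
                       columns=["age", "income", "debt", "tenure"])
labels = np.random.randint(0, 2, 200)
model = RandomForestClassifier().fit(data_df, labels)

predict_pos = lambda X: model.predict_proba(X)[:, 1]
explainer = shap.KernelExplainer(predict_pos, shap.sample(data_df, 50))

# "Sample size" here stands in for the number of transactions that are
# explained each time a scheduled global explanation is generated.
sample = data_df.sample(50, random_state=0)
local_values = explainer.shap_values(sample, nsamples=500)

# Mean absolute contribution per feature gives the global ranking.
global_importance = np.abs(local_values).mean(axis=0)
ranking = sorted(zip(data_df.columns, global_importance),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```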

Controllable features

Controllable features are features that can be changed and have a significant impact on your model outcomes. You can specify controllable features to identify changes that might produce different outcomes.
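
As a hedged illustration of the idea (the model, feature names, and step sizes are hypothetical, and this is not the product's search strategy), the following sketch varies only the controllable features of a transaction to look for a change that flips the predicted outcome.

```python
# Illustrative sketch: vary only controllable features of a transaction to
# find a change that might produce a different outcome. The model, feature
# names, and step sizes are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame(np.random.rand(100, 3), columns=["age", "income", "debt"])
labels = (df["income"] - df["debt"] > 0).astype(int)
model = LogisticRegression().fit(df, labels)

transaction = df.iloc[[0]].copy()
controllable = ["income", "debt"]          # "age" cannot be changed

original_outcome = model.predict(transaction)[0]
for feature in controllable:
    for delta in (-0.2, 0.2):
        variant = transaction.copy()
        variant[feature] = variant[feature] + delta
        if model.predict(variant)[0] != original_outcome:
            print(f"Changing {feature} by {delta:+.1f} flips the outcome")
```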

Language support

If you enable language support, you can analyze languages that are not space-delimited to determine explainability. You can configure explainability to automatically detect supported languages or you can manually specify any supported languages that you want analyzed. You can't configure language support for structured and image models.

Supported explainability metrics

The following explainability metrics are supported:

Global explanation stability

Global explanation stability measures the degree of consistency in the global explanation over time.

  • Description:

A global explanation is generated with the baseline data that you provide when you configure explainability evaluations. Global explanations identify the features that have the most impact on the behavior of your model. When new global explanations are generated, each explanation is compared to the baseline global explanation to calculate global explanation stability. Global explanation stability uses the normalized discounted cumulative gain (NDCG) formula to determine the similarity between new global explanations and the baseline global explanation.

  • How it works: Higher values indicate greater similarity to the baseline explanation.

    • At 0: The explanations are very different.
    • At 1: The explanations are very similar.
  • Do the math:

The following formula is used for calculating global explanation stability:

nDCGₚ = DCGₚ / IDCGₚ
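
Using the standard definition of discounted cumulative gain, DCGₚ = Σᵢ relᵢ / log₂(i + 1), the comparison can be sketched as follows. The feature rankings, importance values, and the use of baseline importance as relevance are illustrative assumptions, not the product's exact calculation.

```python
# Illustrative sketch: comparing a new global explanation to the baseline with
# normalized discounted cumulative gain (nDCG). The rankings, values, and
# relevance scheme are assumptions for illustration only.
import numpy as np

# Baseline global explanation: feature -> importance (used here as relevance).
baseline = {"income": 0.40, "debt": 0.30, "age": 0.20, "tenure": 0.10}

# New global explanation, ordered by the new importance ranking.
new_ranking = ["debt", "income", "tenure", "age"]

def dcg(relevances):
    # DCG_p = sum over positions i of rel_i / log2(i + 1), positions start at 1.
    return sum(rel / np.log2(i + 2) for i, rel in enumerate(relevances))

# Relevance of each position in the new ranking is the baseline importance of
# the feature placed there; the ideal ordering is the baseline's own ranking.
dcg_p = dcg([baseline[f] for f in new_ranking])
idcg_p = dcg(sorted(baseline.values(), reverse=True))

ndcg_p = dcg_p / idcg_p
print(round(ndcg_p, 3))   # closer to 1 = more similar to the baseline
```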

Limitations

  • When you configure settings for SHAP global explanations, the following limitations exist:
    • The sample size that you use to configure explanations can affect the number of explanations that are generated during specific time periods. If you attempt to generate multiple explanations for large sample sizes, your transactions might fail to process.
    • If you configure explanations for multiple subscriptions, you must specify the default values for the sample size and number of perturbations settings when your deployment contains 20 or fewer features.
  • Equal signs (=) are not supported in column names in your data. An equal sign might cause an error.
  • Explainability is not supported for SPSS multiclass models that return only the winning class probability.

Parent topic: Evaluating AI models
