Version: 1.43

Explainability

Explain predictions to enhance model transparency, ensure ethical and unbiased outcomes, and gain insights into the underlying data patterns.

Deploy an explainer

There are multiple ways to deploy an explainer when creating a Deployment. For detailed information, see Deploying an explainer.
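
The linked guide is authoritative; as a quick orientation, a minimal sketch of deploying a model together with an explainer via the Deeploy Python client could look like the following. The class, field, and enum names used here (Client, CreateDeployment, ModelType, ExplainerType, and the credential parameters) are assumptions that may differ between client versions, so check Deploying an explainer for the exact signatures.

```python
# Hedged sketch: deploy a model plus an explainer with the Deeploy
# Python client. All names below are assumptions; verify them against
# the "Deploying an explainer" guide for your client version.
from deeploy import Client
from deeploy.models import CreateDeployment          # assumed import path
from deeploy.enums import ModelType, ExplainerType   # assumed import path

client = Client(
    host="example.deeploy.ml",       # hypothetical Deeploy host
    workspace_id="<WORKSPACE_ID>",
    access_key="<ACCESS_KEY>",
    secret_key="<SECRET_KEY>",
)

# Create a Deployment that includes an explainer alongside the model.
deployment = client.create_deployment(
    CreateDeployment(
        name="example-model",
        repository_id="<REPOSITORY_ID>",
        branch_name="main",
        model_type=ModelType.SKLEARN,            # assumed enum value
        explainer_type=ExplainerType.SHAP_KERNEL,  # assumed enum value
    )
)
```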

Explain predictions

This section assumes you have deployed a model with an explainer. Use the UI, the Deeploy API, or the Python client to explain a prediction; a Python client sketch follows the UI steps below.

  • Navigate to the Test page of your Deployment.
  • Add input data. Only a single input can be explained at a time on the Test page. Select Explain prediction and click Make prediction.
  • The explanation result is part of the response. If available, a visualization of the explanation is shown below the response.
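
The same explanation can be requested programmatically. Below is a minimal sketch using the Deeploy Python client; the explain() call, its parameters, and the {"instances": ...} request shape are assumptions and may differ per client and API version.

```python
# Hedged sketch: explain a single prediction with the Deeploy Python
# client. Method and parameter names are assumptions; consult the
# Deeploy API / Python client reference for the exact interface.
from deeploy import Client

client = Client(
    host="example.deeploy.ml",       # hypothetical Deeploy host
    workspace_id="<WORKSPACE_ID>",
    access_key="<ACCESS_KEY>",
    secret_key="<SECRET_KEY>",
)

# One input at a time, mirroring the Test page behavior above.
# The feature vector is an illustrative example.
request_body = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

explanation = client.explain(
    deployment_id="<DEPLOYMENT_ID>",
    request_body=request_body,
)
print(explanation)  # explanation payload, e.g. per-feature attributions
```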
Explainability visualizations

Deeploy provides a visualization of explanation output for the standard explainers and some integrated explainers. Visualizations are currently available for:

  • Tree SHAP
  • Saliency
  • Attention
  • Captum text explainer dashboard
  • Captum image explainer dashboard

For details on how to deploy one of these explainers, refer to Deploying an explainer.

To view the visualization for an explained prediction, navigate to that prediction on the Predictions page and click View visual explanation.