Version: 1.40

Deploying an explainer

There are three different ways to deploy an explainer on Deeploy:

  • Standard explainer: added to your model without having to train an explainer yourself
  • Trained explainer: provide your own trained explainer artifact using one of the supported frameworks or a custom Docker image
  • Integrated explainer: embed an explainability function in your model artifact, e.g. for PyTorch models

Standard Explainers

Standard explainers are non-trained explainers that can be added to your deployment if your model meets the requirements. Currently we support a Tree SHAP explainer, a saliency-based explainer, and an attention-based explainer.

Tree SHAP

Deploy a Tree SHAP explainer. This is available for tree-based classification models when using the XGBoost, Scikit-learn, or LightGBM model frameworks.

Note

In the case of multiple inputs in a single explanation request, explanation values are returned only for the first input.

Note

For models where probabilities are returned, the explanation is for the class with the highest probability.
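
For illustration, below is a minimal sketch of a tree-based classification model that the standard Tree SHAP explainer could be attached to, assuming the XGBoost model framework. The artifact name (model.bst) and the example dataset are assumptions; check the model framework documentation for the exact artifact your deployment expects.

```python
import xgboost as xgb
from sklearn.datasets import load_iris

# Placeholder training data; in practice this is your own dataset
X, y = load_iris(return_X_y=True)

# A tree-based classifier that qualifies for the standard Tree SHAP explainer
model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# Save the booster as the model artifact for deployment (artifact name is an assumption)
model.get_booster().save_model("model.bst")
```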

Saliency

Deploy a saliency-based explainer. This is available for text generation and text-to-text generation models when using the Hugging Face model framework. The explainer can be used to obtain token importances for generated tokens.

It is derived from the following research work: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (Simonyan et al., 2013).

Attention

Deploy an attention-based explainer. This is available for text generation and text-to-text generation models when using the Hugging Face model framework. The explainer can be used to obtain token importances for generated tokens.

It is derived from the following research work on attention weight attribution: Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et al., 2014).

Trained Explainers

Train an explainer yourself using one of the supported frameworks: Anchor (Tabular, Images, and Text), SHAP Kernel, PDP Tabular, MACE Tabular, or a custom explainer via a custom Docker image. An overview of the supported framework versions, as well as links to example implementations, can be found here. Most explainer frameworks require the explainer artifact to be stored as a dill file named explainer.dill.
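
As an illustration, here is a minimal sketch of training an Anchor Tabular explainer, assuming the alibi library and a placeholder scikit-learn model; the dataset, model, and feature names stand in for your own. Saving the artifact (and excluding the model from it) is shown after the note below.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Placeholder model; in practice this is the model already deployed on Deeploy
data = load_iris()
clf = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

# Train (fit) the explainer on the training data distribution
explainer = AnchorTabular(predictor=clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)
```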

For custom Docker explainers, we provide boilerplate templates in which you can easily embed your own custom explainer functions while adhering to the Deeploy API spec. See our Python client CLI documentation page for more information.
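
Purely as an illustration, the snippet below sketches the kind of explainer function you might embed in such a boilerplate template. The function name, input shape, and output structure are assumptions; the boilerplate template defines the real entrypoint and the request/response schema that satisfies the Deeploy API spec.

```python
import numpy as np

# Hypothetical custom explainer function; the boilerplate wires this into
# the actual explanation endpoint defined by the Deeploy API spec.
def explain(instances: list[list[float]]) -> dict:
    features = np.asarray(instances, dtype=float)
    # Example logic only: importance proportional to absolute feature values
    importances = np.abs(features) / np.abs(features).sum(axis=1, keepdims=True)
    return {"explanations": importances.tolist()}
```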

Note

When providing an explainer artifact, the model shouldn't be part of that artifact. The explainer is set up so that it gets its predictions from the deployment's already deployed model. To exclude the model from the artifact, set the model reference to None, e.g. explainer = TabularExplainer(); explainer.model = None.
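
Continuing the earlier sketch, clearing the model reference and serializing the explainer could look as follows; the attribute that holds the model reference (here predictor) depends on the explainer framework you use.

```python
import dill

# Clear the model reference so the artifact contains only the explainer;
# the deployed model serves the predictions (attribute name varies per framework)
explainer.predictor = None

# Most explainer frameworks expect the artifact to be named explainer.dill
with open("explainer.dill", "wb") as f:
    dill.dump(explainer, f)
```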

Integrated Explainers

With integrated explainers, explainability is embedded within the model artifact. This still results in two different endpoints, which are located in the same container and share the same resources. An example of such an implementation can be found in our sentiment analysis model using PyTorch (see the get_insights() function in the handler.py file). This is a convenient solution for explainers that require a large model file to be present, which would be wasteful to duplicate.
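
For reference, below is a minimal sketch of what such an integrated explainability function can look like in a TorchServe-style handler.py, assuming a text classification model and Captum's LayerIntegratedGradients. The attribute self.model.embeddings and the exact get_insights() signature are assumptions; the referenced sentiment analysis example shows the actual implementation.

```python
from captum.attr import LayerIntegratedGradients
from ts.torch_handler.base_handler import BaseHandler


class SentimentHandler(BaseHandler):
    """Handler that serves predictions and, via get_insights, explanations."""

    def get_insights(self, input_batch, text, target):
        # Assumption: the loaded model exposes an embeddings layer to attribute over
        lig = LayerIntegratedGradients(self.model, self.model.embeddings)
        attributions, _ = lig.attribute(
            inputs=input_batch, target=target, return_convergence_delta=True
        )
        # Aggregate per-token attributions into a JSON-serializable list
        token_scores = attributions.sum(dim=-1).squeeze(0)
        return [token_scores.tolist()]
```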

Using the Captum text explainer dashboard

If you want to visualize explanations in Deeploy with a Captum text explainer (integrated), make sure to use the following format in your response in the handler.py file.

[
  {
    "raw_input_ids": [[ids]],
    "word_attributions": [[word_attributions]],
    "pred_class": [class_string],
    "attr_score": [score_numeric],
    "attr_class": [class_string]
  }
]
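
As a sketch, such a response entry could be assembled from Captum attributions inside get_insights() as follows; the variable names (input_ids, attributions, predicted_class) are placeholders for values your handler already computes.

```python
def build_text_explanation(input_ids, attributions, predicted_class):
    # input_ids: token ids for the input (list of int)
    # attributions: per-token attribution scores (1-D tensor)
    # predicted_class: label string predicted by the model
    return [
        {
            "raw_input_ids": [input_ids],
            "word_attributions": [attributions.tolist()],
            "pred_class": [predicted_class],
            "attr_score": [float(attributions.sum())],
            "attr_class": [predicted_class],
        }
    ]
```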

Using the Captum image explainer dashboard

If you want to visualize explanations in Deeploy with a Captum image explainer (integrated), make sure to use the following format in your response in the handler.py file.

[
  {
    "originals": [
      {
        "b64": base64_string
      }
    ],
    "explanations": [
      {
        "b64": base64_string
      }
    ],
    "prediction": [
      class_predicted_string
    ]
  }
]
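
Similarly, a sketch of assembling this response from an original image and an explanation heatmap that your handler has already rendered as PNGs; the helper below only base64-encodes the image bytes for the "b64" fields.

```python
import base64


def encode_png(png_bytes: bytes) -> str:
    # Base64-encode raw PNG bytes for the "b64" fields
    return base64.b64encode(png_bytes).decode("utf-8")


def build_image_explanation(original_png: bytes, explanation_png: bytes, predicted_class: str):
    return [
        {
            "originals": [{"b64": encode_png(original_png)}],
            "explanations": [{"b64": encode_png(explanation_png)}],
            "prediction": [predicted_class],
        }
    ]
```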