omnixai.explainers.vision.agnostic package
- omnixai.explainers.vision.agnostic.lime – The LIME explainer for image classification.
- omnixai.explainers.vision.agnostic.shap – The SHAP explainer for vision tasks.
- omnixai.explainers.vision.agnostic.pdp – The partial dependence plots for vision tasks.
- omnixai.explainers.vision.agnostic.l2x – The L2X explainer for image data.
omnixai.explainers.vision.agnostic.lime module
The LIME explainer for image classification.
- class omnixai.explainers.vision.agnostic.lime.LimeImage(predict_function, mode='classification', **kwargs)
Bases: ExplainerBase

The LIME explainer for image classification. If using this explainer, please cite the original work: https://github.com/marcotcr/lime. This explainer only supports image classification.
- Parameters
  - predict_function (Callable) – The prediction function corresponding to the machine learning model to explain. For classification, the outputs of the predict_function are the class probabilities.
  - mode (str) – The task type, which can only be "classification".
- explanation_type = 'local'
- alias = ['lime']
- explain(X, **kwargs)
Generates the explanations for the input instances.
- Parameters
  - X – A batch of input instances.
  - kwargs – Additional parameters, e.g., top_labels – the number of top labels to explain. Please refer to the documentation of LimeImageExplainer.explain_instance.
- Return type
- Returns
The explanations for all the input instances.
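A minimal usage sketch. The names model_predict_proba and test_images are placeholders, not part of the API; adapt the conversion inside predict_function to whatever your model expects:

    import numpy as np
    from omnixai.data.image import Image
    from omnixai.explainers.vision.agnostic.lime import LimeImage

    def predict_function(images: Image) -> np.ndarray:
        # The explainer calls this with a batch of images; convert them to
        # the model's input format and return class probabilities of shape
        # (batch, num_classes).
        batch = images.to_numpy().astype("float32") / 255.0
        return model_predict_proba(batch)  # placeholder for your model's call

    explainer = LimeImage(predict_function=predict_function)
    # test_images: an omnixai Image object holding a batch of test images.
    explanations = explainer.explain(test_images, top_labels=2)
    explanations.ipython_plot(index=0)  # visualize the first explanation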
omnixai.explainers.vision.agnostic.shap module
The SHAP explainer for vision tasks.
- class omnixai.explainers.vision.agnostic.shap.ShapImage(model, preprocess_function, mode='classification', background_data=None, **kwargs)
Bases: ExplainerBase

The SHAP explainer for vision tasks. If using this explainer, please cite the original work: https://github.com/slundberg/shap.
- Parameters
  - model – The model to explain, whose type can be tf.keras.Model or torch.nn.Module.
  - preprocess_function – The preprocessing function that converts the raw input features into the inputs of model.
  - mode (str) – The task type, e.g., classification or regression.
  - background_data (Image) – The background images to compare with.
- explanation_type = 'local'
- alias = ['shap']
- explain(X, y=None, **kwargs)
Generates the pixel-importance explanations for the input instances.
- Parameters
  - X (Image) – A batch of input instances.
  - y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
  - kwargs – Additional parameters, e.g., nsamples – the maximum number of images sampled for the background.
- Return type
- Returns
The explanations for all the input instances, e.g., pixel importance scores.
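A minimal sketch for a PyTorch classifier. Here model, background_images, and test_images are placeholders, and the preprocessing assumes a model that takes normalized NCHW float tensors; adjust both to your setup:

    import torch
    from omnixai.data.image import Image
    from omnixai.explainers.vision.agnostic.shap import ShapImage

    def preprocess(images: Image) -> torch.Tensor:
        # Convert an omnixai Image batch into the (batch, C, H, W) float
        # tensor the (hypothetical) model expects.
        batch = torch.from_numpy(images.to_numpy()).float() / 255.0
        return batch.permute(0, 3, 1, 2)

    explainer = ShapImage(
        model=model,                        # a trained torch.nn.Module (placeholder)
        preprocess_function=preprocess,
        background_data=background_images,  # an Image batch, e.g., training samples
    )
    explanations = explainer.explain(test_images, nsamples=100)
    explanations.ipython_plot(index=0)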
omnixai.explainers.vision.agnostic.pdp module
The partial dependence plots for vision tasks.
- class omnixai.explainers.vision.agnostic.pdp.PartialDependenceImage(predict_function, mode='classification', **kwargs)
Bases: ExplainerBase

The partial dependence plots for vision tasks. The input image is segmented by a particular segmentation method, e.g., "quickshift". For each segment, its importance score is measured by the average change of the predicted value when the segment is replaced by new segments constructed in the grid search.
- Parameters
  - predict_function – The prediction function corresponding to the model to explain. When the model is for classification, the outputs of the predict_function are the class probabilities. When the model is for regression, the outputs of the predict_function are the estimated values.
  - mode – The task type, e.g., classification or regression.
- explanation_type = 'local'
- alias = ['pdp', 'partial_dependence']
- explain(X, y=None, **kwargs)
Generates PDP explanations.
- Parameters
  - X (Image) – A batch of input instances.
  - y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
  - kwargs – Additional parameters in the PDP explainer, e.g., grid_resolution – the resolution in the grid search, and n_segments – the number of image segments used by image segmentation methods.
- Return type
- Returns
The generated explanations, e.g., the importance scores for image segments.
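A minimal sketch, reusing a predict_function like the one in the LIME example above (test_images remains a placeholder); grid_resolution and n_segments follow the kwargs listed here:

    from omnixai.explainers.vision.agnostic.pdp import PartialDependenceImage

    explainer = PartialDependenceImage(predict_function=predict_function)
    # Each image is segmented, and each segment's importance is the average
    # change in prediction when it is replaced by grid-search segments.
    explanations = explainer.explain(test_images, grid_resolution=10, n_segments=20)
    explanations.ipython_plot(index=0)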
omnixai.explainers.vision.agnostic.l2x module
The L2X explainer for image data.
- class omnixai.explainers.vision.agnostic.l2x.DefaultSelectionModel(explainer, **kwargs)
Bases: _DefaultModelBase

The default selection model in L2X, which is designed for MNIST.
- Parameters
  - explainer – An L2XImage explainer.
  - kwargs – Additional parameters.
- forward(inputs)
- Parameters
  - inputs – The model inputs.
- postprocess(inputs)
Upsamples to the original image size.
- Parameters
  - inputs – The outputs of forward.
- training: bool
- class omnixai.explainers.vision.agnostic.l2x.DefaultPredictionModel(explainer, **kwargs)
Bases: _DefaultModelBase

The default prediction model in L2X, which is designed for MNIST.
- Parameters
  - explainer – An L2XImage explainer.
  - kwargs – Additional parameters.
- forward(inputs, weights)
- Parameters
  - inputs – The model inputs.
  - weights – The weights generated via Gumbel-Softmax sampling.
- training: bool
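For background, the Gumbel-Softmax weights mentioned above are produced by perturbing the selection logits with Gumbel noise and applying a temperature-scaled softmax. This is the standard Gumbel-Softmax formulation, stated here for context rather than taken from this module:

    w_i = \frac{\exp\left((\log \pi_i + g_i)/\tau\right)}{\sum_j \exp\left((\log \pi_j + g_j)/\tau\right)},
    \qquad g_i = -\log(-\log u_i), \quad u_i \sim \mathrm{Uniform}(0, 1)

where \pi_i are the selection probabilities produced by the selection model and \tau is the temperature, exposed as the tau parameter of L2XImage below. Smaller \tau pushes the weights toward a discrete selection of the top k features.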
- class omnixai.explainers.vision.agnostic.l2x.L2XImage(training_data, predict_function, mode='classification', tau=0.5, k=10, selection_model=None, prediction_model=None, loss_function=None, optimizer=None, learning_rate=0.001, batch_size=None, num_epochs=20, **kwargs)
Bases: ExplainerBase

The L2X explainer for vision tasks. If using this explainer, please cite the original work: Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan, https://arxiv.org/abs/1802.07814.
- Parameters
  - training_data (Image) – The data used to train the explainer. training_data should be the training dataset for training the machine learning model.
  - predict_function (Callable) – The prediction function corresponding to the model to explain. When the model is for classification, the outputs of the predict_function are the class probabilities. When the model is for regression, the outputs of the predict_function are the estimated values.
  - mode (str) – The task type, e.g., classification or regression.
  - tau (float) – Parameter tau in Gumbel-Softmax.
  - k (int) – The maximum number of selected features in L2X.
  - selection_model – A PyTorch model class for estimating P(S|X) in L2X. If selection_model = None, the default DefaultSelectionModel will be used.
  - prediction_model – A PyTorch model class for estimating Q(X_S) in L2X. If prediction_model = None, the default DefaultPredictionModel will be used.
  - loss_function (Optional[Callable]) – The loss function for the task, e.g., nn.CrossEntropyLoss() for classification.
  - optimizer – The optimizer class for training the explainer, e.g., torch.optim.Adam.
  - learning_rate (float) – The learning rate for training the explainer.
  - batch_size (Optional[int]) – The batch size for training the explainer. If batch_size is None, batch_size will be picked from [32, 64, 128, 256] based on the sample size.
  - num_epochs (int) – The number of epochs for training the explainer.
  - kwargs – Additional parameters, e.g., parameters for selection_model and prediction_model.
- explanation_type = 'local'
- alias = ['l2x', 'L2X']
- explain(X, **kwargs)
Generates the explanations for the input instances. For classification, it explains the top predicted label for each input instance.
- Parameters
  - X (Image) – A batch of input instances.
- Return type
- Returns
The explanations for all the input instances.
- save(directory, filename=None, **kwargs)
Saves the initialized explainer.
- Parameters
  - directory (str) – The folder for the dumped explainer.
  - filename (Optional[str]) – The filename (defaults to the explainer class name if None).
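A minimal end-to-end sketch, with train_images, test_images, and predict_function as placeholders, following the earlier examples:

    from omnixai.explainers.vision.agnostic.l2x import L2XImage

    explainer = L2XImage(
        training_data=train_images,        # Image batch used to fit the explainer
        predict_function=predict_function,
        tau=0.5,
        k=10,
        num_epochs=20,
    )
    explanations = explainer.explain(test_images)
    explanations.ipython_plot(index=0)

    # Persist the trained explainer for later reuse.
    explainer.save(directory="./explainers")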