omnixai.explainers.vision.specific package
This package contains the model-specific explainers for vision tasks:
- The integrated-gradient explainer for vision tasks.
- The Grad-CAM methods for vision tasks.
- The contrastive explainer for image classification.
omnixai.explainers.vision.specific.ig module
The integrated-gradient explainer for vision tasks.
- class omnixai.explainers.vision.specific.ig.IntegratedGradientImage(model, preprocess_function, mode='classification', background_data=None, **kwargs)
Bases: ExplainerBase, IntegratedGradient, GradMixin
The integrated-gradient explainer for vision tasks. If using this explainer, please cite the original work: https://github.com/ankurtaly/Integrated-Gradients.
- Parameters
model – The model to explain, whose type can be tf.keras.Model or torch.nn.Module.
preprocess_function (Callable) – The pre-processing function that converts the raw input features into the inputs of model.
mode (str) – The task type, e.g., classification or regression.
background_data (Image) – The background images to compare with. When background_data is empty, the baselines for computing integrated gradients will be sampled randomly.
kwargs – Additional parameters to initialize the IG explainer, e.g., num_random_trials – the number of trials in generating baselines.
- explanation_type = 'local'
- alias = ['ig', 'integrated_gradient']
- explain(X, y=None, baseline=None, **kwargs)
Generates the pixel-importance explanations for the input instances.
- Parameters
X (Image) – A batch of input instances.
y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
baseline – The baselines for computing integrated gradients. When it is None, the baselines will be sampled randomly.
kwargs – Additional parameters, e.g., steps for IntegratedGradient.compute_integrated_gradients.
- Returns
The explanations for all the instances, e.g., pixel importance scores.
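A minimal usage sketch (PyTorch) is shown below; it is an illustration rather than part of the API reference. It assumes model is a trained torch.nn.Module classifier, test_array is a NumPy array of shape (N, H, W, C) with values in [0, 255], and that iterating over an omnixai.data.image.Image yields per-image items with a to_pil() method; adapt the transform to your own pipeline.

```python
import torch
from torchvision import transforms
from omnixai.data.image import Image
from omnixai.explainers.vision.specific.ig import IntegratedGradientImage

# Wrap the raw images; batched=True marks the first axis as the batch dimension.
X = Image(data=test_array, batched=True)

# Preprocessing: convert each raw image into the normalized tensor the model expects.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
preprocess = lambda ims: torch.stack([transform(im.to_pil()) for im in ims])

explainer = IntegratedGradientImage(
    model=model,
    preprocess_function=preprocess,
    mode="classification",
)
# With y=None, the top predicted label of each image is explained.
explanations = explainer.explain(X)
explanations.ipython_plot(index=0)  # pixel-importance heatmap for the first image
```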
omnixai.explainers.vision.specific.gradcam.gradcam module
The Grad-CAM methods for vision tasks.
- class omnixai.explainers.vision.specific.gradcam.gradcam.GradCAM(model, target_layer, preprocess_function, mode='classification', **kwargs)
Bases: ExplainerBase
The Grad-CAM method for generating visual explanations. If using this explainer, please cite Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Selvaraju et al., https://arxiv.org/abs/1610.02391.
- Parameters
model – The model to explain, whose type can be tf.keras.Model or torch.nn.Module.
target_layer – The target layer for explanation, which can be tf.keras.layers.Layer or torch.nn.Module.
preprocess_function (Callable) – The preprocessing function that converts the raw data into the inputs of model.
mode (str) – The task type, e.g., classification or regression.
- explanation_type = 'local'
- alias = ['gradcam', 'grad-cam']
- explain(X, y=None, **kwargs)
Generates the explanations for the input instances.
- Parameters
X (Image) – A batch of input instances.
y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
kwargs – Additional parameters.
- Returns
The explanations for all the instances, e.g., pixel importance scores.
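For illustration, a short Grad-CAM sketch (PyTorch) follows, reusing the model, preprocess, and X placeholders from the integrated-gradient example above. The target layer shown (the last residual block of a torchvision ResNet) is only an example, not prescribed by the API. GradCAMPlus and LayerCAM below share the same constructor and explain() signatures, so only the class name changes.

```python
from omnixai.explainers.vision.specific.gradcam.gradcam import GradCAM

explainer = GradCAM(
    model=model,                      # e.g., a torchvision ResNet
    target_layer=model.layer4[-1],    # illustrative choice of target layer
    preprocess_function=preprocess,
    mode="classification",
)
explanations = explainer.explain(X)   # explains the top predicted labels when y=None
explanations.ipython_plot(index=0)
```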
- class omnixai.explainers.vision.specific.gradcam.gradcam.GradCAMPlus(model, target_layer, preprocess_function, mode='classification', **kwargs)
Bases: ExplainerBase
The Grad-CAM++ method for generating visual explanations. If using this explainer, please cite Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks, Chattopadhyay et al., https://arxiv.org/pdf/1710.11063.
- Parameters
model – The model whose type can be tf.keras.Model or torch.nn.Module.
target_layer – The target layer for explanation, which can be tf.keras.layers.Layer or torch.nn.Module.
preprocess_function (Callable) – The preprocessing function that converts the raw data into the inputs of model.
mode (str) – The task type, e.g., classification or regression.
- explanation_type = 'local'
- alias = ['gradcam++', 'grad-cam++']
- explain(X, y=None, **kwargs)
Generates the explanations for the input instances.
- Parameters
X (Image) – A batch of input instances.
y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
kwargs – Additional parameters.
- Returns
The explanations for all the instances, e.g., pixel importance scores.
- class omnixai.explainers.vision.specific.gradcam.gradcam.LayerCAM(model, target_layer, preprocess_function, mode='classification', **kwargs)
Bases: ExplainerBase
The Layer-CAM method for generating visual explanations. If using this explainer, please cite LayerCAM: Exploring Hierarchical Class Activation Maps for Localization, Jiang et al., http://mmcheng.net/mftp/Papers/21TIP_LayerCAM.pdf.
- Parameters
model – The model whose type can be tf.keras.Model or torch.nn.Module.
target_layer – The target layer for explanation, which can be tf.keras.layers.Layer or torch.nn.Module.
preprocess_function (Callable) – The preprocessing function that converts the raw data into the inputs of model.
mode (str) – The task type, e.g., classification or regression.
- explanation_type = 'local'
- alias = ['layercam', 'layer-cam']
- explain(X, y=None, **kwargs)
Generates the explanations for the input instances.
- Parameters
X (Image) – A batch of input instances.
y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
kwargs – Additional parameters.
- Returns
The explanations for all the instances, e.g., pixel importance scores.
omnixai.explainers.vision.specific.cem module
The contrastive explainer for image classification.
- class omnixai.explainers.vision.specific.cem.CEMOptimizer(x0, target, model, c=10.0, beta=0.1, gamma=0.0, kappa=10.0, ae_model=None, binary_search_steps=5, learning_rate=0.01, num_iterations=1000, grad_clip=1000.0, background_data=None)
Bases: object
The optimizer for contrastive explanation. The module is implemented based on the paper: https://arxiv.org/abs/1802.07623.
- Parameters
x0 – The input image.
target – The predicted label of the input image.
model – The classification model which can be torch.nn.Module or tf.keras.Model.
c – The weight of the loss term.
beta – The weight of the L1 regularization term.
gamma – The weight of the AE regularization term.
kappa – The parameter in the hinge loss function.
ae_model – The auto-encoder model used for regularization.
binary_search_steps – The number of iterations to adjust the weight of the loss term.
learning_rate – The learning rate.
num_iterations – The maximum number of iterations during optimization.
grad_clip – The value for clipping gradients.
background_data – Sampled images for estimating background values.
- pn_optimize(verbose=True)
Optimizes pertinent negatives.
- Returns
The pertinent negative.
- Return type
np.ndarray
- pp_optimize(verbose=True)
Optimizes pertinent positives.
- Returns
The pertinent positive.
- Return type
np.ndarray
- class omnixai.explainers.vision.specific.cem.ContrastiveExplainer(model, preprocess_function, mode='classification', background_data=None, c=10.0, beta=0.1, gamma=0.0, kappa=10.0, ae_model=None, binary_search_steps=5, learning_rate=0.01, num_iterations=1000, grad_clip=1000.0, **kwargs)
Bases: ExplainerBase
The contrastive explainer for image classification. If using this explainer, please cite the original work: https://arxiv.org/abs/1802.07623. This explainer only supports classification tasks.
- Parameters
model – The model to explain, whose type is torch.nn.Module or tf.keras.Model.
preprocess_function (Callable) – The pre-processing function that converts the raw input features into the inputs of model.
mode (str) – It can be classification only.
background_data (Image) – Sampled images for estimating background values.
c – The weight of the loss term.
beta – The weight of the L1 regularization term.
gamma – The weight of the AE regularization term.
kappa – The parameter in the hinge loss function.
ae_model – The auto-encoder model used for regularization.
binary_search_steps – The number of iterations to adjust the weight of the loss term.
learning_rate – The learning rate.
num_iterations – The maximum number of iterations during optimization.
grad_clip – The value for clipping gradients.
- explanation_type = 'local'
- alias = ['cem', 'contrastive']
- explain(X, **kwargs)
Generates the explanations corresponding to the input images. Note that the returned results, including the original input images, the pertinent negatives and the pertinent positives, have been processed by the preprocess_function, e.g., if the preprocess_function rescales [0, 255] to [0, 1], the returned results will have the range [0, 1].
- Parameters
X (Image) – A batch of the input images.
- Returns
The explanations for all the images, e.g., pertinent negatives and pertinent positives.
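A minimal sketch of the contrastive explainer (classification only) follows, reusing the model, preprocess, and X placeholders from the earlier examples. The hyperparameters shown are the documented defaults and are spelled out only to make them visible.

```python
from omnixai.explainers.vision.specific.cem import ContrastiveExplainer

explainer = ContrastiveExplainer(
    model=model,
    preprocess_function=preprocess,
    mode="classification",
    c=10.0,                 # weight of the loss term
    beta=0.1,               # weight of the L1 regularization term
    num_iterations=1000,    # maximum optimization iterations
)
explanations = explainer.explain(X)
# The pertinent negatives/positives are in the preprocessed value range,
# e.g., [0, 1] if preprocess_function rescales [0, 255] to [0, 1].
explanations.ipython_plot(index=0)
```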
omnixai.explainers.vision.specific.feature_visualization.visualizer module
The feature visualizer for vision models.
- class omnixai.explainers.vision.specific.feature_visualization.visualizer.FeatureVisualizer(model, objectives, **kwargs)
Bases: ExplainerBase
Feature visualization for vision models. The input of the model has shape (B, C, H, W) for PyTorch and (B, H, W, C) for TensorFlow. This class applies an optimization-based method for visualizing layer, channel, and neuron features. For more details, please visit https://distill.pub/2017/feature-visualization/.
- Parameters
model – The model to explain.
objectives (Union[Dict, List]) – A list of objectives for visualization. Each objective has the following format: {"layer": layer, "weight": 1.0, "type": "layer", "channel", "neuron" or "direction", "index": channel_idx, neuron_idx or direction_vector}. For example, {"layer": layer, "weight": 1.0, "type": "channel", "index": [0, 1, 2]}. Here, "layer" indicates the target layer and "type" is the objective type. If "type" is "channel" or "neuron", set the channel indices or neuron indices. If "type" is "direction", set a direction vector whose shape is the same as the layer output shape (without the batch-size dimension).
- explanation_type = 'global'
- alias = ['fv', 'feature_visualization']
- explain(*, num_iterations=300, learning_rate=0.05, transformers=None, regularizers=None, image_shape=None, use_fft=False, fft_decay=1.0, normal_color=False, verbose=True, **kwargs)
Generates feature visualizations for the specified model and objectives.
- Parameters
num_iterations (int) – The number of iterations during optimization.
learning_rate (float) – The learning rate during optimization.
transformers (Optional[Pipeline]) – The transformations applied to images during optimization. transformers is an object of Pipeline defined in the preprocessing package. The available transform functions can be found in .pytorch.preprocess and .tf.preprocess. When transformers is None, a default transformation will be applied.
regularizers (Optional[List]) – A list of regularizers applied to images. Each regularizer is a tuple (regularizer_type, weight) where regularizer_type is "l1", "l2" or "tv".
image_shape (Optional[Tuple]) – The customized image shape. If None, the default shape is (224, 224).
use_fft – Whether to use Fourier preconditioning.
fft_decay – The value controlling the allowed energy of the high frequencies.
normal_color (bool) – Whether to map uncorrelated colors to normal colors.
verbose (bool) – Whether to print the optimization progress.
- Returns
The optimized images for the objectives.
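Below is a sketch of how objectives in the format described above can be passed to the visualizer. The layer choice is illustrative (a torchvision ResNet block), the explain() arguments repeat the documented defaults, and the final call assumes the plotting interface shared by OmniXAI explanation objects.

```python
from omnixai.explainers.vision.specific.feature_visualization.visualizer import FeatureVisualizer

# Visualize channels 0-2 of one layer, following the objective format described above.
objectives = [{
    "layer": model.layer4[-1],   # illustrative target layer
    "weight": 1.0,
    "type": "channel",
    "index": [0, 1, 2],
}]
explainer = FeatureVisualizer(model=model, objectives=objectives)
explanations = explainer.explain(
    num_iterations=300,
    learning_rate=0.05,
    image_shape=(224, 224),
)
explanations.ipython_plot()      # show the optimized images
```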
- class omnixai.explainers.vision.specific.feature_visualization.visualizer.FeatureMapVisualizer(model, target_layer, preprocess_function, **kwargs)
Bases: ExplainerBase
The class for feature map visualization.
- Parameters
model – The model to explain.
target_layer – The target layer for feature map visualization.
preprocess_function (Callable) – The preprocessing function that converts the raw data into the inputs of model.
- explanation_type = 'local'
- alias = ['fm', 'feature_map']
omnixai.explainers.vision.specific.guided_bp module
- class omnixai.explainers.vision.specific.guided_bp.GuidedBP(model, preprocess_function, mode='classification', **kwargs)
Bases: ExplainerBase, GradMixin
The guided back propagation method for vision models.
- Parameters
model – The model to explain, whose type can be tf.keras.Model or torch.nn.Module.
preprocess_function (Callable) – The preprocessing function that converts the raw data into the inputs of model.
mode (str) – The task type, e.g., classification or regression.
- explanation_type = 'local'
- alias = ['guidedbp', 'guided-bp']
- explain(X, y=None, **kwargs)
Generates the explanations for the input instances.
- Parameters
X (Image) – A batch of input instances.
y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
kwargs – Additional parameters.
- Returns
The explanations for all the instances, e.g., pixel importance scores.
omnixai.explainers.vision.specific.smoothgrad module
- class omnixai.explainers.vision.specific.smoothgrad.SmoothGrad(model, preprocess_function, mode='classification', use_guided_bp=False, **kwargs)
Bases: ExplainerBase, GradMixin
The Smooth-Grad method for generating visual explanations. If using this explainer, please cite SmoothGrad: removing noise by adding noise, Smilkov et al., https://arxiv.org/abs/1706.03825.
- Parameters
model – The model to explain, whose type can be tf.keras.Model or torch.nn.Module.
preprocess_function (Callable) – The preprocessing function that converts the raw data into the inputs of model.
mode (str) – The task type, e.g., classification or regression.
use_guided_bp (bool) – Whether to use guided back propagation when computing gradients.
- explanation_type = 'local'
- alias = ['smoothgrad', 'smooth-grad']
- explain(X, y=None, num_samples=50, sigma=0.1, **kwargs)
Generates the explanations for the input instances.
- Parameters
X (Image) – A batch of input instances.
y – A batch of labels to explain. For regression, y is ignored. For classification, the top predicted label of each input instance will be explained when y = None.
num_samples – The number of noisy images used to compute the smoothed gradients.
sigma – The value used to compute the standard deviation of the noise.
kwargs – Additional parameters.
- Returns
The explanations for all the instances, e.g., pixel importance scores.
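A final sketch for SmoothGrad follows, again reusing the model, preprocess, and X placeholders from the earlier examples. num_samples and sigma mirror the explain() arguments documented above, and use_guided_bp switches the gradient computation to guided back propagation (see the GuidedBP class above).

```python
from omnixai.explainers.vision.specific.smoothgrad import SmoothGrad

explainer = SmoothGrad(
    model=model,
    preprocess_function=preprocess,
    mode="classification",
    use_guided_bp=False,        # set True to combine with guided back propagation
)
explanations = explainer.explain(X, num_samples=50, sigma=0.1)
explanations.ipython_plot(index=0)
```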