omnixai.explainers.tabular.counterfactual package

omnixai.explainers.tabular.counterfactual.mace.mace module

The Model-Agnostic Counterfactual Explanation (MACE) for tabular data.

class omnixai.explainers.tabular.counterfactual.mace.mace.MACEExplainer(training_data, predict_function, mode='classification', ignored_features=None, method='gld', use_knn=True, **kwargs)

Bases: ExplainerBase, TabularExplainerMixin

The Model-Agnostic Counterfactual Explanation (MACE) method developed by Yang et al. If you use this explainer, please cite the paper MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation. It supports most black-box classification models whose input features are either categorical or continuous-valued.

Parameters
  • training_data (Tabular) – The data used to initialize a MACE explainer. training_data can be the dataset used to train the machine learning model. If the training dataset is large, training_data can be a subset of it obtained by applying omnixai.sampler.tabular.Sampler.subsample.

  • predict_function (Callable) – The prediction function corresponding to the model to explain. The model should be a classifier, and the outputs of predict_function should be the class probabilities.

  • mode (str) – The task type, which can only be 'classification' for this explainer.

  • ignored_features (Optional[List]) – The features to ignore when generating counterfactual examples.

  • method (str) – The method used to generate counterfactual examples. The default value 'gld' corresponds to the GLD optimizer referenced in the kwargs description below.

  • use_knn (bool) – Whether to use KNN search to find candidate features for generating counterfactual examples.

  • kwargs – Additional parameters used in CFRetrieval and GLD. For more information, please refer to the classes mace.retrieval.CFRetrieval and mace.gld.GLD.

explanation_type = 'local'
alias = ['mace']
explain(X, y=None, max_number_examples=5, **kwargs)

Generates counterfactual explanations.

Parameters
  • X (Tabular) – A batch of input instances. If X is a pd.DataFrame or np.ndarray, it will be converted into a Tabular instance automatically.

  • y (Union[List, ndarray, None]) – A batch of desired labels, which should differ from the predicted labels of X. If y is None, the desired labels are all the labels different from the predicted labels of X.

  • max_number_examples (int) – The maximum number of counterfactual examples to generate per class for each input instance.

Return type

CFExplanation

Returns

A CFExplanation object containing the generated explanations.
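
The following is a minimal usage sketch (not part of the original API reference): it assumes a scikit-learn classifier trained directly on the raw continuous features of the iris dataset; the column names and the lambda wiring are illustrative only.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    from omnixai.data.tabular import Tabular
    from omnixai.explainers.tabular.counterfactual.mace.mace import MACEExplainer

    # Train a classifier on a small, all-continuous dataset.
    df = load_iris(as_frame=True).frame.rename(columns={"target": "label"})
    model = RandomForestClassifier().fit(df.drop(columns="label"), df["label"])

    # Wrap the data in Tabular; iris has no categorical columns.
    training_data = Tabular(df, target_column="label")

    # Map a Tabular batch to class probabilities; the label column is dropped
    # defensively in case it is still present.
    predict_function = lambda x: model.predict_proba(
        x.to_pd().drop(columns=["label"], errors="ignore"))

    explainer = MACEExplainer(
        training_data=training_data,
        predict_function=predict_function,
        mode="classification",
    )
    test_instances = Tabular(df.drop(columns="label").iloc[:2])
    explanations = explainer.explain(test_instances, max_number_examples=3)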

omnixai.explainers.tabular.counterfactual.ce module

The basic counterfactual explainer for tabular data.

class omnixai.explainers.tabular.counterfactual.ce.CounterfactualOptimizer(x0, target, model, c=10.0, kappa=10.0, binary_search_steps=5, learning_rate=0.01, num_iterations=1000, grad_clip=1000.0, gamma=None, bounds=None)

Bases: object

The optimizer for counterfactual explanations, implemented based on the paper Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, Sandra Wachter, Brent Mittelstadt, Chris Russell, https://arxiv.org/abs/1711.00399.

Parameters
  • x0 – The input instance.

  • target – The predicted label of the input instance.

  • model – The classification model, which can be a torch.nn.Module or tf.keras.Model.

  • c – The weight of the hinge loss term.

  • kappa – The parameter in the hinge loss function.

  • binary_search_steps – The number of binary search steps for adjusting the weight of the hinge loss term.

  • learning_rate – The learning rate.

  • num_iterations – The maximum number of iterations during optimization.

  • grad_clip – The value for clipping gradients.

  • gamma – The denominator of the regularization term, e.g., |x - x0| / gamma. gamma will be set to 1 if it is None.

  • bounds – The upper and lower bounds of the feature values. If None, the default bounds (min(x0), max(x0)) are used.
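
Putting these parameters together, the objective plausibly minimized by the optimizer has the following form (a sketch inferred from the parameter descriptions above, not taken from the implementation; the exact hinge formulation may differ):

    \min_{x} \; c \cdot \max\left( f(x)_{t} - \max_{i \neq t} f(x)_{i} + \kappa,\; 0 \right) + \frac{\lVert x - x_{0} \rVert}{\gamma}

Here f(x) denotes the class probabilities, t is the original predicted label (target), and kappa is the hinge margin. Minimizing the hinge term drives the predicted class away from t, the regularization term keeps x close to x0, and binary_search_steps adjusts the weight c across optimization runs.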

optimize(verbose=True)

Generates a counterfactual example.

Returns

The counterfactual example.

Return type

np.ndarray
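
A minimal sketch of direct usage (this class is normally used internally by CounterfactualExplainer): it assumes a PyTorch model that outputs class probabilities and a 1-D numpy input; whether x0 must be 1-D or batched is an assumption here.

    import numpy as np
    import torch
    import torch.nn as nn

    from omnixai.explainers.tabular.counterfactual.ce import CounterfactualOptimizer

    # A toy differentiable classifier over 4 continuous features. The weights
    # are untrained, so the resulting counterfactual is not meaningful; this
    # only illustrates the wiring.
    model = nn.Sequential(nn.Linear(4, 2), nn.Softmax(dim=1))

    x0 = np.random.rand(4).astype(np.float32)
    with torch.no_grad():
        target = int(model(torch.from_numpy(x0).unsqueeze(0)).argmax())

    optimizer = CounterfactualOptimizer(
        x0=x0, target=target, model=model, num_iterations=200)
    cf = optimizer.optimize(verbose=False)  # np.ndarray, per the docs above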

class omnixai.explainers.tabular.counterfactual.ce.CounterfactualExplainer(training_data, predict_function, mode='classification', c=10.0, kappa=10.0, binary_search_steps=5, learning_rate=0.01, num_iterations=1000, grad_clip=1000.0, **kwargs)

Bases: TabularExplainer

The basic counterfactual explainer for tabular data. It supports continuous-valued features only. If you use this explainer, please cite the paper Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, Sandra Wachter, Brent Mittelstadt, Chris Russell, https://arxiv.org/abs/1711.00399.

Parameters
  • training_data (Tabular) – The data used to extract information such as medians of continuous-valued features. training_data can be the dataset used to train the machine learning model. If the training dataset is large, training_data can be a subset of it obtained by applying omnixai.sampler.tabular.Sampler.subsample.

  • predict_function (Callable) – The prediction function corresponding to the model to explain. Because this explainer only supports classification, the outputs of predict_function should be the class probabilities.

  • mode (str) – The task type, which can only be 'classification' for this explainer.

  • c – The weight of the hinge loss term.

  • kappa – The parameter in the hinge loss function.

  • binary_search_steps – The number of binary search steps for adjusting the weight of the hinge loss term.

  • learning_rate – The learning rate.

  • num_iterations – The maximum number of iterations during optimization.

  • grad_clip – The value for clipping gradients.

explanation_type = 'local'
alias = ['ce', 'counterfactual']
explain(X, **kwargs)

Generates the counterfactual explanations for the input instances.

Parameters

X – A batch of input instances. If X is a pd.DataFrame or np.ndarray, it will be converted into a Tabular instance automatically.

Return type

CFExplanation

Returns

The counterfactual explanations for all the input instances.
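
A minimal usage sketch analogous to the MACE example above (again assuming a scikit-learn classifier on the all-continuous iris features; names are illustrative only):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    from omnixai.data.tabular import Tabular
    from omnixai.explainers.tabular.counterfactual.ce import CounterfactualExplainer

    df = load_iris(as_frame=True).frame.rename(columns={"target": "label"})
    model = LogisticRegression(max_iter=1000).fit(
        df.drop(columns="label"), df["label"])

    training_data = Tabular(df, target_column="label")
    predict_function = lambda x: model.predict_proba(
        x.to_pd().drop(columns=["label"], errors="ignore"))

    explainer = CounterfactualExplainer(
        training_data=training_data,
        predict_function=predict_function,
        mode="classification",
        num_iterations=100,
    )
    explanations = explainer.explain(Tabular(df.drop(columns="label").iloc[:2]))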

save(directory, filename=None, **kwargs)

Saves the initialized explainer.

Parameters
  • directory (str) – The directory where the explainer will be saved.

  • filename (Optional[str]) – The filename (defaults to the explainer class name if None).
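
Continuing the sketch above, saving the initialized explainer is a single call (the directory path is illustrative; the file is named after the class because filename is None):

    explainer.save(directory="./saved_explainers")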