omnixai.explainers.vision package

omnixai.explainers.vision.auto module

class omnixai.explainers.vision.auto.VisionExplainer(explainers, mode, model, data=None, preprocess=None, postprocess=None, params=None)

Bases: AutoExplainerBase

The class derived from AutoExplainerBase for vision tasks. It allows users to apply multiple explainers at once and generate their explanations in a single call.

from omnixai.explainers.vision import VisionExplainer

# Run Grad-CAM, LIME and Integrated Gradients on the same model.
explainer = VisionExplainer(
    explainers=["gradcam", "lime", "ig"],
    mode="classification",
    model=model,
    preprocess=preprocess_function,
    postprocess=postprocess_function,
    # Explainer-specific parameters, e.g., the target layer for Grad-CAM.
    params={"gradcam": {"target_layer": model.layer4[-1]}}
)
# Generate local explanations for an Image instance.
local_explanations = explainer.explain(img)
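In the example above, img is an omnixai.data.image.Image instance. A minimal sketch of constructing one from a file, assuming Image accepts a PIL image (the file name is illustrative):

from PIL import Image as PilImage
from omnixai.data.image import Image

# Wrap a PIL image in OmniXAI's Image container before calling explain().
img = Image(PilImage.open("dog.jpg").convert("RGB"))

The returned local_explanations then maps each explainer name, e.g., "gradcam", to its explanation object.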
Parameters
  • explainers (Collection) – The names or aliases of the explainers to use.

  • mode (str) – The task type, e.g., "classification" or "regression".

  • model (Any) – The machine learning model to explain, which can be a scikit-learn model, a TensorFlow model, a PyTorch model, or a black-box prediction function.

  • data (Image) – The training data used to initialize explainers. It can be empty, e.g., data = Image(), for explainers such as IntegratedGradient and Grad-CAM that do not require training data.

  • preprocess (Optional[Callable]) – The preprocessing function that converts the raw input images into the inputs of model (see the sketch after this list).

  • postprocess (Optional[Callable]) – The postprocessing function that transforms the outputs of model into a user-specific form, e.g., the predicted probability for each class.

  • params (Optional[Dict]) – A dict containing additional parameters for initializing each explainer, e.g., params["gradcam"] = {"param_1": param_1, ...}.
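As an illustration of the preprocess and postprocess parameters for a PyTorch classifier, here is a minimal sketch; the transform pipeline, the iteration over the Image batch, and the to_pil() call follow common OmniXAI usage but are assumptions, not requirements of the API:

import torch
import torch.nn.functional as F
from torchvision import transforms

# Hypothetical preprocessing: resize/crop each image and stack the
# results into the tensor batch the model expects.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def preprocess_function(images):
    # Assumes iterating an OmniXAI Image batch yields single images
    # whose to_pil() returns the underlying PIL image.
    return torch.stack([transform(im.to_pil()) for im in images])

def postprocess_function(logits):
    # Map raw model outputs to per-class probabilities.
    return F.softmax(logits, dim=1)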

static list_explainers()

List the supported explainers.
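To see which names and aliases are valid for the explainers argument, call the static method directly; the available set depends on the installed OmniXAI version:

from omnixai.explainers.vision import VisionExplainer

# Show the supported explainer names/aliases, e.g., "gradcam", "lime", "ig".
VisionExplainer.list_explainers()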

Subpackages