Directly Interpretable Unsupervised Explainers

Disentangled Inferred Prior Variational Autoencoder (DIPVAE) Explainer

class aix360.algorithms.dipvae.dipvae.DIPVAEExplainer(model_args, dataset=None, net=None, cuda_available=None)

DIPVAEExplainer can be used to visualize changes in the latent space of the Disentangled Inferred Prior VAE (DIP-VAE) [3]. This model is a Variational Autoencoder [4] variant that yields a disentangled latent space, achieved by matching the covariance of the inferred prior (aggregated posterior) to that of the prior distribution.

References

[3] Kumar, Sattigeri, and Balakrishnan. “Variational Inference of Disentangled Latent Concepts from Unlabeled Observations (DIP-VAE).” ICLR, 2018.
[4] Diederik P. Kingma and Max Welling. “Auto-Encoding Variational Bayes.” ICLR, 2014.

Initialize the DIPVAEExplainer; a construction sketch follows the parameter list below.

Parameters:
  • model_args – This should contain all the parameters required for generative model training and inference. This includes the model type (vae, dipvae-i, dipvae-ii, user-defined). A user-defined model can be passed via the net parameter (see below). Each model should have encoder and decoder functions defined. See the notebook example for other model-specific parameters.
  • dataset – The dataset object.
  • net – If not None, this is the user-specified generative model.
  • cuda_available – If True, use the GPU.
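
A minimal construction sketch is shown below. The model_args field names (model, latent_dim, num_epochs) are illustrative assumptions loosely based on the notebook example and may differ in your AIX360 version; the dataset object is hypothetical.

    from types import SimpleNamespace
    from aix360.algorithms.dipvae.dipvae import DIPVAEExplainer

    # Assumed field names; consult the DIPVAE notebook example for the full list.
    model_args = SimpleNamespace(
        model="dipvae-i",   # one of: vae, dipvae-i, dipvae-ii, user-defined
        latent_dim=10,      # assumed field name
        num_epochs=20,      # assumed field name
    )

    my_dataset = ...  # hypothetical: a dataset object wrapping your training images

    explainer = DIPVAEExplainer(
        model_args=model_args,
        dataset=my_dataset,
        net=None,              # or a user-defined generative model with encode/decode
        cuda_available=False,
    )
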
explain(input_images, edit_dim_id, edit_dim_value, edit_z_sample=False)

Edits the images in the latent space and returns the generated images.

Parameters:
  • input_images – The input images.
  • edit_dim_id – The latent dimension id that needs to be edited.
  • edit_dim_value – The value assigned to the latent dimension with id edit_dim_id.
  • edit_z_sample – If True, use a sample from the encoder instead of the mean.
Returns:

Edited images.
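
A hedged usage sketch of a latent-dimension sweep is shown below; it assumes the explainer has already been fit, that input_images is a batch of images, and that dimension id 0 is chosen purely as an example.

    import numpy as np

    # Sweep one latent dimension and collect the decoded images at each value.
    for value in np.linspace(-3.0, 3.0, num=7):
        edited = explainer.explain(
            input_images=input_images,  # assumed: a batch of input images
            edit_dim_id=0,              # latent dimension to edit (example value)
            edit_dim_value=float(value),
            edit_z_sample=False,        # use the encoder mean rather than a sample
        )
        # `edited` holds the generated images after the edit; plot or save them here.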

fit(visualize=False, save_dir='results')

Train the underlying generative model.

Parameters:
  • visualize – Plot reconstructions during fit.
  • save_dir – Directory where plots and the model will be saved.
Returns:

elbo (evidence lower bound)
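
Training is a single call; a brief sketch using the signature above (explainer is the DIPVAEExplainer constructed earlier):

    # Train the underlying generative model and keep the returned elbo.
    elbo = explainer.fit(visualize=True, save_dir="results")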

set_params(*argv, **kwargs)

Set parameters for the explainer.

Protodash Explainer

class aix360.algorithms.protodash.PDASH.ProtodashExplainer

ProtodashExplainer provides exemplar-based explanations for summarizing datasets as well as for explaining predictions made by an AI model. It employs a fast gradient-based algorithm to find prototypes along with their (non-negative) importance weights. The algorithm minimizes the maximum mean discrepancy metric and has constant-factor approximation guarantees for this weakly submodular function [5].

References

[5] Karthik S. Gurumoorthy, Amit Dhurandhar, and Guillermo Cecchi. “ProtoDash: Fast Interpretable Prototype Selection.”

Constructor method; initializes the explainer.

explain(X, Y, m, kernelType='other', sigma=2, optimizer='cvxpy')

Return prototypes for data X, Y.

Parameters:
  • X (double 2d array) – Dataset you want to explain.
  • Y (double 2d array) – Dataset to select prototypical explanations from.
  • m (int) – Number of prototypes
  • kernelType (str) – Type of kernel (‘Gaussian’ or ‘other’).
  • sigma (double) – Width of the kernel.
  • optimizer (str) – QP solver to use (‘cvxpy’ or ‘osqp’).
Returns:

m selected prototypes from X and their (unnormalized) importance weights.
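
A hedged usage sketch with random data standing in for a real dataset; passing the same array for X and Y summarizes a dataset by its own prototypes. The returned tuple is not unpacked here; its exact layout is described by the returns above.

    import numpy as np
    from aix360.algorithms.protodash.PDASH import ProtodashExplainer

    # Toy data standing in for a real dataset (500 samples, 10 features).
    rng = np.random.RandomState(0)
    data = rng.rand(500, 10)

    explainer = ProtodashExplainer()

    # Select 5 prototypes and their (unnormalized) importance weights.
    result = explainer.explain(
        X=data, Y=data, m=5, kernelType="other", sigma=2, optimizer="cvxpy"
    )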

set_params(*argv, **kwargs)

Set parameters for the explainer.

CoFrNet Explainer

class aix360.algorithms.cofrnet.CoFrNet.CoFrNet_Explainer(cofrnet_model)
explain(explain_mode, max_layer_num=10, var_num=6)

Provides explanations of the CoFrNet model.

Parameters:
  • explain_mode – Either “importances” or “print_co_fr”; raises an exception if not one of these two options.
  • max_layer_num – For “print_co_fr”: depth of the ladder to show. Default 10.
  • var_num – For “print_co_fr”: variable (index of the input feature) for which to display the ladder. Default 6.
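
A hedged usage sketch, assuming a previously trained CoFrNet model object named trained_cofrnet:

    from aix360.algorithms.cofrnet.CoFrNet import CoFrNet_Explainer

    # trained_cofrnet is a previously trained CoFrNet model (assumed to exist).
    explainer = CoFrNet_Explainer(trained_cofrnet)

    # Per-feature importances derived from the trained model.
    explainer.explain(explain_mode="importances")

    # Continued-fraction "ladder" for input feature index 6, up to depth 10.
    explainer.explain(explain_mode="print_co_fr", max_layer_num=10, var_num=6)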

set_params(*argv, **kwargs)

Set parameters for the explainer.