shap_enhanced

Enhanced SHAP Explainers

This package provides a collection of advanced SHAP-style explainers and supporting tools designed for structured, sequential, and tabular data. It extends traditional SHAP methodology with interpretable, efficient, and domain-aware enhancements.

Core Modules

  • explainers:

    A suite of explainers including:

    • Latent SHAP
    • RL-SHAP (Reinforcement Learning)
    • Multi-Baseline SHAP (MB-SHAP)
    • Sparse Coalition SHAP (SC-SHAP)
    • Surrogate SHAP (SurroSHAP)
    • TimeSHAP and others

  • tools:

    Utility functions and helper modules for:

    • Synthetic data generation
    • Ground-truth SHAP value estimation
    • Model evaluation and visualizations
    • Benchmark comparisons and profiling

  • base_explainer:

    Abstract base class (BaseExplainer) that defines the core interface for all SHAP-style explainers in this package.

Usage

Example:

from shap_enhanced.explainers import LatentSHAPExplainer
from shap_enhanced.tools.datasets import generate_synthetic_seqregression
from shap_enhanced.tools.predefined_models import RealisticLSTM

# Generate a synthetic sequential-regression dataset; X.shape[2] is the feature dimension.
X, y = generate_synthetic_seqregression()
model = RealisticLSTM(input_dim=X.shape[2])
explainer = LatentSHAPExplainer(model=model, ...)  # remaining arguments are explainer-specific
shap_values = explainer.shap_values(X[0])  # explain a single sample
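
Because BaseExplainer makes explainers callable, explainer(X[0]) is equivalent to explainer.shap_values(X[0]).
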
class shap_enhanced.BaseExplainer(model, background=None)

Bases: ABC

BaseExplainer: Abstract Interface for SHAP-style Explainers

This abstract class defines the required interface for all SHAP-style explainers in the enhanced SHAP framework. Subclasses must implement the shap_values method, and optionally support expected_value computation.

Ensures compatibility with SHAP-style usage patterns such as callable explainers (explainer(X)).

Parameters:
  • model (Any) – The model to explain (e.g., PyTorch or scikit-learn model).

  • background (Optional[Any]) – Background data for imputation or marginalization (used in SHAP computation).
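
For orientation, a minimal concrete subclass might look like the sketch below. The OcclusionExplainer name and its mean-imputation scoring (which yields occlusion scores, not true Shapley values) are illustrative only, and the sketch assumes the base class stores its constructor arguments as self.model and self.background, with self.model a callable mapping a 2-D array to a 1-D array of predictions:

import numpy as np
from shap_enhanced import BaseExplainer

class OcclusionExplainer(BaseExplainer):
    """Illustrative subclass: scores each feature by the change in model
    output when that feature is replaced by its background mean."""

    def shap_values(self, X, check_additivity=True, **kwargs):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        baseline = np.asarray(self.background, dtype=float).mean(axis=0)
        attributions = np.zeros_like(X)
        for n, x in enumerate(X):
            full_pred = self.model(x[None, :])[0]
            for i in range(x.shape[0]):
                x_masked = x.copy()
                x_masked[i] = baseline[i]  # occlude feature i with its background mean
                attributions[n, i] = full_pred - self.model(x_masked[None, :])[0]
        return attributions

    @property
    def expected_value(self):
        # Optional override: mean model output over the background dataset.
        if self.background is None:
            return None
        return float(np.mean(self.model(np.asarray(self.background, dtype=float))))

An instance of such a subclass then supports the callable usage pattern described above, explainer(X).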

property expected_value

Optional property returning the expected model output on the background dataset.

Returns:
  Expected value if defined by the subclass, else None.

Return type:
  float or None

explain(X, **kwargs)

Alias for shap_values, provided for flexibility and API compatibility.

Parameters:
  • X (Union[np.ndarray, torch.Tensor, list]) – Input samples to explain.

  • kwargs – Additional arguments.

Returns:
  SHAP values.

Return type:
  Union[np.ndarray, list]

abstractmethod shap_values(X, check_additivity=True, **kwargs)

Abstract method to compute SHAP values for input samples.

\[\phi_i = \mathbb{E}_{S \subseteq N \setminus \{i\}} \left[ f(x_{S \cup \{i\}}) - f(x_S) \right]\]
Parameters:
  • X (Union[np.ndarray, torch.Tensor, list]) – Input samples to explain (e.g., numpy array, torch tensor, or list).

  • check_additivity (bool) – If True, verifies that the returned SHAP values sum to the difference between the model's prediction and the expected (baseline) output.

  • kwargs – Additional arguments for explainer-specific control.

Returns:
  SHAP values matching the shape and structure of X.

Return type:
  Union[np.ndarray, list]
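
To make the formula concrete, the brute-force sketch below enumerates every coalition S ⊆ N \ {i} and weights each marginal contribution by |S|! (n - |S| - 1)! / n!, the coalition weighting that the expectation above denotes; features outside a coalition are imputed with the background mean. The helper exact_shap_values and the toy linear model are illustrative, not part of this package:

import itertools
import math
import numpy as np

def exact_shap_values(f, x, background):
    """Brute-force Shapley values; features absent from a coalition S
    are imputed with the background mean (one common convention)."""
    n = x.shape[0]
    baseline = background.mean(axis=0)

    def value(S):
        # Evaluate f with features in S taken from x and the rest from the baseline.
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return f(z[None, :])[0]

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                weight = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy check on a linear model f(z) = z @ w, where phi_i should equal w_i * (x_i - mean_i).
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])

def f(Z):
    return Z @ w

background = rng.normal(size=(100, 3))
x = np.array([0.3, 1.2, -0.7])

phi = exact_shap_values(f, x, background)
# Additivity (what check_additivity is meant to verify): the attributions
# sum to the gap between the prediction for x and the baseline prediction.
assert np.isclose(phi.sum(), f(x[None, :])[0] - f(background.mean(axis=0)[None, :])[0])

Exhaustive enumeration costs O(2^n) model calls per sample, which is why practical explainers approximate this expectation rather than compute it exactly.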

Modules

  • base_explainer – Enhanced SHAP Base Interface

  • explainers – SHAP Explainers Collection

  • tools – Tools Module