shap_enhanced.base_explainer¶
Enhanced SHAP Base Interface¶
Overview¶
This module defines the abstract base class for all SHAP-style explainers within the Enhanced SHAP framework. It enforces a common API across all implementations to ensure consistency, flexibility, and SHAP compatibility.
Any explainer that inherits from BaseExplainer must implement the shap_values method, which computes SHAP attributions given input data and optional arguments. The class also provides useful aliases such as explain and a callable __call__ interface to align with shap.Explainer behavior.
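To make the contract concrete, here is a minimal subclass sketch. The ConstantExplainer name and its zero-valued attributions are purely illustrative; only the BaseExplainer import path and the shap_values signature come from this module.

import numpy as np
from shap_enhanced.base_explainer import BaseExplainer

class ConstantExplainer(BaseExplainer):
    """Illustrative subclass: satisfies the interface with zero attributions."""

    def shap_values(self, X, check_additivity=True, **kwargs):
        # One attribution per input element, matching the shape of X.
        return np.zeros_like(np.asarray(X, dtype=float))

Because shap_values is defined, explain(X) and the callable form explainer(X) work without any further code.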
Key Concepts¶
Abstract SHAP API: All custom explainers must subclass this interface and define shap_values.
Compatibility Wrappers: Methods like explain and __call__ make the interface flexible for different usage styles.
Expected Value Access: The expected_value property allows subclasses to expose the model’s mean output over background data.
Use Case¶
BaseExplainer is the foundation of the enhanced SHAP ecosystem, supporting custom attribution algorithms like TimeSHAP, Multi-Baseline SHAP, or Surrogate SHAP. By inheriting from this interface, all explainers can be used interchangeably and plugged into benchmarking, visualization, or evaluation tools.
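As a sketch of that interchangeability, any tool written against the base interface accepts every subclass. The helper below is hypothetical, assuming attributions come back as an array-like:

import numpy as np
from shap_enhanced.base_explainer import BaseExplainer

def mean_abs_attribution(explainer: BaseExplainer, X) -> float:
    """Hypothetical benchmark metric: average |phi| over samples and features."""
    phi = explainer(X)  # callable interface; equivalent to explainer.shap_values(X)
    return float(np.mean(np.abs(np.asarray(phi))))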
Classes
BaseExplainer: Abstract Interface for SHAP-style Explainers
- class shap_enhanced.base_explainer.BaseExplainer(model, background=None)[source]¶
Bases:
ABC
BaseExplainer: Abstract Interface for SHAP-style Explainers
This abstract class defines the required interface for all SHAP-style explainers in the enhanced SHAP framework. Subclasses must implement the shap_values method, and optionally support expected_value computation.
Ensures compatibility with SHAP-style usage patterns such as callable explainers (explainer(X)).
- Parameters:
model (Any) – The model to explain (e.g., PyTorch or scikit-learn model).
background (Optional[Any]) – Background data for imputation or marginalization (used in SHAP computation).
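Subclasses typically accept extra hyperparameters and forward model and background to this constructor. A sketch (the nsamples option is illustrative; the abstract method must still be implemented, as in the full example under shap_values below):

from shap_enhanced.base_explainer import BaseExplainer

class MyExplainer(BaseExplainer):
    def __init__(self, model, background=None, nsamples=100):
        # Forward the documented parameters to the base class and keep
        # explainer-specific options locally.
        super().__init__(model, background)
        self.nsamples = nsamples

    def shap_values(self, X, check_additivity=True, **kwargs):
        ...  # attribution logic omitted in this sketch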
- property expected_value¶
Optional property returning the expected model output on the background dataset.
- Returns:
Expected value if defined by the subclass, else None.
- Return type:
float or None
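A subclass can expose the mean model output over the background data by overriding the property. A minimal sketch, assuming the base constructor stores its arguments as self.model and self.background, and that the model maps an (n, d) array to n scalar outputs:

import numpy as np
from shap_enhanced.base_explainer import BaseExplainer

class MeanOutputExplainer(BaseExplainer):
    def shap_values(self, X, check_additivity=True, **kwargs):
        ...  # attribution logic omitted in this sketch

    @property
    def expected_value(self):
        if self.background is None:
            return None  # mirrors the default when no background is provided
        # Mean model output over the background dataset.
        return float(np.mean(self.model(np.asarray(self.background))))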
- explain(X, **kwargs)[source]¶
Alias for shap_values, provided for flexibility and API compatibility.
- Parameters:
X (Union[np.ndarray, torch.Tensor, list]) – Input samples to explain.
kwargs – Additional arguments.
- Returns:
SHAP values.
- Return type:
Union[np.ndarray, list]
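In practice the three entry points are interchangeable. Using the illustrative ConstantExplainer from the overview (model, X_bg, and X are placeholders):

explainer = ConstantExplainer(model, background=X_bg)
phi_a = explainer.shap_values(X)
phi_b = explainer.explain(X)  # alias for shap_values
phi_c = explainer(X)          # shap.Explainer-style callable
# All three calls dispatch to the same shap_values implementation.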
- abstractmethod shap_values(X, check_additivity=True, **kwargs)[source]¶
Abstract method to compute SHAP values for input samples.
\[\phi_i = \mathbb{E}_{S \subseteq N \setminus \{i\}} \left[ f(x_{S \cup \{i\}}) - f(x_S) \right]\]
- Parameters:
X (Union[np.ndarray, torch.Tensor, list]) – Input samples to explain (e.g., numpy array, torch tensor, or list).
check_additivity (bool) – If True, verifies that the SHAP values sum to the difference between the model's prediction and its expected value over the background.
kwargs – Additional arguments for explainer-specific control.
- Returns:
SHAP values matching the shape and structure of X.
- Return type:
Union[np.ndarray, list]
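To ground the expectation above, here is a hedged Monte Carlo sketch: features outside the coalition are imputed with the background mean, and each feature's marginal contribution is averaged over sampled permutations. The PermutationExplainer class, its permutation sampling, and the mean-background imputation are illustrative choices, not the framework's reference algorithm; as above, the base constructor is assumed to store model and background as attributes, and the model to map an (n, d) array to n scalar outputs.

import numpy as np
from shap_enhanced.base_explainer import BaseExplainer

class PermutationExplainer(BaseExplainer):
    """Illustrative permutation estimator of the Shapley expectation."""

    def __init__(self, model, background=None, nperm=50, seed=0):
        super().__init__(model, background)
        self.nperm = nperm
        self.rng = np.random.default_rng(seed)

    def shap_values(self, X, check_additivity=True, **kwargs):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        base = np.asarray(self.background, dtype=float).mean(axis=0)  # imputation point
        n, d = X.shape
        phi = np.zeros((n, d))
        for k, x in enumerate(X):
            for _ in range(self.nperm):
                z = base.copy()                    # start from the all-background point
                prev = float(self.model(z[None, :]))
                for i in self.rng.permutation(d):  # add features one at a time
                    z[i] = x[i]
                    cur = float(self.model(z[None, :]))
                    phi[k, i] += cur - prev        # marginal contribution of feature i
                    prev = cur
        phi /= self.nperm
        if check_additivity:
            fx = np.asarray(self.model(X)).reshape(n)
            f0 = float(self.model(base[None, :]))
            # Each permutation telescopes to f(x) - f(base), so the sums match
            # up to floating-point error.
            assert np.allclose(phi.sum(axis=1), fx - f0, atol=1e-6)
        return phi

Each permutation contributes one marginal-contribution estimate per feature; raising nperm reduces variance at a proportional cost in model evaluations.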