shap_enhanced.tools.comparison¶
Attribution Comparison Utility for SHAP Explainers¶
Overview¶
This module provides a utility class for quantitatively comparing SHAP attributions from multiple explainers against a reference ground truth. It is intended for use in benchmarking or evaluating new SHAP-based methods by computing standard performance metrics.
Currently supported evaluation metrics include:
Mean Squared Error (MSE): Measures the average squared deviation between predicted and ground-truth attributions.
Pearson Correlation: Measures the linear correlation between flattened attribution arrays.
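For reference, the sketch below shows how these two metrics can be computed on flattened attribution arrays using plain NumPy; the helper names mse and pearson are illustrative only and not part of the module API.

import numpy as np

def mse(ground_truth, predicted):
    # Mean squared error over all timesteps and features.
    return float(np.mean((ground_truth.ravel() - predicted.ravel()) ** 2))

def pearson(ground_truth, predicted):
    # Pearson correlation between the flattened attribution vectors.
    return float(np.corrcoef(ground_truth.ravel(), predicted.ravel())[0, 1])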
Key Components¶
Comparison Class:
- Accepts ground-truth SHAP values and a dictionary of predicted attribution maps.
- Computes MSE and Pearson correlation for each explainer.
- Compares attributions flattened over all timesteps and features.
Use Case¶
This utility is ideal for:
- Benchmarking SHAP-style explainers on synthetic datasets with known ground truth.
- Evaluating the effect of surrogate or approximation methods.
- Comparing different explainer strategies in terms of attribution consistency.
Example
import numpy as np
from shap_enhanced.tools.comparison import Comparison

gt = np.random.rand(10, 5)  # Ground-truth SHAP values of shape (T, F)
pred1 = gt + np.random.normal(0, 0.1, size=gt.shape)  # Low-noise approximation
pred2 = gt + np.random.normal(0, 0.2, size=gt.shape)  # Higher-noise approximation

comp = Comparison(ground_truth=gt, shap_models={"ExplainerA": pred1, "ExplainerB": pred2})
mse_scores, pearson_scores = comp.calculate_kpis()
Classes
Comparison: SHAP Attribution Evaluation Utility
- class shap_enhanced.tools.comparison.Comparison(ground_truth, shap_models)[source]¶
Bases: object
Comparison: SHAP Attribution Evaluation Utility
Provides evaluation metrics for comparing predicted SHAP attributions against a ground truth reference. Designed for benchmarking SHAP-based explainers using quantitative metrics.
Supported Metrics¶
Mean Squared Error (MSE): Measures the average squared deviation between predicted and true SHAP values.
Pearson Correlation: Measures linear correlation between flattened attribution vectors.
- Parameters:
ground_truth (np.ndarray) – Ground-truth SHAP values of shape (T, F).
shap_models (dict) – Dictionary mapping explainer names to their SHAP attribution arrays.
- calculate_kpis()[source]¶
Compute evaluation metrics (MSE and Pearson correlation) for each SHAP explainer.
Note
Flattened comparisons are used for both MSE and correlation.
- Returns:
Tuple of dictionaries:
- MSE values for each explainer.
- Pearson correlation values for each explainer.
- Return type:
(dict[str, float], dict[str, float])
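Continuing the example above, the returned dictionaries are keyed by explainer name and hold scalar scores, so the results can be inspected directly (the formatting here is only for illustration):

for name in mse_scores:
    print(f"{name}: MSE={mse_scores[name]:.4f}, Pearson={pearson_scores[name]:.4f}")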