Explainable AI Security Dashboard
Interactive visualizations combining SHAP and LIME to explain ML decisions in security applications.
This dashboard enhances model transparency in cybersecurity contexts by visualizing feature contributions and decision paths. Using SHAP and LIME, analysts can see why a model flagged a specific alert or anomaly, helping SOC teams build trust in detections and spot bias or model drift in production systems.
Tech Stack
- Python · Streamlit · SHAP · LIME
- Interactive visual components for comparative explainability (SHAP and LIME side by side)
- Integration with IDS outputs and monitoring pipelines (a minimal app sketch follows this list)
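As a rough sketch of how these pieces could fit together, the snippet below wires a SHAP summary into a Streamlit page. It assumes a scikit-learn-style tree classifier; the synthetic data, model choice, and names are illustrative stand-ins for the project's real IDS features, not its actual code.

```python
# Illustrative sketch: a toy classifier stands in for a real IDS model.
import matplotlib.pyplot as plt
import shap
import streamlit as st
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

st.title("Explainable AI Security Dashboard")

# Synthetic stand-in for IDS feature vectors (e.g., flow statistics).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# shap.Explainer routes tree ensembles to the fast TreeExplainer.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:50])

# Draw the SHAP summary on the current matplotlib figure, then hand it to Streamlit.
shap.plots.bar(shap_values, max_display=10, show=False)
st.pyplot(plt.gcf())
```

Saved as `app.py`, this runs with `streamlit run app.py`.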
Example (SHAP visualization snippet)
```python
import shap

# `model` is a trained classifier and `X_train` its training features.
explainer = shap.Explainer(model, X_train)

# Compute SHAP values for the rows to be explained.
shap_values = explainer(X_sample)

# Global summary: mean |SHAP| magnitudes for the top 10 features.
shap.plots.bar(shap_values, max_display=10)
```
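When `X_sample` contains multiple rows, the bar plot ranks features by mean absolute SHAP value, giving a quick global view of which features drive the model's alerts.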
Project Highlights
- Explainability: SHAP & LIME interpretations for model predictions (see the LIME sketch after this list).
- Security Focus: Applied to IDS, malware, and anomaly detection models.
- Visualization: Interactive charts built with Streamlit and Plotly.
- Operational Use: Helps SOC analysts prioritize and validate alerts.
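To complement the SHAP snippet above, the following is a minimal sketch of the LIME side for a single flagged alert. The synthetic data, feature names, and class labels are placeholders, not the project's real schema.

```python
# Illustrative sketch: local LIME explanation for one flagged sample.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for IDS feature vectors.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    mode="classification",
    feature_names=[f"feature_{i}" for i in range(X.shape[1])],
    class_names=["benign", "malicious"],
)

# Explain one prediction: top 5 feature conditions with signed local weights.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for condition, weight in exp.as_list():
    print(f"{condition}: {weight:+.3f}")
```

Each returned pair is a human-readable feature condition and its signed contribution to the local surrogate model, which is the kind of per-alert rationale an analyst can check against the raw event.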
