Adversarial Image Attack Lab
Generate and defend adversarial images (FGSM, PGD, DeepFool) in CNNs with Streamlit demos and visualization tools.
Learn how adversarial perturbations deceive neural networks and explore defense strategies.
Generate and defend adversarial images (FGSM, PGD, DeepFool) in CNNs with Streamlit demos and visualization tools.
Learn how adversarial perturbations deceive neural networks and explore defense strategies.
Simulate prompt injection, data exfiltration, and jailbreaks on LLMs. Evaluate sanitization defenses.
Test how large language models react to adversarial inputs and implement guardrail mechanisms.
Detect anomalies in network traffic using supervised and unsupervised ML approaches.
Compare Random Forest, XGBoost, and Autoencoders to identify network intrusions in real time.