Adversarial Image Attack Lab

Generate and defend adversarial images (FGSM, PGD, DeepFool) in CNNs with Streamlit demos and visualization tools.

Marcos Martín

Learn how adversarial perturbations deceive neural networks and explore defense strategies.

LLM Security Evaluator

Simulate prompt injection, data exfiltration, and jailbreaks on LLMs. Evaluate sanitization defenses.

Marcos Martín

Test how large language models react to adversarial inputs and implement guardrail mechanisms.