Model Configuration
Attack Defense Strategy
Welcome to the LLM Prompt Injection Vulnerability Assessment Tool!
This tool allows you to test and measure the vulnerability of Large Language Models (LLMs) to prompt injection attacks. You can configure the model, the attack type, and the defense mechanism, then run a series of trials to evaluate the model's robustness.
ASV (Attack Success Value): ___
Vulnerability Level: ___

Run tests to see results here...

Comparison of ASV values will be shown here...