Welcome to the LLM Prompt Injection Vulnerability Assessment Tool!
This tool allows you to test and measure the vulnerability of Large Language Models (LLMs)
to prompt injection attacks. You can configure the model, the attack type, and the defense
mechanism, then run a series of trials to evaluate the model's robustness.
ASV (Attack Success Value):___
Vulnerability Level:___
0%
Run tests to see results here...
Comparison of ASV values will be shown here...
How to Use This Tool
Follow these steps to test and measure the vulnerability of Large Language Models to prompt injection attacks.
On the Homepage, select the LLM model and input your API key for authentication.
Customize the type of prompt injection attack and the model's defense.
Set the number of trials (Range: 1-100).
Press 'Run Tests' to begin the evaluation.
On the Output page, there will be a progress bar showing the progress of the tests.
Once completed, the output page will populate with the sample prompts and responses.
The output page will display the Attack Success Value (ASV) and vulnerability level.
Try running other models and the Analysis tab will compare each test.