Integrations
PromptArmor
Integrating with PromptArmor
PromptArmor
PromptArmor is a service which checks LLM inputs for adversarial content before a completion is made. PromptArmor returns in realtime faster than LLMs, blocks 99% of known threat vectors, and takes less than 15 minutes to integrate.
PromptArmor integration is included in Baserun’s Python SDK as an Automatic Evaluation.
To use PromptArmor, first set the PROMPTARMOR_API_KEY
environment variable to your PromptArmor API key. Similar to other automatic evaluation features, you can then use the baserun.evals.check_injection()
to evaluate a prompt using PromptArmor.
For more details on how our evaluations work, see the Automatic Evaluation documentation for more details.