Documentation Index
Fetch the complete documentation index at: https://docs.baserun.ai/llms.txt
Use this file to discover all available pages before exploring further.
PromptArmor
PromptArmor is a service which checks LLM inputs for adversarial content before a completion is made. PromptArmor returns in realtime faster than LLMs, blocks 99% of known threat vectors, and takes less than 15 minutes to integrate. PromptArmor integration is included in Baserun’s Python SDK as an Automatic Evaluation. To use PromptArmor, first set thePROMPTARMOR_API_KEY environment variable to your PromptArmor API key. Similar to other automatic evaluation features, you can then use the baserun.evals.check_injection() to evaluate a prompt using PromptArmor.
For more details on how our evaluations work, see the Automatic Evaluation documentation for more details.