Logging LLM requests
Start logging your LLM requests with two lines of code.
Introduction
When the Baserun SDK is initialized explicitly or via our testing plugins, all OpenAI and Anthropic requests are automatically logged with no additional code changes.
For an in-depth overview of how our logging data is structured, please see our Logging Overview page.
Use cases
Get insight into individual LLM requests, including prompt templates, input, output, duration, token usage, and estimated cost.
Features
- Model- and framework-agnostic
- Provides token usage, estimated cost, duration, input, and output
- Supports evaluation
- Supports annotation
- Supports user feedback
- Supports async functions
Instructions
Install Baserun SDK
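Assuming the Python SDK, install it from PyPI:

```shell
pip install baserun
```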
Generate an API key
Create an account at https://app.baserun.ai/sign-up. Then generate an API key for your project in the settings tab. Set it as an environment variable:
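For example, in your shell (the `BASERUN_API_KEY` variable name is the conventional one; check the SDK reference for your version):

```shell
export BASERUN_API_KEY="br-your-api-key"
```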
Alternatively, set the Baserun API key when initializing the SDK.
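A minimal sketch in Python, assuming the SDK's init call accepts an `api_key` keyword (an assumption; consult the SDK reference for the exact parameter name):

```python
import baserun

# Pass the key explicitly instead of relying on the BASERUN_API_KEY
# environment variable. The `api_key` keyword is an assumption; check
# the SDK reference for the exact parameter name in your version.
baserun.init(api_key="br-your-api-key")
```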
Initialize Baserun
At your application’s startup, define the environment in which you’d like to run Baserun. You can use Baserun in the development environment while iterating on your features, utilizing it for debugging and analysis, or in the production environment to monitor your application.
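In Python this might look like the following, a sketch assuming the v1-style SDK where a single init call at startup enables automatic logging; the `environment` keyword is illustrative, not confirmed:

```python
import baserun

# Call once at application startup. All subsequent OpenAI and
# Anthropic requests are logged automatically.
# NOTE: the `environment` keyword below is illustrative; see the
# SDK reference for how to select development vs. production.
baserun.init(environment="development")
```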
Example
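A minimal end-to-end sketch in Python, assuming the v1-style SDK (where `baserun.init()` auto-instruments the OpenAI client) and an `OPENAI_API_KEY` environment variable; the model name and client usage here are illustrative:

```python
import openai

import baserun

# Initialize Baserun first; OpenAI requests made afterwards are
# logged automatically, with no other code changes.
baserun.init()

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(response.choices[0].message.content)
```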
Congrats, you are done! Navigate to the monitoring tab, where you should see your request logged.
Demo projects
- Python example repo
- TypeScript example repo
If you have any questions or feature requests, join our Discord channel or send us an email at hello@baserun.ai.