Introduction

When the Baserun SDK is initialized explicitly or via our testing plugins, all OpenAI and Anthropic requests are automatically logged with no additional code changes.

For an in-depth overview of how our logging data is structured, please see our Logging Overview page.

Logging LLM requests

Use cases

Get insight into individual LLM requests, including prompt templates, input, output, duration, token usage, and estimated cost.

Features

  • Is model and framework agnostic
  • Provides token usage, estimated cost, duration, input, and output
  • Supports evaluation
  • Supports annotation
  • Supports user feedback
  • Supports async functions

Instructions

1. Install Baserun SDK
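
Install the SDK for your language. For example (the package name is assumed to be baserun on both PyPI and npm):

pip install baserun    # Python
npm install baserun    # TypeScript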

2. Generate an API key

Create an account at https://app.baserun.ai/sign-up. Then generate an API key for your project in the settings tab. Set it as an environment variable:

export BASERUN_API_KEY="your_api_key_here"

Alternatively, set the Baserun API key when initializing the SDK, as sketched below.
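
A minimal Python sketch, assuming the SDK's init function accepts an api_key argument (check the SDK reference for your version):

import baserun

# Pass the key directly instead of relying on the BASERUN_API_KEY environment variable
baserun.init(api_key="your_api_key_here")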

3. Initialize Baserun

At your application’s startup, initialize Baserun and choose the environment it should run in: use the development environment while iterating on features, for debugging and analysis, or the production environment to monitor your live application.

Example
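
A minimal Python sketch, assuming baserun.init() reads BASERUN_API_KEY from the environment and automatically instruments the OpenAI client once called; the model name is illustrative, and how the development vs. production environment is selected may vary by SDK version, so consult the SDK reference:

import baserun
from openai import OpenAI

# Initialize Baserun once at application startup
baserun.init()

# Any subsequent OpenAI request is logged automatically
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)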

Congrats, you are done! Navigate to the monitoring tab, where you should see your request logged.

Demo projects

  • Python example repo
  • TypeScript example repo

If you have any questions or feature requests, join our Discord channel or send us an email at hello@baserun.ai.