Introduction

A trace comprises a series of events executed within an LLM chain (workflow). Tracing enables Baserun to capture and display the chain's entire lifecycle, whether it runs synchronously or asynchronously.

Tracing LLM chains lets you debug your application, monitor your chains' performance, and collect user feedback.

Use cases

Please see the Monitoring Overview to learn why logging LLM chains is critical for LLM feature development.

Features

  • Model- and framework-agnostic
  • Analysis UI that shows the sequence of events
  • Provides token usage, estimated cost, duration, input, and output
  • Supports evaluation
  • Supports annotation
  • Supports user feedback
  • Supports async functions (see the sketch after this list)
  • Option to add a custom trace name
  • Option to log custom metadata
  • Option to set a trace result
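
Async support, in particular, means the decorator pattern shown later in this guide also applies to coroutines. A minimal sketch, assuming @baserun.trace wraps an async def the same way it wraps a regular function (get_response here is illustrative):

import baserun

@baserun.trace
async def get_response(message):
    # Await your LLM client calls here; the whole coroutine is captured as one trace.
    ...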

If you are using Next.js, please reference Monitoring > Tracing with Next.js.

If you are using Lambda, please reference Monitoring > Tracing with Lambda.

Instructions

The first three steps are the same as in the Logging LLM requests tutorial, so if you have already completed them, jump to step 4.

1

Install Baserun SDK

pip install baserun
2

Generate an API key

Create an account at https://app.baserun.ai/sign-up. Then generate an API key for your project in the settings tab. Set it as an environment variable:

export BASERUN_API_KEY="your_api_key_here"

Alternatively, set the Baserun API key when initializing the SDK:


import baserun

baserun.api_key = "br-..."

baserun.init()
3

Initialize Baserun

At your application’s startup, define the environment in which you’d like to run Baserun. You can use Baserun in the development environment while iterating on your features, utilizing it for debugging and analysis, or in the production environment to monitor your application.

import baserun

baserun.init()
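
As a concrete illustration, baserun.init() is typically called once at your application's entry point, and you may want to gate it on whatever mechanism your app already uses to distinguish environments. The sketch below uses a hypothetical APP_ENV variable for that purpose; it is not a Baserun setting.

import os

import baserun

# APP_ENV is a hypothetical application-level variable, not a Baserun setting;
# use whatever your app already relies on to distinguish environments.
if os.getenv("APP_ENV", "development") != "test":
    baserun.init()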

4

Decide what to trace

Which function(s) to trace ultimately depends on your app: it could be a main() function, or a handler for an API call.

Note for TS/JS: Make sure to always call await baserun.init() before you instantiate OpenAI, Anthropic or Replicate.

# Decorate the function that you would like to trace:
import baserun

@baserun.trace
def get_response(message):
    ...
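
If the unit of work you want to trace runs inside an API handler rather than a main() function, decorate the function that does the work and call it from the handler. A minimal sketch using FastAPI; the app, route, and answer_question helper are illustrative assumptions, not part of the Baserun SDK.

import baserun
from fastapi import FastAPI

baserun.init()
app = FastAPI()

@baserun.trace
def answer_question(question: str) -> str:
    # Everything executed here (LLM calls, tool calls, ...) is captured as one trace.
    return f"You asked: {question}"

@app.get("/ask")
def ask(question: str) -> dict:
    # Each request produces its own trace via the decorated helper.
    return {"answer": answer_question(question)}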

Full Example

import baserun
import openai

def get_activities():
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What are three activities to do on the Moon?"
            }
        ],
    )
    return response.choices[0].message.content

@baserun.trace
def find_best_activity():
    moon_activities = get_activities() 
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "Pick the best activity to do on the moon from the following, including a convincing reason to do so.\n + {moon_activities}"
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    baserun.api_key = "YOUR_BASERUN_API_KEY_HERE"
    openai.api_key = "YOUR_OPENAI_API_KEY_HERE"
    baserun.init()
    print(find_best_activity())

Congrats, you are done! Now you can navigate to the monitoring tab to see the traces recorded as you interact with your application.

Optionally, you can add metadata like trace name, user ID, and session ID to aid in debugging. Read Logging > Advanced tracing features for more details.

Demo projects

  • Python example repo
  • TypeScript example repo

If you have any questions or feature requests, join our Discord channel or send us an email at hello@baserun.ai.