Update your Python SDK to version 2.0 for enhanced stability and easier integration.
This page goes over some advanced tracing features.
Adding a trace name
If you’re tracing multiple functions, you can use the `name` parameter to distinguish between them:
```python
def ask_question(question="What is the capital of the US?") -> str:
    with baserun.start_trace(name="General knowledge question"):
        # Your code here
        ...
```
Currently, in the Python SDK, you can name a trace only when using the `baserun.start_trace` context manager; it is not possible to do so when using the `@baserun.trace` decorator.
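For comparison, here is a minimal sketch of the decorator form, which takes no `name` argument; the trace is simply identified by the function it wraps:

```python
@baserun.trace
def ask_question(question="What is the capital of the US?") -> str:
    # The decorator accepts no name override; the trace takes
    # its identity from the decorated function.
    ...
```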
Setting a trace’s result
By default, a trace’s `result` value is the return value of the traced function or context. If you want to be more explicit, you can set the `result` value of a trace yourself:
```python
def ask_question(question="What is the capital of the US?") -> str:
    with baserun.start_trace() as trace:
        # Your code here, for example:
        answer_id = answer_question(question)
        trace.result = answer_id
```
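If you rely on the default behavior instead, no assignment is needed. The sketch below uses the same hypothetical `answer_question` helper; the returned value automatically becomes the trace’s result:

```python
@baserun.trace
def ask_question(question="What is the capital of the US?") -> str:
    # The return value is recorded as the trace's result.
    return answer_question(question)
```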
You can also add custom metadata. The metadata can be anything you like, provided it is JSON-serializable; for instance, you may want to include references to other objects or systems:
```python
def ask_question(question="What is the capital of the US?") -> str:
    with baserun.start_trace(name="Answer question") as trace:
        # Make a completion
        completion = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[{"role": "user", "content": question}],
        )
        # Your code here, for example:
        answer_id = persist_answer(question, completion)
        # Add whatever metadata you like
        trace.metadata = {"answer_id": answer_id}
```
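Because the metadata must survive JSON serialization, convert values such as datetimes to plain types before attaching them. This sketch uses hypothetical fields for illustration:

```python
import json
from datetime import datetime, timezone

metadata = {
    "answer_id": answer_id,  # strings, numbers, lists, and dicts are fine
    "answered_at": datetime.now(timezone.utc).isoformat(),  # datetimes become strings
    "source_system": "faq-service",
}
json.dumps(metadata)  # optional: fail fast if anything is not serializable
trace.metadata = metadata
```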
Associating with an LLM request
By default, annotations such as logs and checks are associated with the trace as a whole. To associate an annotation with a particular LLM request, simply pass the completion ID from that request. Using OpenAI’s SDK, you can do the following:
```python
@baserun.trace
def ask_question(question="What is the capital of the US?") -> str:
    completion = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": question}],
    )
    # Your code here, for example:
    answer = persist_answer(question, completion)

    # Create the annotation
    annotation = baserun.annotate(completion.id)
    # Capture whatever annotations you need
    annotation.log("Answer", metadata={"answer_id": answer.id})
    # Make sure to submit the annotation
    annotation.submit()
```
Here is how the annotation will look in the Baserun dashboard:
Example
```python
import baserun
import openai

PROMPT = """As a customer service representative for an online pet product retailer, your main goal is to
provide a positive and informative chat experience for customers..."""


def run_chatbot():
    """A basic chatbot"""
    client = openai.OpenAI()
    conversation = [{"role": "system", "content": PROMPT}]
    print("Start your conversation. Type `exit` to end the conversation.")
    user_input = input(">")
    conversation.append({"role": "user", "content": user_input})

    # Start a trace before your first OpenAI call, giving it a name
    with baserun.start_trace(name="Chatbot CLI loop") as trace:
        # Tracing allows each iteration's LLM calls to be grouped together
        while user_input != "exit":
            completion = client.chat.completions.create(
                model="gpt-4-1106-preview",
                messages=conversation,
            )
            content = completion.choices[0].message.content
            conversation.append({"role": "assistant", "content": content})
            print(content)

            user_input = input(">")
            conversation.append({"role": "user", "content": user_input})

        # Set the trace result for display in the Baserun UI
        # (here it is set to the last content of the conversation)
        trace.result = conversation[-1]["content"]


if __name__ == "__main__":
    baserun.api_key = "YOUR_BASERUN_API_KEY_HERE"
    openai.api_key = "YOUR_OPENAI_API_KEY_HERE"
    baserun.init()
    run_chatbot()
```