An LLM request represents a single query to an LLM provider. Baserun refers to the returned object as a `Completion`.
If the request is successful, Baserun logs the completion in the UI as shown above: the input and output of the request, along with metadata such as the user, request ID, and model configurations.
If the request fails, Baserun logs the error code and message in the LLM requests table.
Arguments: `name`, `completion_id`, `user`, `stream`.
1. Install the Baserun SDK
2. Set the Baserun API key
3. Import and init
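The first two steps might look like the following, assuming the SDK is published on PyPI under the package name `baserun` and reads its key from a `BASERUN_API_KEY` environment variable (both are assumptions, not confirmed by this page):

```shell
# Install the Baserun SDK (PyPI package name assumed)
pip install baserun

# Expose the API key to the SDK (environment variable name assumed)
export BASERUN_API_KEY="your-api-key"
```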
Import `OpenAI` from `baserun` instead of from `openai`. Creating an `OpenAI` client object automatically starts the trace, and all future LLM requests made with this client object will be captured.

Alternate init method: use the `OpenAI` client object imported from `baserun` directly. When that client object is used, all completions are automatically traced.
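Put together, the import-and-init step might look like the sketch below. The import path and the fact that client creation starts the trace come from the description above; the model name and messages are placeholder values, and the exact call shape mirrors the standard OpenAI client, which is an assumption:

```python
# Import the OpenAI client from baserun rather than from the openai
# package; per the docs above, creating this client starts the trace
# and all requests made with it are captured automatically.
from baserun import OpenAI

client = OpenAI()  # trace starts here

# A normal chat completion call; Baserun logs it automatically.
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is an LLM trace?"}],
)
print(completion.choices[0].message.content)
```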
These values can be passed when creating the `client` object, or can be set after instantiation.
Arguments: `name`, `completion_id`, `trace_id`, `user`, `session`, `result`, `metadata`.
Every trace has a `trace_id`. If you wish to associate an LLM request with a trace after the trace has completed, see the example below. Another common use case is adding user feedback or tags to a trace after the pipeline has finished executing.
To do this, call `submit_to_baserun()`.
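A minimal sketch of what that call might look like. `submit_to_baserun()` is named in the docs above, but the import path and argument names shown here (`completion`, `trace_id`) are assumptions, not confirmed API:

```python
# Hypothetical sketch: attaching a completion to an already-finished
# trace. The import path and arguments below are assumptions.
from openai import OpenAI
from baserun import submit_to_baserun  # import path assumed

client = OpenAI()  # plain OpenAI client: this request is not auto-traced

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the run."}],
)

# trace_id saved earlier, while the trace was still running
submit_to_baserun(completion, trace_id="previously-saved-trace-id")
```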