For example, suppose you are creating an AI travel agent to assist with booking trips. This bot needs to accurately gather flight ticket details by scraping user emails. By logging LLM requests, you can:
Track and Fix Errors: Run the same task repeatedly to see how often, and where, your bot throws errors.
Ensure Quality: Verify that your bot correctly identifies flight prices, dates, and booking details, and that it gives the user the right response based on the email input.
Improve User Experience: Track the response time of each request. This helps you find delays in the bot’s responses, improving how users interact with the bot.
import baserun
import openai


def example():
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What is the capital of the US?"
            }
        ],
    )
    return response.choices[0].message


if __name__ == "__main__":
    baserun.api_key = "YOUR_BASERUN_API_KEY_HERE"
    openai.api_key = "YOUR_OPENAI_API_KEY_HERE"
    baserun.init()
    print(example())
You can use Baserun with tests while iterating on your features, for debugging and analysis, or in production to monitor your application. For more information, refer to Testing.
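As a sketch of what such a test might look like for the travel-agent example, here is a minimal pytest-style test. Note that `extract_flight_price` is a hypothetical helper standing in for your bot's email-parsing logic; a real implementation would call the LLM and parse its reply, and you would run the test through Baserun's test integration as described in the Testing guide.

```python
import re


def extract_flight_price(email_body: str) -> float:
    # Stub for illustration: a real implementation would send the email
    # text to the LLM and parse the price out of its response.
    match = re.search(r"\$(\d+(?:\.\d{2})?)", email_body)
    return float(match.group(1)) if match else 0.0


def test_extracts_price_from_confirmation_email():
    email = "Your flight is confirmed. Total charged: $342.50."
    assert extract_flight_price(email) == 342.50
```

Running the same assertion repeatedly against logged LLM requests is how you catch intermittent extraction failures.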
A Trace comprises a series of events executed within a pipeline. Tracing enables Baserun to capture and display the workflow’s entire lifecycle, whether synchronous or asynchronous.

For example, consider the case of an AI bot used to automate phone calls. The process begins with the bot initiating a call to the user. While the call is in progress, the conversation is transcribed into text. Subsequently, the bot analyzes the transcribed text to produce a response or message. Once the response is crafted, it is converted into audio. Finally, the AI system transmits the audio message.
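The phone-call pipeline above can be sketched as a chain of functions, with the top-level function traced so Baserun records every step. All of the step functions here are hypothetical stubs for illustration; only the `@baserun.trace` decorator and `baserun.init()` come from the Baserun API.

```python
def initiate_call(user: str) -> str:
    # Stub: start the call and return the caller's audio (placeholder string).
    return f"audio-from-{user}"


def transcribe(audio: str) -> str:
    # Stub: speech-to-text step.
    return f"transcript of {audio}"


def generate_response(transcript: str) -> str:
    # Stub: this is where the LLM call (e.g. openai.chat.completions.create)
    # would analyze the transcript and craft a reply.
    return f"reply to '{transcript}'"


def synthesize_audio(text: str) -> str:
    # Stub: text-to-speech step.
    return f"audio({text})"


# @baserun.trace  # uncomment with baserun installed to trace the whole run
def handle_call(user: str) -> str:
    audio = initiate_call(user)
    transcript = transcribe(audio)
    reply = generate_response(transcript)
    return synthesize_audio(reply)


print(handle_call("alice"))
```

With the decorator enabled, a single Trace would show the call initiation, transcription, LLM response, and audio synthesis as one end-to-end workflow.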
import baserun
import openai


def get_activities():
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What are three activities to do on the Moon?"
            }
        ],
    )
    return response.choices[0].message


@baserun.trace
def find_best_activity():
    moon_activities = get_activities()
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "Pick the best activity to do on the moon from the "
                           "following, including a convincing reason to do so.\n"
                           f"{moon_activities}"
            }
        ],
    )
    return response.choices[0].message


if __name__ == "__main__":
    baserun.api_key = "YOUR_BASERUN_API_KEY_HERE"
    openai.api_key = "YOUR_OPENAI_API_KEY_HERE"
    baserun.init()
    print(find_best_activity())