When the Baserun SDK is initialized, either explicitly or via one of our testing plugins, all OpenAI and Anthropic chat and completion requests are logged automatically, with no further code changes. We capture the LLM prompts, configuration, and token usage by injecting a very thin wrapper around the client libraries.
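As an illustration of the wrapping technique (not Baserun's actual source), a minimal sketch of how an SDK can replace a client method with a thin wrapper that records the prompt, configuration, and usage while leaving the caller's code and the response untouched. `FakeChatClient` is a stand-in for a real LLM client:

```python
import functools

class FakeChatClient:
    """Stand-in for an LLM client; `create` mimics a chat completion call."""
    def create(self, model, messages):
        return {"model": model, "usage": {"total_tokens": 42}, "content": "hi"}

captured_logs = []

def instrument(client):
    """Replace `client.create` with a thin wrapper that records each call."""
    original = client.create

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        response = original(*args, **kwargs)
        captured_logs.append({
            "prompt": kwargs.get("messages"),
            "config": {"model": kwargs.get("model")},
            "usage": response["usage"],
        })
        return response  # the caller sees the unchanged response

    client.create = wrapper
    return client

client = instrument(FakeChatClient())
client.create(model="gpt-4o", messages=[{"role": "user", "content": "Hello"}])
```

Because the wrapper forwards all arguments and returns the original response, existing application code keeps working exactly as before; only the side channel of captured logs is added.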
In addition to LLM logs, it is often helpful to debug and understand your workflow by capturing additional information, such as third-party API results or interesting business logic. To accomplish this, add a custom baserun.log call.
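A sketch of where such a call might sit in a workflow. The `baserun.log(name, payload=...)` signature shown here is an assumption (check the SDK reference for the exact form), `fetch_weather` is a hypothetical third-party call, and a stand-in `baserun` object is defined so the sketch runs without the SDK installed; with the real package you would `import baserun` instead:

```python
class _FakeBaserun:
    """Stand-in for the real SDK so this sketch runs standalone."""
    def __init__(self):
        self.entries = []
    def log(self, name, payload=None):  # assumed signature; see the SDK docs
        self.entries.append({"name": name, "payload": payload})

baserun = _FakeBaserun()

def fetch_weather(city):
    """Hypothetical third-party API call."""
    return {"city": city, "temp_c": 21}

def plan_trip(city):
    weather = fetch_weather(city)
    # Capture the third-party result alongside the LLM logs for this run.
    baserun.log("weather_api_result", payload=weather)
    # Interesting business-logic outcomes can be logged the same way.
    baserun.log("destination_chosen", payload={"city": city})
    return weather

plan_trip("Lisbon")
```

Logging these intermediate values next to the automatically captured LLM requests makes it easier to see why a given prompt or completion looked the way it did.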