# Baserun

## Docs

- [Sessions](https://docs.baserun.ai/SDK_2.0/python/sessions.md)
- [Tags](https://docs.baserun.ai/SDK_2.0/python/tag.md)
- [Tracing](https://docs.baserun.ai/SDK_2.0/python/tracing.md)
- [User feedback](https://docs.baserun.ai/SDK_2.0/python/user-feedback.md)
- [Variables](https://docs.baserun.ai/SDK_2.0/python/variables.md)
- [Completion datasets](https://docs.baserun.ai/datasets/overview.md)
- [Automatic evaluators](https://docs.baserun.ai/evaluation/automatic-evaluators.md)
- [Human evaluators](https://docs.baserun.ai/evaluation/human-evaluators.md)
- [Offline evaluation via SDK](https://docs.baserun.ai/evaluation/offline-evaluation-sdk.md)
- [Offline evaluation via UI](https://docs.baserun.ai/evaluation/offline-evaluation-ui.md)
- [Online evaluation](https://docs.baserun.ai/evaluation/online-evaluation.md)
- [Evaluation overview](https://docs.baserun.ai/evaluation/overview.md)
- [Fine-tune overview](https://docs.baserun.ai/fine-tune/overview.md)
- [Authentication](https://docs.baserun.ai/getstartedwithSDK/authentication.md)
- [Logging LLM requests](https://docs.baserun.ai/getstartedwithSDK/logging-LLM-requests.md): Start logging your LLM requests with two lines of code changes.
- [Manually creating traces and submitting completions](https://docs.baserun.ai/getstartedwithSDK/manual-tracing.md)
- [Offline evaluation via SDK](https://docs.baserun.ai/getstartedwithSDK/offline-evaluation-sdk.md)
- [Tracing end-to-end LLM pipelines](https://docs.baserun.ai/getstartedwithSDK/tracing-workflows.md)
- [LangChain](https://docs.baserun.ai/integrations/langchain.md): Integrating with LangChain
- [LlamaIndex](https://docs.baserun.ai/integrations/llamaindex.md): Integrating with LlamaIndex
- [PromptArmor](https://docs.baserun.ai/integrations/promptarmor.md): Integrating with PromptArmor
- [Advanced tracing features](https://docs.baserun.ai/monitoring/advanced-tracing.md): Tracing additional data such as trace name, custom logs, session ID, etc.
- [Collect user feedback](https://docs.baserun.ai/monitoring/collect-user-feedback.md)
- [Custom logs](https://docs.baserun.ai/monitoring/custom-log.md)
- [Monitoring overview](https://docs.baserun.ai/monitoring/overview.md)
- [Tracing](https://docs.baserun.ai/monitoring/tracing.md)
- [Tracing with Lambda](https://docs.baserun.ai/monitoring/tracing-with-lambda.md)
- [Tracing with Next.js](https://docs.baserun.ai/monitoring/tracing-with-nextjs.md): Start tracing your Next.js app.
- [Users and sessions](https://docs.baserun.ai/monitoring/users-and-sessions.md): Tracing additional user-specific data and end-to-end user journeys
- [Deploy templates](https://docs.baserun.ai/prompt templates/deploy.md)
- [Templates overview](https://docs.baserun.ai/prompt templates/overview.md)
- [Register templates](https://docs.baserun.ai/prompt templates/register.md)
- [Compare prompt versions](https://docs.baserun.ai/prompt-playground/compare-prompts.md)
- [Custom models](https://docs.baserun.ai/prompt-playground/custom-models.md)
- [Prompt playground overview](https://docs.baserun.ai/prompt-playground/overview.md): Iterating, evaluating, and versioning prompts as a team.
- [Jest integration](https://docs.baserun.ai/testing/js/jest.md)
- [Vitest integration](https://docs.baserun.ai/testing/js/vitest.md)
- [Testing overview](https://docs.baserun.ai/testing/overview.md): With Baserun, testing your LLM app is as easy as writing a unit test.
- [pytest integration](https://docs.baserun.ai/testing/python/pytest.md)
- [👋 Welcome to Baserun](https://docs.baserun.ai/welcome-to-baserun.md)
- [Workspace](https://docs.baserun.ai/workspace.md)

## Optional

- [Changelog](https://www.baserun.ai/changelog)
- [Let's talk](https://calendly.com/baserun/30min)
- [Contact us](mailto:support@baserun.ai)