Scope of the integration

If you use LlamaIndex with OpenAI or Anthropic as the LLM backend, Baserun logs the API calls that generate embeddings and text completions during indexing, querying, retrieval, and response synthesis. It also adds retrieval details to the trace, e.g. which nodes were selected and what scores they were assigned.
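Before running the example below, both packages need to be installed and the relevant API keys made available. A minimal setup sketch, assuming the default OpenAI backend and that Baserun reads its key from a `BASERUN_API_KEY` environment variable (check your Baserun dashboard for the exact variable name):

```shell
# Install Baserun and LlamaIndex (package names as published on PyPI)
pip install baserun llama-index

# Key for the LLM backend used by LlamaIndex (OpenAI in this example)
export OPENAI_API_KEY="sk-..."
# Key for sending traces to Baserun (assumed variable name)
export BASERUN_API_KEY="..."
```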

Example

import baserun
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader


@baserun.trace
def llama():
    # load documents from the local "recipes" directory
    documents = SimpleDirectoryReader("recipes").load_data()
    # build a vector index over the documents (embedding calls are logged by Baserun)
    index = VectorStoreIndex.from_documents(documents)
    # run a query; retrieval and response synthesis are captured in the trace
    query_engine = index.as_query_engine()
    response = query_engine.query(
        "I have flour, sugar and butter. What am I missing if I want to bake oatmeal cookies?"
    )
    print(response)


if __name__ == "__main__":
    baserun.init()
    llama()

Running this code should produce a trace that looks like this: