
# Monitor Hamilton with OpenTelemetry, OpenLLMetry and Traceloop

In this simple example, you'll learn how to use the `OpenTelemetryTracer` to emit traces of your Hamilton code in the OpenTelemetry format, which is particularly useful for LLM applications.

![Traceloop screenshot](screenshot.png)

OpenTelemetry is an open-source, cross-language tool that lets you instrument, generate, collect, and export telemetry data (metrics, logs, traces), and it constitutes an industry-recognized standard. Learn more about it in the Awesome OpenTelemetry repository.

OpenLLMetry is an open-source Python library that automatically instruments components of your LLM stack with OpenTelemetry, including LLM providers (OpenAI, Anthropic, HuggingFace, Cohere, etc.), vector databases (Weaviate, Qdrant, Chroma, etc.), and frameworks (Burr, Haystack, LangChain, LlamaIndex). Concretely, this means you automatically get detailed traces of API calls, retrieval operations, and text transformations, for example.

One thing to note: OpenTelemetry is middleware; it doesn't provide a destination to store data nor a dashboard. For this example, we'll use Traceloop, a tool built by the developers of OpenLLMetry. It has a generous free tier and can be conveniently set up in a few lines of code for this demo.
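The Traceloop setup mentioned above can be sketched in a couple of lines. This assumes the `traceloop-sdk` package is installed and `TRACELOOP_API_KEY` is set in your environment; the `app_name` value is an arbitrary label of our choosing:

```python
# Minimal sketch: point OpenLLMetry's traces at Traceloop.
# Assumes `traceloop-sdk` is installed and TRACELOOP_API_KEY is set.
from traceloop.sdk import Traceloop

# `app_name` groups traces in the Traceloop dashboard.
Traceloop.init(app_name="hamilton_openllmetry_example")
```

After this call, instrumented LLM and vector-database operations are exported automatically; no further wiring is needed for the demo.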

## Set up

A Traceloop account and an API key are prerequisites.

  1. Create a virtual environment and activate it

    python -m venv venv && . venv/bin/activate
    
  2. Install requirements.

    pip install -r requirements.txt
    
  3. Set environment variables for your API keys: `OPENAI_API_KEY` and `TRACELOOP_API_KEY`

  4. Execute the code

    python run.py
    
  5. Explore results on Traceloop (or your OpenTelemetry destination).

## Without Traceloop

For this example to work without Traceloop, you will need to set up your own OpenTelemetry destination. We suggest using Jaeger; `run.py` includes Python code to route telemetry to it.
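As a rough sketch of that routing (the code in `run.py` may differ), the standard OpenTelemetry SDK can export spans to a locally running Jaeger instance over OTLP. This assumes Jaeger's OTLP gRPC endpoint is listening on port 4317, e.g. via `docker run -p 4317:4317 -p 16686:16686 jaegertracing/all-in-one`:

```python
# Sketch: route OpenTelemetry spans to a local Jaeger via OTLP/gRPC.
# Assumes `opentelemetry-sdk` and `opentelemetry-exporter-otlp` are installed.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
# Register globally so instrumented code (Hamilton, OpenLLMetry) picks it up.
trace.set_tracer_provider(provider)
```

With this in place, traces appear in the Jaeger UI (by default at http://localhost:16686) instead of Traceloop.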

## Should I still use the Hamilton UI?

Absolutely! OpenTelemetry focuses on collecting telemetry about the internals of code and external API calls. It's a standard amongst web services. There's no conflict between the OpenTelemetry tracer and the tracking for the Hamilton UI. In fact, the Hamilton UI captures a superset of what OpenTelemetry allows, tailored to the Hamilton framework: visualizations, data lineage, summary statistics, and more utilities to improve your development experience. In the not-too-distant future, the Hamilton UI could ingest OpenTelemetry data 😉 (contributions welcome!)