Metadata-Version: 2.4
Name: llm-tracekit-langgraph
Version: 1.2.0
Summary: OpenTelemetry instrumentation for LangGraph.
Project-URL: homepage, https://coralogix.com
Project-URL: repository, https://github.com/coralogix/llm-tracekit.git
Author-email: "Coralogix Ltd." <info@coralogix.com>
License-Expression: Apache-2.0
License-File: LICENSE
Classifier: Development Status :: 5 - Production/Stable
Classifier: Framework :: OpenTelemetry :: Instrumentations
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.10
Requires-Dist: langchain-core>=0.3.0
Requires-Dist: langgraph<2,>=1.0.6
Requires-Dist: llm-tracekit-core>=1.0.0
Requires-Dist: opentelemetry-instrumentation>=0.53b1
Description-Content-Type: text/markdown

# LLM Tracekit - LangGraph

OpenTelemetry instrumentation for [LangGraph](https://langchain-ai.github.io/langgraph/), focused on **span structure** and **node attributes** for graph runs. Use it together with LangChain, OpenAI, or other LLM instrumentors for full observability.

## Span structure (3 levels)

1. **Global span** — One per graph invocation. Starts when execution leaves START and ends when it reaches END. Span name: `"LangGraph"`.
2. **Node spans** — One per node execution, created as **children** of the global span. Span name: `"LangGraph Node <node_name>"`. Each node span carries two attributes: the **node name** (`gen_ai.langgraph.node`) and the **step number** (`gen_ai.langgraph.step`, when LangGraph provides it). While a node runs, its span is the **current span**, so any LLM calls made inside the node are recorded by other instrumentors as **children of that node span**. **Tool nodes** (nodes that only run tools and never call an LLM) also get a node span; they simply have no LLM child spans.
3. **LLM spans** — Created by other instrumentors (LangChain, OpenAI, Gemini, etc.) when a node calls an LLM. They appear as children of the corresponding node span.

Resulting trace: **LangGraph** → **LangGraph Node …** → **chat/completion** (from LangChain/OpenAI/etc.) when the node runs an LLM; tool-only nodes appear as `LangGraph Node <name>` with no children.
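As a concrete (hypothetical) illustration, a two-node graph where an `agent` node calls a chat model and a `tools` node only runs tools would produce a trace shaped like:

```
LangGraph
├── LangGraph Node agent
│   └── chat <model>          (created by the LangChain/OpenAI instrumentor)
└── LangGraph Node tools
```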

## Installation

```bash
pip install "llm-tracekit-langgraph"
```

## Usage

### Setting up tracing

You can use the `setup_export_to_coralogix` function to set up tracing and export traces to Coralogix:

```python
from llm_tracekit.langgraph import setup_export_to_coralogix

setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
)
```

Alternatively, set up tracing manually with your preferred `TracerProvider` and exporter.

### Activation

To instrument all LangGraph runs that use LangChain's callback manager:

```python
from llm_tracekit.langgraph import LangGraphInstrumentor

LangGraphInstrumentor().instrument()
```

### Capturing LLM call spans

This instrumentor only creates the **graph-level** and **node-level** spans above. It does **not** create spans for LLM calls. To get LLM spans (model, token usage, tool calls, etc.) as **children of the node span** that runs the LLM:

- Use **LangChain**: install `llm-tracekit-langchain` and call `LangChainInstrumentor().instrument(...)` in addition to `LangGraphInstrumentor().instrument(...)`. Both can run together; LangChain will create child spans under the current (node) span.
- Or use **provider-specific** instrumentors (OpenAI, Bedrock, etc.) instead of or alongside LangChain.

Install and activate the extra instrumentor(s) you need. The same tracer provider can be passed to all of them. LLM spans will appear under the correct node span because the node span is set as the current span while the node runs.
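For example, pairing this package with the LangChain instrumentor is a one-time setup at startup (the `llm_tracekit.langchain` import path below is an assumption based on the sibling `llm-tracekit-langchain` package; check that package's docs for the exact path):

```python
from llm_tracekit.langchain import LangChainInstrumentor  # assumed import path
from llm_tracekit.langgraph import LangGraphInstrumentor

# Activate both; each registers its own hooks independently,
# so the order of the two calls does not matter.
LangGraphInstrumentor().instrument()
LangChainInstrumentor().instrument()
```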

### Uninstrument

```python
from llm_tracekit.langgraph import LangGraphInstrumentor

LangGraphInstrumentor().uninstrument()
```

### Full example

Minimal graph (no LLM):

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

from llm_tracekit.langgraph import LangGraphInstrumentor, setup_export_to_coralogix

setup_export_to_coralogix(service_name="ai-service")

LangGraphInstrumentor().instrument()

class State(TypedDict):
    messages: list

def node_a(state: State) -> dict:
    return {"messages": state["messages"] + ["A"]}

def node_b(state: State) -> dict:
    return {"messages": state["messages"] + ["B"]}

graph = StateGraph(State)
graph.add_node("a", node_a)
graph.add_node("b", node_b)
graph.add_edge(START, "a")
graph.add_edge("a", "b")
graph.add_edge("b", END)

app = graph.compile(checkpointer=MemorySaver())
result = app.invoke({"messages": []}, config={"configurable": {"thread_id": "1"}})
```

## Manual handler

You can also add the handler explicitly when invoking a graph (e.g. for testing or when not using the instrumentor):

```python
from llm_tracekit.langgraph.callback import LangGraphCallbackHandler

# `tracer_provider` is the TracerProvider you configured during setup
tracer = tracer_provider.get_tracer(__name__)
handler = LangGraphCallbackHandler(tracer=tracer)
result = app.invoke(
    initial_state,
    config={"callbacks": [handler], "configurable": {"thread_id": "1"}},
)
```
