
Trace with LangGraph (Python and JS/TS)

LangSmith integrates seamlessly with LangGraph (Python and JS) to help you trace agentic workflows, whether you're using LangChain modules or other SDKs.

With LangChain

If you are using LangChain modules within LangGraph, you only need to set a few environment variables to enable tracing.

This guide walks through a basic example. For more detail on configuration, see the Trace with LangChain guide.

1. Installation

Install the LangGraph library and the OpenAI integration for Python or JS (the code snippets below use the OpenAI integration).

For a full list of available packages, see the LangChain Python docs and the LangChain JS docs.

pip install langchain_openai langgraph
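
If you are working in JS/TS, a corresponding installation (a sketch assuming npm; yarn and pnpm work the same way) would look something like:

npm install @langchain/langgraph @langchain/openai @langchain/core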

2. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
# This example uses OpenAI, but you can use any LLM provider of your choice
export OPENAI_API_KEY=<your-openai-api-key>
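
If you prefer to configure these from within your script (for example in a notebook), a minimal equivalent sketch in Python using the standard library is:

import os

# Set these before constructing any clients so tracing picks them up
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"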

3. Log a trace

Once you've set up your environment, you can call LangChain runnables as you normally would. LangSmith will infer the proper tracing config:

from typing import Literal

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

@tool
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]

tool_node = ToolNode(tools)

model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

def should_continue(state: MessagesState) -> Literal["tools", "__end__"]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return "__end__"


def call_model(state: MessagesState):
    messages = state["messages"]
    # Invoking `model` will automatically infer the correct tracing context
    response = model.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("tools", "agent")

app = workflow.compile()

final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content

An example trace from running the above code looks like this:

Trace tree of a LangGraph run with LangChain
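
Because the compiled graph is itself a LangChain runnable, you can also pass standard RunnableConfig fields such as run_name, tags, and metadata when invoking it, and they will be recorded on the trace. A minimal sketch (the names and values below are arbitrary examples, not part of the snippet above):

final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={
        "run_name": "weather-agent",  # display name for the run in LangSmith
        "tags": ["langgraph", "demo"],  # tags you can filter on
        "metadata": {"user_id": "example-user"},  # arbitrary key-value metadata
    },
)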

Without LangChain

If you are using other SDKs or custom functions within LangGraph, you will need to wrap or decorate them appropriately (with the @traceable decorator in Python or the traceable function in JS, or with something like wrap_openai for SDKs). If you do, LangSmith will automatically nest traces from these wrapped methods.
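
For instance, a minimal sketch of both patterns in Python (assuming the openai and langsmith packages are installed; the decorated function here is a made-up example):

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

# Wrapping the client traces every chat completion made through it
client = wrap_openai(openai.Client())

# Decorating a plain function traces each of its invocations,
# nested under whatever run is active when it is called
@traceable
def format_prompt(subject: str) -> str:
    return f"What is a good name for a company that makes {subject}?"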

The full example below puts these pieces together inside a LangGraph workflow. You can also take a look at this page for more information.

1. Installation

Install the LangGraph library and the OpenAI SDK for Python or JS (the code snippets below use the OpenAI SDK).

pip install openai langsmith langgraph

2. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
# This example uses OpenAI, but you can use any LLM provider of your choice
export OPENAI_API_KEY=<your-openai-api-key>

3. Log a trace

Once you've set up your environment, wrap or decorate the custom functions/SDKs you want to trace. LangSmith will then infer the proper tracing config:

import json
import openai
import operator

from langsmith import traceable
from langsmith.wrappers import wrap_openai

from typing import Annotated, Literal, TypedDict

from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list, operator.add]

tool_schema = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Call to surf the web.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# Decorating the tool function will automatically trace it with the correct context
@traceable(run_type="tool", name="Search Tool")
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]

def call_tools(state):
    function_name_to_function = {"search": search}
    messages = state["messages"]

    tool_call = messages[-1]["tool_calls"][0]
    function_name = tool_call["function"]["name"]
    function_arguments = tool_call["function"]["arguments"]
    arguments = json.loads(function_arguments)

    function_response = function_name_to_function[function_name](**arguments)
    tool_message = {
        "tool_call_id": tool_call["id"],
        "role": "tool",
        "name": function_name,
        "content": function_response,
    }
    return {"messages": [tool_message]}

wrapped_client = wrap_openai(openai.Client())

def should_continue(state: State) -> Literal["tools", "__end__"]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message["tool_calls"]:
        return "tools"
    return "__end__"


def call_model(state: State):
    messages = state["messages"]
    # Calling the wrapped client will automatically infer the correct tracing context
    response = wrapped_client.chat.completions.create(
        messages=messages, model="gpt-4o-mini", tools=[tool_schema]
    )
    raw_tool_calls = response.choices[0].message.tool_calls
    tool_calls = [tool_call.to_dict() for tool_call in raw_tool_calls] if raw_tool_calls else []
    response_message = {
        "role": "assistant",
        "content": response.choices[0].message.content,
        "tool_calls": tool_calls,
    }
    return {"messages": [response_message]}


workflow = StateGraph(State)

workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tools)

workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("tools", "agent")

app = workflow.compile()

final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
final_state["messages"][-1]["content"]

An example trace from running the above code looks like this:

Trace tree of a LangGraph run without LangChain
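
If you want the entire graph invocation grouped under a single parent run (for example, to give the whole workflow one name), one option is to wrap the call site itself with traceable. A minimal sketch, assuming app is the compiled graph from the example above:

@traceable(name="Weather Workflow")
def run_workflow(question: str):
    # Runs created inside (the wrapped OpenAI calls and the traced tool)
    # will be nested under this parent run in LangSmith
    return app.invoke({"messages": [{"role": "user", "content": question}]})

final_state = run_workflow("what is the weather in sf")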

