Log traces to a specific project
You can change the destination project for your traces statically through environment variables, or dynamically at runtime.
Set the destination project statically
As mentioned in the tracing concepts section, LangSmith uses a Project to group traces. If left unspecified, the project is set to default. You can set the LANGSMITH_PROJECT environment variable to configure a custom project name for an entire application run. This should be done before executing your application.
export LANGSMITH_PROJECT=my-custom-project
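If you launch your application from Python rather than a shell, you can achieve the same effect by setting the variable in-process before any traced code runs. A minimal sketch (the project name is illustrative):

```python
import os

# Equivalent to `export LANGSMITH_PROJECT=my-custom-project` in the shell.
# Must run before any traced code executes.
os.environ["LANGSMITH_PROJECT"] = "my-custom-project"
```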
SDK compatibility in JS
The LANGSMITH_PROJECT flag is only supported in JS SDK versions >= 0.2.16; use LANGCHAIN_PROJECT instead if you are on an older version.
If the specified project does not exist, it will be created automatically when the first trace is ingested.
Set the destination project dynamically
You can also set the project name at program runtime in various ways, depending on how you annotate your code for tracing. This is useful when you want to log traces to different projects within the same application.
Note
Setting the project name dynamically using one of the methods below overrides the LANGSMITH_PROJECT environment variable.
- Python
- TypeScript
import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
]

# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith
# Ensure that the LANGSMITH_TRACING environment variable is set for @traceable to work
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
    project_name="My Project"
)
def call_openai(
    messages: list[dict], model: str = "gpt-4o-mini"
) -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

# Call the decorated function
call_openai(messages)

# You can also specify the project at call time via langsmith_extra
# This will override the project_name specified in the @traceable decorator
call_openai(
    messages,
    langsmith_extra={"project_name": "My Overridden Project"},
)

# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable decorated functions, and logs traces to LangSmith automatically.
# Ensure that the LANGSMITH_TRACING environment variable is set for the wrapper to work.
from langsmith import wrappers

wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    langsmith_extra={"project_name": "My Project"},
)

# Alternatively, create a RunTree object
# You can set the project name using the project_name parameter
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
    project_name="My Project"
)
chat_completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()
import OpenAI from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";
import { RunTree } from "langsmith";

const client = new OpenAI();

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
];

const traceableCallOpenAI = traceable(
  async (messages: { role: string; content: string }[], model: string) => {
    const completion = await client.chat.completions.create({
      model: model,
      messages: messages,
    });
    return completion.choices[0].message.content;
  },
  {
    run_type: "llm",
    name: "OpenAI Call Traceable",
    project_name: "My Project",
  }
);

// Call the traceable function
await traceableCallOpenAI(messages, "gpt-4o-mini");

// Create and use a RunTree object
const rt = new RunTree({
  run_type: "llm",
  name: "OpenAI Call RunTree",
  inputs: { messages },
  project_name: "My Project",
});
await rt.postRun();

// Execute a chat completion and handle it within RunTree
const chatCompletion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: messages,
});
rt.end({ outputs: chatCompletion });
await rt.patchRun();