
How to define an LLM-as-a-judge evaluator

LLM applications can be difficult to evaluate because they often generate conversational text for which there is no single correct answer.

This guide shows how to define an LLM-as-a-judge evaluator for offline evaluation using the LangSmith SDK or UI. Note: to run evaluations on production traces in real time, see how to set up online evaluations instead.

Prebuilt evaluators

Prebuilt evaluators are a useful starting point when setting up evaluations. For how to use prebuilt evaluators with LangSmith, see the prebuilt evaluators guide.
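For illustration only, here is a minimal sketch of what using a prebuilt LLM-as-a-judge evaluator can look like. It assumes the separate openevals package is installed and exposes create_llm_as_judge, CORRECTNESS_PROMPT, and the "provider:model" string format as described in that package's documentation; check the prebuilt evaluators guide for the exact, current API.

# Sketch (assumption): uses the separate `openevals` package and its
# `create_llm_as_judge` helper with a prebuilt correctness prompt.
from openevals.llm import create_llm_as_judge
from openevals.prompts import CORRECTNESS_PROMPT

correctness_evaluator = create_llm_as_judge(
    prompt=CORRECTNESS_PROMPT,
    feedback_key="correctness",
    model="openai:gpt-4o-mini",
)

result = correctness_evaluator(
    inputs="How far is the moon from Earth?",
    outputs="About 384,400 km on average.",
    reference_outputs="Roughly 384,000 km.",
)
print(result)  # e.g. {"key": "correctness", "score": True, "comment": "..."}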

Create your own LLM-as-a-judge evaluator

For complete control over the evaluator logic, create your own LLM-as-a-judge evaluator and run it with the LangSmith SDK (Python / TypeScript).

Requires langsmith>=0.2.0
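For example, one way to install the packages used in the snippet below (assuming pip):

pip install -U "langsmith>=0.2.0" openai pydantic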

from langsmith import evaluate, traceable, wrappers, Client
from openai import OpenAI
# Assumes you've installed pydantic
from pydantic import BaseModel

# Optionally wrap the OpenAI client to trace all model calls.
oai_client = wrappers.wrap_openai(OpenAI())

def valid_reasoning(inputs: dict, outputs: dict) -> bool:
    """Use an LLM to judge if the reasoning and the answer are consistent."""

    instructions = """\
Given the following question, answer, and reasoning, determine if the reasoning \
for the answer is logically valid and consistent with the question and the answer.\
"""

    class Response(BaseModel):
        reasoning_is_valid: bool

    msg = f"Question: {inputs['question']}\nAnswer: {outputs['answer']}\nReasoning: {outputs['reasoning']}"
    response = oai_client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": msg},
        ],
        response_format=Response,
    )
    return response.choices[0].message.parsed.reasoning_is_valid

# Optionally add the 'traceable' decorator to trace the inputs/outputs of this function.
@traceable
def dummy_app(inputs: dict) -> dict:
    return {"answer": "hmm i'm not sure", "reasoning": "i didn't understand the question"}

ls_client = Client()
dataset = ls_client.create_dataset("big questions")
examples = [
    {"inputs": {"question": "how will the universe end"}},
    {"inputs": {"question": "are we alone"}},
]
ls_client.create_examples(dataset_id=dataset.id, examples=examples)

results = evaluate(
    dummy_app,
    data=dataset,
    evaluators=[valid_reasoning],
)
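As a sketch (not part of the snippet above), a custom evaluator can also return a dictionary with an explicit feedback key, score, and comment instead of a bare boolean, which is useful when you want the judge's explanation to show up alongside the score in LangSmith. The variant below reuses the module-level oai_client defined earlier; the Verdict model and the system prompt wording are illustrative.

# Sketch (assumption): returning a dict lets you set the feedback key and attach
# the judge's explanation as a comment. Reuses `oai_client` from above.
class Verdict(BaseModel):
    reasoning_is_valid: bool
    explanation: str

def valid_reasoning_with_comment(inputs: dict, outputs: dict) -> dict:
    """LLM judge that also surfaces a short explanation of its verdict."""
    msg = f"Question: {inputs['question']}\nAnswer: {outputs['answer']}\nReasoning: {outputs['reasoning']}"
    response = oai_client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Judge whether the reasoning is logically consistent with the question and the answer. Explain briefly."},
            {"role": "user", "content": msg},
        ],
        response_format=Verdict,
    )
    verdict = response.choices[0].message.parsed
    return {
        "key": "valid_reasoning",
        "score": verdict.reasoning_is_valid,
        "comment": verdict.explanation,
    }

# Run another experiment that applies both evaluators.
results = evaluate(
    dummy_app,
    data=dataset,
    evaluators=[valid_reasoning, valid_reasoning_with_comment],
)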

For more information on how to write custom evaluators, see here.

