
Aim

Aim makes it super easy to visualize and debug LangChain executions. Aim tracks the inputs and outputs of LLMs and tools, as well as the actions of agents.

With Aim, you can easily debug and examine an individual execution:

Additionally, you have the option to compare multiple executions side by side:

Aim is fully open source; learn more about Aim on GitHub.

Let's move forward and see how to enable and configure the Aim callback.

Tracking LangChain Executions with Aim

In this notebook we will explore three usage scenarios. To start off, we will install the necessary packages and import certain modules. Subsequently, we will configure two environment variables, which can be set either within the Python script or through the terminal.

%pip install --upgrade --quiet aim
%pip install --upgrade --quiet langchain
%pip install --upgrade --quiet langchain-openai
%pip install --upgrade --quiet google-search-results
import os
from datetime import datetime

from langchain_community.callbacks import AimCallbackHandler
from langchain_core.callbacks import StdOutCallbackHandler
from langchain_openai import OpenAI

Our examples use a GPT model as the LLM, and OpenAI offers an API for this purpose. You can obtain a key from the following link: https://platform.openai.com/account/api-keys

We will use SerpApi to retrieve search results from Google. To acquire the SerpApi key, please visit https://serpapi.com/manage-api-key

os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
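Alternatively, the same two variables can be exported in the terminal before launching the notebook; a minimal sketch (substitute your own keys for the placeholders):

```shell
# Export the API keys so any child process, including Jupyter, inherits them
export OPENAI_API_KEY="..."
export SERPAPI_API_KEY="..."
```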

The event methods of AimCallbackHandler accept the LangChain module or agent as input and log at least the prompts and generated results, as well as the serialized version of the LangChain module, to the designated Aim run.

session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
aim_callback = AimCallbackHandler(
    repo=".",
    experiment_name="scenario 1: OpenAI LLM",
)

callbacks = [StdOutCallbackHandler(), aim_callback]
llm = OpenAI(temperature=0, callbacks=callbacks)

The flush_tracker function is used to log LangChain assets to Aim. By default, the session is reset rather than terminated outright.
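The two modes can be sketched as follows, reusing the aim_callback and llm objects defined above (this is an illustrative fragment, not standalone code; it needs an Aim repo and a prior run to do anything):

```python
# Default mode: reset the session and continue tracking under a new
# experiment name within the same Aim repo (reset=True, finish=False).
aim_callback.flush_tracker(
    langchain_asset=llm,
    experiment_name="next experiment",
)

# Terminating mode: close the Aim run entirely once tracking is complete.
aim_callback.flush_tracker(langchain_asset=llm, reset=False, finish=True)
```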

Scenario 1

In the first scenario, we will use the OpenAI LLM.

# scenario 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
aim_callback.flush_tracker(
    langchain_asset=llm,
    experiment_name="scenario 2: Chain with multiple SubChains on multiple generations",
)

Scenario 2

The second scenario involves chaining with multiple SubChains across multiple generations.

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
# scenario 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)

test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
    {"title": "the phenomenon behind the remarkable speed of cheetahs"},
    {"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
aim_callback.flush_tracker(
    langchain_asset=synopsis_chain, experiment_name="scenario 3: Agent with Tools"
)

Scenario 3

The third scenario involves an agent with tools.

from langchain.agents import AgentType, initialize_agent, load_tools
# scenario 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=callbacks,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)


> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ...
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078

Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.

> Finished chain.