How to create a dynamic (self-constructing) chain
Prerequisites
This guide assumes familiarity with the following:
Sometimes we want to construct parts of a chain at runtime, depending on the chain's inputs (routing is the most common example of this). We can create dynamic chains using a very useful property of RunnableLambda: if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's look at an example.
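Before the LangChain example, the core mechanic can be sketched in plain Python: a function decides at call time which callable applies, returns it, and the caller invokes whatever comes back. This is only a loose analogy of how a Runnable returned from a RunnableLambda gets invoked; all names here are made up for illustration:

```python
def route(input_: dict):
    # Decide at runtime which "sub-chain" (here: a plain function) applies.
    if input_.get("chat_history"):
        # History present: pretend to rewrite the question as a standalone one.
        return lambda d: f"rewritten: {d['question']}"
    # No history: just pass the question through unchanged.
    return lambda d: d["question"]

def run(input_: dict) -> str:
    # The caller invokes whatever callable route() returned, mirroring how
    # a Runnable returned from a RunnableLambda is itself invoked.
    return route(input_)(input_)

print(run({"question": "what about egypt", "chat_history": [("human", "hi")]}))
print(run({"question": "what about egypt"}))
```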
Select a chat model:
pip install -qU "langchain[openai]"
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain.chat_models import init_chat_model
llm = init_chat_model("gpt-4o-mini", model_provider="openai")
# The example outputs below were generated with an Anthropic model:
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
API Reference: ChatAnthropic
from operator import itemgetter
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough, chain
contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()
qa_instructions = (
"""Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)
@chain
def contextualize_if_needed(input_: dict) -> Runnable:
    if input_.get("chat_history"):
        # NOTE: This is returning another Runnable, not an actual output.
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")
@chain
def fake_retriever(input_: dict) -> str:
    return "egypt's population in 2024 is about 111 million"
full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)
full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million."
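For intuition, `RunnablePassthrough.assign` behaves roughly like merging computed keys into the input dict while keeping the existing keys. A rough plain-Python analogy (the `assign` helper here is made up for illustration, not LangChain's implementation):

```python
def assign(input_: dict, **fns) -> dict:
    # Roughly what RunnablePassthrough.assign does: keep the original keys
    # and add (or overwrite) one key per keyword argument, each computed
    # from the whole input dict.
    return {**input_, **{key: fn(input_) for key, fn in fns.items()}}

state = {"question": "what about egypt", "chat_history": []}
# First .assign overwrites "question" (the contextualize step),
# the second adds "context" (the retriever step).
state = assign(state, question=lambda d: d["question"])
state = assign(state, context=lambda d: "egypt's population in 2024 is about 111 million")
print(sorted(state))
```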
The key here is that contextualize_if_needed returns another Runnable, not an actual output. When the full chain is executed, this returned Runnable is itself run.
Looking at the trace, we can see that, since we passed in a chat history, the contextualize_question chain was executed as part of the full chain: https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r
Note that the streaming, batching, and other capabilities of the returned Runnable are all preserved:
for chunk in contextualize_if_needed.stream(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
):
    print(chunk)
print(chunk)
What
is
the
population
of
Egypt
?
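Streaming survives because the outer chain forwards the chosen Runnable's stream chunk by chunk, much like one Python generator delegating to another. This is only a loose analogy, not LangChain internals:

```python
def inner_stream():
    # Stands in for the stream of the Runnable selected at runtime.
    yield from ["What", "is", "the", "question", "?"]

def outer_stream(use_inner: bool):
    # The outer "chain" delegates to whichever stream was selected,
    # so the caller still receives chunks incrementally.
    if use_inner:
        yield from inner_stream()
    else:
        yield "passthrough"

for chunk in outer_stream(True):
    print(chunk)
```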