
How to add fallbacks to a runnable

When working with language models, you may often encounter issues from the underlying APIs, whether these be rate limiting or downtime. Therefore, as you move your LLM applications into production it becomes more and more important to safeguard against these. That's why we've introduced the concept of fallbacks.

A fallback is an alternative plan that may be used in an emergency.
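Conceptually, the pattern is simple: try a primary callable, and if it raises, hand the same input to a backup. The sketch below is a hand-rolled, plain-Python illustration of the idea, not LangChain's implementation; `with_fallbacks`, `flaky`, and `backup` are illustrative names only.

```python
def with_fallbacks(primary, fallbacks):
    """Return a callable that tries `primary`, then each fallback in order."""

    def invoke(*args, **kwargs):
        errors = []
        for fn in [primary, *fallbacks]:
            try:
                return fn(*args, **kwargs)
            except Exception as e:  # real code should catch specific error types
                errors.append(e)
        raise errors[-1]  # every option failed; re-raise the last error

    return invoke


def flaky(prompt):
    # Stands in for a primary model whose API is currently failing
    raise RuntimeError("rate limit")


def backup(prompt):
    # Stands in for the fallback model
    return f"backup answer to: {prompt}"


safe = with_fallbacks(flaky, [backup])
print(safe("Why did the chicken cross the road?"))
# -> backup answer to: Why did the chicken cross the road?
```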

Crucially, fallbacks can be applied not only on the LLM level but on the whole runnable level. This is important because oftentimes different models require different prompts. So if your call to OpenAI fails, you don't just want to send the same prompt to Anthropic - you probably want to use a different prompt template and send a different version there.

Fallback for LLM API Errors

This is maybe the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit rate limits, any number of things. Therefore, using fallbacks can help protect against these types of issues.

IMPORTANT: By default, many of the LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks. Otherwise the first wrapper will keep on retrying and not failing.
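To see why retries get in the way, here is a toy simulation; `FakeClient` is hypothetical and not part of any library, it just mimics a wrapper whose API call always fails and which retries internally up to `max_retries` extra times.

```python
class FakeClient:
    """Simulated LLM client whose underlying API call always fails."""

    def __init__(self, max_retries):
        self.max_retries = max_retries
        self.calls = 0

    def invoke(self, prompt):
        for _ in range(self.max_retries + 1):
            self.calls += 1  # each attempt hits the (always-failing) API
        raise RuntimeError("rate limit")


no_retry = FakeClient(max_retries=0)
try:
    no_retry.invoke("hi")
except RuntimeError:
    pass
print(no_retry.calls)  # -> 1: fails fast, so a fallback can take over immediately

retrying = FakeClient(max_retries=5)
try:
    retrying.invoke("hi")
except RuntimeError:
    pass
print(retrying.calls)  # -> 6: the wrapper burns attempts before a fallback would ever run
```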

%pip install --upgrade --quiet  langchain langchain-openai
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
API Reference: ChatAnthropic | ChatOpenAI

First, let's mock what happens if we hit a RateLimitError from OpenAI

from unittest.mock import patch

import httpx
from openai import RateLimitError

request = httpx.Request("GET", "/")
response = httpx.Response(200, request=request)
error = RateLimitError("rate limit", response=response, body="")
# Note that we set max_retries = 0 to avoid retrying on RateLimits, etc
openai_llm = ChatOpenAI(model="gpt-4o-mini", max_retries=0)
anthropic_llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm = openai_llm.with_fallbacks([anthropic_llm])
# Let's use just the OpenAI LLM first, to show that we run into an error
with patch("openai.resources.chat.completions.Completions.create", side_effect=error):
    try:
        print(openai_llm.invoke("Why did the chicken cross the road?"))
    except RateLimitError:
        print("Hit error")
Hit error
# Now let's try with fallbacks to Anthropic
with patch("openai.resources.chat.completions.Completions.create", side_effect=error):
    try:
        print(llm.invoke("Why did the chicken cross the road?"))
    except RateLimitError:
        print("Hit error")
content=' I don\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\n\n- To get to the other side!\n\n- It was too chicken to just stand there. \n\n- It wanted a change of scenery.\n\n- It wanted to show the possum it could be done.\n\n- It was on its way to a poultry farmers\' convention.\n\nThe joke plays on the double meaning of "the other side" - literally crossing the road to the other side, or the "other side" meaning the afterlife. So it\'s an anti-joke, with a silly or unexpected pun as the answer.' additional_kwargs={} example=False

We can use our "LLM with Fallbacks" as we would a normal LLM.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're a nice assistant who always includes a compliment in your response",
        ),
        ("human", "Why did the {animal} cross the road"),
    ]
)
chain = prompt | llm
with patch("openai.resources.chat.completions.Completions.create", side_effect=error):
    try:
        print(chain.invoke({"animal": "kangaroo"}))
    except RateLimitError:
        print("Hit error")
API Reference: ChatPromptTemplate
content=" I don't actually know why the kangaroo crossed the road, but I can take a guess! Here are some possible reasons:\n\n- To get to the other side (the classic joke answer!)\n\n- It was trying to find some food or water \n\n- It was trying to find a mate during mating season\n\n- It was fleeing from a predator or perceived threat\n\n- It was disoriented and crossed accidentally \n\n- It was following a herd of other kangaroos who were crossing\n\n- It wanted a change of scenery or environment \n\n- It was trying to reach a new habitat or territory\n\nThe real reason is unknown without more context, but hopefully one of those potential explanations does the joke justice! Let me know if you have any other animal jokes I can try to decipher." additional_kwargs={} example=False

Fallbacks for Sequences

We can also create fallbacks for sequences, that are sequences themselves. Here we do that with two different models: ChatOpenAI and then normal OpenAI (which does not use a chat model). Because OpenAI is NOT a chat model, you likely want a different prompt.

# First let's create a chain with a ChatModel
# We add in a string output parser here so the outputs between the two are the same type
from langchain_core.output_parsers import StrOutputParser

chat_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're a nice assistant who always includes a compliment in your response",
        ),
        ("human", "Why did the {animal} cross the road"),
    ]
)
# Here we're going to use a bad model name to easily create a chain that will error
chat_model = ChatOpenAI(model="gpt-fake")
bad_chain = chat_prompt | chat_model | StrOutputParser()
API Reference: StrOutputParser
# Now let's create a chain with the normal OpenAI model
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt_template = """Instructions: You should always include a compliment in your response.

Question: Why did the {animal} cross the road?"""
prompt = PromptTemplate.from_template(prompt_template)
llm = OpenAI()
good_chain = prompt | llm
API Reference: PromptTemplate | OpenAI
# We can now create a final chain which combines the two
chain = bad_chain.with_fallbacks([good_chain])
chain.invoke({"animal": "turtle"})
'\n\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.'

Fallbacks for Long Inputs

One of the big limiting factors of LLMs is their context window. Usually, you can count and keep track of the length of prompts before sending them to an LLM, but in situations where that is hard or complicated, you can fall back to a model with a longer context length.
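When counting the prompt length up front is feasible, the routing check can be very simple. The sketch below is a rough illustration: the 4-characters-per-token ratio is only a common rule of thumb (not a real tokenizer), and the model names are placeholders.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text
    return max(1, len(text) // 4)


def pick_model(prompt: str, context_limit: int = 4097) -> str:
    """Route to a longer-context model when the prompt looks too big."""
    # Hypothetical model names, standing in for short/long context models
    if estimate_tokens(prompt) <= context_limit:
        return "short-context-model"
    return "long-context-model"


print(pick_model("What is the next number: one, two"))  # -> short-context-model
print(pick_model("one, two, " * 3000))                  # -> long-context-model
```

When the check is hard to do reliably, letting a `with_fallbacks` chain catch the context-length error instead (as below) avoids maintaining your own token estimate.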

short_llm = ChatOpenAI()
long_llm = ChatOpenAI(model="gpt-3.5-turbo-16k")
llm = short_llm.with_fallbacks([long_llm])
inputs = "What is the next number: " + ", ".join(["one", "two"] * 3000)
try:
    print(short_llm.invoke(inputs))
except Exception as e:
    print(e)
This model's maximum context length is 4097 tokens. However, your messages resulted in 12012 tokens. Please reduce the length of the messages.
try:
    print(llm.invoke(inputs))
except Exception as e:
    print(e)
content='The next number in the sequence is two.' additional_kwargs={} example=False

Fallback to Better Model

Often times we ask models to output in a specific format (like JSON). Models like GPT-3.5 can do this OK, but sometimes struggle. This naturally points to fallbacks - we can try with GPT-3.5 (faster, cheaper), but then if parsing fails we can use GPT-4.

from langchain.output_parsers import DatetimeOutputParser
prompt = ChatPromptTemplate.from_template(
    "what time was {event} (in %Y-%m-%dT%H:%M:%S.%fZ format - only return this value)"
)
# In this case we are going to do the fallbacks on the LLM + output parser level
# Because the error will get raised in the OutputParser
openai_35 = ChatOpenAI() | DatetimeOutputParser()
openai_4 = ChatOpenAI(model="gpt-4") | DatetimeOutputParser()
only_35 = prompt | openai_35
fallback_4 = prompt | openai_35.with_fallbacks([openai_4])
try:
    print(only_35.invoke({"event": "the superbowl in 1994"}))
except Exception as e:
    print(f"Error: {e}")
Error: Could not parse datetime string: The Super Bowl in 1994 took place on January 30th at 3:30 PM local time. Converting this to the specified format (%Y-%m-%dT%H:%M:%S.%fZ) results in: 1994-01-30T15:30:00.000Z
try:
    print(fallback_4.invoke({"event": "the superbowl in 1994"}))
except Exception as e:
    print(f"Error: {e}")
1994-01-30 15:30:00