How to parse text from message objects
Prerequisites
This guide assumes familiarity with the following concepts: chat models, messages, output parsers, and the LangChain Expression Language (LCEL).
LangChain message objects support content in a variety of formats, including text, multimodal data, and lists of content block dictionaries.
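As a quick illustration (a minimal sketch, assuming langchain_core is installed), an AIMessage can carry its content either as a plain string or as a list of content block dictionaries:

from langchain_core.messages import AIMessage

# Content as a plain string
text_message = AIMessage(content="Hello!")

# Content as a list of content block dictionaries
# (the block values here are illustrative, not real model output)
block_message = AIMessage(
    content=[
        {"type": "text", "text": "Hello!"},
        {"type": "tool_use", "id": "toolu_abc", "name": "get_weather", "input": {"location": "SF"}},
    ]
)

print(type(text_message.content))   # <class 'str'>
print(type(block_message.content))  # <class 'list'>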
The format of a chat model's response content can depend on the provider. For example, Anthropic's chat models return string content for a typical string input:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke("Hello")
response.content
API Reference: ChatAnthropic
'Hi there! How are you doing today? Is there anything I can help you with?'
However, when tool calls are generated, the response content is structured into content blocks that convey the model's reasoning process:
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the weather from a location."""
    return "Sunny."

llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
response.content
API Reference: tool
[{'text': "I'll help you get the current weather for San Francisco, California. Let me check that for you right away.",
'type': 'text'},
{'id': 'toolu_015PwwcKxWYctKfY3pruHFyy',
'input': {'location': 'San Francisco, CA'},
'name': 'get_weather',
'type': 'tool_use'}]
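Without a parser, pulling the human-readable text out of such a list means filtering for blocks of type "text" yourself, for example (a sketch reusing the response object from above):

# Manually collect the text portions from the content block list
text_parts = [
    block["text"]
    for block in response.content
    if isinstance(block, dict) and block.get("type") == "text"
]
print(" ".join(text_parts))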
To automatically parse text from message objects regardless of the format of the underlying content, we can use StrOutputParser. We can compose it with a chat model as follows:
from langchain_core.output_parsers import StrOutputParser
chain = llm_with_tools | StrOutputParser()
API Reference: StrOutputParser
StrOutputParser simplifies extracting text from message objects:
response = chain.invoke("What's the weather in San Francisco, CA?")
print(response)
I'll help you check the weather in San Francisco, CA right away.
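The same chain also works when the underlying content is a plain string (for example, an input that triggers no tool call); in that case the parser simply passes the string through (a quick check, reusing the chain defined above):

response = chain.invoke("Hello")
print(response)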
This is particularly useful in streaming contexts:
for chunk in chain.stream("What's the weather in San Francisco, CA?"):
    print(chunk, end="|")
|I'll| help| you get| the current| weather for| San Francisco, California|. Let| me retrieve| that| information for you.||||||||||
For more information, see the API reference.