
ChatContextual

This will help you get started with Contextual AI's Grounded Language Model chat models.

To learn more about Contextual AI, please visit our documentation.

This integration requires the contextual-client Python SDK. Learn more about it here.

Overview

This integration invokes Contextual AI's Grounded Language Model.

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatContextual | langchain-contextual | ❌ | beta | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |

Setup

To access Contextual models you'll need to create a Contextual AI account, get an API key, and install the langchain-contextual integration package.

Credentials

Head to app.contextual.ai to sign up for Contextual and generate an API key. Once you've done this, set the CONTEXTUAL_AI_API_KEY environment variable:

import getpass
import os

if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Contextual integration lives in the langchain-contextual package:

%pip install -qU langchain-contextual

Instantiation

Now we can instantiate our model object and generate chat completions.

The chat client can be instantiated with the following additional settings:

| Parameter | Type | Description | Default |
| :--- | :--- | :--- | :--- |
| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |
| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |
| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |
from langchain_contextual import ChatContextual

llm = ChatContextual(
    model="v1",  # defaults to `v1`
    api_key="",  # optional if the CONTEXTUAL_AI_API_KEY env var is set
    temperature=0,  # defaults to 0
    top_p=0.9,  # defaults to 0.9
    max_new_tokens=1024,  # defaults to 1024
)

Invocation

The Contextual Grounded Language Model accepts additional kwargs when calling the ChatContextual.invoke method.

These additional inputs are:

| Parameter | Type | Description |
| :--- | :--- | :--- |
| knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |
| system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |
| avoid_commentary | Optional[bool] | Optional (Defaults to False): Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |
# include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# provide your own knowledge from your knowledge base here as an array of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# invoke the GLM by providing the knowledge strings and an optional system prompt;
# to turn off the GLM's commentary, pass True to the `avoid_commentary` argument
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)
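Since avoid_commentary defaults to False, it can be instructive to compare the two behaviors side by side. The following is a minimal sketch (not part of the original docs) that reuses the messages and knowledge variables defined above and relies only on the documented invoke kwargs:

# compare a grounded answer with and without conversational commentary
with_commentary = llm.invoke(messages, knowledge=knowledge)
without_commentary = llm.invoke(messages, knowledge=knowledge, avoid_commentary=True)

print("With commentary:\n", with_commentary.content)
print("Without commentary:\n", without_commentary.content)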

Chaining

We can chain our Contextual model with an output parser.

from langchain_core.output_parsers import StrOutputParser

chain = llm | StrOutputParser()

chain.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)
API Reference: StrOutputParser
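If you'd rather have the chain itself carry the grounding arguments instead of passing them at each invocation, one option is to bind them to the model with LangChain's standard Runnable.bind API. This is a sketch under that assumption, not from the original docs; it reuses the llm, knowledge, system_prompt, and messages variables defined earlier:

# bind the grounding kwargs once so the chain can be invoked with messages alone
grounded_llm = llm.bind(knowledge=knowledge, system_prompt=system_prompt)
grounded_chain = grounded_llm | StrOutputParser()

print(grounded_chain.invoke(messages))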

API reference

For detailed documentation of all ChatContextual features and configurations, head to the Github page: https://github.com/ContextualAI/langchain-contextual