Contextual AI
Contextual AI provides state-of-the-art RAG components designed for accurate and reliable enterprise AI applications. Our LangChain integration exposes standalone API endpoints for our specialized models:
- Grounded Language Model (GLM): the world's most grounded language model, engineered to minimize hallucinations by prioritizing faithfulness to retrieved knowledge. The GLM delivers exceptional factual accuracy with inline attributions, making it ideal for enterprise RAG and agentic applications where reliability is critical.
- Instruction-Following Reranker: the first reranker that follows custom instructions, intelligently prioritizing documents by criteria such as recency, source, or document type. Our reranker outperforms competitors on industry benchmarks and resolves the challenge of conflicting information in enterprise knowledge bases.
Founded by the inventors of RAG technology, Contextual AI offers specialized components that help innovative teams accelerate the development of production-ready RAG agents that deliver responses with exceptional accuracy.
Grounded Language Model (GLM)
The Grounded Language Model (GLM) is engineered specifically to minimize hallucinations in enterprise RAG and agentic applications. The GLM delivers:
- Strong performance, with 88% factual accuracy on the FACTS benchmark (see benchmark results)
- Responses strictly grounded in the provided knowledge sources, with inline attributions (read product details)
- Precise source citations integrated directly into generated responses
- Prioritization of retrieved context over parametric knowledge (view technical overview)
- Explicit acknowledgment of uncertainty when information is unavailable
The GLM serves as a drop-in replacement for general-purpose LLMs in RAG pipelines, dramatically improving reliability for mission-critical enterprise applications.
Instruction-Following Reranker
The world's first Instruction-Following Reranker revolutionizes document ranking with unprecedented control and accuracy. Key capabilities include:
- Natural-language instructions for prioritizing documents by recency, source, metadata, and more (see how it works)
- Superior performance on the BEIR benchmark with a score of 61.2, significantly outperforming competitors (view benchmark data)
- Intelligent resolution of conflicting information across multiple knowledge sources
- Seamless integration as a drop-in replacement for existing rerankers
- Dynamic control over document ranking through natural-language commands
The reranker excels at handling enterprise knowledge bases containing potentially contradictory information, letting you specify exactly which sources should take precedence in various scenarios.
Using Contextual AI with LangChain
See here for details.
This integration lets you easily incorporate Contextual AI's GLM and Instruction-Following Reranker into your LangChain workflows. The GLM ensures your applications deliver strictly grounded responses, while the reranker significantly improves retrieval quality by intelligently prioritizing the most relevant documents.
Whether you are building applications for regulated industries or safety-conscious environments, Contextual AI delivers the accuracy, control, and reliability your enterprise use cases demand.
Start your free trial today to experience the most grounded language model and instruction-following reranker available for enterprise AI applications.
Grounded Language Model
```python
# Integrating the Grounded Language Model
import getpass
import os

from langchain_contextual import ChatContextual

# Set credentials
if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

# Initialize the Contextual llm
llm = ChatContextual(
    model="v1",
    api_key="",
)

# Include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# Provide your own knowledge from your knowledge base here as an array of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# Create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# Invoke the GLM, providing the knowledge strings and the optional system prompt.
# To turn off the GLM's commentary, pass True to the `avoid_commentary` argument.
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)
```

Output:

```text
According to the information available, there are two types of cats in the world:
1. Good cats
2. Best cats
```
Instruction-Following Reranker
```python
import getpass
import os

from langchain_contextual import ContextualRerank
from langchain_core.documents import Document

# Set credentials
if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

api_key = ""
model = "ctxl-rerank-en-v1-instruct"

# Initialize the reranker
compressor = ContextualRerank(
    model=model,
    api_key=api_key,
)

query = "What is the current enterprise pricing for the RTX 5090 GPU for bulk orders?"

# Natural-language criteria the reranker should follow when ranking
instruction = "Prioritize internal sales documents over market analysis reports. More recent documents should be weighted higher. Enterprise portal content supersedes distributor communications."

document_contents = [
    "Following detailed cost analysis and market research, we have implemented the following changes: AI training clusters will see a 15% uplift in raw compute performance, enterprise support packages are being restructured, and bulk procurement programs (100+ units) for the RTX 5090 Enterprise series will operate on a $2,899 baseline.",
    "Enterprise pricing for the RTX 5090 GPU bulk orders (100+ units) is currently set at $3,100-$3,300 per unit. This pricing for RTX 5090 enterprise bulk orders has been confirmed across all major distribution channels.",
    "RTX 5090 Enterprise GPU requires 450W TDP and 20% cooling overhead.",
]

metadata = [
    {
        "Date": "January 15, 2025",
        "Source": "NVIDIA Enterprise Sales Portal",
        "Classification": "Internal Use Only",
    },
    {"Date": "11/30/2023", "Source": "TechAnalytics Research Group"},
    {
        "Date": "January 25, 2025",
        "Source": "NVIDIA Enterprise Sales Portal",
        "Classification": "Internal Use Only",
    },
]

documents = [
    Document(page_content=content, metadata=metadata[i])
    for i, content in enumerate(document_contents)
]

# Rerank the documents against the query, following the instruction
reranked_documents = compressor.compress_documents(
    query=query,
    instruction=instruction,
    documents=documents,
)
```