UpTrain
UpTrain [github || website || docs] is an open-source platform to evaluate and improve LLM applications. It provides grades for 20+ preconfigured checks (covering language, code, embedding use cases), performs root cause analysis on instances of failure cases, and provides guidance for resolving them.
UpTrain Callback Handler
This notebook showcases the UpTrain callback handler seamlessly integrating into your pipeline, facilitating diverse evaluations. We have chosen a few evaluations that we deemed apt for evaluating the chains. These evaluations run automatically, with results displayed in the output. More details on UpTrain's evaluations can be found here.
Selected retrievers from Langchain are highlighted for demonstration:
1. Vanilla RAG:
RAG plays a crucial role in retrieving context and generating responses. To ensure its performance and response quality, we conduct the following evaluations:
- Context Relevance: Determines if the context extracted from the query is relevant to the response.
- Factual Accuracy: Assesses if the LLM is hallucinating or providing incorrect information.
- Response Completeness: Checks if the response contains all the information requested by the query.
2. Multi Query Generation:
The MultiQueryRetriever creates multiple variants of a question that have a similar meaning to the original question. Given the added complexity, we include the previous evaluations and add:
- Multi Query Accuracy: Assures that the multi-queries generated mean the same as the original query.
3. Context Compression and Reranking:
Reranking involves reordering the nodes based on their relevance to the query and choosing the top n nodes. Since the number of nodes can reduce once the reranking is complete, we perform the following evaluations:
- Context Reranking: Checks if the order of re-ranked nodes is more relevant to the query than the original order.
- Context Conciseness: Checks if the reduced number of nodes still provides all the required information.
Together, these evaluations ensure the robustness and effectiveness of the RAG, MultiQueryRetriever, and reranking processes in the chain.
Install Dependencies
%pip install -qU langchain langchain_openai langchain-community uptrain faiss-cpu flashrank
Note: you can also install faiss-gpu instead of faiss-cpu if you want to use the GPU-enabled version of the library.
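For reference, a GPU install might look like the following (a sketch only; the exact package or wheel to use can depend on your CUDA setup):
# Hypothetical alternative to the faiss-cpu install above, assuming a CUDA-capable environment
%pip install -qU faiss-gpu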
Import Libraries
from getpass import getpass
from langchain.chains import RetrievalQA
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.callbacks.uptrain_callback import UpTrainCallbackHandler
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables.passthrough import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import (
RecursiveCharacterTextSplitter,
)
Load the documents
loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
Split the document into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = text_splitter.split_documents(documents)
Create the retriever
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(chunks, embeddings)
retriever = db.as_retriever()
Define the LLM
llm = ChatOpenAI(temperature=0, model="gpt-4")
Setup
UpTrain provides you with:
- Dashboards with advanced drill-down and filtering options
- Insights and common topics among failing cases
- Observability and real-time monitoring of production data
- Regression testing via seamless integration with your CI/CD pipelines
You can choose between the following options for evaluating with UpTrain:
1. UpTrain's Open-Source Software (OSS):
You can use the open-source evaluation service to evaluate your model. In this case, you will need to provide an OpenAI API key. UpTrain uses the GPT models to evaluate the responses generated by the LLM. You can get yours here.
In order to view your evaluations in the UpTrain dashboard, you will need to set it up by running the following commands in your terminal:
git clone https://github.com/uptrain-ai/uptrain
cd uptrain
bash run_uptrain.sh
This will start the UpTrain dashboard on your local machine. You can access it at http://localhost:3000/dashboard.
Parameters:
- key_type="openai"
- api_key="OPENAI_API_KEY"
- project_name="PROJECT_NAME"
2. UpTrain Managed Service and Dashboards:
Alternatively, you can use UpTrain's managed service to evaluate your model. You can create a free UpTrain account here and get free trial credits. If you want more trial credits, book a call with the maintainers of UpTrain here.
The benefits of using the managed service are:
- No need to set up the UpTrain dashboard on your local machine.
- Access to many LLMs without needing their API keys.
Once the evaluations are performed, you can view them in the UpTrain dashboard at https://dashboard.uptrain.ai/dashboard
Parameters:
- key_type="uptrain"
- api_key="UPTRAIN_API_KEY"
- project_name="PROJECT_NAME"
Note: The project_name is the project name under which the evaluations you perform will be shown in the UpTrain dashboard; a sketch of how these parameters might be passed to the callback handler follows this note.
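As a rough sketch of how the parameters listed above could be wired into the callback handler used later in this notebook (hedged: project_name is assumed here to be an accepted keyword argument based on the parameter lists above; check the UpTrainCallbackHandler signature in your installed version of langchain_community):
# Sketch only: key_type/api_key mirror the two options described above;
# project_name is an assumed keyword that names the project in the UpTrain dashboard.
uptrain_callback = UpTrainCallbackHandler(
    key_type="uptrain",  # or "openai" for the open-source option
    api_key="UPTRAIN_API_KEY",  # replace with your actual key
    project_name="PROJECT_NAME",
)
config = {"callbacks": [uptrain_callback]}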
Set the API key
The notebook will prompt you to enter the API key. You can choose between the OpenAI API key or the UpTrain API key by changing the key_type parameter below.
KEY_TYPE = "openai" # or "uptrain"
API_KEY = getpass()
1. Vanilla RAG
The UpTrain callback handler will automatically capture the query, context, and response once generated, and will run the following three evaluations (graded from 0 to 1) on the response:
- Context Relevance: Checks if the context extracted from the query is relevant to the response.
- Factual Accuracy: Checks how factually accurate the response is.
- Response Completeness: Checks if the response contains all the information asked for by the query.
# Create the RAG prompt
template = """Answer the question based only on the following context, which can include text and tables:
{context}
Question: {question}
"""
rag_prompt_text = ChatPromptTemplate.from_template(template)
# Create the chain
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| rag_prompt_text
| llm
| StrOutputParser()
)
# Create the uptrain callback handler
uptrain_callback = UpTrainCallbackHandler(key_type=KEY_TYPE, api_key=API_KEY)
config = {"callbacks": [uptrain_callback]}
# Invoke the chain with a query
query = "What did the president say about Ketanji Brown Jackson"
docs = chain.invoke(query, config=config)
2024-04-17 17:03:44.969 | INFO | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:05.809 | INFO | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
Question: What did the president say about Ketanji Brown Jackson
Response: The president mentioned that he had nominated Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyer’s legacy of excellence. He also mentioned that she is a former top litigator in private practice, a former federal public defender, and comes from a family of public school educators and police officers. He described her as a consensus builder and noted that since her nomination, she has received a broad range of support from various groups, including the Fraternal Order of Police and former judges appointed by both Democrats and Republicans.
Context Relevance Score: 1.0
Factual Accuracy Score: 1.0
Response Completeness Score: 1.0
2. Multi Query Generation
The MultiQueryRetriever is used to tackle the problem that the RAG pipeline might not return the best set of documents for a given query. It generates multiple queries that mean the same as the original query and then fetches documents for each of them.
To evaluate this retriever, UpTrain will run the following evaluation:
- Multi Query Accuracy: Checks if the multi-queries generated mean the same as the original query.
# Create the retriever
multi_query_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)
# Create the uptrain callback
uptrain_callback = UpTrainCallbackHandler(key_type=KEY_TYPE, api_key=API_KEY)
config = {"callbacks": [uptrain_callback]}
# Create the RAG prompt
template = """Answer the question based only on the following context, which can include text and tables:
{context}
Question: {question}
"""
rag_prompt_text = ChatPromptTemplate.from_template(template)
chain = (
{"context": multi_query_retriever, "question": RunnablePassthrough()}
| rag_prompt_text
| llm
| StrOutputParser()
)
# Invoke the chain with a query
question = "What did the president say about Ketanji Brown Jackson"
docs = chain.invoke(question, config=config)
2024-04-17 17:04:10.675 | INFO | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:16.804 | INFO | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
Question: What did the president say about Ketanji Brown Jackson
Multi Queries:
- How did the president comment on Ketanji Brown Jackson?
- What were the president's remarks regarding Ketanji Brown Jackson?
- What statements has the president made about Ketanji Brown Jackson?
Multi Query Accuracy Score: 0.5
2024-04-17 17:04:22.027 | INFO | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:44.033 | INFO | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
Question: What did the president say about Ketanji Brown Jackson
Response: The president mentioned that he had nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyer’s legacy of excellence. He also mentioned that since her nomination, she has received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
Context Relevance Score: 1.0
Factual Accuracy Score: 1.0
Response Completeness Score: 1.0
3. Context Compression and Reranking
The reranking process involves reordering the nodes based on their relevance to the query and choosing the top n nodes. Since the number of nodes can reduce once the reranking is complete, we perform the following evaluations:
- Context Reranking: Checks if the order of re-ranked nodes is more relevant to the query than the original order.
- Context Conciseness: Checks if the reduced number of nodes still provides all the required information.
# Create the retriever
compressor = FlashrankRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
# Create the chain
chain = RetrievalQA.from_chain_type(llm=llm, retriever=compression_retriever)
# Create the uptrain callback
uptrain_callback = UpTrainCallbackHandler(key_type=KEY_TYPE, api_key=API_KEY)
config = {"callbacks": [uptrain_callback]}
# Invoke the chain with a query
query = "What did the president say about Ketanji Brown Jackson"
result = chain.invoke(query, config=config)
2024-04-17 17:04:46.462 | INFO | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:53.561 | INFO | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard