
Yuan2.0

Yuan2.0 is a new-generation foundation large language model developed by IEIT Systems. All three models have been released: Yuan 2.0-102B, Yuan 2.0-51B, and Yuan 2.0-2B, along with scripts for pretraining, fine-tuning, and inference services for other developers. Building on Yuan1.0, Yuan2.0 uses a broader range of high-quality pretraining data and instruction fine-tuning datasets to strengthen the model's capabilities in semantic understanding, mathematics, reasoning, code, knowledge, and more.

This example shows how to use LangChain to interact with a Yuan2.0 (2B/51B/102B) inference model to generate text.

Yuan2.0 provides an inference service: users simply send a request to the inference API to get a result. See the Yuan2.0 Inference-Server documentation for details.
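As a rough illustration of what "requesting the inference API" means, the sketch below builds a JSON request body for such an endpoint. The field names (`prompt`, `max_tokens`) are assumptions for illustration only, not the documented wire format of the Yuan2.0 server; consult the Yuan2.0 Inference-Server documentation for the actual schema.

```python
import json

# Hypothetical payload builder for a Yuan2.0-style inference endpoint.
# The real field names are defined by the server, not by this sketch.
def build_request(prompt: str, max_tokens: int = 2048) -> bytes:
    """Serialize a prompt into a JSON request body (assumed schema)."""
    payload = {"prompt": prompt, "max_tokens": max_tokens}
    return json.dumps(payload, ensure_ascii=False).encode("utf-8")

# The resulting bytes would be POSTed to the infer_api URL,
# e.g. with urllib.request or the requests library.
body = build_request("Hello, Yuan2.0")
```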

from langchain_community.llms.yuan2 import Yuan2
API Reference: Yuan2
# default infer_api for a locally deployed Yuan2.0 inference server
infer_api = "http://127.0.0.1:8000/yuan"

# when running behind a proxy, exclude local addresses so the endpoint is reached directly
# import os
# os.environ["no_proxy"]="localhost,127.0.0.1,::1"

yuan_llm = Yuan2(
    infer_api=infer_api,
    max_tokens=2048,
    temp=1.0,
    top_p=0.9,
    use_history=False,
)

# Turn on use_history only when you want Yuan2.0 to keep track of the conversation history
# and send the accumulated context to the backend model API, which makes it stateful.
# By default it is stateless.
# yuan_llm.use_history = True
question = "请介绍一下中国。"  # "Please introduce China."
print(yuan_llm.invoke(question))
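To make the stateless-versus-stateful distinction behind `use_history` concrete, here is a minimal stand-alone sketch of how accumulated context could be assembled on the client side. This is a plain-Python illustration under assumed behavior, not the actual implementation of the `Yuan2` class.

```python
class HistoryBuffer:
    """Illustrative only: accumulate turns so each request can carry full context."""

    def __init__(self, use_history: bool = False):
        self.use_history = use_history
        self.turns: list[str] = []

    def build_prompt(self, question: str) -> str:
        if not self.use_history:
            # Stateless mode: each call sends only the current question.
            return question
        # Stateful mode: append the new question and send the whole conversation.
        self.turns.append(question)
        return "\n".join(self.turns)


buf = HistoryBuffer(use_history=True)
buf.build_prompt("Hello")
print(buf.build_prompt("Tell me more"))  # the prompt now includes both turns
```

With `use_history=False` (the default shown above), each `invoke` call is independent; with history enabled, the backend receives an ever-growing context, so watch the `max_tokens` budget.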