ChatSambaNovaCloud
This guide will help you get started with SambaNova Cloud chat models. For detailed documentation of all ChatSambaNovaCloud features and configurations, head to the API reference.
SambaNova's SambaNova Cloud is a platform for performing inference with open-source models.
Overview
Integration details
| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
|---|---|---|---|---|---|---|
| ChatSambaNovaCloud | langchain-sambanova | ❌ | ❌ | ❌ | | |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To access ChatSambaNovaCloud models, you will need to create a SambaNovaCloud account, get an API key, and install the langchain_sambanova integration package.
pip install langchain-sambanova
Credentials
Get an API key from cloud.sambanova.ai and add it to your environment variables:
export SAMBANOVA_API_KEY="your-api-key-here"
import getpass
import os
if not os.getenv("SAMBANOVA_API_KEY"):
    os.environ["SAMBANOVA_API_KEY"] = getpass.getpass(
        "Enter your SambaNova Cloud API key: "
    )
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
Installation
The LangChain SambaNovaCloud integration lives in the langchain_sambanova package:
%pip install -qU langchain-sambanova
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_sambanova import ChatSambaNovaCloud
llm = ChatSambaNovaCloud(
    model="Meta-Llama-3.3-70B-Instruct",
    max_tokens=1024,
    temperature=0.7,
    top_p=0.01,
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. "
        "Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'usage': {'acceptance_rate': 7, 'completion_tokens': 8, 'completion_tokens_after_first_per_sec': 195.0204119588971, 'completion_tokens_after_first_per_sec_first_ten': 618.3422770734173, 'completion_tokens_per_sec': 53.25837044790076, 'end_time': 1731535338.1864908, 'is_last_response': True, 'prompt_tokens': 55, 'start_time': 1731535338.0133238, 'time_to_first_token': 0.13727331161499023, 'total_latency': 0.15021112986973353, 'total_tokens': 63, 'total_tokens_per_sec': 419.4096672772185}, 'model_name': 'Meta-Llama-3.1-70B-Instruct', 'system_fingerprint': 'fastcoe', 'created': 1731535338}, id='f04b7c2c-bc46-47e0-9c6b-19a002e8f390')
print(ai_msg.content)
J'adore la programmation.
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} "
            "to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate
AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'usage': {'acceptance_rate': 2.3333333333333335, 'completion_tokens': 6, 'completion_tokens_after_first_per_sec': 106.06729752831038, 'completion_tokens_after_first_per_sec_first_ten': 204.92722183833433, 'completion_tokens_per_sec': 26.32497272023831, 'end_time': 1731535339.9997504, 'is_last_response': True, 'prompt_tokens': 50, 'start_time': 1731535339.7539687, 'time_to_first_token': 0.19864177703857422, 'total_latency': 0.22792046410696848, 'total_tokens': 56, 'total_tokens_per_sec': 245.6997453888909}, 'model_name': 'Meta-Llama-3.1-70B-Instruct', 'system_fingerprint': 'fastcoe', 'created': 1731535339}, id='dfe0bee6-b297-472e-ac9d-29906d162dcb')
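If you want a plain string instead of an AIMessage, you can extend the chain with an output parser. Here is a minimal sketch using langchain_core's StrOutputParser; this is an addition to the example above, not part of the SambaNova integration itself:
from langchain_core.output_parsers import StrOutputParser

# Appending a parser makes the chain return the message text directly
string_chain = prompt | llm | StrOutputParser()
string_chain.invoke(
    {
        "input_language": "English",
        "output_language": "Spanish",
        "input": "I love programming.",
    }
)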
Streaming
system = "You are a helpful assistant with pirate accent."
human = "I want to learn more about this animal: {animal}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chain = prompt | llm
for chunk in chain.stream({"animal": "owl"}):
    print(chunk.content, end="", flush=True)
Yer lookin' fer some knowledge about owls, eh? Alright then, matey, settle yerself down with a pint o' grog and listen close.
Owls be a fascinatin' lot, with their big round eyes and silent wings. They be birds o' prey, which means they hunt other creatures fer food. There be over 220 species o' owls, rangin' in size from the tiny Elf Owl (which be smaller than a parrot) to the Great Grey Owl (which be as big as a small eagle).
One o' the most interestin' things about owls be their eyes. They be huge, with some species havin' eyes that be as big as their brains! This lets 'em see in the dark, which be perfect fer nocturnal huntin'. They also have special feathers on their faces that help 'em hear better, and their ears be specially designed to pinpoint sounds.
Owls be known fer their silent flight, which be due to the special shape o' their wings. They be able to fly without makin' a sound, which be perfect fer sneakin' up on prey. They also be very agile, with some species able to fly through tight spaces and make sharp turns.
Some o' the most common species o' owls include:
* Barn Owl: A medium-sized owl with a heart-shaped face and a screechin' call.
* Tawny Owl: A large owl with a distinctive hootin' call and a reddish-brown plumage.
* Great Horned Owl: A big owl with ear tufts and a deep hootin' call.
* Snowy Owl: A white owl with a round face and a soft, hootin' call.
Owls be found all over the world, in a variety o' habitats, from forests to deserts. They be an important part o' many ecosystems, helpin' to keep populations o' small mammals and birds under control.
So there ye have it, matey! Owls be amazin' creatures, with their big eyes, silent wings, and sharp talons. Now go forth and spread the word about these fascinatin' birds!
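Streaming also works directly on the model object, without a prompt template. A minimal sketch, reusing the llm instantiated above:
# Stream tokens straight from the model; chunk.content holds the incremental text
for chunk in llm.stream("Tell me a one-line fact about owls."):
    print(chunk.content, end="", flush=True)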
Async
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "human",
            "what is the capital of {country}?",
        )
    ]
)
chain = prompt | llm
await chain.ainvoke({"country": "France"})
AIMessage(content='The capital of France is Paris.', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'usage': {'acceptance_rate': 1, 'completion_tokens': 7, 'completion_tokens_after_first_per_sec': 442.126212227688, 'completion_tokens_after_first_per_sec_first_ten': 0, 'completion_tokens_per_sec': 46.28540439646366, 'end_time': 1731535343.0321083, 'is_last_response': True, 'prompt_tokens': 42, 'start_time': 1731535342.8808727, 'time_to_first_token': 0.137664794921875, 'total_latency': 0.15123558044433594, 'total_tokens': 49, 'total_tokens_per_sec': 323.99783077524563}, 'model_name': 'Meta-Llama-3.1-70B-Instruct', 'system_fingerprint': 'fastcoe', 'created': 1731535342}, id='c4b8c714-df38-4206-9aa8-fc8231f7275a')
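The example above relies on top-level await, which works in a notebook. In a regular Python script, you would wrap the call with asyncio.run; a minimal sketch:
import asyncio


async def main() -> None:
    # ainvoke returns an AIMessage, just like the synchronous invoke
    result = await chain.ainvoke({"country": "Japan"})
    print(result.content)


asyncio.run(main())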
Async Streaming
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "human",
            "in less than {num_words} words, explain {topic}",
        )
    ]
)
chain = prompt | llm
async for chunk in chain.astream({"num_words": 30, "topic": "quantum computers"}):
    print(chunk.content, end="", flush=True)
Quantum computers use quantum bits (qubits) to process info, leveraging superposition and entanglement to perform calculations exponentially faster than classical computers for certain complex problems.
Tool calling
from datetime import datetime
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
@tool
def get_time(kind: str = "both") -> str:
    """Returns current date, current time or both.

    Args:
        kind(str): date, time or both

    Returns:
        str: current date, current time or both
    """
    if kind == "date":
        date = datetime.now().strftime("%m/%d/%Y")
        return f"Current date: {date}"
    elif kind == "time":
        time = datetime.now().strftime("%H:%M:%S")
        return f"Current time: {time}"
    else:
        date = datetime.now().strftime("%m/%d/%Y")
        time = datetime.now().strftime("%H:%M:%S")
        return f"Current date: {date}, Current time: {time}"
tools = [get_time]
def invoke_tools(tool_calls, messages):
    available_functions = {tool.name: tool for tool in tools}
    for tool_call in tool_calls:
        selected_tool = available_functions[tool_call["name"]]
        tool_output = selected_tool.invoke(tool_call["args"])
        print(f"Tool output: {tool_output}")
        messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
    return messages
llm_with_tools = llm.bind_tools(tools=tools)
messages = [
    HumanMessage(
        content="I need to schedule a meeting for two weeks from today. "
        "Can you tell me the exact date of the meeting?"
    )
]
response = llm_with_tools.invoke(messages)
while len(response.tool_calls) > 0:
    print(f"Intermediate model response: {response.tool_calls}")
    messages.append(response)
    messages = invoke_tools(response.tool_calls, messages)
    response = llm_with_tools.invoke(messages)
print(f"final response: {response.content}")
Intermediate model response: [{'name': 'get_time', 'args': {'kind': 'date'}, 'id': 'call_7352ce7a18e24a7c9d', 'type': 'tool_call'}]
Tool output: Current date: 11/13/2024
final response: The meeting should be scheduled for two weeks from November 13th, 2024.
Structured Outputs
from pydantic import BaseModel, Field
class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
Joke(setup='Why did the cat join a band?', punchline='Because it wanted to be the purr-cussionist!')
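If you also want the underlying AIMessage (for example, to inspect token usage), LangChain's with_structured_output generally accepts include_raw=True. A minimal sketch, assuming this integration follows the standard base chat model behavior:
# With include_raw=True the result is a dict with "raw" (the AIMessage),
# "parsed" (the Joke instance), and "parsing_error" keys
structured_llm_raw = llm.with_structured_output(Joke, include_raw=True)
result = structured_llm_raw.invoke("Tell me a joke about dogs")
print(result["parsed"])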
Input Image
multimodal_llm = ChatSambaNovaCloud(
    model="Llama-3.2-11B-Vision-Instruct",
    max_tokens=1024,
    temperature=0.7,
    top_p=0.01,
)
import base64
import httpx
image_url = (
    "https://images.pexels.com/photos/147411/italy-mountains-dawn-daybreak-147411.jpeg"
)
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image in 1 sentence"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ],
)
response = multimodal_llm.invoke([message])
print(response.content)
The weather in this image is a serene and peaceful atmosphere, with a blue sky and white clouds, suggesting a pleasant day with mild temperatures and gentle breezes.
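The same message format works for a local image file. A minimal sketch with a hypothetical path (local_image.jpg is a placeholder, not part of the original example):
import base64

# Hypothetical local file; replace with a real image path
with open("local_image.jpg", "rb") as f:
    local_image_data = base64.b64encode(f.read()).decode("utf-8")

local_message = HumanMessage(
    content=[
        {"type": "text", "text": "describe this image in 1 sentence"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{local_image_data}"},
        },
    ],
)
print(multimodal_llm.invoke([local_message]).content)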
API reference
For detailed documentation of all SambaNovaCloud features and configurations, head to the API reference: https://docs.sambanova.ai/cloud/docs/get-started/overview