SQLite as a Vector Store with SQLiteVec
This notebook covers how to get started with the SQLiteVec vector store.
SQLite-Vec is an SQLite extension designed for vector search. It emphasizes local-first operation and easy integration into applications without an external server. It is the successor to SQLite-VSS by the same author, is written in zero-dependency C, and is designed to be easy to build and use.
Setup
You need to install langchain-community with pip install -qU langchain-community to use this integration.
# You need to install sqlite-vec as a dependency.
%pip install --upgrade --quiet sqlite-vec
Credentials
SQLiteVec does not require any credentials when used as a vector store; the vector store is simply a SQLite file.
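Because the store is a single SQLite file, each embedding is persisted as a compact binary blob. As a rough illustration (not the library's own code), sqlite-vec accepts vectors as little-endian float32 blobs, which can be packed with Python's standard struct module:

```python
import struct


def serialize_f32(vector):
    """Pack a list of floats into a little-endian float32 blob,
    the compact binary format sqlite-vec accepts for vectors."""
    return struct.pack(f"<{len(vector)}f", *vector)


def deserialize_f32(blob):
    """Unpack a float32 blob back into a list of Python floats."""
    return list(struct.unpack(f"<{len(blob) // 4}f", blob))


blob = serialize_f32([0.1, 0.2, 0.3])
print(len(blob))  # 3 floats x 4 bytes = 12
```

In practice SQLiteVec handles this serialization for you; the sketch is only to show why the resulting database file stays small and self-contained.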
Initialization
from langchain_community.embeddings.sentence_transformer import (
    SentenceTransformerEmbeddings,
)
from langchain_community.vectorstores import SQLiteVec

embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
vector_store = SQLiteVec(
    table="state_union", db_file="/tmp/vec.db", embedding=embedding_function
)
API Reference: SentenceTransformerEmbeddings | SQLiteVec
Manage vector store
Add items to vector store
vector_store.add_texts(texts=["Ketanji Brown Jackson is awesome", "foo", "bar"])
Update items in vector store
Not supported yet.
Delete items from vector store
Not supported yet.
Query vector store
Query directly
data = vector_store.similarity_search("Ketanji Brown Jackson", k=4)
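similarity_search returns a list of LangChain Document objects ordered by similarity. A self-contained sketch of inspecting such results (a stand-in Document class is used here so the snippet runs without a populated store):

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """Stand-in for langchain_core.documents.Document."""

    page_content: str
    metadata: dict = field(default_factory=dict)


# shaped like the return value of vector_store.similarity_search("...", k=4)
data = [
    Document("Ketanji Brown Jackson is awesome"),
    Document("foo"),
    Document("bar"),
]

for i, doc in enumerate(data):
    print(f"{i}: {doc.page_content}")
```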
Query by turning into retriever
Not supported yet.
Usage for retrieval-augmented generation
For more information on how to use sqlite-vec for retrieval-augmented generation, refer to the documentation at https://alexgarcia.xyz/sqlite-vec/.
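A common retrieval-augmented generation pattern with this store is to retrieve the top-k chunks and assemble them into a prompt for the LLM. A minimal sketch of that assembly step (the retrieved texts are hard-coded placeholders here; the actual LLM call is omitted):

```python
def build_rag_prompt(question, contexts):
    """Join retrieved chunks into one context block and append the question."""
    context_block = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )


# in practice: contexts = [d.page_content for d in db.similarity_search(question, k=4)]
contexts = ["Ketanji Brown Jackson is awesome"]
prompt = build_rag_prompt(
    "What did the president say about Ketanji Brown Jackson", contexts
)
print(prompt)
```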
API reference
For detailed documentation of all SQLiteVec features and configurations, head to the API reference: https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.sqlitevec.SQLiteVec.html
Other examples
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings.sentence_transformer import (
    SentenceTransformerEmbeddings,
)
from langchain_community.vectorstores import SQLiteVec
from langchain_text_splitters import CharacterTextSplitter
# load the document and split it into chunks
loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
texts = [doc.page_content for doc in docs]
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it into sqlite-vec in a table named state_union.
# the db_file parameter is the name of the file you want
# as your sqlite database.
db = SQLiteVec.from_texts(
    texts=texts,
    embedding=embedding_function,
    table="state_union",
    db_file="/tmp/vec.db",
)
# query it
query = "What did the president say about Ketanji Brown Jackson"
data = db.similarity_search(query)
# print results
data[0].page_content
'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'
Example using an existing SQLite connection
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings.sentence_transformer import (
    SentenceTransformerEmbeddings,
)
from langchain_community.vectorstores import SQLiteVec
from langchain_text_splitters import CharacterTextSplitter
# load the document and split it into chunks
loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
texts = [doc.page_content for doc in docs]
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
connection = SQLiteVec.create_connection(db_file="/tmp/vec.db")
db1 = SQLiteVec(
    table="state_union", embedding=embedding_function, connection=connection
)
db1.add_texts(["Ketanji Brown Jackson is awesome"])
# query it again
query = "What did the president say about Ketanji Brown Jackson"
data = db1.similarity_search(query)
# print results
data[0].page_content
'Ketanji Brown Jackson is awesome'
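Because SQLiteVec wraps an ordinary SQLite connection, every store handle that shares the connection reads and writes the same underlying database. A plain-sqlite3 illustration of the same idea (standard library only, not SQLiteVec's own code):

```python
import sqlite3

# one shared connection, analogous to SQLiteVec.create_connection(db_file=...)
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE state_union (content TEXT)")

# a write through the shared connection...
connection.execute(
    "INSERT INTO state_union VALUES (?)",
    ("Ketanji Brown Jackson is awesome",),
)
connection.commit()

# ...is immediately visible to anything else holding that connection
rows = connection.execute("SELECT content FROM state_union").fetchall()
print(rows[0][0])
```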