How to recursively split text by characters
This text splitter is the recommended one for generic text. It is parameterized by a list of characters and tries to split on them, in order, until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""], which has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, since those are generically the strongest semantically related pieces of text.
- How the text is split: by a list of characters.
- How the chunk size is measured: by number of characters.
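The recursive strategy can be illustrated with a simplified pure-Python sketch. This is not LangChain's actual implementation (the real splitter also merges adjacent small pieces and handles chunk overlap), but it shows the core idea: try each separator in order, and re-split any piece that is still too large with the remaining separators.

```python
def recursive_split(text, separators, chunk_size):
    """Simplified sketch of recursive character splitting."""
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    if sep == "":
        # Last resort: hard-cut by character count.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    chunks = []
    for piece in text.split(sep):
        if len(piece) <= chunk_size:
            chunks.append(piece)
        else:
            # Piece is still too big: try the finer-grained separators.
            chunks.extend(recursive_split(piece, rest, chunk_size))
    return [c for c in chunks if c]

print(recursive_split("one two three four", ["\n\n", "\n", " ", ""], 5))
```

With the default separator order, paragraphs are preferred over lines, lines over words, and only as a last resort are words cut mid-character.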
Below we show example usage.
To obtain the string content directly, use .split_text.
To create LangChain Document objects (e.g., for use in downstream tasks), use .create_documents.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter
# Load example document
with open("state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size=100,
chunk_overlap=20,
length_function=len,
is_separator_regex=False,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and'
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.'
text_splitter.split_text(state_of_the_union)[:2]
['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and',
'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
Let's go through the parameters set above for RecursiveCharacterTextSplitter:
- chunk_size: the maximum size of a chunk, where size is determined by the length_function.
- chunk_overlap: target overlap between chunks. Overlapping chunks helps to mitigate loss of information when context is split between chunks.
- length_function: function determining the chunk size.
- is_separator_regex: whether the separator list (defaulting to ["\n\n", "\n", " ", ""]) should be interpreted as regex.
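The effect of chunk_overlap is visible in the output above: the second chunk repeats the tail of the first. We can verify this in plain Python by finding the longest suffix of the first chunk that is a prefix of the second:

```python
chunk1 = "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and"
chunk2 = "of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans."

# Longest suffix of chunk1 that is also a prefix of chunk2.
overlap = next(
    chunk1[-n:]
    for n in range(min(len(chunk1), len(chunk2)), 0, -1)
    if chunk2.startswith(chunk1[-n:])
)
print(repr(overlap))  # 'of Congress and'
```

The shared text stays within the chunk_overlap=20 budget; the splitter overlaps on a separator boundary rather than cutting mid-word.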
Splitting text from languages without word boundaries
Some writing systems do not have word boundaries, for example Chinese, Japanese, and Thai. Splitting text with the default separator list ["\n\n", "\n", " ", ""] can cause words to be split between chunks. To keep words together, you can override the list of separators to include additional punctuation:
- Add the ASCII full stop ".", the Unicode fullwidth full stop "．" (used in Chinese text), and the ideographic full stop "。" (used in Japanese and Chinese)
- Add the zero-width space used in Thai, Myanmar, Khmer, and Japanese.
- Add the ASCII comma ",", the Unicode fullwidth comma "，", and the Unicode ideographic comma "、"
text_splitter = RecursiveCharacterTextSplitter(
separators=[
"\n\n",
"\n",
" ",
".",
",",
"\u200b", # Zero-width space
"\uff0c", # Fullwidth comma
"\u3001", # Ideographic comma
"\uff0e", # Fullwidth full stop
"\u3002", # Ideographic full stop
"",
],
# Existing args
)
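A quick illustration of why these separators matter, in plain Python independent of LangChain: splitting Chinese text on the ideographic full stop (U+3002) keeps sentences intact, whereas a fixed-width cut by character count breaks in the middle of a sentence.

```python
text = "今天天气很好。我们去公园散步。然后回家吃饭。"

# Split on the ideographic full stop, re-attaching the punctuation.
sentences = [s + "\u3002" for s in text.split("\u3002") if s]
print(sentences)

# A naive fixed-width cut ignores sentence boundaries entirely.
naive = [text[i:i + 8] for i in range(0, len(text), 8)]
print(naive)
```

With "。" in the separator list, the splitter can fall back to sentence boundaries for CJK text the same way it falls back to spaces for English.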