PyMuPDFLoader
This notebook provides a quick overview for getting started with the PyMuPDF document loader. For detailed documentation of all PyMuPDFLoader features and configurations head to the API reference.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| PyMuPDFLoader | langchain_community | ✅ | ❌ | ❌ |
Loader features
| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
|---|---|---|---|---|
| PyMuPDFLoader | ✅ | ❌ | ✅ | ✅ |
Setup
Credentials
No credentials are required to use PyMuPDFLoader.
If you want to get automated best in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
Install langchain_community and pymupdf.
%pip install -qU langchain_community pymupdf
Note: you may need to restart the kernel to use updated packages.
Initialization
Now we can instantiate our model object and load documents:
from langchain_community.document_loaders import PyMuPDFLoader
file_path = "./example_data/layout-parser-paper.pdf"
loader = PyMuPDFLoader(file_path)
Load
docs = loader.load()
docs[0]
Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'trapped': '', 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021')
import pprint
pprint.pp(docs[0].metadata)
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
Lazy Load
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)
        pages = []
len(pages)
6
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA
11
focuses on precision, efficiency, and robustness. T
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 10}
The metadata attribute contains at least the following keys:
- source
- page (if in mode page)
- total_pages
- creationdate
- creator
- producer
Additional metadata is specific to each parser. This information can be helpful (for example, to categorize your PDFs).
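For instance, here is a minimal sketch that uses only the guaranteed source key above to group the loaded documents by file, which can be useful when several PDFs are loaded at once (the grouping itself is just an illustration, not part of the loader API):
from collections import defaultdict

docs_by_source = defaultdict(list)
for doc in docs:
    # "source" is one of the guaranteed metadata keys
    docs_by_source[doc.metadata["source"]].append(doc)

for source, source_docs in docs_by_source.items():
    print(source, len(source_docs))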
Splitting mode & custom pages delimiter
When loading the PDF file you can split it in two different ways:
- By page
- As a single text flow
By default, PyMuPDFLoader will split the PDF by page.
Extract the PDF by page. Each page is extracted as a langchain Document object:
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
In this mode the PDF is split by pages and the resulting Documents metadata contains the page number. But in some cases we might want to process the PDF as a single text flow (so that paragraphs are not cut in half). In this case you can use the single mode:
Extract the whole PDF as a single langchain Document object:
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': ''}
Logically, in this mode the "page_number" metadata disappears. Here's how to clearly identify where pages end in the text flow:
Add a custom pages_delimiter to identify where pages end in single mode:
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
This could be simply \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> to inject it seamlessly into a Markdown viewer without any visual effect.
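As an illustration only (not part of the loader API), the single-mode text can be split back into per-page chunks on that delimiter:
# Split the single text flow back into pages using the custom delimiter
page_texts = docs[0].page_content.split(
    "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
)
print(len(page_texts))  # should match the number of pages in the PDF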
Extract images from the PDF
You can choose between three different solutions to extract images from PDFs:
- rapidOCR (lightweight Optical Character Recognition tool)
- Tesseract (OCR tool with high precision)
- Multimodal language model
You can tune these functions to choose the output format of the extracted images among html, markdown or text.
The result is inserted between the last and the second-to-last paragraphs of text of the page.
Extract images from the PDF with rapidOCR:
%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
Be careful, RapidOCR is designed to work with Chinese and English, not other languages.
Extract images from the PDF with Tesseract:
%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
Extract images from the PDF with a multimodal model:
%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os
from dotenv import load_dotenv
load_dotenv()
True
from getpass import getpass
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(model=ChatOpenAI(model="gpt-4o", max_tokens=1024)),
)
docs = loader.load()
print(docs[5].page_content)
Extract tables from the PDF
With PyMuPDF you can extract tables from your PDFs in html, markdown or csv format:
loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
LayoutParser: A Unified Toolkit for DL-Based DIA
5
Table 1: Current layout detection models in the LayoutParser model zoo
Dataset
Base Model1 Large Model
Notes
PubLayNet [38]
F / M
M
Layouts of modern scientific documents
PRImA [3]
M
-
Layouts of scanned modern magazines and scientific reports
Newspaper [17]
F
-
Layouts of scanned US newspapers from the 20th century
TableBank [18]
F
F
Table region on modern scientific and business document
HJDataset [31]
F / M
-
Layouts of history Japanese documents
1 For each dataset, we train several models of different sizes for different needs (the trade-offbetween accuracy
vs. computational cost). For “base model” and “large model”, we refer to using the ResNet 50 or ResNet 101
backbones [13], respectively. One can train models of different architectures, like Faster R-CNN [28] (F) and Mask
R-CNN [12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained
using the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model
zoo in coming months.
layout data structures, which are optimized for efficiency and versatility. 3) When
necessary, users can employ existing or customized OCR models via the unified
API provided in the OCR module. 4) LayoutParser comes with a set of utility
functions for the visualization and storage of the layout data. 5) LayoutParser
is also highly customizable, via its integration with functions for layout data
annotation and model training. We now provide detailed descriptions for each
component.
3.1
Layout Detection Models
In LayoutParser, a layout model takes a document image as an input and
generates a list of rectangular boxes for the target content regions. Different
from traditional methods, it relies on deep convolutional neural networks rather
than manually curated rules to identify content regions. It is formulated as an
object detection problem and state-of-the-art models like Faster R-CNN [28] and
Mask R-CNN [12] are used. This yields prediction results of high accuracy and
makes it possible to build a concise, generalized interface for layout detection.
LayoutParser, built upon Detectron2 [35], provides a minimal API that can
perform layout detection with only four lines of code in Python:
1 import
layoutparser as lp
2 image = cv2.imread("image_file") # load
images
3 model = lp. Detectron2LayoutModel (
4
"lp:// PubLayNet/ faster_rcnn_R_50_FPN_3x /config")
5 layout = model.detect(image)
LayoutParser provides a wealth of pre-trained model weights using various
datasets covering different languages, time periods, and document types. Due to
domain shift [7], the prediction performance can notably drop when models are ap-
plied to target samples that are significantly different from the training dataset. As
document structures and layouts vary greatly in different domains, it is important
to select models trained on a dataset similar to the test samples. A semantic syntax
is used for initializing the model weights in LayoutParser, using both the dataset
name and model name lp://<dataset-name>/<model-architecture-name>.
|Dataset|Base Model1|Large Model|Notes|
|---|---|---|---|
|PubLayNet [38] PRImA [3] Newspaper [17] TableBank [18] HJDataset [31]|F / M M F F F / M|M &#45; &#45; F &#45;|Layouts of modern scientific documents Layouts of scanned modern magazines and scientific reports Layouts of scanned US newspapers from the 20th century Table region on modern scientific and business document Layouts of history Japanese documents|
Working with Files
Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.
As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to reuse a given parser regardless of how the data was loaded. You can use this strategy to analyze different files with the same parsing parameters.
from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PyMuPDFParser
loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=PyMuPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1
Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
{'source': 'example_data/layout-parser-paper.pdf',
'file_path': 'example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
It is also possible to work with files from cloud storage.
from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=PyMuPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
API reference
For detailed documentation of all PyMuPDFLoader features and configurations head to the API reference: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html