Speed Up Python Function that Extracts Text from PDF
I'm currently working on a program that scrapes text from tens of thousands of court opinion PDFs. I'm new to Python and am trying to make this code as efficient as possible. I've gathered from many posts on this site and elsewhere that I should try to vectorize my code, but I've attempted three ways of doing so without results.
My reprex uses these packages and this sample data.
import os
import pandas as pd
import pdftotext
import wget
df = pd.DataFrame({'OpinionText': [""], 'URLs': ["https://cases.justia.com/federal/appellate-courts/ca6/20-6226/20-6226-2021-09-17.pdf?ts=1631908842"]})
df = pd.concat([df]*50, ignore_index=True)
I first defined this function, which downloads the PDF, extracts the text, deletes the PDF, and then returns the text.
def Link2Text(Link):
    OpinionPDF = wget.download(Link, "Temporary_Opinion.pdf")
    with open(OpinionPDF, "rb") as f:
        pdf = pdftotext.PDF(f)
        OpinionText = "\n\n".join(pdf)
    if os.path.exists("Temporary_Opinion.pdf"):
        os.remove("Temporary_Opinion.pdf")
    return OpinionText
The first way I called the function was:
df['OpinionText'] = df['URLs'].apply(Link2Text)
Based on what I've read about vectorization, I tried calling the function with:
df['OpinionText'] = Link2Text(df['URLs'])
#and, alternatively:
df['OpinionText'] = Link2Text(df['URLs'].values)
Both of these return the same error, namely:
Traceback (most recent call last):
File "/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py", line 22, in <module>
df['OpinionText'] = Link2Text(df['URLs'])
File "/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py", line 10, in Link2Text
OpinionPDF = wget.download(Link, "Temporary_Opinion.pdf")
File "/Applications/anaconda3/lib/python3.8/site-packages/wget.py", line 505, in download
prefix = detect_filename(url, out)
File "/Applications/anaconda3/lib/python3.8/site-packages/wget.py", line 483, in detect_filename
if url:
File "/Applications/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 1442, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
I understand this to mean that Python doesn't know how to handle the input because it's a vector, so I tried replacing the call with the one below and got this traceback.
df['OpinionText'] = Link2Text(df['URLs'].item)
Traceback (most recent call last):
File "/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py", line 22, in <module>
df['OpinionText'] = Link2Text(df['URLs'].item)
File "/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py", line 10, in Link2Text
OpinionPDF = wget.download(Link, "Temporary_Opinion.pdf")
File "/Applications/anaconda3/lib/python3.8/site-packages/wget.py", line 505, in download
prefix = detect_filename(url, out)
File "/Applications/anaconda3/lib/python3.8/site-packages/wget.py", line 484, in detect_filename
names["url"] = filename_from_url(url) or ''
File "/Applications/anaconda3/lib/python3.8/site-packages/wget.py", line 230, in filename_from_url
fname = os.path.basename(urlparse.urlparse(url).path)
File "/Applications/anaconda3/lib/python3.8/urllib/parse.py", line 372, in urlparse
url, scheme, _coerce_result = _coerce_args(url, scheme)
File "/Applications/anaconda3/lib/python3.8/urllib/parse.py", line 124, in _coerce_args
return _decode_args(args) + (_encode_result,)
File "/Applications/anaconda3/lib/python3.8/urllib/parse.py", line 108, in _decode_args
return tuple(x.decode(encoding, errors) if x else '' for x in args)
File "/Applications/anaconda3/lib/python3.8/urllib/parse.py", line 108, in <genexpr>
return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'function' object has no attribute 'decode'
I tried adding .decode('utf-8') to my function call, and to the input inside the function, but both got the same traceback. At this point, I don't know what else to try to speed up my code.
I also tried a numpy.vectorize version with .apply, but it slowed execution down considerably. I assume those two aren't meant to be used together.
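That slowdown is consistent with how numpy.vectorize works: it is a convenience wrapper rather than true vectorization, and under the hood it still calls the Python function once per element, adding wrapper overhead on top of a plain loop. A minimal sketch with a toy function (shout is hypothetical, standing in for an arbitrary per-element function like Link2Text):

```python
import numpy as np

def shout(s):
    # Stand-in for an arbitrary per-element Python function.
    return s.upper()

v_shout = np.vectorize(shout)  # convenience wrapper, not a compiled loop

arr = np.array(["a", "b", "c"])
result = v_shout(arr)  # still calls shout() once per element, like a plain loop
print(result)
```

Because each element still goes through a full Python-level call, this can't speed up an IO-bound function at all.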
For completeness, based on some of the excellent answers here, I also tried:
from numba import njit

@njit
def Link2Text(Link, Opinion):
    res = np.empty(Link.shape)
    for i in range(len(Link)):
        OpinionPDF = wget.download(Link[i], "Temporary_Opinion.pdf")
        with open(OpinionPDF, "rb") as f:
            pdf = pdftotext.PDF(f)
            OpinionText = "\n\n".join(pdf)
        if os.path.exists("Temporary_Opinion.pdf"):
            os.remove("Temporary_Opinion.pdf")
        Opinion[i] = OpinionText

Link2Text(df['URLs'].values, df['OpinionText'].values)
I understand this doesn't work because numba doesn't support the packages I'm calling inside the function and is meant more for mathematical operations. If that's wrong and I should be trying numba for this, please let me know.
I took the advice from the comments. Instead of pandas, I used a list comprehension and rewrote it as:
import io
import subprocess as sp

def pdftotext(path):
    # Shell out to Poppler's pdftotext; -layout preserves layout, -q suppresses warnings.
    args = ['pdftotext', '-layout', '-q', path, 'Opinion_Text.txt']
    sp.run(args, stdout=sp.PIPE, stderr=sp.DEVNULL, check=True, text=True)

def Link2Text(Link):
    OpinionPDF = wget.download(Link, "Temporary_Opinion.pdf")
    pdftotext(OpinionPDF)
    with io.open("Opinion_Text.txt", mode="r", encoding="utf-8") as f:
        OpinionText = f.readlines()
    if os.path.exists("Temporary_Opinion.pdf"):
        os.remove("Temporary_Opinion.pdf")
    if os.path.exists("Opinion_Text.txt"):
        os.remove("Opinion_Text.txt")
    return OpinionText
Opinions = [Link2Text(item) for item in URLs]
This is much faster and does exactly what I need. Thanks to everyone who offered suggestions! Next steps will be threading to speed up the IO and layout analysis to clean up the data.
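Since the bottleneck here is network and disk IO rather than CPU, a thread pool is a natural fit for that next step. A minimal sketch using concurrent.futures, with a placeholder fetch function standing in for Link2Text (the names here are illustrative assumptions, not the real implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for Link2Text: the real version downloads and extracts a PDF.
    return "text for " + url

urls = ["https://example.com/a.pdf", "https://example.com/b.pdf"]

# Threads overlap the waits on network and disk; executor.map keeps input order.
with ThreadPoolExecutor(max_workers=8) as executor:
    opinions = list(executor.map(fetch, urls))
```

One caveat before parallelizing the real function: the single hard-coded Temporary_Opinion.pdf filename is not thread-safe, since concurrent workers would overwrite each other's files. Each call should write to its own temporary name (e.g. via tempfile.NamedTemporaryFile) first.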