How to get the time needed for decompressing large bz2 files?
I need to process large bz2 files (~6 GB) in Python, decompressing them line by line with BZ2File.readline(). The problem is that I'd like to know how long processing the whole file will take.
I searched a lot, trying to get the actual size of the decompressed file so I could compute the percentage processed on the fly and, from that, the remaining time, but it seems the decompressed size cannot be known without decompressing the file first.
Besides the large memory footprint of the decompressed file, the decompression itself also takes a lot of time. So, can anyone help me get the remaining processing time on the fly?
You can estimate progress based on the compressed data consumed rather than the uncompressed data produced. If the data is relatively homogeneous, the result will be about the same. (If it isn't, then neither the input nor the output would give an accurate estimate anyway.)
You can easily find the size of the compressed file, and use the time spent on the compressed data so far to estimate the time needed to process the remaining compressed data.
Here is a simple example that uses a BZ2Decompressor object to operate on the input one chunk at a time, showing read progress (Python 3, taking the file name from the command line):
# Decompress a bzip2 file, showing progress based on consumed input.
import sys
import os
import bz2
import time

def proc(input):
    """Decompress and process a piece of a compressed stream"""
    dat = dec.decompress(input)
    got = len(dat)
    if got != 0:    # 0 is common -- waiting for a bzip2 block
        # process dat here
        pass
    return got

# Get the size of the compressed bzip2 file.
path = sys.argv[1]
size = os.path.getsize(path)

# Decompress CHUNK bytes at a time.
CHUNK = 16384
totin = 0
totout = 0
prev = -1
dec = bz2.BZ2Decompressor()
start = time.time()
with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(CHUNK), b''):
        # feed chunk to decompressor
        got = proc(chunk)

        # handle case of concatenated bz2 streams
        if dec.eof:
            rem = dec.unused_data
            dec = bz2.BZ2Decompressor()
            got += proc(rem)

        # show progress
        totin += len(chunk)
        totout += got
        if got != 0:    # only if a bzip2 block emitted
            frac = round(1000 * totin / size)
            if frac != prev:
                left = (size / totin - 1) * (time.time() - start)
                print(f'\r{frac / 10:.1f}% (~{left:.1f}s left) ', end='')
                prev = frac

# Show the resulting size.
print(end='\r')
print(totout, 'uncompressed bytes')
With the help of another answer, I finally found a solution. The idea is to use the amount of compressed data processed so far, the total size of the compressed file, and the elapsed time to estimate the remaining time. To do this:
- Read the compressed file into memory as a bytes object byte_data; this is fairly fast.
- Compute its size with total_size = len(byte_data).
- Wrap byte_data as byte_f = io.BytesIO(byte_data).
- Wrap byte_f as bz2f = bz2.BZ2File(byte_f).
- During processing, use pos = byte_f.tell() to get the current position in the compressed data.
- Compute the exact fraction processed with percent = pos / total_size.
- Record the elapsed time and compute the remaining time.
After a few seconds, the estimate becomes quite accurate:
0.01% processed, 2.00s elapsed, 17514.27s remaining...
0.02% processed, 4.00s elapsed, 20167.48s remaining...
0.03% processed, 6.00s elapsed, 21239.60s remaining...
0.04% processed, 8.00s elapsed, 21818.91s remaining...
0.05% processed, 10.00s elapsed, 22180.76s remaining...
0.05% processed, 12.00s elapsed, 22427.78s remaining...
0.06% processed, 14.00s elapsed, 22661.80s remaining...
0.07% processed, 16.00s elapsed, 22840.45s remaining...
0.08% processed, 18.00s elapsed, 22937.07s remaining...
....
99.97% processed, 22704.28s elapsed, 6.27s remaining...
99.98% processed, 22706.28s elapsed, 4.40s remaining...
99.99% processed, 22708.28s elapsed, 2.45s remaining...
100.00% processed, 22710.28s elapsed, 0.54s remaining...
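The steps above can be sketched as follows. This is a minimal sketch rather than the poster's actual code; `process_line` is a hypothetical placeholder for whatever per-line work you need to do:

```python
import bz2
import io
import time

def process_with_eta(path, process_line=lambda line: None):
    # Read the whole compressed file into memory as a bytes object.
    with open(path, 'rb') as f:
        byte_data = f.read()
    total_size = len(byte_data)       # total size of the compressed data
    byte_f = io.BytesIO(byte_data)    # wrap the bytes in a seekable stream
    bz2f = bz2.BZ2File(byte_f)        # decompress on the fly
    start = time.time()
    lines = 0
    for line in bz2f:
        process_line(line)
        lines += 1
        pos = byte_f.tell()           # position within the *compressed* bytes
        percent = pos / total_size    # exact fraction of input consumed
        elapsed = time.time() - start
        remaining = (1 / percent - 1) * elapsed
        print(f'{percent * 100:.2f}% processed, '
              f'{elapsed:.2f}s elapsed, {remaining:.2f}s remaining...',
              end='\r')
    return lines
```

In practice you would print the estimate only when the rounded percentage changes, as the output above does, instead of on every line.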
You can also use the existing high-level API provided by the bz2 Python module directly, while getting the amount of compressed data already processed from the underlying file handle.
import bz2
import datetime
import io
import time

with bz2.open(input_filename, 'rt', encoding='utf8') as input_file:
    underlying_file = input_file.buffer._buffer.raw._fp
    underlying_file.seek(0, io.SEEK_END)
    underlying_file_size = underlying_file.tell()
    underlying_file.seek(0, io.SEEK_SET)
    lines_count = 0
    start_time = time.perf_counter()
    progress = f'{0:.2f}%'
    while True:
        line = input_file.readline().strip()
        if not line:
            break
        process_line(line)
        lines_count += 1
        current_position = underlying_file.tell()
        new_progress = f'{current_position / underlying_file_size * 100:.2f}%'
        if progress != new_progress:
            progress = new_progress
            current_time = time.perf_counter()
            elapsed_time = current_time - start_time
            elapsed = datetime.timedelta(seconds=elapsed_time)
            remaining = datetime.timedelta(seconds=(underlying_file_size / current_position - 1) * elapsed_time)
            print(f"{lines_count} lines processed, {progress}, {elapsed} elapsed, {remaining} remaining")
If you are reading a binary file rather than a text file, then you have to use:

with bz2.open(input_filename, 'rb') as input_file:
    underlying_file = input_file._buffer.raw._fp
    ...
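Note that `input_file.buffer._buffer.raw._fp` reaches into private attributes that may change between Python versions. A more defensive variant (my own sketch, not part of the answer above) is to open the file yourself and hand the binary handle to `bz2.open`, so you can call `tell()` on an object you own:

```python
import bz2
import os

def iter_lines_with_progress(path):
    """Yield (line, fraction_of_compressed_input_consumed) pairs."""
    size = os.path.getsize(path)
    with open(path, 'rb') as raw:                    # our own binary handle
        with bz2.open(raw, 'rt', encoding='utf8') as text:
            for line in text:
                # raw.tell() advances in decompressor read chunks, so the
                # fraction runs slightly ahead of the exact position.
                yield line, raw.tell() / size
```

Because the decompressor reads ahead in chunks, the reported fraction moves in steps of a few kilobytes, which is negligible for multi-gigabyte files.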