How to extract one column to another file from a 300GB file
The problem is the sheer amount of data, and I have to do it on my personal laptop with 12 GB of RAM. I tried looping over 1M lines per round and writing them out with csv.writer, but csv.writer only seems to write about 1M lines every two hours. So, are there any other approaches worth trying?
lines = 10000000
former_str = ''  # assumed initial value; not shown in the original snippet
for i in range(0, 330):
    list_str = []
    with open(file, 'r') as f:
        line_flag = 0
        # skip the lines already handled in previous rounds
        for _ in range(i * lines):
            next(f)
        for line in f:
            line_flag = line_flag + 1
            data = json.loads(line)['name']
            # only keep values that differ from the previous one
            if data != former_str:
                list_str.append(data)
            former_str = data
            if line_flag == lines:
                break
    with open(self.path + 'data_range\\names.csv', 'a', newline='') as writeFile:
        writer = csv.writer(writeFile, delimiter='\n')
        writer.writerow(list_str)
Another version:
def read_large_file(f):
    block_size = 200000000  # 200 million lines per block
    block = []
    for line in f:
        block.append(line[:-1])  # strip the trailing newline
        if len(block) == block_size:
            yield block
            block = []
    if block:
        yield block

def split_files():
    with open(write_file, 'r') as f:
        i = 0
        for block in read_large_file(f):
            print(i)
            file_name = write_name + str(i) + '.csv'
            with open(file_name, 'w', newline='') as f_:
                writer = csv.writer(f_, delimiter='\n')
                writer.writerow(block)
            i += 1
This is after reading one block and writing it out... I'd like to know why the data transfer rate stays at around 0.
Would something like this work?
Essentially it uses a generator to avoid reading the whole file into memory, and writes the data out one line at a time.
import jsonlines  # pip install jsonlines
from typing import Generator

def gen_lines(file_path: str, col_name: str) -> Generator[str, None, None]:
    # yield the requested field from each JSON line, one at a time
    with jsonlines.open(file_path) as f:
        for obj in f:
            yield obj[col_name]

# Here you can also change to writing a jsonlines file again
with open(output_file, "w") as out:
    for item in gen_lines(your_file_path, col_name_to_extract):
        out.write(f"{item}\n")
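If you would rather keep the output as JSON Lines too, as the comment above suggests, a minimal sketch using the same jsonlines package (output_file, your_file_path and col_name_to_extract are the same placeholders as above):

import jsonlines

# sketch of the "write a jsonlines file again" variant mentioned above;
# the file paths and column name are placeholders
with jsonlines.open(output_file, mode='w') as out:
    for item in gen_lines(your_file_path, col_name_to_extract):
        out.write({col_name_to_extract: item})  # one JSON object per line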
It should be as simple as this:
import json
import csv

with open(read_file, 'rt') as r, open(write_file, 'wt', newline='') as w:
    writer = csv.writer(w)
    for line in r:
        writer.writerow([json.loads(line)['name']])
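If you still want the output split into numbered files like your split_files version, a sketch along the same lines (write_name is your output prefix; the chunk size is arbitrary):

import json
import csv

CHUNK = 10000000  # lines per output part; arbitrary, tune to taste

with open(read_file, 'rt') as r:
    part = 0
    count = 0
    w = None
    writer = None
    for line in r:
        if writer is None:
            # open the next numbered part, e.g. <write_name>0.csv, <write_name>1.csv, ...
            w = open(write_name + str(part) + '.csv', 'wt', newline='')
            writer = csv.writer(w)
        writer.writerow([json.loads(line)['name']])
        count += 1
        if count == CHUNK:
            w.close()
            writer = None
            count = 0
            part += 1
    if w is not None and not w.closed:
        w.close()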
I tried the loop inside the file, but I always get an error. I guessed we cannot write data into another file while the first file is open?
You can absolutely write data to one file while reading from another. But I can't tell you more about your error until you post what it says.
There's something in your code involving former_str that isn't covered by "extract one column", so I didn't write anything for it.
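If the former_str part is meant to skip consecutive duplicate names, here is a sketch of how that could be folded into the streaming loop above (assuming that is what it is for):

import json
import csv

# sketch, assuming former_str in your code exists to drop consecutive
# duplicate 'name' values
with open(read_file, 'rt') as r, open(write_file, 'wt', newline='') as w:
    writer = csv.writer(w)
    former_str = None
    for line in r:
        name = json.loads(line)['name']
        if name != former_str:  # only write when the value changes
            writer.writerow([name])
        former_str = name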