Split columns and write to separate output files

I have a dataset with 8 columns and roughly 5 million rows; the file is over 400 MB. I am trying to split out the columns. The file extension is .dat and the columns are separated by a single space.

输入:

00022d3f5b17 00022d9064bc 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00022dba8f51 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00022de1c6c1 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 003065f30f37 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00904b48a3b6 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00904b83a0ea 1073260803 1073260810 819213 439954 819213 439954
00904b4557d3 00904b85d3cf 1073260803 1073261920 817526 439458 817526 439458
00022de73863 00904b14b494 1073260804 1073265410 817558 439525 817558 439525

Code:

import pandas as pd 

df = pd.read_csv('sorted.dat', sep=' ', header=None, names=['id_1', 'id_2', 'time_1', 'time_2', 'gps_1', 'gps_2', 'gps_3', 'gps_4'])

#print df

df.to_csv('output_1.csv', columns = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2'])

df.to_csv('output_2.csv', columns = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']) 

The output should be one file containing col[1], col[3], col[4], col[5], col[6] and another containing col[2], col[3], col[4], col[7], col[8].

I am getting this error:

Traceback (most recent call last):
  File "split_col_pandas.py", line 3, in <module>
    df = pd.read_csv('dartmouthsorted.dat', sep=' ', header=None, names=['id_1', 'id_2', 'time_1', 'time_2', 'gps_1', 'gps_2', 'gps_3', 'gps_4'])
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 325, in _read
    return parser.read()
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 823, in read
    df = DataFrame(col_dict, columns=columns, index=index)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 224, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 360, in _init_dict
    return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 5241, in _arrays_to_mgr
    return create_block_manager_from_arrays(arrays, arr_names, axes)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3999, in create_block_manager_from_arrays
    blocks = form_blocks(arrays, names, axes)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 4076, in form_blocks
    int_blocks = _multi_blockify(int_items)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 4145, in _multi_blockify
    values, placement = _stack_arrays(list(tup_block), dtype)
  File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 4188, in _stack_arrays
    stacked = np.empty(shape, dtype=dtype)
MemoryError

Try this:

columns = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']
df[columns].to_csv('output_1.csv')

columns = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']
df[columns].to_csv('output_2.csv')

Also, see this post about memory errors in Python: Memory errors and list limits?
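
The traceback shows the MemoryError is raised inside read_csv itself, so another way to stay in pandas is to stream the file in chunks and append each piece to the two output files as you go; the whole 400 MB then never has to sit in memory at once. A minimal sketch, assuming a chunk size of 100000 rows (tune that to your machine):

import pandas as pd

names = ['id_1', 'id_2', 'time_1', 'time_2', 'gps_1', 'gps_2', 'gps_3', 'gps_4']

first_chunk = True
for chunk in pd.read_csv('sorted.dat', sep=' ', header=None, names=names,
                         chunksize=100000):
    # Write the header only once, then append the remaining chunks.
    chunk[['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']].to_csv(
        'output_1.csv', mode='w' if first_chunk else 'a',
        header=first_chunk, index=False)
    chunk[['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']].to_csv(
        'output_2.csv', mode='w' if first_chunk else 'a',
        header=first_chunk, index=False)
    first_chunk = False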

Update

The poster also asked how, after saving the two new CSV files, to recombine output_1.csv and output_2.csv so that id_1 and id_2 end up in the same column, gps_1 and gps_3 become a single column, and gps_2 and gps_4 become a single column.

There are many ways to do this, but here is one (chosen for readability over efficiency):

columns = ['id_merged', 'time_1', 'time_2', 'gps_1or3', 'gps_2or4']
df1 = pd.read_csv('output_1.csv', names=columns, skiprows=1)
df2 = pd.read_csv('output_2.csv', names=columns, skiprows=1)

df = pd.concat([df1, df2])  # your final dataframe

One potential issue is that you can end up with null values in places, so handle them appropriately or errors will be thrown; there is also a danger that the new merged id column will contain duplicate keys, but that is a problem for another question...

For more on this update, see the documentation on concatenating, joining, and merging: http://pandas.pydata.org/pandas-docs/stable/merging.html
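
One simple way to sidestep the duplicate keys mentioned above (a sketch, not the only option) is to let concat build a fresh index for the combined frame:

# ignore_index=True discards the row labels carried over from df1 and df2
# and gives the combined frame a clean 0..n-1 index instead.
df = pd.concat([df1, df2], ignore_index=True)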

This approach is very memory-efficient because it only operates on one row at a time. It also doesn't require Pandas.

import csv

input_file = 'sorted.dat'
output_file_1 = 'output_1.csv'
output_file_2 = 'output_2.csv'
columns_1 = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']
columns_2 = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']

# Python 2.7 (as in the traceback): the csv module wants binary-mode files here.
with open(input_file, 'rb') as file_in, \
     open(output_file_1, 'wb') as file_out_1, \
     open(output_file_2, 'wb') as file_out_2:

    reader = csv.reader(file_in)
    writer_1 = csv.writer(file_out_1)
    writer_2 = csv.writer(file_out_2)
    writer_1.writerow(columns_1)
    writer_2.writerow(columns_2)
    for line in reader:
        # There are no commas in the input, so each row arrives as a single
        # field; split it on spaces and pick out the columns for each output.
        line = line[0].split(' ')
        writer_1.writerow([line[n] for n in [0, 2, 3, 4, 5]])
        writer_2.writerow([line[n] for n in [1, 2, 3, 6, 7]])

!cat output_1.csv
id_1,time_1,time_2,gps_1,gps_2
00022d3f5b17,1073260801,1073260803,819251,440006
00022d9064bc,1073260801,1073260803,819251,440006
00022d9064bc,1073260801,1073260803,819251,440006
00022d9064bc,1073260801,1073260803,819251,440006
00022d9064bc,1073260801,1073260803,819251,440006
00022d9064bc,1073260803,1073260810,819213,439954
00904b4557d3,1073260803,1073261920,817526,439458
00022de73863,1073260804,1073265410,817558,439525

!cat output_2.csv
id_2,time_1,time_2,gps_3,gps_4
00022d9064bc,1073260801,1073260803,819251,440006
00022dba8f51,1073260801,1073260803,819251,440006
00022de1c6c1,1073260801,1073260803,819251,440006
003065f30f37,1073260801,1073260803,819251,440006
00904b48a3b6,1073260801,1073260803,819251,440006
00904b83a0ea,1073260803,1073260810,819213,439954
00904b85d3cf,1073260803,1073261920,817526,439458
00904b14b494,1073260804,1073265410,817558,439525
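
The snippet above targets Python 2.7, which matches the traceback. Under Python 3 the csv module expects text-mode files opened with newline=''; an adapted sketch (this version also passes delimiter=' ' to csv.reader instead of splitting by hand):

import csv

input_file = 'sorted.dat'
output_file_1 = 'output_1.csv'
output_file_2 = 'output_2.csv'
columns_1 = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']
columns_2 = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']

# Python 3: text mode with newline='' so the csv module controls line endings.
with open(input_file, newline='') as file_in, \
     open(output_file_1, 'w', newline='') as file_out_1, \
     open(output_file_2, 'w', newline='') as file_out_2:

    reader = csv.reader(file_in, delimiter=' ')
    writer_1 = csv.writer(file_out_1)
    writer_2 = csv.writer(file_out_2)
    writer_1.writerow(columns_1)
    writer_2.writerow(columns_2)
    for line in reader:
        writer_1.writerow([line[n] for n in [0, 2, 3, 4, 5]])
        writer_2.writerow([line[n] for n in [1, 2, 3, 6, 7]])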