OverflowError while saving large Pandas df to hdf

I have a large Pandas dataframe (~15 GB, 83 million rows) that I'm interested in saving as an h5 (or feather) file. One column contains long ID strings of digits, which should have string/object type. But even when I make sure pandas parses all columns as object:

df = pd.read_csv('data.csv', dtype=object)
print(df.dtypes)  # sanity check
df.to_hdf('df.h5', 'df')

> client_id                object
  event_id                 object
  account_id               object
  session_id               object
  event_timestamp          object
  # etc...

I get this error:

  File "foo.py", line 14, in <module>
    df.to_hdf('df.h5', 'df')
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/core/generic.py", line 1996, in to_hdf
    return pytables.to_hdf(path_or_buf, key, self, **kwargs)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 279, in to_hdf
    f(store)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 273, in <lambda>
    f = lambda store: store.put(key, value, **kwargs)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 890, in put
    self._write_to_group(key, value, append=append, **kwargs)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 1367, in _write_to_group
    s.write(obj=value, append=append, complib=complib, **kwargs)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 2963, in write
    self.write_array('block%d_values' % i, blk.values, items=blk_items)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 2730, in write_array
    vlarr.append(value)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/tables/vlarray.py", line 547, in append
    self._append(nparr, nobjects)
  File "tables/hdf5extension.pyx", line 2032, in tables.hdf5extension.VLArray._append
OverflowError: value too large to convert to int

Apparently it's trying to convert the value to an int and failing.

When running df.to_feather() I run into a similar problem:

df.to_feather('df.feather')
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/core/frame.py", line 1892, in to_feather
    to_feather(self, fname)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/feather_format.py", line 83, in to_feather
    feather.write_dataframe(df, path)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/feather.py", line 182, in write_feather
    writer.write(df)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/feather.py", line 93, in write
    table = Table.from_pandas(df, preserve_index=False)
  File "pyarrow/table.pxi", line 1174, in pyarrow.lib.Table.from_pandas
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/pandas_compat.py", line 501, in dataframe_to_arrays
    convert_fields))
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 586, in result_iterator
    yield fs.pop().result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/pandas_compat.py", line 487, in convert_column
    raise e
  File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/pandas_compat.py", line 481, in convert_column
    result = pa.array(col, type=type_, from_pandas=True, safe=safe)
  File "pyarrow/array.pxi", line 191, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 78, in pyarrow.lib._ndarray_to_array
  File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: ('Could not convert 1542852887489 with type str: tried to convert to double', 'Conversion failed for column session_id with type object')

So:

  1. Is anything that looks numeric being forcibly converted to a numeric type on storage?
  2. Could the presence of NaN affect what's happening here?
  3. Is there an alternative storage solution? Which would be best?

Having read up on this topic, the problem seems to be dealing with string-type columns. My string columns contain a mix of all-number strings and strings with characters. Pandas has the flexible option of keeping strings as object, without a declared type, but when serializing to hdf5 or feather the contents of a column are converted to a single type (str or double, say) and cannot be mixed. Both libraries fail when they encounter a sufficiently large mixed-type column.
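As a minimal illustration (synthetic data, not the original frame), a column read with dtype=object can silently hold a mix of Python types, which is exactly what trips up the serializers, and also answers question 2 above: a missing value in a string column is itself a second type.

```python
import pandas as pd

# An object column holding strings plus a missing value.
df = pd.DataFrame({"session_id": ["1542852887489", "abc123", None]})

print(df["session_id"].dtype)              # object
print(df["session_id"].map(type).unique())  # both str and NoneType
```

So even a column of "pure" strings becomes mixed-type as soon as it contains missing values.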

Force-converting my mixed string columns to a single type allowed me to save the dataframe as feather, but in HDF5 the file ballooned and the process was killed when I ran out of disk space.

Here is an answer in a comparable case, where a commenter noted (2 years ago) that "This problem is very standard, but solutions are few".

Some background:

String types in Pandas are called object, but this obscures the fact that they may be either pure strings or mixed dtypes (numpy has built-in string types, but Pandas never uses them for text). So the first thing to do in a case like this is to coerce all string columns to string type (df[col].astype(str)). But even so, with a sufficiently large file (16 GB, with long strings) this still failed. Why?
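A hedged sketch of that coercion step, applied to every object column of a synthetic frame. One gotcha worth a comment: astype(str) turns NaN into the literal string "nan", so fill missing values first if that matters to you.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the real frame: one mixed object column.
df = pd.DataFrame({
    "session_id": ["1542852887489", "abc123", np.nan],
    "value": [1.0, 2.0, 3.0],
})

# Coerce every object column to pure strings. fillna first, otherwise
# astype(str) would store the literal string "nan" for missing values.
for col in df.select_dtypes(include="object").columns:
    df[col] = df[col].fillna("").astype(str)

print(df["session_id"].map(type).unique())  # only str remains
```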

The reason I was hitting this error is that I had long, high-entropy strings (many different unique values). (With low-entropy data, it might have been worthwhile to switch to a categorical dtype instead.) In my case, I realized that I only needed these strings to identify rows, so I could replace them with unique integers!

df[col] = df[col].map(dict(zip(df[col].unique(), range(df[col].nunique()))))
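An equivalent (and typically faster) way to do the same replacement is pd.factorize, which assigns a dense integer code to each unique value in order of first appearance:

```python
import pandas as pd

df = pd.DataFrame({"session_id": ["a1", "b2", "a1", "c3"]})

# factorize returns the integer codes plus the array of unique values,
# so the mapping is recoverable if you ever need the original IDs back.
codes, uniques = pd.factorize(df["session_id"])
df["session_id"] = codes

print(df["session_id"].tolist())  # [0, 1, 0, 2]
```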

Other solutions:

For text data, there are other recommended solutions besides hdf5/feather, including:

  • json
  • msgpack (note that as of Pandas 0.25, read_msgpack is deprecated)
  • pickle (which has known security issues, so be careful, but it should be fine for internal storage/transfer of dataframes)
  • parquet, part of the Apache Arrow ecosystem.
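For example, a pickle round trip needs no extra dependencies beyond pandas itself (with the caveat above: only unpickle files you trust). A minimal sketch:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"session_id": ["1542852887489", "abc123"],
                   "value": [1.0, 2.0]})

# Pickle preserves object dtypes exactly, mixed content and all.
path = os.path.join(tempfile.mkdtemp(), "df.pkl")
df.to_pickle(path)
restored = pd.read_pickle(path)
assert restored.equals(df)
```

df.to_parquet() / pd.read_parquet() works the same way if pyarrow is installed.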

Here is an answer from Matthew Rocklin (one of the dask developers) comparing msgpack and pickle. He wrote a broader comparison on his blog.

HDF5 is not the right solution for this use case. hdf5 is a better choice if you have many dataframes you want to store in a single structure. It has more overhead when opening the file, but then it allows you to efficiently load each dataframe and easily load slices of them. It should be thought of as a file system that stores dataframes.

In the case of a single dataframe of time-series events, the recommended formats would be one of the Apache Arrow project formats, i.e. feather or parquet. One should think of those as column-based (compressed) csv files.

The particular tradeoffs between those two are laid out nicely here.

One specific issue to consider is data types. Since feather is not designed to optimize disk space via compression, it can offer support for a larger variety of data types. Parquet, on the other hand, tries to provide very efficient compression and can support only a limited subset, making it the better choice when data compression matters.