Load HDF file into list of Python Dask DataFrames
I have an HDF5 file that I would like to load into a list of Dask DataFrames. I have set this up using a loop following an abbreviated version of the Dask pipeline approach. Here is the code:
import pandas as pd
from dask import compute, delayed
import dask.dataframe as dd
import os, h5py

@delayed
def load(d, k):
    ddf = dd.read_hdf(os.path.join(d, 'Cleaned.h5'), key=k)
    return ddf

if __name__ == '__main__':
    d = 'C:\Users\User\FileD'
    loaded = [load(d, '/DF' + str(i)) for i in range(1, 10)]
    ddf_list = compute(*loaded)
    print(ddf_list[0].head(), ddf_list[0].compute().shape)
I get this error message:
C:\Python27\lib\site-packages\tables\group.py:1187: UserWarning: problems loading leaf ``/DF1/table``::
HDF5 error back trace
File "..\..\hdf5-1.8.18\src\H5Dio.c", line 173, in H5Dread
can't read data
File "..\..\hdf5-1.8.18\src\H5Dio.c", line 543, in H5D__read
can't initialize I/O info
File "..\..\hdf5-1.8.18\src\H5Dchunk.c", line 841, in H5D__chunk_io_init
unable to create file chunk selections
File "..\..\hdf5-1.8.18\src\H5Dchunk.c", line 1330, in H5D__create_chunk_file_map_hyper
can't insert chunk into skip list
File "..\..\hdf5-1.8.18\src\H5SL.c", line 1066, in H5SL_insert
can't create new skip list node
File "..\..\hdf5-1.8.18\src\H5SL.c", line 735, in H5SL_insert_common
can't insert duplicate key
End of HDF5 error back trace
Problems reading the array data.
The leaf will become an ``UnImplemented`` node.
% (self._g_join(childname), exc))
The message mentions a duplicate key. I iterated over the first nine keys to test the code, and in the loop I used each iteration to assemble a different key to pass to dd.read_hdf. Across all iterations I kept the filename the same - only the key changed.

I need to use dd.concat(list, axis=0, ...) to vertically concatenate the contents of the file. My approach was to load them into a list first and then concatenate them.
I have PyTables and h5py installed, and my Dask version is 0.14.3+2.

With Pandas 0.20.1, this seems to work:
for i in range(1, 10):
    hdf = pd.HDFStore(os.path.join(d, 'Cleaned.h5'), mode='r')
    df = hdf.get('/DF{}'.format(i))
    print df.shape
    hdf.close()
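To double-check the key names themselves, the groups actually present in the store can also be listed directly with pandas (HDFStore.keys() is standard pandas API; the path is the same one used above):

import os
import pandas as pd

d = r'C:\Users\User\FileD'  # same directory as above (raw string avoids backslash-escape issues)
hdf = pd.HDFStore(os.path.join(d, 'Cleaned.h5'), mode='r')
print(hdf.keys())  # lists every group stored in the file, e.g. ['/DF1', '/DF2', ...]
hdf.close()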
Is there a way I can load this HDF5 file into a list of Dask DataFrames? Or is there another way to vertically concatenate them?
Dask.dataframe is already lazy, so there is no need to use dask.delayed to make it lazier. You can just call dd.read_hdf repeatedly:
keys = ['/DF{}'.format(i) for i in range(1, 10)]  # the nine keys from the question
ddfs = [dd.read_hdf(os.path.join(d, 'Cleaned.h5'), key=k)
        for k in keys]
ddf = dd.concat(ddfs)
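Since the keys all follow one naming pattern, the list may not even be necessary: dd.read_hdf accepts wildcards in its key argument, so a single call can load every matching group as one Dask DataFrame. A minimal sketch, assuming /DF1 through /DF9 are the only groups matching the pattern:

import os
import dask.dataframe as dd

d = r'C:\Users\User\FileD'  # hypothetical path from the question
# key accepts wildcards, so '/DF*' picks up /DF1 ... /DF9 in one call
ddf = dd.read_hdf(os.path.join(d, 'Cleaned.h5'), key='/DF*')
print(ddf.head())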