Calculate mean of large numpy array which is memmapped from hdf5 file
I have a problem with calculating the mean of a numpy array that is too large for RAM (~100G).
I have looked into using np.memmap, but unfortunately my array is stored as a dataset in an hdf5 file. Based on what I have tried, np.memmap does not accept an hdf5 dataset as input:
TypeError: coercing to Unicode: need string or buffer, Dataset found
So how can I efficiently call np.mean on the memory-mapped array from disk? Of course I could iterate over parts of the dataset, where each part fits into memory.
However, this feels too much like a workaround, and I am also not sure whether it would achieve the best performance.
Here is some sample code:
import numpy as np
import h5py

data = np.random.randint(0, 255, 100000*10*10*10, dtype=np.uint8)
data = data.reshape((100000, 10, 10, 10))  # typically a lot larger, ~100G
hdf5_file = h5py.File('data.h5', 'w')
hdf5_file.create_dataset('x', data=data, dtype='uint8')
hdf5_file.close()
def get_mean_image(filepath):
    """
    Returns the mean_array of a dataset.
    """
    f = h5py.File(filepath, "r")
    xs_mean = np.mean(f['x'], axis=0)  # memory error with large enough array
    return xs_mean
xs_mean = get_mean_image('./data.h5')
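For illustration, here is a minimal sketch of the chunked workaround mentioned above (assuming the data.h5 file and the x dataset created in the sample code). Slicing an h5py dataset with numpy-style indexing reads only the selected rows into memory, so no explicit np.memmap is needed:

with h5py.File('data.h5', 'r') as f:
    chunk = f['x'][:1000]                  # only these 1000 rows are read into RAM
    partial_mean = np.mean(chunk, axis=0)  # mean image of the first chunk
    print(partial_mean.shape)              # (10, 10, 10)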
As suggested by hpaulj in the comments, I simply split the mean calculation into multiple steps.
Here is my (simplified) code, in case it is useful to someone:
import os
import h5py
import numpy as np
import psutil

def get_mean_image(filepath):
    """
    Returns the mean_image of a xs dataset.
    :param str filepath: Filepath of the data upon which the mean_image should be calculated.
    :return: ndarray xs_mean: mean_image of the x dataset.
    """
    f = h5py.File(filepath, "r")
    # check available memory and divide the mean calculation into steps
    total_memory = 0.5 * psutil.virtual_memory().available  # in bytes; use only half of what is available, just to be safe
    filesize = os.path.getsize(filepath)
    steps = int(np.ceil(filesize / total_memory))
    n_rows = f['x'].shape[0]
    stepsize = int(n_rows / float(steps))

    xs_mean_arr = None
    for i in xrange(steps):
        if xs_mean_arr is None:  # create xs_mean_arr that stores the intermediate mean_temp results
            xs_mean_arr = np.zeros((steps,) + f['x'].shape[1:], dtype=np.float64)

        if i == steps - 1:  # for the last step, calculate the mean up to the end of the file
            xs_mean_temp = np.mean(f['x'][i * stepsize:n_rows], axis=0, dtype=np.float64)
        else:
            xs_mean_temp = np.mean(f['x'][i * stepsize:(i + 1) * stepsize], axis=0, dtype=np.float64)

        xs_mean_arr[i] = xs_mean_temp

    xs_mean = np.mean(xs_mean_arr, axis=0, dtype=np.float64).astype(np.float32)
    f.close()
    return xs_mean
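For completeness, a minimal usage sketch (assuming the data.h5 file created above and that psutil is installed):

xs_mean = get_mean_image('./data.h5')
print(xs_mean.shape)  # (10, 10, 10), dtype float32

Note that the chunk means are averaged with equal weight, while the last chunk can contain slightly more rows than stepsize, so the result is a close approximation rather than the exact mean; an exact version would weight each chunk mean by its number of rows (or accumulate a running sum and divide by n_rows at the end).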