Dask: use a broadcasted pandas.DataFrame in an apply function
I have some code that, for each record in a dask.DataFrame, samples k times from a pandas.DataFrame. But it emits this warning:
UserWarning: Large object of size 1.12 MB detected in task graph:
( metric label group_1 group_2
6251 1 ... 6f875181063ba')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
% (format_bytes(len(b)), s)
I tried to fix this by manually broadcasting the data:
client.scatter(group_0, broadcast=True)
but dask will still try to re-broadcast group_0.
How can I tell dask to use the broadcasted copy? Do I need to gather the scattered data back afterwards? Can the code be optimized further? See the code below:
import numpy as np
import pandas as pd
seed = 47
np.random.seed(seed)
size = 100000
df = pd.DataFrame({'metric': np.random.randint(1, 100, size=size)})
df['label'] = np.random.randint(0,2, size=size)
df['group_1'] = pd.Series(np.random.randint(1,12, size=size)).astype(object)
df['group_2'] = pd.Series(np.random.randint(1,10, size=size)).astype(object)
display(df.head())
group_0 = df[df['label'] == 0]
group_0 = group_0.reset_index(drop=True)
group_0 = group_0.rename(index=str, columns={"metric": "metric_group_0"})
join_columns_enrich = ['group_1', 'group_2']
join_real = ['metric_group_0']
join_real.extend(join_columns_enrich)
group_0 = group_0[join_real]
display(group_0.head())
group_1 = df[df['label'] == 1]
group_1 = group_1.reset_index(drop=True)
display(group_1.head())
import dask.dataframe as dd
from dask.distributed import Client
client = Client()
display(client)
client.cluster
resulting_df = None
k = 3
def knnJoinSingle_series(original_element, group_0, join_columns, random_state):
    # restrict group_0 to candidates with the same group values as this record
    limits_dict = original_element[join_columns].to_dict()
    query = ' & '.join([f"{col} == {val}" for col, val in limits_dict.items()])
    candidates = group_0.query(query)
    if len(candidates) > 0:
        # sample one matching candidate and return its metric
        return candidates.sample(n=1, random_state=random_state)['metric_group_0'].values[0]
    else:
        return np.nan
for i in range(1, k+1):
    print(i)
    # WARNING: not setting a random state, otherwise the same record is always
    # picked when the group selection variables have the same values.
    # Is there a better way?
    group_1_dask = dd.from_pandas(group_1, npartitions=8)
    group_1_dask['metric_group_0'] = group_1_dask.apply(
        lambda x: knnJoinSingle_series(x, group_0, join_columns_enrich, random_state=None),
        axis=1, meta=('metric_group_0', 'float64'))  # float64: the function may return np.nan
    group_1 = group_1_dask.compute()
    group_1['run'] = i
    if resulting_df is None:
        resulting_df = group_1
    else:
        resulting_df = pd.concat([resulting_df, group_1])
resulting_df['difference'] = resulting_df['metric'] - resulting_df['metric_group_0']
resulting_df['differenceAbs'] = np.abs(resulting_df['difference'])
display(resulting_df.head())
print(len(resulting_df))
print(resulting_df.difference.isnull().sum())
Before using the variable on the dask dataframe (probably right after creating the client), you need to do:
group0 = client.scatter(group_0, broadcast=True)
i.e., replace the instance of the concrete dataframe with a future, which is a reference to the copies on the cluster. Dask will interpret this as an instruction to use the local copy of the data on each worker.
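A minimal sketch of how this could slot into the loop above, assuming the distributed scheduler resolves a future when it is passed as an explicit argument to map_partitions (the helper enrich_partition and the name group_0_future are introduced here for illustration, not taken from the question):
# sketch only: assumes dask resolves group_0_future to a pandas.DataFrame
# on each worker before calling enrich_partition
group_0_future = client.scatter(group_0, broadcast=True)  # broadcast once, up front

def enrich_partition(part, g0):
    # g0 should arrive as the worker-local pandas.DataFrame copy of group_0
    part = part.copy()
    part['metric_group_0'] = part.apply(
        lambda x: knnJoinSingle_series(x, g0, join_columns_enrich, None), axis=1)
    return part

# meta must be given explicitly: dask cannot infer it through the future
meta = group_1.head(0).assign(metric_group_0=0.0)
group_1_dask = dd.from_pandas(group_1, npartitions=8)
group_1_dask = group_1_dask.map_partitions(enrich_partition, group_0_future, meta=meta)
group_1 = group_1_dask.compute()
Passing the future as a map_partitions argument, rather than capturing the pandas frame in a row-wise lambda, keeps it visible to the scheduler as task data instead of baking a 1 MB object into every serialized closure, which is what triggers the warning.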