How does joblib.Parallel deal with global variables?

My code looks something like this:

from glob import glob

from joblib import Parallel, delayed

# prediction model - 10s of megabytes on disk
LARGE_MODEL = load_model('path/to/model')

file_paths = glob('path/to/files/*')

def do_thing(file_path):
    pred = LARGE_MODEL.predict(load_image(file_path))
    return pred

Parallel(n_jobs=2)(delayed(do_thing)(fp) for fp in file_paths)

My question is whether LARGE_MODEL gets pickled/unpickled on every iteration of the loop. And if so, how can I make sure each worker caches it instead (if that is even possible)?

TLDR

The parent process pickles the large model once. That can be made more performant by ensuring the large model is a numpy array backed by a memmapped file. Workers can then load_temporary_memmap much faster than reading it from disk.

Your job is parallelized and will likely use joblib's _parallel_backends.LokyBackend.

In joblib.parallel.Parallel.__call__, joblib tries to initialize the backend to use LokyBackend when n_jobs is set to a count greater than 1.
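You can inspect which backend joblib will pick with its get_active_backend helper (a quick sketch; the helper lives in joblib.parallel, and with no backend context active it reports the default):

```python
from joblib.parallel import get_active_backend

# With no explicit backend context, joblib falls back to its default,
# which is the loky backend wherever loky is available.
backend, n_jobs = get_active_backend()
print(type(backend).__name__)  # LokyBackend
```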

The LokyBackend uses a shared temporary folder for the same Parallel object. This is relevant to the reducers that modify the default pickling behavior.

Now, the LokyBackend configures a MemmappingExecutor that shares this folder with the reducers.

If you have numpy installed and your model is a clean numpy array, you are guaranteed to have it pickled once as a memmapped file, using the ArrayMemmapForwardReducer, and passed from the parent to the child processes.

Otherwise, it is pickled using the default pickling, as a bytes object.

You can read joblib's debug logs to find out how your model is being pickled in the parent process.
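One way to surface that output (a sketch; verbose is Parallel's own logging knob, and abs stands in for your real work function):

```python
from joblib import Parallel, delayed

# verbose > 0 makes the parent process report progress; high values
# (e.g. 50) print a line per dispatched batch, which helps you see how
# and when arguments are handed off to the workers.
results = Parallel(n_jobs=2, verbose=50)(delayed(abs)(x) for x in [-1, -2, -3])
print(results)  # [1, 2, 3]
```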

Each worker 'unpickles' the large model, so there is really no point in caching it there.

You can only improve the source from which the workers load the pickled large model, by backing your model with a memory mapped file.
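A minimal sketch of that path, assuming numpy is available and the "model" can be reduced to a weight array: arrays passed as arguments that exceed Parallel's max_nbytes threshold (1M by default) are dumped to a shared memmap once and opened read-only by each worker, instead of being re-sent as bytes. The predict function below is hypothetical.

```python
import numpy as np
from joblib import Parallel, delayed

# Hypothetical "model": a weight matrix larger than the default
# max_nbytes threshold (1M), so joblib auto-memmaps it for the workers.
weights = np.zeros((512, 512), dtype=np.float64)  # 2 MB

def predict(w, x):
    # Inside a worker, w arrives as a read-only numpy memmap.
    return float(w.sum()) + x

# Passing the array explicitly as an argument (rather than relying on a
# captured global) is what lets the memmapping reducer kick in.
out = Parallel(n_jobs=2)(delayed(predict)(weights, i) for i in range(3))
print(out)  # [0.0, 1.0, 2.0]
```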