What is ParallelStream Queue Behavior?
I'm using parallelStream to upload some files in parallel; some are large and some are small. I've noticed that not all of the workers are being used.
Everything runs fine at first, with all threads in use (I set the parallelism option to 16). Then at some point (once it gets to the larger files), it only uses one thread.
Simplified code:
files.parallelStream().forEach((file) -> {
    try (FileInputStream fileInputStream = new FileInputStream(file)) {
        IDocumentStorageAdaptor uploader = null;
        try {
            logger.debug("Adaptors before taking: " + uploaderPool.size());
            uploader = uploaderPool.take();
            logger.debug("Took an adaptor!");
            logger.debug("Adaptors after taking: " + uploaderPool.size());
            uploader.addNewFile(file);
        } finally {
            if (uploader != null) {
                logger.debug("Adding one back!");
                uploaderPool.put(uploader);
                logger.debug("Adaptors after putting: " + uploaderPool.size());
            }
        }
    } catch (InterruptedException | IOException e) {
        throw new UploadException(e);
    }
});
uploaderPool is an ArrayBlockingQueue.
Logs:
[ForkJoinPool.commonPool-worker-8] - Adaptors before taking: 0
[ForkJoinPool.commonPool-worker-15] - Adding one back!
[ForkJoinPool.commonPool-worker-8] - Took an adaptor!
[ForkJoinPool.commonPool-worker-15] - Adaptors after putting: 0
...
...
...
[ForkJoinPool.commonPool-worker-10] - Adding one back!
[ForkJoinPool.commonPool-worker-10] - Adaptors after putting: 16
[ForkJoinPool.commonPool-worker-10] - Adaptors before taking: 16
[ForkJoinPool.commonPool-worker-10] - Took an adaptor!
[ForkJoinPool.commonPool-worker-10] - Adaptors after taking: 15
[ForkJoinPool.commonPool-worker-10] - Adding one back!
[ForkJoinPool.commonPool-worker-10] - Adaptors after putting: 16
[ForkJoinPool.commonPool-worker-10] - Adaptors before taking: 16
[ForkJoinPool.commonPool-worker-10] - Took an adaptor!
[ForkJoinPool.commonPool-worker-10] - Adaptors after taking: 15
It seems like all of the work (the items in the list) gets distributed across the 16 threads up front, and whatever is delegated to a thread just waits for that thread to become free instead of running on an available thread. Is there a way to change how parallelStream queues up its work? I read the ForkJoinPool documentation, and it mentions work stealing, but only for subtasks that a task spawns.
My other plan is to shuffle the list I'm calling parallelStream on; maybe that would balance things out a bit.
Thanks!
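The shuffling idea mentioned above could be sketched as follows (the file names are made up, and the upload call is only indicated in a comment):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleSketch {
    // Shuffle a copy of the work list so that large files are less likely
    // to cluster inside the same split handed to a single worker thread.
    static List<String> shuffled(List<String> files, long seed) {
        List<String> copy = new ArrayList<>(files);
        Collections.shuffle(copy, new Random(seed)); // fixed seed here only for reproducibility
        return copy;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("big1.bin", "big2.bin", "small1.txt", "small2.txt");
        List<String> mixed = shuffled(files, 42L);
        System.out.println(mixed);
        // mixed.parallelStream().forEach(f -> upload(f)); // upload(...) is hypothetical
    }
}
```

Note that shuffling only changes which elements land in each split; it does not change how the stream partitions the work, so it can soften but not eliminate the imbalance.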
The splitting and execution heuristics of parallel streams are tuned for data-parallel operations, not IO-parallel ones. (In other words, they are tuned to keep the CPUs busy while not generating many more tasks than you have CPUs.) As a result, they are biased toward computation over forking. There is currently no option for overriding these choices.
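Since those heuristics can't be overridden, a common workaround for IO-bound work like this (a sketch, not part of the answer above) is to skip parallelStream entirely and size a dedicated thread pool yourself, submitting one task per file so idle threads always pick up the next file:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class IoBoundUpload {
    // Submit one task per file to a fixed pool sized for IO, not for CPU count.
    // Returns how many tasks completed; the counter stands in for the real upload.
    static int processAll(List<String> files, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger done = new AtomicInteger();
        for (String file : files) {
            pool.submit(() -> {
                // uploader.addNewFile(file) would go here (hypothetical)
                done.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("big1.bin", "small1.txt", "small2.txt");
        System.out.println("uploaded: " + processAll(files, 16));
    }
}
```

Unlike parallelStream's up-front splitting, an executor queue hands out files one at a time, so a thread stuck on a large file never blocks the small files queued behind it.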