Deadlocks while using Queues and multiprocessing
I don't understand this part of the multiprocessing documentation (python.org); I quote:
"An example which will deadlock is the following:

    from multiprocessing import Process, Queue

    def f(q):
        q.put('X' * 1000000)

    if __name__ == '__main__':
        queue = Queue()
        p = Process(target=f, args=(queue,))
        p.start()
        p.join()                    # this deadlocks
        obj = queue.get()
"
First, why does it block?
Even more surprisingly, when I try some values smaller than 1000000 in the definition of f, it works fine (it works for 10, 100, 1000 and 10000, but not for 100000).
Thanks a lot for your help!
This example illustrates the behaviour described in section 17.2.2.2 of the documentation:
if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed.
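In other words, q.put() does not write directly into the pipe: a background feeder thread in the child flushes the buffered data, and the child process cannot exit until that flush completes. When the parent calls join() before draining the queue, the pipe's OS-level buffer (typically on the order of 64 KiB on Linux, which is why your small payloads happen to work) fills up, the feeder thread blocks, and the child never terminates. The fix is simply to consume the queue before joining. A minimal sketch of the corrected ordering:

```python
from multiprocessing import Process, Queue

def f(q):
    q.put('X' * 1000000)

def run():
    queue = Queue()
    p = Process(target=f, args=(queue,))
    p.start()
    # Drain the queue *before* joining: get() consumes the buffered
    # item, so the child's feeder thread can finish flushing and exit.
    obj = queue.get()
    p.join()  # returns promptly now that the queue is empty
    return len(obj)

if __name__ == '__main__':
    print(run())  # prints 1000000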