Python - How to pass a global variable to multiprocessing.Process?
I need to terminate some processes after a certain amount of time, so I use another process that sleeps to do the waiting. But it seems the new process cannot access the globals of the main process. How can I solve this?

Code:
import os
from subprocess import Popen, PIPE
import time
import multiprocessing

log_file = open('stdout.log', 'a')
log_file.flush()
err_file = open('stderr.log', 'a')
err_file.flush()

processes = []

def processing():
    print "processing"
    global processes
    global log_file
    global err_file
    for i in range(0, 5):
        # raw string: otherwise '\t' in the path is parsed as a tab character
        p = Popen(['java', '-jar', r'C:\Users\two\Documents\test.jar'],
                  stdout=log_file, stderr=err_file)  # something long running
        processes.append(p)
    print len(processes)  # returns 5
def waiting_service():
    name = multiprocessing.current_process().name
    print name, 'Starting'
    global processes
    print len(processes)  # returns 0
    time.sleep(2)
    for i in range(0, 5):
        processes[i].terminate()
    print name, 'Exiting'

if __name__ == '__main__':
    processing()
    service = multiprocessing.Process(name='waiting_service', target=waiting_service)
    service.start()
You should use synchronization primitives. You probably want to set an Event that is triggered after a while by the main (parent) process. You may also want to wait for the processes to actually complete and join them (like you would with threads). If you have many similar tasks, you can use a processing pool like multiprocessing.Pool.

Here is a small example of how it's done:
import multiprocessing
import time

# note: the children see this Event only because they inherit it on
# fork (Unix); on Windows (spawn) it would be re-created in each child
kill_event = multiprocessing.Event()

def work(_id):
    while not kill_event.is_set():
        print "%d is doing stuff" % _id
        time.sleep(1)
    print "%d quit" % _id

def spawn_processes():
    processes = []
    # spawn 10 processes
    for i in xrange(10):
        process = multiprocessing.Process(target=work, args=(i,))
        processes.append(process)
        process.start()
    time.sleep(1)
    # stop all processes by setting the kill event
    kill_event.set()
    # wait for all processes to complete
    for process in processes:
        process.join()
    print "done!"

spawn_processes()
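For the multiprocessing.Pool alternative mentioned above, a minimal sketch (shown in Python 3 syntax; `square` is an illustrative stand-in for the long-running task) could look like this:

```python
import multiprocessing

def square(x):
    # stand-in for "something long running"
    return x * x

if __name__ == '__main__':
    # The pool starts the workers, distributes the tasks across them,
    # and the with-block tears the pool down when it exits.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Here map blocks until every task has finished, so there is no manual start/join bookkeeping.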
The whole problem was Python on Windows. On Windows, multiprocessing starts each child with a fresh interpreter, so the child re-imports the module and never sees globals that were modified in the parent. I switched to Linux, where children are forked and inherit the parent's memory, and my script runs fine.
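A portable alternative to switching OS (not part of the original answer, just a sketch with illustrative names) is to pass the shared state to the child explicitly instead of relying on module-level globals; an Event handed over via args is transferred correctly under both fork and spawn:

```python
import multiprocessing
import time

def waiting_service(stop_event, timeout):
    # Sleep, then signal all workers to stop. The event arrives as an
    # argument, so this works on every platform, including Windows.
    time.sleep(timeout)
    stop_event.set()

def worker(stop_event):
    while not stop_event.is_set():
        time.sleep(0.1)

if __name__ == '__main__':
    stop_event = multiprocessing.Event()
    workers = [multiprocessing.Process(target=worker, args=(stop_event,))
               for _ in range(3)]
    for w in workers:
        w.start()
    service = multiprocessing.Process(target=waiting_service,
                                      args=(stop_event, 0.5))
    service.start()
    service.join()
    for w in workers:
        w.join()
    print("all workers stopped")
```

The same pattern works for Popen handles in the question's code: keep them in the parent and do the terminating there, signalling via the event instead of touching the list from the child.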
Special thanks to @rchang's comment:
When I tested it, in both cases the print statement came up with 5. Perhaps we have a version mismatch in some way? I tested it with Python 2.7.6 on Linux kernel 3.13.0 (Mint distribution).