Python - Replace subprocess source files

I'm trying to write a program made up of two files, one called launcher.py and the other sysupdate.py, where launcher spawns subprocesses to run (sysupdate among them) and sysupdate listens on the network for zipped software-update files. When sysupdate receives an update file, it needs to be able to kill/pause the other processes (created by launcher), replace their source files and then restart them. I'm struggling to find a neat way of doing this and was wondering whether anyone has any suggestions on how I might accomplish it?

I should mention that these subprocesses are designed to loop forever, so I can't just wait for them to exit of their own accord; I need to be able to kill them manually, replace their source files and then restart them.

While the subprocesses are running, I also need the launcher to 'keep them alive', so if they die for any reason they should be restarted. Obviously I need to pause that behaviour while they are being killed for a software update. This code is for an always-on sensor system, so consistent looping and restarting is essential.

For example:

launcher.py:

from multiprocessing import Process

processes = []

def launch_threads():
    # Reading process
    try:
        readthread = Process(target=read_loop, args=(sendqueue, mqttqueue))
        processes.append(readthread)
    except Exception as ex:
        log("Read process creation failed: " + str(ex), 3)
        
    # ..... Other threads/processes here
    
    # System Update Thread
    try:
        global updatethread
        updatethread = Process(target=update_loop, args=(updatequeue,))
        processes.append(updatethread)
    except Exception as ex:
        log("Software updater process creation failed: " + str(ex), 3)

    return processes


if __name__ == '__main__':
    processes = launch_threads()
    for p in processes:
        p.start()
    for p in processes:              # Here I have it trying to keep processes alive permanently, ..
        p.join()                     # .. I need a way to 'pause' this
        if not p.is_alive():
            p.start()                # (I realise a Process can only be started once, so a real
                                     # restart would need a fresh Process instance)

sysupdate.py:

def update_loop():
    wait_for_zip_on_network()
    extract_zip()
    
    kill_processes()           # Need sysupdate to be able to tell 'launcher' to kill/pause the processes

    replace_source_files()

    resume_processes()         # Tell 'launcher' to resume/restart the processes
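
The closest I've got to something 'neat' is the idea of sharing a flag between the two files, along the lines of the rough sketch below (supervise and pause_event are just names I've made up for illustration; the kill/replace handshake is exactly the part I can't work out):

    import time
    from multiprocessing import Process, Event

    pause_event = Event()   # shared with sysupdate; set while an update is in progress

    def supervise(specs):
        # specs is a list of (target, args) tuples, e.g. (read_loop, (sendqueue, mqttqueue))
        running = [Process(target=t, args=a) for t, a in specs]
        for p in running:
            p.start()
        while True:
            if not pause_event.is_set():
                for i, p in enumerate(running):
                    if not p.is_alive():
                        # a Process can only be started once, so build a fresh one
                        t, a = specs[i]
                        running[i] = Process(target=t, args=a)
                        running[i].start()
            time.sleep(1)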

launch_threads is arguably a misnomer, since you are launching processes rather than threads. I assume you are launching some number of processes that can be assigned to a variable N_TASKS, plus one additional process represented by update_loop, so the total number of processes is N_TASKS + 1. I also assume that, in the absence of a source update, these N_TASKS processes would eventually complete. My suggestion is to use a multiprocessing pool, which conveniently provides several facilities that make our work a little simpler. I would also use a modified version of update_loop that just listens for a change, updates the source and terminates, but can be restarted:

sysupdate.py

def modified_update():
    # just wait for an update to arrive and return the zip file to the launcher
    zip_file = wait_for_zip_on_network()
    return zip_file

We then use the Pool class from the multiprocessing module together with various callbacks, so that we can tell when the various submitted tasks complete. We want to wait until either the modified_update task or all of the "regular" tasks have completed. In either case we terminate any outstanding tasks, but in the first case we restart everything, while in the second case we are done:

launcher.py

from multiprocessing import Pool
from threading import Event  # the callbacks run on a thread in this process, so a threading Event suffices

from sysupdate import modified_update  # the non-looping updater shown above

# the number of processes that need to run besides the modified_update process:
N_TASKS = 4

completed_event = None
completed_count = 0

def regular_task_completed_callback(result):
    global completed_count, completed_event
    completed_count += 1
    if completed_count == N_TASKS:
        completed_event.set() # we are through with all the tasks

def new_source_files_callback(zip_file):
    global completed_event
    extract_zip(zip_file)
    replace_source_files()
    completed_event.set()

def launch_threads():
    global completed_event, completed_count
    POOLSIZE = N_TASKS + 1
    while True:
        completed_event = Event()
        completed_count = 0
        pool = Pool(POOLSIZE)
        # start the "regular" processes:
        pool.apply_async(read_loop, args=(sendqueue, mqttqueue), callback=regular_task_completed_callback)
        # etc.
        # start modified update_loop:
        pool.apply_async(modified_update, callback=new_source_files_callback)
        # wait for either the source files to have changed or the "regular" tasks to have completed:
        completed_event.wait()
        # terminate all outstanding tasks
        pool.terminate()
        pool.join()
        if completed_count == N_TASKS: # all the "regular" tasks have completed
            return # we are done
        # else we start all over again


if __name__ == '__main__':
    launch_threads()

Update

If the "regular" tasks never terminate, then the logic is greatly simplified. modified_update becomes:

sysupdate.py

def modified_update():
    # wait for an update to arrive, then install it before returning
    zip_file = wait_for_zip_on_network()
    extract_zip(zip_file)
    replace_source_files()

And then:

launcher.py

from multiprocessing import Pool

from sysupdate import modified_update  # the simplified updater shown above


def launch_threads():
    # the number of processes that need to run besides the modified_update process:
    N_TASKS = 4
    POOLSIZE = N_TASKS + 1
    while True:
        pool = Pool(POOLSIZE)
        # start the "regular" processes:
        pool.apply_async(read_loop, args=(sendqueue, mqttqueue))
        # etc.
        # start modified_update:
        result = pool.apply_async(modified_update)
        result.get() # wait for modified_update to complete
        # terminate all outstanding (i.e. "regular") tasks
        pool.terminate()
        pool.join()
        # and start all over


if __name__ == '__main__':
    launch_threads()
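
The question also asks for "keep alive" behaviour, i.e. restarting a "regular" task that dies unexpectedly. A minimal sketch of one way to layer that onto the pool, assuming a task only ever dies by raising an exception (error_callback does not fire if the worker process itself is killed outright), could be:

    def submit_with_restart(pool, func, args=()):
        # resubmit func to the pool whenever it dies with an exception;
        # log() is the question's own logger
        def on_error(exc):
            log(func.__name__ + " died: " + str(exc) + "; restarting", 3)
            submit_with_restart(pool, func, args)
        pool.apply_async(func, args=args, error_callback=on_error)

    # inside launch_threads, replacing the plain apply_async calls:
    # submit_with_restart(pool, read_loop, args=(sendqueue, mqttqueue))

Note that a resubmission attempted after pool.terminate() would raise, but at that point the pool is being thrown away anyway.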

Notes

Since I am now using fewer of the Pool facilities, you could go back to launching individual Process instances (see the sketch after the list below). The gist of what is being done is:

  1. modified_update no longer loops; instead, it terminates after updating the source.
  2. launch_threads contains a loop that starts the "regular" and modified_update processes and then waits for modified_update to complete, which signals that a source update has taken place. As a result, all the "regular" processes must be terminated and everything starts over. Using a pool just simplifies keeping track of all the processes and terminating them with a single call.
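
For completeness, a minimal sketch of what that Process-based variant might look like (read_loop, sendqueue and mqttqueue are the question's own placeholders):

    from multiprocessing import Process

    from sysupdate import modified_update  # the simplified updater shown above

    def launch_threads():
        while True:
            # start the "regular" processes:
            workers = [
                Process(target=read_loop, args=(sendqueue, mqttqueue)),
                # etc.
            ]
            for w in workers:
                w.start()
            # run the updater and wait for an update to be installed:
            updater = Process(target=modified_update)
            updater.start()
            updater.join()
            # kill the "regular" processes and start over with the new sources:
            for w in workers:
                w.terminate()
                w.join()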