How to pass gurobipy.Model variables through apply_async() in Python?

Summary: I am new to parallel computing in Python. I built a DEA (Data Envelopment Analysis) model with Gurobi and computed the efficiency of each DMU (Decision Making Unit). To reduce the total computation time, I split the solution process into two steps:

Step 1 is correct and runs fine. But in Step 2, when I pass the object argument `gurobipy.Model` to my function `Solve()` through `multiprocessing.Pool.apply_async()`, I get `TypeError: can't pickle PyCapsule objects`, and `Solve()` is never executed. How can I pass a `gurobipy.Model` variable through `apply_async()`, or is there any other parallel approach for passing a `gurobipy.Model` variable?

Details: Here is the main program.

from multiprocessing import Pool
import multiprocessing
from gurobipy import *
import gurobipy as gp
import numpy as np
import pandas as pd
import time

def runComputationgurobi(Root, FileName, ResultFileName, numInput, numOutput):
    '''
    input: root path, file name, result file name, number of input units, number of output units
    output: Excel file (including DMU number, best solution (efficiency), modeling time, solving time)
    '''
    #Data preprocessing
    df = pd.read_csv(f"{Root}/{FileName}", header=None)   #download data
    D = np.array(df)                                      #convert to ndarray
    transD = D.transpose()                                #transpose ndarray
    outputs = []                                          #empty list to store best solutions
    
    scale, S = transD.shape                               #scale: numInput+numOutput; S: total number of DMUs

    print("Build k models...")
    #Step1: Modeling
    '''
    call BuildGurobiModels(list of download data, number of input units, number of output units)
    return: k modeling times(list[float]), k Gurobi models(list[gurobipy.Model])
    '''
    build_time_house, model_house = BuildGurobiModels(transD, numInput, numOutput)

    print("Parallel computing k models...")
    #Step2: Parallel optimization model
    '''
    call Solve(kth Gurobi model)
    return: k best solutions(efficiency)(float), k solving times(float)
    '''
    temp = []
    pool = multiprocessing.Pool(4)
    print("Start parallel solve")
    start_time = time.time()
    for k in range(S):
        temp.append([k+1, build_time_house[k], pool.apply_async(Solve, args=(model_house[k], ))])
    pool.close()
    pool.join()
    print(f"{time.time() - start_time}s")

    for k, build_time, _return in temp:
        outputs.append([k, _return.get()[0], build_time, _return.get()[1]])  #_return.get()=(obj_efficiency, solve_time, )
    
    #Output Excel
    pd.DataFrame(np.array(outputs)).to_excel(f"{Root}/result_parallel_matrix_ChgRHS.xlsx", header=["DMU", "obj_efficiency", "build_time", "solve_time"], index=False)

if __name__=="__main__":
    rootPath = "C:/Users/MB516/Documents/source/Python Scripts/Parallel_processing"
    file_name = "test.csv"
    resultfile_name = "result.csv"

    numInput = 2
    numOutput = 3

    start_time = time.time()
    runComputationgurobi(rootPath, file_name, resultfile_name, numInput, numOutput)
    parallel_solveTime = time.time() - start_time

    print(f"solveTime:{parallel_solveTime}")

构建 k 个模型:

def BuildGurobiModels(transD, numInput, numOutput):
    '''
    input: list of download data(list), number of input unit(int),number of output unit(int)
    return: k modeling times(list[float]), k Gurobi models(list[gurobipy.Model])
    '''
    #Data preprocessing
    model_house = []
    build_time_house = []
    scale, S = transD.shape  #scale: numInput+numOutput; S: total number of DMUs

    for k in range(S):
        #Define model
        start_time = time.time()
        model = gp.Model(f"NaiveDEA{k+1}")
        model.setParam("OutputFlag", 0) # 0: disables solver output
        model.setParam("Method", 0)     # 0: primal simplex

        #Define variables
        #define lambda
        lambdaarray = model.addVars(S, lb = 0.0, ub = GRB.INFINITY, vtype = GRB.CONTINUOUS)

        #define theta
        theta = model.addVar(lb = -GRB.INFINITY, ub = GRB.INFINITY, vtype=GRB.CONTINUOUS, name="theta")
        model.update()

        #Set the objective
        model.setObjective(theta, GRB.MINIMIZE)

        #Define constraints
        #input constraint
        model.addConstrs((LinExpr(transD[i], lambdaarray.values()) <=transD[i, k]*theta for i in range(numInput)), name = "Input")
        model.update()

        #output constraint
        model.addConstrs((LinExpr(transD[j], lambdaarray.values()) >=transD[j, k] for j in range(numInput, scale)), name = "Output")
        model.update()

        #convexity constraint
        model.addConstr(quicksum(lambdaarray)==1, name="Convexity")
        model.update()

        build_time = time.time() - start_time   #modeling time

        model_house.append([model])
        build_time_house.append([build_time])

    return build_time_house, model_house

求解第 k 个模型:

def Solve(model):
    '''
    input: kth Gurobi model(gurobipy.Model)
    return:k best solutions(efficiency)(float), k solving times(float)
    ''' 
    print("Start Solve!!!!!!")      
    #Solve
    start_time = time.time()
    model.optimize()
    solve_time = time.time() - start_time

    #Retrieve the objective value and return it together with the solve time
    objvalue = model.getObjective()
    getobjv = objvalue.getValue()
    return getobjv, solve_time

When I run the code, I get the following output:

Build k models...
Parallel computing k models...
0.53267502784729s
Traceback (most recent call last):
  File "c:/Users/MB516/Documents/source/Python Scripts/Parallel_processing/ENGLIFH_TEST_PARALLEL.py", line 124, in <module>
    runComputationgurobi(rootPath, file_name, resultfile_name, numInput, numOutput)
  File "c:/Users/MB516/Documents/source/Python Scripts/Parallel_processing/ENGLIFH_TEST_PARALLEL.py", line 47, in runComputationgurobi
    outputs.append([k, _return.get()[0], build_time, _return.get()[1]])  #_return.get()=(obj_efficiency, solve_time, )
TypeError: can't pickle PyCapsule objects

The `Solve()` function of Step 2 is never executed, because "Start Solve!!!!!!" is never printed inside `Solve()`. And the following code

for k, build_time, _return in temp:
    outputs.append([k, _return.get()[0], build_time, _return.get()[1]])  #_return.get()=(obj_efficiency, solve_time, )

raises `TypeError: can't pickle PyCapsule objects`. How can I solve this problem? Thanks in advance for your answers!
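For context, this error is not specific to the DEA code: `multiprocessing` pickles every argument before shipping it to a worker process, and any object that wraps raw C data cannot be pickled. A `gurobipy.Model` keeps its state in C structures exposed through a PyCapsule, so it fails the same way a C-backed standard-library object does. A minimal stdlib-only analogue (using `threading.Lock` as a stand-in for the model):

```python
import pickle
import threading

# A gurobipy.Model wraps C data behind a PyCapsule, so pickle cannot
# serialize it -- the same failure a C-backed stdlib object produces:
try:
    pickle.dumps(threading.Lock())
    picklable = True
except TypeError:
    picklable = False

print(picklable)  # False: multiprocessing cannot send such objects to workers
```

This is why `Solve()` never starts: `apply_async` fails while serializing its arguments, before the worker is ever invoked.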

Environment

This is how to create and solve multiple models in parallel in Python:

import multiprocessing as mp
import gurobipy as gp

def solve_model(input_data):
    with gp.Env() as env, gp.Model(env=env) as model:
        # define model
        model.optimize()
        # retrieve data from model

if __name__ == '__main__':
    with mp.Pool() as pool:
        pool.map(solve_model, [input_data1, input_data2, input_data3])

For more information, please refer to the full guide.
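The key idea above is to send only plain, picklable data (the raw per-DMU rows) to the workers and build and solve each model inside the worker process. The question's `apply_async` bookkeeping still works with this pattern; a stdlib-only sketch with a hypothetical `solve_worker` standing in for the model build and optimize:

```python
from multiprocessing import Pool

def solve_worker(k, row):
    # Placeholder for: build the k-th Gurobi model from the raw data row
    # inside this worker (with gp.Env()/gp.Model) and call model.optimize().
    # Only plain, picklable Python data crosses the process boundary.
    return k, float(sum(row))

if __name__ == "__main__":
    data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # hypothetical per-DMU rows
    with Pool(2) as pool:
        async_results = [pool.apply_async(solve_worker, args=(k, row))
                         for k, row in enumerate(data)]
        outputs = [r.get() for r in async_results]
    print(outputs)  # [(0, 3.0), (1, 7.0), (2, 11.0)]
```

Because the worker receives only lists of floats, pickling succeeds; the build time moves into the workers, but so does the solve, so the two steps parallelize together.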