How to use "AddAbsEquality" or "AddMultiplicationEquality" in Google's CP-SAT solver for non-linear optimization?
My goal is to recover a data sequence from a predicted one. Suppose the original sequence is x_org = [10, 20, 30, 40, 50], but the data I receive arrives in random order: x_ran = [50, 40, 20, 10, 30]. I want to recover the pattern by remapping the received values so that they match the original as closely as possible (i.e. minimize the recovery loss).
I used an approach very similar to the "Assignment with Teams of Workers" and "Solving an Optimization Problem" examples on the Google OR-Tools site ([https://developers.google] and [https://developers.google.com/optimization/cp/integer_opt_cp]).
I can minimize the sum of the losses (errors), but not the sum of their absolute values or squares.
from ortools.sat.python import cp_model

x_org = [10, 20, 30, 40, 50]
x_ran = [50, 40, 20, 10, 30]
n = len(x_org)

model = cp_model.CpModel()

# Defining recovered data
x_rec = [model.NewIntVar(0, 10000, 'x_rec_%i') for i in range(n)]

# Defining recovery loss
x_loss = [model.NewIntVar(0, 10000, 'x_loss_%i' % i) for i in range(n)]

# Defining a (recovery) mapping matrix
M = {}
for i in range(n):
    for j in range(n):
        M[i, j] = model.NewBoolVar('M[%i,%i]' % (i, j))

# ----------------- Constraints ---------------%
# Each sensor is assigned one unique measurement.
for i in range(n):
    model.Add(sum([M[i, j] for j in range(n)]) == 1)

# Each measurement is assigned one unique sensor.
for j in range(n):
    model.Add(sum([M[i, j] for i in range(n)]) == 1)

# Recovering the remapped data x_rec = M*x_ran (like Ax = b)
for i in range(n):
    model.Add(x_rec[i] == sum([M[i, j] * x_ran[j] for j in range(n)]))

# Loss = original data - recovered data
for i in range(n):
    x_loss[i] = x_org[i] - x_rec[i]

# Minimizing recovery loss
model.Minimize(sum(x_loss))

# --------------- Calling solver -------------%
# Solves and prints out the solution.
solver = cp_model.CpSolver()
status = solver.Solve(model)
print('Solve status: %s' % solver.StatusName(status))
if status == cp_model.OPTIMAL:
    print('Optimal objective value: %i' % solver.ObjectiveValue())
    for i in range(n):
        print('x_loss[%i] = %i' % (i, solver.Value(x_loss[i])))
The output, without taking the absolute value of the errors, is:
Solve status: OPTIMAL
Optimal objective value: 0
x_loss[0] = -10
x_loss[1] = -30
x_loss[2] = 0
x_loss[3] = 30
x_loss[4] = 10
This shows that the recovery is wrong even though the total loss is zero. However, when I try to add another int variable to hold the absolute value of the loss [as shown below], Python raises an error.
# Defining abs recovery loss
x_loss_abs = [model.NewIntVar(0, 10000, 'x_loss_abs_%i' % i) for i in range(n)]

# Loss = original data - recovered data
for i in range(n):
    model.AddAbsEquality(x_loss_abs[i], x_loss[i])
    # model.AddMultiplicationEquality(x_loss_abs[i], [x_loss[i], x_loss[i]])
The traceback is:
TypeError Traceback (most recent call last)
<ipython-input-42-2a043a8fef8b> in <module>
3 # Loss = orginal data - recovered data
4 for i in range(n):
----> 5 model.AddAbsEquality(x_loss_abs[i], x_loss[i])
~/anaconda3/envs/tensorgpu/lib/python3.7/site-packages/ortools/sat/python/cp_model.py in AddAbsEquality(self, target, var)
1217 ct = Constraint(self.__model.constraints)
1218 model_ct = self.__model.constraints[ct.Index()]
-> 1219 index = self.GetOrMakeIndex(var)
1220 model_ct.int_max.vars.extend([index, -index - 1])
1221 model_ct.int_max.target = self.GetOrMakeIndex(target)
~/anaconda3/envs/tensorgpu/lib/python3.7/site-packages/ortools/sat/python/cp_model.py in GetOrMakeIndex(self, arg)
1397 else:
1398 raise TypeError('NotSupported: model.GetOrMakeIndex(' + str(arg) +
-> 1399 ')')
1400
1401 def GetOrMakeBooleanIndex(self, arg):
TypeError: NotSupported: model.GetOrMakeIndex((-x_rec_%i + 10))
Could you please suggest how to minimize the sum of absolute values (or squares) of the recovery loss? Thanks.
AddAbsEquality requires its arguments to be variables, not expressions such as x_org[i] - x_rec[i]. So you have to create a temporary decision variable (here v) before calling it. The following seems to work:
# ...
x_loss_abs = [model.NewIntVar(0, 10000, 'x_loss_abs_%i' % i) for i in range(n)]
# ...
for i in range(n):
    # x_loss[i] = x_org[i] - x_rec[i]  # Original
    v = model.NewIntVar(-1000, 1000, "v")  # Temporary variable
    model.Add(v == x_org[i] - x_rec[i])
    model.AddAbsEquality(x_loss_abs[i], v)
# ....
model.Minimize(sum(x_loss_abs))
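The changed solve-and-print code is not shown in the answer; a minimal sketch that would produce output of the shape below (assuming the model above, and printing x_loss_abs as the loss) is:

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL:
    print('Optimal objective value: %i' % solver.ObjectiveValue())
    print('x_org:', x_org)
    print('x_rec:', [solver.Value(x_rec[i]) for i in range(n)])
    print('x_loss:', [solver.Value(x_loss_abs[i]) for i in range(n)])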
The solution is (I changed the printed output):
Optimal objective value: 0
x_org: [[10, 20, 30, 40, 50]]
x_rec: [10, 20, 30, 40, 50]
x_loss: [0, 0, 0, 0, 0]
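If you prefer a squared loss instead of the absolute loss, the same temporary-variable pattern works with AddMultiplicationEquality. This is only a sketch (x_loss_sq is a new, hypothetical variable name, not part of the original code); squaring the already non-negative x_loss_abs[i] keeps the domains simple and gives the same value as (x_org[i] - x_rec[i])^2:

# Squared recovery loss (sketch; x_loss_sq is a hypothetical new variable)
x_loss_sq = [model.NewIntVar(0, 10000 * 10000, 'x_loss_sq_%i' % i) for i in range(n)]
for i in range(n):
    # x_loss_sq[i] == x_loss_abs[i] * x_loss_abs[i] == (x_org[i] - x_rec[i])^2
    model.AddMultiplicationEquality(x_loss_sq[i], [x_loss_abs[i], x_loss_abs[i]])
model.Minimize(sum(x_loss_sq))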