Matlab to Julia Optimization: Function in JuMP @setNLObjective
I am trying to rewrite a Matlab fmincon optimization function in Julia.
Here is the Matlab code:
function [x,fval] = example3()
    x0 = [0; 0; 0; 0; 0; 0; 0; 0];
    A = [];
    b = [];
    Ae = [1000 1000 1000 1000 -1000 -1000 -1000 -1000];
    be = [100];
    lb = [0; 0; 0; 0; 0; 0; 0; 0];
    ub = [1; 1; 1; 1; 1; 1; 1; 1];
    noncon = [];                          % no nonlinear constraints
    options = optimset('fmincon');
    options.Algorithm = 'interior-point';
    % pass noncon (an empty matrix) rather than @noncon, which would
    % require a function named noncon to exist
    [x,fval] = fmincon(@objfcn,x0,A,b,Ae,be,lb,ub,noncon,options);
end

function f = objfcn(x)
    % user inputs
    Cr = [ 0.0064  0.00408 0.00192 0;
           0.00408 0.0289  0.0204  0.0119;
           0.00192 0.0204  0.0576  0.0336;
           0       0.0119  0.0336  0.1225 ];
    w0 = [ 0.3; 0.3; 0.2; 0.1 ];
    Er = [0.05; 0.1; 0.12; 0.18];
    % calculate objective function
    w = w0 + x(1:4) - x(5:8);
    Er_p = w'*Er;
    Sr_p = sqrt(w'*Cr*w);
    % f = objective function
    f = -Er_p/Sr_p;
end
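In words, the objective is the negative risk-adjusted return f(x) = -(w'*Er) / sqrt(w'*Cr*w), with w = w0 + x(1:4) - x(5:8), minimized over 0 <= x <= 1 subject to a single linear equality constraint.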
Here is my Julia code:
using JuMP
using Ipopt

m = Model(solver=IpoptSolver())

# INPUT DATA
w0 = [ 0.3; 0.3; 0.2; 0.1 ]
Er = [0.05; 0.1; 0.12; 0.18]
Cr = [ 0.0064  0.00408 0.00192 0;
       0.00408 0.0289  0.0204  0.0119;
       0.00192 0.0204  0.0576  0.0336;
       0       0.0119  0.0336  0.1225 ]

# VARIABLES
@defVar(m, 0 <= x[i=1:8] <= 1, start = 0.0)

# OBJECTIVE
# these vector expressions are what JuMP rejects (see the quote below)
@defNLExpr(w, w0 + x[1:4] - x[5:8])
@defNLExpr(Er_p, w'*Er)
@defNLExpr(Sr_p, sqrt(w'*Cr*w))
@defNLExpr(f, -Er_p/Sr_p)
@setNLObjective(m, Min, f)

# CONSTRAINTS
@addConstraint(m, 1000*x[1] + 1000*x[2] + 1000*x[3] + 1000*x[4] -
                  1000*x[5] - 1000*x[6] - 1000*x[7] - 1000*x[8] == 100)

# SOLVE
status = solve(m)

# DISPLAY RESULTS
println("x = ", round(getValue(x),4))
println("f = ", round(getObjectiveValue(m),4))
The Julia optimization works when I write the objective function out explicitly inside @setNLObjective, but that is not suitable here, because the user's inputs can change and produce a different objective function, as you can see from the way the objective is built.
The problem seems to be a restriction in JuMP on what can be entered as the @setNLObjective argument:
All expressions must be simple scalar operations. You cannot use dot, matrix-vector products, vector slices, etc. Translate vector operations into explicit sum{} operations.
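For example, the vector expressions in the model above would have to be unrolled into scalar sums. A rough, untested sketch in the same old JuMP macro syntax (Var_p is an illustrative name for the pre-sqrt variance term):

@defNLExpr(w[i=1:4], w0[i] + x[i] - x[i+4])
@defNLExpr(Er_p, sum{w[i]*Er[i], i=1:4})
@defNLExpr(Var_p, sum{w[i]*Cr[i,j]*w[j], i=1:4, j=1:4})
@setNLObjective(m, Min, -Er_p/sqrt(Var_p))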
Is there a way around this?
Or is there any other package in Julia that could solve this, bearing in mind that I will not have a Jacobian or Hessian?
Many thanks.
Here is a working example of the Matlab code using Julia and the NLopt optimization package:
using NLopt

function objective_function(x::Vector{Float64}, grad::Vector{Float64})
    # grad is unused: COBYLA is a derivative-free algorithm
    w0 = [ 0.3; 0.3; 0.2; 0.1 ]
    Er = [0.05; 0.1; 0.12; 0.18]
    Cr = [ 0.0064  0.00408 0.00192 0;
           0.00408 0.0289  0.0204  0.0119;
           0.00192 0.0204  0.0576  0.0336;
           0       0.0119  0.0336  0.1225 ]

    w = w0 + x[1:4] - x[5:8]
    Er_p = w' * Er
    Sr_p = sqrt(w' * Cr * w)
    f = -Er_p/Sr_p

    # w'*Er yields a 1-element array here, so extract the scalar
    return f[1]
end

function constraint_function(x::Vector, grad::Vector)
    # equality constraint in NLopt form: this value must equal zero
    return 1000*x[1] + 1000*x[2] + 1000*x[3] + 1000*x[4] -
           1000*x[5] - 1000*x[6] - 1000*x[7] - 1000*x[8] - 100
end

opt1 = Opt(:LN_COBYLA, 8)          # derivative-free local optimizer
lower_bounds!(opt1, zeros(8))
upper_bounds!(opt1, ones(8))
#ftol_rel!(opt1, 0.5)              # optional stopping tolerances
#ftol_abs!(opt1, 0.5)
min_objective!(opt1, objective_function)
equality_constraint!(opt1, constraint_function)
(fObjOpt, xOpt, flag) = optimize(opt1, zeros(8))
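Because NLopt just takes an ordinary Julia function as the objective, the user inputs can also be supplied from outside rather than hard-coded, e.g. via a closure. A minimal sketch (make_objective is an illustrative helper, not part of NLopt):

# build the objective from user-supplied data, so changing w0/Er/Cr
# does not require editing the function body (illustrative sketch)
function make_objective(w0, Er, Cr)
    function obj(x::Vector, grad::Vector)
        # grad is ignored: intended for derivative-free algorithms
        w = w0 + x[1:4] - x[5:8]
        Er_p = (w' * Er)[1]            # extract scalars from the
        Sr_p = sqrt((w' * Cr * w)[1])  # 1-element products
        return -Er_p / Sr_p
    end
    return obj
end

min_objective!(opt1, make_objective(w0, Er, Cr))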