Optimize Variable From A Function In Python

I'm used to solving this kind of problem in Excel, but I'm now giving Python a try.

Basically I have two arrays: one holds constants, and the other holds values produced by a user-defined function.

Here's the function, simple enough:

import scipy.stats as sp

def calculate_probability(spread, std_dev):
    return sp.norm.sf(0.5, spread, std_dev)

I have two arrays of data: one holds the entries that get run through the calculate_probability function (these are the spreads), and the other is a set of values called expected_probabilities.

spreads = [10.5, 9.5, 10, 8.5]

expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]
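Since sp.norm.sf accepts array arguments, calculate_probability can evaluate every spread in one call. A quick sketch to confirm the vectorized behavior, reusing the arrays above:

```python
import numpy as np
import scipy.stats as sp

def calculate_probability(spread, std_dev):
    # survival function: P(X > 0.5) for X ~ Normal(loc=spread, scale=std_dev)
    return sp.norm.sf(0.5, spread, std_dev)

spreads = [10.5, 9.5, 10, 8.5]
probs = calculate_probability(np.array(spreads), 12.0)
print(probs.shape)  # one probability per spread
```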

The function below is the one I want to optimize:

import numpy as np
def calculate_mse(std_dev):
    spread_inputs = np.array(spreads)
    model_probabilities = calculate_probability(spread_inputs,std_dev)
    subtracted_vector = np.subtract(model_probabilities,expected_probabilities)
    vector_powered = np.power(subtracted_vector,2)
    mse_sum = np.sum(vector_powered)
    return mse_sum/len(spreads)

I want to find the value of std_dev that makes calculate_mse return a value as close to zero as possible. This is very easy in Excel with Solver, but I'm not sure how to do it in Python. What's the best approach?

Edit: I've changed my calculate_mse function so it only takes a single standard deviation as the parameter to optimize. I've tried returning Andrew's answer in API form using Flask, but I've run into some problems:

class Minimize(Resource):

    std_dev_guess = 12.0  # might have a better guess than zeros
    result = minimize(calculate_mse, std_dev_guess)

    def get(self):
        return {'data': result},200

api.add_resource(Minimize,'/minimize')

Here's the error:

NameError: name 'result' is not defined

I'm guessing something is wrong with the input?
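For what it's worth, the NameError isn't about the input: `result` is bound as a class attribute at class-definition time, and class attributes aren't visible as bare names inside method bodies; they have to be reached through `self` or the class. A minimal sketch of the scoping rule (Demo is a hypothetical stand-in, not flask_restful code):

```python
class Demo:
    result = 42  # class attribute, bound when the class body runs

    def get_broken(self):
        try:
            return result          # NameError: not local, not global
        except NameError:
            return None

    def get_fixed(self):
        return self.result         # looked up on the instance, then the class

d = Demo()
print(d.get_broken(), d.get_fixed())
```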

I'd suggest scipy's optimization library. You have several options there; the simplest one given your current setup is the minimize function. minimize itself supports a large number of methods, from the Nelder-Mead simplex to BFGS (the default for unconstrained problems) and COBYLA. https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html

from scipy.optimize import minimize

n_params = 4  # based on your code so far
spreads_guess = np.zeros(n_params)  # might have a better guess than zeros
result = minimize(calculate_mse, spreads_guess)

Give it a try, and if you have further questions I can edit the answer and elaborate as needed.
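Note that after the edit above, calculate_mse takes a single std_dev, so the initial guess should be a scalar rather than four zeros. For a one-variable problem, scipy's minimize_scalar is a natural fit. A sketch reusing the question's data (the bounds are an assumption, chosen only to keep std_dev positive):

```python
import numpy as np
import scipy.stats as sp
from scipy.optimize import minimize_scalar

spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]

def calculate_mse(std_dev):
    # mean squared error between model and expected probabilities
    model = sp.norm.sf(0.5, np.array(spreads), std_dev)
    return np.mean((model - np.array(expected_probabilities)) ** 2)

# bounded search over a plausible range for the standard deviation
res = minimize_scalar(calculate_mse, bounds=(1.0, 50.0), method='bounded')
print(res.x, res.fun)
```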

Here are just a few suggestions for cleaning up the code.

class Minimize(Resource):

    def _calculate_probability(self, spread, std_dev):
        return sp.norm.sf(0.5, spread, scale=std_dev)
  
    def _calculate_mse(self, std_dev):
        spread_inputs = np.array(self.spreads)
        model_probabilities = self._calculate_probability(spread_inputs, std_dev)
        mse = np.sum((model_probabilities - self.expected_probabilities)**2) / len(spread_inputs)
        print(mse)
        return mse

    def __init__(self, expected_probabilities, spreads, std_dev_guess):
        self.std_dev_guess = std_dev_guess
        self.spreads = spreads
        self.expected_probabilities = expected_probabilities
        self.result = None

    def solve(self):
        self.result = minimize(self._calculate_mse, self.std_dev_guess, method='BFGS')

    def get(self):
        return {'data': self.result}, 200

# run something like
spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]
minimizer = Minimize(expected_probabilities, spreads, 10.)
print(minimizer.get())  # returns none since it hasn't been run yet, up to you how to handle this
minimizer.solve()
print(minimizer.get())
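One caveat with get() above: minimize returns a scipy OptimizeResult, which isn't JSON-serializable, so returning it directly from a flask_restful endpoint will fail. Extracting plain Python scalars first avoids that; a sketch (result_payload is a hypothetical helper, not part of the original code):

```python
import json
import numpy as np
import scipy.stats as sp
from scipy.optimize import minimize

spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]

def calculate_mse(std_dev):
    model = sp.norm.sf(0.5, np.array(spreads), std_dev)
    return float(np.mean((model - np.array(expected_probabilities)) ** 2))

def result_payload(result):
    # keep only JSON-friendly scalars from the OptimizeResult
    return {'std_dev': float(result.x[0]),
            'mse': float(result.fun),
            'converged': bool(result.success)}

payload = result_payload(minimize(calculate_mse, 10.0))
print(json.dumps(payload))  # serializes cleanly
```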