How are Elastic Net penalties applied to Logistic Regression's Maximum Likelihood cost function?

I understand how the Ridge / Lasso / Elastic Net regression penalties are applied to linear regression's cost function, but I'm trying to figure out how they are applied to logistic regression's maximum likelihood cost function.

I've tried Googling around and it looks like this can be done (I believe scikit-learn's logistic regression model accepts L1 and L2 penalty parameters, and I've seen YouTube videos saying penalties can be applied to logistic models too). I've found how the penalties are added to the residual-sum-of-squares cost function, but I'm curious how they are applied alongside the maximum likelihood cost function. Is it the maximum likelihood minus the penalties?

I posted this on Stats Stack Exchange and got an answer (link). I'll post the answer from ofer-a here to help anyone searching Stack Overflow for a similar answer.

The elastic net terms are added to the maximum likelihood cost function, i.e. the final cost function is:

$$\min_{\beta}\; -\sum_{i=1}^{n}\Big[y_i \log p(x_i;\beta) + (1-y_i)\log\big(1 - p(x_i;\beta)\big)\Big] \;+\; \lambda_1 \lVert \beta \rVert_1 \;+\; \lambda_2 \lVert \beta \rVert_2^2$$

The first term is the (negative log) likelihood, the second term is the L1-norm part of the elastic net, and the third term is the L2-norm part. In other words, the model is trying to minimize the negative log likelihood while also keeping the weights small. So it is not the maximum likelihood minus the penalties directly; equivalently, you can think of it as maximizing the log likelihood minus the penalty terms.
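To make the cost function concrete, here is a minimal NumPy sketch of the penalized negative log likelihood described above. The function name, the tiny dataset, and the choice of λ values are mine, purely for illustration:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def elastic_net_logistic_cost(beta, X, y, lam1, lam2):
    """Negative log likelihood of logistic regression plus elastic net terms.

    Returns: -sum[y*log(p) + (1-y)*log(1-p)] + lam1*||beta||_1 + lam2*||beta||_2^2
    """
    p = sigmoid(X @ beta)
    eps = 1e-12  # guard against log(0)
    nll = -np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    l1_penalty = lam1 * np.sum(np.abs(beta))   # Lasso part
    l2_penalty = lam2 * np.sum(beta ** 2)      # Ridge part
    return nll + l1_penalty + l2_penalty

# Toy data: first column is an intercept term.
X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5], [1.0, -2.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

# At beta = 0 every predicted probability is 0.5 and the penalties vanish,
# so the cost is n * log(2).
print(elastic_net_logistic_cost(np.zeros(2), X, y, lam1=0.5, lam2=0.5))
```

In scikit-learn, the same idea is exposed through `LogisticRegression(penalty='elasticnet', solver='saga', l1_ratio=0.5)`: instead of two separate λ values, it uses an overall inverse strength `C` and a mixing parameter `l1_ratio` between the L1 and L2 parts, and only the `'saga'` solver supports the elastic net penalty.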