scikit-learn regression with multiple continuous targets

I want to perform regression on a dataset where the input has multiple features and the output has multiple continuous targets.

I've been looking through the sklearn docs, but the only multi-target examples I've found either 1) have a discrete set of target labels or 2) use a heuristic algorithm like KNN rather than an optimization-based method like regression. Adding regularization would be nice too, but I can't even find a way to do simple least squares. This is a very simple, smooth optimization problem, so I'd be shocked if it isn't already implemented somewhere. I'd appreciate it if someone could point me in the right direction!

You can find what you are looking for here:

https://machinelearningmastery.com/multi-output-regression-models-with-python/
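For the regularized least-squares case you describe, scikit-learn's linear models accept a 2-D target array directly, so no special multi-output wrapper is needed. A minimal sketch (the synthetic data and the alpha value are placeholders, not from your setup):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic data: 10 input features, 4 continuous targets (placeholder shapes).
rng = np.random.RandomState(0)
X = rng.randn(200, 10)
W = rng.randn(10, 4)
y = X @ W + 0.1 * rng.randn(200, 4)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ridge (L2-regularized least squares) handles y of shape (n_samples, n_targets) natively.
reg = Ridge(alpha=1.0).fit(X_train, y_train)
print(reg.predict(X_test).shape)   # (n_test_samples, 4)
print(reg.score(X_test, y_test))   # multi-output R^2 (uniform average over targets)

LinearRegression works the same way if you want plain unregularized least squares.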

But if you have enough data, you can also do this with a simple neural network whose output layer has no activation (so the outputs stay continuous), for example in Keras:

from keras.layers import Dense, Input
from keras.models import Model
from keras.regularizers import l2

num_inputs = 10
num_outputs = 4

# A single Dense layer with no activation is just a linear map,
# and the l2 kernel regularizer adds ridge-style weight decay.
inp = Input((num_inputs,))
out = Dense(num_outputs, kernel_regularizer=l2(0.01))(inp)

model = Model(inp, out)
# 'mse' is both the loss and the reported metric; accuracy is not meaningful for continuous targets.
model.compile(optimizer='sgd', loss='mse', metrics=['mse'])

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_9 (InputLayer)         (None, 10)                0         
_________________________________________________________________
dense_7 (Dense)              (None, 4)                 44        
=================================================================
Total params: 44
Trainable params: 44
Non-trainable params: 0
_________________________________________________________________
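To actually fit the model, something like the following works; the random arrays and training settings here are placeholders just to show the expected shapes:

import numpy as np

# Placeholder training data with the shapes the model expects.
X = np.random.randn(500, num_inputs)
y = np.random.randn(500, num_outputs)

model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)
preds = model.predict(X[:5])   # shape (5, num_outputs)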