How to set parameters for lightgbm when using customized objective function for multi-class classification?

I want to test a customized objective function for lightgbm in multi-class classification. I have specified the parameter "num_class=3". However, an error is thrown: "Number of classes must be 1 for non-multiclass training".

I am using Python 3.6 and lightgbm version 0.2.

# iris data
from sklearn import datasets
import lightgbm as lgb
import numpy as np

iris = datasets.load_iris()
X = iris['data']
y = iris['target']

# construct train-test
num_train = int(X.shape[0] / 3 * 2)
idx = np.random.permutation(X.shape[0])

x_train = X[idx[:num_train]]
x_test = X[idx[num_train:]]
y_train = y[idx[:num_train]]
y_test = y[idx[num_train:]]

# softmax function
def softmax(x):
    '''
    input x: an np.array of n_sample * n_class
    return : an np.array of n_sample * n_class (probabilities)
    '''
    x = np.where(x>100, 100, x)
    x = np.exp(x)
    return x / np.reshape(np.sum(x, 1), [x.shape[0], 1])

# objective function
def objective(y_true, y_pred):
    '''
    input: 
        y_true: np.array of size (n_sample,)
        y_pred: np.array of size (n_sample, n_class)
    '''
    y_pred = softmax(y_pred) 
    temp = np.zeros_like(y_pred)
    temp[range(y_pred.shape[0]), y_true] = 1   
    gradient = y_pred - temp
    hessian = y_pred * (1 - y_pred)  
    return [gradient, hessian]

# lightgbm model
model = lgb.LGBMClassifier(n_estimators=10000,
                           num_classes = 3,
                           objective = objective,
                           nthread=4)
model.fit(x_train, y_train, 
          eval_metric = 'multi_logloss',
          eval_set = [(x_test, y_test), (x_train, y_train)],
          eval_names = ['valid', 'train'], 
          early_stopping_rounds = 200, verbose = 100)

Let me answer my own question.

The arguments passed to the objective function should be:

y_true of size [n_sample, ]
y_pred of size [n_sample * n_class, ] instead of [n_sample, n_class]

More specifically, y_pred should look like this (grouped class by class):

y_pred = [first_class, first_class,..., second_class, second_class,..., third_class, third_class,...]

In addition, the gradient and hessian should be grouped in the same way; a short reshape sketch and the corrected objective follow.
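To make the class-grouped layout concrete, here is a minimal NumPy-only sketch (the numbers are made up for illustration): a Fortran-order reshape recovers the [n_sample, n_class] matrix from the flat array, and a Fortran-order ravel flattens it back.

import numpy as np

# flat predictions for 2 samples and 3 classes, grouped class by class:
# [s0_c0, s1_c0, s0_c1, s1_c1, s0_c2, s1_c2]
flat = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])

# order='F' fills column by column, so each column holds one class
mat = flat.reshape(2, 3, order='F')
# mat == [[0.1, 0.3, 0.5],
#         [0.2, 0.4, 0.6]]

# raveling with order='F' restores the original class-grouped layout
assert np.array_equal(mat.ravel(order='F'), flat)

The corrected objective then becomes: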

def objective(y_true, y_pred):
    '''
    input:
        y_true: np.array of size [n_sample,]
        y_pred: np.array of size [n_sample * n_class,]
    return:
        gradient and hessian, flattened in exactly the same form as y_pred
    '''
    # order='F' because y_pred is grouped class by class (see above);
    # len(y_true) instead of a hard-coded sample count keeps the
    # function usable on sets of any size, not just the training set
    y_pred = np.reshape(y_pred, [len(y_true), 3], order='F')
    y_pred = softmax(y_pred)

    # one-hot encode the true labels (cast to int for safe indexing)
    temp = np.zeros_like(y_pred)
    temp[range(y_pred.shape[0]), y_true.astype(int)] = 1

    # gradient of the softmax cross-entropy loss, and the diagonal
    # approximation of its hessian
    gradient = y_pred - temp
    hessian = y_pred * (1 - y_pred)

    # flatten back in the same class-grouped (Fortran) order
    return [gradient.ravel(order='F'), hessian.ravel(order='F')]
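
For completeness, here is how the corrected objective plugs into the model from the question. This is a sketch under the assumption (true for the sklearn wrapper in the versions I have used) that predict(..., raw_score=True) returns an [n_sample, n_class] array of raw scores, which a custom objective does not turn into probabilities for you; check how your version behaves.

model = lgb.LGBMClassifier(n_estimators=10000,
                           num_classes=3,
                           objective=objective,
                           nthread=4)
model.fit(x_train, y_train,
          eval_metric='multi_logloss',
          eval_set=[(x_test, y_test), (x_train, y_train)],
          eval_names=['valid', 'train'],
          early_stopping_rounds=200, verbose=100)

# with a custom objective the booster outputs raw scores, so apply
# the softmax from above manually to get class probabilities
raw_scores = model.predict(x_test, raw_score=True)  # [n_sample, n_class]
proba = softmax(raw_scores)
pred_labels = np.argmax(proba, axis=1)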