TypeError with VotingClassifier

I want to use VotingClassifier, but I'm running into a problem with cross-validation.

    from sklearn.model_selection import train_test_split, cross_validate
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from catboost import CatBoostClassifier

    x_train, x_validation, y_train, y_validation = train_test_split(x, y, test_size=.22, random_state=2)
    x_train = x_train.fillna(0)
    clf1 = CatBoostClassifier()
    clf2 = RandomForestClassifier()
    clf = VotingClassifier(estimators=[('cb', clf1), ('rf', clf2)])
    clf.fit(x_train.values, y_train)

But when I run cross-validation, I get an error...

    cross_validate(clf, x_train, y_train, scoring='accuracy', return_train_score=True, n_jobs=4)

TypeError: Cannot cast array data from dtype('float64') to dtype('int64') according to the rule 'safe'

(full error here)


You can download x_train and y_train here ↓

x_train
y_train

This error is caused by this line:

    np.bincount(x, weights=self._weights_not_none)

Here x is the array of predictions returned by the individual classifiers inside the VotingClassifier.

According to the documentation of np.bincount:

Count number of occurrences of each value in array of non-negative ints.

x : array_like, 1 dimension, nonnegative ints

This method requires the values in the array to be non-negative integers.
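You can reproduce the same TypeError with np.bincount alone; here is a minimal sketch with made-up prediction values and weights:

    import numpy as np

    weights = [0.5, 0.5, 0.5]

    # Integer predictions are fine
    np.bincount(np.array([0, 1, 1], dtype=np.int64), weights=weights)
    # -> array([0.5, 1. ])

    # Float predictions raise the same error reported above
    np.bincount(np.array([0., 1., 1.], dtype=np.float64), weights=weights)
    # TypeError: Cannot cast array data from dtype('float64') to dtype('int64')
    # according to the rule 'safe'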

Now, if you replace the CatBoostClassifier with any other scikit-learn classifier, your code will work fine, because all scikit-learn estimators return arrays of np.int64 from their predict().

But CatBoostClassifier returns np.float64 as its output, hence the error. It really should return int64 as well, since predict() is supposed to return the class, not a floating-point value, but I don't know why it returns floats.
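A quick way to see this for yourself is to compare the dtype of each estimator's predict() output; this is just a rough sketch on random toy data, not your dataset, and the float64 behaviour depends on your catboost version:

    import numpy as np
    from catboost import CatBoostClassifier
    from sklearn.ensemble import RandomForestClassifier

    X_toy = np.random.rand(50, 3)
    y_toy = np.random.randint(0, 2, size=50)

    rf_preds = RandomForestClassifier().fit(X_toy, y_toy).predict(X_toy)
    cb_preds = CatBoostClassifier(iterations=10, verbose=False).fit(X_toy, y_toy).predict(X_toy)

    print(rf_preds.dtype)  # int64
    print(cb_preds.dtype)  # float64 with the catboost version this question was asked against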

You can work around this by extending the CatBoostClassifier class and casting the predictions on the fly:

    import numpy as np
    from catboost import CatBoostClassifier

    class CatBoostClassifierInt(CatBoostClassifier):
        def predict(self, data, prediction_type='Class', ntree_start=0, ntree_end=0, thread_count=1, verbose=None):
            predictions = self._predict(data, prediction_type, ntree_start, ntree_end, thread_count, verbose)

            # This line is the only change I made: cast the float predictions to int64
            return np.asarray(predictions, dtype=np.int64).ravel()

    clf1 = CatBoostClassifierInt()
    clf2 = RandomForestClassifier()
    clf = VotingClassifier(estimators=[('cb', clf1), ('rf', clf2)])
    cross_validate(clf, x_train, y_train, scoring='accuracy', return_train_score=True)

Now you won't get that error anymore.

A more correct version would be the following. It handles all kinds of labels, keeps the inputs and outputs consistent, and can be used seamlessly within scikit-learn:

    import numpy as np
    from catboost import CatBoostClassifier
    from sklearn.preprocessing import LabelEncoder

    class CatBoostClassifierCorrected(CatBoostClassifier):
        def fit(self, X, y=None, cat_features=None, sample_weight=None, baseline=None, use_best_model=None,
                eval_set=None, verbose=None, logging_level=None, plot=False, column_description=None, verbose_eval=None):

            # Encode the labels as integers before fitting and remember the encoder
            self.le_ = LabelEncoder().fit(y)
            transformed_y = self.le_.transform(y)

            self._fit(X, transformed_y, cat_features, None, sample_weight, None, None, None, baseline, use_best_model,
                      eval_set, verbose, logging_level, plot, column_description, verbose_eval)
            return self

        def predict(self, data, prediction_type='Class', ntree_start=0, ntree_end=0, thread_count=1, verbose=None):
            predictions = self._predict(data, prediction_type, ntree_start, ntree_end, thread_count, verbose)

            # This line is the only change I made: map the integer predictions back to the original labels
            return self.le_.inverse_transform(predictions.astype(np.int64))

This will handle all the different kinds of labels.
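For example, here is a minimal sketch with hypothetical random data and string labels (assuming the subclass above works with your installed catboost version) showing that the encoder round-trips non-integer labels inside a VotingClassifier:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.model_selection import cross_validate

    X_toy = np.random.rand(100, 4)
    y_toy = np.random.choice(['cat', 'dog'], size=100)  # string labels

    clf1 = CatBoostClassifierCorrected(iterations=50, verbose=False)
    clf2 = RandomForestClassifier()
    clf = VotingClassifier(estimators=[('cb', clf1), ('rf', clf2)])

    cross_validate(clf, X_toy, y_toy, scoring='accuracy', return_train_score=True)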