Random Forest gets 98% accuracy in training and testing but always predicts the same class otherwise

I've spent 30 hours debugging this single issue and it makes absolutely no sense; hopefully one of you can give me a different perspective.

The problem is that I train a random forest on my training dataframe and get very good accuracy of 98%-99%, but when I load new samples to predict on, the model always guesses the same class.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

#  Shuffle the data-frame's records. The labels are still attached
df = df.sample(frac=1).reset_index(drop=True)

#  Extract the labels and then remove them from the data
y = list(df['label'])
X = df.drop(['label'], axis='columns')

#  Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE)

#  Construct the model
model = RandomForestClassifier(n_estimators=N_ESTIMATORS, max_depth=MAX_DEPTH,
                               random_state=RANDOM_STATE, oob_score=True)

#  Calculate the training accuracy
in_sample_accuracy = model.fit(X_train, y_train).score(X_train, y_train)
#  Calculate the testing accuracy
test_accuracy = model.score(X_test, y_test)

print()
print('In Sample Accuracy: {:.2f}%'.format(in_sample_accuracy * 100))
print('OOB Score: {:.2f}%'.format(model.oob_score_ * 100))
print('Test Accuracy: {:.2f}%'.format(test_accuracy * 100))

I process the new data the same way, but when I predict on X_test or X_train I get the usual 98%, whereas on my new data it always guesses the same class.

    #  The json file is not in the correct format, this function normalizes it
    normalized_json = json_normalizer(json_file, "", training=False)
    #  Turn the json into a list of dictionaries which contain the features
    features_dict = create_dict(normalized_json, label=None)

    #  Convert the dictionaries into pandas dataframes
    df = pd.DataFrame.from_records(features_dict)
    print('Total amount of email samples: ', len(df))
    print()

    df = df.fillna(-1)
    #  One hot encodes string values
    df = one_hot_encode(df, noOverride=True)
    if 'label' in df.columns:
        df = df.drop(['label'], axis='columns')
    print(list(model.predict(df))[:100])
    print(list(model.predict(X_train))[:100])

Above is my test scenario; in the last two lines you can see me predict on X_train, the data used to train the model, and on df, the out-of-sample data that it always guesses as class 0.
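
A quick sanity check for this kind of behavior is to compare the label distribution the forest was trained on with the distribution of its predictions on the new data, and to make sure the one-hot-encoded columns of the new dataframe line up with the training features. This is only a minimal sketch, assuming the model, X_train, y_train and the new df from the snippets above:

import pandas as pd

#  Class proportions the forest was trained on
print(pd.Series(y_train).value_counts(normalize=True))

#  Class proportions of the predictions on the out-of-sample data
print(pd.Series(model.predict(df)).value_counts(normalize=True))

#  One-hot encoding new data can yield a different column set than the
#  training data; realign it (filling missing columns with the same -1
#  sentinel used above) before predicting
df = df.reindex(columns=X_train.columns, fill_value=-1)
print(pd.Series(model.predict(df)).value_counts(normalize=True))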

Some useful information:

Any ideas would help, and if you need more information just let me know; my brain is fried right now and this is all I could think of.

Solved: the problem was an imbalanced dataset. I also realized that changing the depth gave me different results.
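
If the imbalance is the culprit, a stratified split plus the forest's built-in class weighting is worth a try. This is only a sketch, assuming the X, y and constants from the training snippet above:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

#  stratify=y keeps the original class proportions in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=TEST_SIZE, stratify=y, random_state=RANDOM_STATE)

#  class_weight='balanced' reweights samples inversely to class frequency
#  ('balanced_subsample' does the same per bootstrap sample)
model = RandomForestClassifier(n_estimators=N_ESTIMATORS, max_depth=MAX_DEPTH,
                               random_state=RANDOM_STATE, oob_score=True,
                               class_weight='balanced')
model.fit(X_train, y_train)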

For example, 10 trees with a depth of 3 -> seems to work fine; 10 trees with a depth of 6 -> back to only guessing the same class.
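
Plain accuracy hides that collapse on an imbalanced set; balanced accuracy (the average recall over the classes) makes the depth effect visible, since always predicting the majority class scores poorly. A rough sketch of that comparison, assuming the train/test split from above:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

for depth in (3, 6):
    clf = RandomForestClassifier(n_estimators=10, max_depth=depth,
                                 random_state=RANDOM_STATE)
    clf.fit(X_train, y_train)
    #  Average recall per class: a majority-class-only model scores ~1/n_classes
    print(depth, balanced_accuracy_score(y_test, clf.predict(X_test)))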