Logistic Regression Python

I have been trying to apply logistic regression to a classification problem, but it is giving me very strange results. I already get decent results with gradient boosting and random forests, so I wanted to go back to the basics and see how well I could do. Can you help me pinpoint what I am doing wrong that causes this overfitting? You can get the data from https://www.kaggle.com/c/santander-customer-satisfaction/data

Here is my code:

import pandas as pd
import numpy as np

train = pd.read_csv("path")
test = pd.read_csv("path")
test["TARGET"] = 0  # dummy target so train and test can be concatenated
fullData = pd.concat([train, test], ignore_index=True)

# Drop constant columns (zero standard deviation)
remove1 = []
for col in fullData.columns:
    if fullData[col].std() == 0:
        remove1.append(col)

fullData.drop(remove1, axis=1, inplace=True)

# Drop columns whose values duplicate an earlier column
remove = []
cols = fullData.columns
for i in range(len(cols) - 1):
    v = fullData[cols[i]].values
    for j in range(i + 1, len(cols)):
        if np.array_equal(v, fullData[cols[j]].values):
            remove.append(cols[j])

fullData.drop(remove, axis=1, inplace=True)

# sklearn.cross_validation was deprecated; use sklearn.model_selection
from sklearn.model_selection import train_test_split
X_train, X_test = train_test_split(fullData, test_size=0.20, random_state=1729)
print(X_train.shape, X_test.shape)

y_train = X_train["TARGET"].values
X = X_train.drop(["TARGET", "ID"], axis=1, inplace=False)

# Select features by importance from an ExtraTreesClassifier
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier(random_state=1729)
selector = clf.fit(X, y_train)

from sklearn.feature_selection import SelectFromModel
fs = SelectFromModel(selector, prefit=True)
X_t = X_test.drop(["TARGET", "ID"], axis=1, inplace=False)
X_t = fs.transform(X_t)
X_tr = X_train.drop(["TARGET", "ID"], axis=1, inplace=False)
X_tr = fs.transform(X_tr)

# 10-fold cross-validation of an L2-regularized logistic regression
from sklearn.linear_model import LogisticRegression
log = LogisticRegression(penalty='l2', C=1, random_state=1)

from sklearn.model_selection import cross_val_score
scores = cross_val_score(log, X_tr, y_train, cv=10)

print(scores.mean())
log.fit(X_tr, y_train)
predictions = log.predict(X_t)
predictions = predictions.astype(int)
print(predictions.mean())

You are not tuning the C parameter (well, technically you set it, but to the default value), and that is one of the usual suspects for overfitting. You could look at GridSearchCV and try several values of C (say, from 10^-5 to 10^5) to see whether that alleviates your problem. Changing the penalty to 'l1' may also help.
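
A minimal sketch of that grid search, reusing X_tr and y_train from your code above. The exact C grid, the liblinear solver, and the roc_auc scoring are illustrative assumptions, not the only reasonable choices:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Search C (inverse regularization strength) on a log scale from 1e-5 to 1e5,
# trying both l1 and l2 penalties; the liblinear solver supports both.
param_grid = {
    "C": np.logspace(-5, 5, 11),
    "penalty": ["l1", "l2"],
}
grid = GridSearchCV(
    LogisticRegression(solver="liblinear", random_state=1),
    param_grid,
    scoring="roc_auc",  # assumed metric; the competition was scored on AUC
    cv=10,
)
grid.fit(X_tr, y_train)
print(grid.best_params_, grid.best_score_)

Smaller C means stronger regularization, so if overfitting is the issue the best score will typically come from the lower end of that grid.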

Also, that competition came with a couple of extra challenges: it is an imbalanced dataset, and the distribution differs somewhat between the training set and the private LB (leaderboard). All of this can work against you, especially when using a simple algorithm like LR.
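
The distribution shift is hard to address in code, but the imbalance has a cheap first-line mitigation: let the model reweight the classes. A minimal sketch, assuming the same X_tr and y_train and scikit-learn's class_weight option:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Reweight classes inversely to their frequency so the minority class
# is not drowned out by the majority class during fitting.
log_bal = LogisticRegression(
    penalty="l2",
    C=1,
    class_weight="balanced",
    random_state=1,
)

# Score with ROC AUC rather than accuracy: on an imbalanced set,
# always predicting the majority class already yields high accuracy.
scores = cross_val_score(log_bal, X_tr, y_train, cv=10, scoring="roc_auc")
print(scores.mean())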