Got continuous is not supported error in RandomForestRegressor
I'm just trying to do a simple RandomForestRegressor example, but when I test the accuracy I get this error:
/Users/noppanit/anaconda/lib/python2.7/site-packages/sklearn/metrics/classification.pyc
in accuracy_score(y_true, y_pred, normalize, sample_weight)
177
178 # Compute accuracy for each possible representation
--> 179 y_type, y_true, y_pred = _check_targets(y_true, y_pred)
180 if y_type.startswith('multilabel'):
181 differing_labels = count_nonzero(y_true - y_pred, axis=1)
/Users/noppanit/anaconda/lib/python2.7/site-packages/sklearn/metrics/classification.pyc
in _check_targets(y_true, y_pred)
90 if (y_type not in ["binary", "multiclass", "multilabel-indicator",
91 "multilabel-sequences"]):
---> 92 raise ValueError("{0} is not supported".format(y_type))
93
94 if y_type in ["binary", "multiclass"]:
ValueError: continuous is not supported
Here is a sample of the data. I can't show the real data.
target, func_1, func_2, func_2, ... func_200
float, float, float, float, ... float
Here is my code.
import pandas as pd
import numpy as np
from sklearn.preprocessing import Imputer
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
from sklearn import tree
train = pd.read_csv('data.txt', sep='\t')
labels = train.target
train.drop('target', axis=1, inplace=True)
cat = ['cat']
train_cat = pd.get_dummies(train[cat])
train.drop(train[cat], axis=1, inplace=True)
train = np.hstack((train, train_cat))
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(train)
train = imp.transform(train)
x_train, x_test, y_train, y_test = train_test_split(train, labels.values, test_size = 0.2)
clf = RandomForestRegressor(n_estimators=10)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
accuracy_score(y_test, y_pred) # This is where I get the error.
That's because accuracy_score is for classification tasks only.
For regression you should use something different, for example:
clf.score(X_test, y_test)
where X_test are the samples and y_test are the corresponding ground truth values. It computes the predictions internally.
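If you want explicit regression metrics rather than the built-in R² from score, a minimal sketch (reusing the y_test and y_pred arrays from the question's code) could use the standard functions in sklearn.metrics:

from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# y_test and y_pred are assumed to come from the code in the question
print("R^2:", r2_score(y_test, y_pred))             # same quantity that clf.score reports
print("MAE:", mean_absolute_error(y_test, y_pred))  # mean absolute error
print("MSE:", mean_squared_error(y_test, y_pred))   # mean squared error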
Since you are performing a regression task, you should use the metric R-squared (coefficient of determination) instead of accuracy score (accuracy score is for classification problems).
R-squared can be computed by calling the score function provided by RandomForestRegressor, for example:
rfr.score(X_test, Y_test)
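For reference, the score method of RandomForestRegressor returns R² on the data you pass in, so it should agree with r2_score computed on explicit predictions. A minimal sketch, reusing the clf, x_test and y_test names from the question's code:

from sklearn.metrics import r2_score

r2_builtin = clf.score(x_test, y_test)               # R^2 computed internally by the regressor
r2_manual  = r2_score(y_test, clf.predict(x_test))   # the same value computed explicitly
print(r2_builtin, r2_manual)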
Try
tree_clf.score(x_train, y_train)
You also can't use a confusion matrix here, since that is a classification metric as well.
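To illustrate the point, a minimal sketch reusing clf, x_test, y_test and y_pred from the question's code: classification metrics such as confusion_matrix reject continuous targets, while the regressor's own score works:

from sklearn.metrics import confusion_matrix

try:
    confusion_matrix(y_test, y_pred)   # classification metric: fails on continuous targets
except ValueError as err:
    print(err)                         # same "continuous is not supported" error as above

print(clf.score(x_test, y_test))       # regression evaluation (R^2) works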