TPOT: classification fails on multi-class data
I cannot get TPOT (v. 0.9.2, Python 2.7) to work with multi-class data (although I could not find anything in the TPOT documentation saying it does only binary classification).
An example is provided below. It runs up to 9% and then dies with the error:
RuntimeError: There was an error in the TPOT optimization process.
This could be because the data was not formatted properly, or because
data for a regression problem was provided to the TPOTClassifier
object. Please make sure you passed the data to TPOT correctly.
However, changing n_classes to 2 makes it run without problems:
from sklearn.metrics import f1_score, make_scorer
from sklearn.datasets import make_classification
from tpot import TPOTClassifier
scorer = make_scorer(f1_score)
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=20, n_redundant=10,
                           n_classes=3, random_state=42)
tpot = TPOTClassifier(generations=10, population_size=20, verbosity=20, scoring=scorer)
tpot.fit(X, y)
Indeed, TPOT is also supposed to work with multi-class data - the example in the docs is for the MNIST dataset (10 classes).
The error is related to f1_score; keeping your code with n_classes=3 and asking simply for
tpot = TPOTClassifier(generations=10, population_size=20, verbosity=2)
(i.e. with the default value scoring='accuracy') works fine:
Warning: xgboost.XGBClassifier is not available and will not be used by TPOT.
Generation 1 - Current best internal CV score: 0.7447422496202984
Generation 2 - Current best internal CV score: 0.7447422496202984
Generation 3 - Current best internal CV score: 0.7454927186634503
Generation 4 - Current best internal CV score: 0.7454927186634503
Generation 5 - Current best internal CV score: 0.7706334316090413
Generation 6 - Current best internal CV score: 0.7706334316090413
Generation 7 - Current best internal CV score: 0.7706334316090413
Generation 8 - Current best internal CV score: 0.7706334316090413
Generation 9 - Current best internal CV score: 0.7757616367372464
Generation 10 - Current best internal CV score: 0.7808898418654516
Best pipeline:
LogisticRegression(KNeighborsClassifier(DecisionTreeClassifier(input_matrix, criterion=entropy, max_depth=3, min_samples_leaf=15, min_samples_split=12), n_neighbors=6, p=2, weights=uniform), C=0.01, dual=False, penalty=l2)
TPOTClassifier(config_dict={'sklearn.linear_model.LogisticRegression': {'penalty': ['l1', 'l2'], 'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0], 'dual': [True, False]}, 'sklearn.decomposition.PCA': {'iterated_power': range(1, 11), 'svd_solver': ['randomized']}, 'sklearn.feature_selection.Se...ocessing.PolynomialFeatures': {'degree': [2], 'interaction_only': [False], 'include_bias': [False]}},
crossover_rate=0.1, cv=5, disable_update_check=False,
early_stop=None, generations=10, max_eval_time_mins=5,
max_time_mins=None, memory=None, mutation_rate=0.9, n_jobs=1,
offspring_size=20, periodic_checkpoint_folder=None,
population_size=20, random_state=None, scoring=None, subsample=1.0,
verbosity=2, warm_start=False)
Asking for the F1 score with the usage suggested in the docs, i.e.:
tpot = TPOTClassifier(generations=10, population_size=20, verbosity=2, scoring='f1')
again produces the error you report, arguably because the default argument in f1_score is average='binary', which indeed does not make sense for multi-class problems, and the plain f1 scorer is for binary problems only (docs).
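The averaging issue can be reproduced directly with scikit-learn, independently of TPOT. A minimal sketch (toy labels chosen purely for illustration):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0]  # three classes
y_pred = [0, 2, 1, 0]

# The default average='binary' raises a ValueError for multi-class targets
try:
    f1_score(y_true, y_pred)
except ValueError as e:
    print(e)  # "Target is multiclass but average='binary'..."

# Specifying an explicit averaging strategy works
print(f1_score(y_true, y_pred, average='macro'))
```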
Explicitly asking in scoring for one of the other variants of the F1 score, e.g. f1_macro, f1_micro, or f1_weighted, works fine (not shown).
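Alternatively, a multi-class-safe scorer can be built with make_scorer by binding an explicit averaging strategy, and then passed to TPOTClassifier's scoring argument. A sketch of the idea, verified here with a plain scikit-learn estimator rather than TPOT itself:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import cross_val_score

# Bind an explicit averaging strategy into the scorer,
# instead of relying on the default average='binary'
scorer = make_scorer(f1_score, average='macro')

X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=20, n_redundant=10,
                           n_classes=3, random_state=42)

# Works on multi-class data; the same scorer object could be passed
# as scoring=scorer to TPOTClassifier
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         scoring=scorer, cv=5)
print(scores.mean())
```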