Combining random forest models in scikit learn
I have two RandomForestClassifier models, and I would like to combine them into one meta-model. They were both trained on similar, but different, data. How can I do this?
rf1 #this is my first fitted RandomForestClassifier object, with 250 trees
rf2 #this is my second fitted RandomForestClassifier object, also with 250 trees
I would like to create big_rf with all the trees combined into one 500-tree model.
I believe this can be done by modifying the estimators_ and n_estimators attributes of the RandomForestClassifier object. Each tree in the forest is stored as a DecisionTreeClassifier object, and the list of these trees is stored in the estimators_ attribute. To avoid any inconsistency, it also makes sense to update n_estimators to match the actual number of estimators.
The advantage of this approach is that you can build a bunch of small forests in parallel across multiple machines and then combine them.
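The map/reduce pattern described here can be sketched with the standard library alone; `train_forest` is a hypothetical stand-in for fitting one small forest on a worker, so the sketch runs without scikit-learn:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def train_forest(seed):
    # Hypothetical stand-in: in practice this would fit a small
    # RandomForestClassifier and return it. Here each "forest" is
    # just a list of tree labels so the sketch is self-contained.
    return ["tree-%d-%d" % (seed, i) for i in range(5)]

def combine(forest_a, forest_b):
    # Pairwise merge: concatenate the tree lists, the same way
    # the combine_rfs function below concatenates estimators_.
    return forest_a + forest_b

if __name__ == "__main__":
    # Train the small forests in parallel, then fold them together.
    with ProcessPoolExecutor() as pool:
        small_forests = list(pool.map(train_forest, range(10)))
    big_forest = reduce(combine, small_forests)
    print(len(big_forest))  # 10 forests x 5 trees = 50
```

In the real version, each worker would also need the training data, and only one machine needs to hold all 500 trees at the end.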
Here is an example using the iris dataset:
from functools import reduce

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris

def generate_rf(X_train, y_train, X_test, y_test):
    rf = RandomForestClassifier(n_estimators=5, min_samples_leaf=3)
    rf.fit(X_train, y_train)
    print("rf score", rf.score(X_test, y_test))
    return rf

def combine_rfs(rf_a, rf_b):
    # Graft rf_b's trees onto rf_a and keep n_estimators consistent
    rf_a.estimators_ += rf_b.estimators_
    rf_a.n_estimators = len(rf_a.estimators_)
    return rf_a

iris = load_iris()
X, y = iris.data[:, [0, 1, 2]], iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.33)
# in the line below, we create 10 random forest classifier models
rfs = [generate_rf(X_train, y_train, X_test, y_test) for i in range(10)]
# in this step below, we combine the list of random forest models into one giant model
rf_combined = reduce(combine_rfs, rfs)
# the combined model scores better than *most* of the component models
print("rf combined score", rf_combined.score(X_test, y_test))
In addition to @mgoldwasser's solution, an alternative is to use warm_start when training the forest. In scikit-learn 0.16-dev, you can now do the following:
# First build 100 trees on X1, y1
clf = RandomForestClassifier(n_estimators=100, warm_start=True)
clf.fit(X1, y1)
# Build 100 additional trees on X2, y2
clf.set_params(n_estimators=200)
clf.fit(X2, y2)
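The warm-start semantics (keep the trees already fit, only grow the ensemble up to the new n_estimators) can be sketched without scikit-learn; GrowableEnsemble is a hypothetical toy, not the sklearn API:

```python
class GrowableEnsemble:
    """Toy model of warm_start: fit() only adds the missing trees."""

    def __init__(self, n_estimators):
        self.n_estimators = n_estimators
        self.estimators_ = []

    def set_params(self, n_estimators):
        self.n_estimators = n_estimators

    def fit(self, data_tag):
        # Train only the trees we are short of, on the new data;
        # existing estimators are left untouched.
        while len(self.estimators_) < self.n_estimators:
            self.estimators_.append((data_tag, len(self.estimators_)))
        return self

clf = GrowableEnsemble(n_estimators=100).fit("X1")
clf.set_params(n_estimators=200)
clf.fit("X2")
# The first 100 estimators were trained on X1, the next 100 on X2.
print(len(clf.estimators_))  # 200
```

This is why the second fit call above only adds 100 trees trained on X2 rather than refitting all 200.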