LightGBM fit throws "ValueError: Circular reference detected" with categorical feature from pd.cut

I have been using LightGBM models very happily, since I have large datasets with tens of features and millions of rows, including many categorical columns. I really like that LightGBM can take a pandas DataFrame whose categorical features are defined simply with astype('category'), without any one-hot encoding. I also have some float columns that I am trying to convert into categorical bins, to speed up convergence and to force the boundaries of the decision points. The problem is that trying to bin the float columns with pd.cut makes the fit method fail with ValueError: Circular reference detected.

There is a similar question here that does mention the JSON encoder in the traceback, but I don't have the DateTime columns that the answer there points to. My guess is that LightGBM may not support categories produced by pd.cut, but I couldn't find anything about this in the documentation.

No big dataset is needed to reproduce the problem; here is a toy example in which I build a dataset with 100 rows and 10 columns. 5 columns are integers that I convert to categorical with astype, and 5 columns are floats. If the floats stay floats everything works fine; converting one or more float columns to categorical with pd.cut makes the fit function throw the error.

import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split

rows = 100
fcols = 5
ccols = 5
# Let's define some ascii readable names for convenience
fnames = ['Float_'+chr(97+n) for n in range(fcols)]
cnames = ['Cat_'+chr(97+n) for n in range(ccols)]

# The dataset is built by concatenation of the float and the int blocks
dff = pd.DataFrame(np.random.rand(rows,fcols),columns=fnames)
dfc = pd.DataFrame(np.random.randint(0,20,(rows,ccols)),columns=cnames)
df = pd.concat([dfc,dff],axis=1)
# Target column with random output
df['Target'] = (np.random.rand(rows)>0.5).astype(int)

# Conversion into categorical
df[cnames] = df[cnames].astype('category')
df['Float_a'] = pd.cut(x=df['Float_a'],bins=10)

# Dataset split
X = df.drop('Target',axis=1)
y = df['Target'].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)

# Model instantiation
lgbmc = lgb.LGBMClassifier(objective      = 'binary',
                           boosting_type  = 'gbdt',
                           is_unbalance   = True,
                           metric         = ['binary_logloss'])

lgbmc.fit(X_train,y_train)

Here is the error, which does not appear without the pd.cut column.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-207-751795a98846> in <module>
      4                            metric         = ['binary_logloss'])
      5 
----> 6 lgbmc.fit(X_train,y_train)
      7 
      8 prob_pred = lgbmc.predict(X_test)

~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\sklearn.py in fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks)
    740                                         verbose=verbose, feature_name=feature_name,
    741                                         categorical_feature=categorical_feature,
--> 742                                         callbacks=callbacks)
    743         return self
    744 

~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\sklearn.py in fit(self, X, y, sample_weight, init_score, group, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_group, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks)
    540                               verbose_eval=verbose, feature_name=feature_name,
    541                               categorical_feature=categorical_feature,
--> 542                               callbacks=callbacks)
    543 
    544         if evals_result:

~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\engine.py in train(params, train_set, num_boost_round, valid_sets, valid_names, fobj, feval, init_model, feature_name, categorical_feature, early_stopping_rounds, evals_result, verbose_eval, learning_rates, keep_training_booster, callbacks)
    238         booster.best_score[dataset_name][eval_name] = score
    239     if not keep_training_booster:
--> 240         booster.model_from_string(booster.model_to_string(), False).free_dataset()
    241     return booster
    242 

~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\basic.py in model_to_string(self, num_iteration, start_iteration)
   2064                 ptr_string_buffer))
   2065         ret = string_buffer.value.decode()
-> 2066         ret += _dump_pandas_categorical(self.pandas_categorical)
   2067         return ret
   2068 

~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\basic.py in _dump_pandas_categorical(pandas_categorical, file_name)
    299     pandas_str = ('\npandas_categorical:'
    300                   + json.dumps(pandas_categorical, default=json_default_with_numpy)
--> 301                   + '\n')
    302     if file_name is not None:
    303         with open(file_name, 'a') as f:

~\AppData\Local\conda\conda\envs\py36\lib\json\__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    236         check_circular=check_circular, allow_nan=allow_nan, indent=indent,
    237         separators=separators, default=default, sort_keys=sort_keys,
--> 238         **kw).encode(obj)
    239 
    240 

~\AppData\Local\conda\conda\envs\py36\lib\json\encoder.py in encode(self, o)
    197         # exceptions aren't as detailed.  The list call should be roughly
    198         # equivalent to the PySequence_Fast that ''.join() would do.
--> 199         chunks = self.iterencode(o, _one_shot=True)
    200         if not isinstance(chunks, (list, tuple)):
    201             chunks = list(chunks)

~\AppData\Local\conda\conda\envs\py36\lib\json\encoder.py in iterencode(self, o, _one_shot)
    255                 self.key_separator, self.item_separator, self.sort_keys,
    256                 self.skipkeys, _one_shot)
--> 257         return _iterencode(o, 0)
    258 
    259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

ValueError: Circular reference detected

As in here, your problem is related to JSON serialization. The serializer "does not like" the labels of the categories created by pd.cut (labels that look like "(0.109, 0.208]").
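To see where it breaks, here is a minimal sketch (independent of LightGBM) showing that pd.cut produces pandas Interval categories that the standard json module cannot serialize; in your traceback the same incompatibility passes through LightGBM's _dump_pandas_categorical and its custom default hook, which is where it surfaces as the "Circular reference detected" ValueError:

import json
import pandas as pd

binned = pd.cut(pd.Series([0.1, 0.5, 0.9]), bins=3)

# The categories are pandas Interval objects, not plain strings or numbers
print(binned.cat.categories)

# Plain json.dumps rejects them (TypeError here; routed through LightGBM's
# json_default_with_numpy hook it shows up as the circular-reference error)
try:
    json.dumps(list(binned.cat.categories))
except TypeError as e:
    print(e)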

You can override the generated labels with the labels optional argument of the cut function (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html).

In your example, you can replace the line:

df['Float_a'] = pd.cut(x=df['Float_a'],bins=10)

with:

bins = 10
df['Float_a'] = pd.cut(x=df['Float_a'],bins=bins, labels=[f'bin_{i}' for i in range(bins)])
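Alternatively, if you want to keep the interval boundaries visible in the category names, a sketch of the same idea: keep the default pd.cut bins and then rename the categories to plain strings, which are JSON-serializable:

# Keep pd.cut's interval bins but cast the category labels to strings
# so LightGBM can serialize them (same fix as custom labels above)
df['Float_a'] = pd.cut(x=df['Float_a'], bins=10)
df['Float_a'] = df['Float_a'].cat.rename_categories(
    [str(c) for c in df['Float_a'].cat.categories])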