Data pre-processing steps with different features

I want to include multiple features in my classifier to improve model performance. I have a dataset similar to this one:

Text                               is_it_capital?  is_it_upper?  contains_num?  Label
an example of text                 0               0             0              0
ANOTHER example of text            1               1             0              1
What's happening?Let's talk at 5   1               0             1              1

I am applying different pre-processing algorithms to the text (BoW, TF-IDF, etc.). It is 'easy' to use only the text column in my classifier by selecting X = df['Text'] and applying a pre-processing algorithm. However, I would now also like to include is_it_capital? and the other variables (except Label) as features, since I found they could be useful for my classifier. What I tried is the following:

X=df[['Text','is_it_capital?', 'is_it_upper?', 'contains_num?']]
y=df['Label']

from sklearn.base import TransformerMixin
class DenseTransformer(TransformerMixin):
    def fit(self, X, y=None, **fit_params):
        return self
    def transform(self, X, y=None, **fit_params):
        return X.todense()

from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
     ('vectorizer', CountVectorizer()), 
     ('to_dense', DenseTransformer()), 
])

transformer = ColumnTransformer([('text', pipeline, 'Text')], remainder='passthrough')

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=40)

X_train = transformer.fit_transform(X_train)
X_test = transformer.transform(X_test)

df_train = pd.concat([X_train, y_train], axis=1)
df_test = pd.concat([X_test, y_test], axis=1)

#Logistic regression
logR_pipeline = Pipeline([
        ('LogRCV',countV),
        ('LogR_clf',LogisticRegression())
        ])

logR_pipeline.fit(df_train['Text'], df_train['Label'])
predicted_LogR = logR_pipeline.predict(df_test['Text'])
np.mean(predicted_LogR == df_test['Label'])

But I get this error:

TypeError: cannot concatenate object of type '<class 'scipy.sparse.csr.csr_matrix'>'; only Series and DataFrame objs are valid

Has anyone dealt with a similar problem? How can I fix it? My goal is to include all the features in my classifier.
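For reference (my own addition, not part of the original question): the TypeError happens because pd.concat only accepts Series and DataFrames, while CountVectorizer returns a scipy sparse matrix. One common workaround is to stay sparse and stack the extra numeric columns horizontally with scipy.sparse.hstack — a minimal sketch using the example rows above:

```python
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer

df = pd.DataFrame(
    [["an example of text", 0, 0, 0, 0],
     ["ANOTHER example of text", 1, 1, 0, 1],
     ["What's happening?Let's talk at 5", 1, 0, 1, 1]],
    columns=["Text", "is_it_capital?", "is_it_upper?", "contains_num?", "Label"])

cv = CountVectorizer()
text_features = cv.fit_transform(df["Text"])  # sparse matrix, shape (3, vocab_size)

# Turn the numeric flag columns into a sparse matrix and stack them next to the text features
extra = csr_matrix(df[["is_it_capital?", "is_it_upper?", "contains_num?"]].values)
X_all = hstack([text_features, extra]).tocsr()  # still sparse, no concat error
```

X_all can then be fed directly to most scikit-learn classifiers, which accept sparse input.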

UPDATE:

I also tried this:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer

class custom_count_v(BaseEstimator,TransformerMixin):
    def __init__(self,tfidf):
        self.tfidf = tfidf

    def fit(self, X, y=None):
        joined_X = X.apply(lambda x: ' '.join(x), axis=1)
        self.tfidf.fit(joined_X)        
        return self

    def transform(self, X):
        joined_X = X.apply(lambda x: ' '.join(x), axis=1)

        return self.tfidf.transform(joined_X)        


count_v = CountVectorizer() 

clmn = ColumnTransformer([("count", custom_count_v(count_v), ['Text'])],remainder="passthrough")
clmn.fit_transform(df)

It does not return any errors, but it is not clear to me whether I am including all the features correctly, and whether I need to do this before or after the train/test split. It would be very helpful if you could show me the steps up to applying the classifier:

#Logistic regression
logR_pipeline = Pipeline([
        ('LogRCV',....),
        ('LogR_clf',LogisticRegression())
        ])

logR_pipeline.fit(....)
predicted_LogR = logR_pipeline.predict(...)
np.mean(predicted_LogR == ...)

The dots should be replaced with a dataframe or columns (I guess it depends on the transformation and concatenation), so as to fix the steps and the mistakes I made.
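One plausible way to fill in the dots (my own sketch, not the answer's code; the example rows are duplicated so the toy split keeps both classes, and stratify is added for the same reason): put the ColumnTransformer itself inside the pipeline, so the split happens first and fitting only ever sees the training data.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Toy data: the example rows duplicated so both classes survive the split
rows = [["an example of text", 0, 0, 0, 0],
        ["ANOTHER example of text", 1, 1, 0, 1],
        ["What's happening?Let's talk at 5", 1, 0, 1, 1]]
df = pd.DataFrame(rows * 2,
                  columns=["Text", "is_it_capital?", "is_it_upper?",
                           "contains_num?", "Label"])

X = df[["Text", "is_it_capital?", "is_it_upper?", "contains_num?"]]
y = df["Label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=40, stratify=y)

# Vectorize the Text column; pass the flag columns through untouched
transformer = ColumnTransformer(
    [("text", CountVectorizer(), "Text")],  # a bare column name, not ['Text']
    remainder="passthrough")

logR_pipeline = Pipeline([
    ("LogRCV", transformer),
    ("LogR_clf", LogisticRegression())])

# The pipeline is fitted on the training split only, so the split comes first
logR_pipeline.fit(X_train, y_train)
predicted_LogR = logR_pipeline.predict(X_test)
accuracy = np.mean(predicted_LogR == y_test)
```

Because the vectorizer lives inside the pipeline, transformer.fit is only ever called on X_train, which avoids leaking test-set vocabulary into training.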

Your error seems to come from trying to concatenate an array and a Series.

I am not familiar with Pipeline and ColumnTransformer, so I may be getting this wrong; it seems it does not capture the feature names from the CountVectorizer, so having an unlabelled dataframe would not bring any benefit: maybe you can stick with numpy arrays. If I am wrong, jumping from an np.array to a dataframe should be easy anyway...

So, you could do something like this:

df_train = np.append(
  X_train, #this is an array
  np.array(y_train).reshape(len(y_train),1), #convert the Series to a numpy array of the correct shape
  axis=1)
print(df_train)

[[1 0 1 0 0 1 0 1 0 1 1 0 1]
 [0 1 0 1 1 0 1 0 1 1 0 1 1]]

Hope this helps (although, as I said, I am not familiar with these sklearn libraries...)

EDIT

Something more complete, without those pipelines (which I am not sure were needed); it fails on my computer because of the tiny input dataset, but you may have more success with your complete dataset.

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame(
        [["an example of text", 0, 0, 0, 0],
         ["ANOTHER example of text", 1, 1, 0, 1],
         ["What's happening?Let's talk at 5", 1, 0, 1, 1]
        ],
        columns=["Text", "is_it_capital?", "is_it_upper?", "contains_num?", "Label"]
        )

X=df[['Text','is_it_capital?', 'is_it_upper?', 'contains_num?']]
y=df['Label']

X_train, X_test, y_train, y_test  = train_test_split(X, y, test_size=0.25, random_state=40)

cv = CountVectorizer()

X_train = (
        pd.DataFrame(
                cv.fit_transform(X_train['Text']).toarray(),
                columns=cv.get_feature_names_out(), #get_feature_names() in scikit-learn < 1.0
                index=X_train.index
                ) #This way you keep the labels/indexes in a dataframe format
        .join(X_train.drop('Text', axis=1)) #add your previous 'get_dummies' columns
        )

X_test = (
        pd.DataFrame(
                cv.transform(X_test['Text']).toarray(),
                columns=cv.get_feature_names_out(), #get_feature_names() in scikit-learn < 1.0
                index=X_test.index
                )
        .join(X_test.drop('Text', axis=1))
        )

#Then fit your regression directly:
lr = LogisticRegression()
lr = lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)