Accuracy in logistic regression

This is slightly modified code that I found here...

I used the same logic as the original author, but I am still not getting good accuracy. The mean reciprocal rank is close (mine: 52.79, example: 48.04).

# imports assumed (scikit-learn, NumPy; to_categorical as in Keras):
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import label_ranking_average_precision_score
from tensorflow.keras.utils import to_categorical

cv = CountVectorizer(binary=True, max_df=0.95)
feature_set = cv.fit_transform(df["short_description"])

X_train, X_test, y_train, y_test = train_test_split(
    feature_set, df["category"].values, random_state=2000)

scikit_log_reg = LogisticRegression(
    verbose=1, solver="liblinear", random_state=0, C=5, penalty="l2", max_iter=1000)

model = scikit_log_reg.fit(X_train, y_train)

target = to_categorical(y_test)
y_pred = model.predict_proba(X_test)
label_ranking_average_precision_score(target, y_pred)
>> 0.5279108613021547

model.score(X_test, y_test)
>> 0.38620071684587814
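These two numbers measure different things, so they are not expected to match: `model.score` is top-1 accuracy (only the single highest-probability class counts), while `label_ranking_average_precision_score` also gives partial credit when the true class is ranked near the top. A toy sketch with made-up scores shows how the two metrics diverge:

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# Made-up example: 2 samples, 3 classes, one true class each.
# y_true is a one-hot indicator matrix, y_score the predicted probabilities.
y_true = np.array([[0, 1, 0],
                   [1, 0, 0]])
y_score = np.array([[0.2, 0.5, 0.3],
                    [0.1, 0.6, 0.3]])

# Top-1 accuracy: only the argmax counts; sample 2 gets it wrong.
top1 = np.mean(np.argmax(y_score, axis=1) == np.argmax(y_true, axis=1))
print(top1)  # 0.5

# LRAP: the true class is ranked 1st in sample 1 and 3rd in sample 2,
# contributing 1 and 1/3 respectively.
lrap = label_ranking_average_precision_score(y_true, y_score)
print(lrap)  # (1 + 1/3) / 2 ≈ 0.667
```

A model can therefore have a mediocre top-1 score while still ranking the correct class highly, which is exactly the situation in the question.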

But the accuracy from the sample notebook (59.80) does not match the one from my code (38.62).

Does the following function used in the example notebook correctly return the accuracy?

def compute_accuracy(eval_items: list):
    correct = 0
    total = 0

    for item in eval_items:
        true_pred = item[0]
        machine_pred = set(item[1])

        for cat in true_pred:
            if cat in machine_pred:
                correct += 1
                break

    accuracy = correct / float(len(eval_items))
    return accuracy
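To see what this function actually measures, here is a quick check with made-up labels (using a condensed, behaviorally equivalent copy of the function so the snippet runs standalone). A sample counts as correct as soon as *any* of its true categories appears *anywhere* in the prediction list, so when it is fed top-3 lists it reports top-3 accuracy, not top-1:

```python
def compute_accuracy(eval_items: list):
    # condensed version of the notebook function above: a sample is a hit
    # if any true category appears in the predicted set
    correct = 0
    for true_pred, machine_pred in eval_items:
        if any(cat in set(machine_pred) for cat in true_pred):
            correct += 1
    return correct / float(len(eval_items))

# hypothetical items: ([true categories], [top-3 predictions])
eval_items = [
    (["sports"], ["sports", "politics", "tech"]),   # true label in top 3 -> hit
    (["travel"], ["food", "style", "business"]),    # true label missing  -> miss
]
print(compute_accuracy(eval_items))  # 0.5
```

Note that the true label being 1st or 3rd in the list makes no difference; only membership matters.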

The notebook code is checking whether the actual category is in the top 3 returned by the model:

def get_top_k_predictions(model, X_test, k):
    probs = model.predict_proba(X_test)
    # indices of the k largest probabilities per row (in ascending order)
    best_n = np.argsort(probs, axis=1)[:, -k:]
    # map class indices to class labels
    preds = [[model.classes_[predicted_cat] for predicted_cat in prediction] for prediction in best_n]
    # reverse each list so the most probable class comes first
    preds = [item[::-1] for item in preds]
    return preds
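A minimal sketch of what the `argsort`/`[:, -k:]` slicing plus the reversal produces, with made-up probabilities over hypothetical classes `["a", "b", "c"]`:

```python
import numpy as np

# made-up probabilities for 2 samples over 3 classes
classes = np.array(["a", "b", "c"])
probs = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2]])

best_n = np.argsort(probs, axis=1)[:, -2:]            # top-2 indices, ascending
preds = [list(classes[row][::-1]) for row in best_n]  # reverse: most probable first
print(preds)  # [['b', 'c'], ['a', 'b']]
```

So each row ends up as a ranked top-k list of class labels, which is exactly the shape `compute_accuracy` expects for its `machine_pred` argument.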

If you replace the evaluation part of your code with the code below, you will see that your model's top-3 accuracy is also 0.5980:

...    

model = scikit_log_reg.fit(X_train, y_train)

top_preds = get_top_k_predictions(model, X_test, 3)
pred_pairs = list(zip([[v] for v in y_test], top_preds))
print(compute_accuracy(pred_pairs))

# below is a simpler & more Pythonic version of compute_accuracy
print(np.mean([actual in pred for actual, pred in zip(y_test, top_preds)]))