Python (sklearn) - Why am I getting the same prediction for every testing tuple in SVR?

Answers to similar questions on Stack Overflow suggest modifying the parameter values in the SVR() instance, but I don't understand how to go about that.

Here is the code I am using:

import json
import numpy as np
from sklearn.svm import SVR

with open('training_data.txt', 'r') as f:
    data = json.load(f)

with open('predict_py.txt', 'r') as f:
    data1 = json.load(f)

features = []
response = []
predict = []

for row in data:
    a = [
        row['star_power'],
        row['view_count'],
        row['like_count'],
        row['dislike_count'],
        row['sentiment_score'],
        row['holidays'],
        row['clashes'],
    ]
    features.append(a)
    response.append(row['collection'])

for row in data1:
    a = [
        row['star_power'],
        row['view_count'],
        row['like_count'],
        row['dislike_count'],
        row['sentiment_score'],
        row['holidays'],
        row['clashes'],
    ]
    predict.append(a)

X = np.array(features).astype(float)
Y = np.array(response).astype(float)
predict = np.array(predict).astype(float)

svm = SVR()
svm.fit(X,Y)
print('svm prediction')
svm_pred = svm.predict(predict)
print(svm_pred)

Here are the links to the two text files I use in the code:

training_data.txt

predict_py.txt

Output:

svm prediction
[ 36.07  36.07  36.07  36.07  36.07  36.07  36.07  36.07  36.07  36.07
36.07  36.07  36.07]

As requested, here are samples of the two text files:

1) training_data.txt:

[{"star_power":"1300","view_count":"50602729","like_count":"348059","dislike_count":"31748","holidays":"1","clashes":"0","sentiment_score":"0.32938596491228","collection":"383"},{"star_power":"1700","view_count":"36012808","like_count":"205694","dislike_count":"20130","holidays":"0","clashes":"0","sentiment_score":"0.1130303030303","collection":"300.68"},{"star_power":"0","view_count":"23892902","like_count":"86380","dislike_count":"4426","holidays":"0","clashes":"0","sentiment_score":"0.16004079254079","collection":"188.72"},{"star_power":"0","view_count":"27177685","like_count":"374671","dislike_count":"10372","holidays":"0","clashes":"0","sentiment_score":"0.16032407407407","collection":"132.85"},{"star_power":"500","view_count":"7481738","like_count":"42734","dislike_count":"1885","holidays":"0","clashes":"0","sentiment_score":"0.38622493734336","collection":"128.45"},{"star_power":"400","view_count":"16895259","like_count":"99158","dislike_count":"4188","holidays":"0","clashes":"0","sentiment_score":"0.22791203703704","collection":"127.48"},{"star_power":"200","view_count":"16646480","like_count":"63472","dislike_count":"13652","holidays":"1","clashes":"1","sentiment_score":"0.16873480902778","collection":"112.14"},{"star_power":"400","view_count":"18717042","like_count":"67497","dislike_count":"14165","holidays":"0","clashes":"0","sentiment_score":"0.30881006493506","collection":"109.14"}]

2) predict_py.txt

[{"star_power":"0","view_count":"3717403","like_count":"13399","dislike_count":"909","sentiment_score":"0.154167","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"1640896","like_count":"2923","dislike_count":"328","sentiment_score":"0.109112","holidays":"0","clashes":"0"},{"star_power":"100","view_count":"14723084","like_count":"95088","dislike_count":"9816","sentiment_score":"0.352344","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"584922","like_count":"4032","dislike_count":"212","sentiment_score":"0.3495","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"14826843","like_count":"94788","dislike_count":"4169","sentiment_score":"0.208472","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"1866184","like_count":"2750","dislike_count":"904","sentiment_score":"0.1275","holidays":"0","clashes":"0"},{"star_power":"200","view_count":"22006916","like_count":"184780","dislike_count":"13796","sentiment_score":"0.183611","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"2645992","like_count":"4698","dislike_count":"1874","sentiment_score":"0.185487","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"13886030","like_count":"116879","dislike_count":"6608","sentiment_score":"0.243479","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"3102123","like_count":"36790","dislike_count":"769","sentiment_score":"0.065651","holidays":"0","clashes":"0"},{"star_power":"300","view_count":"16469439","like_count":"110054","dislike_count":"17892","sentiment_score":"0.178432","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"6353017","like_count":"81236","dislike_count":"2154","sentiment_score":"0.0480556","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"8679597","like_count":"89531","dislike_count":"6923","sentiment_score":"0.152083","holidays":"0","clashes":"0"}]

Any suggestions? Thanks.

Change your code to standardize the data. SVR's default RBF kernel is sensitive to feature scale: with raw values in the millions, every test point looks equally "far" from the training data, so the model collapses to a near-constant prediction.

from sklearn.preprocessing import RobustScaler

rbX = RobustScaler()
X = rbX.fit_transform(X)

# RobustScaler expects a 2-D array, so reshape the 1-D target
# before scaling and flatten it back afterwards
rbY = RobustScaler()
Y = rbY.fit_transform(Y.reshape(-1, 1)).ravel()

Then call fit():

svm = SVR()
svm.fit(X,Y)

When predicting, transform predict with rbX only:

svm_pred = svm.predict(rbX.transform(predict))

Now svm_pred is on the standardized scale. You want the predicted Y in its original units, so inverse-transform svm_pred with rbY (again reshaping to 2-D for the scaler):

svm_pred = rbY.inverse_transform(svm_pred.reshape(-1, 1)).ravel()

Then print svm_pred; you should get sensible, varying predictions.
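The manual scale / fit / inverse-transform steps above can also be wrapped so the target scaling is undone automatically at predict time, using scikit-learn's TransformedTargetRegressor. A minimal sketch, with synthetic data standing in for your files (the column magnitudes roughly mimic your view/like counts):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.compose import TransformedTargetRegressor

rng = np.random.default_rng(0)
# 50 rows, 7 features on wildly different scales (like the real data)
X = rng.uniform(0, 5e7, size=(50, 7))
# target loosely tied to the second feature, plus noise
Y = X[:, 1] / 1e5 + rng.normal(0, 5, size=50)

model = TransformedTargetRegressor(
    # scale the features, then fit SVR on the scaled inputs
    regressor=make_pipeline(RobustScaler(), SVR()),
    # scale the target too; predictions are inverse-transformed for you
    transformer=RobustScaler(),
)
model.fit(X, Y)

pred = model.predict(X[:5])
print(pred)  # values now vary per row instead of one repeated number
```

This keeps the two scalers and the estimator in one object, so you cannot forget the inverse transform at prediction time.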