Spark - create RDD of (label, features) pairs from CSV file

I have a CSV file and I want to perform a simple LinearRegressionWithSGD on the data.

Sample data is shown below (the file has 99 rows in total, including the header); the objective is to predict the y_3 variable:

y_3,x_6,x_7,x_73_1,x_73_2,x_73_3,x_8
2995.3846153846152,17.0,1800.0,0.0,1.0,0.0,12.0
2236.304347826087,17.0,1432.0,1.0,0.0,0.0,12.0
2001.9512195121952,35.0,1432.0,0.0,1.0,0.0,5.0
992.4324324324324,17.0,1430.0,1.0,0.0,0.0,12.0
4386.666666666667,26.0,1430.0,0.0,0.0,1.0,25.0
1335.9036144578313,17.0,1432.0,0.0,1.0,0.0,5.0
1097.560975609756,17.0,1100.0,0.0,1.0,0.0,5.0
3526.6666666666665,26.0,1432.0,0.0,1.0,0.0,12.0
506.8421052631579,17.0,1430.0,1.0,0.0,0.0,5.0
2095.890410958904,35.0,1430.0,1.0,0.0,0.0,12.0
720.0,35.0,1430.0,1.0,0.0,0.0,5.0
2416.5,17.0,1432.0,0.0,0.0,1.0,12.0
3306.6666666666665,35.0,1800.0,0.0,0.0,1.0,12.0
6105.974025974026,35.0,1800.0,1.0,0.0,0.0,25.0
1400.4624277456646,35.0,1800.0,1.0,0.0,0.0,5.0
1414.5454545454545,26.0,1430.0,1.0,0.0,0.0,12.0
5204.68085106383,26.0,1800.0,0.0,0.0,1.0,25.0
1812.2222222222222,17.0,1800.0,1.0,0.0,0.0,12.0
2763.5928143712576,35.0,1100.0,1.0,0.0,0.0,12.0

I have read in the data with the following command:

val data = sc.textFile(datadir + "/data_2.csv");

When I try to create the RDD of (label, features) pairs with the following command:

val parsedData = data.map { line =>
    val parts = line.split(',')
    LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
    }.cache()

it fails, and I cannot proceed to train the model. Any help?

P.S. I am running Spark with the Scala IDE on Windows 7 x64.

When you read in the file, its first line,

y_3,x_6,x_7,x_73_1,x_73_2,x_73_3,x_8

is also read and transformed by your map function, so you end up calling toDouble on the string y_3. You need to filter out that first line and learn from the remaining lines only.
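
For instance, one way to skip the header (a minimal sketch, assuming csv holds the RDD returned by sc.textFile; the name noHeader is hypothetical) is to drop only the first line of the first partition:

val noHeader = csv.mapPartitionsWithIndex { (idx, iter) =>
    // Only the first partition starts with the header line; drop it there
    if (idx == 0) iter.drop(1) else iter
}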

After some struggling I found the solution. The first problem was the header row, and the second was the mapping function. Here is the complete solution:

// Imports needed for the (label, features) representation
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors

// To read the file
val csv = sc.textFile(datadir + "/data_2.csv")

// To find the header
val header = csv.first

// To remove the header (data rows start with a digit, the header with 'y')
val data = csv.filter(_(0) != header(0))

// To create an RDD of (label, features) pairs: the first column is the
// label, all remaining columns are the features
val parsedData = data.map { line =>
    val parts = line.split(',')
    LabeledPoint(parts(0).toDouble, Vectors.dense(parts.tail.map(_.toDouble)))
}.cache()
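
With parsedData in place, training could then proceed along these lines (a minimal sketch; the numIterations and stepSize values are assumptions that would need tuning):

import org.apache.spark.mllib.regression.LinearRegressionWithSGD

// Train the linear model with plain SGD (hyperparameters are assumptions)
val numIterations = 100
val stepSize = 0.0001
val model = LinearRegressionWithSGD.train(parsedData, numIterations, stepSize)

// Sanity check: compare actual labels with predictions on the training data
val valuesAndPreds = parsedData.map { point =>
    (point.label, model.predict(point.features))
}
val MSE = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }.mean()
println("training Mean Squared Error = " + MSE)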

Hope this saves you some time.