Apache Spark spark.read not working as intended

I'm learning Apache Spark on IBM. I'm using the HMP dataset and followed the instructions in the tutorial, but the code doesn't work as intended. Here is my code:

!git clone https://github.com/wchill/HMP_Dataset

from pyspark.sql.types import StructType, StructField, IntegerType

schema = StructType([
    StructField("x",IntegerType(), True),
    StructField("y",IntegerType(), True),
    StructField("z",IntegerType(), True)
])

import os
file_list = os.listdir("HMP_Dataset")
file_list_filtered = [file for file in file_list if "_" in file]
from pyspark.sql.functions import lit

df = None
for cat in file_list_filtered:
    data_files = os.listdir("HMP_Dataset/" + cat)

    for data_file in data_files:
        print(data_file)

        temp_df = spark.read.option("header","false").option( "delimeter" , " ").csv("HMP_Dataset/" + cat + "/" + data_file, schema=schema)

        temp_df = temp_df.withColumn("class",lit(cat))
        temp_df = temp_df.withColumn("source",lit(data_file))

        if df is None:
            df = temp_df
        else:
            df = df.union(temp_df)

The x, y, and z columns stay null when I use the df.show() method. Here is the output:

+----+----+----+-----------+--------------------+
|   x|   y|   z|      class|              source|
+----+----+----+-----------+--------------------+
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
+----+----+----+-----------+--------------------+
only showing top 20 rows

The x, y, z columns should contain numbers. What exactly am I doing wrong? I used the exact code shown in the tutorial video. I'm using IBM Watson Studio to run the program. Link to the tutorial: https://www.coursera.org/learn/advanced-machine-learning-signal-processing/lecture/8cfiW/introduction-to-sparkml

There appears to be a typo in the option where you specify "delimeter"; the correct option name to pass is "delimiter":

temp_df = spark.read.option("header","false").option( "delimeter" , " ").csv("HMP_Dataset/" + cat + "/" + data_file, schema=schema)

Correct:

temp_df = spark.read.option("header","false").option( "delimiter" , " ").csv("HMP_Dataset/" + cat + "/" + data_file, schema=schema)

You can also use "sep" to set the separator. For more reference, see spark-csv here or in the Spark documentation: https://github.com/databricks/spark-csv
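The reason you get nulls rather than an error: Spark silently ignores unknown options like "delimeter" and falls back to its default separator ",", so each whole line is parsed as one field, and casting that string to IntegerType yields null. The failure mode can be sketched in plain Python (the sample values below are made up, not taken from the actual dataset):

```python
import csv
import io

# A sample line like those in an HMP data file: three space-separated integers.
line = "22 49 35\n"

# With the misspelled option, Spark falls back to its default delimiter (","),
# so the whole line ends up as a single field:
wrong = next(csv.reader(io.StringIO(line), delimiter=","))
print(wrong)  # ['22 49 35'] -- one field, not three

# Casting that single string to an integer fails, which Spark reports as null
# (mimicked here with a safe int cast):
def to_int(s):
    try:
        return int(s)
    except ValueError:
        return None

print([to_int(f) for f in wrong])  # [None]

# With the correct delimiter, the line splits into three integer fields:
right = next(csv.reader(io.StringIO(line), delimiter=" "))
print([to_int(f) for f in right])  # [22, 49, 35]
```

This is why the fix is just the option name: once Spark actually splits on " ", the x, y, z values cast cleanly to integers.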