Pyspark SQL select data where a column is NaN

How can I select only the rows where a particular column has NaN values in pyspark?

Setup

import numpy as np
import pandas as pd


# pyspark
import pyspark
from pyspark.sql import functions as F
from pyspark.sql.types import *

spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate()
sc = spark.sparkContext
sc.setLogLevel("INFO")


# data
dft = pd.DataFrame({
    'Code': [1, 2, 3, 4, 5, 6],
    'Name': ['Odeon', 'Imperial', 'Majestic',
             'Royale', 'Paraiso', 'Nickelodeon'],
    'Movie': [5.0, 1.0, np.nan, 6.0, 3.0, np.nan]})


schema = StructType([
    StructField('Code', IntegerType(), True),
    StructField('Name', StringType(), True),
    StructField('Movie', FloatType(), True),
])

sdft = spark.createDataFrame(dft, schema)
sdft.createOrReplaceTempView("MovieTheaters")
sdft.show()

My attempt

spark.sql("""
select * from MovieTheaters where Movie is null
""").show()

+----+----+-----+
|Code|Name|Movie|
+----+----+-----+
+----+----+-----+

I get EMPTY output. How can I fix this?

Expected output:

+----+-----------+-----+
|Code|       Name|Movie|
+----+-----------+-----+
|   3|   Majestic|  NaN|
|   6|Nickelodeon|  NaN|
+----+-----------+-----+

The Movie column holds float NaN values, not nulls, so `where Movie is null` matches nothing. In Spark SQL, NaN is an ordinary float value, and NaN = NaN evaluates to true, so you can filter on it directly (the string literal 'NaN' is cast to a float NaN):

>>> spark.sql("""select * from MovieTheaters where Movie = 'NaN' """).show()
+----+-----------+-----+
|Code|       Name|Movie|
+----+-----------+-----+
|   3|   Majestic|  NaN|
|   6|Nickelodeon|  NaN|
+----+-----------+-----+