How to compute the mean and standard deviation of columns ignoring NaN values

I have a dataframe of doubles containing some NaN/null/NA values:

val dfDouble = Seq(
  (1.0, 1.0, 1.0, 3.0),
  (1.0, 2.0, 0.0, 0.0),
  (1.0, 3.0, 1.0, 1.0),
  (1.0, NaN, 0.0, 2.0)).toDF("m1", "m2", "m3", "m4")

I'd like to compute the mean, the standard deviation, and the number of non-null observations for each column, but Spark's regular aggregate functions return NaN as soon as a column contains a single NaN value:

dfDouble.select(dfDouble.columns.map(c => mean(col(c))) :_*).show
// +-------+-------+-------+-------+
// |avg(m1)|avg(m2)|avg(m3)|avg(m4)|
// +-------+-------+-------+-------+
// |    1.0|    NaN|    0.5|    1.5|
// +-------+-------+-------+-------+
dfDouble.select(dfDouble.columns.map(c => stddev(col(c))) :_*).show
// +---------------+---------------+------------------+------------------+
// |stddev_samp(m1)|stddev_samp(m2)|   stddev_samp(m3)|   stddev_samp(m4)|
// +---------------+---------------+------------------+------------------+
// |            0.0|            NaN|0.5773502691896257|1.2909944487358056|
// +---------------+---------------+------------------+------------------+

How can I compute the mean, the standard deviation, and the number of non-null observations while excluding the NaN values?

You can replace the NaN values with null before applying the mean and stddev functions (aggregate functions ignore nulls, but not NaN):

// needs: import org.apache.spark.sql.functions.{col, isnan, lit, when}
// Note: na.fill replaces nulls/NaN *with* a value, so it cannot turn NaN into
// null; a per-column when(isnan, null) does the conversion instead.
val df = dfDouble.select(dfDouble.columns.map(c =>
  when(isnan(col(c)), lit(null)).otherwise(col(c)).alias(c)): _*)

df.select(df.columns.map(c => mean(col(c))) :_*).show

//+-------+-------+-------+-------+
//|avg(m1)|avg(m2)|avg(m3)|avg(m4)|
//+-------+-------+-------+-------+
//|    1.0|    2.0|    0.5|    1.5|
//+-------+-------+-------+-------+
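Since Spark's `stddev` is the sample standard deviation (`stddev_samp`) and aggregate functions skip nulls, you can sanity-check the expected results for `m2` in plain Scala (no Spark needed) by dropping the NaN and aggregating by hand:

```scala
// Values of column m2 from the question's dfDouble.
val m2 = Seq(1.0, 2.0, 3.0, Double.NaN)

val valid = m2.filterNot(_.isNaN)   // keep only the non-NaN observations
val n = valid.length                // number of non-null/non-NaN observations
val mean = valid.sum / n            // arithmetic mean of the remaining values
// Sample variance (divide by n - 1), matching stddev_samp semantics.
val variance = valid.map(x => math.pow(x - mean, 2)).sum / (n - 1)
val stddev = math.sqrt(variance)
```

This gives a count of 3, a mean of 2.0 and a standard deviation of 1.0 for `m2`. It also covers the non-null-count part of the question: once NaN has been converted to null, `count(col(c))` in Spark counts only the non-null rows.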