Pandas: how to fill missing data with a mean value?
I read some data from a remote device every 5 seconds.
It is saved like this:
2018-01-01 00:00:00 2
2018-01-01 00:00:05 3
2018-01-01 00:00:10 3
2018-01-01 00:00:15 2
2018-01-01 00:00:20 3
2018-01-01 00:00:25 4
2018-01-01 00:00:30 3
2018-01-01 00:00:35 2
2018-01-01 00:00:40 4
2018-01-01 00:00:45 5
2018-01-01 00:00:50 3
2018-01-01 00:00:55 3
Unfortunately, the communication is not always reliable and sometimes fails.
In that case, the remote device delivers the accumulated value of the data as soon as it can.
The data above could then end up saved as:
2018-01-01 00:00:00 2
2018-01-01 00:00:05 3
2018-01-01 00:00:10 3
.......... 00:00:15 missing...
.......... 00:00:20 missing...
.......... 00:00:25 missing...
2018-01-01 00:00:30 12 <--- sum of the last 4 readings
2018-01-01 00:00:35 2
.......... 00:00:40 missing...
.......... 00:00:45 missing...
2018-01-01 00:00:50 15 <--- sum of the last 3 readings
2018-01-01 00:00:55 3
I need to fill in all the missing rows and remove the peaks from the raw data, replacing each peak and its missing rows with the mean value (the peak divided by the number of rows it covers).
Resampling is easy:
2018-01-01 00:00:00 2
2018-01-01 00:00:05 3
2018-01-01 00:00:10 3
2018-01-01 00:00:15 NaN
2018-01-01 00:00:20 NaN
2018-01-01 00:00:25 NaN
2018-01-01 00:00:30 12
2018-01-01 00:00:35 2
2018-01-01 00:00:40 NaN
2018-01-01 00:00:45 NaN
2018-01-01 00:00:50 15
2018-01-01 00:00:55 3
But how do I fill in the NaNs and remove the peaks?
I looked at the various methods of asfreq and resample, but none of them (bfill, ffill) are useful in this case.
The final result should be:
2018-01-01 00:00:00 2
2018-01-01 00:00:05 3
2018-01-01 00:00:10 3
2018-01-01 00:00:15 3 <--- NaN filled with mean = peak 12/4 rows
2018-01-01 00:00:20 3 <--- NaN filled with mean
2018-01-01 00:00:25 3 <--- NaN filled with mean
2018-01-01 00:00:30 3 <--- peak changed
2018-01-01 00:00:35 2
2018-01-01 00:00:40 5 <--- NaN filled with mean = peak 15/3 rows
2018-01-01 00:00:45 5 <--- NaN filled with mean
2018-01-01 00:00:50 5 <--- peak changed
2018-01-01 00:00:55 3
The dataframe I use for testing:
import numpy as np
import pandas as pd
time = pd.date_range(start='2021-01-01', freq='5s', periods=12)
read_data = pd.Series([2, 3, 3, np.nan, np.nan, np.nan, 12, 2, np.nan, np.nan, 15, 3], index=time).dropna()
read_data.asfreq("5s")
One way (this assumes read_data has already been resampled with asfreq("5s"), as in the full example below):
m = (read_data.isna() | read_data.shift(fill_value=0).isna()).astype(int)
read_data = read_data.bfill() / m.groupby(m.ne(m.shift()).cumsum()).transform('count').where(m.eq(1), 1)
Output:
2021-01-01 00:00:00 2.0
2021-01-01 00:00:05 3.0
2021-01-01 00:00:10 3.0
2021-01-01 00:00:15 3.0
2021-01-01 00:00:20 3.0
2021-01-01 00:00:25 3.0
2021-01-01 00:00:30 3.0
2021-01-01 00:00:35 2.0
2021-01-01 00:00:40 5.0
2021-01-01 00:00:45 5.0
2021-01-01 00:00:50 5.0
2021-01-01 00:00:55 3.0
Freq: 5S, dtype: float64
Full example:
import numpy as np
import pandas as pd
time = pd.date_range(start='2021-01-01', freq='5s', periods=12)
read_data = pd.Series([2, 3, 3, np.nan, np.nan, np.nan, 12, 2, np.nan, np.nan, 15, 3], index=time).dropna()
read_data = read_data.asfreq("5s")
m = (read_data.isna() | read_data.shift(fill_value=0).isna()).astype(int)
read_data = read_data.bfill() / m.groupby(m.ne(m.shift()).cumsum()).transform('count').where(m.eq(1), 1)
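To see what the one-liner does, here is an equivalent expansion of the last two lines (run it in place of them; the intermediate names run_id, run_len and divisor are my own):
# 1 for rows that are NaN or that immediately follow a NaN (the gap plus its peak row), else 0
m = (read_data.isna() | read_data.shift(fill_value=0).isna()).astype(int)

# label each contiguous run of equal m values and count its length
run_id = m.ne(m.shift()).cumsum()
run_len = m.groupby(run_id).transform('count')

# keep the run length only inside the marked runs, divide by 1 everywhere else
divisor = run_len.where(m.eq(1), 1)

# backfill copies the peak into the gap; dividing spreads it evenly: 12/4 = 3, 15/3 = 5
read_data = read_data.bfill() / divisor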
This can be done by splitting (grouping) each run of missing values together with its corresponding peak (after resampling) into a single group, backfilling, and then taking the mean of each group:
>>> read_data = read_data.to_frame(name='val').assign(idx=range(len(read_data)))
>>> read_data = read_data.asfreq('5s').bfill()
>>> read_data = read_data/read_data.groupby('idx').transform(len)
>>> read_data.drop('idx', axis=1, inplace=True)
>>> read_data.val
2021-01-01 00:00:00 2.0
2021-01-01 00:00:05 3.0
2021-01-01 00:00:10 3.0
2021-01-01 00:00:15 3.0
2021-01-01 00:00:20 3.0
2021-01-01 00:00:25 3.0
2021-01-01 00:00:30 3.0
2021-01-01 00:00:35 2.0
2021-01-01 00:00:40 5.0
2021-01-01 00:00:45 5.0
2021-01-01 00:00:50 5.0
2021-01-01 00:00:55 3.0
Freq: 5S, Name: val, dtype: float64
Explanation:
First convert your original series into a dataframe and add another column, idx, which uniquely identifies each row as its own group:
>>> read_data = read_data.to_frame(name='val').assign(idx=range(len(read_data)))
>>> read_data
val idx
2021-01-01 00:00:00 2.0 0
2021-01-01 00:00:05 3.0 1
2021-01-01 00:00:10 3.0 2
2021-01-01 00:00:30 12.0 3
2021-01-01 00:00:35 2.0 4
2021-01-01 00:00:50 15.0 5
2021-01-01 00:00:55 3.0 6
Resample to reinsert the missing values, then backfill them with the peak value:
>>> read_data = read_data.asfreq('5s').bfill()
>>> read_data
val idx
2021-01-01 00:00:00 2.0 0.0
2021-01-01 00:00:05 3.0 1.0
2021-01-01 00:00:10 3.0 2.0
2021-01-01 00:00:15 12.0 3.0
2021-01-01 00:00:20 12.0 3.0
2021-01-01 00:00:25 12.0 3.0
2021-01-01 00:00:30 12.0 3.0
2021-01-01 00:00:35 2.0 4.0
2021-01-01 00:00:40 15.0 5.0
2021-01-01 00:00:45 15.0 5.0
2021-01-01 00:00:50 15.0 5.0
2021-01-01 00:00:55 3.0 6.0
As you can now see, the backfilled values end up in the same group as their peak (they share the same idx).
So group by idx and divide the values by the length of each group, then drop the idx column:
>>> read_data = read_data/read_data.groupby('idx').transform(len)
>>> read_data.drop('idx', axis=1, inplace=True)
>>> read_data
val
2021-01-01 00:00:00 2.0
2021-01-01 00:00:05 3.0
2021-01-01 00:00:10 3.0
2021-01-01 00:00:15 3.0
2021-01-01 00:00:20 3.0
2021-01-01 00:00:25 3.0
2021-01-01 00:00:30 3.0
2021-01-01 00:00:35 2.0
2021-01-01 00:00:40 5.0
2021-01-01 00:00:45 5.0
2021-01-01 00:00:50 5.0
2021-01-01 00:00:55 3.0
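If this needs to be done for several series, the steps above can be wrapped in a small helper (the function name spread_peaks is mine; it only repeats the steps shown above):
import numpy as np
import pandas as pd

def spread_peaks(s, freq='5s'):
    # s: timestamped series with the gap rows dropped; the value after each gap is the accumulated peak
    df = s.to_frame(name='val').assign(idx=range(len(s)))
    df = df.asfreq(freq).bfill()                # reinsert the gaps and copy the peak back over them
    df = df / df.groupby('idx').transform(len)  # divide every group by its size
    return df.drop('idx', axis=1)['val']

time = pd.date_range(start='2021-01-01', freq='5s', periods=12)
raw = pd.Series([2, 3, 3, np.nan, np.nan, np.nan, 12, 2,
                 np.nan, np.nan, 15, 3], index=time).dropna()
print(spread_peaks(raw))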