Implementing a Kalman filter for a state space model of a movement process
Is it possible to implement in statsmodels the model shown in Example 3.6 of Bayesian Filtering and Smoothing?
I can follow the Matlab code provided with the book, but I am not sure whether and how this kind of model can be implemented in statsmodels.
The example involves tracking the position of an object in 2D space. The state is four-dimensional, x = (x_1, x_2, x_3, x_4), but I rearranged the vector so that (x_1, x_3) represents the position and (x_2, x_4) the velocity in the two directions. The simulated data for the process then consists of 100 position observations, arranged in a 2x100 matrix Y.
import numpy as np
from scipy import linalg
# The matrices in the dynamic model are set up as follows
q, dt, s = 1, 0.1, 0.5
A = np.array([[1, dt, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, dt],
              [0, 0, 0, 1]])
Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2, 0, 0],
                  [dt ** 2 / 2, dt, 0, 0],
                  [0, 0, dt ** 3 / 3, dt ** 2 / 2],
                  [0, 0, dt ** 2 / 2, dt]])
# Matrices in the measurement model are designed as follows
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]])
R = s ** 2 * np.eye(2)
# Starting values
m0 = np.array([[0, 1, 0, -1]]).T # column vector
P0 = np.eye(4)
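(The question does not show how Y was simulated; as a minimal sketch, one way to generate such a 2x100 observation matrix from the model above, reusing the matrices A, Q, H, R and the starting value m0 just defined and an arbitrary random seed, is:)
# Not in the original question: simulate the process to obtain the
# 2x100 data matrix Y used below (hypothetical seed and noise draws).
rng = np.random.default_rng(0)
n_obs = 100
x = m0[:, 0].copy()
Y = np.zeros((2, n_obs))
for k in range(n_obs):
    # Propagate the state with process noise ~ N(0, Q)
    x = A @ x + rng.multivariate_normal(np.zeros(4), Q)
    # Observe the two position components with noise ~ N(0, R)
    Y[:, k] = H @ x + rng.multivariate_normal(np.zeros(2), R)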
The Kalman filter for this process is implemented as follows:
n = 100
m = m0
P = P0
kf_m = np.zeros((m.shape[0], n))
kf_P = np.zeros((P.shape[0], P.shape[1], n))
for k in range(n):
    # Prediction step
    m = A @ m
    P = A @ P @ A.T + Q
    # Update step: Kalman gain K = P H' S^{-1}, computed via least squares
    S = H @ P @ H.T + R
    K = linalg.lstsq(S.T, (P @ H.T).T)[0].T
    m = m + K @ (Y[:, k, np.newaxis] - H @ m)
    P = P - K @ S @ K.T
    # Store the filtered estimates
    kf_m[:, k] = m.flatten()
    kf_P[:, :, k] = P
If it is possible, how can this filter be implemented in statsmodels? statsmodels would probably run more efficiently if the data were much larger, and a smoother could then be implemented on top of the filter in a subclass.
Yes, you can do this; the main thing is to map your notation onto the notation/variable names that Statsmodels uses.
Here is an example of how you can do it:
import numpy as np
import pandas as pd # Pandas isn't necessary, but makes some output nicer
import statsmodels.api as sm
# The matrices in the dynamic model are set up as follows
q, dt, s = 1, 0.1, 0.5
A = np.array([[1, dt, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, dt],
              [0, 0, 0, 1]])
Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2, 0, 0],
                  [dt ** 2 / 2, dt, 0, 0],
                  [0, 0, dt ** 3 / 3, dt ** 2 / 2],
                  [0, 0, dt ** 2 / 2, dt]])
# Matrices in the measurement model are designed as follows
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]])
R = s ** 2 * np.eye(2)
# Starting values
m0 = np.array([[0, 1, 0, -1]]).T # column vector
P0 = np.eye(4)
# Now instantiate a statespace model with the data
# (data should be shaped nobs x n_variables)
kf = sm.tsa.statespace.MLEModel(pd.DataFrame(Y.T), k_states=4)
kf._state_names = ['x1', 'dx1/dt', 'x2', 'dx2/dt']
kf['design'] = H
kf['obs_cov'] = R
kf['transition'] = A
kf['selection'] = np.eye(4)
kf['state_cov'] = Q
# Edit: the timing convention for initialization
# in Statsmodels differs from the one in the question.
# So we should not use kf.initialize_known(m0[:, 0], P0)
# But instead, to fit the question's initialization
# into Statsmodels' timing, we just need to use the
# transition equation to move the initialization
# forward, as follows:
kf.initialize_known(A @ m0[:, 0], A @ P0 @ A.T + Q)
# To perform Kalman filtering and smoothing, use:
res = kf.smooth([])
# Then, for example, to print the smoothed estimates of
# the state vector:
print(res.states.smoothed)
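As a quick sanity check (not part of the original answer), the filtered states from Statsmodels should, given the adjusted initialization, agree with the hand-rolled filter from the question; this sketch assumes Y and kf_m from the question are still in scope:
# Filtered state estimates are stored as a (k_states x nobs) array,
# matching the 4x100 layout of kf_m from the question.
print(np.allclose(res.filtered_state, kf_m))  # expected to print True
And since the question also asks about putting the filter in a subclass, here is a rough sketch of how the same fixed-matrix model might be wrapped in an MLEModel subclass (the class name Tracking2D is ours, not from the original answer; it reuses A, Q, H, R, m0 and P0 from above):
class Tracking2D(sm.tsa.statespace.MLEModel):
    """Constant-velocity 2D tracking model with fixed system matrices (a sketch)."""
    def __init__(self, endog):
        super().__init__(endog, k_states=4)
        self._state_names = ['x1', 'dx1/dt', 'x2', 'dx2/dt']
        self['design'] = H
        self['obs_cov'] = R
        self['transition'] = A
        self['selection'] = np.eye(4)
        self['state_cov'] = Q
        # Same timing adjustment for the initialization as above
        self.initialize_known(A @ m0[:, 0], A @ P0 @ A.T + Q)

mod = Tracking2D(pd.DataFrame(Y.T))
res_sub = mod.smooth([])
print(res_sub.states.smoothed)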