Problems building a discrete HMM in PyMC3

I am playing around with a toy discrete HMM model in PyMC3, and I have run into some problems.

My initial code looks like this:

import numpy as np
import pymc3
import theano.tensor as tt

# Known transition and emission matrices
with pymc3.Model() as hmm1:
    T = tt.as_tensor_variable(Tr)
    E = tt.as_tensor_variable(Er)
    # State models: uniform prior over the initial state
    p0 = np.ones(num_states) / num_states
    # No shape, so each state is a scalar tensor
    states = [pymc3.Categorical('s0', p=p0)]
    emissions = [pymc3.Categorical('z0', p=E[:, states[0]], observed=Zr[:, 0])]
    for i in range(1, num_times):
        states.append(pymc3.Categorical('s{0}'.format(i), p=T[:, states[i - 1]]))
        emissions.append(pymc3.Categorical('z{0}'.format(i), p=E[:, states[i]], observed=Zr[:, i]))

Here Tr and Er are the true transition and emission matrices.
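For reference, here is a minimal sketch (my own setup, not from the question; the sizes and the number of observed rows in Zr are assumptions) of how toy inputs Tr, Er, and Zr with the shapes the model above expects could be generated:

```python
import numpy as np

num_states, num_emissions, num_times, num_obs = 3, 4, 20, 5
rng = np.random.default_rng(0)

# Column j of Tr is p(next state | current state j), and column j of Er is
# p(emission | state j), matching the column indexing T[:, s] and E[:, s] above
Tr = rng.dirichlet(np.ones(num_states), size=num_states).T
Er = rng.dirichlet(np.ones(num_emissions), size=num_states).T

# Simulate one hidden chain
s = [rng.choice(num_states)]
for _ in range(1, num_times):
    s.append(rng.choice(num_states, p=Tr[:, s[-1]]))

# num_obs independent emissions per time step, so Zr[:, i] is the vector of
# observations passed as observed= at time i
Zr = np.array([[rng.choice(num_emissions, p=Er[:, si]) for si in s]
               for _ in range(num_obs)])
```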

My questions are:

  1. The model does not seem to explore the full range of values for the states; it stays stuck at a single value for each state (see the notebook).

  2. I have not found a way to define states and emissions in a more PyMC3-idiomatic way, e.g. using shape=....

  3. Also, when I extend the model to account for unknown transition or emission matrices, I run into indexing problems and am forced to use theano.tensor.clip, as in the following code:

    # Unknown transitions and emissions
    with pymc3.Model() as hmm3:
        # Transition "matrix": column j holds p(next state | current state j)
        a_t = np.ones((num_states, num_states))
        T = pymc3.Dirichlet('T', a=a_t, shape=(num_states, num_states))
        # Emission "matrix": column j holds p(emission | state j)
        a_e = np.ones((num_emissions, num_states))
        E = pymc3.Dirichlet('E', a=a_e, shape=(num_emissions, num_states))
        # State models: uniform prior over the initial state
        p0 = np.ones(num_states) / num_states
        # No shape, so each state is a scalar tensor
        states = [pymc3.Categorical('s0', p=p0)]
        # clip keeps the sampled state index inside [0, num_states-1] so the
        # column lookups below cannot raise an out-of-bounds index error
        clp_state = tt.clip(states[0], 0, num_states - 1)
        emissions = [pymc3.Categorical('z0', p=E[:, clp_state], observed=Zr[0])]
        for i in range(1, num_times):
            clp_prev_state = tt.clip(states[i - 1], 0, num_states - 1)
            states.append(pymc3.Categorical('s{0}'.format(i), p=T[:, clp_prev_state]))
            clp_state = tt.clip(states[i], 0, num_states - 1)
            emissions.append(pymc3.Categorical('z{0}'.format(i), p=E[:, clp_state], observed=Zr[i]))
    

See the notebook with a complete code.
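The indexing problem the clip works around can be reproduced in plain NumPy (a sketch of my own, not from the question): when a sampler proposes a state value outside the valid range, the column lookup fails unless the index is clipped first.

```python
import numpy as np

num_states = 3
E = np.arange(12).reshape(4, 3)  # stand-in emission matrix with 3 state columns

state = 5  # an out-of-range proposal; E[:, state] would raise IndexError
# Clipping, as theano.tensor.clip does in the model, keeps the lookup valid
clp_state = np.clip(state, 0, num_states - 1)
col = E[:, clp_state]
```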

A more PyMC3-idiomatic (or rather, more Theano-idiomatic) way to define the hidden-markov state model is the following:

class HMMStatesN(pm.Categorical):
    """
    Hidden Markov Model States
    Parameters
    ----------
    P : tensor
        transition probability
        shape = (N_states,N_states)

    PA : tensor
         equilibrium probabilities
         shape = (N_states)

    """

    def __init__(self, PA=None, P=None,
                 *args, **kwargs):
        # Deliberately skip pm.Categorical.__init__ (which requires p) and
        # initialize the parent pm.Discrete instead
        super(pm.Categorical, self).__init__(*args, **kwargs)
        self.P = P
        self.PA = PA
        self.k = N_states  # number of states, taken from the enclosing scope
        self.mode = tt.cast(0, dtype='int64')

    def logp(self, x):
        P = self.P
        PA = self.PA

        # Gather, for every step, the transition distribution out of the
        # previous state: PS[t] = P[x[t], :]
        PS = P[x[:-1]]

        x_i = x[1:]
        # Transition terms p(x[t] | x[t-1]), plus the initial-state term
        ou_like = pm.Categorical.dist(PS).logp(x_i)
        return pm.Categorical.dist(PA).logp(x[0]) + tt.sum(ou_like)
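The quantity HMMStatesN.logp computes can be checked in plain NumPy (a sketch of my own, with made-up numbers). Note that P here is indexed by rows, P[i, j] = p(next = j | current = i), which is the transpose of the column convention used in the question's models:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])       # row-stochastic transition matrix
PA = np.array([0.5, 0.5])        # initial/equilibrium distribution
x = np.array([0, 0, 1, 1, 0])    # a state sequence

# Vectorized, as in logp: gather one transition row per step, then pick
# the probability of the state actually visited next
PS = P[x[:-1]]
vectorized = np.log(PA[x[0]]) + np.log(PS[np.arange(len(x) - 1), x[1:]]).sum()

# Step-by-step reference: log p(x) = log PA[x0] + sum_t log P[x[t-1], x[t]]
stepwise = np.log(PA[x[0]]) + sum(np.log(P[a, b]) for a, b in zip(x[:-1], x[1:]))
```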

I also created a similar class for Gaussian emissions:

class HMMGaussianEmmisionsN(pm.Continuous):
    """
    Hidden Markov Model Gaussian Emmisions
    Parameters
    ----------
    A : tensor
        prior for Gaussian emmision mu
        shape = (2,N_states)

    S : tensor
        prior for Gaussian emmision width
        shape = (2,N_states)

    states : tensor
         equilibrium probabilities
         shape = (N_states)

    """

    def __init__(self, A=None, S=None, states=None,
                 *args, **kwargs):
        super(HMMGaussianEmmisionsN, self).__init__(*args, **kwargs)
        self.A = A
        self.S = S
        self.states = states
        self.mean = 0.

    def logp(self, x):
        A = self.A
        S = self.S
        states = self.states

        # Look up each observation's Normal parameters from its hidden state
        AS = A[states]
        SS = S[states]

        ou_like = pm.Normal.dist(mu=AS, sd=SS).logp(x)
        return tt.sum(ou_like)
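The gather in that logp can also be illustrated in NumPy (a sketch of my own, with made-up parameters): fancy indexing pulls each observation's Normal parameters from its hidden state, and the total log-likelihood is the sum of per-point Normal log-densities.

```python
import numpy as np

A = np.array([0.0, 5.0])             # per-state emission means
S = np.array([1.0, 2.0])             # per-state emission widths
states = np.array([0, 1, 1, 0])      # a fixed hidden-state sequence
x = np.array([0.1, 4.8, 5.5, -0.2])  # observations

# Fancy indexing gathers each observation's parameters from its state
AS, SS = A[states], S[states]
# Sum of Normal log-densities, written out explicitly
logp = np.sum(-0.5 * np.log(2 * np.pi * SS**2) - (x - AS)**2 / (2 * SS**2))
```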

It can be invoked like this:

import numpy as np
import pymc3 as pm
import theano.tensor as tt
import theano.tensor.slinalg as sla
from scipy import optimize

with pm.Model() as model:
    # N_states state model

    P = pm.Dirichlet('P', a=np.ones((N_states, N_states)), shape=(N_states, N_states))
    A = pm.InverseGamma('A', alpha=alphaA, beta=betaA, shape=(N_states))
    S = pm.InverseGamma('S', alpha=alphaS, beta=betaS, shape=(N_states))

    # Equilibrium distribution PA of the chain: solving (I - P + 1)^T PA = 1
    # enforces both PA.P = PA and sum(PA) = 1
    AA = tt.eye(N_states) - P + tt.ones(shape=(N_states, N_states))
    PA = pm.Deterministic('PA', sla.solve(AA.T, tt.ones(shape=(N_states))))

    states = HMMStatesN('states', PA=PA, P=P, shape=(len(measurement)))

    emmision = HMMGaussianEmmisionsN('emmision', A=A, S=S, states=states, observed=measurement)

    start = pm.find_MAP(fmin=optimize.fmin_powell)
    step1 = pm.Metropolis(vars=[P, A, S, PA, emmision])
    step2 = pm.CategoricalGibbsMetropolis(vars=[states])
    trace = pm.sample(10000, start=start, step=[step1, step2])
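The linear-algebra trick used above to obtain the equilibrium distribution PA can be verified in NumPy (a sketch of my own, with a random transition matrix): solving (I - P + 1)^T PA = 1 yields the stationary distribution of the chain.

```python
import numpy as np

N_states = 3
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(N_states), size=N_states)  # row-stochastic

# Same system as in the model: (I - P + 1)^T . PA = 1
AA = np.eye(N_states) - P + np.ones((N_states, N_states))
PA = np.linalg.solve(AA.T, np.ones(N_states))
```

Any solution satisfies PA - PA.P + sum(PA).1 = 1, so if PA is stationary (PA.P = PA) it must also sum to one; for an ergodic chain the system has exactly that solution.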

I think your model can be coded in a similar way. It is important to use the CategoricalGibbsMetropolis stepper (see this for more detail) for the HMM states. See a more detailed code example here.