Bla*_*ack — python, machine-learning, hidden-markov-models, tensorflow, tensorflow2.0
I am working with a dataset of readings from IoT devices, and I have found that a hidden Markov model fits my use case well. I am therefore trying to adapt some code from a TensorFlow tutorial I found here. Unlike the count data shown in the tutorial, my dataset contains real-valued observations.

In particular, I believe the code below needs to change so that the HMM has normally distributed emissions. Unfortunately, I cannot find any example of modifying this model to use an emission distribution other than Poisson.

How should I change the code so that it emits normally distributed values?
# Define variable to represent the unknown log rates.
trainable_log_rates = tf.Variable(
    np.log(np.mean(observed_counts)) + tf.random.normal([num_states]),
    name='log_rates')

hmm = tfd.HiddenMarkovModel(
    initial_distribution=tfd.Categorical(logits=initial_state_logits),
    transition_distribution=tfd.Categorical(probs=transition_probs),
    observation_distribution=tfd.Poisson(log_rate=trainable_log_rates),
    num_steps=len(observed_counts))

rate_prior = tfd.LogNormal(5, 5)

def log_prob():
    return (tf.reduce_sum(rate_prior.log_prob(tf.math.exp(trainable_log_rates))) +
            hmm.log_prob(observed_counts))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

@tf.function(autograph=False)
def train_op():
    with tf.GradientTape() as tape:
        neg_log_prob = -log_prob()
    grads = tape.gradient(neg_log_prob, [trainable_log_rates])[0]
    optimizer.apply_gradients([(grads, trainable_log_rates)])
    return neg_log_prob, tf.math.exp(trainable_log_rates)
小智
@mCoding's answer is correct: in the TensorFlow example, you have a hidden Markov model with a uniform initial distribution (logits [0., 0., 0., 0.]), a heavily diagonal transition matrix, and Poisson-distributed emissions.

To adapt it to your normal case, you only need to swap those emission probabilities for normal ones. For example, as a starting point, suppose your emission distribution is a normal with parameters:
training_loc = tf.Variable([0.,0.,0.,0.])
training_scale = tf.Variable([1.,1.,1.,1.])
Then your observation_distribution will be:
observation_distribution = tfp.distributions.Normal(loc=training_loc, scale=training_scale)
Finally, you also have to change your priors over these parameters by defining prior_loc and prior_scale. You may want to consider uninformative or weakly informative priors, since I see you are fitting the model afterwards.

So your code should look something like:
# Define the emission probabilities.
training_loc = tf.Variable([0., 0., 0.])
training_scale = tf.Variable([1., 1., 1.])
# Change this to your desired distribution.
observation_distribution = tfp.distributions.Normal(loc=training_loc, scale=training_scale)

hmm = tfd.HiddenMarkovModel(
    initial_distribution=tfd.Categorical(logits=initial_state_logits),
    transition_distribution=tfd.Categorical(probs=transition_probs),
    observation_distribution=observation_distribution,
    num_steps=len(observed_counts))

# Prior distributions.
prior_loc = tfd.Normal(loc=0., scale=1.)
prior_scale = tfd.HalfNormal(scale=1.)

def log_prob():
    log_probability = hmm.log_prob(data)  # Use your training data right here.
    # Add the log probability of the priors on the mean and standard deviation
    # of the observation distribution.
    log_probability += tf.reduce_sum(prior_loc.log_prob(observation_distribution.loc))
    log_probability += tf.reduce_sum(prior_scale.log_prob(observation_distribution.scale))
    return log_probability

# Finally, train the model like in the example, minimizing the negative log
# probability.
losses = tfp.math.minimize(
    lambda: -log_prob(),
    optimizer=tf.optimizers.Adam(learning_rate=0.1),
    num_steps=100)
Now, if you inspect your parameters training_loc and training_scale, they should hold suitable values.
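Once the model is fitted, you would typically decode the most likely hidden-state sequence with hmm.posterior_mode(data). To make clear what that call computes, here is a minimal, framework-free sketch of the underlying log-space Viterbi decode for a Gaussian-emission HMM; it assumes NumPy only, and the parameter values (locs, scales, trans) are illustrative, not taken from the question:

```python
import numpy as np

def normal_logpdf(x, loc, scale):
    # Log-density of a univariate normal, evaluated elementwise.
    return -0.5 * np.log(2 * np.pi * scale**2) - (x - loc)**2 / (2 * scale**2)

def viterbi_gaussian(obs, init_logits, trans_probs, locs, scales):
    """Most likely hidden-state path for a Gaussian-emission HMM (log-space Viterbi)."""
    n_states = len(locs)
    log_init = init_logits - np.logaddexp.reduce(init_logits)  # normalize logits
    log_trans = np.log(trans_probs)
    T = len(obs)
    delta = np.empty((T, n_states))        # best log-score ending in each state
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = log_init + normal_logpdf(obs[0], locs, scales)
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # rows: from-state, cols: to-state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + normal_logpdf(obs[t], locs, scales)
    # Backtrack from the best final state.
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Two well-separated states: the decode should follow the observations.
locs = np.array([0.0, 5.0])
scales = np.array([1.0, 1.0])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
obs = np.array([0.1, -0.2, 0.3, 5.2, 4.8, 5.1])
print(viterbi_gaussian(obs, np.zeros(2), trans, locs, scales))
# -> [0 0 0 1 1 1]
```

This is the same computation tfd.HiddenMarkovModel.posterior_mode performs internally, so for real use you would call that method rather than reimplementing it.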
Views: 287