A hidden Markov model (HMM) is a model in which one observes a sequence of observations but does not know the sequence of states the model passed through to generate them. The analysis of a hidden Markov model aims to recover the sequence of hidden states from the observed data.
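As a rough illustration (all names and numbers below are made up for the sketch), the generative process an HMM assumes can be simulated in a few lines of base R: a hidden state path evolves according to a transition matrix, and each observation is drawn from a state-dependent distribution.

```r
set.seed(1)

# Toy 2-state HMM: transition matrix and Gaussian emission means (illustrative values)
sim_states <- c("A", "B")
sim_trans <- matrix(c(0.9, 0.1,
                      0.2, 0.8),
                    nrow = 2, byrow = TRUE,
                    dimnames = list(sim_states, sim_states))
sim_mean <- c(A = 0, B = 5)

# Simulate the hidden state path and the observations it generates
n_sim <- 100
sim_hidden <- character(n_sim)
sim_hidden[1] <- "A"
for (t in 2:n_sim)
  sim_hidden[t] <- sample(sim_states, 1, prob = sim_trans[sim_hidden[t - 1], ])
sim_obs <- rnorm(n_sim, mean = sim_mean[sim_hidden], sd = 1)

# In practice only sim_obs would be recorded; sim_hidden is what decoding tries to recover
```

Decoding means recovering `sim_hidden` from `sim_obs` alone, which is what the Viterbi algorithm below does.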
I have data with observations and hidden states (the observations take continuous values), where the hidden states were labelled by an expert. I would like to train an HMM that can recover the corresponding hidden states from a (previously unseen) sequence of observations.
Is there an R package that can do this? From my study of the existing packages (depmixS4, HMM, seqHMM — the last one for categorical data only), they only let you specify the number of hidden states.
Edit:
Example:
data.tagged.by.expert = data.frame(
  hidden.state = c("Wake", "REM", "REM", "NonREM1", "NonREM2", "REM", "REM", "Wake"),
  sensor1 = c(1, 1.2, 1.2, 1.3, 4, 2, 1.78, 0.65),
  sensor2 = c(7.2, 5.3, 5.1, 1.2, 2.3, 7.5, 7.8, 2.1),
  sensor3 = c(0.01, 0.02, 0.08, 0.8, 0.03, 0.01, 0.15, 0.45)
)
data.newly.measured = data.frame(
  sensor1 = c(2,3,4,5,2,1,2,4,5,8,4,6,1,2,5,3,2,1,4),
  sensor2 = c(2.1,2.3,2.2,4.2,4.2,2.2,2.2,5.3,2.4,1.0,2.5,2.4,1.2,8.4,5.2,5.5,5.2,4.3,7.8),
  sensor3 = c(0.23,0.25,0.23,0.54,0.36,0.85,0.01,0.52,0.09,0.12,0.85,0.45,0.26,0.08,0.01,0.55,0.67,0.82,0.35)
)
I would like to create an HMM with discrete time t, where the random variable x(t) represents the hidden state at time t, x(t) ∈ {"Wake", "REM", "NonREM1", "NonREM2"}, and three continuous random variables sensor1(t), sensor2(t), sensor3(t) represent the observations at time t.
model.hmm = learn.model(data.tagged.by.expert)
Then I would like to use the trained model to estimate the hidden states responsible for the newly measured observations:
hidden.states = estimate.hidden.states(model.hmm, data.newly.measured)
To be able to run the learning method of the naive Bayes classifier, we need a longer data set:
states = c("NonREM1", "NonREM2", "NonREM3", "REM", "Wake")
artificial.hypnogram = rep(c(5,4,1,2,3,4,5), times = c(40,150,200,300,50,90,30))
data.tagged.by.expert = data.frame(
  hidden.state = states[artificial.hypnogram],
  sensor1 = log(artificial.hypnogram) + runif(n = length(artificial.hypnogram), min = 0.2, max = 0.5),
  sensor2 = 10*artificial.hypnogram + sample(c(-8:8), size = length(artificial.hypnogram), replace = T),
  sensor3 = sample(1:100, size = length(artificial.hypnogram), replace = T)
)
hidden.hypnogram = rep(c(5,4,1,2,4,5), times = c(10,10,15,10,10,3))
data.newly.measured = data.frame(
  sensor1 = log(hidden.hypnogram) + runif(n = length(hidden.hypnogram), min = 0.2, max = 0.5),
  sensor2 = 10*hidden.hypnogram + sample(c(-8:8), size = length(hidden.hypnogram), replace = T),
  sensor3 = sample(1:100, size = length(hidden.hypnogram), replace = T)
)
In this solution, we use the Viterbi algorithm combined with a naive Bayes classifier.
At each discrete time t, the hidden Markov model consists of:
an unobserved state (represented here by hidden.state) taking a finite number of values,
states = c("NonREM1", "NonREM2", "NonREM3", "REM", "Wake")
a set of observed variables (sensor1, sensor2, sensor3 in this example).
The process enters a new state according to a transition probability distribution (the transition matrix). This can easily be estimated from data.tagged.by.expert, for example using
library(markovchain)
# note: despite its name, emit_p stores the estimated *transition* matrix
emit_p <- markovchainFit(data.tagged.by.expert$hidden.state)$estimate
After each transition, an observation (sensor_i) is generated according to a conditional probability distribution (the emission matrix) that depends only on the current value H of hidden.state. We will replace the emission matrix with a naive Bayes classifier.
library(caret)
library(klaR)
library(e1071)
model = train(hidden.state ~ .,
              data = data.tagged.by.expert,
              method = 'nb',
              trControl = trainControl(method = 'cv', number = 10))
To solve the problem, we use the Viterbi algorithm with an initial probability of 1 for the "Wake" state and 0 otherwise (we expect the patient to be awake at the beginning of the experiment).
# we expect the patient to be awake in the beginning
start_p = c(NonREM1 = 0,NonREM2 = 0,NonREM3 = 0, REM = 0, Wake = 1)
# Naive Bayes model
model_nb = model$finalModel
# the observations
observations = data.newly.measured
nObs <- nrow(observations) # number of observations
nStates <- length(states) # number of states
# T1, T2 initialization
T1 <- matrix(0, nrow = nStates, ncol = nObs) #define two 2-dimensional tables
row.names(T1) <- states
T2 <- T1
Byj <- predict(model_nb, newdata = observations[1,])$posterior
# init first column of T1
for (s in states)
  T1[s, 1] = start_p[s] * Byj[1, s]
# fill T1 and T2 tables
for (j in 2:nObs) {
  Byj <- predict(model_nb, newdata = observations[j, ])$posterior
  for (s in states) {
    res <- (T1[, j - 1] * emit_p[, s]) * Byj[1, s]
    T2[s, j] <- states[which.max(res)]
    T1[s, j] <- max(res)
  }
}
# backtrack best path
result <- rep("", times = nObs)
result[nObs] <- names(which.max(T1[,nObs]))
for (j in nObs:2) {
  result[j - 1] <- T2[result[j], j]
}
# show the result
result
# show the original artificial data
states[hidden.hypnogram]
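The table-filling and backtracking steps above follow the standard Viterbi recursion. As a sanity check, here is a self-contained toy version with two states and a fixed discrete emission matrix in place of the naive Bayes posterior (all probabilities are made-up illustration values):

```r
toy_states <- c("Rain", "Sun")
toy_obs <- c("walk", "shop", "clean")
toy_start <- c(Rain = 0.6, Sun = 0.4)
toy_trans <- matrix(c(0.7, 0.3,
                      0.4, 0.6),
                    nrow = 2, byrow = TRUE,
                    dimnames = list(toy_states, toy_states))
toy_emit <- matrix(c(0.1, 0.4, 0.5,
                     0.6, 0.3, 0.1),
                   nrow = 2, byrow = TRUE,
                   dimnames = list(toy_states, c("walk", "shop", "clean")))

nT <- length(toy_obs)
V1 <- matrix(0, nrow = 2, ncol = nT, dimnames = list(toy_states, NULL))
V2 <- V1
# initialise with start probabilities times emission probabilities
V1[, 1] <- toy_start * toy_emit[, toy_obs[1]]
# fill the probability table V1 and the backpointer table V2
for (j in 2:nT) {
  for (s in toy_states) {
    res <- V1[, j - 1] * toy_trans[, s] * toy_emit[s, toy_obs[j]]
    V2[s, j] <- toy_states[which.max(res)]
    V1[s, j] <- max(res)
  }
}
# backtrack the most likely path
toy_path <- rep("", nT)
toy_path[nT] <- names(which.max(V1[, nT]))
for (j in nT:2)
  toy_path[j - 1] <- V2[toy_path[j], j]
toy_path  # "Sun" "Rain" "Rain"
```

The only difference from the sleep-staging code is that the per-observation naive Bayes posterior Byj plays the role of the fixed column toy_emit[s, toy_obs[j]] here.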
To learn more about the problem, see Vomlel Jiří, Kratochvíl Václav: Dynamic Bayesian Networks for the Classification of Sleep Stages, Proceedings of the 11th Workshop on Uncertainty Processing (WUPES'18), pp. 205-215, eds. Kratochvíl Václav, Vejnarová Jiřina, (Třeboň, CZ, 2018/06/06) [2018].
This can now be done with the hmmTMB R package. (Disclaimer: I am the hmmTMB developer.) This feature of the package is described in the vignette "Advanced features of hmmTMB" (see Section 2, "Using a trained model on new data").
Below, I describe an example using the quakes dataset.
The first step is to fit/train the model on the training data to estimate the model parameters (transition probabilities, observation parameters). Here, we will use a 2-state Gaussian HMM to analyse a time series of earthquake magnitudes (the data frame quakes loaded with the datasets package).
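For context, quakes ships with base R via the datasets package: 1000 seismic events near Fiji, of which only the mag (magnitude) column is modelled here.

```r
# quakes is loaded automatically via the datasets package
nrow(quakes)        # 1000 events
names(quakes)       # "lat" "long" "depth" "mag" "stations"
range(quakes$mag)   # magnitudes between 4.0 and 6.4
```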
We start by splitting the data into training data and new data, and create an HMM from the training data. In hmmTMB, this requires creating one object for the hidden state model and one object for the observation model, and then combining them into an HMM object.
library(hmmTMB)
# Split time series into training data and "new" data
quakes_train <- quakes[1:800,]
quakes_new <- quakes[801:1000,]
# Create hidden state model
hid_train <- MarkovChain$new(data = quakes_train,
                             n_states = 2)
# Create observation model
dists <- list(mag = "norm")
par0 <- list(mag = list(mean = c(4.2, 5),
                        sd = c(0.5, 0.5)))
obs_train <- Observation$new(data = quakes_train,
                             n_states = 2,
                             dists = dists,
                             par = par0)
# Create hidden Markov model
hmm_train <- HMM$new(hid = hid_train, obs = obs_train)
# Fit HMM on training data
hmm_train$fit(silent = TRUE)
We then create a separate model object, passing the new data rather than the training data. Crucially, we use the argument init = hmm_train when creating the HMM object, to copy the model parameters fitted to the training data. These are required to estimate the hidden state sequence under the fitted model.
# Create model for new data
hid_new <- MarkovChain$new(data = quakes_new,
                           n_states = 2)
obs_new <- Observation$new(data = quakes_new,
                           n_states = 2,
                           dists = dists,
                           par = par0)
# Copy parameters from hmm_train when creating HMM object
hmm_new <- HMM$new(hid = hid_new,
                   obs = obs_new,
                   init = hmm_train)
Finally, we use the method $viterbi() to find the most likely state sequence for the new data set. Because the parameters in hmm_new were initialised to the values estimated by the fitted model, this is what you were asking for. We could also use $state_probs() to get the state probabilities, or $plot_ts() to plot the new data coloured by the most likely state sequence (plot included below).
# Get most likely state sequence
states_new <- hmm_new$viterbi()
# Plot new data coloured by most likely state sequence
hmm_new$plot_ts("mag") +
  geom_point() +
  labs(title = "Decoded states for new data")