R: How to translate the lasso lambda value from the cv.glmnet() function into the selectiveInference package?
I am using the {selectiveInference}
package to perform post-selection inference with the lasso ("l1 norm"). This package assumes that lambda is fixed, that is, that we determine it beforehand. However, I need to use cross-validation.
Taylor & Tibshirani (2018) use simulations to show that using cross-validation to determine lambda yields valid inferential statistics, using the selectiveInference::fixedLassoInf()
method. (Another paper proposes a way to handle a lambda determined by cross-validation, but it does not appear to be in the package yet, and the simulations in the 2018 paper are good enough for me.)
I see in the documentation that {glmnet}
uses a 1/n lasso parameterization, while {selectiveInference}
uses the common parameterization. The documentation shows how to go from a standard lambda to something {glmnet}
can use.
I need to do the opposite: start from what cv.glmnet()
gives me and turn it into the lambda, on the common scale, that fixedLassoInf()
wants.
Specifically, the {glmnet}
documentation reads:
Note also that for "gaussian", glmnet standardizes y to have unit variance (using 1/n rather than 1/(n-1) formula) before computing its lambda sequence (and then unstandardizes the resulting coefficients); if you wish to reproduce/compare results with other software, best to supply a standardized y
While {selectiveInference}
says:
Estimated lasso coefficients (e.g., from glmnet). This is of length p (so the intercept is not included as the first component). Be careful! This function uses the "standard" lasso objective... In contrast, glmnet multiplies the first term by a factor of 1/n. So after running glmnet, to extract the beta corresponding to a value lambda, you need to use beta = coef(obj,s=lambda/n)[-1]...
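Written side by side, the two objectives the documentation is describing are (the lambda subscripts are my labels, not notation from either package):

glmnet solves

$$\hat\beta = \arg\min_\beta \; \frac{1}{2n}\lVert y - X\beta \rVert_2^2 + \lambda_{\text{glmnet}}\lVert\beta\rVert_1$$

while selectiveInference assumes the "standard" objective

$$\hat\beta = \arg\min_\beta \; \frac{1}{2}\lVert y - X\beta \rVert_2^2 + \lambda_{\text{SI}}\lVert\beta\rVert_1$$

Multiplying the first objective through by n leaves its minimizer unchanged, so the two problems have the same solution whenever $\lambda_{\text{SI}} = n \cdot \lambda_{\text{glmnet}}$, which is why the quote above extracts coefficients with s = lambda/n.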
See the code below for a reproducible example.
My question is specifically about how to adjust this line: si_lambda <- glmnet_lambda
. That is, what conversion do I apply to go from the lambda that cv.glmnet()
gives me (which I assign to glmnet_lambda
) to the lambda that {selectiveInference}
will use (which I call si_lambda
)?
My initial thought was that, since the documentation says to divide by n, I should multiply the lambda that cv.glmnet()
gives me by my sample size. That runs without warnings or errors, but it gives me a lambda of 188.5121, which feels wrong. Apologies if that is the answer and I am just being long-winded, but I want to make sure I am going from one piece of software to the other in the appropriate way.
library(glmnet)
library(selectiveInference)
library(tidyverse)

set.seed(1839)
n <- 1000        # sample size
B <- c(0, 1, 0)  # intercept 0, beta1 = 1, beta2 = 0
eps_sd <- 1      # sd of the error

# make data
X <- cbind(1, replicate(length(B) - 1, rnorm(n, 0, 1)))
y <- X %*% B + rnorm(n, 0, eps_sd)
dat <- as.data.frame(X[, -1])
dat <- as_tibble(cbind(dat, y))

# get lambda by way of cross-validation
glmnet_lambda <- cv.glmnet(
  x = as.matrix(select(dat, -y)),
  y = dat$y
) %>%
  getElement("lambda.1se")

# run glmnet with that lambda
m1 <- glmnet(
  x = as.matrix(select(dat, -y)),
  y = dat$y,
  lambda = glmnet_lambda
)

# get coefs from that model, dropping intercept, per the docs
m1_coefs <- coef(m1)[-1]

# what reparameterization do I do here?
si_lambda <- glmnet_lambda

# do post-selection inference with m1
# runs with a warning, so I assume it is parameterized incorrectly -- how to fix?
m2 <- fixedLassoInf(
  x = as.matrix(select(dat, -y)),
  y = dat$y,
  beta = m1_coefs,
  lambda = si_lambda
)
And the session info:
> sessionInfo()
R version 4.1.0 (2021-05-18)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur 11.4
Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] parallel stats graphics grDevices utils datasets methods base
other attached packages:
[1] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.6
[4] purrr_0.3.4 readr_1.4.0 tidyr_1.1.3
[7] tibble_3.1.2 ggplot2_3.3.3 tidyverse_1.3.1
[10] selectiveInference_1.2.5 MASS_7.3-54 adaptMCMC_1.4
[13] coda_0.19-4 survival_3.2-11 intervals_0.15.2
[16] glmnet_4.1-1 Matrix_1.3-3
You need to look at the example in the fixedLassoInf documentation; adapting it to your example gives the following code:
library(glmnet)
library(selectiveInference)
# Make dataset
set.seed(1839)
n <- 1000 # sample size
B <- c(0, 1, 0) # intercept 0, beta1 = 1, beta2 = 0
eps_sd <- 1 # sd of the error
X <- cbind(1, replicate(length(B) - 1, rnorm(n, 0, 1)))
y <- X %*% B + rnorm(n, 0, eps_sd)
# Cross-validation to find lambda
gfit = cv.glmnet(X[,-1], y) # we need to remove the intercept variable (glmnet will add another one)
lambda = gfit$lambda.min
# Obtain coefficients (properly scaling lambda and removing the intercept coefficient)
(beta = coef(gfit, x=X[,-1], y=y, s=lambda, exact=TRUE)[-1])
# [1] 0.99297607 -0.04300646
# Compute fixed lambda p-values and selection intervals
(out = fixedLassoInf(X[,-1], y, beta, lambda*n))
# Call:fixedLassoInf(x = X[, -1], y = y, beta = beta, lambda = lambda * n)
#
# Standard deviation of noise (specified or estimated) sigma = 1.012
#
# Testing results at lambda = 4.562, with alpha = 0.100
#
# Var Coef Z-score P-value LowConfPt UpConfPt LowTailArea UpTailArea
# 1 0.998 31.475 0.000 0.945 1.050 0.049 0.049
# 2 -0.048 -1.496 0.152 -0.100 0.032 0.050 0.049
#
# Note: coefficients shown are partial regression coefficients
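Applied to the code in the question, the only change needed is the scaling line: your initial instinct to multiply by n was right. A sketch, reusing the variable names from the question:

```r
# selectiveInference's "standard" lasso objective omits glmnet's 1/n factor,
# so the CV-selected lambda must be multiplied by the sample size n
si_lambda <- glmnet_lambda * n

# per the selectiveInference docs, beta must be the glmnet solution at
# s = si_lambda / n, which here is simply glmnet_lambda -- so m1_coefs
# from the question is already on the right scale
m2 <- fixedLassoInf(
  x = as.matrix(select(dat, -y)),
  y = dat$y,
  beta = m1_coefs,
  lambda = si_lambda
)
```

The lambda of roughly 188.5 that you got this way only "feels" large because it is on the unscaled objective; divided by n = 1000 it corresponds to an ordinary glmnet lambda of about 0.19. Note that the example above uses lambda.min while the question uses lambda.1se, which is why the reported lambdas differ.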