Why is gradient of first iteration step singular in nls with biv.norm

I am trying to fit a non-linear regression model in which the mean function is a bivariate normal CDF. The parameter to be estimated is the correlation rho. The problem: "gradient of first iteration step is singular". Why? Here is a small example with simulated data.

library(mnormt)  # pmnorm() comes from the mnormt package

# given values for independent variables
x1 <- c(rep(0.1,5), rep(0.2,5), rep(0.3,5), rep(0.4,5), rep(0.5,5))
x2 <- c(rep(c(0.1,0.2,0.3,0.4,0.5),5))


## 1 generate values for dependent variable (incl. error term)
#  from bivariate normal distribution with assumed correlation rho=0.5

fun  <- function(b) pmnorm(x = c(qnorm(x1[b]), qnorm(x2[b])), 
                           mean = c(0, 0), 
                           varcov = matrix(c(1, 0.5, 0.5, 1), nrow = 2))

set.seed(123)
y <- sapply(1:25,  function(b) fun(b)) + runif(25)/1000  


# put it in data frame
dat <- data.frame(y=y, x1=x1, x2=x2 )




# 2 : calculate non-linear regression from the generated data
# use rho=0.51 as starting value

fun <- function(x1, x2,rho) pmnorm(x = c(qnorm(x1), qnorm(x2)), 
                                       mean = c(0, 0), 
                                       varcov = matrix(c(1, rho, rho, 1), nrow = 2))

nls(formula= y ~ fun(x1, x2, rho), data= dat,  start=list(rho=0.51),  
     lower=0, upper=1, trace=TRUE)  

This produces an error message:

Error in nls(formula = y ~ fun(x1, x2, rho), data = dat, start = list(rho = 0.51),  : 
  singular gradient
In addition: Warning message:
In nls(formula = y ~ fun(x1, x2, rho), data = dat, start = list(rho = 0.51),  :
  upper and lower bounds ignored unless algorithm = "port"

What I don't understand:

  1. I have only one parameter (rho), so if the gradient matrix is singular, its single entry must be 0. Why should the gradient be 0?
  2. The starting value can hardly be the problem, since I know the true rho = 0.5. So a starting value of 0.51 should be fine, right?
  3. The data cannot be perfectly linearly dependent, because I added an error term to y.

Many thanks in advance for your help.
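A hedged side note on the singular gradient itself: nls evaluates the right-hand side of the formula on whole columns of dat at once, while fun above hands pmnorm a single point, so the shape of its return value when given vectors is worth checking. A sketch of a row-wise wrapper (fun_vec is a hypothetical helper name, not from the original post), combined with algorithm = "port", which is the only nls algorithm that honours lower/upper:

```r
library(mnormt)

# regenerate the simulated data from above
x1 <- rep(seq(0.1, 0.5, by = 0.1), each = 5)
x2 <- rep(seq(0.1, 0.5, by = 0.1), times = 5)

# row-wise wrapper: evaluates the bivariate normal CDF per observation,
# so nls receives a residual vector of the right length
fun_vec <- function(x1, x2, rho) {
  sapply(seq_along(x1), function(i) {
    pmnorm(x = c(qnorm(x1[i]), qnorm(x2[i])),
           mean = c(0, 0),
           varcov = matrix(c(1, rho, rho, 1), nrow = 2))
  })
}

set.seed(123)
y <- fun_vec(x1, x2, 0.5) + runif(25) / 1000
dat <- data.frame(y = y, x1 = x1, x2 = x2)

# algorithm = "port" is required for lower/upper to take effect
fit <- nls(y ~ fun_vec(x1, x2, rho), data = dat,
           start = list(rho = 0.51), algorithm = "port",
           lower = 0, upper = 1)
coef(fit)
```

Whether this removes the singular-gradient error in your session is an assumption on my part, but vectorizing the model function is a common fix when nls fails on the very first step.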

Perhaps "optim" does a better job than "nls":

library(mnormt)

# given values for independent variables
x1 <- c(rep(0.1,5), rep(0.2,5), rep(0.3,5), rep(0.4,5), rep(0.5,5))
x2 <- c(rep(c(0.1,0.2,0.3,0.4,0.5),5))


## 1 generate values for dependent variable (incl. error term)
#  from bivariate normal distribution with assumed correlation rho=0.5

fun  <- function(b) pmnorm(x = c(qnorm(x1[b]), qnorm(x2[b])), 
                           mean = c(0, 0), 
                           varcov = matrix(c(1, 0.5, 0.5, 1), nrow = 2))

set.seed(123)
y <- sapply(1:25,  function(b) fun(b)) + runif(25)/1000  


# put it in data frame
dat <- data.frame(y=y, x1=x1, x2=x2 )




# 2 : calculate non-linear regression from the generated data
# use rho=0.51 as starting value

fun <- function(x1, x2,rho) pmnorm(x = c(qnorm(x1), qnorm(x2)), 
                                   mean = c(0, 0), 
                                   varcov = matrix(c(1, rho, rho, 1), nrow = 2))

# residual sum of squares as a function of rho
f <- function(rho) {
  sum(sapply(seq_len(nrow(dat)), function(i) {
    (fun(dat$x1[i], dat$x2[i], rho) - dat$y[i])^2
  }))
}

optim(0.51, f, method="BFGS")

The result is not bad:

> optim(0.51, f, method="BFGS")
$par
[1] 0.5043406

$value
[1] 3.479377e-06

$counts
function gradient 
      14        4 

$convergence
[1] 0

$message
NULL

The estimate even fits slightly better than 0.5 (lower residual sum of squares):

> f(0.5043406)
[1] 3.479377e-06
> f(0.5)
[1] 1.103484e-05
> 

Let's check another starting value:

> optim(0.8, f, method="BFGS")
$par
[1] 0.5043407

$value
[1] 3.479377e-06

$counts
function gradient 
      28        6 

$convergence
[1] 0

$message
NULL
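One caveat: plain BFGS is unconstrained, so the bounds 0 and 1 from the original nls call are not enforced above. If box constraints on rho are wanted, optim supports them via method = "L-BFGS-B". A self-contained sketch (bvn is a hypothetical helper name for the per-observation CDF):

```r
library(mnormt)

# data as generated in the question
x1 <- rep(seq(0.1, 0.5, by = 0.1), each = 5)
x2 <- rep(seq(0.1, 0.5, by = 0.1), times = 5)

# bivariate normal CDF at one (x1, x2) pair with correlation rho
bvn <- function(a, b, rho) {
  pmnorm(c(qnorm(a), qnorm(b)), mean = c(0, 0),
         varcov = matrix(c(1, rho, rho, 1), nrow = 2))
}

set.seed(123)
y <- mapply(bvn, x1, x2, 0.5) + runif(25) / 1000

# residual sum of squares in rho
f <- function(rho) sum((mapply(bvn, x1, x2, rho) - y)^2)

# L-BFGS-B honours lower/upper, unlike plain BFGS
res <- optim(0.51, f, method = "L-BFGS-B", lower = 0, upper = 1)
res$par
```

With the bounds active the optimizer cannot wander outside [0, 1], which matters for starting values near the edges of the admissible correlation range.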