Coding the Gaussian log-likelihood in R

I am trying to learn R by coding the Gaussian log-likelihood and maximizing it with optim(), but after hours of sweat I still can't get it to work. (This is self-study, not homework.)

I follow the convention used in many user-defined likelihood functions and write a function of the form loglik <- function(theta, y, x), where theta is a vector containing the coefficients beta and the standard deviation sigma, and y and x are the data.

My complete code with simulated data is shown below. If you run it, you can see that my estimates are way off compared to lm(). Can anyone give me an idea of where I am going wrong?

# random data
set.seed(111)
y = sample(1:100,100)
x1 = sample(1:100,100)*rnorm(1,0)
x2 = sample(x1)*rnorm(1,0)
x3 = sample(x2)*rnorm(1,0)
dat = data.frame(x1,x2,x3)

# define gaussian log-likelihood
logLik <- function(theta, Y, X){
  X           <- as.matrix(X) # convert data to matrix
  k           <- ncol(X) # get the number of columns (independent vars)
  beta        <- theta[1:k] # vector of weights initialized with starting values
  expected_y  <- X %*% beta  # X is dimension (n x k) and beta is dimension (k x 1)
  sigma2      <- theta[k+1] # should be pulled from the last of the starting values vector
  LL          <- sum(dnorm(Y, mean = expected_y, sd = sigma2, log = T)) # sum of the PDF over each observation
  return(-LL)
}

      

Here's the result:

> optim(logLik, par=starting_values, method="Nelder-Mead", Y=y, X=dat, hessian = T)$par
[1]   0.4832514  -0.2276684  -0.3860800  32.7168490 -38.9030319
> coefficients(lm(y~x1+x2+x3))
(Intercept)          x1          x2          x3 
58.17347451 -0.06587320  0.13001865 -0.03624233 

      



1 answer


The basics of your approach are sound, but some of the details are wrong. First, it makes sense to simulate the data from a Gaussian linear model; e.g.

set.seed(111)
X <- cbind(1, matrix(rnorm(100*3), 100, 3))
y <- X %*% rep(1, 4) + rnorm(100, 0, 2)

starting.values <- c(1, 1, 1, 1, 2) # actual parameters

# define gaussian log-likelihood
logLik <- function(theta, y, X){
  k           <- ncol(X) # get the number of columns (independent vars)
  beta        <- theta[1:k] # vector of weights initialized with starting values
  expected_y  <- X %*% beta  # X is dimension (n x k) and beta is dimension (k x 1)
  sigma       <- theta[k+1] # pulled from the last element of the starting values vector
  LL          <- sum(dnorm(y, mean = expected_y, sd = sigma, log = TRUE)) # sum of the PDF over each observation
  return(-LL)
}

      

By the way, the *norm() functions are parameterized in terms of the SD, not the variance.
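As a quick sanity check (a sketch, assuming the X and y simulated above; beta0 and sigma0 are illustrative values, not estimates), sum(dnorm(..., log = TRUE)) with sd = sigma0 reproduces the closed-form Gaussian log-likelihood written in terms of the variance sigma0^2:

# Sketch only: beta0 and sigma0 are illustrative values, not estimates
beta0  <- rep(1, 4)
sigma0 <- 2
n      <- length(y)
ll.sd  <- sum(dnorm(y, mean = X %*% beta0, sd = sigma0, log = TRUE))  # sd scale
ll.var <- -n/2 * log(2 * pi * sigma0^2) -
  sum((y - X %*% beta0)^2) / (2 * sigma0^2)                           # variance scale
all.equal(ll.sd, ll.var)  # TRUE: same log-likelihood, two parameterizations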

Then



> optim(logLik, par=starting.values, y=y, X=X, method="BFGS")$par
[1] 1.0471420 1.1411523 0.8167656 0.9840397 1.8910201
Warning message:
In dnorm(y, mean = expected_y, sd = sigma, log = TRUE) : NaNs produced

> summary(lm(y ~ X - 1))

Call:
lm(formula = y ~ X - 1)

Residuals:
    Min      1Q  Median      3Q     Max 
-4.5062 -1.3293  0.1371  1.2057  5.8116 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1   1.0471     0.1952   5.365 5.61e-07 ***
X2   1.1412     0.1818   6.275 1.00e-08 ***
X3   0.8168     0.1907   4.282 4.39e-05 ***
X4   0.9840     0.2122   4.638 1.11e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.93 on 96 degrees of freedom
Multiple R-squared:  0.5333,    Adjusted R-squared:  0.5138 
F-statistic: 27.42 on 4 and 96 DF,  p-value: 3.468e-15

      

Note that method="BFGS" gives a warning (the optimizer occasionally tries a negative value for sigma, for which dnorm() returns NaN) but still arrives at the correct answer; method="Nelder-Mead" is slightly less accurate. Also note that the residual standard error reported by lm() differs from the ML estimate of sigma.

I hope this helps.
