Leave "One Out Cross Validation" "manually" using the "for" loop in R

I am trying to use a for loop to reproduce the behavior of the cv.glm() function from the boot library:

library(ISLR)
attach(Weekly) # the Weekly data set has 1089 rows

cutoff <- 0.5 # probability threshold for predicting "Up"

wrong <- 0 # we'll count the number of wrong predictions here
for (i in 1:1089) {
  # fit on all observations except the i-th
  fit <- glm(Direction ~ Lag1 + Lag2, data = Weekly[-i, ], family = binomial)
  # predicted probability of "Up" for the held-out observation
  prob <- predict(fit, Weekly[i, ], type = "response")
  if (prob < cutoff) pred <- "Down" else pred <- "Up"
  if (pred != Direction[i]) wrong <- wrong + 1
}

wrong / 1089 # approximately 45% LOOCV error rate

# Let's try the same with cv.glm()
library(boot)
cv.glm(Weekly, glm(Direction ~ Lag1 + Lag2, data = Weekly, family = binomial))$delta # approximately 25%, why?
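
For reference, my reading of ?cv.glm is that the default cost is the average squared error between the observed response and the fitted probabilities. Assuming that reading is right, this variant of my loop should reproduce delta[1]:

# Sketch under that assumption: LOOCV average of (y - p)^2, with the factor
# Direction coded as 0 = "Down", 1 = "Up" (the coding the binomial glm uses)
sq_err <- 0
for (i in 1:1089) {
  fit <- glm(Direction ~ Lag1 + Lag2, data = Weekly[-i, ], family = binomial)
  prob <- predict(fit, Weekly[i, ], type = "response")
  y <- as.numeric(Direction[i] == "Up")
  sq_err <- sq_err + (y - prob)^2
}
sq_err / 1089 # if my reading is right, this should land near delta[1] (~0.25)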

      

As you can see, my problem is that I don't get similar results in the two cases. My best guess is that I am misunderstanding the meaning of delta in the case of logistic regression.
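
If that is the case, passing an explicit misclassification cost should make cv.glm() agree with my loop. The cost function below is the one shown in the Examples section of ?cv.glm (r is the observed 0/1 response, pi the fitted probability); fit.all is just a name I made up:

# Sketch: cv.glm() with a 0/1 misclassification cost at the 0.5 cutoff
cost <- function(r, pi = 0) mean(abs(r - pi) > 0.5)
fit.all <- glm(Direction ~ Lag1 + Lag2, data = Weekly, family = binomial)
cv.glm(Weekly, fit.all, cost = cost)$delta # I would expect roughly 0.45 here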

Thanks in advance.

r logistic-regression glm cross-validation

