Why does prediction with nn.predict from the deepnet package in R return a constant value?

I am working with the CIFAR-10 dataset. This is how I prepare the data:

library(R.matlab)
A1 <- readMat("data_batch_1.mat")
A2 <- readMat("data_batch_2.mat")
A3 <- readMat("data_batch_3.mat")
A4 <- readMat("data_batch_4.mat")
A5 <- readMat("data_batch_5.mat")
meta <- readMat("batches.meta.mat")
test <- readMat("test_batch.mat")
A <- rbind(A1$data, A2$data, A3$data, A4$data, A5$data)
# Convert the RGB channels to grayscale with luminance-style weights
Gtrain <- 0.21*A[, 1:1024] + 0.71*A[, 1025:2048] + 0.07*A[, 2049:3072]
ytrain <- c(A1$labels, A2$labels, A3$labels, A4$labels, A5$labels)
Gtest <- 0.21*test$data[, 1:1024] + 0.71*test$data[, 1025:2048] + 0.07*test$data[, 2049:3072]
ytest <- test$labels
# Keep only two classes (7 and 9) for a binary classification task
x_train <- Gtrain[ytrain %in% c(7,9), ]
y_train <- ytrain[ytrain %in% c(7,9)] == 7
x_test <- Gtest[ytest %in% c(7,9), ]
y_test <- ytest[ytest %in% c(7,9)] == 7


I am training a deep neural network:

library(deepnet)
dnn <- dbn.dnn.train(x_train, y_train, hidden = rep(10, 2), numepochs = 3)


And I make a prediction:

prednn <- nn.predict(dnn, x_test)


which returns a vector filled with a single repeated value (0.4603409 in this case; with different parameters it is always something around 0.5). What's wrong?





1 answer


Based on this answer to a similar question (neuralnet prediction returns the same values for all predictions), consider the following approach:



The first thing to check when you get strange results from a neural network is normalization. Your data needs to be normalized; otherwise training will skew the network toward a single output, and it will give the same result all the time. This is a common symptom.

Looking at your dataset, it contains values >> 1, which means the NN treats them all as essentially the same. The reason is that the traditionally used activation functions are (almost) constant outside some range around 0.
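
To make that concrete, here is a small illustration (my own sketch, not part of the original answer) of how a logistic activation saturates: inputs near 0 are still distinguishable, but raw pixel values in the hundreds all map to practically the same output.

sigmoid <- function(x) 1 / (1 + exp(-x))   # logistic (sigmoid) activation function
sigmoid(c(-1, 0, 1))    # roughly 0.27, 0.50, 0.73 -- still distinguishable
sigmoid(c(100, 250))    # both are effectively 1   -- indistinguishable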

Always normalize your data before feeding it into a neural network.
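
A minimal sketch of applying that advice here, assuming the grayscale values are still on the original 0-255 pixel scale (the variable names are taken from the question):

x_train_norm <- x_train / 255   # rescale inputs to roughly [0, 1]
x_test_norm  <- x_test / 255
dnn <- dbn.dnn.train(x_train_norm, y_train, hidden = rep(10, 2), numepochs = 3)
prednn <- nn.predict(dnn, x_test_norm)

With inputs on that scale the activations are no longer saturated, so the predictions should start to vary across test examples.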









