Trying Kaggle Titanic with Keras: loss and val_loss stuck at -0.0000

Hi, I am getting strange results from the following code for the competition posted here (https://www.kaggle.com/c/titanic):

from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.advanced_activations import PReLU, LeakyReLU
from keras.layers.recurrent import SimpleRNN, SimpleDeepRNN
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM, GRU

import pandas as pd
import numpy as np 
from sklearn import preprocessing

np.random.seed(1919)

### Constants ###
data_folder = "/home/saj1919/Public/Data_Science_Mining_Study/submissions/titanic/data/"
out_folder = "/home/saj1919/Public/Data_Science_Mining_Study/submissions/titanic/output/"
batch_size = 4
nb_epoch = 10

### load train and test ###
train  = pd.read_csv(data_folder+'train.csv', index_col=0)
test  = pd.read_csv(data_folder+'test.csv', index_col=0)
print "Data Read complete"

Y = train.Survived
train.drop('Survived', axis=1, inplace=True)

columns = train.columns
test_ind = test.index

train['Age'] = train['Age'].fillna(train['Age'].mean())
test['Age'] = test['Age'].fillna(test['Age'].mean())
train['Fare'] = train['Fare'].fillna(train['Fare'].mean())
test['Fare'] = test['Fare'].fillna(test['Fare'].mean())

category_index = [0,1,2,4,5,6,8,9]
for i in category_index:
    print str(i)+" : "+columns[i]
    train[columns[i]] = train[columns[i]].fillna('missing')
    test[columns[i]] = test[columns[i]].fillna('missing')

train = np.array(train)
test = np.array(test)

### label encode the categorical variables ###
for i in category_index:
    print str(i)+" : "+str(columns[i])
    lbl = preprocessing.LabelEncoder()
    lbl.fit(list(train[:,i]) + list(test[:,i]))
    train[:,i] = lbl.transform(train[:,i])
    test[:,i] = lbl.transform(test[:,i])

### making data as numpy float ###
train = train.astype(np.float32)
test = test.astype(np.float32)
#Y = np.array(Y).astype(np.int32)

model = Sequential()
model.add(Dense(len(columns), 512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512, 1))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer="adam")
model.fit(train, Y, nb_epoch=nb_epoch, batch_size=batch_size, validation_split=0.20)
preds = model.predict(test,batch_size=batch_size)

pred_arr = []
for pred in preds:
    pred_arr.append(pred[0])

### Output Results ###
preds = pd.DataFrame({"PassengerId": test_ind, "Survived": pred_arr})
preds = preds.set_index('PassengerId')
preds.to_csv(out_folder+'test.csv')


I am getting the following results:

Train on 712 samples, validate on 179 samples
Epoch 0
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 1
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 2
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 3
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 4
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 5
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 6
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 7
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 8
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 9
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000


I am trying to create a simple 3-layer network; completely basic code. I've handled similar classification problems with Keras on Kaggle before, but this time I get this strange result.

Could this be overfitting because there is so little data? What am I missing? Can anyone please help?

1 answer


Old post, but replying anyway in case someone else tries Titanic with Keras.

Your network may have too many parameters and too little regularization (e.g. dropout).
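
One way to act on that, sketched in the same old-style Keras API the question uses (the 64-unit width is an illustrative choice, not a tested recommendation):

model = Sequential()
model.add(Dense(len(columns), 64))   # far fewer weights than a 512-unit hidden layer
model.add(Activation('relu'))
model.add(Dropout(0.5))              # dropout regularization between the layers
model.add(Dense(64, 1))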



Call model.summary() right before model.compile() and it will show you how many parameters your network has. With ten input features, the first Dense layer alone has 10 x 512 = 5,120 weights plus 512 biases, and the second Dense layer adds another 513 parameters. That's a lot for 712 training examples.
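
For example, assuming a Keras version that provides model.summary() (it takes no arguments and prints a per-layer table of output shapes and parameter counts):

model.summary()   # inspect parameter counts before training
model.compile(loss='categorical_crossentropy', optimizer='adam')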

Also, use a sigmoid activation on the last layer and the binary_crossentropy loss, since you only have two output classes. That is also what explains the -0.0000: a softmax over a single output unit always outputs exactly 1.0, so categorical_crossentropy is identically zero, the gradients vanish, and nothing is learned.
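
Concretely, only the last activation and the loss need to change; the rest of the question's code can stay as it is:

model.add(Dense(512, 1))
model.add(Activation('sigmoid'))     # sigmoid instead of softmax on a single output unit

model.compile(loss='binary_crossentropy', optimizer='adam')

The predictions then come out as probabilities in [0, 1]; threshold them at 0.5 to get the 0/1 Survived values the competition expects.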
