Pylearn2 CSVDataset TypeError
I am having a problem loading a custom dataset in pylearn2. I am trying to get a simple MLP to train using a tiny XOR dataset. I have a dataset named xor.csv in the same directory as my YAML file, which is not in the same directory as the pylearn2 train.py script.
Here is the entire content of xor.csv:
label,x,y
0,0,0
1,0,1
1,1,0
0,1,1
Here is the content of my YAML file:
!obj:pylearn2.train.Train {
    dataset: &train !obj:pylearn2.datasets.csv_dataset.CSVDataset {
        path: 'xor.csv',
        task: 'classification'
    },
    model: !obj:pylearn2.models.mlp.MLP {
        layers: [
            !obj:pylearn2.models.mlp.Sigmoid {
                layer_name: 'h0',
                dim: 10,
                irange: 0.05,
            },
            !obj:pylearn2.models.mlp.Softmax {
                layer_name: 'y',
                n_classes: 1,
                irange: 0.
            }
        ],
        nvis: 2,
    },
    algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
        learning_rate: 1e-2,
        batch_size: 1,
        monitoring_dataset: {
            'train' : *train
        },
        termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter {
            max_epochs: 10000
        },
    },
    extensions: [
        !obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
            channel_name: 'valid_y_misclass',
            save_path: "best.pkl"
        },
    ]
}
When I run the pylearn2 train.py script, it fails before training starts (presumably while compiling the Theano functions). Here's the whole output:
[COMPUTER_NAME]:some_folder [MY_NAME]$ python [PATH_TO_PYLEARN2_SCRIPTS]/train.py example_mlp.yml
/Users/[MY_NAME]/anaconda/lib/python2.7/site-packages/nose/plugins/manager.py:418: UserWarning: Module argparse was already imported from /Users/[MY_NAME]/anaconda/lib/python2.7/argparse.pyc, but /Users/[MY_NAME]/anaconda/lib/python2.7/site-packages is being added to sys.path
import pkg_resources
Parameter and initial learning rate summary:
h0_W: 0.01
h0_b: 0.01
softmax_b: 0.01
softmax_W: 0.01
Compiling sgd_update...
Compiling sgd_update done. Time elapsed: 1.109511 seconds
compiling begin_record_entry...
compiling begin_record_entry done. Time elapsed: 0.090133 seconds
Monitored channels:
learning_rate
total_seconds_last_epoch
train_h0_col_norms_max
train_h0_col_norms_mean
train_h0_col_norms_min
train_h0_max_x_max_u
train_h0_max_x_mean_u
train_h0_max_x_min_u
train_h0_mean_x_max_u
train_h0_mean_x_mean_u
train_h0_mean_x_min_u
train_h0_min_x_max_u
train_h0_min_x_mean_u
train_h0_min_x_min_u
train_h0_range_x_max_u
train_h0_range_x_mean_u
train_h0_range_x_min_u
train_h0_row_norms_max
train_h0_row_norms_mean
train_h0_row_norms_min
train_objective
train_y_col_norms_max
train_y_col_norms_mean
train_y_col_norms_min
train_y_max_max_class
train_y_mean_max_class
train_y_min_max_class
train_y_misclass
train_y_nll
train_y_row_norms_max
train_y_row_norms_mean
train_y_row_norms_min
training_seconds_this_epoch
Compiling accum...
graph size: 115
Compiling accum done. Time elapsed: 1.647879 seconds
Traceback (most recent call last):
File "/Users/[MY_NAME]/pylearn2/pylearn2/scripts/train.py", line 252, in <module>
args.verbose_logging, args.debug)
File "/Users/[MY_NAME]/pylearn2/pylearn2/scripts/train.py", line 242, in train
train_obj.main_loop(time_budget=time_budget)
File "/Users/[MY_NAME]/pylearn2/pylearn2/train.py", line 196, in main_loop
self.run_callbacks_and_monitoring()
File "/Users/[MY_NAME]/pylearn2/pylearn2/train.py", line 242, in run_callbacks_and_monitoring
self.model.monitor()
File "/Users/[MY_NAME]/pylearn2/pylearn2/monitor.py", line 254, in __call__
for X in myiterator:
File "/Users/[MY_NAME]/pylearn2/pylearn2/utils/iteration.py", line 859, in next
for data, fn in safe_izip(self._raw_data, self._convert))
File "/Users/[MY_NAME]/pylearn2/pylearn2/utils/iteration.py", line 859, in <genexpr>
for data, fn in safe_izip(self._raw_data, self._convert))
File "/Users/[MY_NAME]/pylearn2/pylearn2/utils/iteration.py", line 819, in fn
return dspace.np_format_as(batch, sp)
File "/Users/[MY_NAME]/pylearn2/pylearn2/space/__init__.py", line 458, in np_format_as
space=space)
File "/Users/[MY_NAME]/pylearn2/pylearn2/space/__init__.py", line 513, in _format_as
self._validate(is_numeric, batch)
File "/Users/[MY_NAME]/pylearn2/pylearn2/space/__init__.py", line 617, in _validate
self._validate_impl(is_numeric, batch)
File "/Users/[MY_NAME]/pylearn2/pylearn2/space/__init__.py", line 984, in _validate_impl
super(IndexSpace, self)._validate_impl(is_numeric, batch)
File "/Users/[MY_NAME]/pylearn2/pylearn2/space/__init__.py", line 796, in _validate_impl
(batch.dtype, self.dtype))
TypeError: Cannot safely cast batch dtype float64 to space dtype int64.
What does this mean, exactly? I have looked at the code for CSVDataset, and it loads data using np.loadtxt, which should cast it as a float. Nothing changes if I edit xor.csv so the values look like floats (1 -> 1.0, for example).
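For reference, a quick interactive check (a minimal sketch against the xor.csv above) shows everything really does come back as float64, labels included:

import numpy as np

# np.loadtxt uses a single float dtype by default, so the label column is float64 too
data = np.loadtxt('xor.csv', delimiter=',', skiprows=1)
print(data.dtype)    # float64
print(data[:, 0])    # [ 0.  1.  1.  0.] -- the labels, as floats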
This is because the y attribute of CSVDataset has dtype float64, while the classification target space (an IndexSpace, as the traceback shows) expects integers.
I fixed __init__() in csv_dataset.py as follows and it works.
I don't know whether this is a pylearn2 bug or not.
# in csv_dataset.py, inside CSVDataset.__init__():
if self.task == 'regression':
    super(CSVDataset, self).__init__(X=X, y=y)
else:
    # cast the labels to int and tell DenseDesignMatrix how many classes there are,
    # so y is treated as integer index data instead of float64
    super(CSVDataset, self).__init__(X=X, y=y.astype(int),
                                     y_labels=np.max(y) + 1)
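To sanity-check the patch, you can load the dataset directly (a minimal sketch; it assumes the patched csv_dataset.py is on your path, that xor.csv is in the working directory, and that CSVDataset's default expect_labels/expect_headers behaviour matches a labelled CSV with a header row like yours):

from pylearn2.datasets.csv_dataset import CSVDataset

ds = CSVDataset(path='xor.csv', task='classification')
print(ds.X.dtype)   # features remain float64, which the model's VectorSpace accepts
print(ds.y.dtype)   # targets should now be an integer dtype rather than float64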
By the way, you also have to fix your YAML:
- n_classes of the Softmax layer should be 2 (XOR has two classes, 0 and 1).
- channel_name: 'valid_y_misclass' raises an error because you never add a 'valid' entry to the monitoring_dataset dictionary. Either monitor a validation dataset under the key 'valid' or use 'train_y_misclass' instead (see the sketch below).
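For example (a sketch only, not tested against your setup), the corrected pieces of the YAML could look like this:

    !obj:pylearn2.models.mlp.Softmax {
        layer_name: 'y',
        n_classes: 2,    # XOR has two classes, 0 and 1
        irange: 0.
    }

and, if you keep monitoring only the training set:

    extensions: [
        !obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
            channel_name: 'train_y_misclass',   # 'valid_y_misclass' exists only if a 'valid' dataset is monitored
            save_path: "best.pkl"
        },
    ]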