How to simplify DataLoader for an autoencoder in PyTorch

Is there an easier way to set up the dataloader for an autoencoder, where the input and target data are the same, for loading data during training? The TensorDataset feeding the DataLoader always requires two tensors.

I am currently defining my dataloader like this:

import numpy.random as rnd
import torch
import torch.utils.data as data_utils

X_train     = rnd.random((300,100))
X_val       = rnd.random((75,100))
# The same tensor is passed twice: once as the data, once as the target.
train       = data_utils.TensorDataset(torch.from_numpy(X_train).float(), torch.from_numpy(X_train).float())
val         = data_utils.TensorDataset(torch.from_numpy(X_val).float(), torch.from_numpy(X_val).float())
train_loader= data_utils.DataLoader(train, batch_size=1)
val_loader  = data_utils.DataLoader(val, batch_size=1)

and my training loop looks like this:

for epoch in range(50):
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data), Variable(target).detach()
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

2 answers


Why not subclass TensorDataset to make it compatible with unlabeled data?

from torch.utils.data import TensorDataset

class UnlabeledTensorDataset(TensorDataset):
    """Dataset wrapping unlabeled data tensors.

    Each sample will be retrieved by indexing tensors along the first
    dimension.

    Arguments:
        data_tensor (Tensor): contains sample data.
    """
    def __init__(self, data_tensor):
        self.data_tensor = data_tensor

    def __len__(self):
        return self.data_tensor.size(0)

    def __getitem__(self, index):
        return self.data_tensor[index]

And something along these lines for training your autoencoder:

X_train     = rnd.random((300,100))
train       = UnlabeledTensorDataset(torch.from_numpy(X_train).float())
train_loader= data_utils.DataLoader(train, batch_size=1)

for epoch in range(50):
    for batch in train_loader:
        data = Variable(batch)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, data)  # the input doubles as the target
        loss.backward()
        optimizer.step()

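Worth noting, as an aside that assumes a newer PyTorch (0.4+): TensorDataset now accepts any number of tensors, so you can pass X_train once and skip the subclass entirely; each batch then arrives as a one-element tuple.

# On PyTorch 0.4+, TensorDataset is variadic, so a single tensor works
# and Variable wrappers are no longer needed.
train = data_utils.TensorDataset(torch.from_numpy(X_train).float())
train_loader = data_utils.DataLoader(train, batch_size=1)

for epoch in range(50):
    for (batch,) in train_loader:  # each item is a one-element tuple
        optimizer.zero_grad()
        output = model(batch)
        loss = criterion(output, batch)
        loss.backward()
        optimizer.step()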


I find the above to be about as easy as it gets. Beyond that, I think you will have to implement your own dataset. Below is some sample code.

import PIL.Image
import torch.utils.data
from glob import glob

class ImageLoader(torch.utils.data.Dataset):
    def __init__(self, root, tform=None, imgloader=PIL.Image.open):
        super(ImageLoader, self).__init__()

        self.root = root
        self.filenames = sorted(glob(root))  # root is a glob pattern, e.g. '/path/*.png'
        self.tform = tform
        self.imgloader = imgloader

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, i):
        out = self.imgloader(self.filenames[i])  # or e.g. io.imread(self.filenames[i])
        if self.tform:
            out = self.tform(out)
        return out

Then you can use it like this:

source_dataset = ImageLoader(root='/dldata/denoise_ae/clean/*.png', tform=source_depth_transform)
target_dataset = ImageLoader(root='/dldata/denoise_ae/clean_cam_n9dmaps/*.png', tform=target_depth_transform)
source_dataloader = torch.utils.data.DataLoader(source_dataset, batch_size=32, shuffle=False, drop_last=True, num_workers=15)
target_dataloader = torch.utils.data.DataLoader(target_dataset, batch_size=32, shuffle=False, drop_last=True, num_workers=15)

To check the first batch, do the following:

dataiter = iter(source_dataloader)
images = next(dataiter)  # use dataiter.next() on Python 2
print(images.size())

Finally, you can iterate over the loaded data in your training loop as follows:

for i, (source, target) in enumerate(zip(source_dataloader, target_dataloader)):
    source, target = Variable(source.float().cuda()), Variable(target.float().cuda())
    # ... forward/backward pass goes here, as in the training loop above
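
A caveat on the zip approach: the pairs only stay aligned because both loaders use shuffle=False. If you want shuffling, it is safer to return the pair from a single dataset. A minimal sketch, where PairedImageLoader is a hypothetical name and torchvision's ToTensor stands in for the transforms the answer does not show:

from torchvision import transforms

class PairedImageLoader(torch.utils.data.Dataset):
    """Yields (source, target) image pairs, so shuffle=True stays aligned."""
    def __init__(self, source_root, target_root, tform=None,
                 imgloader=PIL.Image.open):
        self.source_files = sorted(glob(source_root))
        self.target_files = sorted(glob(target_root))
        assert len(self.source_files) == len(self.target_files)
        self.tform = tform
        self.imgloader = imgloader

    def __len__(self):
        return len(self.source_files)

    def __getitem__(self, i):
        src = self.imgloader(self.source_files[i])
        tgt = self.imgloader(self.target_files[i])
        if self.tform:  # the transform must produce tensors for batching
            src, tgt = self.tform(src), self.tform(tgt)
        return src, tgt

paired = PairedImageLoader('/dldata/denoise_ae/clean/*.png',
                           '/dldata/denoise_ae/clean_cam_n9dmaps/*.png',
                           tform=transforms.ToTensor())
paired_loader = torch.utils.data.DataLoader(paired, batch_size=32,
                                            shuffle=True, drop_last=True)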


Good luck.

P.S. The code samples I've shared don't load validation data.
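
For the validation side, the same pattern applies; here is a minimal sketch using X_val from the question and the UnlabeledTensorDataset from the first answer (volatile=True belongs to the old Variable API used throughout and is only needed on pre-0.4 PyTorch):

val = UnlabeledTensorDataset(torch.from_numpy(X_val).float())
val_loader = data_utils.DataLoader(val, batch_size=1)

model.eval()  # switch off dropout / batch-norm updates
for batch in val_loader:
    data = Variable(batch, volatile=True)  # no gradients during validation
    val_loss = criterion(model(data), data)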
