Does the order in which the data is fed in affect the trained weights?
Since training is done in batches, meaning the weights are optimized on one chunk of data at a time, the underlying assumption is that each batch is roughly representative of the whole dataset. To make batches representative, it is best to present the data in random order.
Bottom line: in theory the network learns better if you feed it the data in random order. I strongly advise you to shuffle your dataset when training (there is a shuffle option in the .fit() function).
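As a minimal sketch of what per-epoch shuffling looks like under the hood (the dataset shapes and batch size here are made up for illustration; Keras does roughly this internally when you pass shuffle=True to .fit(), which is the default):

```python
import numpy as np

# Toy dataset: 10 samples, 3 features each (hypothetical shapes).
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
y = rng.integers(0, 2, size=10)

batch_size = 4
n_epochs = 2

for epoch in range(n_epochs):
    # Reshuffle the sample order at the start of every epoch so that
    # each mini-batch is a fresh random slice of the dataset.
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        X_batch, y_batch = X[idx], y[idx]
        # ... one gradient step on (X_batch, y_batch) would go here ...
```

Shuffling the index array instead of the data itself avoids copying the full dataset each epoch.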
In inference mode, where you only make a forward pass through the network, the order doesn't matter, since the weights are not updated.
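You can see why with a quick sketch: a forward pass (with no batch-coupled layers) is a pure function applied to each sample independently, so reordering the inputs just reorders the outputs. The single dense layer below stands in for any feed-forward network; the weights are made up for illustration:

```python
import numpy as np

# A single dense layer as a stand-in for a feed-forward network.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))
b = rng.normal(size=2)

def forward(X):
    # Pure function of the input: no state is read from or written
    # to across samples, so each row is processed independently.
    return X @ W + b

X = rng.normal(size=(5, 3))
perm = rng.permutation(5)

out_original = forward(X)
out_shuffled = forward(X[perm])

# Each sample's prediction is identical regardless of batch order.
assert np.allclose(out_original[perm], out_shuffled)
```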
Hope this explains a little for you :-)
Nassim's answer holds for small networks and datasets, but recent papers (for example, this one) suggest that for deeper networks (more than 4 layers), not shuffling your dataset can act as a kind of regularization: bad local minima are expected to be deep but narrow, while good local minima are expected to be wide and harder to reach.
At inference time, the only way data order can harm you is if you keep using training-mode behavior that couples the samples within a batch, for example running BatchNormalization or Dropout in training mode, as is sometimes done for certain kinds of Bayesian deep learning.