Where is the machine learning algorithm stored?

I think this is a kind of "blasphemy" for a person who comes from the AI world, but I come from a world where we program and get a result, and where there is a concept of storing something in memory, so here is my question:

Machine learning works in iterations: the more iterations there are, the better our algorithm gets. But after these iterations, is there a result that is stored somewhere? Thinking like a programmer: if I run the program again, should I save the previous results somewhere, or will they be overwritten? Or do I need to use an array, for example, to store my results?

For example, if I train my pattern recognition algorithm on a set of cat images, what variables do I need to add to my algorithm so that, when I use it on an image library, it succeeds every time in finding a cat? What do I actually reuse, given that nothing seems to be saved for the next step?

All the videos and tutorials that I have seen only draw graphs as a visual result; they never produce something you can use in a future program.

For example, in this one, kNN is used to recognize handwritten digits, but where is the explicit result to reuse?

https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/2_BasicModels/nearest_neighbor.py

NB: to the people voting to close or downvoting, please at least give a reason.



2 answers


the more iterations there are, the better our algorithm gets, but after these iterations is there a result that is stored somewhere

What you are hinting at here is part of the optimization.

However, to optimize the model, we first have to represent it.

For example, if I create a very simple linear model to predict house prices using its surface in square meters, I can go for this model:

price = a * surface + b

This is a representation.

Now that you have represented the model, you want to optimize it, i.e. find the parameters a and b that minimize the prediction error.

is there a saved result somewhere?

In the terms above, we say that we have learned the parameters (or weights) a and b.

This is what you keep: the weights that come out of the optimization (also called training), and of course the model itself.
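To make this concrete, here is a minimal sketch of the house-price example: the surfaces and prices are made-up data, and ordinary least squares (via NumPy, an assumption on my part; the answer does not name a library) plays the role of the optimization. The point is that a and b are the stored result:

```python
# Minimal sketch: learn a and b in  price = a * surface + b
# using least squares on hypothetical training data.
import numpy as np

# Hypothetical training data: surfaces in m^2 and observed prices.
surface = np.array([30.0, 50.0, 70.0, 100.0])
price = np.array([90_000.0, 150_000.0, 210_000.0, 300_000.0])

# Solve for the (a, b) that minimize the squared prediction error.
A = np.vstack([surface, np.ones_like(surface)]).T
(a, b), *_ = np.linalg.lstsq(A, price, rcond=None)

# a and b ARE the learned result: persist them and any later program
# can predict prices for new houses without retraining.
np.save("weights.npy", np.array([a, b]))
print(a, b)
```

A later run would simply `np.load("weights.npy")` and apply `a * surface + b` to new inputs.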



I think there is some confusion here. Let me clear it up.

Machine learning models usually have parameters, and these parameters are trainable. This means that the learning algorithm finds the "correct" values for these parameters so that the model works well for a given task. This is the learning part. The actual parameter values are "inferred" from the training data.



What you would call the learning result is the model. The model is represented by formulas with parameters, and these parameters must be saved. Typically, when you use an ML/DL framework (like scikit-learn or Keras), the parameters are stored alongside some model-type information so that the model can be restored later at runtime.
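As a sketch of what "restored at runtime" looks like with scikit-learn, the usual approach is to dump the fitted model object with joblib (the file name and the toy data below are my own choices, not from the answer):

```python
# Minimal sketch: persist a fitted scikit-learn model and restore it.
import joblib
from sklearn.linear_model import LinearRegression

# Hypothetical training data: surface in m^2 -> price.
X = [[30.0], [50.0], [70.0], [100.0]]
y = [90_000.0, 150_000.0, 210_000.0, 300_000.0]

model = LinearRegression().fit(X, y)

# The learned parameters live on the fitted model object...
print(model.coef_, model.intercept_)

# ...and dumping the model writes them, plus model-type information,
# to disk so nothing is lost when the program exits.
joblib.dump(model, "house_price_model.joblib")

# A completely separate program can later restore the model and
# predict without ever seeing the training data again.
restored = joblib.load("house_price_model.joblib")
print(restored.predict([[80.0]]))
```

This is exactly the "saved result" the question asks about: the file contains the weights plus enough structure to rebuild the model.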


