Spark MLlib - How to evaluate a collaborative filtering model with implicit feedback

I am programming it with Scala, but the language doesn't matter here.

The input to the collaborative filter with implicit feedback (ALS.trainImplicit) is, in this case, product view counts:

  • Rating("user1", "product1", 21.0) // means that user1 has viewed product1 details 21 times
  • Rating("user2", "product1", 4.0)
  • Rating("user3", "product2", 7.0)

But the output (MatrixFactorizationModel.recommendProductsForUsers) looks like this:

  • Rating("user1", "product1", 0.78)
  • Rating("user2", "product1", 0.63)

The output values of 0.78 and 0.63 appear to be normalized between 0 and 1, but the input values are 21, 4, 7, etc.

I don't think it makes sense to compute the MSE (mean squared error) between input and output in this case, the way we can with explicit-feedback collaborative filtering.

So the question is: how do you evaluate a collaborative filter when using implicit feedback?



1 answer


Important KPIs for evaluating implicit-feedback recommenders are, for example, precision and coverage, among many others. It really depends on the use case (how many products do you want to show? how many products do you offer?) and on the goal you want to achieve.

When I build an implicit-feedback ALS model, I always calculate these two KPIs. Models with very high precision tend to cover only a small share of the available products. Always calculate coverage as well and decide from there.
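As a rough sketch of those two KPIs: hold out some of each user's interactions, take the top-k list per user (e.g. from recommendProductsForUsers), and compute precision@k against the held-out items plus the share of the catalog that gets recommended at all. Everything below (the user IDs, recommendation lists, and catalog) is hypothetical toy data, not from the question:

```python
# Sketch with hypothetical data: precision@k and catalog coverage
# for top-k recommendation lists, independent of Spark.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations the user actually interacted with."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

def catalog_coverage(all_recommendations, catalog):
    """Fraction of the catalog that appears in at least one user's top-k list."""
    recommended_items = set()
    for recs in all_recommendations.values():
        recommended_items.update(recs)
    return len(recommended_items & catalog) / len(catalog)

# Hypothetical held-out interactions per user (the "relevant" items).
relevant = {
    "user1": {"product1", "product3"},
    "user2": {"product2"},
}
# Hypothetical top-2 lists as a model might produce them.
recommendations = {
    "user1": ["product1", "product2"],
    "user2": ["product2", "product3"],
}
catalog = {"product1", "product2", "product3", "product4"}

p_at_2 = sum(precision_at_k(recommendations[u], relevant[u], 2)
             for u in relevant) / len(relevant)
coverage = catalog_coverage(recommendations, catalog)
print(p_at_2)    # (1/2 + 1/2) / 2 = 0.5
print(coverage)  # 3 of 4 catalog products recommended = 0.75
```

This also illustrates the trade-off mentioned above: a model can score well on precision while its top-k lists keep recycling the same few popular products, which shows up as low coverage.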



Take a closer look at this post: https://stats.stackexchange.com/questions/226825/what-metric-should-i-use-for-assessing-implicit-matrix-factorization-recommender

and this spark library: https://github.com/jongwook/spark-ranking-metrics
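That library computes rank-aware metrics such as NDCG, which reward placing relevant items near the top of the list rather than merely somewhere in it. A minimal binary-relevance NDCG@k, written here from the standard definition as a sketch (not taken from that library's API):

```python
import math

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance NDCG@k: discounted gain of hits in the top-k list,
    normalized by the best achievable ordering (ideal DCG)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# A list with both relevant items on top scores 1.0; a list whose only
# hit is ranked second scores strictly less, reflecting the rank discount.
print(ndcg_at_k(["product1", "product3"], {"product1", "product3"}, 2))  # 1.0
```

Because the metric only compares rankings against held-out interactions, it sidesteps the scale mismatch from the question: the raw view counts (21, 4, 7) and the model's confidence-like scores (0.78, 0.63) never need to be on the same scale.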
