Learnings from a Recommender Systems Hackathon

I recently took part in a hackathon on Recommender Systems.

Learnings:

  • Create an automated pipeline (e.g. a batch script) so that different modifications can be tried out easily; see the pipeline sketch in the Code section below.
  • It's crucial to have a separate held-out set for evaluation.
    • Every candidate model is tested on this held-out set, never on the data it was trained on (see the split sketch in the Code section below).
    • This might seem painful to set up at first, but trust me, it has a lot of benefits.
  • There are different ways of approaching a recommender systems problem.
    • As a pure matrix factorization (MF) based collaborative filtering problem.
      • This is probably the way to go if we only have ratings and don't have content features (see the MF sketch below).
    • As a ranking problem.
      • If we have usable features, an alternative is to formulate the problem as regression and then compute the rankings from the regression scores (see the regression-ranking sketch below).
    • We can compare the approaches by evaluating each one on the held-out set described above.
      • For one-class recommendation problems (implicit feedback, where only positive interactions are observed) we can use Precision@K as the metric (see the sketch below).
      • For cases that reduce to predicting rating scores, a standard regression metric such as RMSE works.

Code:
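
The snippets below are minimal Python sketches of the ideas above, not the exact hackathon code. First, the kind of automated pipeline I mean: sweep a grid of configurations and log every result to a CSV, so trying out a new modification is a one-command affair. run_experiment here is a hypothetical stand-in for your own train-and-evaluate routine.

```python
import csv
import itertools

def run_experiment(n_factors, lr, reg):
    """Hypothetical stand-in: train a model with these hyperparameters
    and return its held-out Precision@10. Replace with real code."""
    return 0.0  # placeholder score

# Grid of modifications to try; the pipeline loops over every
# combination and logs each result for later comparison.
grid = {
    "n_factors": [20, 50, 100],
    "lr": [0.005, 0.01],
    "reg": [0.02, 0.1],
}

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(list(grid) + ["precision_at_10"])
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid, values))
        score = run_experiment(**params)
        writer.writerow(list(values) + [score])
        print(params, "->", score)
```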
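
Building the held-out set: a simple leave-one-out-per-user split, assuming the ratings live in a pandas DataFrame with user, item and rating columns (the column names and the split scheme are my assumptions).

```python
import pandas as pd

# Assumed schema: one (user, item, rating) row per interaction.
ratings = pd.DataFrame({
    "user":   [0, 0, 0, 1, 1, 2, 2, 2],
    "item":   [0, 1, 2, 0, 2, 1, 2, 3],
    "rating": [5, 3, 4, 4, 1, 2, 5, 3],
})

# Hold out one random interaction per user; everything else is training
# data. Every model is compared ONLY on the held-out rows.
shuffled = ratings.sample(frac=1, random_state=42)
test = shuffled.groupby("user").head(1)
train = ratings.drop(test.index)
print(len(train), "train rows,", len(test), "held-out rows")
```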
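
The ratings-only MF approach, as a bare-bones SGD matrix factorization in plain NumPy; the hyperparameters and toy triples are illustrative, not tuned values.

```python
import numpy as np

def factorize(triples, n_users, n_items, k=20, lr=0.01, reg=0.05, epochs=50):
    """Plain SGD matrix factorization: rating(u, i) ~ P[u] . Q[i].
    triples is an iterable of (user, item, rating)."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in triples:
            pu = P[u].copy()                     # freeze before the paired update
            err = r - pu @ Q[i]
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Toy data; in practice use the train split from above.
triples = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
P, Q = factorize(triples, n_users=3, n_items=3, k=5)
print("predicted rating for user 0, item 2:", round(float(P[0] @ Q[2]), 2))
```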
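
The ranking-as-regression formulation with scikit-learn: fit any regressor on (user, item) feature vectors, then rank a user's candidate items by predicted score. The random matrices are stand-ins for whatever real features are available.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))       # one row per (user, item) pair
y_train = rng.uniform(1, 5, size=200)     # observed ratings for those pairs

model = GradientBoostingRegressor().fit(X_train, y_train)

# Ranking step: the recommendation list is just the candidate items
# sorted by the regressor's predicted score.
X_candidates = rng.normal(size=(10, 5))   # features for 10 candidate items
scores = model.predict(X_candidates)
top5 = np.argsort(scores)[::-1][:5]
print("top-5 candidate indices:", top5)
```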
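
Finally, Precision@K for the one-class evaluation: the fraction of the top-K recommended items that show up in the user's held-out interactions.

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    return len(set(top_k) & set(relevant)) / k

# Example: 3 of the top-5 recommendations appear in the held-out set.
recs = [7, 2, 9, 4, 1]
held_out = {2, 4, 9, 11}
print(precision_at_k(recs, held_out, k=5))  # 0.6
```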
