Higher weights for “surprising” items

During a lunchtime discussion with my colleagues, an interesting question came up –

  • When training an ML algorithm (e.g. a recommendation engine, a contextual bandit, etc.), what is the influence of the weights assigned to each example?
  • Is there a good rule of thumb for assigning these weights?

The discussion gave us good insights. In fact, it seems there has been some recent research on the topic.


  • http://arxiv.org/abs/1602.05352
    • the basic idea is an interesting take on propensity estimation: train a binary classifier to predict the probability that an item is labeled (the propensity), then weight the labeled examples by 1/propensity when training the “regular” model (a recommender in their case). Intuitively, this gives higher weights to “surprising” items
    • check it out!
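To make the two-step recipe concrete, here is a minimal sketch in Python. The synthetic data, feature setup, and use of scikit-learn's LogisticRegression are my own choices for illustration, not details from the paper – the point is just the mechanics: fit a propensity model, then turn 1/propensity into sample weights for the downstream model.

```python
# Sketch of inverse-propensity weighting (idea from arXiv:1602.05352).
# All data here is synthetic; the propensity model choice is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1000 items with 5 features. "Popular" items (high feature 0) are more
# likely to be labeled -- the observation bias we want to correct for.
X = rng.normal(size=(1000, 5))
popularity = 1 / (1 + np.exp(-X[:, 0]))
labeled = rng.random(1000) < popularity  # which items received a label

# Step 1: binary classifier estimating the propensity
# P(item is labeled | features).
prop_model = LogisticRegression().fit(X, labeled)
propensity = prop_model.predict_proba(X)[:, 1]

# Step 2: weight each labeled example by 1/propensity when training the
# "regular" model. Rarely-labeled ("surprising") items get the largest
# weights; clip the propensity to avoid exploding weights.
weights = np.where(labeled, 1.0 / np.clip(propensity, 1e-3, None), 0.0)

# The weights would then be passed to the downstream recommender, e.g.:
# recommender.fit(X[labeled], y[labeled], sample_weight=weights[labeled])
```

Note the clipping of the propensity: with a raw 1/propensity, a few near-zero estimates can dominate the training objective, so capping the weights is a common practical tweak.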
