During a lunchtime discussion with my colleagues, an interesting question came up:

- When training an ML algorithm (e.g. a recommendation engine, a contextual bandit, etc.), what is the influence of the weights assigned to each example?
- Is there a good rule of thumb for assigning these weights?

The discussion gave us good insights. In fact, it seems there has been some recent research on the topic.

**Reference:**

- http://arxiv.org/abs/1602.05352
- The basic idea is an interesting take on propensity estimation: train a binary classifier to predict the probability that an item is labeled (the propensity), then weight each labeled example by 1/propensity when training the “regular” model (a recommender in their case). Intuitively, this gives higher weights to “surprising” items.
- check it out!
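The idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual setup: the synthetic data, the choice of logistic regression as the propensity model, and the clipping threshold are all my own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 1000 items with 5 features each.
X = rng.normal(size=(1000, 5))
# Items with a large first feature are more likely to get labeled
# (think: popular items receive more feedback), so labeling is
# non-random and naive training would be biased toward them.
label_prob = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))
is_labeled = rng.random(1000) < label_prob

# Step 1: train a binary classifier to predict the probability
# that an item is labeled (the propensity).
propensity_model = LogisticRegression().fit(X, is_labeled)
propensity = propensity_model.predict_proba(X)[:, 1]

# Step 2: weight each labeled example by 1/propensity; clipping
# keeps weights from exploding for rarely-labeled items.
weights = 1.0 / np.clip(propensity, 0.05, 1.0)

# These would be passed as sample_weight when fitting the
# "regular" model, e.g.:
#   recommender.fit(X[is_labeled], y[is_labeled],
#                   sample_weight=weights[is_labeled])
```

Note that "surprising" items (those unlikely to be labeled) end up with the largest weights, which is exactly the intuition above.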
