Calculating Pi using distributed clusters.

I came across this interesting concept of computing the value of Pi using distributed clusters.
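The simplest version of the idea is Monte Carlo sampling: each node independently samples random points in the unit square, counts how many fall inside the quarter circle, and a coordinator combines the counts. Below is a toy single-machine sketch in Python, using worker processes to stand in for cluster nodes (the names and structure are mine, just for illustration):

```python
import random
from multiprocessing import Pool

def count_hits(n_samples: int) -> int:
    """Count random points in the unit square that land inside the quarter circle."""
    random.seed()  # re-seed in each worker so processes don't share a stream
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(total_samples: int = 1_000_000, workers: int = 4) -> float:
    """Fan the sampling out across worker processes and combine the counts.

    The area of the quarter circle is pi/4, so pi ~= 4 * hits / samples.
    """
    per_worker = total_samples // workers
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [per_worker] * workers))
    return 4.0 * hits / (per_worker * workers)

if __name__ == "__main__":
    print(estimate_pi())
```

In a real cluster the `pool.map` step would become a distributed map (e.g. over Spark partitions), but the combine step is the same: hit counts are additive, which is exactly what makes this computation embarrassingly parallel.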





Techniques for Approximate Counting

In many scenarios, we can live with approximate counts.

A few techniques in particular are increasingly being used in online scenarios. For example, services like Redshift, Redis, and Spark have built-in support for these data structures.

  1. HyperLogLog.
  2. Count-Min Sketch.
  3. Bloom Filters.
  4. Locality-Sensitive Hashing.
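As a concrete illustration, here is a toy sketch of the simplest of these structures, the Bloom filter: a bit array plus k hash functions, giving set-membership tests with possible false positives but no false negatives. (The sizing and hashing below are illustrative, not how Redis or Spark actually implement it.)

```python
import hashlib

class BloomFilter:
    """A toy Bloom filter: answers 'definitely not present' or 'maybe present'."""

    def __init__(self, size: int = 1024, num_hashes: int = 3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        # Derive k independent positions by salting one hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        # False => definitely absent; True => present with high probability.
        return all(self.bits[pos] for pos in self._positions(item))
```

The trade-off is the classic one for all four structures listed above: a small, fixed memory footprint in exchange for a bounded error rate.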



Algorithms on Parentheses. Catalan Numbers.

There are a few interesting questions I came across recently related to parentheses.

These gave me interesting insights into a range of algorithms, and also introduced me to Catalan numbers.
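The connection is that the number of balanced strings of n pairs of parentheses is the n-th Catalan number. A small sketch that computes the numbers via the standard recurrence C(0) = 1, C(m) = Σ C(i)·C(m−1−i), and cross-checks them by brute-force generation:

```python
def catalan(n: int) -> int:
    """n-th Catalan number via the standard recurrence."""
    c = [0] * (n + 1)
    c[0] = 1
    for m in range(1, n + 1):
        # A balanced string is "(" + inner + ")" + rest; sum over inner sizes.
        c[m] = sum(c[i] * c[m - 1 - i] for i in range(m))
    return c[n]

def valid_parens(n: int) -> list[str]:
    """All balanced strings of n pairs — their count should equal catalan(n)."""
    out = []
    def build(s: str, opened: int, closed: int) -> None:
        if len(s) == 2 * n:
            out.append(s)
            return
        if opened < n:
            build(s + "(", opened + 1, closed)
        if closed < opened:
            build(s + ")", opened, closed + 1)
    build("", 0, 0)
    return out
```

The first few values are 1, 1, 2, 5, 14, 42 — the same sequence that counts binary trees, triangulations, and monotone lattice paths, which is why Catalan numbers keep showing up in these questions.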







Understanding the SynchronizationContext and Best Practices in Asynchronous Programming.

To do asynchronous programming in C#, one needs to understand the concept of SynchronizationContext.

I came across this nice set of posts by Stephen Cleary, where he explains some of the fundamental design choices and best practices.

What prompted this line of exploration was a deadlock I hit while working on a web app using the async pattern. This problem is described here.



Comparing ML Algorithms: Multi-Armed Bandit, Contextual Bandit, Logistic Regression, Online Learning

We have a system running a Multi-Armed Bandit.

So when it came to selecting the next generation of ML algorithms to try out, we had a few choices:

  1. Multi-Armed Bandit  (we had this running)
    • This entails ranking the items based on their respective conversion rates up to that point in time.
  2. Contextual Bandit
    • We use Vowpal Wabbit for this.
    • Internally Vowpal Wabbit treats contextual bandit in 2 distinct ways:
      • Without Action Dependent Features (non-ADF)
      • With Action Dependent Features (ADF)
    • Interestingly, there is a difference between the non-ADF and ADF modes.
      • In non-ADF mode, VW creates multiple models (i.e. one model per class).
      • In ADF mode, VW creates a single model.
  3. Logistic Regression.
    • This entails reducing the problem to a binary classification problem.
    • The model is then used to score the items, and the items are ranked by their model scores.
  4. Online ML
    • Again treating this as a binary classification problem, except this time the model is updated in an online fashion.
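As a rough illustration of options 3 and 4, here is a minimal logistic regression trained one example at a time with SGD. This is a sketch of the technique only, not what we actually ran — VW and our production setup differ in many details:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

class OnlineLogisticRegression:
    """Binary classifier updated one example at a time (the 'online ML' option).

    Batch logistic regression (option 3) would fit the same model, but on
    the full dataset at once instead of via incremental updates.
    """

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x: list[float]) -> float:
        """Predicted probability of conversion — the score used for ranking."""
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

    def update(self, x: list[float], y: int) -> None:
        """One SGD step on the log-loss for a single (features, label) example."""
        err = self.predict(x) - y
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi
        self.b -= self.lr * err
```

Ranking items by `predict` then reduces all four options to the same serving pattern: score each candidate, sort, serve; they differ only in how the scores are produced and refreshed.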


Interestingly, on the dataset I was using, I didn’t see much of a difference in performance across the four algorithms above.






Higher weights for “surprising” items

During a lunchtime discussion with my colleagues, an interesting question came up:

  • When training an ML algorithm (e.g. a recommendation engine, a contextual bandit, etc.), what is the influence of the weights assigned to each example?
  • Is there a good rule of thumb for assigning these weights?

The discussion gave us good insights. In fact, it seems there has been some recent research on the topic.


    • The basic idea is an interesting take on propensity estimation: train a binary classifier to predict the probability that an item is labeled (the propensity), then weight the actual labeled examples by 1/propensity when training the “regular” model (a recommender in their case). Intuitively, this gives higher weights to “surprising” items.
    • check it out!
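The weighting step above can be sketched in a few lines. The function name and the clipping floor are mine, not from the paper — clipping is a common guard against the huge weights that near-zero propensities would otherwise produce:

```python
def propensity_weights(propensities: list[float], floor: float = 0.01) -> list[float]:
    """Inverse-propensity weights for labeled examples.

    An item labeled with probability p gets weight 1/p, so rarely-labeled
    ("surprising") items count more during training. Propensities are clipped
    away from zero to keep the weights bounded.
    """
    return [1.0 / max(p, floor) for p in propensities]
```

For example, an item that had only a 10% chance of being labeled contributes ten times the weight of an item that was certain to be labeled, which is exactly the "higher weights for surprising items" intuition.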

An Activity Selection Problem

There is a very interesting problem I recently came across as part of Google’s foobar challenge. The problem is called ‘zombie antidote’.

I wanted to write down some thoughts as I made my way towards solving this problem.


  • The problem smells like Dynamic Programming.
  • However, there are some challenges in coming up with the DP solution
    • How to build up the optimal substructure (i.e. the recurrence relation) for the problem?
    • How to show the overlapping-subproblems property?
  • It is useful to look at problems from different perspectives.
    • E.g. Can the activity selection problem be formulated as a Graph theory problem ?


  • Refreshing the basics of Dynamic Programming algorithms.
    • Especially useful here was understanding the rod-cutting problem from Cormen’s algorithms book.
  • Understanding the relationship between the following three fundamental algorithmic approaches:
    • Dynamic Programming
    • Greedy Algorithms
    • Graph Theory
  • As it turns out, the activity selection problem can indeed be formulated using all three of the approaches listed above.
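For reference, here is the classic greedy solution to activity selection (the zombie-antidote problem resembles it, though I'm not reproducing the exact foobar statement here): sort by finish time, then repeatedly take the earliest-finishing activity compatible with the last one chosen.

```python
def select_activities(activities: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Greedy activity selection over (start, finish) pairs.

    Sorting by finish time and always taking the earliest-finishing compatible
    activity yields a maximum-size set of non-overlapping activities.
    """
    chosen: list[tuple[int, int]] = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

This is also where the three approaches meet: the greedy choice can be proved correct via the DP recurrence (taking the earliest finish never hurts the optimal substructure), and the same problem can be cast as a longest-path search in the DAG whose edges connect compatible activities.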