# NPM v/s Bower – Intro

This is a nice intro to NPM.

And here’s a nice intro to Bower.

This post captures some differences between npm and Bower:

# Visualization Using D3 (and dependent libraries)

This link gives a nice summary of data visualization libraries using D3:

Interestingly, it mentions mermaid and rickshaw! Two cool libraries I recently came across.

Real time:

# Debugging Standard Deviation

In one of my previous posts, I had noted my thoughts around statistical measures like standard deviation and confidence intervals.

The fun part is of course when one has to debug these measures.

To that end, I developed some insights by visualizing the data and plotting different kinds of charts using matplotlib.

• The code below also acts as a reference for one of the pet peeves I have when plotting data from a pandas DataFrame.
• Use the code below as a reference going forward.
• Sometimes you have to debug plots when they make no sense at all, like the one below:
• The first plot didn’t make sense to me initially, but once I started debugging it made total sense.
• Check the 2nd plot below, which is what I get when I sort the data.

Code:
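A sketch of the kind of debugging plot described above. The original dataset isn't shown, so the data here is synthetic (a hypothetical `latency_ms` column); the point is the contrast between plotting a DataFrame column in raw order versus sorted order.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this runs headless
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
# hypothetical column; in practice this would come from the real dataset
df = pd.DataFrame({"latency_ms": rng.normal(100, 15, 500)})

mean = df["latency_ms"].mean()
std = df["latency_ms"].std()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
# raw order: looks like pure noise, hard to reason about the spread
ax1.plot(df["latency_ms"].values)
ax1.set_title("unsorted")
# sorted order: the distribution's shape becomes obvious
ax2.plot(df["latency_ms"].sort_values().values)
ax2.axhline(mean, linestyle="--")  # mean for reference
ax2.set_title("sorted")
fig.savefig("debug_std.png")
```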

# Plotting Errorbars in Python

As part of my previous post on confidence intervals, I now wanted to plot the error bars that I got from my dataset.

I had to play around a bit to get this going. The referenced links helped me in this effort.

Tip:

• One interesting point was around the intervals: they are drawn as +/- around the y-value (i.e. from y - err to y + err).

References:

(additional references which might be useful)

Code:
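The original snippet isn't shown here; this is a minimal matplotlib sketch of the idea, with made-up x/y values and interval half-widths. Note how `yerr` is interpreted as the +/- half-width around each y-value, matching the tip above.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

x = np.arange(1, 6)
y = np.array([10.0, 12.5, 9.0, 14.0, 11.0])  # hypothetical point estimates
ci = np.array([1.2, 0.8, 1.5, 1.0, 0.9])     # hypothetical CI half-widths

fig, ax = plt.subplots()
# yerr=ci draws each bar from y - ci to y + ci, i.e. +/- around the y-value
ax.errorbar(x, y, yerr=ci, fmt="o", capsize=4)
fig.savefig("errorbars.png")
```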

# Comparing ML Algos: Multi-Armed Bandit, Contextual Bandit, Logistic Regression, Online Learning

We have a system running Multi-Armed Bandit.

So when it came to selecting the next generation of ML algo to try out, we had a few choices:

1. Multi-Armed Bandit  (we had this running)
• This entails ranking the items by their respective conversion rates up to that point in time.
2. Contextual Bandit
• We use Vowpal Wabbit for this.
• Internally Vowpal Wabbit treats contextual bandit in 2 distinct ways:
• Without Action Dependent Features (non-ADF)
• With Action Dependent Features (ADF)
• In non-ADF mode, VW creates multiple models (i.e. one model per class).
• In ADF mode, VW creates a single model.
3. Logistic Regression.
• This entails reducing the problem to a binary classification problem.
• Then using the model to score the items. Finally ranking the items based on the model score.
4. Online ML
• Again treating this as a binary classification model, except this time we are updating the model in an online fashion.
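As a rough illustration of option 1, ranking by conversion rate so far can be sketched as below. The counts are made up, and the +1/+2 Laplace smoothing is my own assumption to keep cold-start items from pinning to 0 or 1 (the production system may handle this differently).

```python
# Toy sketch of approach 1: rank items by observed conversion rate so far.
clicks = {"item_a": 30, "item_b": 5, "item_c": 12}
impressions = {"item_a": 1000, "item_b": 100, "item_c": 300}

def conversion_rate(item):
    # Laplace-smoothed conversion rate: (clicks + 1) / (impressions + 2)
    return (clicks[item] + 1) / (impressions[item] + 2)

ranked = sorted(clicks, key=conversion_rate, reverse=True)
print(ranked)  # -> ['item_b', 'item_c', 'item_a']
```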

Interestingly, on the dataset I was using, I didn’t see much of a difference in algorithmic performance across the 4 algorithms above.

Code:


# Learning Curves: What to Try Next in ML?

A very interesting problem in ML is: what to try next? Andrew Ng has some very interesting insights on this topic (see the References section below).

• Nowadays most ML platforms, e.g. AzureML, give the ability to do parameter sweeps.
• Most of the time they also do cross validation when doing sweeps.
• This simplifies model selection: the platform automatically selects the parameters that give the best accuracy/AUC on the cross-validation dataset.
• This is usually the 1st thing to do for pretty much all ML problems.

• However, an interesting question still remains, especially from a practical standpoint:
• Should I focus more on feature engineering (i.e. add more features), or should I focus more on getting more data?
• For these cases I would generally use learning curves.
• There are some nuances, so let me explain what I usually do.

• Plot the Training Error v/s the Cross Validation Error.
• This usually indicates whether I am currently suffering from a high bias (underfit) problem or a high variance (overfit) problem.
• High Bias (underfit): high training error, high generalization (CV) error.
• High Variance (overfit): low training error, high generalization (CV) error.
• For high variance (overfitting): plot how the Error/Accuracy varies with increasing data.
• A good idea here is to use a log-base-2 scale on the x-axis.
• Using a log-base-2 scheme gives a good sense of how much the Error/Accuracy will decrease/increase with more data.
• Based on the intuition above, the following steps can be taken.

| What to try next? | Underfit (high bias) | Overfit (high variance) |
| --- | --- | --- |
| Get more training examples | No | Yes |
| Try a smaller set of features | No | Yes, but first see if you can get more training examples |
| Add additional features | Yes | Maybe. If a feature gives a strong signal then add it, but also invest in more data collection in parallel |

Code:
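The original code isn't shown; here is a hand-rolled sketch of the learning-curve idea on synthetic data, with a cheap least-squares linear classifier standing in for the real model. It uses the log-base-2 sizes suggested above and compares training error to held-out (CV) error at each size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2048, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
# binary labels from a noisy linear rule
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

X_tr, y_tr = X[:1024], y[:1024]   # training pool
X_cv, y_cv = X[1024:], y[1024:]   # held-out set

def error(w, Xs, ys):
    # misclassification rate of the linear rule sign(Xw)
    return np.mean(((Xs @ w) > 0).astype(float) != ys)

sizes = [64, 128, 256, 512, 1024]  # log-base-2 spacing on the x-axis
train_err, cv_err = [], []
for m in sizes:
    # least-squares fit on +/-1 targets as a cheap linear classifier
    w, *_ = np.linalg.lstsq(X_tr[:m], 2 * y_tr[:m] - 1, rcond=None)
    train_err.append(error(w, X_tr[:m], y_tr[:m]))
    cv_err.append(error(w, X_cv, y_cv))

# plotting these two curves against sizes (log-2 x-axis) gives the
# bias/variance picture described above
print(list(zip(sizes, train_err, cv_err)))
```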

References:

# Redis: Pipelines

Recently did some fun visualizations with a horizontal bar chart. Check it out: The purpose was to understand how to reduce Redis latencies in the presence of a large number of key lookups.

There are two ways:

(1) using a pipeline

(2) using the hash data structure. Redis has atomic commands (e.g. HINCRBY) to increment the value of a hash field, which is super cool!
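To see why pipelining helps, here is a toy model (not the real redis-py API; there you would call `client.pipeline()`, queue commands, then `.execute()`). Each standalone command pays one network round trip, while a pipeline batches N commands into a single round trip.

```python
class ToyRedis:
    """Simulated client that counts round trips to the 'server'."""
    def __init__(self):
        self.store = {}
        self.round_trips = 0  # each request/response pair costs one RTT

    def get(self, key):
        self.round_trips += 1  # standalone command: one RTT each
        return self.store.get(key)

    def pipeline_get(self, keys):
        self.round_trips += 1  # all lookups share a single RTT
        return [self.store.get(k) for k in keys]

r = ToyRedis()
for i in range(100):
    r.store[f"k{i}"] = i  # preload directly, without counting RTTs

r.round_trips = 0
naive = [r.get(f"k{i}") for i in range(100)]  # 100 round trips
naive_rtts = r.round_trips

r.round_trips = 0
piped = r.pipeline_get([f"k{i}" for i in range(100)])  # 1 round trip
piped_rtts = r.round_trips

print(naive_rtts, piped_rtts)  # -> 100 1
```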

Code:

References:

# k Nearest Neighbors

I continue to chip away at Data Science from Scratch. This time I tried out Chapter 12: K Nearest Neighbours.

Learnt several things from this hack:

• How to do XML parsing in Python (blogged about it as well).
• Visualization.
• Python coding. Joel’s code is amazing.

Some cool visualizations are as follows:

Here’s how the data looks when plotted onto the US map:

Check out the variation as the value of K goes from K=1 to K=5.

• K=1. This is an example of overfitting.
• K=5.

Code:
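In the spirit of the chapter, a from-scratch kNN can be sketched as below: classify a point by majority vote among its k nearest labelled neighbours under Euclidean distance. The 2-D toy data here is made up, not the book's language/location dataset.

```python
import math
from collections import Counter

def knn_classify(k, labelled_points, new_point):
    """labelled_points: list of ((x, y), label) pairs."""
    # sort all points by distance to the query point
    by_distance = sorted(labelled_points,
                         key=lambda p: math.dist(p[0], new_point))
    # majority vote among the k nearest labels
    k_nearest = [label for _, label in by_distance[:k]]
    winner, _ = Counter(k_nearest).most_common(1)[0]
    return winner

points = [((0, 0), "red"), ((1, 0), "red"), ((0, 1), "red"),
          ((5, 5), "blue"), ((6, 5), "blue"), ((5, 6), "blue")]

print(knn_classify(1, points, (0.5, 0.5)))  # -> red
print(knn_classify(3, points, (4.5, 4.5)))  # -> blue
```

Small k follows individual points closely (the K=1 overfitting seen above); larger k smooths the decision boundary.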