I was recently reading this paper which discusses Machine Learning Reductions.

It has a very interesting abstract, which got me hooked. The references below contain two papers that explain ML reductions in general.

**Abstract**

“Machine learning involves optimizing a loss function on unlabeled data points given examples of labeled data points, where the loss function measures the performance of a learning algorithm. We give an overview of techniques, called reductions, for converting a problem of minimizing one loss function into a problem of minimizing another, simpler loss function. This tutorial discusses how to create robust reductions that perform well in practice. The reductions discussed here can be used to solve any supervised learning problem with a standard binary classification or regression algorithm available in any machine learning toolkit. We also discuss common design flaws in folklore reductions.”
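To make the idea concrete, here is a minimal sketch of one classic reduction: one-vs-all, which converts a multiclass classification problem into several binary classification problems. The binary learner used here is a simple perceptron for self-containedness; any binary classifier from a standard toolkit could be plugged in instead. All function names below are illustrative, not from the paper.

```python
def train_perceptron(xs, ys, epochs=50):
    """Train a binary perceptron on labels in {+1, -1}. Returns (weights, bias)."""
    dim = len(xs[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the boundary
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def train_one_vs_all(xs, labels):
    """The reduction: one binary problem (class vs. rest) per class."""
    models = {}
    for c in sorted(set(labels)):
        ys = [1 if lab == c else -1 for lab in labels]
        models[c] = train_perceptron(xs, ys)
    return models

def predict_one_vs_all(models, x):
    """Predict the class whose binary model scores highest."""
    def score(wb):
        w, b = wb
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=lambda c: score(models[c]))
```

For example, on a tiny separable three-class dataset, `train_one_vs_all` fits three perceptrons and `predict_one_vs_all` picks the highest-scoring one per point. The paper's point is that the interesting questions are about how well such reductions preserve performance guarantees, which naive constructions like this can fail to do.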

**References:**

- http://hunch.net/~jl/projects/reductions/tutorial/paper/chapter.pdf
- http://arxiv.org/pdf/1502.02704v1.pdf
