Monday, December 19, 2011

A statistician’s view on Stanford’s public machine learning course

This past fall, I took Stanford’s class on machine learning. Overall, it was a terrific experience, and I’d like to share a few thoughts on it:

  • A lot of participants were concerned that it was a watered-down version of Stanford’s CS229. And, in fact, the course was more limited in scope and more applied than the official Stanford class. However, I found this to be a strength. Because I was already familiar with the methods covered at the beginning (linear and multiple regression, logistic regression), I could focus on the machine learning perspective the class brought to these methods. That grounding helped in later sections, where I wasn’t so familiar with the methods.
  • The embedded review questions and the end-of-section review questions were very well done, with randomization of the questions and answers making it impossible to simply keep guessing until everything came out right.
  • Programming exercises were done in Octave, an open-source, Matlab-like programming environment. I really enjoyed this programming, because it meant I essentially implemented regression and logistic regression algorithms by hand, with the exception of a numerical optimization routine. I got a huge confidence boost when I managed to get the backpropagation algorithm for neural networks correct. The emphasis in these exercises was on implementation: you could first code an algorithm using explicit (“slow”) for loops, but you really needed to vectorize it using the principles of linear algebra (see the sketch after this list). For instance, there was an algorithm for a recommender system that would take hours if coded using for loops, but ran in minutes using a vectorized implementation. (This is because the implicit loops of vectorization are run by optimized linear algebra routines.) In statistics, we don’t always worry much about implementation details, but in machine learning settings implementation matters, because these algorithms often need to run in real time.
  • The class encouraged me to look at the Kaggle competitions. I’m not doing terribly well in them, but now at least I’m hacking on some data myself and learning a lot in the process.
  • The structure of the public class helps a lot over, for example, the iTunes U version of the class. That said, I’m now looking at the CS229 lectures on iTunes U and understanding them much better.
  • Kudos to Stanford for taking the lead here. This is the next logical step in distance education, and it takes a great deal of time and effort.
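
To make the vectorization point concrete, here is a minimal sketch in Octave. (The variable names and sizes are my own illustration, not code from the actual exercises.)

    % Predictions for linear regression, computed two ways.
    m = 100000;          % number of examples (illustrative size)
    n = 20;              % number of features
    X = rand(m, n);      % design matrix
    theta = rand(n, 1);  % parameter vector

    % "Slow" version: an explicit for loop over the examples
    h_loop = zeros(m, 1);
    for i = 1:m
      h_loop(i) = X(i, :) * theta;
    end

    % Vectorized version: a single matrix-vector product, which
    % hands the implicit loop to optimized linear algebra routines
    h_vec = X * theta;

Both versions compute the same predictions; the difference is that the matrix product pushes the looping down into compiled linear algebra code, which is where the hours-to-minutes speedup in the recommender exercise came from.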

I also took the databases class, which was even more structured, with a midterm and a final exam. This was a bit of a stretch for me, but learning about data storage and retrieval is a good complement to statistics and machine learning. I’ve coded a few complex SQL queries in my life, but this class really took my understanding of both XML-based and relational database systems to the next level.

Stanford is offering the machine learning class again, along with a gaggle of other classes. I recommend you check them out. (You can find a list, for example, at the bottom of the Probabilistic Graphical Models site.) (Note: Stanford does not offer official credit for these classes.)