Running Theano on EC2

Inspired by Sander Dieleman's internship at Spotify, I've been playing around with deep learning using Theano. Theano is this Python package that lets you define symbolic expressions (cool), does automatic differentiation (really cool), and compiles them down to fast native code that runs on a CPU/GPU (super cool). It's built by Yoshua Bengio's deep learning team up in Montreal.
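
To make that concrete, here's a minimal sketch of the workflow (the expression is just a toy example): define a symbolic expression, let Theano differentiate it, and compile it into a callable function.

```python
import theano
import theano.tensor as T

x = T.dscalar('x')                 # a symbolic scalar
y = x ** 2 + 3 * x                 # a symbolic expression built from it
dy = T.grad(y, x)                  # automatic differentiation: 2x + 3

f = theano.function([x], [y, dy])  # compiles the graph to fast code
print(f(2.0))                      # -> [array(10.0), array(7.0)]
```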

This isn't going to be a long blog post – I just wanted to share two pro tips:

  1. I was messing around for hours trying to get Theano running on the GPU instances in EC2. Turns out Andreas Jansson, a coworker at Spotify, has already built a ready-to-use AMI. When you start an EC2 instance, search for the gpu_theano AMI. (AMIs are Amazon's virtual machine images that you boot your system from.) The gpu_theano AMI runs Ubuntu 14.04 and comes with a bunch of stuff pre-installed. Andreas also has a tool to spin it up from the command line, but I couldn't get it working (somehow the instances it created weren't accessible over SSH), so I ended up just booting machines from the AWS Management Console. Once an instance is up, you can check that Theano actually sees the GPU – see the sketch after this list.
  2. The list price for the g2.2xlarge instances (the ones with GPUs) is $0.65/h. If you end up running something for a week, that's just above $100. The spot instance price, however, is (currently) only $0.0641/h – less than 10% of the list price. The downside with spot instances is that you're using excess EC2 capacity, so there's a small chance your machine will be taken down at any point. But so far supply generally seems to outstrip demand. The price looks fairly stable, and you can always checkpoint data to S3 to persist it (a minimal sketch of that is at the end of the post).
![image](/assets/2014/08/Screen-Shot-2014-08-19-at-9.18.15-AM.png)
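
Regarding the first tip: once an instance booted from the gpu_theano AMI is up, a quick way to confirm Theano is actually running on the GPU is to set the device flag and do a small computation. Here's a minimal sketch using Theano's standard configuration flags (the script name is arbitrary):

```python
# run with: THEANO_FLAGS=device=gpu,floatX=float32 python gpu_check.py
import numpy as np
import theano
import theano.tensor as T

print(theano.config.device)   # should print 'gpu' (or 'gpu0')

# a big matrix product is a decent smoke test for GPU execution
v = theano.shared(np.random.randn(2000, 2000).astype(theano.config.floatX))
f = theano.function([], T.dot(v, v))
f()
```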
My deep learning model is about 50x faster on a g2.2xlarge (which has 1,536 GPU cores) compared to a c3.4xlarge (which has 16 CPU cores) so the speedup is pretty substantial.
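
As for checkpointing to S3 from a spot instance: here's a minimal sketch using boto, pickling a list of parameter arrays and uploading them so nothing is lost if the instance gets reclaimed. The bucket name, key name, and `model.params` are made up for illustration.

```python
import cPickle as pickle
import boto

def checkpoint_to_s3(params, bucket_name='my-theano-checkpoints',
                     key_name='checkpoints/model.pkl'):
    # uses the AWS credentials configured on the instance
    conn = boto.connect_s3()
    bucket = conn.get_bucket(bucket_name)
    key = bucket.new_key(key_name)
    key.set_contents_from_string(pickle.dumps(params, protocol=-1))

# e.g. checkpoint_to_s3([p.get_value() for p in model.params])
```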