Working with data at large scales requires parallel computing to access large amounts of RAM and CPU cycles. Users need a quick and easy way to leverage these resources without becoming experts in parallel computing. IPython addresses this need with parallel computing support built around a high-level API that covers a wide range of use cases with excellent performance. This API enables Python functions, along with their arguments, to be scheduled and called on parallel computing resources using a number of different scheduling algorithms. Programs written using IPython Parallel scale across multicore CPUs, clusters, and supercomputers with no modification, and can be run, shared, and monitored in a web browser using the IPython Notebook. In this talk I will cover the basics of this API and give examples of how it can be used to parallelize your own code.
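
To give a flavor of the API described above, here is a minimal sketch of submitting work to a cluster with the IPython.parallel Client. It assumes a local cluster has already been started (for example with "ipcluster start -n 4"); in later IPython releases this module moved to the separate ipyparallel package.

    # Minimal sketch: assumes a running cluster started with "ipcluster start -n 4"
    from IPython.parallel import Client

    rc = Client()                     # connect to the running cluster
    dview = rc[:]                     # DirectView: address all engines at once
    lview = rc.load_balanced_view()   # load-balanced view: dynamic task scheduling

    def slow_square(x):
        import time
        time.sleep(1)                 # stand-in for real work
        return x * x

    # Split the inputs evenly across engines and wait for the results...
    squares = dview.map_sync(slow_square, range(8))

    # ...or let the scheduler hand out tasks as engines become free.
    async_result = lview.map_async(slow_square, range(8))
    print(async_result.get())

The two views illustrate two of the scheduling styles mentioned above: the DirectView pushes work to specific engines, while the load-balanced view lets the scheduler decide where each task runs.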

This talk was presented at PyData NYC 2012: nyc2012.pydata.org/. If you are interested in this topic, be sure to check out PyData Silicon Valley in March of 2013: sv2013.pydata.org/
