There has been a recent flurry of websites and online courses designed to teach more people to program computers competently. In this talk we look at some of the available data on what works and what doesn't, and in particular at the features of Python that come into play when it is used as a vehicle for learning programming and computer science.
IPython is a great tool for doing interactive exploration of code and data. IPython.parallel is the part of IPython that enables interactive exploration of parallel code, and it aims to make distributing your work on local clusters or AWS simple and straightforward. The tutorial will cover the basics of getting IPython.parallel up and running in various environments, and how to do interactive and asynchronous parallel computing with IPython. Some of IPython's cooler interactive features will be demonstrated, such as automatically parallelizing code with magics in the IPython Notebook and interactive debugging of remote execution, all with the help of real-world examples.
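The basic workflow the tutorial covers can be sketched as follows. This is a minimal illustration, assuming a local cluster has already been started (e.g. with `ipcluster start -n 4`); the task function `slow_square` is a hypothetical stand-in for real work.

```python
def slow_square(x):
    """Stand-in for a real task: square a number (illustrative only)."""
    return x * x


def run_on_cluster():
    """Distribute slow_square across all engines of a running local cluster.

    Requires a cluster started beforehand, e.g. `ipcluster start -n 4`.
    """
    from IPython.parallel import Client

    rc = Client()        # connects using the cluster's connection file
    view = rc[:]         # a DirectView spanning every engine
    # map_sync scatters the calls across engines and gathers the results
    return view.map_sync(slow_square, range(10))
```

Calling `run_on_cluster()` with four engines running returns `[0, 1, 4, 9, ...]` just as a local `map` would, which is the point: the parallel version of the code stays close to the serial one.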
Matplotlib is the leading scientific visualization tool for Python. Though its ability to generate publication-quality plots is well known, some of its more advanced features are less often utilized. In this tutorial, we will explore the ability to create custom mouse and key bindings within Matplotlib plot windows, giving participants the background and tools needed to create simple cross-platform GUI applications within Matplotlib. After going through the basics, we will walk through some more intricate scripts, including a simple Minesweeper game and a 3D interactive Rubik's cube, both implemented entirely in Matplotlib.
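The binding mechanism underlying all of this is Matplotlib's `mpl_connect` event API. A minimal sketch, with illustrative handler logic (the real tutorial scripts are of course more elaborate):

```python
# Minimal sketch of Matplotlib's event-binding API. Handler behavior
# here (clearing on 'c', plotting clicked points) is illustrative.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

pressed_keys = []  # record of keys seen, kept simple for demonstration


def on_key(event):
    """Key handler: remember the key pressed; clear the axes on 'c'."""
    pressed_keys.append(event.key)
    if event.key == "c":
        event.canvas.figure.gca().cla()
        event.canvas.draw_idle()


def on_click(event):
    """Mouse handler: plot a red point wherever the user clicks in the axes."""
    if event.inaxes is not None:
        event.inaxes.plot(event.xdata, event.ydata, "ro")
        event.canvas.draw_idle()


fig, ax = plt.subplots()
# mpl_connect wires the callbacks into the figure's event loop
fig.canvas.mpl_connect("key_press_event", on_key)
fig.canvas.mpl_connect("button_press_event", on_click)
# In a GUI backend, plt.show() would start the interactive loop here.
```

Because these bindings work across all of Matplotlib's GUI backends, the same script behaves identically on Linux, macOS, and Windows.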
IPython has evolved from an enhanced interactive shell into a large and fairly complex set of components that include a graphical Qt-based console, a parallel computing framework and a web-based notebook interface. All of these seemingly disparate tools actually serve a unified vision of interactive computing that covers everything from one-off exploratory codes to the production of entire books made from live computational documents. In this talk I will attempt to show how these ideas form a coherent whole and how they are represented in IPython's codebase. I will also discuss the evolution of the project, attempting to draw some lessons from the last decade as we plan for the future of scientific computing and data analysis.
Our data pipeline is growing like crazy, processing more than 30 terabytes of data every day and more than tripling in the last year alone. In 2011, we moved our data pipeline to a Hadoop stack in order to enable horizontal scalability for future growth. Our optimization tools used for data exploration, aggregations, and general data hackery are critical for updating budgets and optimization data. However, these tools are built in Python, and integrating them with our Hadoop data pipeline has been an enormous challenge. Our continued explosive growth demands increased efficiency, whether that's in simplifying our infrastructure or building more shared services. Over the past few months, we evaluated multiple solutions for integrating Python with Hadoop, including Hadoop Streaming, Pig with Jython UDFs, writing MapReduce jobs in Jython, and, of course, why not just do it in Java? In our talk, we'll explore the different Python-Hadoop integration options, share our evaluation process and best practices, and invite an interactive dialogue of lessons learned.
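Of the options above, Hadoop Streaming is the simplest to picture: the mapper and reducer are ordinary programs that read lines on stdin and write lines on stdout, so any Python script can slot into the pipeline. A sketch using the classic word-count example (illustrative, not our production code):

```python
# Hadoop Streaming sketch: mapper and reducer as plain line-in/line-out
# functions. Hadoop handles the shuffle/sort between the two stages.
from itertools import groupby


def mapper(lines):
    """Emit 'word<TAB>1' for every word in the input split."""
    for line in lines:
        for word in line.split():
            yield "%s\t1" % word


def reducer(lines):
    """Sum the counts per word; input arrives sorted by key after shuffle."""
    parsed = (line.rstrip("\n").split("\t", 1) for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        yield "%s\t%d" % (word, total)
```

Wired into scripts that read `sys.stdin` and write `sys.stdout`, these run under `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py ...`. The appeal is that the Python stays pure Python; the cost, as we'll discuss, is the serialization overhead of pushing every record through text pipes.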