With over an exabyte of new data created on the Internet every day, wouldn't it be great if we could use Erlang to make sense of it?

Erlang has great concurrency and built-in MapReduce, so shouldn't it be a better fit than Hadoop?
Unfortunately, Erlang today doesn't scale well enough, and it lacks the I/O handling capabilities to match what the Hadoop and Grid computing communities can achieve. We'll talk about how those systems scale to thousands of servers in a cluster and handle gigabytes of I/O per second, and how we can learn from them to make Erlang scale to match.
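As a concrete illustration of the concurrency primitives the talk builds on, here is a minimal parallel map written with plain Erlang processes and message passing; the `pmap` module and function names are illustrative, not something the talk defines:

```erlang
%% Minimal parallel map: spawn one process per list element and
%% collect the results in the original order by matching on unique refs.
-module(pmap).
-export([pmap/2]).

pmap(F, List) ->
    Parent = self(),
    %% Start one worker per element; tag each reply with a unique ref.
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    %% Selective receive on each ref preserves the input order.
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```

For example, `pmap:pmap(fun(X) -> X * X end, [1, 2, 3])` evaluates to `[1, 4, 9]`. This sketch ignores worker crashes and timeouts; a production version would monitor the spawned processes, which is exactly where the scaling and I/O questions the talk raises come in.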

Talk objectives:

Understand how big data systems work today.

Explore the options for improving Erlang and its ecosystem so it becomes the best solution for handling big data. A side benefit would be improved performance in other large Erlang deployments, such as some of the cloud-based systems now being built with Erlang.

Target audience:

Anybody interested in Erlang performance and scalability, big data, or high-performance computing.
