Computation measure: Flink vs Spark by Rebaca Analytics.


The quest to replace Hadoop’s aging MapReduce is a bit like waiting for buses in Britain. You wait for ages, then several come along at once. We already have Tez and Spark in the mix, but there’s a new contender for the heart of Hadoop, and it comes from Europe: Apache Flink (German for “quick” or “nimble”).

Overview

Apache Flink is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.

Apache Flink Architecture and Process Model
•    The Processes
•    Component Stack
•    Projects and Dependencies

Flink sprang from the Technical University of Berlin, where it was known as Stratosphere before it entered Apache’s incubator program. It’s a replacement for Hadoop MapReduce that works in both batch and streaming modes, eliminating mapping and reducing jobs in favor of a directed-graph approach that leverages in-memory storage for massive performance gains.

Similarities:

Spark and Flink have a lot in common, and their APIs for a simple word count operation look strikingly similar.
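For comparison, here is a sketch of that word count in each framework, written against Flink’s classic Scala DataSet API and Spark’s RDD API (object names and input data are illustrative):

```scala
import org.apache.flink.api.scala._

// Flink favors case classes: group and sum by named fields.
case class Word(word: String, count: Int)

object FlinkWordCount {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements("to be or not to be")
    val counts = text
      .flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .map(Word(_, 1))
      .groupBy("word")   // group by the case-class field
      .sum("count")
    counts.print()
  }
}
```

And roughly the same job in Spark, using the tuple-based PairRDD style:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkWordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("wordcount").setMaster("local[*]"))
    val counts = sc.parallelize(Seq("to be or not to be"))
      .flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .map((_, 1))          // tuple-based PairRDD construction
      .reduceByKey(_ + _)
    counts.collect().foreach(println)
  }
}
```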

As you can see, while there are some differences in syntactic sugar, the APIs are rather similar. I’m a fan of Flink’s use of case classes over Spark’s tuple-based PairRDD construction, but there’s not much in it. Given that Apache Spark is now a stable technology used in many enterprises across the world, another data processing engine seems superfluous. Why should we care about Flink?
The reason Flink may be important lies in the dirty little secret at the heart of Spark Streaming, one you may have come across in a production setting: Instead of being a pure stream-processing engine, it is in fact a fast-batch operation working on a small part of incoming data during a unit of time (known in Spark documentation as “micro-batching”). For many applications, this is not an issue, but where low latency is required (such as financial systems and real-time ad auctions) every millisecond lost can lead to monetary consequences.
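To make that latency floor concrete, here is a toy sketch in plain Scala (this is not Spark’s API; names are illustrative) of how micro-batching bounds end-to-end latency from below by the batch interval — an event arriving just after a batch closes must wait almost a full interval before it is even handed to the batch job:

```scala
// Toy model of micro-batching: events are collected for a fixed
// interval, then processed together as one small batch.
object MicroBatchSketch {
  case class Event(timestampMs: Long, payload: String)

  // Assign each event to the batch covering its timestamp.
  def batchOf(e: Event, intervalMs: Long): Long =
    e.timestampMs / intervalMs

  // Worst-case queueing delay before the event's batch is closed:
  // the time remaining until the end of its interval.
  def queueingDelayMs(e: Event, intervalMs: Long): Long =
    (batchOf(e, intervalMs) + 1) * intervalMs - e.timestampMs
}
```

With a one-second batch interval, an event arriving 1 ms into a batch waits nearly a full second before processing even begins — the cost that a true per-event engine avoids.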

Flink flips this on its head. Whereas Spark is a batch processing framework that can approximate stream processing, Flink is primarily a stream processing framework that can look like a batch processor. Immediately you get the benefit of being able to use the same algorithms in both streaming and batch modes (exactly as you do in Spark), but you no longer have to turn to a technology like Apache Storm if you require low-latency responsiveness. You get all you need in one framework, without the overhead of programming and maintaining a separate cluster with a different API.

Also, Flink borrows from the crusty-but-still-has-a-lot-to-teach-us RDBMS to bring us an aggressive optimization engine. Similar to a SQL database’s query planner, the Flink optimizer analyzes the code submitted to the cluster and produces what it thinks is the best pipeline for running on that particular setup (which may be different if the cluster is larger or smaller).
For extra speed, it allows iterative processing to take place on the same nodes rather than having the cluster run each iteration independently. With a bit of reworking of your code to give the optimizer some hints, it can increase performance even further by performing delta iterations only on parts of your data set that are changing (in some cases offering a five-fold speed increase over Flink’s standard iterative process).
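The delta-iteration idea can be sketched in plain Scala (this is not Flink’s API; names are illustrative): each round revisits only the “workset” of elements that changed in the previous round, rather than the whole solution set:

```scala
// Toy delta iteration: propagate the minimum label along edges until
// nothing changes, reprocessing only vertices updated last round.
object DeltaIterationSketch {
  def minLabelPropagation(edges: Map[Int, Set[Int]],
                          labels0: Map[Int, Int]): Map[Int, Int] = {
    var solution = labels0
    var workset: Set[Int] = labels0.keySet   // initially, everything is "changed"
    while (workset.nonEmpty) {
      // Only changed vertices push their labels to neighbors.
      val candidates = for {
        v <- workset
        n <- edges.getOrElse(v, Set.empty[Int])
        if solution(v) < solution(n)
      } yield (n, solution(v))
      val changed = candidates
        .groupBy(_._1)
        .map { case (v, ls) => v -> ls.map(_._2).min }
        .filter { case (v, l) => l < solution(v) }
      solution = solution ++ changed
      workset = changed.keySet               // next round touches only what changed
    }
    solution
  }
}
```

As the labels converge, the workset shrinks, so late iterations touch only a small fraction of the data — the effect behind Flink’s claimed speedups over full re-iteration.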
Flink has a few more tricks up its sleeve. It is built to be a good YARN citizen (which Spark has not quite achieved yet), and it can run existing MapReduce jobs directly on its execution engine, providing an incremental upgrade path that will be attractive to organizations already heavily invested in MapReduce and loath to start from scratch on a new platform. Flink even works on Hortonworks’ Tez runtime, where it sacrifices some performance for the scalability that Tez can provide.
In addition, Flink takes the approach that a cluster should manage itself rather than require a heavy dose of user tuning. To this end, it has its own memory management system, separate from Java’s garbage collector. While this is normally Something You Shouldn’t Do, high-performance clustered computing changes the rules somewhat. By managing memory explicitly, Flink almost eliminates the memory spikes you often see on Spark clusters. To aid in debugging, Flink supplies its equivalent of a SQL EXPLAIN command. You can easily get the cluster to dump a JSON representation of the pipelines it has constructed for your job, and you can get a quick overview of the optimizations Flink has performed through a built-in HTML viewer, providing better transparency than in Spark at times.
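A minimal sketch of dumping that JSON plan, assuming Flink’s classic Scala DataSet API is on the classpath (the job and output path are illustrative):

```scala
import org.apache.flink.api.scala._

// Sketch only: ask the environment for the optimizer's chosen pipeline
// as JSON before submitting the job -- Flink's rough EXPLAIN equivalent.
object ExplainSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    env.fromElements("a", "b", "a")
      .map(w => (w, 1))
      .groupBy(0)
      .sum(1)
      .writeAsText("/tmp/wordcount-out")  // a sink completes the dataflow
    println(env.getExecutionPlan())       // JSON plan, viewable in the HTML plan visualizer
  }
}
```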
But let’s not count out Spark yet. Flink is still an incubating Apache project. It has only been tested in smaller installations of up to 200 nodes and has limited production deployment at this time (although it’s said to be in testing at Spotify). Spark has a large lead when it comes to mature machine learning and graph processing libraries, although Flink’s maintainers are working on their own versions of MLlib and GraphX. Flink currently lacks a Python API, and most important, it does not have a REPL (read-eval-print-loop), so it’s less attractive to data scientists — though again, these deficiencies have been recognized and are being remedied. I’d bet on both a REPL and Python support arriving before the end of 2015.
Flink seems to be a project with definite promise. If you’re currently using Spark, it might be worthwhile standing up a Flink cluster for evaluation purposes (especially if you’re using Spark Streaming). However, I wonder whether all of the “next-generation MapReduce” communities (including Tez and Impala along with Spark and Flink) might be better served if there were less duplication of effort and more cooperation among the groups. Can’t we all just get along?

Differences:

In contrast to Apache Flink, Apache Spark before version 1.5.x was not capable of handling data sets larger than the available RAM (cached RDDs).

Apache Flink is optimized for cyclic or iterative processes through iterative transformations on collections, achieved by optimizing join algorithms, chaining operators, and reusing partitioning and sorting. Flink is also a strong tool for batch processing. Flink streaming processes data streams as true streams: data elements are immediately “moved” through a streaming program as soon as they arrive, which makes it possible to perform flexible window operations on streams.
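As a plain-Scala illustration of that per-element model (this is not Flink’s DataStream API; names are illustrative), a tumbling event-time window assigns each element to a window the moment it arrives, rather than holding it for a processing batch:

```scala
// Toy tumbling window: count events per (window, key), updating the
// running aggregate on each arrival instead of per collected batch.
object TumblingWindowSketch {
  case class Event(timestampMs: Long, key: String)

  def windowCounts(stream: Seq[Event], windowMs: Long): Map[(Long, String), Int] =
    stream.foldLeft(Map.empty[(Long, String), Int]) { (acc, e) =>
      val window = e.timestampMs / windowMs      // window index by event time
      val k = (window, e.key)
      acc + (k -> (acc.getOrElse(k, 0) + 1))     // update immediately on arrival
    }
}
```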
Apache Spark, on the other hand, is based on resilient distributed datasets (RDDs). This (mostly) in-memory data structure underpins Spark’s functional programming paradigm, and it enables big batch calculations by pinning data sets in memory. Spark Streaming wraps data streams into mini-batches: it collects all data that arrives within a certain period of time and runs a regular batch program on the collected data. While that batch program is running, the data for the next mini-batch is collected in parallel.

Yahoo! Benchmarks Apache Flink, Spark and Storm

Yahoo! has benchmarked three of the main stream processing frameworks: Apache Flink, Spark and Storm.
For stream processing, Yahoo! used to rely on S4, a platform developed internally, but in 2012 it decided to replace it with Apache Storm. Yahoo! currently uses Storm extensively for various data processing needs, running it on roughly 2,300 nodes. But new data processing frameworks have taken center stage lately, and Yahoo! wanted to compare their performance against Storm’s, so it devised a benchmark published on GitHub.
In this benchmark, Yahoo! compared Apache Flink, Spark and Storm. The application tested is related to advertisement, having 100 campaigns and 10 ads per campaign. Five Kafka nodes are used to generate JSON events, which are deserialized, passed through a filter, joined with their associated campaigns, and stored into a Redis node. Kafka varied the number of events generated in 10 steps from 50K/s to 170K/s. The entire setup of the benchmark, including the hardware configuration used, along with various observations on what was left out and what could be improved, is presented in this Yahoo! Engineering post. To compare the three frameworks, Yahoo! measured the percentile latency needed for a tuple to be completely processed at each event-emitting rate.
According to Yahoo!, Flink and Storm showed similar behavior: percentile latency varies linearly up to the 99th percentile, where it grows exponentially. Storm 0.10.0 could not keep up starting at an event rate of 135K/s. Storm 0.11.0 with acking had serious trouble processing data at 150K/s. With acking disabled, Storm 0.11.0 performed much better, beating Flink, but Yahoo! acknowledges that “with acking disabled, the ability to report and handle tuple failures is disabled also.”
In Yahoo!’s tests, Spark’s results were considerably worse than Flink’s and Storm’s, with latency climbing to 70 seconds without back-pressure and 120 seconds with back-pressure, compared with less than 1 second for Flink and Storm, as shown in the accompanying chart (percentile latency depicted for the 150K/s rate).