CHAPTER 10
Spark Streaming
Many applications benefit from acting on data as soon as it arrives. For example, an
application might track statistics about page views in real time, train a machine
learning model, or automatically detect anomalies. Spark Streaming is Spark's module
for such applications. It lets users write streaming applications using an API very
similar to that of batch jobs, and thus reuse much of the skills and even code they
built for those.
Much like Spark is built on the concept of RDDs, Spark Streaming provides an
abstraction called DStreams, or discretized streams. A DStream is a sequence of data
arriving over time. Internally, each DStream is represented as a sequence of RDDs
arriving at each time step (hence the name "discretized"). DStreams can be created
from various input sources, such as Flume, Kafka, or HDFS. Once built, they offer
two types of operations: transformations, which yield a new DStream, and output
operations, which write data to an external system. DStreams provide many of the
same operations available on RDDs, plus new operations related to time, such as
sliding windows.
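
To make this concrete, here is a minimal Scala sketch of a DStream pipeline. The
socket source on localhost:7777 and the "error" filter are assumptions for
illustration; the point is the shape of the program: create a stream, apply a
transformation, attach an output operation, then start the computation.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingExample {
  def main(args: Array[String]): Unit = {
    // Two local threads: one to receive data, one to process it
    val conf = new SparkConf().setMaster("local[2]").setAppName("DStreamExample")
    // Batch up incoming data into RDDs at 1-second intervals
    val ssc = new StreamingContext(conf, Seconds(1))

    // Create a DStream from a socket source (host and port are assumptions)
    val lines = ssc.socketTextStream("localhost", 7777)

    // Transformation: yields a new DStream of lines containing "error"
    val errorLines = lines.filter(_.contains("error"))

    // Output operation: print the first elements of each batch
    errorLines.print()

    ssc.start()             // start receiving and processing data
    ssc.awaitTermination()  // block until the job is stopped or fails
  }
}
```

Note that, unlike batch jobs, nothing runs until ssc.start() is called; the
transformations merely describe the computation to perform on each batch.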
Unlike batch programs, Spark Streaming applications need additional setup in order
to operate 24/7. We will discuss checkpointing, the main mechanism Spark Streaming
provides for this purpose, which lets it store data in a reliable file system such as
HDFS. We will also discuss how to restart applications on failure or set them to be
automatically restarted.
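
As a preview, the sketch below shows the usual shape of a checkpointed driver in
Scala: the checkpoint directory path is a hypothetical example, and the context is
built in a factory function so that StreamingContext.getOrCreate can recover from
the checkpoint after a failure, or build a fresh context on first launch.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointedApp {
  // Hypothetical checkpoint directory; any reliable file system path works
  val checkpointDir = "hdfs:///spark/checkpoints"

  // Factory used both on the first run and when rebuilding after a failure
  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("CheckpointedApp")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint(checkpointDir)  // enable periodic checkpointing
    // ... define DStreams and output operations here ...
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recover from the checkpoint if one exists; otherwise build a new context
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```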
Finally, as of Spark 1.1, Spark Streaming is available only in Java and Scala.
Experimental Python support was added in Spark 1.2, though it supports only text
data. We will focus this chapter on Java and Scala to show the full API, but similar
concepts apply in Python.
 