If you compare the code operating on the RDDs inside the preceding foreachRDD block with that used in Chapter 1, Getting Up and Running with Spark, you will notice that it is virtually the same code. This shows that we can apply any RDD-based processing we wish within the streaming setting by operating directly on the underlying RDDs, in addition to using the built-in higher-level streaming operations.
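To make this concrete, the following is a minimal sketch (not the book's exact listing) of how such per-batch metrics might be computed with ordinary RDD operations inside foreachRDD. The event format of comma-separated user, product, and price fields, the port number, and the object name StreamingAnalyticsSketch are assumptions made for illustration:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingAnalyticsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("StreamingAnalyticsSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Assumed producer: "user,product,price" events sent to port 9999
    val events = ssc.socketTextStream("localhost", 9999).map { line =>
      val Array(user, product, price) = line.split(",")
      (user, product, price.toDouble)
    }

    events.foreachRDD { (rdd, time) =>
      // Ordinary RDD operations, exactly as in a batch program
      val numPurchases = rdd.count()
      val uniqueUsers = rdd.map { case (user, _, _) => user }.distinct().count()
      val totalRevenue = rdd.map { case (_, _, price) => price }.sum()
      val productsByPopularity = rdd
        .map { case (_, product, _) => (product, 1) }
        .reduceByKey(_ + _)
        .collect()
        .sortBy(-_._2)
      val (mostPopular, count) =
        if (productsByPopularity.isEmpty) ("N/A", 0)
        else productsByPopularity.head

      println(s"== Batch start time: $time ==")
      println(s"Total purchases: $numPurchases")
      println(s"Unique users: $uniqueUsers")
      println(s"Total revenue: $totalRevenue")
      println(s"Most popular product: $mostPopular with $count purchases")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

Note that map, distinct, reduceByKey, and the other methods used here behave exactly as they would in a batch job; the only streaming-specific piece is the foreachRDD call that hands each batch's RDD to this code.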
Let's run the streaming program again by calling sbt run and selecting StreamingAnalyticsApp.
Tip
Remember that you might also need to restart the producer application if you previously terminated it. Do this before starting the streaming application.
After about 10 seconds, you should see output from the streaming program similar to the
following:
...
14/11/15 21:27:30 INFO spark.SparkContext: Job finished:
collect at Streaming.scala:125, took 0.071145 s
== Batch start time: 2014/11/15 9:27 PM ==
Total purchases: 16
Unique users: 10
Total revenue: 123.72
Most popular product: iPad Cover with 6 purchases
...
You can again terminate the streaming program using Ctrl + C .