Installing and setting up Spark locally
Spark can be run using the built-in standalone cluster scheduler in local mode. This
means that all the Spark processes run within the same JVM, effectively as a single,
multithreaded instance of Spark. Local mode is very useful for prototyping, development,
debugging, and testing. However, this mode can also be useful in real-world scenarios
to perform parallel computation across multiple cores on a single computer.
As Spark's local mode is fully compatible with cluster mode, programs written and
tested locally can be run on a cluster with just a few additional steps.
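For example, once Spark is installed as described below, the interactive Scala shell can be started in local mode with four worker threads (or with local[*] to use one thread per available core):

>./bin/spark-shell --master local[4]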
The first step in setting up Spark locally is to download the latest version (1.2.0 at the
time of writing). The download page of the Spark project website, found at
http://spark.apache.org/downloads.html, contains links to download various versions
as well as to obtain the latest source code via GitHub.
Tip
The Spark project documentation website at http://spark.apache.org/docs/latest/ is a
comprehensive resource to learn more about Spark. We highly recommend that you explore it!
Spark needs to be built against a specific version of Hadoop in order to access the Hadoop
Distributed File System (HDFS) as well as standard and custom Hadoop input sources.
The download page provides prebuilt binary packages for Hadoop 1, CDH4 (Cloudera's
Hadoop Distribution), MapR's Hadoop distribution, and Hadoop 2 (YARN). Unless you
wish to build Spark against a specific Hadoop version, we recommend that you download
the prebuilt Hadoop 2.4 package from an Apache mirror using this link:
http://www.apache.org/dyn/closer.cgi/spark/spark-1.2.0/spark-1.2.0-bin-hadoop2.4.tgz.
Spark requires the Scala programming language (version 2.10.4 at the time of writing)
in order to run. Fortunately, the prebuilt binary package comes with the Scala runtime
packages included, so you don't need to install Scala separately in order to get started.
However, you will need to have a Java Runtime Environment (JRE) or Java Development
Kit (JDK) installed (see the software and hardware list in this book's code bundle
for installation instructions).
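You can check that a suitable Java installation is available by querying its version from the command line:

>java -version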
Once you have downloaded the Spark binary package, unpack the contents of the package
and change into the newly created directory by running the following commands:
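>tar xfvz spark-1.2.0-bin-hadoop2.4.tgz
>cd spark-1.2.0-bin-hadoop2.4

These commands assume the prebuilt Hadoop 2.4 package from the download link given earlier; substitute the filename of the package you actually downloaded.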