Spark clusters
A Spark cluster is made up of two types of processes: a driver program and multiple executors. In local mode, all of these processes run within the same JVM. On a cluster, they usually run on separate nodes.
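To make this concrete, here is a minimal sketch showing how the master URL passed to SparkContext selects between these two modes (the object name and the small job it runs are our own illustration, not part of Spark):

import org.apache.spark.{SparkConf, SparkContext}

object ClusterModesSketch {
  def main(args: Array[String]): Unit = {
    // "local[2]" runs the driver and two executor threads inside this
    // single JVM; a URL such as "spark://IP:PORT" would instead launch
    // executors on the worker nodes of a standalone cluster.
    val conf = new SparkConf()
      .setAppName("ClusterModesSketch")
      .setMaster("local[2]")
    val sc = new SparkContext(conf)

    // A trivial job, just so the executor threads have work to do
    println(sc.parallelize(1 to 100).reduce(_ + _))

    sc.stop()
  }
}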
For example, a typical cluster that runs in Spark's standalone mode (that is, using Spark's
built-in cluster-management modules) will have:
• A master node that runs the Spark standalone master process as well as the driver
program
• A number of worker nodes, each running an executor process
While we will be using Spark's local standalone mode throughout this book to illustrate concepts and examples, the same Spark code that we write can be run on a Spark cluster. To run the preceding example on a Spark standalone cluster, we could simply pass in the URL of the master node as follows:
>MASTER=spark://IP:PORT ./bin/run-example org.apache.spark.examples.SparkPi
Here, IP is the IP address, and PORT is the port of the Spark master. This tells Spark to run
the program on the cluster where the Spark master process is running.
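Alternatively, the master URL can be set programmatically rather than through the MASTER environment variable. The following is a rough sketch assuming a standalone master listening on its default port, 7077; the object name and the master-host address are placeholders of our own:

import org.apache.spark.{SparkConf, SparkContext}

object SubmitToStandalone {
  def main(args: Array[String]): Unit = {
    // Programmatic equivalent of MASTER=spark://IP:PORT: setMaster points
    // the driver at the standalone master, which then schedules executors
    // on the cluster's worker nodes.
    val conf = new SparkConf()
      .setAppName("SparkPi")
      .setMaster("spark://master-host:7077") // replace with your master's address
    val sc = new SparkContext(conf)
    // ... run jobs against the cluster ...
    sc.stop()
  }
}

Note that hard-coding the master URL ties the code to one cluster, so passing it in from the environment or the command line, as shown earlier, is usually preferable.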
A full treatment of Spark's cluster management and deployment is beyond the scope of this book. However, we will briefly cover how to set up and use an Amazon EC2 cluster later in this chapter.
Note
For an overview of Spark's cluster deployment and application submission, take a look at the following links:
http://spark.apache.org/docs/latest/cluster-overview.html
http://spark.apache.org/docs/latest/submitting-applications.html