Example 7-7. build.sbt file for a Spark application built with sbt 0.13
import AssemblyKeys._

name := "Simple Project"

version := "1.0"

organization := "com.databricks"

scalaVersion := "2.10.3"

libraryDependencies ++= Seq(
  // Spark dependency; marked "provided" because spark-submit supplies
  // the Spark classes at runtime, so they need not be bundled in the JAR
  "org.apache.spark" % "spark-core_2.10" % "1.2.0" % "provided",
  // Third-party libraries
  "net.sf.jopt-simple" % "jopt-simple" % "4.3",
  "joda-time" % "joda-time" % "2.0"
)

// This statement includes the assembly plug-in capabilities
assemblySettings

// Configure the JAR used with the assembly plug-in
jarName in assembly := "my-project-assembly.jar"

// A special option to exclude Scala itself from our assembly JAR, since Spark
// already bundles Scala.
assemblyOption in assembly :=
  (assemblyOption in assembly).value.copy(includeScala = false)
The first line in this build file imports functionality from an sbt build plug-in
that supports creating project assembly JARs. To enable this plug-in, we also have
to include a small file in a project/ directory that lists the dependency on the
plug-in. Simply create a file called project/assembly.sbt and add the following to
it: addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2"). The exact version of
sbt-assembly you use might differ if you build with a newer version of sbt; Example
7-8 works with sbt 0.13 (a newer-style configuration is sketched after Example 7-8).
Example 7-8. Adding the assembly plug-in to an sbt project build
# Display contents of project/assembly.sbt
$ cat project/assembly.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
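If you build with a newer sbt and a more recent sbt-assembly (0.12 or later, where
the plug-in became an sbt auto plug-in), the settings in Example 7-7 change shape
slightly: the import AssemblyKeys._ line and the assemblySettings statement go away,
and jarName becomes assemblyJarName. The following is a minimal sketch, assuming
sbt-assembly 0.14.x; the exact version number is illustrative:

// project/assembly.sbt (version number is illustrative)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// In build.sbt: no AssemblyKeys import or assemblySettings statement needed
assemblyJarName in assembly := "my-project-assembly.jar"
assemblyOption in assembly :=
  (assemblyOption in assembly).value.copy(includeScala = false)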
Now that we have a well-defined build, we can create a fully assembled Spark
application JAR (Example 7-9).
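As a minimal sketch of that step, assuming sbt's default target/scala-2.10/ output
layout and a hypothetical main class name (com.databricks.SimpleApp is a placeholder,
not a class defined in this chapter):

# Build the assembly JAR; the name comes from jarName in assembly
$ sbt assembly

# Check that third-party classes were bundled (joda-time, for example)
$ jar tf target/scala-2.10/my-project-assembly.jar | grep joda

# Submit the application with the assembly JAR
$ spark-submit --class com.databricks.SimpleApp \
    target/scala-2.10/my-project-assembly.jar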