---
layout: page
title: "Apache Spark Interpreter for Apache Zeppelin"
description: "Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution engine."
group: interpreter
---
{% include JB/setup %}
# Spark Interpreter for Apache Zeppelin
| Feature | Description |
| ------- | ----------- |
| Support multiple versions of Spark | You can run different versions of Spark in one Zeppelin instance |
| Support multiple versions of Scala | You can run Spark builds for different Scala versions (2.12/2.13) in one Zeppelin instance |
| Support multiple languages | Scala, SQL, Python and R are supported. You can also collaborate across languages, e.g. write a Scala UDF and use it in PySpark |
| Support multiple execution modes | Local, Standalone, Yarn and K8s |
| Interactive development | The interactive development experience increases your productivity |
| Inline visualization | You can visualize Spark Dataset/DataFrame via Python/R's plotting libraries, and even build SparkR Shiny apps in Zeppelin |
| Multi-tenancy | Multiple users can work in one Zeppelin instance without affecting each other |
| REST API support | You can submit Spark jobs not only via the Zeppelin notebook UI, but also via its REST API (you can use Zeppelin as a Spark job server) |
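To illustrate the cross-language support mentioned above, here is a minimal sketch that registers a UDF in a Scala paragraph and calls it from a PySpark paragraph through the shared SparkSession. The UDF name `toUpper` and the `users` view are made up for this example.

```scala
%spark
// register a Scala UDF on the shared SparkSession
spark.udf.register("toUpper", (s: String) => s.toUpperCase)
```

```python
%spark.pyspark
# the SparkSession is shared across %spark and %spark.pyspark, so the Scala UDF is visible here
df = spark.createDataFrame([("jeff", 23), ("andy", 20)], ["name", "age"])
df.createOrReplaceTempView("users")
spark.sql("select toUpper(name) as upper_name from users").show()
```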
## Play Spark in Zeppelin docker
For beginners, we suggest playing with Spark in the Zeppelin docker image.
In the Zeppelin docker image, we have already installed
miniconda and lots of [useful python and R libraries](https://github.com/apache/zeppelin/blob/branch-0.10/scripts/docker/zeppelin/bin/env_python_3_with_R.yml)
including IPython and IRkernel prerequisites, so `%spark.pyspark` would use IPython and `%spark.ir` is enabled.
Without any extra configuration, you can run most of the tutorial notes under the `Spark Tutorial` folder directly.
First you need to download Spark, because no Spark binary distribution is shipped with Zeppelin.
For example, here we download Spark 3.1.2 to `/mnt/disk1/spark-3.1.2`,
mount it into the Zeppelin docker container, and run the following command to start the container.
```bash
docker run -u $(id -u) -p 8080:8080 -p 4040:4040 --rm -v /mnt/disk1/spark-3.1.2:/opt/spark -e SPARK_HOME=/opt/spark --name zeppelin apache/zeppelin:0.10.0
```
After running the above command, you can open `http://localhost:8080` to play with Spark in Zeppelin. We have only verified Spark local mode in the Zeppelin docker image; other modes may not work due to network issues.
`-p 4040:4040` exposes the Spark web UI, so that you can access it via `http://localhost:4040`.
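As a quick sanity check (a minimal sketch), you can print the Spark version in a new note; it should match the distribution mounted into the container:

```scala
%spark
// prints the version of the mounted Spark distribution, e.g. 3.1.2
println(spark.version)
```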
## Configuration
The Spark interpreter can be configured with properties provided by Zeppelin.
You can also set other Spark properties which are not listed in the table below. For a list of additional properties, refer to [Spark Available Properties](http://spark.apache.org/docs/latest/configuration.html#available-properties).
| Property | Default | Description |
| -------- | ------- | ----------- |
| `SPARK_HOME` | | Location of the Spark distribution |
| spark.master | local[*] | Spark master URI, e.g. spark://master_host:7077 |
| spark.submit.deployMode | | The deploy mode of the Spark driver program, either "client" or "cluster", which means to launch the driver program locally ("client") or remotely ("cluster") on one of the nodes inside the cluster. |
| spark.app.name | Zeppelin | The name of the Spark application. |
| spark.driver.cores | 1 | Number of cores to use for the driver process, only in cluster mode. |
| spark.driver.memory | 1g | Amount of memory to use for the driver process, i.e. where SparkContext is initialized, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g). |
| spark.executor.cores | 1 | The number of cores to use on each executor. |
| spark.executor.memory | 1g | Executor memory per worker instance, e.g. 512m, 32g. |
| spark.executor.instances | 2 | The number of executors for static allocation. |
| spark.files | | Comma-separated list of files to be placed in the working directory of each executor. Globs are allowed. |
| spark.jars | | Comma-separated list of jars to include on the driver and executor classpaths. Globs are allowed. |
| spark.jars.packages | | Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. The coordinates should be groupId:artifactId:version. If spark.jars.ivySettings is given, artifacts will be resolved according to the configuration in the file; otherwise artifacts will be searched for in the local Maven repo, then Maven Central, and finally any additional remote repositories given by the command-line option --repositories. |
| `PYSPARK_PYTHON` | python | Python binary executable to use for PySpark in both driver and executors (default is python). The property spark.pyspark.python takes precedence if it is set. |
| `PYSPARK_DRIVER_PYTHON` | python | Python binary executable to use for PySpark in driver only (default is `PYSPARK_PYTHON`). The property spark.pyspark.driver.python takes precedence if it is set. |
| zeppelin.pyspark.useIPython | false | Whether to use IPython in `%spark.pyspark` when the IPython prerequisites are met. |
| zeppelin.R.cmd | R | R binary executable path. |
| zeppelin.spark.concurrentSQL | false | Execute multiple SQL statements concurrently if set to true. |
| zeppelin.spark.concurrentSQL.max | 10 | Max number of SQL statements executed concurrently. |
| zeppelin.spark.maxResult | 1000 | Max number of rows of a Spark SQL result to display. |
| zeppelin.spark.run.asLoginUser | true | Whether to run the Spark job as the Zeppelin login user; only applied when running Spark jobs on a Hadoop YARN cluster with Shiro enabled. |
| zeppelin.spark.printREPLOutput | true | Print the Scala REPL output. |
| zeppelin.spark.useHiveContext | true | Use HiveContext instead of SQLContext if true, i.e. enable Hive support for the SparkSession. |
| zeppelin.spark.enableSupportedVersionCheck | true | Do not change - developer only setting, not for production use. |
| zeppelin.spark.sql.interpolation | false | Enable ZeppelinContext variable interpolation into Spark SQL. |
| zeppelin.spark.uiWebUrl | | Overrides the Spark UI default URL. Value should be a full URL (e.g. http://{hostName}/{uniquePath}). In Kubernetes mode, the value can be a Jinja template string with 3 template variables: PORT, {% raw %}SERVICE_NAME{% endraw %} and {% raw %}SERVICE_DOMAIN{% endraw %} (e.g. {% raw %}http://{{PORT}}-{{SERVICE_NAME}}.{{SERVICE_DOMAIN}}{% endraw %}). In YARN mode, the value can be a Knox URL with {% raw %}{{applicationId}}{% endraw %} as a placeholder (e.g. {% raw %}https://knox-server:8443/gateway/yarnui/yarn/proxy/{{applicationId}}/{% endraw %}). |
| spark.webui.yarn.useProxy | false | Whether to use the YARN proxy URL as the Spark web UI URL, e.g. http://localhost:8088/proxy/application_1583396598068_0004. |
Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow the two simple steps below.
* Set SPARK_HOME
* Set master
### Set SPARK_HOME
There are several options for setting `SPARK_HOME`.
* Set `SPARK_HOME` in `zeppelin-env.sh`
* Set `SPARK_HOME` in the interpreter setting page
* Set `SPARK_HOME` via [inline generic configuration](../usage/interpreter/overview.html#inline-generic-confinterpreter)
#### Set `SPARK_HOME` in `zeppelin-env.sh`
If you work with only one version of Spark, then you can set `SPARK_HOME` in `zeppelin-env.sh` because any setting in `zeppelin-env.sh` is globally applied.
e.g.
```bash
export SPARK_HOME=/usr/lib/spark
```
You can optionally set more environment variables in `zeppelin-env.sh`
```bash
# set hadoop conf dir
export HADOOP_CONF_DIR=/usr/lib/hadoop
```
#### Set `SPARK_HOME` in the interpreter setting page
If you want to use multiple versions of Spark, then you need to create multiple Spark interpreters and set `SPARK_HOME` for each of them separately. For example,
create a new Spark interpreter `spark33` for Spark 3.3 and set its `SPARK_HOME` in the interpreter setting page,
then create another Spark interpreter `spark34` for Spark 3.4 and set its `SPARK_HOME` in the same way.
#### Set `SPARK_HOME` via [inline generic configuration](../usage/interpreter/overview.html#inline-generic-confinterpreter)
Besides setting `SPARK_HOME` in the interpreter setting page, you can also use inline generic configuration to keep the
configuration together with the code for more flexibility, e.g. as in the sketch below.
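For example, a paragraph like the following sets `SPARK_HOME` and a few Spark properties for the note (a minimal sketch; the paths and values are placeholders, and the inline configuration paragraph generally needs to run before the Spark interpreter process is launched):

```
%spark.conf

SPARK_HOME /usr/lib/spark
spark.master yarn
spark.submit.deployMode client
spark.executor.memory 4g
```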
The following properties configure SparkR support in the Spark interpreter:

| Spark Property | Default | Description |
| -------------- | ------- | ----------- |
| zeppelin.R.cmd | R | R binary executable path. |
| zeppelin.R.knitr | true | Whether to use knitr or not. (It is recommended to install knitr and use it in Zeppelin.) |
| zeppelin.R.image.width | 100% | R plotting image width. |
| zeppelin.R.render.options | out.format = 'html', comment = NA, echo = FALSE, results = 'asis', message = F, warning = F, fig.retina = 2 | R plotting options. |
| zeppelin.R.shiny.iframe_width | 100% | IFrame width of the Shiny app. |
| zeppelin.R.shiny.iframe_height | 500px | IFrame height of the Shiny app. |
| zeppelin.R.shiny.portRange | : | A Shiny app launches a web app at some port; this property specifies the port range in the format 'start:end', e.g. '5000:5001'. By default it is ':', which means any port. |
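As a sketch, these R-related properties could also be set together with the Spark ones via the inline generic configuration paragraph shown earlier (the values below are arbitrary examples, not recommendations):

```
%spark.conf

zeppelin.R.cmd /usr/local/bin/R
zeppelin.R.shiny.portRange 5000:5010
```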
Refer [R doc](r.html) for how to use R in Zeppelin.
## SparkSql
The Spark SQL interpreter shares the same SparkContext/SparkSession with the other Spark interpreters. That means any table registered in Scala, Python or R code can be accessed by Spark SQL.
For example:
```scala
%spark
case class People(name: String, age: Int)
val df = spark.createDataFrame(List(People("jeff", 23), People("andy", 20)))
df.createOrReplaceTempView("people")
```
```sql
%spark.sql
select * from people
```
You can write multiple SQL statements in one paragraph, separated by semicolons.
SQL statements in one paragraph run sequentially,
but SQL statements in different paragraphs can run concurrently with the following configuration.
1. Set `zeppelin.spark.concurrentSQL` to true to enable the concurrent SQL feature; underneath, Zeppelin switches Spark to the fair scheduler. Also set `zeppelin.spark.concurrentSQL.max` to control the maximum number of SQL statements running concurrently.
2. Configure pools by creating `fairscheduler.xml` under your `SPARK_CONF_DIR`; see the official Spark doc [Configuring Pool Properties](http://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties).
3. Set the pool via a paragraph-local property, e.g.
```
%spark.sql(pool=pool1)
sql statement
```
This pool feature is also available for all Spark versions in the Scala and PySpark interpreters. For SparkR, it is only available starting from Spark 2.3.0.
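For instance, assuming `zeppelin.spark.concurrentSQL` is enabled and pools named `pool1` and `pool2` are defined in your `fairscheduler.xml`, the following two paragraphs (a sketch reusing the `people` view from above) can run at the same time:

```sql
%spark.sql(pool=pool1)
select name, age from people
```

```sql
%spark.sql(pool=pool2)
select count(1) from people
```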
## Dependency Management
For the Spark interpreter, it is not recommended to use Zeppelin's [Dependency Management](../usage/interpreter/dependency_management.html) for managing
third-party dependencies (`%spark.dep` was also removed in Zeppelin 0.9). Instead, you should set the standard Spark properties, as follows.
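For example, third-party jars and Maven packages can be pulled in through the standard `spark.jars` and `spark.jars.packages` properties, either in the interpreter setting page or via inline generic configuration (a minimal sketch; the jar path and Maven coordinate are placeholders):

```
%spark.conf

spark.jars /tmp/my-udfs.jar
spark.jars.packages org.apache.spark:spark-avro_2.12:3.1.2
```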