{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Spark Magic\n", "\n", "BeakerX provides a Spark magic for deeper integration with Spark. It offers:\n", "\n", "- a GUI dialog for connecting to a cluster,\n", "- a progress meter that shows how your job is progressing, with links to the regular Spark UI,\n", "- forwarding of kernel interrupt messages to the cluster, so you can stop a job without leaving the notebook,\n", "- automatic display of Datasets in an interactive widget, and\n", "- automatic closing of the Spark session when the notebook is closed.\n", "\n", "The Spark magic is alpha quality." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "%%classpath add mvn\n", "org.apache.spark spark-sql_2.11 2.2.1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `%%spark` cell magic can be run by itself in a cell. It produces a GUI dialog that you fill out to connect to your cluster." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%spark" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optionally, the cell body can build a `SparkSession` to supply default values for the GUI. Only one Spark magic can be active at a time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%spark\n", "SparkSession.builder()\n", "  .appName(\"BeakerX Demo\")\n", "  .master(\"local[4]\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also provide a `--connect` (or `-c`) option to connect to the cluster automatically."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%spark --connect\n", "SparkSession.builder().master(\"local[100]\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// Monte Carlo estimate of Pi: sample random points in the unit square\n", "// and count how many fall inside the unit circle.\n", "val NUM_SAMPLES = 10000000\n", "\n", "val count = spark.sparkContext.parallelize(1 to NUM_SAMPLES).map { i =>\n", "  val x = Math.random()\n", "  val y = Math.random()\n", "  if (x * x + y * y < 1) 1 else 0\n", "}.reduce(_ + _)\n", "\n", "println(\"Pi is roughly \" + 4.0 * count / NUM_SAMPLES)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, the first 1000 rows of a Dataset are materialized for the preview." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "val tornadoesPath = java.nio.file.Paths.get(\"../resources/data/tornadoes_2014.csv\").toAbsolutePath()\n", "\n", "val ds = spark.read.format(\"csv\").option(\"header\", \"true\").load(\"file://\" + tornadoesPath)\n", "ds" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, you can use the `display` method to specify how many rows to show." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds.display(1)" ] } ], "metadata": { "kernelspec": { "display_name": "Scala", "language": "scala", "name": "scala" }, "language_info": { "codemirror_mode": "text/x-scala", "file_extension": ".scala", "mimetype": "", "name": "Scala", "nbconverter_exporter": "", "version": "2.11.12" } }, "nbformat": 4, "nbformat_minor": 2 }