{"paragraphs":[{"text":"%md\n\n## Data Science in Apache Spark\n### Mushrooms Workbook - Hunger in the Woods\n#### Random Forest Classifier\n\n**Level**: Moderate\n**Language**: Scala\n**Requirements**: \n- [HDP 2.6.X]\n- Spark 2.x\n\n**Author**: Ian Brooks\n**Follow** [LinkedIn - Ian Brooks PhD] (https://www.linkedin.com/in/ianrbrooksphd/)\n\nWhat would Bear do? \n![alt text][logo]\n\n[logo]:https://raw.githubusercontent.com/kenmoini/RandomForestMushroomClassifier/master/bear-1.jpg\n\n## Pre-Run Instructions\n\n**File Upload:** Upload the source data file MushroomDatabase.csv to HDFS in the /tmp directory ","dateUpdated":"2018-05-15T13:44:07-0700","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":false,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"
\n

Data Science in Apache Spark

\n

Mushrooms Workbook - Hunger in the Woods

\n

Random Forest Classifier

\n

Level: Moderate
Language: Scala
Requirements:
- [HDP 2.6.X]
- Spark 2.x

\n

Author: Ian Brooks
Follow LinkedIn - Ian Brooks PhD

\n

What would Bear do?
\"alt

\n

Pre-Run Instructions

\n

File Upload: Upload the source data file MushroomDatabase.csv to HDFS in the /tmp directory

\n
"}]},"apps":[],"jobName":"paragraph_1526391969591_-755164158","id":"20180409-123904_1018491284","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:6421"},{"title":"Load Dependency Libraries","text":"%spark2.dep \n\nz.load(\"com.databricks:spark-csv_2.11:1.5.0\")\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"ERROR","msg":[{"type":"TEXT","data":"Must be used before SparkInterpreter (%spark) initialized\nHint: put this paragraph before any Spark code and restart Zeppelin/Interpreter"}]},"apps":[],"jobName":"paragraph_1526391969594_-756318405","id":"20180409-124313_161475820","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6422"},{"title":"initialize Spark Session","text":"%spark2\n\nimport org.apache.spark.sql.SparkSession\nval spark: SparkSession = SparkSession.builder\n .appName(\"MushroomClassifier\") // optional and will be autogenerated if not specified\n .master(\"local[*]\") // avoid hardcoding the deployment environment\n .enableHiveSupport() // self-explanatory, isn't it?\n .config(\"spark.sql.warehouse.dir\", \"target/spark-warehouse\")\n .getOrCreate\n\nspark.version","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nimport org.apache.spark.sql.SparkSession\n\nspark: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@1cab7cc5\n\nres182: String = 2.1.1.2.6.1.0-129\n"}]},"apps":[],"jobName":"paragraph_1526391969595_-756703153","id":"20180409-123845_1776601428","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6423"},{"title":"Download Datafile and Copy to HDFS","text":"%sh\n\n# Download Data from Github\nwget https://raw.githubusercontent.com/kirkhas/datascience-workshop/master/datascience-workshop/Mushroom%20Classifier/MushroomDatabase.csv -O /tmp/MushroomDatabase.csv\n\n# Make HDFS Directory and Load CSV files into HDFS\nhadoop fs -mkdir /tmp/\nhadoop fs -put /tmp/MushroomDatabase.csv /tmp/MushroomDatabase.csv\n\nhadoop fs -ls /tmp/Mush*","user":"admin","dateUpdated":"2018-05-15T12:46:26-0700","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"sh","editOnDblClick":false},"editorMode":"ace/mode/sh","title":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"--2018-05-15 12:45:28-- https://raw.githubusercontent.com/kirkhas/datascience-workshop/master/datascience-workshop/Mushroom%20Classifier/MushroomDatabase.csv\nResolving raw.githubusercontent.com... 151.101.192.133, 151.101.0.133, 151.101.64.133, ...\nConnecting to raw.githubusercontent.com|151.101.192.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 1273475 (1.2M) [text/plain]\nSaving to: “/tmp/MushroomDatabase.csv”\n\n 0K .......... .......... .......... .......... .......... 4% 78.2M 0s\n 50K .......... .......... .......... .......... .......... 8% 45.1M 0s\n 100K .......... .......... .......... .......... .......... 12% 52.9M 0s\n 150K .......... .......... .......... .......... .......... 16% 42.5M 0s\n 200K .......... .......... .......... .......... .......... 20% 42.4M 0s\n 250K .......... .......... .......... .......... .......... 24% 77.2M 0s\n 300K .......... .......... .......... .......... .......... 28% 59.2M 0s\n 350K .......... .......... .......... .......... .......... 32% 40.2M 0s\n 400K .......... .......... .......... .......... .......... 36% 59.3M 0s\n 450K .......... .......... .......... .......... .......... 40% 85.8M 0s\n 500K .......... .......... .......... .......... .......... 44% 64.4M 0s\n 550K .......... .......... .......... .......... .......... 48% 43.3M 0s\n 600K .......... .......... .......... .......... .......... 52% 155M 0s\n 650K .......... .......... .......... .......... .......... 56% 94.4M 0s\n 700K .......... .......... .......... .......... .......... 60% 49.5M 0s\n 750K .......... .......... .......... .......... .......... 64% 25.8M 0s\n 800K .......... .......... .......... .......... .......... 68% 41.4M 0s\n 850K .......... .......... .......... .......... .......... 72% 41.7M 0s\n 900K .......... .......... .......... .......... .......... 76% 42.6M 0s\n 950K .......... .......... .......... .......... .......... 80% 42.2M 0s\n 1000K .......... .......... .......... .......... .......... 84% 59.2M 0s\n 1050K .......... .......... .......... .......... .......... 88% 90.6M 0s\n 1100K .......... .......... .......... .......... .......... 92% 213M 0s\n 1150K .......... .......... .......... .......... .......... 96% 141M 0s\n 1200K .......... .......... .......... .......... ... 100% 280M=0.02s\n\n2018-05-15 12:45:28 (56.9 MB/s) - “/tmp/MushroomDatabase.csv” saved [1273475/1273475]\n\nbash: line 2: //wget: No such file or directory\nmkdir: `/tmp': File exists\nput: `/tmp/MushroomDatabase.csv': File exists\n-rw-r--r-- 3 admin hdfs 1273475 2018-04-09 12:35 /tmp/MushroomDatabase.csv\n"}]},"apps":[],"jobName":"paragraph_1526392061420_-1175756002","id":"20180515-064741_1099839956","dateCreated":"2018-05-15T06:47:41-0700","dateStarted":"2018-05-15T12:45:28-0700","dateFinished":"2018-05-15T12:45:36-0700","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:6424"},{"title":"Import Data Set","text":"%spark2\n\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.DataFrame\nimport scala.collection.mutable.ListBuffer\n\n\n//Note the location of the Data File - hdfs://HostWithNameNode:8020/tmp/MushroomDatabase.csv\n//Create a data frame from CSV File \nval df_WholeSetRaw = sqlContext.read.format(\"com.databricks.spark.csv\").option(\"header\", \"true\").option(\"inferSchema\", \"true\").load(\"/tmp/MushroomDatabase.csv\")\n\n//Create Table from DataFrame\ndf_WholeSetRaw.createOrReplaceTempView(\"RawMushData\")\n\n//Display resulting Infered schema \ndf_WholeSetRaw.printSchema()","dateUpdated":"2018-05-15T13:43:29-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"ERROR","msg":[{"type":"TEXT","data":"java.lang.OutOfMemoryError: GC overhead limit exceeded\n\tat java.util.Arrays.copyOf(Arrays.java:3332)\n\tat java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)\n\tat java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)\n\tat java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421)\n\tat java.lang.StringBuffer.append(StringBuffer.java:272)\n\tat java.io.StringWriter.write(StringWriter.java:112)\n\tat java.io.PrintWriter.write(PrintWriter.java:456)\n\tat java.io.PrintWriter.write(PrintWriter.java:473)\n\tat java.io.PrintWriter.print(PrintWriter.java:603)\n\tat java.io.PrintWriter.println(PrintWriter.java:756)\n\tat java.lang.Throwable$WrappedPrintWriter.println(Throwable.java:764)\n\tat java.lang.Throwable.printStackTrace(Throwable.java:658)\n\tat java.lang.Throwable.printStackTrace(Throwable.java:721)\n\tat org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)\n\tat org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)\n\tat org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)\n\tat org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)\n\tat org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:369)\n\tat org.apache.log4j.WriterAppender.append(WriterAppender.java:162)\n\tat org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)\n\tat org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)\n\tat org.apache.log4j.Category.callAppenders(Category.java:206)\n\tat org.apache.log4j.Category.forcedLog(Category.java:391)\n\tat org.apache.log4j.Category.log(Category.java:856)\n\tat org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:575)\n\tat org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:40)\n\tat org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)\n\tat org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:787)\n\tat org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)\n\tat org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:483)\n\tat org.apache.zeppelin.scheduler.Job.run(Job.java:175)\n\tat org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)\n"}]},"apps":[],"jobName":"paragraph_1526391969596_-758626898","id":"20180409-124408_494881545","dateCreated":"2018-05-15T06:46:09-0700","status":"ERROR","progressUpdateIntervalMs":500,"$$hashKey":"object:6425","user":"admin","dateFinished":"2018-05-15T13:43:58-0700","dateStarted":"2018-05-15T13:43:29-0700"},{"text":"%sql\n\nselect * from RawMushData limit 10\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"colWidth":12,"editorMode":"ace/mode/sql","results":{},"enabled":true,"editorSetting":{"language":"sql","editOnDblClick":false}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"Label\tCapShape\tCapSurface\tCapColor\tBruises\tOdor\tGillAttachment\tGillSpacing\tGillSize\tGillColor\tStalkShape\tStalkRoot\tSSAR\tSSBR\tSCAR\tSCBR\tVeilType\tVeilColor\tRingNumber\tRingType\tSporePrintColor\tPopulation\tHabitat\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tALMOND\tFREE\tCROWDED\tNARROW\tWHITE\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tPURPLE\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tALMOND\tFREE\tCROWDED\tNARROW\tWHITE\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tBROWN\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tALMOND\tFREE\tCROWDED\tNARROW\tPINK\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tPURPLE\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tALMOND\tFREE\tCROWDED\tNARROW\tPINK\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tBROWN\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tALMOND\tFREE\tCROWDED\tNARROW\tBROWN\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tPURPLE\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tALMOND\tFREE\tCROWDED\tNARROW\tBROWN\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tBROWN\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tANISE\tFREE\tCROWDED\tNARROW\tWHITE\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tPURPLE\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tANISE\tFREE\tCROWDED\tNARROW\tWHITE\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tBROWN\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tANISE\tFREE\tCROWDED\tNARROW\tPINK\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tPURPLE\tSEVERAL\tWOODS\nEDIBLE\tCONVEX\tSMOOTH\tWHITE\tBRUISES\tANISE\tFREE\tCROWDED\tNARROW\tPINK\tTAPERING\tBULBOUS\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tBROWN\tSEVERAL\tWOODS\n"}]},"apps":[],"jobName":"paragraph_1526391969597_-759011647","id":"20180409-124734_1577951246","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6426"},{"title":"Mushroom Distribution","text":"%sql\n\nselect Label, count(Label) from RawMushData group by Label","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"sql"},"colWidth":6,"editorMode":"ace/mode/sql","title":true,"results":{"0":{"graph":{"mode":"pieChart","height":300,"optionOpen":false},"helium":{}}},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"Label\tcount(Label)\nEDIBLE\t4488\nPOISONOUS\t3928\n"}]},"apps":[],"jobName":"paragraph_1526391969597_-759011647","id":"20180409-124808_580863699","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6427"},{"title":"Odor By Poisonous","text":"%sql\n\nselect Label, Odor, count(Odor) from RawMushData group by Odor, Label","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"sql"},"colWidth":6,"editorMode":"ace/mode/sql","editorHide":false,"title":true,"results":{"0":{"graph":{"mode":"multiBarChart","height":300,"optionOpen":false,"setting":{"scatterChart":{"xAxis":{"name":"Label","index":0,"aggr":"sum"},"yAxis":{"name":"Odor","index":1,"aggr":"sum"},"group":{"name":"Odor","index":1,"aggr":"sum"}},"multiBarChart":{"stacked":true},"pieChart":{}},"commonSetting":{},"keys":[{"name":"Odor","index":1,"aggr":"sum"}],"groups":[{"name":"Label","index":0,"aggr":"sum"}],"values":[{"name":"count(Odor)","index":2,"aggr":"sum"}]},"helium":{}}},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"Label\tOdor\tcount(Odor)\nPOISONOUS\tFISHY\t576\nPOISONOUS\tNONE\t120\nEDIBLE\tANISE\t400\nPOISONOUS\tPUNGENT\t256\nEDIBLE\tNONE\t3688\nEDIBLE\tALMOND\t400\nPOISONOUS\tCREOSOTE\t192\nPOISONOUS\tFOUL\t2160\nPOISONOUS\tSPICY\t576\nPOISONOUS\tMUSTY\t48\n"}]},"apps":[],"jobName":"paragraph_1526391969598_-757857400","id":"20180409-130529_129636148","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6428"},{"title":"Gill Color By Poisonous","text":"%sql\n\nselect Label, GillColor, count(GillColor) from RawMushData group by GillColor, Label","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"sql"},"colWidth":6,"editorMode":"ace/mode/sql","editorHide":false,"title":true,"results":{"0":{"graph":{"mode":"multiBarChart","height":300,"optionOpen":false,"setting":{"multiBarChart":{"stacked":true}},"commonSetting":{},"keys":[{"name":"GillColor","index":1,"aggr":"sum"}],"groups":[{"name":"Label","index":0,"aggr":"sum"}],"values":[{"name":"count(GillColor)","index":2,"aggr":"sum"}]},"helium":{}}},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"Label\tGillColor\tcount(GillColor)\nPOISONOUS\tGREEN\t24\nEDIBLE\tWHITE\t980\nEDIBLE\tBROWN\t1000\nPOISONOUS\tYELLOW\t28\nEDIBLE\tCHOCOLATE\t268\nPOISONOUS\tGRAY\t504\nEDIBLE\tYELLOW\t64\nEDIBLE\tRED\t96\nEDIBLE\tBLACK\t408\nEDIBLE\tPURPLE\t444\nEDIBLE\tORANGE\t64\nEDIBLE\tPINK\t916\nPOISONOUS\tPURPLE\t48\nPOISONOUS\tCHOCOLATE\t528\nPOISONOUS\tWHITE\t252\nPOISONOUS\tPINK\t640\nPOISONOUS\tBLACK\t64\nEDIBLE\tGRAY\t248\nPOISONOUS\tBUFF\t1728\nPOISONOUS\tBROWN\t112\n"}]},"apps":[],"jobName":"paragraph_1526391969599_-758242149","id":"20180409-130806_691173633","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6429"},{"title":"Spore Print Color By Poisonous","text":"%sql\n\nselect Label, SporePrintColor, count(SporePrintColor) from RawMushData group by SporePrintColor, Label","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"sql"},"colWidth":6,"editorMode":"ace/mode/sql","editorHide":false,"title":true,"results":{"0":{"graph":{"mode":"multiBarChart","height":300,"optionOpen":false,"setting":{"scatterChart":{"xAxis":{"name":"Label","index":0,"aggr":"sum"},"yAxis":{"name":"Odor","index":1,"aggr":"sum"},"group":{"name":"Odor","index":1,"aggr":"sum"}},"multiBarChart":{"stacked":true},"pieChart":{}},"commonSetting":{},"keys":[{"name":"SporePrintColor","index":1,"aggr":"sum"}],"groups":[{"name":"Label","index":0,"aggr":"sum"}],"values":[{"name":"count(SporePrintColor)","index":2,"aggr":"sum"}]},"helium":{}}},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"Label\tSporePrintColor\tcount(SporePrintColor)\nPOISONOUS\tGREEN\t72\nEDIBLE\tWHITE\t600\nEDIBLE\tBROWN\t1872\nEDIBLE\tCHOCOLATE\t48\nEDIBLE\tYELLOW\t48\nEDIBLE\tBLACK\t1776\nEDIBLE\tBUFF\t48\nEDIBLE\tPURPLE\t48\nEDIBLE\tORANGE\t48\nPOISONOUS\tCHOCOLATE\t1584\nPOISONOUS\tWHITE\t1824\nPOISONOUS\tBLACK\t224\nPOISONOUS\tBROWN\t224\n"}]},"apps":[],"jobName":"paragraph_1526391969600_-772477858","id":"20180409-125153_878201397","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6430"},{"title":"String to Index the String Columns","text":"%spark2\n\nimport org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}\n\n//Convert Strings to Index Values\n\n// Poison Label\nval indexer1 = new StringIndexer().setInputCol(\"Label\").setOutputCol(\"Label_Index\").fit(df_WholeSetRaw)\nval indexed1 = indexer1.transform(df_WholeSetRaw)\n\n//Cap Shape\nval indexer2 = new StringIndexer().setInputCol(\"CapShape\").setOutputCol(\"CapShape_Index\").fit(indexed1)\nval indexed2 = indexer2.transform(indexed1)\n\n//Cap Surface \nval indexer3 = new StringIndexer().setInputCol(\"CapSurface\").setOutputCol(\"CapSurface_Index\").fit(indexed2)\nval indexed3 = indexer3.transform(indexed2)\n\n//Cap Color\nval indexer4 = new StringIndexer().setInputCol(\"CapColor\").setOutputCol(\"CapColor_Index\").fit(indexed3)\nval indexed4 = indexer4.transform(indexed3)\n\n//Bruises\nval indexer5 = new StringIndexer().setInputCol(\"Bruises\").setOutputCol(\"Bruises_Index\").fit(indexed4)\nval indexed5 = indexer5.transform(indexed4)\n\n//Odor\nval indexer6 = new StringIndexer().setInputCol(\"Odor\").setOutputCol(\"Odor_Index\").fit(indexed5)\nval indexed6 = indexer6.transform(indexed5)\n\n//Gill Attachment\nval indexer7 = new StringIndexer().setInputCol(\"GillAttachment\").setOutputCol(\"GillAttachment_Index\").fit(indexed6)\nval indexed7 = indexer7.transform(indexed6)\n\n//Gill Spacing\nval indexer8 = new StringIndexer().setInputCol(\"GillSpacing\").setOutputCol(\"GillSpacing_Index\").fit(indexed7)\nval indexed8 = indexer8.transform(indexed7)\n\n//Gill Size\nval indexer9 = new StringIndexer().setInputCol(\"GillSize\").setOutputCol(\"GillSize_Index\").fit(indexed8)\nval indexed9 = indexer9.transform(indexed8)\n\n//Gill Color\nval indexer10 = new StringIndexer().setInputCol(\"GillColor\").setOutputCol(\"GillColor_Index\").fit(indexed9)\nval indexed10 = indexer10.transform(indexed9)\n\n//Stalk Shape\nval indexer11 = new StringIndexer().setInputCol(\"StalkShape\").setOutputCol(\"StalkShape_Index\").fit(indexed10)\nval indexed11 = indexer11.transform(indexed10)\n\n//Stalk Root\nval indexer12 = new StringIndexer().setInputCol(\"StalkRoot\").setOutputCol(\"StalkRoot_Index\").fit(indexed11)\nval indexed12 = indexer12.transform(indexed11)\n\n//SSAR\nval indexer13 = new StringIndexer().setInputCol(\"SSAR\").setOutputCol(\"SSAR_Index\").fit(indexed12)\nval indexed13 = indexer13.transform(indexed12)\n\n//SSBR\nval indexer14 = new StringIndexer().setInputCol(\"SSBR\").setOutputCol(\"SSBR_Index\").fit(indexed13)\nval indexed14 = indexer14.transform(indexed13)\n\n//SCAR\nval indexer15 = new StringIndexer().setInputCol(\"SCAR\").setOutputCol(\"SCAR_Index\").fit(indexed14)\nval indexed15 = indexer15.transform(indexed14)\n\n//SCBR\nval indexer16 = new StringIndexer().setInputCol(\"SCBR\").setOutputCol(\"SCBR_Index\").fit(indexed15)\nval indexed16 = indexer16.transform(indexed15)\n\n//Veil Type\nval indexer17 = new StringIndexer().setInputCol(\"VeilType\").setOutputCol(\"VeilType_Index\").fit(indexed16)\nval indexed17 = indexer17.transform(indexed16)\n\n//Veil Color\nval indexer18 = new StringIndexer().setInputCol(\"VeilColor\").setOutputCol(\"VeilColor_Index\").fit(indexed17)\nval indexed18 = indexer18.transform(indexed17)\n\n//Ring Number\nval indexer19 = new StringIndexer().setInputCol(\"RingNumber\").setOutputCol(\"RingNumber_Index\").fit(indexed18)\nval indexed19 = indexer19.transform(indexed18)\n\n//Ring Type\nval indexer20 = new StringIndexer().setInputCol(\"RingType\").setOutputCol(\"RingType_Index\").fit(indexed19)\nval indexed20 = indexer20.transform(indexed19)\n\n//SporePrintColor\nval indexer21 = new StringIndexer().setInputCol(\"SporePrintColor\").setOutputCol(\"SporePrintColor_Index\").fit(indexed20)\nval indexed21 = indexer21.transform(indexed20)\n\n//Population\nval indexer22 = new StringIndexer().setInputCol(\"Population\").setOutputCol(\"Population_Index\").fit(indexed21)\nval indexed22 = indexer22.transform(indexed21)\n\n//Habitat\nval indexer23 = new StringIndexer().setInputCol(\"Habitat\").setOutputCol(\"Habitat_Index\").fit(indexed22)\nval indexed23 = indexer23.transform(indexed22)\n\nval df_CompletedIndex = indexed23\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nimport org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}\n\nindexer1: org.apache.spark.ml.feature.StringIndexerModel = strIdx_7a2e9889078d\n\nindexed1: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 22 more fields]\n\nindexer2: org.apache.spark.ml.feature.StringIndexerModel = strIdx_e20a947e4cd0\n\nindexed2: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 23 more fields]\n\nindexer3: org.apache.spark.ml.feature.StringIndexerModel = strIdx_d84d8d97a982\n\nindexed3: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 24 more fields]\n\nindexer4: org.apache.spark.ml.feature.StringIndexerModel = strIdx_311669239462\n\nindexed4: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 25 more fields]\n\nindexer5: org.apache.spark.ml.feature.StringIndexerModel = strIdx_dc3060b154df\n\nindexed5: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 26 more fields]\n\nindexer6: org.apache.spark.ml.feature.StringIndexerModel = strIdx_6d91c7fc9237\n\nindexed6: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 27 more fields]\n\nindexer7: org.apache.spark.ml.feature.StringIndexerModel = strIdx_6ddcaf3e5216\n\nindexed7: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 28 more fields]\n\nindexer8: org.apache.spark.ml.feature.StringIndexerModel = strIdx_a046a791ee61\n\nindexed8: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 29 more fields]\n\nindexer9: org.apache.spark.ml.feature.StringIndexerModel = strIdx_636c22c8a779\n\nindexed9: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 30 more fields]\n\nindexer10: org.apache.spark.ml.feature.StringIndexerModel = strIdx_294d4f9c336f\n\nindexed10: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 31 more fields]\n\nindexer11: org.apache.spark.ml.feature.StringIndexerModel = strIdx_ce462dc8c581\n\nindexed11: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 32 more fields]\n\nindexer12: org.apache.spark.ml.feature.StringIndexerModel = strIdx_7f6111f943b5\n\nindexed12: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 33 more fields]\n\nindexer13: org.apache.spark.ml.feature.StringIndexerModel = strIdx_fc1ed60cb333\n\nindexed13: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 34 more fields]\n\nindexer14: org.apache.spark.ml.feature.StringIndexerModel = strIdx_73ed28a079f6\n\nindexed14: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 35 more fields]\n\nindexer15: org.apache.spark.ml.feature.StringIndexerModel = strIdx_175f5b8919e3\n\nindexed15: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 36 more fields]\n\nindexer16: org.apache.spark.ml.feature.StringIndexerModel = strIdx_b8bede9f03bc\n\nindexed16: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 37 more fields]\n\nindexer17: org.apache.spark.ml.feature.StringIndexerModel = strIdx_ec521186b687\n\nindexed17: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 38 more fields]\n\nindexer18: org.apache.spark.ml.feature.StringIndexerModel = strIdx_480e4a660991\n\nindexed18: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 39 more fields]\n\nindexer19: org.apache.spark.ml.feature.StringIndexerModel = strIdx_aa3b71dd308b\n\nindexed19: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 40 more fields]\n\nindexer20: org.apache.spark.ml.feature.StringIndexerModel = strIdx_f42d47926341\n\nindexed20: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 41 more fields]\n\nindexer21: org.apache.spark.ml.feature.StringIndexerModel = strIdx_f3ebf0c62b4c\n\nindexed21: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 42 more fields]\n\nindexer22: org.apache.spark.ml.feature.StringIndexerModel = strIdx_c19838654eb1\n\nindexed22: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 43 more fields]\n\nindexer23: org.apache.spark.ml.feature.StringIndexerModel = strIdx_1eeba486263d\n\nindexed23: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 44 more fields]\n\ndf_CompletedIndex: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 44 more fields]\n"}]},"apps":[],"jobName":"paragraph_1526391969601_-772862607","id":"20180409-125046_437804174","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6431"},{"title":"Create Vector for Extracted Features","text":"%spark2\nimport org.apache.spark.mllib.linalg.Vectors \nimport org.apache.spark.ml.feature.VectorAssembler\n\n//df_CompletedIndex.printSchema()\n\n//Assemble the Feature Vector from extacted features values\nval assembler = new VectorAssembler().setInputCols(Array(\"CapShape_Index\", \"CapSurface_Index\", \"CapColor_Index\", \"Bruises_Index\",\"Odor_Index\", \"GillAttachment_Index\", \"GillSpacing_Index\", \"GillSize_Index\", \"GillColor_Index\", \"StalkShape_Index\", \"StalkRoot_Index\", \"SSAR_Index\", \"SSBR_Index\", \"SCAR_Index\", \"SCBR_Index\", \"VeilType_Index\", \"VeilColor_Index\", \"RingNumber_Index\", \"RingType_Index\", \"SporePrintColor_Index\", \"Population_Index\", \"Habitat_Index\")).setOutputCol(\"features\")\n\n//Create Feature Vector\nval FeatureVector = assembler.transform(df_CompletedIndex)\n\n//Display results\nFeatureVector.take(2)\n\n//Create Feature List\nval featureList = Array(\"CapShape_Index\", \"CapSurface_Index\", \"CapColor_Index\", \"Bruises_Index\",\"Odor_Index\", \"GillAttachment_Index\", \"GillSpacing_Index\", \"GillSize_Index\", \"GillColor_Index\", \"StalkShape_Index\", \"StalkRoot_Index\", \"SSAR_Index\", \"SSBR_Index\", \"SCAR_Index\", \"SCBR_Index\", \"VeilType_Index\", \"VeilColor_Index\", \"RingNumber_Index\", \"RingType_Index\", \"SporePrintColor_Index\", \"Population_Index\", \"Habitat_Index\")\n\nFeatureVector.printSchema()\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nimport org.apache.spark.mllib.linalg.Vectors\n\nimport org.apache.spark.ml.feature.VectorAssembler\n\nassembler: org.apache.spark.ml.feature.VectorAssembler = vecAssembler_fcf13e45fb5b\n\nFeatureVector: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 45 more fields]\n\nres250: Array[org.apache.spark.sql.Row] = Array([EDIBLE,CONVEX,SMOOTH,WHITE,BRUISES,ALMOND,FREE,CROWDED,NARROW,WHITE,TAPERING,BULBOUS,SMOOTH,SMOOTH,WHITE,WHITE,PARTIAL,WHITE,ONE,PENDANT,PURPLE,SEVERAL,WOODS,0.0,0.0,1.0,4.0,1.0,4.0,0.0,1.0,1.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,8.0,0.0,0.0,(22,[1,2,3,4,6,7,8,19],[1.0,4.0,1.0,4.0,1.0,1.0,2.0,8.0])], [EDIBLE,CONVEX,SMOOTH,WHITE,BRUISES,ALMOND,FREE,CROWDED,NARROW,WHITE,TAPERING,BULBOUS,SMOOTH,SMOOTH,WHITE,WHITE,PARTIAL,WHITE,ONE,PENDANT,BROWN,SEVERAL,WOODS,0.0,0.0,1.0,4.0,1.0,4.0,0.0,1.0,1.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,(22,[1,2,3,4,6,7,8,19],[1.0,4.0,1.0,4.0,1.0,1.0,2.0,1.0])])\n\nfeatureList: Array[String] = Array(CapShape_Index, CapSurface_Index, CapColor_Index, Bruises_Index, Odor_Index, GillAttachment_Index, GillSpacing_Index, GillSize_Index, GillColor_Index, StalkShape_Index, StalkRoot_Index, SSAR_Index, SSBR_Index, SCAR_Index, SCBR_Index, VeilType_Index, VeilColor_Index, RingNumber_Index, RingType_Index, SporePrintColor_Index, Population_Index, Habitat_Index)\nroot\n |-- Label: string (nullable = true)\n |-- CapShape: string (nullable = true)\n |-- CapSurface: string (nullable = true)\n |-- CapColor: string (nullable = true)\n |-- Bruises: string (nullable = true)\n |-- Odor: string (nullable = true)\n |-- GillAttachment: string (nullable = true)\n |-- GillSpacing: string (nullable = true)\n |-- GillSize: string (nullable = true)\n |-- GillColor: string (nullable = true)\n |-- StalkShape: string (nullable = true)\n |-- StalkRoot: string (nullable = true)\n |-- SSAR: string (nullable = true)\n |-- SSBR: string (nullable = true)\n |-- SCAR: string (nullable = true)\n |-- SCBR: string (nullable = true)\n |-- VeilType: string (nullable = true)\n |-- VeilColor: string (nullable = true)\n |-- RingNumber: string (nullable = true)\n |-- RingType: string (nullable = true)\n |-- SporePrintColor: string (nullable = true)\n |-- Population: string (nullable = true)\n |-- Habitat: string (nullable = true)\n |-- Label_Index: double (nullable = true)\n |-- CapShape_Index: double (nullable = true)\n |-- CapSurface_Index: double (nullable = true)\n |-- CapColor_Index: double (nullable = true)\n |-- Bruises_Index: double (nullable = true)\n |-- Odor_Index: double (nullable = true)\n |-- GillAttachment_Index: double (nullable = true)\n |-- GillSpacing_Index: double (nullable = true)\n |-- GillSize_Index: double (nullable = true)\n |-- GillColor_Index: double (nullable = true)\n |-- StalkShape_Index: double (nullable = true)\n |-- StalkRoot_Index: double (nullable = true)\n |-- SSAR_Index: double (nullable = true)\n |-- SSBR_Index: double (nullable = true)\n |-- SCAR_Index: double (nullable = true)\n |-- SCBR_Index: double (nullable = true)\n |-- VeilType_Index: double (nullable = true)\n |-- VeilColor_Index: double (nullable = true)\n |-- RingNumber_Index: double (nullable = true)\n |-- RingType_Index: double (nullable = true)\n |-- SporePrintColor_Index: double (nullable = true)\n |-- Population_Index: double (nullable = true)\n |-- Habitat_Index: double (nullable = true)\n |-- features: vector (nullable = true)\n\n"}]},"apps":[],"jobName":"paragraph_1526391969602_-771708361","id":"20180409-134424_2021300279","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6432"},{"title":"Create Test and Train Sets","text":"%spark2\nimport org.apache.spark.ml.feature.{IndexToString, StringIndexer}\n\nval splits = FeatureVector.randomSplit(Array(0.8, 0.2))\nval df_TrainSet = splits(0)\nval df_TestSet = splits(1)\n\ndf_TrainSet.printSchema()\n\nval converter = new IndexToString()\n .setInputCol(\"Label_Index\")\n .setOutputCol(\"originalLabel\")\n\nval converted = converter.transform(df_TrainSet)\n\nconverted.take(1)\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nimport org.apache.spark.ml.feature.{IndexToString, StringIndexer}\n\nsplits: Array[org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]] = Array([Label: string, CapShape: string ... 45 more fields], [Label: string, CapShape: string ... 45 more fields])\n\ndf_TrainSet: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [Label: string, CapShape: string ... 45 more fields]\n\ndf_TestSet: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [Label: string, CapShape: string ... 45 more fields]\nroot\n |-- Label: string (nullable = true)\n |-- CapShape: string (nullable = true)\n |-- CapSurface: string (nullable = true)\n |-- CapColor: string (nullable = true)\n |-- Bruises: string (nullable = true)\n |-- Odor: string (nullable = true)\n |-- GillAttachment: string (nullable = true)\n |-- GillSpacing: string (nullable = true)\n |-- GillSize: string (nullable = true)\n |-- GillColor: string (nullable = true)\n |-- StalkShape: string (nullable = true)\n |-- StalkRoot: string (nullable = true)\n |-- SSAR: string (nullable = true)\n |-- SSBR: string (nullable = true)\n |-- SCAR: string (nullable = true)\n |-- SCBR: string (nullable = true)\n |-- VeilType: string (nullable = true)\n |-- VeilColor: string (nullable = true)\n |-- RingNumber: string (nullable = true)\n |-- RingType: string (nullable = true)\n |-- SporePrintColor: string (nullable = true)\n |-- Population: string (nullable = true)\n |-- Habitat: string (nullable = true)\n |-- Label_Index: double (nullable = true)\n |-- CapShape_Index: double (nullable = true)\n |-- CapSurface_Index: double (nullable = true)\n |-- CapColor_Index: double (nullable = true)\n |-- Bruises_Index: double (nullable = true)\n |-- Odor_Index: double (nullable = true)\n |-- GillAttachment_Index: double (nullable = true)\n |-- GillSpacing_Index: double (nullable = true)\n |-- GillSize_Index: double (nullable = true)\n |-- GillColor_Index: double (nullable = true)\n |-- StalkShape_Index: double (nullable = true)\n |-- StalkRoot_Index: double (nullable = true)\n |-- SSAR_Index: double (nullable = true)\n |-- SSBR_Index: double (nullable = true)\n |-- SCAR_Index: double (nullable = true)\n |-- SCBR_Index: double (nullable = true)\n |-- VeilType_Index: double (nullable = true)\n |-- VeilColor_Index: double (nullable = true)\n |-- RingNumber_Index: double (nullable = true)\n |-- RingType_Index: double (nullable = true)\n |-- SporePrintColor_Index: double (nullable = true)\n |-- Population_Index: double (nullable = true)\n |-- Habitat_Index: double (nullable = true)\n |-- features: vector (nullable = true)\n\n\nconverter: org.apache.spark.ml.feature.IndexToString = idxToStr_15523bc081cd\n\nconverted: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 46 more fields]\n\nres261: Array[org.apache.spark.sql.Row] = Array([EDIBLE,BELL,FIBROUS,GRAY,NO,NONE,FREE,CROWDED,BROAD,GRAY,ENLARGING,?,SILKY,SILKY,WHITE,WHITE,PARTIAL,WHITE,TWO,PENDANT,WHITE,NUMEROUS,GRASSES,0.0,3.0,2.0,1.0,0.0,0.0,0.0,1.0,0.0,5.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,4.0,1.0,(22,[0,1,2,6,8,9,10,11,12,17,20,21],[3.0,2.0,1.0,1.0,5.0,1.0,1.0,1.0,1.0,1.0,4.0,1.0]),EDIBLE])\n"}]},"apps":[],"jobName":"paragraph_1526391969603_-772093109","id":"20180409-134711_846818220","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6433"},{"title":"Train and Run Model","text":"%spark2\n\nimport org.apache.spark.ml.classification.RandomForestClassificationModel\nimport org.apache.spark.ml.classification.RandomForestClassifier\nimport org.apache.spark.ml.evaluation.BinaryClassificationEvaluator\nimport org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}\nimport org.apache.spark.ml.feature.VectorSlicer\n\n// Train a RandomForest model.\nval rf_classifier = new RandomForestClassifier()\n .setLabelCol(\"Label_Index\")\n .setFeaturesCol(\"features\")\n .setNumTrees(150)\n .setMaxBins(200)\n //.setThresholds(Array(0.45,0.75,0.55,0.55,0.55,0.55,0.55,0.55,0.55,0.55))\n \n//Set up Evalution for Cross-Validation \n//val metric = \"accuracy\"\nval CV_evaluator = new BinaryClassificationEvaluator()\n .setLabelCol(\"Label_Index\")\n .setRawPredictionCol(\"prediction\")\n// .setMetricName(metric)\n\n//Set up grid search for model parameters \nval paramGrid = new ParamGridBuilder()\n .addGrid(rf_classifier.numTrees, Array(50, 150, 350))\n .build()\n\nval cv = new CrossValidator()\n .setEstimator(rf_classifier)\n .setEvaluator(CV_evaluator) \n .setEstimatorParamMaps(paramGrid)\n .setNumFolds(3) \n \n// Train model\nval model = cv.fit(df_TrainSet)\n\n// Make predictions for Model Evaluation\nval Predictions = model.transform(df_TestSet)\n\nval Results = Predictions.select(\"Label\",\"Label_Index\",\"prediction\",\"CapShape\", \"CapSurface\", \"CapColor\", \"Bruises\",\"Odor\", \"GillAttachment\", \"GillSpacing\", \"GillSize\", \"GillColor\", \"StalkShape\", \"StalkRoot\", \"SSAR\", \"SSBR\", \"SCAR\", \"SCBR\", \"VeilType\", \"VeilColor\", \"RingNumber\", \"RingType\", \"SporePrintColor\", \"Population\", \"Habitat\")\n\n// Select example rows to display.\nResults.createOrReplaceTempView(\"MushResults\")","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nimport org.apache.spark.ml.classification.RandomForestClassificationModel\n\nimport org.apache.spark.ml.classification.RandomForestClassifier\n\nimport org.apache.spark.ml.evaluation.BinaryClassificationEvaluator\n\nimport org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}\n\nimport org.apache.spark.ml.feature.VectorSlicer\n\nrf_classifier: org.apache.spark.ml.classification.RandomForestClassifier = rfc_f317f6f6a820\n\nCV_evaluator: org.apache.spark.ml.evaluation.BinaryClassificationEvaluator = binEval_94e8f7ab6430\n\n\n\n\n\n\n\n\nparamGrid: Array[org.apache.spark.ml.param.ParamMap] =\nArray({\n\trfc_f317f6f6a820-numTrees: 50\n}, {\n\trfc_f317f6f6a820-numTrees: 150\n}, {\n\trfc_f317f6f6a820-numTrees: 350\n})\n\ncv: org.apache.spark.ml.tuning.CrossValidator = cv_329f192e466c\n\nmodel: org.apache.spark.ml.tuning.CrossValidatorModel = cv_329f192e466c\n\nPredictions: org.apache.spark.sql.DataFrame = [Label: string, CapShape: string ... 48 more fields]\n\nResults: org.apache.spark.sql.DataFrame = [Label: string, Label_Index: double ... 23 more fields]\n"}]},"apps":[],"jobName":"paragraph_1526391969604_-774016854","id":"20180409-140159_807737511","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6434"},{"title":"Model Evaluation","text":"%spark2\nimport org.apache.spark.mllib.evaluation.BinaryClassificationMetrics\nimport org.apache.spark.mllib.regression.LabeledPoint \n\nval metrics = new BinaryClassificationEvaluator()\n .setLabelCol(\"Label_Index\")\n .setRawPredictionCol(\"prediction\")\n .setMetricName(\"areaUnderROC\")\nval areaUnderROC = metrics.evaluate(Predictions)\nprintln(\"Area Under ROC = \" + areaUnderROC)\n\nval metrics = new BinaryClassificationEvaluator()\n .setLabelCol(\"Label_Index\")\n .setRawPredictionCol(\"prediction\")\n .setMetricName(\"areaUnderPR\")\nval areaUnderPRC = metrics.evaluate(Predictions)\nprintln(\"Area Under PRC = \" + areaUnderPRC)\n\nval df_LabeledSet = Results.select(\"prediction\", \"Label_Index\").rdd\n\nval predictionAndLabels = df_LabeledSet.map { row => (row(0).asInstanceOf[Double], row(1).asInstanceOf[Double])} \n\n//predictionAndLabels.count()\n\n// Instantiate metrics object\nval metrics = new BinaryClassificationMetrics(predictionAndLabels )\n\n// Precision by threshold\nval precision = metrics.precisionByThreshold.collect()\n\nprecision.foreach { case (t, p) =>\n println(s\"Threshold: $t, Precision: $p\")\n}\n\n// Recall by threshold\nval recall = metrics.recallByThreshold.collect()\n\nrecall.foreach { case (t, r) =>\n println(s\"Threshold: $t, Recall: $r\")\n}\n\n// Precision-Recall Curve\nval PRC = metrics.pr.collect()\n\n// F-measure\nval f1Score = metrics.fMeasureByThreshold.collect()\n\nf1Score.foreach { case (t, f) =>\n println(s\"Threshold: $t, F-score: $f, Beta = 1\")\n}\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nimport org.apache.spark.mllib.evaluation.BinaryClassificationMetrics\n\nimport org.apache.spark.mllib.regression.LabeledPoint\n\nmetrics: org.apache.spark.ml.evaluation.BinaryClassificationEvaluator = binEval_d1e3efe8926d\n\nareaUnderROC: Double = 1.0\nArea Under ROC = 1.0\n\nmetrics: org.apache.spark.ml.evaluation.BinaryClassificationEvaluator = binEval_cd330eac25d6\n\nareaUnderPRC: Double = 1.0\nArea Under PRC = 1.0\n\ndf_LabeledSet: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[1412] at rdd at :196\n\npredictionAndLabels: org.apache.spark.rdd.RDD[(Double, Double)] = MapPartitionsRDD[1413] at map at :198\n\nmetrics: org.apache.spark.mllib.evaluation.BinaryClassificationMetrics = org.apache.spark.mllib.evaluation.BinaryClassificationMetrics@18c9e324\n\nprecision: Array[(Double, Double)] = Array((1.0,1.0), (0.0,0.4625151148730351))\nThreshold: 1.0, Precision: 1.0\nThreshold: 0.0, Precision: 0.4625151148730351\n\nrecall: Array[(Double, Double)] = Array((1.0,1.0), (0.0,1.0))\nThreshold: 1.0, Recall: 1.0\nThreshold: 0.0, Recall: 1.0\n\nPRC: Array[(Double, Double)] = Array((0.0,1.0), (1.0,1.0), (1.0,0.4625151148730351))\n\nf1Score: Array[(Double, Double)] = Array((1.0,1.0), (0.0,0.6324927656056222))\nThreshold: 1.0, F-score: 1.0, Beta = 1\nThreshold: 0.0, F-score: 0.6324927656056222, Beta = 1\n"}]},"apps":[],"jobName":"paragraph_1526391969605_-774401603","id":"20180409-145829_6825321","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6435"},{"title":"Feature Ranking","text":"%spark2\n\nval rf_model = rf_classifier.fit(df_TestSet)\n\nval features = df_TestSet.select(\"features\")\n\nval featureImp = rf_model.featureImportances\n\nfeatureImp.toArray.zip(featureList).map(_.swap).sortBy(-_._2).foreach(x => println(x._1 + \" -> \" + x._2))\n\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"editorSetting":{"language":"scala","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/scala","title":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"\nrf_model: org.apache.spark.ml.classification.RandomForestClassificationModel = RandomForestClassificationModel (uid=rfc_e777f3d2d76f) with 150 trees\n\nfeatures: org.apache.spark.sql.DataFrame = [features: vector]\n\nfeatureImp: org.apache.spark.ml.linalg.Vector = (22,[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,16,17,18,19,20,21],[0.0016254107374385825,0.006920725250441161,0.012605246217834886,0.016383931941766874,0.3959975609245473,3.3549796918427605E-4,0.02108524372352343,0.058662303126419436,0.06248910675619425,0.014267206262315571,0.02770420329794222,0.06035498700671576,0.05951143828127974,0.008393642934204968,0.01098488095311055,1.9442960152014308E-4,0.016718918328919936,0.03114801197595062,0.14363120813685484,0.03710399005608978,0.013882056517745435])\nOdor_Index -> 0.3959975609245473\nSporePrintColor_Index -> 0.14363120813685484\nGillColor_Index -> 0.06248910675619425\nSSAR_Index -> 0.06035498700671576\nSSBR_Index -> 0.05951143828127974\nGillSize_Index -> 0.058662303126419436\nPopulation_Index -> 0.03710399005608978\nRingType_Index -> 0.03114801197595062\nStalkRoot_Index -> 0.02770420329794222\nGillSpacing_Index -> 0.02108524372352343\nRingNumber_Index -> 0.016718918328919936\nBruises_Index -> 0.016383931941766874\nStalkShape_Index -> 0.014267206262315571\nHabitat_Index -> 0.013882056517745435\nCapColor_Index -> 0.012605246217834886\nSCBR_Index -> 0.01098488095311055\nSCAR_Index -> 0.008393642934204968\nCapSurface_Index -> 0.006920725250441161\nCapShape_Index -> 0.0016254107374385825\nGillAttachment_Index -> 3.3549796918427605E-4\nVeilColor_Index -> 1.9442960152014308E-4\nVeilType_Index -> 0.0\n"}]},"apps":[],"jobName":"paragraph_1526391969605_-774401603","id":"20180409-150438_1067399204","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6436"},{"text":"%sql \n\nselect * from MushResults limit 10 \n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"colWidth":12,"editorMode":"ace/mode/sql","results":{},"enabled":true,"editorSetting":{"language":"sql","editOnDblClick":false}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"Label\tLabel_Index\tprediction\tCapShape\tCapSurface\tCapColor\tBruises\tOdor\tGillAttachment\tGillSpacing\tGillSize\tGillColor\tStalkShape\tStalkRoot\tSSAR\tSSBR\tSCAR\tSCBR\tVeilType\tVeilColor\tRingNumber\tRingType\tSporePrintColor\tPopulation\tHabitat\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tGRAY\tNO\tNONE\tFREE\tCROWDED\tBROAD\tGRAY\tENLARGING\t?\tSMOOTH\tSILKY\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tSCATTERED\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tGRAY\tNO\tNONE\tFREE\tCROWDED\tBROAD\tPINK\tENLARGING\t?\tSILKY\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tNUMEROUS\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tGRAY\tNO\tNONE\tFREE\tCROWDED\tBROAD\tWHITE\tENLARGING\t?\tSILKY\tSILKY\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tNUMEROUS\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tGRAY\tNO\tNONE\tFREE\tCROWDED\tBROAD\tWHITE\tENLARGING\t?\tSMOOTH\tSILKY\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tNUMEROUS\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tGRAY\tNO\tNONE\tFREE\tCROWDED\tBROAD\tWHITE\tENLARGING\t?\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tNUMEROUS\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tWHITE\tNO\tNONE\tFREE\tCROWDED\tBROAD\tGRAY\tENLARGING\t?\tSILKY\tSILKY\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tSCATTERED\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tWHITE\tNO\tNONE\tFREE\tCROWDED\tBROAD\tGRAY\tENLARGING\t?\tSILKY\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tSCATTERED\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tWHITE\tNO\tNONE\tFREE\tCROWDED\tBROAD\tPINK\tENLARGING\t?\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tSCATTERED\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tFIBROUS\tWHITE\tNO\tNONE\tFREE\tCROWDED\tBROAD\tWHITE\tENLARGING\t?\tSILKY\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tTWO\tPENDANT\tWHITE\tNUMEROUS\tGRASSES\nEDIBLE\t0.0\t0.0\tBELL\tSCALY\tWHITE\tBRUISES\tALMOND\tFREE\tCLOSE\tBROAD\tBLACK\tENLARGING\tCLUB\tSMOOTH\tSMOOTH\tWHITE\tWHITE\tPARTIAL\tWHITE\tONE\tPENDANT\tBROWN\tNUMEROUS\tGRASSES\n"}]},"apps":[],"jobName":"paragraph_1526391969606_-773247356","id":"20180410-063709_1893215056","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6437"},{"text":"%sql select label, prediction, count(prediction) from MushResults where group by label, prediction\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"colWidth":12,"editorMode":"ace/mode/sql","results":{},"enabled":true,"editorSetting":{"language":"sql","editOnDblClick":false}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TABLE","data":"label\tprediction\tcount(prediction)\nEDIBLE\t0.0\t889\nPOISONOUS\t1.0\t765\n"}]},"apps":[],"jobName":"paragraph_1526391969608_-775555850","id":"20180410-121159_1837585684","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6438"},{"text":"%spark\n","dateUpdated":"2018-05-15T06:46:09-0700","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526391969608_-775555850","id":"20180413-090409_362763940","dateCreated":"2018-05-15T06:46:09-0700","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:6439"}],"name":"Mushroom Classifier - Scala","id":"2DDYZ9GY9","angularObjects":{"2CJHMEY2N:shared_process":[],"2C8A4SZ9T_livy2:shared_process":[],"2CMUCW84V:shared_process":[],"2CMN8PT2P:shared_process":[],"2CM7G3PVG:shared_process":[],"2C4U48MY3_spark2:shared_process":[],"2CN3UF7FN:shared_process":[],"2CM6BCY3D:shared_process":[]},"config":{"looknfeel":"default","personalizedMode":"false"},"info":{}}