{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ** Convert Databricks Exports to IPython Notebooks **\n", "#### This notebook should be uploaded to the VM just like other notebooks. In addition to uploading this notebook, you should upload your export from Databricks. The Databricks export should have the .dbc extension. To download your .dbc file from Databricks, click on the down arrow next to your root folder then hover the mouse over \"Export\" and click on \"DBC Archive\". You can also export a single file by clicking on the down arrow next to the file, hovering the mouse over \"Export\", and clicking on \"DBC Archive\".\n", "#### To perform the conversion, you should just run the cells below. If necessary, you can change the `fileLocation` in the first code cell below. The second code cell will list the contents of the .dbc file, so you can check that you're working with the correct file. The third code cell will list the .ipynb files that will be generated. Note that if the files already exist they will be overwritten. The fourth code cell will perform the conversion and overwrite any files, if necessary. The fifth code cell just cleans up the directory that was used for extracting the .dbc file." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ** Additional Details (Optional) **\n", "#### Note that this code does some processing tailored to the courses, like special handling of the \"baseDir\" directory which points to the course datasets, so it may do unexpected things to dbc files in rare cases. The notebook handles MathJax conversion, creates a do-nothing `display` function and a working `displayHTML` function. It also replaces `baseDir = ` with `baseDir = 'data'`.\n", "#### The only requirement is IPython version 3, so this conversion can also be run outside of the Virtual Machine (VM) environment." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Change this fileLocation if necessary\n", "# Subsequent downloads might be called 'Databricks (1).dbc', etc.\n", "fileLocation = 'Databricks.dbc'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Extract dbc file\n", "\n", "# Cleanup from prior run\n", "import shutil\n", "try: shutil.rmtree('tmp_dbc')\n", "except OSError: pass\n", "\n", "import zipfile\n", "import os\n", "try: os.mkdir('tmp_dbc')\n", "except OSError: pass\n", "with zipfile.ZipFile(fileLocation, 'r') as z:\n", " z.extractall('tmp_dbc')\n", "\n", "print '*** Contents from the .dbc file (usually one file or a directory) ***\\n'\n", "print os.listdir('tmp_dbc')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Find files to parse\n", "import fnmatch\n", "\n", "filesToParse = []\n", "for root, dirNames, fileNames in os.walk('tmp_dbc'):\n", " for fileName in fnmatch.filter(fileNames, '*.python'):\n", " filesToParse.append((root, fileName))\n", "\n", "def getIpynbName((path, fileName)):\n", " path = os.path.normpath(path)\n", " pathSplit = path.split(os.sep)[2:]\n", " baseDir = os.path.join(*pathSplit) if len(pathSplit) > 0 else '.'\n", " newFileName = os.path.splitext(fileName)[0] + '_export.ipynb'\n", " return os.path.join(baseDir, newFileName)\n", "\n", "print \"*** Files to be created (relative to your current working directory) ***\"\n", "print \"(Warning: files will be overwritten!)\\n\"\n", "for path, fileName in filesToParse:\n", " print getIpynbName((path, fileName))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Create the IPython Notebooks\n", "# Convert .python files to .ipynb files\n", "import codecs\n", "from IPython import nbformat\n", "from IPython.nbformat.v3.nbpy import PyReader\n", "import json\n", "import re\n", "\n", "_header = u'# -*- coding: utf-8 -*-\\n# 3.0\\n'\n", "_markdownCell = u'\\n\\n# \\n\\n'\n", "_codeCell = u'\\n\\n# \\n\\n'\n", "_firstCell = u\"\"\"# Increase compatibility with Databricks\n", "from IPython.display import display as idisplay, HTML\n", "displayHTML = lambda x: idisplay(HTML(x))\n", "def display(*args, **kargs): pass\"\"\"\n", "\n", "def convertToIpynb(fileToParse):\n", " \n", " with codecs.open(os.path.join(*fileToParse), encoding=\"utf-8\") as fp:\n", " jsonData = json.load(fp)\n", " commands = jsonData['commands']\n", " commandInfo = [(x['position'], x['command']) for x in commands]\n", " commandList = sorted(commandInfo)\n", "\n", " with codecs.open('tmp_ipynb.py', 'w', encoding=\"utf-8\") as fp:\n", " fp.write(_header)\n", " fp.write(_codeCell)\n", " fp.write(_firstCell)\n", "\n", " for position, command in commandList:\n", " if re.match(r'\\s*%md', command):\n", " command = re.sub(r'^\\s*%md', '', command, flags=re.MULTILINE)\n", " command = re.sub(r'(%\\(|\\)%)', '$', command)\n", " command = re.sub(r'(%\\[|\\]%)', '$$', command)\n", "\n", " fp.write(_markdownCell)\n", " asLines = command.split('\\n')\n", " command = '# ' + '\\n# '.join(asLines)\n", " else:\n", " command = re.sub(r'^\\s*baseDir\\s*=.*$', 'baseDir = \\'data\\'', \n", " command, flags=re.MULTILINE)\n", " fp.write(_codeCell)\n", "\n", " fp.write(command)\n", "\n", " outputName = getIpynbName(fileToParse)\n", "\n", " with codecs.open('tmp_ipynb.py', 'r', encoding=\"utf-8\") as 
intermediate:\n", " nb = PyReader().read(intermediate)\n", "\n", " os.remove('tmp_ipynb.py')\n", " baseDirectory = os.path.split(outputName)[0]\n", "\n", " if not os.path.isdir(baseDirectory):\n", " os.makedirs(baseDirectory)\n", "\n", " with codecs.open(outputName, 'w', encoding=\"utf-8\") as output:\n", " nbformat.write(nbformat.convert(nb, 4.0), output) \n", " print 'Created: {0}'.format(outputName)\n", "\n", "for fileToParse in filesToParse:\n", " convertToIpynb(fileToParse)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Cleanup\n", "import shutil\n", "try: shutil.rmtree('tmp_dbc')\n", "except OSError: pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 }