{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 1-Input\n", "mmtf-pyspark operates on 3D structures in the compressed binary MMTF file format.\n", "\n", "Info about MMTF:\n", "* [Website](http://mmtf.rcsb.org/index.html)\n", "* [Format paper](https://doi.org/10.1371/journal.pcbi.1005575)\n", "* [Compression paper](https://doi.org/10.1371/journal.pone.0174846)\n", "* [Specification](https://github.com/rcsb/mmtf/blob/master/spec.md)\n", "\n", "Protein Data Bank structures are available in two MMTF data representations:\n", "* full\n", " * All atom representation \n", " * 0.001Å coordinate precision, 0.01 B-factor and occupancy precision\n", "* reduced\n", " * C-alpha atoms only for polypeptides \n", " * P-backbone atoms only for polynucleotides \n", " * All atom representation for all other residue types \n", " * 0.1Å coordinate precision, 0.1 B-factor and occupancy precision." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import pyspark and mmtfPyspark" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from pyspark.sql import SparkSession\n", "from mmtfPyspark.io import mmtfReader" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configure Spark" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "spark = SparkSession.builder.appName(\"1-Input\").getOrCreate()\n", "sc = spark.sparkContext" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "4" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sc.defaultParallelism" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download Structures\n", "For a small list of PDB entries (10s to 100), the download methods are the quickest way to import structures. Here we download a list of 4 structure in the full representation." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "pdbids = ['1LQ9','1LXJ','4XPX','1P1J']\n", "structures = mmtfReader.download_full_mmtf_files(pdbids)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Structures are represented as keyword-value pairs (tuples):\n", "* key: structure identifier (e.g., PDB ID)\n", "* value: MmtfStructure (structure data)\n", "\n", "We can print the keys and values using the collect() method. Note, that the structures are loaded in an arbritray order. You cannot rely on the order of structures." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['1P1J', '1LXJ', '4XPX', '1LQ9']" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "structures.keys().collect()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[,\n", " ,\n", " ,\n", " ]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "structures.values().collect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Spark represents these keyword-value pairs as Resilient Distributed Datasets (RDDs), which are a fault-tolerant collection of elements that can be operated on in parallel. To see how the dataset was distributed, we can print the number of partitions." 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "4" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "structures.getNumPartitions()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading structures from an MMTF Hadoop Sequence File\n", "Next, we read PDB structures from a local copy of an MMTF Hadoop Sequence file. For the following examples to work, the MMTF_FULL and MMTF_REDUCED environment variables need to be set. See installation instructions for details." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you have long list (1000s) of PDB IDs, you can read the list of structures from a local copy of the MMTF Hadoop Sequence file,\n", "however, it's very inefficent for a few structures, e.g, in the example below." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "path = \"../resources/mmtf_reduced_sample/\"\n", "structures = mmtfReader.read_sequence_file(path, pdbids)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's print the keys again and see how long this takes. You can see that Spark loads the data only when and if it's required." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['1LQ9', '1LXJ', '4XPX', '1P1J']" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "structures.keys().collect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's read a sample of the PDB archive from the MMTF Hadoop Sequence file" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "structures = mmtfReader.read_sequence_file(path).cache()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### There are 9756 structures in the sample file" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 9.6 ms, sys: 3.85 ms, total: 13.4 ms\n", "Wall time: 5.18 s\n" ] }, { "data": { "text/plain": [ "9756" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "structures.count()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### About data flow and caching in Spark\n", "Now, let's count the number of structures again. Should this be faster this time since we already loaded the entire PDB? \n", "\n", "Not necessarily, the data from the Hadoop Sequence file are streamed through parallel threads. If you need the data again, they need to be reloaded from scratch, unless they are cached. See .cache() method call after reading the MMTF Hadoop Sequence file.\n", "\n", "Remove the .cache() method call, run this notebook again and compare the time it takes to count the number of structures." 
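, "\n", "\n", "Before counting again, we can verify that the RDD is marked for caching; is_cached is a standard attribute of PySpark RDDs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# True after .cache() has been called on the RDD (standard PySpark RDD attribute)\n", "structures.is_cached" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the cache in place, the second count is served from memory and should run much faster:"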
] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 7.45 ms, sys: 2.9 ms, total: 10.3 ms\n", "Wall time: 900 ms\n" ] }, { "data": { "text/plain": [ "9756" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "structures.count()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading the whole PDB from MMTF-Hadoop Sequence files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this workshop we use a sample set of the PDB with about 10,000 structures.\n", "\n", "To use the entire PDB, the MMTF_FULL and MMTF_REDUCED environment variables must to be set. See mmtf-pypark [installation instructions](https://github.com/sbl-sdsc/mmtf-pyspark#installation) for details." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Read whole PDB in the full (all atom) representation\n", "We commented this lines below, since we are using a smaller sample of the PDB for the tutorials. \n", "\n", "To use the whole PDB, the MMTF_FULL and MMTF_REDUCED environment variables need to be set to the `full` and `reduced` MMTF Hadoop Sequence file locations. See [installation instructions](https://github.com/sbl-sdsc/mmtf-pyspark#hadoop-sequence-files) for details." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "# %%time\n", "# pdb_full = mmtfReader.read_full_sequence_file();\n", "# pdb_full.count()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Read whole PDB in the reduced representation" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "# %%time\n", "# pdb_reduced = mmtfReader.read_reduced_sequence_file();\n", "# pdb_reduced.count()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Very Important: Stop Spark!!!\n", "It is very important to run the notebook all the way to the spark.stop() statement to terminate Spark. Otherwise you may end up running multiple instances of Spark that will interfere with each other." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "spark.stop()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 2 }