{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# BLAST on the Command Line with Advanced Python\n", "\n", "This notebook builds on the previous one, [BLAST on the Command Line and Integrating with Python](BLAST%20on%20Command%20Line%20and%20Integrating%20with%20Python.ipynb#BLAST-on-the-Command-Line-and-Integrating-with-Python). See that one for introduction and credits. \n", "\n", "The previous notebook relied on the more traditional/universal routes of passing data that involve file intermediates. Here, a number of advanced Python/Jupyter approaches/tricks are illustrated for working in the Jupyter environment.\n", "\n", "-----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparation\n", "\n", "Get the data, the `blast_to_df` script, and set up the database by running these commands.\n", "\n", "Repeating these steps if you had already done so this session will cause no harm, and so go ahead and run these two cells." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!curl -O http://sgd-archive.yeastgenome.org/sequence/S288C_reference/chromosomes/fasta/chrmt.fsa\n", "!curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/blast-utilities/blast_to_df.py\n", "import pandas as pd\n", "!makeblastdb -in chrmt.fsa -dbtype nucl" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "curl -LO http://yjx1217.github.io/Yeast_PacBio_2016/data/Mitochondrial_Genome/S288C.mt.genome.fa.gz\n", "gunzip -f S288C.mt.genome.fa.gz\n", "# Up until sometime around September / October 2018, the above referenced file was named `S288c.mt.genome.fa.gz` on that server and\n", "# I built things around that. The name has since changed to the more-correct `S288C.mt.genome.fa.gz`, but I am going \n", "# to convert name to earlier form to make subsequent commands work, be more distinguished from SGD S288C reference sequence,\n", "# and make things continue match what was seen before.\n", "mv S288C.mt.genome.fa S288c.mt.genome.fa\n", "curl -LO http://yjx1217.github.io/Yeast_PacBio_2016/data/Mitochondrial_Genome/SK1.mt.genome.fa.gz\n", "gunzip -f SK1.mt.genome.fa.gz\n", "curl -LO http://yjx1217.github.io/Yeast_PacBio_2016/data/Mitochondrial_Genome/Y12.mt.genome.fa.gz\n", "gunzip -f Y12.mt.genome.fa.gz\n", "var=$(echo \"S288c.mt.genome.fa\" | cut -d'.' --complement -f2-).mito\n", "sed -i \"1s/.*/>$var/\" S288c.mt.genome.fa\n", "var=$(echo \"SK1.mt.genome.fa\" | cut -d'.' --complement -f2-).mito\n", "sed -i \"1s/.*/>$var/\" SK1.mt.genome.fa\n", "var=$(echo \"Y12.mt.genome.fa\" | cut -d'.' --complement -f2-).mito\n", "sed -i \"1s/.*/>$var/\" Y12.mt.genome.fa" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you are prepared to run BLAST on the command line." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## BLAST results into Python without file intermediates\n", "\n", "This is going to rely on approaches very similar to those illustrated [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/PatMatch%20with%20more%20Python.ipynb#Passing-results-data-into-active-memory-without-a-file-intermediate) and [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/Sending%20PatMatch%20output%20directly%20to%20Python.ipynb##Running-Patmatch-and-passing-the-results-to-Python-without-creating-an-output-file-intermediate).\n", "\n", "We obtained the `blast_to_df.py` script in the preparation steps above. However, instead of using it as an external script as in [the first notebook](BLAST%20on%20Command%20Line%20and%20Integrating%20with%20Python.ipynb#BLAST-on-the-Command-Line-and-Integrating-with-Python), we want to use the core function of that script within this notebook for the options that involve no file intermediatess. Similar to the way we imported a lot of other useful modules in the first notebook and a cell above, you can run the next cell to bring in to memory of this notebook's computational environment, the main function associated with the `blast_to_df.py` script, aptly named `blast_to_df`. (As written below the command to do that looks a bit redundant;however, the first from part of the command below actually is referencing the `blast_to_df.py` script, but it doesn't need the `.py` extension because the import only deals with such files.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from blast_to_df import blast_to_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can demonstrate that worked by calling the function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "blast_to_df()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the module was not imported, you'd see `ModuleNotFoundError: No module named 'blast_to_df'`, but instead you should see it saying it is missing `results` to act on because you passed it nothing.\n", "\n", "After importing the main function of that script into this running notebook, you are ready to demonstrate the additional approaches that don't make file intermediates. These approaches work without the intermediates because instead of being directed to a file, the stdout (standard output stream of the shell) containing the results made by the BLAST command is directed to Python or variables that Python can access. Then, for processing the BLAST results to a dataframe, the imported `blast_to_df` function is used within the computational environment of the notebook and the result assigned to a variable in the running the notebook. In the end, the results are in an active dataframe in the notebook without needing to read the pickled dataframe. **Although bear in mind the pickled dataframe still gets made, and it is good to download and keep that pickled dataframe since you'll find it convenient for reading and getting back into an analysis without need for rerunning BLAST again.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Option 1\n", "\n", "First we'll do one of the several methods I have found to do this and show how to go to the next step entirely. 
This first example uses an approach illustrated [here](https://stackoverflow.com/a/42703609/8508004). The result that is returned is of the type `IPython.utils.text.SList`, which offers some handy utility attributes, as detailed [here](http://ipython.readthedocs.io/en/stable/api/generated/IPython.utils.text.html#IPython.utils.text.SList)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cat S288c.mt.genome.fa Y12.mt.genome.fa SK1.mt.genome.fa > pacbio.mt.fa\n", "result = !blastn -query pacbio.mt.fa -db chrmt.fsa -outfmt \"6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq\" -task blastn\n", "from blast_to_df import blast_to_df\n", "blast_df = blast_to_df(result.n)\n", "blast_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(Note that I left in this block of commands the import statement that you had just run previously. Running it again causes no problem, because Python sees the module is already imported and does not re-execute it. I originally separated that import out above to highlight it; however, this placement better reflects how you'd actually use it as part of the pertinent block of code.)\n", "\n", "The option demonstrated in the above cell has the distinction that all the steps can be combined in one cell of the notebook, as demonstrated [below](BLAST%20on%20the%20Command%20Line%20with%20Advanced%20Python.ipynb#Processing-many-files-using-Python). This is because the `!` approach makes a temporary subshell outside of the notebook environment, to which it sends the command after the exclamation point. After that command is processed and any communication handled, the subshell where it ran is discarded. Given the transient nature of this environment, you'll find `!cd` never seems to work, as discussed [here](http://ipython.readthedocs.io/en/stable/interactive/magics.html); search for `!cd` to find the pertinent section, where they also show the line-magic solution. ([Here](https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html) is another good resource in this regard.)\n", "\n", "The additional options for passing the results demonstrated below instead rely on cell magics, and so the output needs to be captured from one cell before the next steps can be undertaken in an additional cell. While the extra cells involved are not a big deal, I find the `!` approach can be nice for streamlining things when making mini-pipelines/workflows that use the Jupyter environment as a 'glue' to merge Python and command line use.\n", "\n", "In preparation for demonstrating the other options, we'll tag that pickled dataframe as corresponding to this demonstration above; the renaming cell follows a brief aside below."
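] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a brief aside, the next cell pokes at the `SList` object held in `result` (it assumes the 'Option 1' cell above has been run, so `result` is defined). This is just a sketch of a few of the conveniences noted in the linked `SList` documentation: its length, the `.n` attribute that was passed to `blast_to_df` above, and the `.grep()` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A quick look at the SList returned by the `!` call in 'Option 1' above.\n", "# (Assumes the 'Option 1' cell has been run, so `result` exists.)\n", "print(type(result))           # IPython.utils.text.SList\n", "print(len(result), 'lines')   # one line per tab-separated BLAST result row\n", "print(result.n[:200])         # .n joins the lines with newlines; this is what was passed to blast_to_df\n", "result.grep('S288c')[:2]      # conveniences such as .grep() (and .fields()) are also available" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now run the next cell to tag the pickled dataframe made in 'Option 1'."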
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mv BLAST_pickled_df.pkl BLAST_pickled_dfOPT1.pkl" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The additional options for passing the results will be stepped through in a manner similar to 'Option 1' using different approaches for step 1 each time and step #2 varied to handle as necessary.\n", "\n", "#### Option 2\n", "\n", "In this option, [`%%capture` cell magics](http://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-capture) is used, and then using the attributes of the `utils.cpature` object you can easily get the stdout and/or stderr as a string, see [here]( http://ipython.readthedocs.io/en/stable/api/generated/IPython.utils.capture.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture out\n", "!cat S288c.mt.genome.fa Y12.mt.genome.fa SK1.mt.genome.fa > pacbio.mt.fa\n", "!blastn -query pacbio.mt.fa -db chrmt.fsa -outfmt \"6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq\" -task blastn" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from blast_to_df import blast_to_df\n", "blast_df = blast_to_df(out.stdout)\n", "!mv BLAST_pickled_df.pkl BLAST_pickled_dfOPT2.pkl\n", "blast_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that in the demonstration of this option, the renaming of the pickled dataframe has been merged into that second step, too." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Option 3\n", "\n", "In this option, a varation of `%%bash` cell magic is used to send the output to a variable as illustrated [here](https://stackoverflow.com/a/24776049/8508004). The `%%bash` magic directs all contents in the cell to the bash shell and so exclamation points are not needed for the command line commands." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash --out output\n", "cat S288c.mt.genome.fa Y12.mt.genome.fa SK1.mt.genome.fa > pacbio.mt.fa\n", "blastn -query pacbio.mt.fa -db chrmt.fsa -outfmt \"6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from blast_to_df import blast_to_df\n", "blast_df = blast_to_df(output, pickle_df=False)\n", "blast_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that in the demonstration of this option, an additional argument was provided to the `blast_to_df()` function to indicate to not pickle the dataframe this time.\n", "\n", "\n", "-----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Processing many files using Python\n", "\n", "Although not ideal, if you had a VERY LARGE set of sequences to query and/or you only needed a tiny piece of infromation out of the BLAST results, it might not make sense, due to disk space, to concantenate all of the associated files into a single Multi-FASTA file. 
Instead of the approach I suggested as the recommended way in [the previous notebook](BLAST%20on%20Command%20Line%20and%20Integrating%20with%20Python.ipynb#Practical-BLAST:-Making-things-easier-by-having-related-sequences-to-be-queried-in-same-file), you could use Python and an approach from the first section of this notebook to automate things." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "import fnmatch\n", "import os\n", "import sys\n", "from blast_to_df import blast_to_df\n", "\n", "collected_scores = []\n", "\n", "for file in os.listdir('.'):\n", "    if fnmatch.fnmatch(file, '*.mt.genome.fa'):\n", "        blast_result = !blastn -query {file} -db chrmt.fsa -outfmt \"6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq\" -task blastn\n", "        blast_df = blast_to_df(blast_result.n, pickle_df=False)\n", "        high_score_df = blast_df.sort_values('bitscore', ascending=False)\n", "        collected_scores.append(high_score_df.iloc[0][\"bitscore\"])\n", "\n", "#print(collected_scores)\n", "sys.stderr.write(\"\\n\\n\\n\\nThe best scores were {}\".format(collected_scores)) # using this instead of\n", "# `print` because stderr and stdout seem to get out of sync, as the placement of the `%%time` output seems to show" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "-----\n", "\n", "## Where to next?\n", "\n", "Two other notebooks in this series, [one entitled 'Searching for coding sequences in genomes using BLAST and Python'](Searching%20for%20coding%20sequences%20in%20genomes%20using%20BLAST%20and%20Python.ipynb) and [one entitled 'Searching for homologs among deduced proteins from a genome using BLAST and Python'](Searching%20for%20homologs%20among%20deduced%20proteins%20from%20a%20genome%20using%20BLAST%20and%20Python.ipynb), build on what has been introduced in these two introductory notebooks to accomplish tasks reminiscent of a real workflow.\n", "\n", "In the case of [the notebook 'Searching for coding sequences in genomes using BLAST and Python'](Searching%20for%20coding%20sequences%20in%20genomes%20using%20BLAST%20and%20Python.ipynb), the task is to identify orthologs of a budding yeast gene in the genomes of some different strains and wild cousins.\n", "\n", "In the case of [the notebook 'Searching for homologs among deduced proteins from a genome using BLAST and Python'](Searching%20for%20homologs%20among%20deduced%20proteins%20from%20a%20genome%20using%20BLAST%20and%20Python.ipynb), the task is to identify homologs of budding yeast proteins among the deduced proteins of another organism." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.8" } }, "nbformat": 4, "nbformat_minor": 4 }