{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Source of the materials**: Biopython Tutorial and Cookbook (adapted)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Sequence Input/Output" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook we'll discuss in more detail the Bio.SeqIO module, which was briefly introduced before. This aims to provide a simple interface for working with assorted sequence file formats in a uniform way.\n", "See also the Bio.SeqIO wiki page (http://biopython.org/wiki/SeqIO), and the built in documentation (also [online](http://biopython.org/DIST/docs/api/Bio.SeqIO-module.html \"online\")):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from Bio import SeqIO\n", "\n", "help(SeqIO)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The 'catch' is that you have to work with SeqRecord objects (see Chapter 4), which contain a Seq object (Chapter 3) plus annotation like an identifier and description. Note that when dealing with very large FASTA or FASTQ files, the overhead of working with all these objects can make scripts too slow. In this case consider the low-level SimpleFastaParser and FastqGeneralIterator parsers which return just a tuple of strings for each record (see Section 5.6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.1 Parsing or Reading Sequences" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The workhorse function Bio.SeqIO.parse() is used to read in sequence data as SeqRecord objects. This function expects two arguments:\n", "\n", "1. The first argument is a handle to read the data from, or a filename. A handle is typically a file opened for reading, but could be the output from a command line program, or data downloaded from the internet (see Section 5.3). See Section 24.1 for more about handles.\n", "\n", "2. The second argument is a lower case string specifying sequence format -- we don't try and guess the file format for you! See http://biopython.org/wiki/SeqIO for a full listing of supported formats.\n", "\n", "The Bio.SeqIO.parse() function returns an _iterator_ which gives SeqRecord objects. Iterators are typically used in a for loop as shown below.\n", "\n", "Sometimes you'll find yourself dealing with files which contain only a single record. For this situation use the function Bio.SeqIO.read() which takes the same arguments. Provided there is one and only one record in the file, this is returned as a SeqRecord object. Otherwise an exception is raised." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.1.1 Reading Sequence Files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general Bio.SeqIO.parse() is used to read in sequence files as SeqRecord objects, and is typically used with a for loop like this:\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "gi|2765658|emb|Z78533.1|CIZ78533\nSeq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGG...CGC')\n740\ngi|2765657|emb|Z78532.1|CCZ78532\nSeq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGACAACAG...GGC')\n753\ngi|2765656|emb|Z78531.1|CFZ78531\nSeq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGACAGCAG...TAA')\n748\n" ] } ], "source": [ "# we show the first 3 only\n", "from Bio import SeqIO\n", "\n", "for i, seq_record in enumerate(SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\")):\n", " print(seq_record.id)\n", " print(repr(seq_record.seq))\n", " print(len(seq_record))\n", " if i == 2:\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above example is repeated from the introduction in Section 2.4, and will load the orchid DNA sequences in the FASTA format file ls_orchid.fasta. If instead you wanted to load a GenBank format file like ls_orchid.gbk then all you need to do is change the filename and the format string:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Z78533.1\nCGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGGAATAAACGATCGAGTGAATCCGGAGGACCGGTGTACTCAGCTCACCGGGGGCATTGCTCCCGTGGTGACCCTGATTTGTTGTTGGGCCGCCTCGGGAGCGTCCATGGCGGGTTTGAACCTCTAGCCCGGCGCAGTTTGGGCGCCAAGCCATATGAAAGCATCACCGGCGAATGGCATTGTCTTCCCCAAAACCCGGAGCGGCGGCGTGCTGTCGCGTGCCCAATGAATTTTGATGACTCTCGCAAACGGGAATCTTGGCTCTTTGCATCGGATGGAAGGACGCAGCGAAATGCGATAAGTGGTGTGAATTGCAAGATCCCGTGAACCATCGAGTCTTTTGAACGCAAGTTGCGCCCGAGGCCATCAGGCTAAGGGCACGCCTGCTTGGGCGTCGCGCTTCGTCTCTCTCCTGCCAATGCTTGCCCGGCATACAGCCAGGCCGGCGTGGTGCGGATGTGAAAGATTGGCCCCTTGTGCCTAGGTGCGGCGGGTCCAAGAGCTGGTGTTTTGATGGCCCGGAACCCGGCAAGAGGTGGACGGATGCTGGCAGCAGCTGCCGTGCGAATCCCCCATGTTGTCGTGCTTGTCGGACAGGCAGGAGAACCCTTCCGAACCCCAATGGAGGGCGGTTGACCGCCATTCGGATGTGACCCCAGGTCAGGCGGGGGCACCCGCTGAGTTTACGC\n740\nZ78532.1\nCGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGACAACAGAATATATGATCGAGTGAATCTGGAGGACCTGTGGTAACTCAGCTCGTCGTGGCACTGCTTTTGTCGTGACCCTGCTTTGTTGTTGGGCCTCCTCAAGAGCTTTCATGGCAGGTTTGAACTTTAGTACGGTGCAGTTTGCGCCAAGTCATATAAAGCATCACTGATGAATGACATTATTGTCAGAAAAAATCAGAGGGGCAGTATGCTACTGAGCATGCCAGTGAATTTTTATGACTCTCGCAACGGATATCTTGGCTCTAACATCGATGAAGAACGCAGCTAAATGCGATAAGTGGTGTGAATTGCAGAATCCCGTGAACCATCGAGTCTTTGAACGCAAGTTGCGCTCGAGGCCATCAGGCTAAGGGCACGCCTGCCTGGGCGTCGTGTGTTGCGTCTCTCCTACCAATGCTTGCTTGGCATATCGCTAAGCTGGCATTATACGGATGTGAATGATTGGCCCCTTGTGCCTAGGTGCGGTGGGTCTAAGGATTGTTGCTTTGATGGGTAGGAATGTGGCACGAGGTGGAGAATGCTAACAGTCATAAGGCTGCTATTTGAATCCCCCATGTTGTTGTATTTTTTCGAACCTACACAAGAACCTAATTGAACCCCAATGGAGCTAAAATAACCATTGGGCAGTTGATTTCCATTCAGATGCGACCCCAGGTCAGGCGGGGCCACCCGCTGAGTTGAGGC\n753\nZ78531.1\nCGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGACAGCAGAACATACGATCGAGTGAATCCGGAGGACCCGTGGTTACACGGCTCACCGTGGCTTTGCTCTCGTGGTGAACCCGGTTTGCGACCGGGCCGCCTCGGGAACTTTCATGGCGGGTTTGAACGTCTAGCGCGGCGCAGTTTGCGCCAAGTCATATGGAGCGTCACCGATGGATGGCATTTTTGTCAAGAAAAACTCGGAGGGGCGGCGTCTGTTGCGCGTGCCAATGAATTTATGACGACTCTCGGCAACGGGATATCTGGCTCTTGCATCGATGAAGAACGCAGCGAAATGCGATAAGTGGTGTGAATTGCAGAATCCCGCGAACCATCGAGTCTTTGAACGCAAGTTGCGCCCGAGGCCATCAGGCTAAGGGCACGCCTGCCTGGGCGTCGTGTGCT
GCGTCTCTCCTGATAATGCTTGATTGGCATGCGGCTAGTCTGTCATTGTGAGGACGTGAAAGATTGGCCCCTTGCGCCTAGGTGCGGCGGGTCTAAGCATCGGTGTTCTGATGGCCCGGAACTTGGCAGTAGGTGGAGGATGCTGGCAGCCGCAAGGCTGCCGTTCGAATCCCCCGTGTTGTCGTACTCGTCAGGCCTACAGAAGAACCTGTTTGAACCCCCAGTGGACGCAAAACCGCCCTCGGGCGGTGATTTCCATTCAGATGCGACCCCAGTCAGGCGGGCCACCCGTGAGTAA\n748\n" ] } ], "source": [ "#we show the frist 3\n", "from Bio import SeqIO\n", "\n", "for i, seq_record in enumerate(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")):\n", " print(seq_record.id)\n", " print(seq_record.seq)\n", " print(len(seq_record))\n", " if i == 2:\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similarly, if you wanted to read in a file in another file format, then assuming Bio.SeqIO.parse() supports it you would just need to change the format string as appropriate, for example 'swiss' for SwissProt files or 'embl' for EMBL text files. There is a full listing on the wiki page (http://biopython.org/wiki/SeqIO) and in the built in documentation (also [online](\\http://biopython.org/DIST/docs/api/Bio.SeqIO-module.html \"online\")).\n", "\n", "Another very common way to use a Python iterator is within a list comprehension (or\n", "a generator expression). For example, if all you wanted to extract from the file was\n", "a list of the record identifiers we can easily do this with the following list comprehension:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "['Z78533.1',\n", " 'Z78532.1',\n", " 'Z78531.1',\n", " 'Z78530.1',\n", " 'Z78529.1',\n", " 'Z78527.1',\n", " 'Z78526.1',\n", " 'Z78525.1',\n", " 'Z78524.1',\n", " 'Z78523.1']" ] }, "metadata": {}, "execution_count": 4 } ], "source": [ "from Bio import SeqIO \n", "\n", "identifiers=[seq_record.id for seq_record in SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")][:10] # ten only\n", "identifiers" ] }, { "source": [ "There are more examples using SeqIO.parse() in a list comprehension like this in Section 20.2 (e.g. for plotting sequence lengths or GC%)." ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.1.2 Iterating over the records in a sequence file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above examples, we have usually used a for loop to iterate over all the records one by one. You can use the for loop with all sorts of Python objects (including lists, tuples and strings) which support the iteration interface.\n", "\n", "The object returned by Bio.SeqIO is actually an iterator which returns SeqRecord objects. You get to see each record in turn, but once and only once. 
The plus point is that an iterator can save you memory when dealing with large files.\n", "\n", "Instead of using a for loop, can also use the next() function on an iterator to step through the entries, like this:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "gi|2765658|emb|Z78533.1|CIZ78533\ngi|2765658|emb|Z78533.1|CIZ78533 C.irapeanum 5.8S rRNA gene and ITS1 and ITS2 DNA\ngi|2765657|emb|Z78532.1|CCZ78532\ngi|2765657|emb|Z78532.1|CCZ78532 C.californicum 5.8S rRNA gene and ITS1 and ITS2 DNA\n" ] } ], "source": [ "record_iterator = SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\")\n", "\n", "first_record = next(record_iterator)\n", "print(first_record.id)\n", "print(first_record.description)\n", "\n", "second_record = next(record_iterator)\n", "print(second_record.id)\n", "print(second_record.description)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that if you try to use next() and there are no more results, you'll get the special StopIteration exception.\n", "\n", "One special case to consider is when your sequence files have multiple records, but you only want the first one. In this situation the following code is very concise:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "SeqRecord(seq=Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGG...CGC'), id='Z78533.1', name='Z78533', description='C.irapeanum 5.8S rRNA gene and ITS1 and ITS2 DNA', dbxrefs=[])" ] }, "metadata": {}, "execution_count": 6 } ], "source": [ "from Bio import SeqIO\n", "\n", "next(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A word of warning here -- using the next() function like this will silently ignore any additional records in the file.\n", "If your files have _one and only one_ record, like some of the online examples later in this chapter, or a GenBank file for a single chromosome, then use the new Bio.SeqIO.read() function instead.\n", "This will check there are no extra unexpected records present.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.1.3 Getting a list of the records in a sequence file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous section we talked about the fact that Bio.SeqIO.parse() gives you a SeqRecord iterator, and that you get the records one by one. Very often you need to be able to access the records in any order. 
The Python list data type is perfect for this, and we can turn the record iterator into a list of SeqRecord objects using the built-in Python function list() like so:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Found 94 records\nThe last record\nZ78439.1\nSeq('CATTGTTGAGATCACATAATAATTGATCGAGTTAATCTGGAGGATCTGTTTACT...GCC')\n592\nThe first record\nZ78533.1\nSeq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGG...CGC')\n740\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "records = list(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\"))\n", "\n", "print(\"Found %i records\" % len(records))\n", "\n", "print(\"The last record\")\n", "last_record = records[-1] #using Python's list tricks\n", "print(last_record.id)\n", "print(repr(last_record.seq))\n", "print(len(last_record))\n", "\n", "print(\"The first record\")\n", "first_record = records[0] #remember, Python counts from zero\n", "print(first_record.id)\n", "print(repr(first_record.seq))\n", "print(len(first_record))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can of course still use a for loop with a list of SeqRecord objects. Using a list is much more flexible than an iterator (for example, you can determine the number of records from the length of the list), but does need more memory because it will hold all the records in memory at once." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.1.4 Extracting data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The SeqRecord object and its annotation structures are described more fully in Chapter 4. As an example of how annotations are stored, we'll look at the output from parsing the first record in the GenBank file ls_orchid.gbk." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "ID: Z78533.1\nName: Z78533\nDescription: C.irapeanum 5.8S rRNA gene and ITS1 and ITS2 DNA\nNumber of features: 5\n/molecule_type=DNA\n/topology=linear\n/data_file_division=PLN\n/date=30-NOV-2006\n/accessions=['Z78533']\n/sequence_version=1\n/gi=2765658\n/keywords=['5.8S ribosomal RNA', '5.8S rRNA gene', 'internal transcribed spacer', 'ITS1', 'ITS2']\n/source=Cypripedium irapeanum\n/organism=Cypripedium irapeanum\n/taxonomy=['Eukaryota', 'Viridiplantae', 'Streptophyta', 'Embryophyta', 'Tracheophyta', 'Spermatophyta', 'Magnoliophyta', 'Liliopsida', 'Asparagales', 'Orchidaceae', 'Cypripedioideae', 'Cypripedium']\n/references=[Reference(title='Phylogenetics of the slipper orchids (Cypripedioideae: Orchidaceae): nuclear rDNA ITS sequences', ...), Reference(title='Direct Submission', ...)]\nSeq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGG...CGC')\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "record_iterator = SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")\n", "first_record = next(record_iterator)\n", "print(first_record)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This gives a human readable summary of most of the annotation data for the SeqRecord. For this example we're going to use the .annotations attribute which is just a Python dictionary. The contents of this annotations dictionary were shown when we printed the record above. 
You can also print them out directly:\n", "\n", "print(first_record.annotations)\n", "\n", "Like any Python dictionary, you can easily get a list of the keys:\n", "\n", " print(first_record.annotations.keys())\n", "\n", "or values:\n", "\n", " print(first_record.annotations.values())\n", "\n", "In general, the annotation values are strings, or lists of strings. One special case is any references in the file get stored as reference objects.\n", "\n", "Suppose you wanted to extract a list of the species from the ls_orchid.gbk GenBank file. The information we want, _Cypripedium irapeanum_, is held in the annotations dictionary under 'source' and 'organism', which we can access like this:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Cypripedium irapeanum\nCypripedium irapeanum\n" ] } ], "source": [ "print(first_record.annotations[\"source\"])\n", "print(first_record.annotations[\"organism\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, 'organism' is used for the scientific name (in Latin, e.g. _Arabidopsis thaliana_),\n", "while 'source' will often be the common name (e.g. thale cress). In this example, as is often the case,\n", "the two fields are identical. \n", "\n", "Now let's go through all the records, building up a list of the species each orchid sequence is from:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['Cypripedium irapeanum', 'Cypripedium californicum', 'Cypripedium fasciculatum', 'Cypripedium margaritaceum', 'Cypripedium lichiangense', 'Cypripedium yatabeanum', 'Cypripedium guttatum', 'Cypripedium acaule', 'Cypripedium formosanum', 'Cypripedium himalaicum']\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "all_species = []\n", "for seq_record in SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\"):\n", " all_species.append(seq_record.annotations[\"organism\"])\n", "print(all_species[:10]) # we print only 10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another way of writing this code is to use a list comprehension:\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['Cypripedium irapeanum', 'Cypripedium californicum', 'Cypripedium fasciculatum', 'Cypripedium margaritaceum', 'Cypripedium lichiangense', 'Cypripedium yatabeanum', 'Cypripedium guttatum', 'Cypripedium acaule', 'Cypripedium formosanum', 'Cypripedium himalaicum']\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "all_species = [seq_record.annotations[\"organism\"] \n", "for seq_record in SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")\n", "]\n", "print(all_species[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great. That was pretty easy because GenBank files are annotated in a standardised way.\n", "\n", "Now, let's suppose you wanted to extract a list of the species from a FASTA file, rather than the GenBank file. The bad news is you will have to write some code to extract the data you want from the record's description line - if the information is in the file in the first place! 
Our example FASTA format file ls_orchid.fasta starts like this:\n", "\n", " >gi|2765658|emb|Z78533.1|CIZ78533 C.irapeanum 5.8S rRNA gene and ITS1 and ITS2 DNA\n", " CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGGAATAAACGATCGAGTG\n", " AATCCGGAGGACCGGTGTACTCAGCTCACCGGGGGCATTGCTCCCGTGGTGACCCTGATTTGTTGTTGGG\n", " ...\n", "\n", "You can check by hand, but for every record the species name is in the description line as the second word. This means if we break up each record's .description at the spaces, then the species is there as field number one (field zero is the record identifier). That means we can do this:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['C.irapeanum', 'C.californicum', 'C.fasciculatum', 'C.margaritaceum', 'C.lichiangense', 'C.yatabeanum', 'C.guttatum', 'C.acaule', 'C.formosanum', 'C.himalaicum']\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "all_species = []\n", "for seq_record in SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\"):\n", " all_species.append(seq_record.description.split()[1])\n", "print(all_species[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The concise alternative using list comprehensions would be:\n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['C.irapeanum', 'C.californicum', 'C.fasciculatum', 'C.margaritaceum', 'C.lichiangense', 'C.yatabeanum', 'C.guttatum', 'C.acaule', 'C.formosanum', 'C.himalaicum']\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "all_species = [\n", " seq_record.description.split()[1]\n", " for seq_record in SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\")]\n", "print(all_species[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, extracting information from the FASTA description line is not very nice.\n", "If you can get your sequences in a well annotated file format like GenBank or EMBL,\n", "then this sort of annotation information is much easier to deal with.\n", "\n" ] }, { "source": [ "### 5.1.5 Modifying data" ], "cell_type": "markdown", "metadata": {} }, { "source": [ "In the previous section, we demonstrated how to extract data from a SeqRecord. Another common task is to alter this data. The attributes of a SeqRecord can be modified directly, for example:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "'gi|2765658|emb|Z78533.1|CIZ78533'" ] }, "metadata": {}, "execution_count": 14 } ], "source": [ "from Bio import SeqIO\n", "\n", "record_iterator = SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\")\n", "first_record = next(record_iterator)\n", "first_record.id " ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "'new_id'" ] }, "metadata": {}, "execution_count": 15 } ], "source": [ "first_record.id = \"new_id\"\n", "first_record.id " ] }, { "source": [ "Note, if you want to change the way FASTA is output when written to a file (see Section 5.5), then you should modify both the id and description attributes. 
To ensure the correct behaviour, it is best to include the id plus a space at the start of the desired description:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ ">new_id desired new description\nCGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGGAATAAA\nCGATCGAGTGAATCCGGAGGACCGGTGTACTCAGCTCACCGGGGGCATTGCTCCCGTGGT\nGACCCTGATTTGTTGTTGGGCCGCCTCGGGAGCGTCCATGGCGGGT\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "record_iterator = SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\")\n", "first_record = next(record_iterator)\n", "first_record.id = \"new_id\"\n", "first_record.description = first_record.id + \" \" + \"desired new description\"\n", "print(first_record.format(\"fasta\")[:200])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.2 Parsing sequences from compressed files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous section, we looked at parsing sequence data from a file. Instead of using a filename, you can give Bio.SeqIO a handle (see Section 24.1), and in this section we'll use handles to parse sequences from compressed files.\n", "\n", "As you'll have seen above, we can use Bio.SeqIO.read() or Bio.SeqIO.parse() with a filename - for instance this quick example calculates the total length of the sequences in a multiple record GenBank file using a generator expression:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "67518\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "print(sum(len(r) for r in SeqIO.parse(\"data/ls_orchid.gbk\", \"gb\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we use a file handle instead, using the with statement\n", "to close the handle automatically:\n" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "67518\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "with open(\"data/ls_orchid.gbk\") as handle:\n", " print(sum(len(r) for r in SeqIO.parse(handle, \"gb\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or, the old-fashioned way where you manually close the handle:\n" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "67518\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "handle = open(\"data/ls_orchid.gbk\")\n", "print(sum(len(r) for r in SeqIO.parse(handle, \"gb\")))" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "handle.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, suppose we have a gzip compressed file instead? These are very\n", "commonly used on Linux. 
We can use Python's gzip module to open\n", "the compressed file for reading - which gives us a handle object:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "67518\n" ] } ], "source": [ "import gzip\n", "\n", "from Bio import SeqIO\n", "with gzip.open(\"data/ls_orchid.gbk.gz\", \"rt\") as handle:\n", " print(sum(len(r) for r in SeqIO.parse(handle, \"gb\")))" ] }, { "source": [ "Similarly if we had a bzip2 compressed file:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import bz2\n", "from Bio import SeqIO\n", "\n", "with bz2.open(\"data/ls_orchid.gbk.bz2\", \"rt\") as handle:\n", " print(sum(len(r) for r in SeqIO.parse(handle, \"gb\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is a gzip (GNU Zip) variant called BGZF (Blocked GNU Zip Format), which can be treated like an ordinary gzip file for reading, but has advantages for random access later which we'll talk about later in Section 5.4.4." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.3 Parsing sequences from the net" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous sections, we looked at parsing sequence data from a file\n", "(using a filename or handle), and from compressed files (using a handle).\n", "Here we'll use Bio.SeqIO with another type of handle, a network\n", "connection, to download and parse sequences from the internet.\n", "\n", "Note that just because you _can_ download sequence data and parse it into\n", "a SeqRecord object in one go doesn't mean this is a good idea.\n", "In general, you should probably download sequences _once_ and save them to\n", "a file for reuse." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.3.1 Parsing GenBank records from the net" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Section 9.6 talks about the Entrez EFetch interface in more detail, but for now let's just connect to NCBI and get a few Opuntia (prickly-pear) sequences from GenBank using their GI numbers.\n", "\n", "First of all, let's fetch just one record. If you don't care about the annotations and features downloading a FASTA file is a good choice as these are compact. Now remember, when you expect the handle to contain one and only one record, use the Bio.SeqIO.read() function:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "AF191665.1 with 0 features\n" ] } ], "source": [ "from Bio import Entrez\n", "from Bio import SeqIO\n", "\n", "Entrez.email = \"A.N.Other@example.com\"\n", "with Entrez.efetch(\n", " db=\"nucleotide\", rettype=\"fasta\", retmode=\"text\", id=\"6273291\"\n", ") as handle:\n", " seq_record = SeqIO.read(handle, \"fasta\")\n", "print(\"%s with %i features\" % (seq_record.id, len(seq_record.features)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The NCBI will also let you ask for the file in other formats, in particular as\n", "a GenBank file. 
Until Easter 2009, the Entrez EFetch API let you use ``genbank''\n", "as the return type, however the NCBI now insist on using the official\n", "return types of 'gb' (or 'gp' for proteins) as described on [EFetch for Sequence and other Molecular Biology Databases](http://www.ncbi.nlm.nih.gov/entrez/query/static/efetchseq_help.html \"EFetch for Sequence and other Molecular Biology Databases\"). As a result, in Biopython 1.50 onwards, we support “gb” as an alias for “genbank” in Bio.SeqIO." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "AF191665.1 with 3 features\n" ] } ], "source": [ "from Bio import Entrez\n", "from Bio import SeqIO\n", "\n", "Entrez.email = \"A.N.Other@example.com\"\n", "with Entrez.efetch(\n", " db=\"nucleotide\", rettype=\"gb\", retmode=\"text\", id=\"6273291\"\n", ") as handle:\n", " seq_record = SeqIO.read(handle, \"gb\") # using \"gb\" as an alias for \"genbank\"\n", "print(\"%s with %i features\" % (seq_record.id, len(seq_record.features)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice this time we have three features.\n", "\n", "Now let's fetch several records. This time the handle contains multiple records,\n", "so we must use the Bio.SeqIO.parse() function:\n" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "AF191665.1 Opuntia marenae rpl16 gene; chloroplast gene for c...\nSequence length 902, 3 features, from: chloroplast Grusonia marenae\nAF191664.1 Opuntia clavata rpl16 gene; chloroplast gene for c...\nSequence length 899, 3 features, from: chloroplast Grusonia clavata\nAF191663.1 Opuntia bradtiana rpl16 gene; chloroplast gene for...\nSequence length 899, 3 features, from: chloroplast Grusonia bradtiana\n" ] } ], "source": [ "from Bio import Entrez\n", "from Bio import SeqIO\n", "\n", "Entrez.email = \"A.N.Other@example.com\"\n", "with Entrez.efetch(\n", " db=\"nucleotide\", rettype=\"gb\", retmode=\"text\", id=\"6273291,6273290,6273289\"\n", ") as handle:\n", " for seq_record in SeqIO.parse(handle, \"gb\"):\n", " print(\"%s %s...\" % (seq_record.id, seq_record.description[:50]))\n", " print(\n", " \"Sequence length %i, %i features, from: %s\"\n", " % (\n", " len(seq_record),\n", " len(seq_record.features),\n", " seq_record.annotations[\"source\"],\n", " )\n", " )" ] }, { "source": [ "See Chapter 9 for more about the Bio.Entrez module, and make sure to read about the NCBI guidelines for using Entrez (Section 9.1)." ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.3.2 Parsing SwissProt sequences from the net" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's use a handle to download a SwissProt file from ExPASy, something covered in more depth in Chapter 10. 
As mentioned above, when you expect the handle to contain one and only one record, use the Bio.SeqIO.read() function:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "O23729\nCHS3_BROFI\nRecName: Full=Chalcone synthase 3; EC=2.3.1.74; AltName: Full=Naringenin-chalcone synthase 3;\nSeq('MAPAMEEIRQAQRAEGPAAVLAIGTSTPPNALYQADYPDYYFRITKSEHLTELK...GAE')\nLength 394\n['Acyltransferase', 'Flavonoid biosynthesis', 'Transferase']\n" ] } ], "source": [ "from Bio import ExPASy\n", "from Bio import SeqIO\n", "\n", "with ExPASy.get_sprot_raw(\"O23729\") as handle:\n", " seq_record = SeqIO.read(handle, \"swiss\")\n", "print(seq_record.id)\n", "print(seq_record.name)\n", "print(seq_record.description)\n", "print(repr(seq_record.seq))\n", "print(\"Length %i\" % len(seq_record))\n", "print(seq_record.annotations[\"keywords\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.4 Sequence files as dictionaries" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're now going to introduce three related functions in the Bio.SeqIO\n", "module which allow dictionary-like random access to a multi-sequence file.\n", "There is a trade off here between flexibility and memory usage. In summary:\n", "\n", "* **Bio.SeqIO.to_dict()** is the most flexible but also the most memory demanding option (see Section 5.4.1). This is basically a helper function to build a normal Python dictionary with each entry held as a SeqRecord object in memory, allowing you to modify the records.\n", "\n", "* **Bio.SeqIO.index()** is a useful middle ground, acting like a read only dictionary and parsing sequences into SeqRecord objects on demand (see Section 5.4.2).\n", "\n", "* **Bio.SeqIO.index_db()** also acts like a read only dictionary but stores the identifiers and file offsets in a file on disk (as an SQLite3 database), meaning it has very low memory requirements (see Section 5.4.3), but will be a little bit slower.\n", "\n", "See the discussion for a broad overview (Section 5.4.5)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4.1 Sequence files as Dictionaries -- In memory" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next thing that we'll do with our ubiquitous orchid files is to show how to index them and access them like a database using the Python dictionary data type (like a hash in Perl). This is very useful for moderately large files where you only need to access certain elements of the file, and makes for a nice quick 'n dirty database. For dealing with larger files where memory becomes a problem, see Section 5.4.2 below.\n", "\n", "You can use the function Bio.SeqIO.to_dict() to make a SeqRecord dictionary (in memory). By default this will use each record's identifier (i.e. the .id attribute) as the key. Let's try this using our GenBank file:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.to_dict(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is just one required argument for Bio.SeqIO.to_dict(), a list or\n", "generator giving SeqRecord objects. Here we have just used the output\n", "from the SeqIO.parse function. 
As the name suggests, this returns a\n", "Python dictionary.\n", "\n", "Since this variable orchid_dict is an ordinary Python dictionary, we can look at all of the keys we have available:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 31 } ], "source": [ "len(orchid_dict) " ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "['Z78533.1',\n", " 'Z78532.1',\n", " 'Z78531.1',\n", " 'Z78530.1',\n", " 'Z78529.1',\n", " 'Z78527.1',\n", " 'Z78526.1',\n", " 'Z78525.1',\n", " 'Z78524.1',\n", " 'Z78523.1']" ] }, "metadata": {}, "execution_count": 33 } ], "source": [ "list(orchid_dict.keys())[:10] #ten only" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Under Python 3 the dictionary methods like \".keys()\" and \".values()\" are iterators rather than lists.\n", "\n", "If you really want to, you can even look at all the records at once:\n" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[SeqRecord(seq=Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGATCACAT...TTT', IUPACAmbiguousDNA()), id='Z78459.1', name='Z78459', description='P.dayanum 5.8S rRNA gene and ITS1 and ITS2 DNA.', dbxrefs=[]),\n", " SeqRecord(seq=Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGATCGCAT...AGC', IUPACAmbiguousDNA()), id='Z78496.1', name='Z78496', description='P.armeniacum 5.8S rRNA gene and ITS1 and ITS2 DNA.', dbxrefs=[]),\n", " SeqRecord(seq=Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGACCGCAA...AGA', IUPACAmbiguousDNA()), id='Z78501.1', name='Z78501', description='P.caudatum 5.8S rRNA gene and ITS1 and ITS2 DNA.', dbxrefs=[]),\n", " SeqRecord(seq=Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGATCACAT...AGG', IUPACAmbiguousDNA()), id='Z78443.1', name='Z78443', description='P.lawrenceanum 5.8S rRNA gene and ITS1 and ITS2 DNA.', dbxrefs=[]),\n", " SeqRecord(seq=Seq('CGTAACAAGGTTTCCGTAGGTGGACCTTCGGGAGGATCATTTTTGAAGCCCCCA...CTA', IUPACAmbiguousDNA()), id='Z78514.1', name='Z78514', description='P.schlimii 5.8S rRNA gene and ITS1 and ITS2 DNA.', dbxrefs=[])]" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(orchid_dict.values())[:5] # Ok not all at once..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can access a single SeqRecord object via the keys and manipulate the object as normal:\n" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "P.supardii 5.8S rRNA gene and ITS1 and ITS2 DNA\n" ] } ], "source": [ "seq_record = orchid_dict[\"Z78475.1\"]\n", "print(seq_record.description)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGATCACAT...GGT')" ] }, "metadata": {}, "execution_count": 35 } ], "source": [ "seq_record.seq" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, it is very easy to create an in memory 'database' of our GenBank records. Next we'll try this for the FASTA file instead.\n", "\n", "Note that those of you with prior Python experience should all be able to construct a dictionary like this 'by hand'. 
However, typical dictionary construction methods will not deal with the case of repeated keys very nicely. Using Bio.SeqIO.to_dict() will explicitly check for duplicate keys, and raise an exception if any are found." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.4.1.1 Specifying the dictionary keys" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the same code as above, but for the FASTA file instead:\n" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['gi|2765658|emb|Z78533.1|CIZ78533', 'gi|2765657|emb|Z78532.1|CCZ78532', 'gi|2765656|emb|Z78531.1|CFZ78531', 'gi|2765655|emb|Z78530.1|CMZ78530', 'gi|2765654|emb|Z78529.1|CLZ78529', 'gi|2765652|emb|Z78527.1|CYZ78527', 'gi|2765651|emb|Z78526.1|CGZ78526', 'gi|2765650|emb|Z78525.1|CAZ78525', 'gi|2765649|emb|Z78524.1|CFZ78524', 'gi|2765648|emb|Z78523.1|CHZ78523']\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.to_dict(SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\"))\n", "print(list(orchid_dict.keys())[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should recognise these strings from when we parsed the FASTA file earlier in Section 2.4.1. Suppose you would rather have something else as the keys - like the accession numbers. This brings us nicely to SeqIO.to_dict()'s optional argument key_function, which lets you define what to use as the dictionary key for your records.\n", "\n", "First you must write your own function to return the key you want (as a string) when given a SeqRecord object. In general, the details of this function will depend on the sort of input records you are dealing with. But for our orchids, we can just split up the record's identifier using the 'pipe' character (the vertical line) and return the fourth entry (field three):" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def get_accession(record):\n", " \"\"\"Given a SeqRecord, return the accession number as a string.\n", " \n", " e.g. 
\"gi|2765613|emb|Z78488.1|PTZ78488\" -> \"Z78488.1\"\n", " \"\"\"\n", " parts = record.id.split(\"|\")\n", " assert len(parts) == 5 and parts[0] == \"gi\" and parts[2] == \"emb\"\n", " return parts[3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we can give this function to the SeqIO.to_dict() function to use in building the dictionary:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "dict_keys(['Z78533.1', 'Z78532.1', 'Z78531.1', 'Z78530.1', 'Z78529.1', 'Z78527.1', 'Z78526.1', 'Z78525.1', 'Z78524.1', 'Z78523.1', 'Z78522.1', 'Z78521.1', 'Z78520.1', 'Z78519.1', 'Z78518.1', 'Z78517.1', 'Z78516.1', 'Z78515.1', 'Z78514.1', 'Z78513.1', 'Z78512.1', 'Z78511.1', 'Z78510.1', 'Z78509.1', 'Z78508.1', 'Z78507.1', 'Z78506.1', 'Z78505.1', 'Z78504.1', 'Z78503.1', 'Z78502.1', 'Z78501.1', 'Z78500.1', 'Z78499.1', 'Z78498.1', 'Z78497.1', 'Z78496.1', 'Z78495.1', 'Z78494.1', 'Z78493.1', 'Z78492.1', 'Z78491.1', 'Z78490.1', 'Z78489.1', 'Z78488.1', 'Z78487.1', 'Z78486.1', 'Z78485.1', 'Z78484.1', 'Z78483.1', 'Z78482.1', 'Z78481.1', 'Z78480.1', 'Z78479.1', 'Z78478.1', 'Z78477.1', 'Z78476.1', 'Z78475.1', 'Z78474.1', 'Z78473.1', 'Z78472.1', 'Z78471.1', 'Z78470.1', 'Z78469.1', 'Z78468.1', 'Z78467.1', 'Z78466.1', 'Z78465.1', 'Z78464.1', 'Z78463.1', 'Z78462.1', 'Z78461.1', 'Z78460.1', 'Z78459.1', 'Z78458.1', 'Z78457.1', 'Z78456.1', 'Z78455.1', 'Z78454.1', 'Z78453.1', 'Z78452.1', 'Z78451.1', 'Z78450.1', 'Z78449.1', 'Z78448.1', 'Z78447.1', 'Z78446.1', 'Z78445.1', 'Z78444.1', 'Z78443.1', 'Z78442.1', 'Z78441.1', 'Z78440.1', 'Z78439.1'])\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.to_dict(SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\"), key_function=get_accession)\n", "print(orchid_dict.keys())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, as desired, the new dictionary keys:" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['gi|2765658|emb|Z78533.1|CIZ78533', 'gi|2765657|emb|Z78532.1|CCZ78532', 'gi|2765656|emb|Z78531.1|CFZ78531', 'gi|2765655|emb|Z78530.1|CMZ78530', 'gi|2765654|emb|Z78529.1|CLZ78529', 'gi|2765652|emb|Z78527.1|CYZ78527', 'gi|2765651|emb|Z78526.1|CGZ78526', 'gi|2765650|emb|Z78525.1|CAZ78525', 'gi|2765649|emb|Z78524.1|CFZ78524', 'gi|2765648|emb|Z78523.1|CHZ78523']\n" ] } ], "source": [ "print(list(orchid_dict.keys())[:10])" ] }, { "source": [ "Not to complicated, I hope!" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.4.1.2 Indexing a dictionary using the SEGUID checksum" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To give another example of working with dictionaries of SeqRecord objects, we'll use the SEGUID checksum function. This is a relatively recent checksum, and collisions should be very rare (i.e. 
two different sequences with the same checksum), an improvement on the CRC64 checksum.\n", "\n", "Once again, working with the orchids GenBank file:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Z78533.1 JUEoWn6DPhgZ9nAyowsgtoD9TTo\nZ78532.1 MN/s0q9zDoCVEEc+k/IFwCNF2pY\nZ78531.1 xN45pACrTnmBH8a8Y9cWSgoLrwE\nZ78530.1 yMhI5UUQfFOPcoJXb9B19XUyYlY\nZ78529.1 s1Pnjq9zoSHoI/CG9jQr4GyeMZY\n" ] } ], "source": [ "from Bio import SeqIO\n", "from Bio.SeqUtils.CheckSum import seguid\n", "\n", "for i, record in enumerate(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")):\n", " print(record.id, seguid(record.seq))\n", " if i == 4: # OK, 5 is enough!\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, recall the Bio.SeqIO.to_dict() function's key_function argument expects a function which turns a SeqRecord into a string. We can't use the seguid() function directly because it expects to be given a Seq object (or a string). However, we can use Python's lambda feature to create a 'one off' function to give to Bio.SeqIO.to_dict() instead:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Z78532.1\n" ] } ], "source": [ "from Bio import SeqIO\n", "from Bio.SeqUtils.CheckSum import seguid\n", "\n", "seguid_dict = SeqIO.to_dict(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\"),\n", " lambda rec : seguid(rec.seq))\n", "record = seguid_dict[\"MN/s0q9zDoCVEEc+k/IFwCNF2pY\"]\n", "print(record.id)" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "C.californicum 5.8S rRNA gene and ITS1 and ITS2 DNA\n" ] } ], "source": [ "print(record.description)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That should have retrieved the record Z78532.1, the second entry in the file." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4.2 Sequence files as Dictionaries - Indexed files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the previous couple of examples tried to illustrate, using\n", "Bio.SeqIO.to_dict() is very flexible. However, because it holds\n", "everything in memory, the size of file you can work with is limited by your\n", "computer's RAM. In general, this will only work on small to medium files.\n", "\n", "For larger files you should consider\n", "Bio.SeqIO.index(), which works a little differently. Although\n", "it still returns a dictionary like object, this does _not_ keep\n", "_everything_ in memory. 
Instead, it just records where each record\n", "is within the file -- when you ask for a particular record, it then parses\n", "it on demand.\n", "\n", "As an example, let's use the same GenBank file as before:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 43 } ], "source": [ " from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.index(\"data/ls_orchid.gbk\", \"genbank\")\n", "len(orchid_dict)" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['Z78533.1', 'Z78532.1', 'Z78531.1', 'Z78530.1', 'Z78529.1', 'Z78527.1', 'Z78526.1', 'Z78525.1', 'Z78524.1', 'Z78523.1', 'Z78522.1', 'Z78521.1', 'Z78520.1', 'Z78519.1', 'Z78518.1', 'Z78517.1', 'Z78516.1', 'Z78515.1', 'Z78514.1', 'Z78513.1', 'Z78512.1', 'Z78511.1', 'Z78510.1', 'Z78509.1', 'Z78508.1', 'Z78507.1', 'Z78506.1', 'Z78505.1', 'Z78504.1', 'Z78503.1', 'Z78502.1', 'Z78501.1', 'Z78500.1', 'Z78499.1', 'Z78498.1', 'Z78497.1', 'Z78496.1', 'Z78495.1', 'Z78494.1', 'Z78493.1', 'Z78492.1', 'Z78491.1', 'Z78490.1', 'Z78489.1', 'Z78488.1', 'Z78487.1', 'Z78486.1', 'Z78485.1', 'Z78484.1', 'Z78483.1', 'Z78482.1', 'Z78481.1', 'Z78480.1', 'Z78479.1', 'Z78478.1', 'Z78477.1', 'Z78476.1', 'Z78475.1', 'Z78474.1', 'Z78473.1', 'Z78472.1', 'Z78471.1', 'Z78470.1', 'Z78469.1', 'Z78468.1', 'Z78467.1', 'Z78466.1', 'Z78465.1', 'Z78464.1', 'Z78463.1', 'Z78462.1', 'Z78461.1', 'Z78460.1', 'Z78459.1', 'Z78458.1', 'Z78457.1', 'Z78456.1', 'Z78455.1', 'Z78454.1', 'Z78453.1', 'Z78452.1', 'Z78451.1', 'Z78450.1', 'Z78449.1', 'Z78448.1', 'Z78447.1', 'Z78446.1', 'Z78445.1', 'Z78444.1', 'Z78443.1', 'Z78442.1', 'Z78441.1', 'Z78440.1', 'Z78439.1']\n" ] } ], "source": [ "print(list(orchid_dict.keys()))" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "P.supardii 5.8S rRNA gene and ITS1 and ITS2 DNA\n" ] } ], "source": [ "seq_record = orchid_dict[\"Z78475.1\"]\n", "print(seq_record.description)" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGATCACAT...GGT')" ] }, "metadata": {}, "execution_count": 46 } ], "source": [ "seq_record.seq" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "orchid_dict.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that Bio.SeqIO.index() won’t take a handle, but only a filename. There are good reasons for this, but it is a little technical. The second argument is the file format (a lower case string as used in the other Bio.SeqIO functions). You can use many other simple file formats, including FASTA and FASTQ files (see the example in Section 20.1.11). However, alignment formats like PHYLIP or Clustal are not supported. 
Finally as an optional argument you can supply a key function.\n", "\n", "Here is the same example using the FASTA file - all we change is the filename and the format name:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 48 } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.index(\"data/ls_orchid.fasta\", \"fasta\")\n", "len(orchid_dict)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['gi|2765658|emb|Z78533.1|CIZ78533', 'gi|2765657|emb|Z78532.1|CCZ78532', 'gi|2765656|emb|Z78531.1|CFZ78531', 'gi|2765655|emb|Z78530.1|CMZ78530', 'gi|2765654|emb|Z78529.1|CLZ78529', 'gi|2765652|emb|Z78527.1|CYZ78527', 'gi|2765651|emb|Z78526.1|CGZ78526', 'gi|2765650|emb|Z78525.1|CAZ78525', 'gi|2765649|emb|Z78524.1|CFZ78524', 'gi|2765648|emb|Z78523.1|CHZ78523']\n" ] } ], "source": [ "print(list(orchid_dict.keys())[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4.2.1 Specifying the dictionary keys" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose you want to use the same keys as before? Much like with the Bio.SeqIO.to_dict() example in Section 5.4.1.1, you’ll need to write a tiny function to map from the FASTA identifier (as a string) to the key you want:" ] }, { "cell_type": "code", "execution_count": 57, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def get_acc(identifier):\n", " \"\"\"\"Given a SeqRecord identifier string, return the accession number as a string.\n", " \n", " e.g. \"gi|2765613|emb|Z78488.1|PTZ78488\" -> \"Z78488.1\"\n", " \"\"\"\n", " parts = identifier.split(\"|\")\n", " assert len(parts) == 5 and parts[0] == \"gi\" and parts[2] == \"emb\"\n", " return parts[3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we can give this function to the Bio.SeqIO.index()\n", "function to use in building the dictionary:" ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['Z78533.1', 'Z78532.1', 'Z78531.1', 'Z78530.1', 'Z78529.1', 'Z78527.1', 'Z78526.1', 'Z78525.1', 'Z78524.1', 'Z78523.1', 'Z78522.1', 'Z78521.1', 'Z78520.1', 'Z78519.1', 'Z78518.1', 'Z78517.1', 'Z78516.1', 'Z78515.1', 'Z78514.1', 'Z78513.1', 'Z78512.1', 'Z78511.1', 'Z78510.1', 'Z78509.1', 'Z78508.1', 'Z78507.1', 'Z78506.1', 'Z78505.1', 'Z78504.1', 'Z78503.1', 'Z78502.1', 'Z78501.1', 'Z78500.1', 'Z78499.1', 'Z78498.1', 'Z78497.1', 'Z78496.1', 'Z78495.1', 'Z78494.1', 'Z78493.1', 'Z78492.1', 'Z78491.1', 'Z78490.1', 'Z78489.1', 'Z78488.1', 'Z78487.1', 'Z78486.1', 'Z78485.1', 'Z78484.1', 'Z78483.1', 'Z78482.1', 'Z78481.1', 'Z78480.1', 'Z78479.1', 'Z78478.1', 'Z78477.1', 'Z78476.1', 'Z78475.1', 'Z78474.1', 'Z78473.1', 'Z78472.1', 'Z78471.1', 'Z78470.1', 'Z78469.1', 'Z78468.1', 'Z78467.1', 'Z78466.1', 'Z78465.1', 'Z78464.1', 'Z78463.1', 'Z78462.1', 'Z78461.1', 'Z78460.1', 'Z78459.1', 'Z78458.1', 'Z78457.1', 'Z78456.1', 'Z78455.1', 'Z78454.1', 'Z78453.1', 'Z78452.1', 'Z78451.1', 'Z78450.1', 'Z78449.1', 'Z78448.1', 'Z78447.1', 'Z78446.1', 'Z78445.1', 'Z78444.1', 'Z78443.1', 'Z78442.1', 'Z78441.1', 'Z78440.1', 'Z78439.1']\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.index(\"data/ls_orchid.fasta\", \"fasta\", key_function=get_acc)\n", "print(list(orchid_dict.keys()))" ] }, 
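{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check (a minimal usage sketch), we can now look up a record via one of these accession-style keys, just as we did with the in-memory dictionary earlier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Fetch a single record on demand using its accession-based key\n", "seq_record = orchid_dict[\"Z78475.1\"]\n", "print(seq_record.description)" ] },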
{ "source": [ "Easy when you know how?" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.4.2.2 Getting the raw data for a record" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dictionary-like object from Bio.SeqIO.index() gives you each\n", "entry as a SeqRecord object. However, it is sometimes useful to\n", "be able to get the original raw data straight from the file. For this\n", "use the get_raw() method which takes a\n", "single argument (the record identifier) and returns a string (extracted\n", "from the file without modification).\n", "\n", "A motivating example is extracting a subset of a records from a large\n", "file where either Bio.SeqIO.write() does not (yet) support the\n", "output file format (e.g. the plain text SwissProt file format) or\n", "where you need to preserve the text exactly (e.g. GenBank or EMBL\n", "output from Biopython does not yet preserve every last bit of\n", "annotation).\n", "\n", "Let's suppose you have download the whole of UniProt in the plain\n", "text SwissPort file format from their FTP site\n", "(ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.dat.gz - Careful big download)\n", "and uncompressed it as the file uniprot_sprot.dat, and you\n", "want to extract just a few records from it:\n" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "--2020-10-22 21:04:59-- ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.dat.gz\n", " => ‘data/uniprot_sprot.dat.gz’\n", "Resolving ftp.uniprot.org (ftp.uniprot.org)... 141.161.180.197\n", "Connecting to ftp.uniprot.org (ftp.uniprot.org)|141.161.180.197|:21... connected.\n", "Logging in as anonymous ... Logged in!\n", "==> SYST ... done. ==> PWD ... done.\n", "==> TYPE I ... done. ==> CWD (1) /pub/databases/uniprot/current_release/knowledgebase/complete ... done.\n", "==> SIZE uniprot_sprot.dat.gz ... 594638819\n", "==> PASV ... done. ==> RETR uniprot_sprot.dat.gz ... done.\n", "Length: 594638819 (567M) (unauthoritative)\n", "\n", "uniprot_sprot.dat.g 100%[===================>] 567.09M 270KB/s in 27m 35s \n", "\n", "2020-10-22 21:32:36 (351 KB/s) - ‘data/uniprot_sprot.dat.gz’ saved [594638819]\n", "\n" ] } ], "source": [ "#Use this to download the file\n", "!wget -c ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.dat.gz -O data/uniprot_sprot.dat.gz\n", "!gzip -d data/uniprot_sprot.dat.gz" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from Bio import SeqIO\n", "\n", "uniprot = SeqIO.index(\"data/uniprot_sprot.dat\", \"swiss\")\n", "with open(\"selected.dat\", \"wb\") as out_handle:\n", " for acc in [\"P33487\", \"P19801\", \"P13689\", \"Q8JZQ5\", \"Q9TRC7\"]:\n", " out_handle.write(uniprot.get_raw(acc))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note with Python 3 onwards, we have to open the file for writing in binary mode because the get_raw() method returns bytes strings.\n", "\n", "There is a longer example in Section 20.1.5 using the SeqIO.index() function to sort a large sequence file (without loading everything into memory at once)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4.3 Sequence files as Dictionaries - Database indexed files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Biopython 1.57 introduced an alternative, Bio.SeqIO.index_db(), which\n", "can work on even extremely large files since it stores the record information\n", "as a file on disk (using an SQLite3 database) rather than in memory. Also,\n", "you can index multiple files together (providing all the record identifiers\n", "are unique).\n", "\n", "The Bio.SeqIO.index() function takes three required arguments:\n", "\n", "* Index filename, we suggest using something ending .idx. This index file is actually an SQLite3 database.\n", "\n", "* List of sequence filenames to index (or a single filename)\n", "\n", "* File format (lower case string as used in the rest of the SeqIO module).\n", "\n", "As an example, consider the GenBank flat file releases from the NCBI FTP site,\n", "ftp://ftp.ncbi.nih.gov/genbank/, which are gzip compressed GenBank files.\n", "\n", "As of GenBank release 210, there are 38 files making up the viral sequences, gbvrl1.seq, ..., gbvrl16.seq, talking about 8GB on disk once decompressed, and containing in total early two million records.\n", "\n", "If you were interested in the viruses, you could download all the virus files from the command line very easily with the rsync command, and then decompress them with gunzip:" ] }, { "source": [ "# For illustration only, see reduced example below\n", "$ rsync -avP (\"ftp.ncbi.nih.gov::genbank/gbvrl*.seq.gz\") \n", "$ gunzip gbvrl*.seq.gz" ], "cell_type": "code", "metadata": {}, "execution_count": null, "outputs": [] }, { "source": [ "Unless you care about viruses, that’s a lot of data to download just for this example - so let’s download _just_ the first four chunks (about 25MB each compressed), and decompress them (taking in all about 1GB of space):" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Reduced example, download only the first four chunks\n", "$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl1.seq.gz\n", "$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl2.seq.gz\n", "$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl3.seq.gz\n", "$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl4.seq.gz\n", "$ gunzip gbvrl*.seq.gz" ] }, { "source": [ "Now, in Python, index these GenBank files as follows:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": 61, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#this will download the files - Currently there are more than 16, but we will do only 4\n", "import os\n", "for i in range(1, 5):\n", " os.system('wget ftp://ftp.ncbi.nih.gov/genbank/gbvrl%i.seq.gz -O data/gbvrl%i.seq.gz' % (i, i))\n", " os.system('gzip -d data/gbvrl%i.seq.gz' % i)" ] }, { "cell_type": "code", "execution_count": 62, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "453220 sequences indexed\n" ] } ], "source": [ "files = [\"data/gbvrl%i.seq\" % i for i in range(1, 5)]\n", "gb_vrl = SeqIO.index_db(\"data/gbvrl.idx\", files, \"genbank\")\n", "print(\"%i sequences indexed\" % len(gb_vrl))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Indexing the full set of virus GenBank files took about ten minutes on my machine, just the first four files took about a minute or so.\n", "\n", "However, once done, repeating this will reload the index file 
gbvrl.idx in a fraction of a second.\n", "\n", "You can use the index as a read only Python dictionary - without having to worry about which file the sequence comes from, e.g.\n" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Equine encephalosis virus NS3 gene, complete cds, isolate: Kimron1\n" ] } ], "source": [ "print(gb_vrl[\"AB811634.1\"].description)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.4.3.1 Getting the raw data for a record" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Just as with the Bio.SeqIO.index() function discussed above in Section 5.4.2.2, the dictionary like object also lets you get at the raw bytes of each record:" ] }, { "cell_type": "code", "execution_count": 65, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "b'LOCUS AB811634 723 bp RNA linear VRL 17-JUN-2015\\nDEFINITION Equine encephalosis virus NS3 gene, complete cds, isolate: Kimron1.\\nACCESSION AB811634\\nVERSION AB811634.1\\nKEYWORDS .\\nSOURCE Equine encephalosis virus\\n ORGANISM Equine encephalosis virus\\n Viruses; Riboviria; Orthornavirae; Duplornaviricota;\\n Resentoviricetes; Reovirales; Reoviridae; Sedoreovirinae;\\n Orbivirus.\\nREFERENCE 1\\n AUTHORS Westcott,D., Mildenberg,Z., Bellaiche,M., McGowan,S.L.,\\n Grierson,S.S., Choudhury,B. and Steinbach,F.\\n TITLE Evidence for the circulation of equine encephalosis virus in Israel\\n since 2001\\n JOURNAL PLoS ONE 8 (8), E70532 (2013)\\n PUBMED 23950952\\n REMARK DOI:10.1371/journal.pone.0070532\\n Erratum:[PLoS One. 2013;8(9).\\n doi:10.1371/annotation/4875ab92-466a-4f5f-b9c7-bc0e168a8f9b.\\n Wescott, David G [corrected to Westcott, David G]]\\n Publication Status: Online-Only\\nREFERENCE 2 (bases 1 to 723)\\n AUTHORS Westcott,D., Mildenberg,Z., Bellaiche,M., Mcgowan,S.L.,\\n Grierson,S., Choudhury,B. 
and Steinbach,F.\\n TITLE Direct Submission\\n JOURNAL Submitted (29-MAR-2013) Contact:Bhudipa Choudhury Animal and Plant\\n Health Agency (APHA), Virology; Woodham Lane, Addlestone, Surrey\\n KT15-3NB, U.K\\nFEATURES Location/Qualifiers\\n source 1..723\\n /organism=\"Equine encephalosis virus\"\\n /mol_type=\"genomic RNA\"\\n /isolate=\"Kimron1\"\\n /host=\"Equus caballus\"\\n /db_xref=\"taxon:201490\"\\n /country=\"Israel\"\\n gene 1..723\\n /gene=\"NS3\"\\n CDS 1..723\\n /gene=\"NS3\"\\n /codon_start=1\\n /product=\"NS3 protein\"\\n /protein_id=\"BAN78512.1\"\\n /translation=\"MYPVLSRAVVGNPEERALMVYPPTAPMPPVTTWDNLKIDSVDGM\\n KDLALNILDKNITSTTGADECDKREKAMFASVAESAADSPMVRTIKIQIYNRVLDDME\\n REKRKCEKRRAMLRFVSNAFITLMLMSTFLMAMMQTPPITQYVEKACNGTGGTETNDP\\n CGLMRWSGAVQFLMLIMSGFLYMCKRWITTLSTNADRISKNILKRRAYIDAARSNPNA\\n TVLTVTGGNTGDLPYQFGDTAH\"\\nORIGIN \\n 1 atgtatccag tactttcgag agccgttgtg ggcaatccag aggaacgtgc gttaatggtg\\n 61 tacccgccga cagcgccgat gccgcctgtc acgacttggg ataaccttaa aatcgacagt\\n 121 gttgatggaa tgaaggatct agcattaaat atattggata agaatataac tagcacgacg\\n 181 ggagcggatg agtgcgataa acgtgagaag gccatgttcg cctcggtggc ggaatcagca\\n 241 gcagatagcc cgatggtgcg tacaattaaa atccagatat ataacagagt attggatgat\\n 301 atggagagag agaagcggaa gtgtgagaaa agacgtgcaa tgttgagatt tgtctcaaac\\n 361 gcctttataa cgttaatgct gatgtccaca ttcttgatgg ctatgatgca gaccccgccg\\n 421 ataacgcagt atgtagagaa agcgtgtaat gggacgggag ggacggagac gaacgacccg\\n 481 tgcggtctga tgagatggag tggggctgtc caatttttga tgctgataat gagcggcttt\\n 541 ttgtatatgt gcaaacgttg gatcactacg ctttcaacga acgcagatag gattagtaaa\\n 601 aacattttga aacggcgagc gtacatcgat gcagccagat caaacccaaa tgcgacggtt\\n 661 ctaactgtga ctggaggcaa cacgggggat ctaccgtacc agttcgggga tacggcccat\\n 721 tag\\n//\\n'\n" ] } ], "source": [ "print(gb_vrl.get_raw(\"AB811634.1\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4.4 Indexing compressed files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Very often when you are indexing a sequence file it can be quite large - so\n", "you may want to compress it on disk. Unfortunately efficient random access\n", "is difficult with the more common file formats like gzip and bzip2. In this\n", "setting, BGZF (Blocked GNU Zip Format) can be very helpful. This is a variant\n", "of gzip (and can be decompressed using standard gzip tools) popularised by\n", "the BAM file format, [samtools](http://samtools.sourceforge.net/ \"samtools\"), and [tabix](http://samtools.sourceforge.net/tabix.shtml \"tabix\").\n", "\n", "To create a BGZF compressed file you can use the command line tool bgzip\n", "which comes with samtools. In our examples we use a filename extension\n", "*.bgz, so they can be distinguished from normal gzipped files (named\n", "*.gz). You can also use the Bio.bgzf module to read and write\n", "BGZF files from within Python.\n", "\n", "The Bio.SeqIO.index() and Bio.SeqIO.index_db() can both be\n", "used with BGZF compressed files. 
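\n", "\n", "As a small sketch (the file name matches the one used below), you could also create the BGZF file directly from Python with the Bio.bgzf module mentioned above, which is handy if the bgzip command line tool is not installed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from Bio import bgzf\n", "\n", "# Copy the plain GenBank file into a BGZF-compressed copy (same data, block compressed)\n", "handle = bgzf.BgzfWriter(\"data/ls_orchid.gbk.bgz\", \"wb\")\n", "with open(\"data/ls_orchid.gbk\", \"rb\") as plain:\n", "    handle.write(plain.read())\n", "# Closing the writer flushes the last block and appends the special BGZF end-of-file marker\n", "handle.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This should give the same kind of ls_orchid.gbk.bgz file as the bgzip command shown below.\n", "\n", "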
For example, if you started with an\n", "uncompressed GenBank file:" ] }, { "cell_type": "code", "execution_count": 67, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 67 } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.index(\"data/ls_orchid.gbk\", \"genbank\")\n", "len(orchid_dict)" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": [ "orchid_dict.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You could compress this (while keeping the original file) at the command\n", "line using the following command:\n", "\n", " $ bgzip -c ls_orchid.gbk > ls_orchid.gbk.bgz\n", "\n", "You can use the compressed file in exactly the same way:" ] }, { "cell_type": "code", "execution_count": 69, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 69 } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.index(\"data/ls_orchid.gbk.bgz\", \"genbank\")\n", "len(orchid_dict)" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [], "source": [ "orchid_dict.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "or" ] }, { "cell_type": "code", "execution_count": 71, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 71 } ], "source": [ "from Bio import SeqIO\n", "\n", "orchid_dict = SeqIO.index_db(\"data/ls_orchid.gbk.bgz.idx\", \"data/ls_orchid.gbk.bgz\", \"genbank\")\n", "len(orchid_dict)" ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [], "source": [ "orchid_dict.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The SeqIO indexing automatically detects the BGZF compression. Note\n", "that you can't use the same index file for the uncompressed and compressed files." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4.5 Discussion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, which of these methods should you use and why? It depends on what you are trying to do (and how much data you are dealing with). However, in general picking Bio.SeqIO.index() is a good starting point. If you are dealing with millions of records, multiple files, or repeated analyses, then look at Bio.SeqIO.index_db().\n", "\n", "Reasons to choose Bio.SeqIO.to_dict() over either Bio.SeqIO.index() or Bio.SeqIO.index_db() boil down to a need for flexibility despite its high memory needs. The advantage of storing the SeqRecord objects in memory is they can be changed, added to, or removed at will. In addition to the downside of high memory consumption, indexing can also take longer because all the records must be fully parsed.\n", "\n", "Both Bio.SeqIO.index() and Bio.SeqIO.index_db() only parse records on demand. 
When indexing, they scan the file once looking for the start of each record and do as little work as possible to extract the identifier.\n", "\n", "Reasons to choose Bio.SeqIO.index() over Bio.SeqIO.index_db() include:\n", "\n", "* Faster to build the index (more noticeable in simple file formats)\n", "\n", "* Slightly faster access as SeqRecord objects (but the difference is only\n", "really noticeable for simple to parse file formats).\n", "\n", "* Can use any immutable Python object as the dictionary keys (e.g. a\n", "tuple of strings, or a frozen set) not just strings.\n", "\n", "* Don't need to worry about the index database being out of date if the\n", "sequence file being indexed has changed.\n", "\n", "Reasons to choose Bio.SeqIO.index_db() over Bio.SeqIO.index()\n", "include:\n", "\n", "* Not memory limited - this is already important with files from second\n", "generation sequencing where 10s of millions of sequences are common, and\n", "using Bio.SeqIO.index() can require more than 4GB of RAM and therefore\n", "a 64bit version of Python.\n", "\n", "* Because the index is kept on disk, it can be reused. Although building\n", "the index database file takes longer, if you have a script which will be\n", "rerun on the same datafiles in future, this could save time in the long run.\n", "\n", "* Indexing multiple files together\n", "\n", "* The get_raw() method can be much faster, since for most file\n", "formats the length of each record is stored as well as its offset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.5 Writing sequence files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've talked about using Bio.SeqIO.parse() for sequence input (reading files), and now we'll look at Bio.SeqIO.write() which is for sequence output (writing files). This is a function taking three arguments: some SeqRecord objects, a handle or filename to write to, and a sequence format.\n", "\n", "Here is an example, where we start by creating a few SeqRecord objects the hard way (by hand, rather than by loading them from a file):" ] }, { "cell_type": "code", "execution_count": 74, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from Bio.Seq import Seq\n", "from Bio.SeqRecord import SeqRecord\n", "\n", "rec1 = SeqRecord(\n", " Seq(\n", " \"MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD\" \\\n", " +\"GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK\" \\\n", " +\"NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM\" \\\n", " +\"SSAC\",\n", " ),\n", " id=\"gi|14150838|gb|AAK54648.1|AF376133_1\",\n", " description=\"chalcone synthase [Cucumis sativus]\")\n", "\n", "rec2 = SeqRecord(\n", " Seq(\n", " \"YPDYYFRITNREHKAELKEKFQRMCDKSMIKKRYMYLTEEILKENPSMCEYMAPSLDARQ\" \\\n", " +\"DMVVVEIPKLGKEAAVKAIKEWGQ\",\n", " ),\n", " id=\"gi|13919613|gb|AAK33142.1|\",\n", " description=\"chalcone synthase [Fragaria vesca subsp. 
bracteata]\")\n", "\n", "rec3 = SeqRecord(\n", " Seq(\n", " \"MVTVEEFRRAQCAEGPATVMAIGTATPSNCVDQSTYPDYYFRITNSEHKVELKEKFKRMC\" \\\n", " +\"EKSMIKKRYMHLTEEILKENPNICAYMAPSLDARQDIVVVEVPKLGKEAAQKAIKEWGQP\" \\\n", " +\"KSKITHLVFCTTSGVDMPGCDYQLTKLLGLRPSVKRFMMYQQGCFAGGTVLRMAKDLAEN\" \\\n", " +\"NKGARVLVVCSEITAVTFRGPNDTHLDSLVGQALFGDGAAAVIIGSDPIPEVERPLFELV\" \\\n", " +\"SAAQTLLPDSEGAIDGHLREVGLTFHLLKDVPGLISKNIEKSLVEAFQPLGISDWNSLFW\" \\\n", " +\"IAHPGGPAILDQVELKLGLKQEKLKATRKVLSNYGNMSSACVLFILDEMRKASAKEGLGT\" \\\n", " +\"TGEGLEWGVLFGFGPGLTVETVVLHSVAT\",\n", " ),\n", " id=\"gi|13925890|gb|AAK49457.1|\",\n", " description=\"chalcone synthase [Nicotiana tabacum]\")\n", " \n", "my_records = [rec1, rec2, rec3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have a list of SeqRecord objects, we'll write them to a FASTA format file:\n" ] }, { "cell_type": "code", "execution_count": 75, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "3" ] }, "metadata": {}, "execution_count": 75 } ], "source": [ "SeqIO.write(my_records, \"data/my_example.faa\", \"fasta\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And if you open this file in your favourite text editor it should look like this:\n", "\n", " >gi|14150838|gb|AAK54648.1|AF376133_1 chalcone synthase [Cucumis sativus]\n", " MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD\n", " GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK\n", " NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM\n", " SSAC\n", " >gi|13919613|gb|AAK33142.1| chalcone synthase [Fragaria vesca subsp. bracteata]\n", " YPDYYFRITNREHKAELKEKFQRMCDKSMIKKRYMYLTEEILKENPSMCEYMAPSLDARQ\n", " DMVVVEIPKLGKEAAVKAIKEWGQ\n", " >gi|13925890|gb|AAK49457.1| chalcone synthase [Nicotiana tabacum]\n", " MVTVEEFRRAQCAEGPATVMAIGTATPSNCVDQSTYPDYYFRITNSEHKVELKEKFKRMC\n", " EKSMIKKRYMHLTEEILKENPNICAYMAPSLDARQDIVVVEVPKLGKEAAQKAIKEWGQP\n", " KSKITHLVFCTTSGVDMPGCDYQLTKLLGLRPSVKRFMMYQQGCFAGGTVLRMAKDLAEN\n", " NKGARVLVVCSEITAVTFRGPNDTHLDSLVGQALFGDGAAAVIIGSDPIPEVERPLFELV\n", " SAAQTLLPDSEGAIDGHLREVGLTFHLLKDVPGLISKNIEKSLVEAFQPLGISDWNSLFW\n", " IAHPGGPAILDQVELKLGLKQEKLKATRKVLSNYGNMSSACVLFILDEMRKASAKEGLGT\n", " TGEGLEWGVLFGFGPGLTVETVVLHSVAT\n", "\n", "Suppose you wanted to know how many records the Bio.SeqIO.write() function wrote to the handle?\n", "If your records were in a list you could just use len(my_records), however you can't do that when your records come from a generator/iterator. TheBio.SeqIO.write() function returns the number of SeqRecord objects written to the file. \n", "\n", "**Note** - If you tell the Bio.SeqIO.write() function to write to a file that already exists, the old file will be overwritten without any warning." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.5.1 Round trips" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Some people like their parsers to be 'round-tripable', meaning if you read in\n", "a file and write it back out again it is unchanged. This requires that the parser\n", "must extract enough information to reproduce the original file _exactly_.\n", "Bio.SeqIO does _not_ aim to do this.\n", "\n", "As a trivial example, any line wrapping of the sequence data in FASTA files is\n", "allowed. 
An identical SeqRecord would be given from parsing the following\n", "two examples which differ only in their line breaks:\n", "\n", "    >YAL068C-7235.2170 Putative promoter sequence\n", "    TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCACAGTTTTCGTTAAGA\n", "    GAACTTAACATTTTCTTATGACGTAAATGAAGTTTATATATAAATTTCCTTTTTATTGGA\n", "    \n", "    >YAL068C-7235.2170 Putative promoter sequence\n", "    TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCA\n", "    CAGTTTTCGTTAAGAGAACTTAACATTTTCTTATGACGTAAATGA\n", "    AGTTTATATATAAATTTCCTTTTTATTGGA\n", "\n", "To make a round-tripable FASTA parser you would need to keep track of where the\n", "sequence line breaks occurred, and this extra information is usually pointless.\n", "Instead, Biopython uses a default line wrapping of 60 characters on output.\n", "The same problem with white space applies in many other file formats too.\n", "Another issue in some cases is that Biopython does not (yet) preserve every\n", "last bit of annotation (e.g. GenBank and EMBL).\n", "\n", "Occasionally preserving the original layout (with any quirks it may have) is important. See Section 5.4.2.2 about the get_raw() method of the Bio.SeqIO.index() dictionary-like object for one potential solution." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.5.2 Converting between sequence file formats" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous example we used a list of SeqRecord objects as input to the Bio.SeqIO.write() function, but it will also accept a SeqRecord iterator like we get from Bio.SeqIO.parse() - this lets us do file conversion by combining these two functions.\n", "\n", "For this example we'll read in the GenBank format file and write it out in FASTA format:" ] }, { "cell_type": "code", "execution_count": 76, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Converted 94 records\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "records = SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")\n", "count = SeqIO.write(records, \"data/my_example.fasta\", \"fasta\")\n", "print(\"Converted %i records\" % count)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Still, that is a little bit complicated. So, because file conversion is such a\n", "common task, there is a helper function letting you replace that with just:\n" ] }, { "cell_type": "code", "execution_count": 77, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Converted 94 records\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "count = SeqIO.convert(\"data/ls_orchid.gbk\", \"genbank\", \"data/my_example.fasta\", \"fasta\")\n", "print(\"Converted %i records\" % count)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Bio.SeqIO.convert() function will take handles _or_ filenames.\n", "Watch out though - if the output file already exists, it will overwrite it!\n", "To find out more, see the built-in help:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from Bio import SeqIO\n", "\n", "help(SeqIO.convert)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In principle, just by changing the filenames and the format names, this code could be used to convert between any file formats available in Biopython. However, writing some formats requires information (e.g. quality scores) which other file formats don’t contain.\n", "
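To make this concrete, converting FASTQ to FASTA simply discards the quality scores - here is a quick sketch using the small data/example.fastq file from Section 5.6 (the output filename is just illustrative):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from Bio import SeqIO\n", "\n", "# FASTQ to FASTA keeps the titles and sequences but drops the per-base qualities\n", "count = SeqIO.convert(\"data/example.fastq\", \"fastq\", \"data/example_converted.fasta\", \"fasta\")\n", "print(\"Converted %i records\" % count)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "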
For example, while you can turn a FASTQ file into a FASTA file, you can’t do the reverse. See also Sections 20.1.9 and 20.1.10 in the cookbook chapter which looks at inter-converting between different FASTQ formats.\n", "\n", "Finally, as an added incentive for using the Bio.SeqIO.convert() function (on top of the fact your code will be shorter), doing it this way may also be faster! The reason for this is the convert function can take advantage of several file format specific optimisations and tricks." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.5.3 Converting a file of sequences to their reverse complements" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose you had a file of nucleotide sequences, and you wanted to turn it into a file containing their reverse complement sequences. This time a little bit of work is required to transform the SeqRecord objects we get from our input file into something suitable for saving to our output file.\n", "\n", "To start with, we’ll use Bio.SeqIO.parse() to load some nucleotide sequences from a file, then print out their reverse complements using the Seq object’s built in .reverse_complement() method (see Section 3.6):" ] }, { "cell_type": "code", "execution_count": 78, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Z78533.1\nGCGTAAACTCAGCGGGTGCCCCCGCCTGACCTGGGGTCACATCCGAATGGCGGTCAACCGCCCTCCATTGGGGTTCGGAAGGGTTCTCCTGCCTGTCCGACAAGCACGACAACATGGGGGATTCGCACGGCAGCTGCTGCCAGCATCCGTCCACCTCTTGCCGGGTTCCGGGCCATCAAAACACCAGCTCTTGGACCCGCCGCACCTAGGCACAAGGGGCCAATCTTTCACATCCGCACCACGCCGGCCTGGCTGTATGCCGGGCAAGCATTGGCAGGAGAGAGACGAAGCGCGACGCCCAAGCAGGCGTGCCCTTAGCCTGATGGCCTCGGGCGCAACTTGCGTTCAAAAGACTCGATGGTTCACGGGATCTTGCAATTCACACCACTTATCGCATTTCGCTGCGTCCTTCCATCCGATGCAAAGAGCCAAGATTCCCGTTTGCGAGAGTCATCAAAATTCATTGGGCACGCGACAGCACGCCGCCGCTCCGGGTTTTGGGGAAGACAATGCCATTCGCCGGTGATGCTTTCATATGGCTTGGCGCCCAAACTGCGCCGGGCTAGAGGTTCAAACCCGCCATGGACGCTCCCGAGGCGGCCCAACAACAAATCAGGGTCACCACGGGAGCAATGCCCCCGGTGAGCTGAGTACACCGGTCCTCCGGATTCACTCGATCGTTTATTCCACGGTCTCATCAATGATCCTTCCGCAGGTTCACCTACGGAAACCTTGTTACG\nZ78532.1\nGCCTCAACTCAGCGGGTGGCCCCGCCTGACCTGGGGTCGCATCTGAATGGAAATCAACTGCCCAATGGTTATTTTAGCTCCATTGGGGTTCAATTAGGTTCTTGTGTAGGTTCGAAAAAATACAACAACATGGGGGATTCAAATAGCAGCCTTATGACTGTTAGCATTCTCCACCTCGTGCCACATTCCTACCCATCAAAGCAACAATCCTTAGACCCACCGCACCTAGGCACAAGGGGCCAATCATTCACATCCGTATAATGCCAGCTTAGCGATATGCCAAGCAAGCATTGGTAGGAGAGACGCAACACACGACGCCCAGGCAGGCGTGCCCTTAGCCTGATGGCCTCGAGCGCAACTTGCGTTCAAAGACTCGATGGTTCACGGGATTCTGCAATTCACACCACTTATCGCATTTAGCTGCGTTCTTCATCGATGTTAGAGCCAAGATATCCGTTGCGAGAGTCATAAAAATTCACTGGCATGCTCAGTAGCATACTGCCCCTCTGATTTTTTCTGACAATAATGTCATTCATCAGTGATGCTTTATATGACTTGGCGCAAACTGCACCGTACTAAAGTTCAAACCTGCCATGAAAGCTCTTGAGGAGGCCCAACAACAAAGCAGGGTCACGACAAAAGCAGTGCCACGACGAGCTGAGTTACCACAGGTCCTCCAGATTCACTCGATCATATATTCTGTTGTCTCAACAATGATCCTTCCGCAGGTTCACCTACGGAAACCTTGTTACG\nZ78531.1\nTTACTCACGGGTGGCCCGCCTGACTGGGGTCGCATCTGAATGGAAATCACCGCCCGAGGGCGGTTTTGCGTCCACTGGGGGTTCAAACAGGTTCTTCTGTAGGCCTGACGAGTACGACAACACGGGGGATTCGAACGGCAGCCTTGCGGCTGCCAGCATCCTCCACCTACTGCCAAGTTCCGGGCCATCAGAACACCGATGCTTAGACCCGCCGCACCTAGGCGCAAGGGGCCAATCTTTCACGTCCTCACAATGACAGACTAGCCGCATGCCAATCAAGCATTATCAGGAGAGACGCAGCACACGACGCCCAGGCAGGCGTGCCCTTAGCCTGATGGCCTCGGGCGCAACTTGCGTTCAAAGACTCGATGGTTCGCGGGATTCTGCAATTCACACCACTTATCGCATTTCGCTGCGTTCTTCATCGATGCAAGAGCCAGATATCCCGTTGCCGAGAGTCGTCATAAATTCATTGGCACGCGCAACAGACGCCGCCCCTCCGAGTTTTTCTTGACAAAAATGCCATCCATCGGTGACGCTCCATATGACTTGGCGCAAACTGCGCCGCGCTAGACGTTCAAACCCGCCATGAAAGTTCCCGAGGCGGCCCGGTCGCAAACCGGGTTCACCACGAGAGCAAAGCCACGGTGAGCC
GTGTAACCACGGGTCCTCCGGATTCACTCGATCGTATGTTCTGCTGTCTCAACAATGATCCTTCCGCAGGTTCACCTACGGAAACCTTGTTACG\n" ] } ], "source": [ "from Bio import SeqIO\n", "\n", "for i, record in enumerate(SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")):\n", " print(record.id)\n", " print(record.seq.reverse_complement())\n", " if i == 2: # 3 is enough\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, if we want to save these reverse complements to a file, we’ll need to make SeqRecord objects. We can use the SeqRecord object’s built in .reverse_complement() method (see Section 4.9) but we must decide how to name our new records.\n", "\n", "This is an excellent place to demonstrate the power of list comprehensions which make a list in memory:" ] }, { "cell_type": "code", "execution_count": 79, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 79 } ], "source": [ "from Bio import SeqIO\n", "\n", "records = [rec.reverse_complement(id=\"rc_\"+rec.id, description = \"reverse complement\") \\\n", " for rec in SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\")]\n", "len(records)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now list comprehensions have a nice trick up their sleeves, you can add a conditional statement:" ] }, { "cell_type": "code", "execution_count": 80, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "18" ] }, "metadata": {}, "execution_count": 80 } ], "source": [ "records = [rec.reverse_complement(id=\"rc_\"+rec.id, description = \"reverse complement\") \\\n", " for rec in SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\") if len(rec)<700]\n", "len(records)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That would create an in memory list of reverse complement records where the sequence length was under 700 base pairs. However, we can do exactly the same with a generator expression - but with the advantage that this does not create a list of all the records in memory at once:\n" ] }, { "cell_type": "code", "execution_count": 82, "metadata": { "collapsed": false }, "outputs": [], "source": [ "records = (rec.reverse_complement(id=\"rc_\"+rec.id, description = \"reverse complement\") \\\n", " for rec in SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\") if len(rec)<700)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a complete example:\n" ] }, { "cell_type": "code", "execution_count": 83, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "18" ] }, "metadata": {}, "execution_count": 83 } ], "source": [ "from Bio import SeqIO\n", "\n", "records = (rec.reverse_complement(id=\"rc_\"+rec.id, description = \"reverse complement\") \\\n", " for rec in SeqIO.parse(\"data/ls_orchid.fasta\", \"fasta\") if len(rec)<700)\n", "SeqIO.write(records, \"data/rev_comp.fasta\", \"fasta\")" ] }, { "source": [ "There is a related example in Section 20.1.3, translating each record in a FASTA file from nucleotides to amino acids." ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5.5.4 Getting your SeqRecord objects as formatted strings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose that you don't really want to write your records to a file or handle - instead you want a string containing the records in a particular file format. 
The Bio.SeqIO interface is based on handles, but Python has a useful built-in module, io, which provides a string-based handle, StringIO.\n", "\n", "For an example of how you might use this, let's load in a bunch of SeqRecord objects from our orchids GenBank file, and create a string containing the records in FASTA format:" ] }, { "cell_type": "code", "execution_count": 94, "metadata": { "collapsed": false, "tags": [] }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ ">Z78533.1 C.irapeanum 5.8S rRNA gene and ITS1 and ITS2 DNA\nCGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGATGAGACCGTGGAATAAA\nCGATCGAGTGAATCCGGAGGACCGGTGTACTCAGCTCACCGGGGGCATTGCTCCCGTGGT\nGACCCTGATTTGTTGTTGGGCCGCCTCGGGAGCGTCCATGGCGGGTTTGAACCTCTAGCC\nCGGCGCAGTTTGGGCGCCAAGCCATATGAAAGCATCACCGGCGAATGGCATTGTCTTCCC\nCAAAACCCGGAGCGGCGGCGTGCTGTCGCGTGCCCAATGAATTTTGATGACTCTCGCAAA\nCGGGAATCTTGGCTCTTTGCATCGGATGGAAGGACGCAGCGAAATGCGATAAGTGGTGTG\nAATTGCAAGATCCCGTGAACCATCGAGTCTTTTGAACGCAAGTTGCGCCCGAGGCCATCA\nGGCTAAGGGCACGC\n" ] } ], "source": [ "from Bio import SeqIO\n", "from io import StringIO\n", "\n", "records = SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\")\n", "out_handle = StringIO()\n", "SeqIO.write(records, out_handle, \"fasta\")\n", "fasta_data = out_handle.getvalue()\n", "print(fasta_data[:500])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This isn’t entirely straightforward the first time you see it! On the bright side, for the special case where you would like a string containing a single record in a particular file format, use the SeqRecord class’ format() method (see Section 4.6).\n", "\n", "Note that although we don’t encourage it, you can use the format() method to write to a file, for example something like this:" ] }, { "cell_type": "code", "execution_count": 86, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from Bio import SeqIO\n", "\n", "with open(\"data/ls_orchid_long.tab\", \"w\") as out_handle:\n", "    for record in SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\"):\n", "        if len(record) > 100:\n", "            out_handle.write(record.format(\"tab\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While this style of code will work for a simple sequential file format like FASTA or the simple tab separated format used here, it will _not_ work for more complex or interlaced file formats. This is why we still recommend using Bio.SeqIO.write(), as in the following example:" ] }, { "cell_type": "code", "execution_count": 87, "metadata": { "collapsed": false }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "94" ] }, "metadata": {}, "execution_count": 87 } ], "source": [ "from Bio import SeqIO\n", "\n", "records = (rec for rec in SeqIO.parse(\"data/ls_orchid.gbk\", \"genbank\") if len(rec) > 100)\n", "SeqIO.write(records, \"data/ls_orchid.tab\", \"tab\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Making a single call to SeqIO.write(...) is also much quicker than\n", "multiple calls to the SeqRecord.format(...) method." ] }, { "source": [ "## 5.6 Low level FASTA and FASTQ parsers" ], "cell_type": "markdown", "metadata": {} }, { "source": [ "Working with the low-level SimpleFastaParser or FastqGeneralIterator is often more practical than Bio.SeqIO.parse when dealing with large high-throughput FASTA or FASTQ sequencing files where speed matters.\n", "
As noted in the introduction to this chapter, the file-format neutral Bio.SeqIO interface has the overhead of creating many objects even for simple formats like FASTA.\n", "\n", "When parsing FASTA files, internally Bio.SeqIO.parse() calls the low-level SimpleFastaParser with the file handle. You can use this directly - it iterates over the file handle, returning each record as a tuple of two strings: the title line (everything after the > character) and the sequence (as a plain string):" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": 88, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "94 records with total sequence length 67518\n" ] } ], "source": [ "from Bio.SeqIO.FastaIO import SimpleFastaParser\n", "\n", "count = 0\n", "total_len = 0\n", "with open(\"data/ls_orchid.fasta\") as in_handle:\n", "    for title, seq in SimpleFastaParser(in_handle):\n", "        count += 1\n", "        total_len += len(seq)\n", "\n", "print(\"%i records with total sequence length %i\" % (count, total_len))" ] }, { "source": [ "As long as you don’t care about line wrapping (and you probably don’t for short-read high-throughput data), then outputting FASTA format from these strings is also very fast:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Snippet: assumes out_handle, title and seq are already defined (e.g. inside the loop above)\n", "out_handle.write(\">%s\\n%s\\n\" % (title, seq))" ] }, { "source": [ "Likewise, when parsing FASTQ files, internally Bio.SeqIO.parse() calls the low-level FastqGeneralIterator with the file handle. If you don’t need the quality scores turned into integers, or can work with them as ASCII strings, this is ideal:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "3 records with total sequence length 75\n" ] } ], "source": [ "from Bio.SeqIO.QualityIO import FastqGeneralIterator\n", "\n", "count = 0\n", "total_len = 0\n", "with open(\"data/example.fastq\") as in_handle:\n", "    for title, seq, qual in FastqGeneralIterator(in_handle):\n", "        count += 1\n", "        total_len += len(seq)\n", "\n", "print(\"%i records with total sequence length %i\" % (count, total_len))" ] }, { "source": [ "There are more examples of this in the Cookbook (Chapter 20), including how to output FASTQ efficiently from strings using this code snippet:" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Snippet: assumes out_handle, title, seq and qual are already defined (e.g. inside the loop above)\n", "out_handle.write(\"@%s\\n%s\\n+\\n%s\\n\" % (title, seq, qual))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.3-final" } }, "nbformat": 4, "nbformat_minor": 0 }