{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "# [Click here for quick-start to go from data slicing to figures in < 2 minutes](http://hannahcarterlab.org/public-jupyterhub/)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Table of Contents\n", "=================\n", "\n", "(Links are clickable if u open this README in JupyterNotebook)\n", " * [Summary](#Summary)\n", " * [Data directory and loading examples](#Data-directory-and-loading-examples)\n", " * [-omic data](#-omic-data)\n", " * [Metadata](#Metadata)\n", " * [Axulilary](#Axulilary)\n", " * [Example jupyter notebook analysis using reprocessed data](#Example-jupyter-notebook-analysis-using-reprocessed-data)\n", " * [1. Locating variant and correlating with RNAseq and metadata](#Locating-variant-and-correlating-with-RNAseq-and-metadata)\n", " * [2. High resolution mouse developmental hierachy map](#High-resolution-mouse-developmental-hierachy-map)\n", " * [3. Simple RNAseq data slicing and hypothesis testing](#Simple-RNAseq-data-slicing-and-hypothesis-testing)\n", " * [Methods](#Methods)\n", " * [Slides](#Slides)\n", " * [Pipeline](#Pipeline)\n", " * [Metadata: Download, parse and merge SRA META DATA](#SRA-Metadata-download-parse-and-merge)\n", " * [Allelic read counts](#Allelic-read-counts)\n", " * [Transcript counting](#Transcript-counting)\n", " * [Metadata layout axuliary](#Metadata-layout-axuliary)\n", " * [Acknowledgement](#Acknowledgement)\n", " * [Data format and coding style](#Data-format-and-coding-style)\n", " * [References](#References)\n", " * [Manuscripts in biorxiv related to this project](#Manuscripts-in-biorxiv-related-to-this-project)\n", " * [Status](#Status)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Feel free to contact me @: btsui@eng.ucsd.edu (I will try to reply within 3 days)**\n", "\n", "# Summary\n", "\n", "Skymap is a standalone database that aims to offer: \n", "1. **a single data matrix** for each omic layer for each species that spans a total of **>400k sequencing runs** from all the public studies, which is done by reprocessing **petabytes** worth of sequencing data. Here is how much data we have reprocessed from the SRA: \n", "![alt text](./Figures/sra_data_processed.png)\n", "2. **a biological metadata file** that describe the relationships between the sequencing runs and also the keywords extracted from over **3 million** freetext annotations using NLP. \n", "3. **a technical metadata file** that describes the relationships between the sequencing runs. \n", "\n", "**Solution: three tables to related > 100k experiments**: \n", "For examples, all the variant data and the data columns can be interpolated like this: \n", "![alt text](./Figures/Skymap_SNP_description.png\n", ")\n", "\n", "**Where they can all fit into your personal computer.**\n", "\n", "[Click here to check out the quick start page and start playing around with the data](https://github.com/brianyiktaktsui/Skymap/blob/master/README.md#quick-start-10-mins)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "# quick-installation-10-mins\n", "\n", "\n", "\n", "1. [install miniconda/anaconda with python version >=3.4]( https://conda.io/miniconda.html) (**won't work with python 2**)\n", "\n", "2. Copy and paste to run this following line in unix terminal\n", " * `conda create --yes -n skymap jupyter python=3.6 pandas=0.23.4 && source activate skymap && jupyter-notebook`\n", "\n", "3. 
"4. Choose one of the following notebooks to run. **The code will automatically update your Python pandas**; [create a new conda environment if necessary](https://conda.io/docs/user-guide/tasks/manage-environments.html).\n", " * **loadVariantDataBySRRID.ipynb**: requires 1GB of disk space and 5GB of RAM. \n", " * **loadingRNAseqByGene.ipynb**: requires 20GB of disk space and 1GB of RAM. \n", "\n", "\n", "5. Click \"Run All\" to execute all the cells. The notebook will download the example data, install the dependencies and run the data query example. \n", "\n", "\n", "\n", "\n", "[Check here for more info on executing jupyter notebook](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html)\n", "\n", "\n", "\n", "\n", "**Troubleshooting:** \n", "\n", "* If you run into errors from packages, try the versions I used: python v3.6.5, pandas v0.23.4, synapse client v1.8.1. \n", "* If the Sage Synapse download fails, download the corresponding python pandas pickle using the web interface instead (https://www.synapse.org/#!Synapse:syn11415602/files/) and read in the pickle using pandas.read_pickle.\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "# Data directory and loading examples\n", "\n", "\n", "\n", "I tried to keep data loading as simple as possible. Each Jupyter notebook has <10 lines of Python code and depends only on Python pandas. The memory requirement is less than 5GB in every case. \n", "\n", "### -omic data\n", "| Title | Data URL | Jupyter-notebook loading examples | Format | Uses|\n", "| ---: | ----: | ----: | ----:| ---: |\n", "| Loading allelic read counts by SRR (SRA sequencing run) ID | https://www.synapse.org/#!Synapse:syn15624400 | [click me to view](./clean_notebooks/ExampleDataLoading/loadVariantDataBySRRID.ipynb) | python pandas pickle dataframe| Variant, CNV detection|\n", "| Expression matrices| https://www.synapse.org/#!Synapse:syn11415787 | [click me to view](./clean_notebooks/ExampleDataLoading/loadingRNAseqByGene.ipynb)| numpy array|Expression level quantification| \n", "| Read coverage | - | [availability depending upon demand](http://seqanswers.com/forums/showthread.php?t=83975)| - | ChIP peak detection|\n", "|Microbe quantification| - | [availability depending upon demand](http://seqanswers.com/forums/showthread.php?t=83975) | - | Microbiome community detection |\n", "\n", "\n", "### Metadata\n", "\n", "All the metadata files are located in the Sage Synapse folder: https://www.synapse.org/#!Synapse:syn15661258\n", "\n", "| Title | File name | Jupyter-notebook loading examples | Format | \n", "| ---: | ----: | ----: | ----:|\n", "| biospecimen annotations| allSRS.pickle.gz | [click me to view](./clean_notebooks/ExampleDataLoading/loadInMetaData.ipynb)| python pandas pickle dataframe|\n", "| experimental annotations | allSRX.pickle.gz | [click me to view](./clean_notebooks/ExampleDataLoading/loadInMetaData.ipynb) | python pandas pickle dataframe|\n", "|biospecimen, experiment and sequencing run mappings; sequencing and QC stats| sra_dump.fastqc.bowtie_algn.pickle | [click me to view](./clean_notebooks/ExampleDataLoading/loadInMetaData.ipynb) | python pandas pickle dataframe|\n", "\n",
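"The linked notebooks are the authoritative loading examples. As a minimal sketch, assuming the annotation pickles listed above have already been downloaded from the Synapse folder into the working directory, loading them needs nothing beyond pandas:\n", "\n",
"```python\n", "import pandas as pd\n", "\n", "# biospecimen (SRS-level) annotations -- see loadInMetaData.ipynb for the full walkthrough\n", "allSRS = pd.read_pickle('allSRS.pickle.gz')\n", "\n", "# experiment (SRX-level) annotations\n", "allSRX = pd.read_pickle('allSRX.pickle.gz')\n", "\n", "print(allSRS.head())\n", "print(allSRX.head())\n", "```\n", "\n",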
"### Auxiliary\n", "|Title| File name | \n", "| ---: | ----: | \n", "| Distribution of data processed over time | [checkProgress.ipynb](./Pipelines/RNAseq/checkProgress.ipynb) | \n", "|Generate RNAseq references| [generateReferences.ipynb](./RNAseq/generateReferences.ipynb)|" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Example jupyter notebook analysis using reprocessed data\n", "\n", "### Locating variant and correlating with RNAseq and metadata\n", "This is probably the best example to give you an idea of how to go from data slicing in Skymap to basic data analysis. \n", "\n", "[jupyter notebook link](https://github.com/brianyiktaktsui/Skymap/blob/master/XGS_WGS/FindStudiesWithBrafV600Mutated.ipynb)\n", "\n", "### High resolution mouse developmental hierarchy map\n", "[jupyter notebook link](https://github.com/brianyiktaktsui/Skymap/blob/master/jupyter-notebooks/clean_notebooks/TemporalQuery_V4_all_clean.ipynb)\n", "\n", "Many studies (nodes) are aggregated to form a smooth mouse developmental hierarchy map. By integrating the vast amount of public data, we can cover many developmental time points and sometimes capture more transient expression dynamics, both across tissues and within tissues, over the developmental time course. \n", "\n", "Each component represents a tissue. Each node represents a particular study at a particular time unit. The color is based on the developmental time extracted from the experimental annotations using regex. The node size represents the number of sequencing runs at that particular time point in that study. Each edge represents a differentiate-to or part-of relationship.\n", "![alt text](./Figures/heirachy_time.png)\n", "You can easily overlay gene expression levels on top of it. As an example, Tp53 expression is known to be tightly regulated in development. Let's look at the dynamics of Tp53 expression over time and across spatial locations in the following plot.\n", "![alt_text](./Figures/heirachy_Trp53.png \"tp53\")\n", "\n", "### Simple RNAseq data slicing and hypothesis testing\n", "[jupyter notebook link](https://github.com/brianyiktaktsui/Skymap_legacy/blob/master/DataSlicingExample.ipynb)\n", "\n",
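"Below is a minimal, self-contained sketch of the slice-and-test pattern that notebook demonstrates. The expression values, run IDs and grouping are made up for illustration, and scipy is assumed to be installed alongside pandas:\n", "\n",
"```python\n", "import pandas as pd\n", "from scipy import stats\n", "\n", "# toy expression slice: rows are genes, columns are sequencing runs (all values made up)\n", "expression_df = pd.DataFrame(\n", "    {'SRR_A1': [5.2, 0.1], 'SRR_A2': [4.8, 0.3],\n", "     'SRR_B1': [1.1, 2.2], 'SRR_B2': [0.9, 2.5]},\n", "    index=['GeneA', 'GeneB'])\n", "\n", "# hypothetical grouping that would normally come from the metadata tables\n", "group_a = ['SRR_A1', 'SRR_A2']\n", "group_b = ['SRR_B1', 'SRR_B2']\n", "\n", "# two-sample t-test on GeneA expression between the two groups\n", "t_stat, p_value = stats.ttest_ind(expression_df.loc['GeneA', group_a],\n", "                                  expression_df.loc['GeneA', group_b])\n", "print(t_stat, p_value)\n", "```\n", "\n",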
\n", "\n", "|Title| Mansucript URL | Figures URL | \n", "| ---: | ---: |---: |\n", "|Extracting allelic read counts from 250,000 human sequencing runs in Sequence Read Archive| https://docs.google.com/document/d/1BGGQOpWczOwan9STqs-J9zpa8A-aj4aJ1RND_qKzRFs |https://docs.google.com/presentation/d/1dERUDHh2ab8UdPaHa-ki-8RMae6yi2eYJQM4b7ArVog |\n", "|Meta-analysis using NLP (Metamap) and reprocessed RNAseq data| https://docs.google.com/presentation/d/14vLJJQ6ziw-2aLDoQAJGyv1sYo5ENzljsqsbZr9jNLM| \n", "\n", "\n", "\n", "\n", "| Title | google docs | google slides |\n", "| ---: | --: | --: | \n", "| Extracting allelic read counts from 250,000 human sequencing runs in Sequence Read Archive | https://docs.google.com/document/d/1BGGQOpWczOwan9STqs-J9zpa8A-aj4aJ1RND_qKzRFs | https://docs.google.com/presentation/d/1dERUDHh2ab8UdPaHa-ki-8RMae6yi2eYJQM4b7ArVog |\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Unpublished but ongoing manuscripts**\n", "\n", "| Title | google doc |\n", "| ---: | --: |\n", "| Meta-analysis using NLP (Metamap) and reprocessed RNAseq data | https://docs.google.com/document/d/1_nES7vroX7lCwf5NSNBVZ1k2iubYm5wLeFqusq5aZuk | |\n", "| Deep biomedical named entity recognition NLP engine | https://docs.google.com/document/d/1sbm9L8-OCVZ_qoPqwZyedE5uL4I9k0Hg7znZn6El_l0 | https://github.com/brianyiktaktsui/DEEP_NLP |" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pipeline\n", "\n", "The way I organized the code is trying to keep the code as simple as possible. \n", "For each pipeline, it has 6 scripts, <500 lines each to ensure readability. Run each pipeliine starting with calcuate_uprocessed.py, which calculate the number of files still require for processing.\n", "\n", "\n", "\n", "If you happen to want to make a copy of the pipeline: \n", "* make a copy of the pipeline by cloning this github repo, \n", "\n", "* `conda env create -n environment_conda_py26_btsui --force -f ./conda_envs/environment_conda_py26_btsui.yml`\n", "\n", "* `conda env create -n environment_conda_py36_btsui --force -f ./conda_envs/environment_conda_py36_btsui.yml`\n", "\n", "\n", "* For Python 2 codes, `source activate environment_conda_py26_btsui` before running \n", "\n", "* For Python 3 codes, `source activate environment_conda_py36_btsui` before running \n", "\n", "here are the scripts:\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### SRA Metadata download parse and merge \n", "\n", "\n", "\n", "|Steps | Code| Input description|Input dir|Output description|Output dir| Timing| Python version|\n", "|----:| ----:| ----:|----:|----:|----:| ----:|----:|\n", "|download SRAmetadata|[pull_SRA_meta](./Pipelines/Update_SRA_meta_data/pull_SRA_meta.ipynb) |none, download from web||SRA meta data| /nrnb/users/btsui/tmp/SRA_META/| 30 mins| Python 3 |\n", "|parse SRA metadata using Metamap| [parse SRS and SRX metadata](./SRA_META/SRAManager.ipynb) | SRA files in XML formats|/nrnb/users/btsui/tmp/SRA_META/| list of pandas series containing list of (SRS,attribute, freetext) | /cellar/users/btsui/Data/nrnb01_nobackup/tmp/METAMAP//splittedInput_SRAMangaer_SRA_META/| 20 mins| Python 2| \n", "|merge SRA metadata | [merge SRS and SRX parsed](./SRA_META/SRAmerge.ipynb)| | list of pandas series containing list of (SRS,attribute, freetext) |/cellar/users/btsui/Data/nrnb01_nobackup/tmp/METAMAP//splittedInput_SRAMangaer_SRA_META/(allSRS.pickle,allSRX.pickle) |all SRA SRS biospecieman annotation in allSRS.pickle and allSRX.pickle | 
"|annotate SRA metadata based on parsed SRX | [annotate SRA data](./Pipelines/Update_SRA_meta_data/annotate_SRA_meta_data.ipynb) |\n", "| |[upload metadata to AWS](Pipelines/Update_SRA_meta_data/upload_AWS.ipynb)|\n", "\n", "\n", "Replace my directory (/cellar/users/btsui/Project/METAMAP/code/metamap/) with your own directory if you want to run it. " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Allelic read counts\n", "|Pipeline | Code| \n", "|---- | ----|\n", "|SNP extraction | ./Pipelines/snp/calculate_unprocessed.py|\n", "|calculating read coverage | ./Pipelines/chip/calculate_unprocessed.py|\n", "|merge SNP alignment statistics | [merge_variant_aligning_statistics](./Analysis/merge_variant_aligning_statistics.ipynb) |\n", "| merge SNP data | [merge SNP data into pandas pickles](./Pipelines/snp/Merge/calculate_unprocessed.ipynb) |\n", "\n", "\n", "Details of benchmarking against TCGA are located at: \n", "\n", "http://localhost:6001/notebooks/Project/METAMAP/notebook/RapMapTest/XGS_WGS/README_TCGA_benchmark.ipynb\n", "\n", "However, because patient allele information is protected, we are not providing the allelic read counts for TCGA. \n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Transcript counting\n", "\n", "| Code| \n", "|---- | \n", "|./Pipelines/RNAseq/calculate_unprocessed.py |\n", "| [merge RNAseq data into transcripts](./Pipelines/RNAseq/merge/mergeKallistoOutputIntoTranscript.ipynb)|\n", "| [reduce transcript counts to gene counts using sum](./Pipelines/RNAseq/merge/reduceToGeneLevel.ipynb)|\n", "|[merge data into a single numpy mmap matrix](./Pipelines/RNAseq/merge/mergeGeneMatrix.ipynb)|\n", "| [upload expression data to synapse](./Pipelines/RNAseq/merge/upload_AWS.ipynb)|\n", "| [file count](./Pipelines/RNAseq/merge/fileCount.ipynb)|\n", "| [merge Kallisto mapping stats](./Pipelines/RNAseq/merge/mergeKalistoMappingStats.ipynb)|" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Metadata layout auxiliary\n", "|Column | meaning|\n", "| :---: | :--- |\n", "| new_ScientificName | the species string the pipeline uses to match against the reference genome |\n", "| ScientificName | original scientific name extracted from NCBI SRS| " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Acknowledgement\n", "\n", "\n", "Please consider citing if you are using Skymap. (https://www.biorxiv.org/content/early/2018/08/07/386441)\n", "\n", "We want to thank Dr. Hannah Carter (my PI), Dr. Jill Mesirov, Dr. Trey Ideker and Shamin Mollah for their advice and resources, and Dr. Ruben Arbagayen and Dr. Nate Lewis for their suggestions. \n", "The method will soon be posted on bioRxiv. We also want to thank Sage Bionetworks for hosting the data, and the NCBI for holding all the published raw reads in the [Sequence Read Archive](https://www.ncbi.nlm.nih.gov/sra). \n", "\n", "There are also many people who helped test Skymap: Ben Kellman, Rachel Marty, Daniel Carlin, Spiko van Dam. \n", "\n", "Grants that made this work possible: NIH DP5OD017937, GM103504\n", "\n", "Terms of use: use Skymap however you want. Just don't sue me, I have no money. \n", "\n", "\n", "For why I named it Skymap, I forgot.\n", "\n", "## Data format and coding style\n", "\n", "The storage is in Python pandas pickle format. Therefore, the only packages you need to load the data are numpy and pandas, the backbone of data analysis in Python. We keep the process of data loading as lean as possible: less code means fewer bugs and fewer errors. For now, Skymap is geared towards ML/data science folks who are hungry for the vast amount of data and are not afraid of coding. I will port the data to native HDF5 format to reduce platform dependency once I get a chance. \n", "\n",
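"As an illustration of how lean that loading is, here is a hedged sketch of slicing the technical metadata down to mouse runs; the file and column names come from the tables above, while the species string and the assumption of a flat dataframe in the working directory are mine:\n", "\n",
"```python\n", "import pandas as pd\n", "\n", "# technical metadata described above: run mappings plus sequencing and QC stats\n", "technical_meta_df = pd.read_pickle('sra_dump.fastqc.bowtie_algn.pickle')\n", "\n", "# new_ScientificName holds the species string used to pick the reference genome;\n", "# 'Mus musculus' is just an assumed example value\n", "mouse_runs_df = technical_meta_df[technical_meta_df['new_ScientificName'] == 'Mus musculus']\n", "print(mouse_runs_df.shape)\n", "```\n", "\n",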
"I tried to keep the code and parameters lean and self-explanatory for your reference. \n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# References\n", "**ISMB 2018 poster:** https://github.com/brianyiktaktsui/Skymap/blob/master/ISMB_poster_Skymap.pdf\n", "\n", "**Preprint on allelic read counts:** https://www.synapse.org/#!Synapse:syn11415602/files/\n", "\n", "**Data:** https://www.synapse.org/#!Synapse:syn11415602/files/" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Manuscripts in biorxiv related to this project\n", "| Title | URL to manuscript | github| \n", "| ---: | ---: | ---: | \n", "| Extracting allelic read counts from 250,000 human sequencing runs in Sequence Read Archive| https://www.biorxiv.org/content/biorxiv/early/2018/08/07/386441.full.pdf | |\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Status \n", "[Ongoing task](http://localhost:6001/notebooks/Project/METAMAP/notebook/RapMapTest/Pipelines/RNAseq/merge/mergeKallistoOutputIntoTranscript.ipynb)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }