{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Convert ns-ALEX Becker-Hickl SPC/SET files to Photon-HDF5\n", "

This Jupyter notebook\n", "will guide you through the conversion of an ns-ALEX data file from SPC/SET\n", "to Photon-HDF5 format. For more info on how to edit\n", "a Jupyter notebook refer to this example.

\n", "\n", "Please send feedback and report any problem to the \n", "[Photon-HDF5 google group](https://groups.google.com/forum/#!forum/photon-hdf5).\n", "\n", "# 1. How to run it?\n", "\n", "The notebook is composed by \"text cells\", such as this paragraph, and \"code cells\"\n", "containing the code to be executed (and identified by an `In [ ]` prompt). \n", "To execute a code cell, select it and press **SHIFT+ENTER**. \n", "To modify an cell, click on it to enter \"edit mode\" (indicated by a green frame), \n", "then type.\n", "\n", "You can run this notebook directly online (for demo purposes), or you can \n", "run it on your on desktop. For a local installation please refer to:\n", "\n", "- [Jupyter Notebook Quick-Start Guide](http://jupyter-notebook-beginner-guide.readthedocs.org) \n", "\n", "
\n", "
\n", "Please run each each code cell using SHIFT+ENTER.\n", "
\n", "\n", "# 2. Prepare the data file\n", "\n", "## 2.1 Upload the data file\n", "\n", "
\n", "
\n", "Note: if you are running the notebook locally skip to section 2.2.\n", "
\n", "\n", "Before starting, you have to upload a data file to be converted to Photon-HDF5.\n", "You can use one of our example data files available\n", "[on figshare](http://dx.doi.org/10.6084/m9.figshare.1455963). \n", "\n", "To upload a file (up to 35 MB) switch to the \"Home\" tab in your browser, \n", "click the upload button and select the data file. \n", "Wait until the upload completes.\n", "For larger files (like some of our example files) please use the \n", "[Upload notebook](Upload data files.ipynb) instead.\n", "\n", "Once the file is uploaded, come back here and follow the instructions below.\n", "\n", "## 2.2 Select the file\n", "\n", "Specify the input data file in the following cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filename = 'data/dsdna_d7d17_50_50_1.spc'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next cell will check if the `filename` is correct:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\n", "try: \n", " with open(filename): pass\n", " print('Data file found, you can proceed.')\n", "except IOError:\n", " print('ATTENTION: Data file not found, please check the filename.\\n'\n", " ' (current value \"%s\")' % filename)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In case of file not found, please double check the file name\n", "and that the file has been uploaded." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3. 
Load the data\n", "\n", "We start by loading the software:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import phconvert as phc\n", "print('phconvert version: ' + phc.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we load the input file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "d, meta = phc.loader.nsalex_bh(filename,\n", " donor = 4,\n", " acceptor = 6,\n", " alex_period_donor = (1800, 3300),\n", " alex_period_acceptor = (270, 1500),\n", " excitation_wavelengths = (532e-9, 635e-9),\n", " detection_wavelengths = (580e-9, 680e-9),)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we plot the `nanotimes` histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.plotter.alternation_hist(d)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The previous plot shows the `nanotimes` histogram for the donor and acceptor channels separately.\n", "The shaded areas mark the donor (*green*) and acceptor (*red*) excitation periods.\n", "\n", "If the histogram looks wrong in some aspect (no photons, wrong detector\n", "assignment, wrong period selection) please go back to the previous cell \n", "which loads the file and change the parameters until the histogram looks correct." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may also find it useful to see how many different detectors are present\n", "and their number of photons. 
This information is shown in the next cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [], "source": [ "detectors = d['photon_data']['detectors']\n", "\n", "print(\"Detector Counts\")\n", "print(\"-------- --------\")\n", "for det, count in zip(*np.unique(detectors, return_counts=True)):\n", " print(\"%8d %8d\" % (det, count))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Metadata\n", "\n", "In the next few cells, we specify some metadata that will be stored \n", "in the Photon-HDF5 file. Please modify these fields to reflect\n", "the content of the data file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "author = 'John Doe'\n", "author_affiliation = 'Research Institution'\n", "description = 'A demonstrative smFRET-nsALEX measurement.'\n", "sample_name = '50-50 mixture of two FRET samples'\n", "dye_names = 'ATTO550, ATTO647N'\n", "buffer_name = 'TE50'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 5. Conversion\n", "
\n", "
\n", "

Once you have finished editing the previous sections, you can proceed with\n", "the actual conversion. To do that, click on the menu Cell -> Run All Below.\n", "\n", "

After the execution, go to Section 6 to download the Photon-HDF5 file.\n", "

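As a small self-contained example of how one metadata field is derived: in section 5.1, `num_dyes` is computed by splitting the comma-separated `dye_names` string defined earlier:

```python
dye_names = 'ATTO550, ATTO647N'
# One dye per comma-separated item
num_dyes = len(dye_names.split(','))
print(num_dyes)  # 2
```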
\n", "\n", "The cells below contain the code to convert the input file to Photon-HDF5." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.1 Add metadata" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "d['description'] = description\n", "\n", "d['sample'] = dict(\n", " sample_name=sample_name,\n", " dye_names=dye_names,\n", " buffer_name=buffer_name,\n", " num_dyes = len(dye_names.split(',')))\n", "\n", "d['identity'] = dict(\n", " author=author,\n", " author_affiliation=author_affiliation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For completeness, we also store all the raw metadata from SET file in a user group:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "d['user'] = {'becker_hickl': meta}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.2 Save to Photon-HDF5\n", "\n", "This command saves the new file to disk. If the input data does not follows the Photon-HDF5 specification it returns an error (`Invalid_PhotonHDF5`) printing what violates the specs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.save_photon_hdf5(d, overwrite=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can check it's content by using an HDF5 viewer such as [HDFView](https://www.hdfgroup.org/products/java/hdfview/).\n", "\n", "# 6. 
Load Photon-HDF5\n", "\n", "We can load the newly created Photon-HDF5 file to check its content:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pprint import pprint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "filename = d['_data_file'].filename" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "h5data = phc.hdf5.load_photon_hdf5(filename)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.dict_from_group(h5data.identity)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.dict_from_group(h5data.setup)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pprint(phc.hdf5.dict_from_group(h5data.photon_data))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "h5data._v_file.close()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 0 }