{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Convert ns-ALEX HT3 files to Photon-HDF5\n", "\n", "## Summary\n", "This executable document (called a [Jupyter notebook](http://ipython.org/notebook.html)) will guide you through the conversion of a ns-ALEX data file from **HT3** to [Photon-HDF5](http://photon-hdf5.org) format.\n", "\n", "If this is your first time using a Jupyter notebook, please click on *Help* -> *User Interface Tour* for a quick tour of the interface.\n", "\n", "This notebook contains text cells, like this paragraph, and code cells containing the code to be executed.\n", "To execute a selected code cell, hit SHIFT+ENTER. To edit a code cell, select it so that it shows a green frame around it.\n", "\n", "If you are reading this notebook online, please refer to \n", "[this quick-start guide](http://jupyter-notebook-beginner-guide.readthedocs.org) for instructions \n", "on how to install the required software and run a Jupyter notebook on your machine. \n", "\n", "## Prepare the data files\n", "You can run this notebook using the example data files available \n", "[on figshare](http://dx.doi.org/10.6084/m9.figshare.1455963). If you use these example files, \n", "please unzip them and put them in a folder named \"data\" (lower case) inside the folder \n", "containing this notebook.\n", "\n", "Alternatively, you can use your own HT3 file. In this case, you need to paste \n", "the full path of your HT3 file in the following cell, replacing the value between the single quotes `'`."
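, "\n",
"For example, on Windows you might write something like the following (a purely hypothetical path; the `r` prefix keeps backslashes from being interpreted as escape sequences):\n",
"\n",
"```python\n",
"# Hypothetical path: edit it to point to your own HT3 file\n",
"filename = r'C:\\\\Data\\\\measurement.ht3'\n",
"```\n"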
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filename = r'data/Pre.ht3'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next cell will check if the `filename` location is correct:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\n", "try: \n", " with open(filename): pass\n", " print('Data file found, you can proceed.')\n", "except IOError:\n", " print('ATTENTION: Data file not found, please check the filename.\\n'\n", " ' (current value \"%s\")' % filename)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the file is not found, please double-check that you put the example \n", "data files in the \"data\" folder, or that the path you entered in `filename`\n", "is correct. Re-execute the last two cells until the file is found." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data file description\n", "\n", "Here we specify the additional metadata that will be stored \n", "in the Photon-HDF5 file. If you are using the example file, you don't need to \n", "edit any of these. If using your own file, please modify these descriptions\n", "accordingly."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Author\n", "\n", "These fields will go in the [identity](http://photon-hdf5.readthedocs.org/en/latest/phdata.html#identity-group) group:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "author = 'Eitan Lerner'\n", "author_affiliation = 'UCLA'\n", "creator = 'Antonino Ingargiola'\n", "creator_affiliation = 'UCLA'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sample\n", "\n", "These fields will go in the [sample](http://photon-hdf5.readthedocs.org/en/latest/phdata.html#sample-group) group:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "description = 'A demonstrative smFRET-nsALEX measurement.'\n", "sample_name = 'Doubly-labeled ssDNA partially hybridized to a complementary strand.'\n", "dye_names = ['ATTO488', 'ATTO647N']\n", "buffer_name = 'Tris 20 mM, pH 7.8'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Please edit the previous cells and execute them (SHIFT+ENTER) to make sure \n", "there are no errors. Then proceed to the next section."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the data\n", "\n", "Before loading the data, we need to import a few libraries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import phconvert as phc\n", "print('phconvert version: ' + phc.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we can load the input file and assign the measurement parameters \n", "([measurement_specs](http://photon-hdf5.readthedocs.org/en/latest/phdata.html#measurement-specs)), \n", "necessary to create a complete Photon-HDF5 file.\n", "\n", "If using your own file, please review all the parameters in the next cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "d, meta = phc.loader.nsalex_ht3(filename,\n", " donor = 0,\n", " acceptor = 1,\n", " alex_period_donor = (150, 1500),\n", " alex_period_acceptor = (1540, 3050),\n", " excitation_wavelengths = (470e-9, 635e-9),\n", " detection_wavelengths = (525e-9, 690e-9),\n", " time_reversed = False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next cell plots a `nanotimes` histogram for the donor and acceptor channels separately.\n", "The shaded areas mark the donor (*green*) and acceptor (*red*) excitation periods.\n", "\n", "If the histogram looks wrong in some aspect (no photons, wrong detector\n", "assignment, wrong period selection), please go back to the previous cell \n", "and tweak the relevant parameters until the histogram looks correct."
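, "\n",
"As an extra sanity check, you can compute the fraction of photons whose `nanotimes` fall inside each excitation period. The helper below is only a sketch (the function name and the default ranges, copied from the loader call above, are our own choice):\n",
"\n",
"```python\n",
"def period_fractions(nanotimes, donor=(150, 1500), acceptor=(1540, 3050)):\n",
"    \"\"\"Return the fraction of photons inside the donor and acceptor periods.\"\"\"\n",
"    in_donor = ((nanotimes >= donor[0]) & (nanotimes < donor[1])).mean()\n",
"    in_acceptor = ((nanotimes >= acceptor[0]) & (nanotimes < acceptor[1])).mean()\n",
"    return in_donor, in_acceptor\n",
"```\n",
"\n",
"For example, `period_fractions(d['photon_data']['nanotimes'])` should return two fractions well above zero; if either is close to zero, the corresponding `alex_period_*` range probably needs adjusting.\n"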
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(10, 4))\n", "phc.plotter.alternation_hist(d, ax=ax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may also find it useful to see how many different detectors are present\n", "and their number of photons. This information is shown in the next cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "detectors = d['photon_data']['detectors']\n", "\n", "print(\"Detector Counts\")\n", "print(\"-------- --------\")\n", "for det, count in zip(*np.unique(detectors, return_counts=True)):\n", " print(\"%8d %8d\" % (det, count))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# File conversion\n", "\n", "Once you have finished editing the previous sections, you can proceed with\n", "the actual conversion. We suggest executing the whole notebook in\n", "one step by clicking on the menu *Cell* -> *Run All*.\n", "\n", "After that, you should find a new `.hdf5` file in the same folder as the input\n", "file. You can check its content by using [HDFView](https://www.hdfgroup.org/products/java/hdfview/).\n", "\n", "The cells below contain the code to convert the input file to Photon-HDF5."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Add metadata" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "d['description'] = description\n", "\n", "d['sample'] = dict(\n", " sample_name=sample_name,\n", " dye_names=[n.encode() for n in dye_names],\n", " buffer_name=buffer_name,\n", " num_dyes = len(dye_names))\n", "\n", "d['identity'] = dict(\n", " author=author,\n", " author_affiliation=author_affiliation,\n", " creator=creator,\n", " creator_affiliation=creator_affiliation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Validate the Photon-HDF5 structure\n", "\n", "Before writing to disk, we check that the file structure follows the Photon-HDF5 format:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "phc.hdf5.assert_valid_photon_hdf5(d)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Save to Photon-HDF5\n", "\n", "This command saves the new file to disk." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.save_photon_hdf5(d, close=False, overwrite=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#d['_data_file'].close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Save HT3 metadata\n", "\n", "Here we save a custom (*user*) group where we put all the metadata\n", "found in the input .HT3 file. The important metadata from the .HT3\n", "file is already saved in the standard Photon-HDF5 fields.\n", "\n", "Saving the full original metadata ensures that\n", "no information is lost during the conversion."
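, "\n",
"If you are curious about which fields will be saved, you can list them before writing (a quick sketch; the helper name is our own, and `meta` is the dictionary returned by the loader above):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def list_meta_fields(meta):\n",
"    \"\"\"Return sorted (field name, number of elements) pairs.\"\"\"\n",
"    return sorted((key, np.size(meta[key])) for key in meta)\n",
"```\n",
"\n",
"Calling `list_meta_fields(meta)` shows at a glance which fields are non-empty and will therefore be written to the *user* group by the next cell.\n"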
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "h5file = d['_data_file']\n", "h5file.create_group('/', 'user')\n", "pq_group = h5file.create_group('/user', 'picoquant')\n", "for key in meta: \n", " if np.size(meta[key]) > 0:\n", " h5file.create_table(pq_group, key, obj=meta[key])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#h5file.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Print HDF5 file content\n", "\n", "Finally, we print the file content to see what's inside the newly created Photon-HDF5 file. Here we print the content of the root node:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.print_children(h5file.root)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here we retrieve some information from the user group:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.print_children(h5file.root.user.picoquant)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "h5file.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Photon-HDF5\n", "\n", "Finally, we reload the saved file to check that it can be read without errors."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pprint import pprint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "filename = d['_data_file'].filename" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "h5data = phc.hdf5.load_photon_hdf5(filename)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.dict_from_group(h5data.identity)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phc.hdf5.dict_from_group(h5data.setup)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pprint(phc.hdf5.dict_from_group(h5data.photon_data))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "If the next cell prints \"OK\", the notebook has completed without errors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print('OK')" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.9" } }, "nbformat": 4, "nbformat_minor": 0 }