{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# EXPLORATION OF TCIA QIN-HEADNECK DATA COLLECTION\n", "\n", "This is a Jupyter Notebook that demonstrates how Python can be used to explore the content of a publicly available DICOM dataset stored on The Cancer Imaging Archive (TCIA) and described here: https://wiki.cancerimagingarchive.net/display/Public/QIN-HEADNECK. \n", "\n", "This notebook was created as part of the preparations to the [DICOM4MICCAI tutorial](http://qiicr.org/dicom4miccai) at the [MICCAI 2017 conference](https://miccai2017.org) on Sept 10, 2017. \n", "\n", "The tutorial was organized by the [Quantitative Image Informatics for Cancer Research (QIICR)](http://qiicr.org) project funded by the [Informatics Technology for Cancer Research (ITCR)](https://itcr.nci.nih.gov/) program of the National Cancer Institute, award U24 CA180918.\n", "\n", "More pointers related to the material covered in this notebook:\n", "\n", "* DICOM4MICCAI gitbook https://qiicr.gitbooks.io/dicom4miccai-handson\n", "* dcmqi: conversion between DICOM and quantitative image analysis results https://github.com/QIICR/dcmqi\n", "* QIICR project GitHub organization: https://github.org/QIICR\n", "* QIICR home page: http://qiicr.org\n", "\n", "## Feedback\n", "\n", "Questions, comments, suggestions, corrections are welcomed!\n", "\n", "Please email `andrey.fedorov@gmail.com`, or [join the discussion on gitter]( https://gitter.im/QIICR/dcmqi)!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Table of Contents\n", "\n", "* Introduction and prerequisites\n", " * Dataset overview\n", " * Conversion of the DICOM dataset into tabular form\n", " * Python tools\n", "* Exploring the DICOM-stored measurements\n", " * Reading measurements from DICOM SR derived tables\n", " * Linking individual measurements with the images\n", "* Further reading" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction and prerequisites\n", "\n", "The goal of this tutorial is to demonstrate how Python can be used to work with the data produced by quantitative image analysis and stored using the DICOM format. \n", "\n", "You don't need to know much about DICOM to follow along, but you will need to learn more if you want to use DICOM in your work. You will find pointers in the Further reading section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DICOM Dataset overview\n", "\n", "The dataset used in this tutorial is discussed in detail in this publication:\n", "\n", "> Fedorov A., Clunie D., Ulrich E., Bauer C., Wahle A., Brown B., Onken M., Riesmeier J., Pieper S., Kikinis R., Buatti J., Beichel RR. _DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research_. PeerJ 4:e2057, 2016. DOI: [10.7717/peerj.2057](https://dx.doi.org/10.7717/peerj.2057)\n", "\n", "Here is a bird's eye view of the QIN-HEADNECK dataset: \n", "* 156 subjects with head and neck cancer\n", "* each subject had one or more PET/CT study (each study is expected to include a CT and a PET DICOM imaging series) for disease staging and treatment response assessment\n", "* images for a subset of 59 subjects were analyzed as follows:\n", " * primary tumor and the involved lymph nodes were segmented by each of the two readers, on two occasions, using [3D Slicer](http://slicer.org) both manually and using an interactive automated segmentation tool described in [(Beichel et al. 
2016)](http://onlinelibrary.wiley.com/doi/10.1118/1.4948679/full)\n", " * the following reference regions used for PET normalization were segmented using automatic tools: cerebellum, liver and aortic arch\n", " * all segmentations were saved as DICOM Segmentation objects (DICOM SEG)\n", " * all PET images were normalized by Standardized Uptake Value (SUV) body weight\n", " * quantitative measurements were calculated from the SUV-normalized PET images for all the regions defined by the segmentations; the SUV normalization factor for each DICOM series was saved into a DICOM Real-World Value Mapping object (DICOM RWVM)\n", " * all resulting measurements were saved as DICOM Structured Report objects following [DICOM SR Template 1500](http://dicom.nema.org/medical/dicom/current/output/chtml/part16/chapter_A.html#sect_TID_1500)\n", " \n", "![](assets/headneck-diagram.jpg)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conversion of the DICOM dataset into tabular form\n", "\n", "The DICOM dataset was converted into a collection of tables using this converter script: https://github.com/QIICR/dcm2tables. The script extracts data elements from the DICOM files and stores them as a collection of tab-delimited text files that follow [this schema](https://app.quickdatabasediagrams.com/#/schema/_71V1H1AXEqqKWDnvx4VXw).\n", "\n", "You can download the collection of the extracted tables here: https://github.com/fedorov/dicom4miccai-handson/releases/download/miccai2017/QIN-HEADNECK-Tables.tgz. Uncompress the file, note the location of the resulting directory, and set the value of the variable below to that location (a sketch of doing the download and extraction in Python follows the cell below)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "tablesPath = '/home/jovyan/data/QIN-HEADNECK-Tables'\n", "# set this to your location of the tables if running locally\n", "#tablesPath = '/Users/fedorov/github/dcm2tables/Tables'" ] },
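{ "cell_type": "markdown", "metadata": {}, "source": [ "If you prefer to fetch and unpack the archive programmatically, here is a minimal sketch using only the Python standard library (the URL is the release link above; the destination is the current working directory, so adjust it and `tablesPath` above to match):\n", "\n", "```python\n", "import tarfile\n", "import urllib.request\n", "\n", "url = 'https://github.com/fedorov/dicom4miccai-handson/releases/download/miccai2017/QIN-HEADNECK-Tables.tgz'\n", "archive = 'QIN-HEADNECK-Tables.tgz'\n", "\n", "urllib.request.urlretrieve(url, archive)  # download the release tarball\n", "with tarfile.open(archive, 'r:gz') as tar:\n", "    tar.extractall('.')  # creates the QIN-HEADNECK-Tables directory\n", "```" ] },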
\n", " \n", " Loading BokehJS ...\n", "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/javascript": [ "\n", "(function(global) {\n", " function now() {\n", " return new Date();\n", " }\n", "\n", " var force = true;\n", "\n", " if (typeof (window._bokeh_onload_callbacks) === \"undefined\" || force === true) {\n", " window._bokeh_onload_callbacks = [];\n", " window._bokeh_is_loading = undefined;\n", " }\n", "\n", "\n", " \n", " if (typeof (window._bokeh_timeout) === \"undefined\" || force === true) {\n", " window._bokeh_timeout = Date.now() + 5000;\n", " window._bokeh_failed_load = false;\n", " }\n", "\n", " var NB_LOAD_WARNING = {'data': {'text/html':\n", " \"
\\n\"+\n", " \"

\\n\"+\n", " \"BokehJS does not appear to have successfully loaded. If loading BokehJS from CDN, this \\n\"+\n", " \"may be due to a slow or bad network connection. Possible fixes:\\n\"+\n", " \"

\\n\"+\n", " \"\\n\"+\n", " \"\\n\"+\n", " \"from bokeh.resources import INLINE\\n\"+\n", " \"output_notebook(resources=INLINE)\\n\"+\n", " \"\\n\"+\n", " \"
\"}};\n", "\n", " function display_loaded() {\n", " if (window.Bokeh !== undefined) {\n", " var el = document.getElementById(\"8e5a6245-ac48-402b-8d1a-148aa93f0318\");\n", " el.textContent = \"BokehJS \" + Bokeh.version + \" successfully loaded.\";\n", " } else if (Date.now() < window._bokeh_timeout) {\n", " setTimeout(display_loaded, 100)\n", " }\n", " }\n", "\n", " function run_callbacks() {\n", " try {\n", " window._bokeh_onload_callbacks.forEach(function(callback) { callback() });\n", " }\n", " finally {\n", " delete window._bokeh_onload_callbacks\n", " }\n", " console.info(\"Bokeh: all callbacks have finished\");\n", " }\n", "\n", " function load_libs(js_urls, callback) {\n", " window._bokeh_onload_callbacks.push(callback);\n", " if (window._bokeh_is_loading > 0) {\n", " console.log(\"Bokeh: BokehJS is being loaded, scheduling callback at\", now());\n", " return null;\n", " }\n", " if (js_urls == null || js_urls.length === 0) {\n", " run_callbacks();\n", " return null;\n", " }\n", " console.log(\"Bokeh: BokehJS not loaded, scheduling load and callback at\", now());\n", " window._bokeh_is_loading = js_urls.length;\n", " for (var i = 0; i < js_urls.length; i++) {\n", " var url = js_urls[i];\n", " var s = document.createElement('script');\n", " s.src = url;\n", " s.async = false;\n", " s.onreadystatechange = s.onload = function() {\n", " window._bokeh_is_loading--;\n", " if (window._bokeh_is_loading === 0) {\n", " console.log(\"Bokeh: all BokehJS libraries loaded\");\n", " run_callbacks()\n", " }\n", " };\n", " s.onerror = function() {\n", " console.warn(\"failed to load library \" + url);\n", " };\n", " console.log(\"Bokeh: injecting script tag for BokehJS library: \", url);\n", " document.getElementsByTagName(\"head\")[0].appendChild(s);\n", " }\n", " };var element = document.getElementById(\"8e5a6245-ac48-402b-8d1a-148aa93f0318\");\n", " if (element == null) {\n", " console.log(\"Bokeh: ERROR: autoload.js configured with elementid '8e5a6245-ac48-402b-8d1a-148aa93f0318' but no matching script tag was found. 
\")\n", " return false;\n", " }\n", "\n", " var js_urls = [\"https://cdn.pydata.org/bokeh/release/bokeh-0.12.6.min.js\", \"https://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.6.min.js\"];\n", "\n", " var inline_js = [\n", " function(Bokeh) {\n", " Bokeh.set_log_level(\"info\");\n", " },\n", " \n", " function(Bokeh) {\n", " \n", " },\n", " \n", " function(Bokeh) {\n", " \n", " document.getElementById(\"8e5a6245-ac48-402b-8d1a-148aa93f0318\").textContent = \"BokehJS is loading...\";\n", " },\n", " function(Bokeh) {\n", " console.log(\"Bokeh: injecting CSS: https://cdn.pydata.org/bokeh/release/bokeh-0.12.6.min.css\");\n", " Bokeh.embed.inject_css(\"https://cdn.pydata.org/bokeh/release/bokeh-0.12.6.min.css\");\n", " console.log(\"Bokeh: injecting CSS: https://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.6.min.css\");\n", " Bokeh.embed.inject_css(\"https://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.6.min.css\");\n", " }\n", " ];\n", "\n", " function run_inline_js() {\n", " \n", " if ((window.Bokeh !== undefined) || (force === true)) {\n", " for (var i = 0; i < inline_js.length; i++) {\n", " inline_js[i](window.Bokeh);\n", " }if (force === true) {\n", " display_loaded();\n", " }} else if (Date.now() < window._bokeh_timeout) {\n", " setTimeout(run_inline_js, 100);\n", " } else if (!window._bokeh_failed_load) {\n", " console.log(\"Bokeh: BokehJS failed to load within specified timeout.\");\n", " window._bokeh_failed_load = true;\n", " } else if (force !== true) {\n", " var cell = $(document.getElementById(\"8e5a6245-ac48-402b-8d1a-148aa93f0318\")).parents('.cell').data().cell;\n", " cell.output_area.append_execute_result(NB_LOAD_WARNING)\n", " }\n", "\n", " }\n", "\n", " if (window._bokeh_is_loading === 0) {\n", " console.log(\"Bokeh: BokehJS loaded, going straight to plotting\");\n", " run_inline_js();\n", " } else {\n", " load_libs(js_urls, function() {\n", " console.log(\"Bokeh: BokehJS plotting callback run at\", now());\n", " run_inline_js();\n", " });\n", " }\n", "}(this));" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "\n", "import seaborn as sns\n", "\n", "from bokeh.models import ColumnDataSource, OpenURL, TapTool\n", "from bokeh.plotting import figure, output_file, show\n", "from bokeh.io import output_notebook\n", "from bokeh.colors import RGB\n", "\n", "output_notebook()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exploring the DICOM-stored measurements\n", "\n", "In this section we will explore the segmentation-derived measurements originally stored in DICOM SR documents. Specifically, we will create an interactive plot that summarizes the variability of segmentation across different users, sessions and segmentation tools involved. \n", "\n", "First, we will load various tables that will be needed for this task. You can always check out the schema of the tables extracted from the DICOM dataset at [this link](https://app.quickdatabasediagrams.com/#/schema/_71V1H1AXEqqKWDnvx4VXw).\n", "\n", "The flowchart below summarizes the sequence of steps and the tools involved in the processing throughout this tutorial.\n", "\n", "![](assets/processing-flowchart.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparing measurements from DICOM SR derived tables\n", "\n", "In this specific dataset, all SR documents correspond to the segmentation-based measurement reports. 
To be more specific, all of these SR documents follow the same Structured Reporting template, TID 1500. The overall relationship between segmentations and the content of the SR documents is illustrated in the figure below.\n", "\n", "Each segmentation identifies a _finding_, which can be a primary neoplasm, a secondary neoplasm, or one of the reference regions. For each segmentation of the neoplasm done using each combination of `{User, Segmentation tool, Segmentation session}`, there is an SR document containing the measurements extracted from these segmentations. \n", "\n", "Each SR document contains one or more measurement _groups_: measurements are organized hierarchically, such that measurements derived from the segmentation of a single finding are located together, and the information common across all measurements in a group is defined once at the group level.\n", "\n", "![](assets/measurements-org.jpg)\n", "\n", "The conversion script generated a separate table, **`SR1500_MeasurementGroups`**, for the measurement groups, where each row corresponds to a single measurement group." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['DeviceObserverName', 'FindingSite_CodeMeaning',\n", " 'FindingSite_CodeValue', 'FindingSite_CodingSchemeDesignator',\n", " 'Finding_CodeMeaning', 'Finding_CodeValue',\n", " 'Finding_CodingSchemeDesignator', 'ObserverType', 'PersonObserverName',\n", " 'SOPInstanceUID', 'TrackingIdentifier', 'TrackingUniqueIdentifier',\n", " 'activitySession', 'measurementMethod_CodeMeaning',\n", " 'measurementMethod_CodeValue',\n", " 'measurementMethod_CodingSchemeDesignator', 'timePoint'],\n", " dtype='object')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "SR1500_MeasurementGroups = pd.read_csv(tablesPath+'/SR1500_MeasurementGroups.tsv', sep='\\t', low_memory=False)\n", "SR1500_MeasurementGroups.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "... and another table for storing the individual measurements: **`SR1500_Measurements`**, one row per measurement." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['SOPInstanceUID', 'TrackingUniqueIdentifier',\n", " 'derivationModifier_CodeMeaning', 'derivationModifier_CodeValue',\n", " 'derivationModifier_CodingSchemeDesignator', 'quantity_CodeMeaning',\n", " 'quantity_CodeValue', 'quantity_CodingSchemeDesignator',\n", " 'units_CodeMeaning', 'units_CodeValue', 'units_CodingSchemeDesignator',\n", " 'value'],\n", " dtype='object')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "SR1500_Measurements = pd.read_csv(tablesPath+'/SR1500_Measurements.tsv', sep='\\t', low_memory=False)\n", "SR1500_Measurements.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For our task, it is important to associate the group-level properties of the measurements (e.g., `activitySession` and `PersonObserverName`) with the individual measurements. We can accomplish this with a pandas `merge` operation, utilizing the combination of `SOPInstanceUID` and `TrackingUniqueIdentifier` as the merge keys."
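, "\n", "As a quick sanity check before merging (a sketch, assuming the two tables loaded above), we can verify that the combination of these two columns indeed identifies measurement groups uniquely, so the many-to-one merge below cannot duplicate measurement rows:\n", "\n", "```python\n", "keys = ['SOPInstanceUID', 'TrackingUniqueIdentifier']\n", "# each (SOPInstanceUID, TrackingUniqueIdentifier) pair should identify exactly one group\n", "assert not SR1500_MeasurementGroups.duplicated(subset=keys).any()\n", "```"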
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60531, 12)" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "SR1500_Measurements.shape" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60531, 27)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged = pd.merge(SR1500_Measurements,SR1500_MeasurementGroups,on=[\"SOPInstanceUID\",\"TrackingUniqueIdentifier\"])\n", "Measurements_merged.shape" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['SOPInstanceUID', 'TrackingUniqueIdentifier',\n", " 'derivationModifier_CodeMeaning', 'derivationModifier_CodeValue',\n", " 'derivationModifier_CodingSchemeDesignator', 'quantity_CodeMeaning',\n", " 'quantity_CodeValue', 'quantity_CodingSchemeDesignator',\n", " 'units_CodeMeaning', 'units_CodeValue', 'units_CodingSchemeDesignator',\n", " 'value', 'DeviceObserverName', 'FindingSite_CodeMeaning',\n", " 'FindingSite_CodeValue', 'FindingSite_CodingSchemeDesignator',\n", " 'Finding_CodeMeaning', 'Finding_CodeValue',\n", " 'Finding_CodingSchemeDesignator', 'ObserverType', 'PersonObserverName',\n", " 'TrackingIdentifier', 'activitySession',\n", " 'measurementMethod_CodeMeaning', 'measurementMethod_CodeValue',\n", " 'measurementMethod_CodingSchemeDesignator', 'timePoint'],\n", " dtype='object')" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are different types of measurements, so let's first see what are they. Each measurement is defined by a combination of code tuples (read more about coding measurement quantities on p.18 of [this preprint article](https://peerj.com/preprints/1541/)). We can look at all combinations of these codes for the comprehensive list of measurements available." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(['SUVbw_Mean', 'SUVbw_Minimum', 'SUVbw_Maximum', 'Volume_nan',\n", " 'SUVbw_Standard Deviation', 'SUVbw_25th Percentile Value',\n", " 'SUVbw_Median', 'SUVbw_75th Percentile Value',\n", " 'SUVbw_Peak Value Within ROI', 'Total Lesion Glycolysis_nan',\n", " 'SUVbw_Upper Adjacent Value', 'SUVbw_RMS',\n", " 'Glycolysis Within First Quarter of Intensity Range_nan',\n", " 'Glycolysis Within Second Quarter of Intensity Range_nan',\n", " 'Glycolysis Within Third Quarter of Intensity Range_nan',\n", " 'Glycolysis Within Fourth Quarter of Intensity Range_nan',\n", " 'Percent Within First Quarter of Intensity Range_nan',\n", " 'Percent Within Second Quarter of Intensity Range_nan',\n", " 'Percent Within Third Quarter of Intensity Range_nan',\n", " 'Percent Within Fourth Quarter of Intensity Range_nan',\n", " 'Standardized Added Metabolic Activity_nan',\n", " 'Standardized Added Metabolic Activity Background_nan'], dtype=object)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "(Measurements_merged[\"quantity_CodeMeaning\"].map(str)+\"_\"+Measurements_merged[\"derivationModifier_CodeMeaning\"].map(str)).unique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start with the basics and look at the variability of the primary lesion volume measurement! 
\n", "\n", "We know that there multiple lesions for many of the subjects. All possible values for the finding are the following, and let's first consider only the primary lesion." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(['Reference Region', 'Neoplasm, Primary', 'Neoplasm, Secondary'], dtype=object)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged[\"Finding_CodeMeaning\"].unique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adding Composite Context" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are almost ready to make the plot, but we are missing information about the patient for the individual measurements! This information is available in the **`CompositeContext`** table, which contains attributes related to the patient, study and series, and which should be present in every (valid!) DICOM file. This table contains one row per DICOM instance. Let's load and merge it with the individual measurements!" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['BodyPartExamined', 'ManufacturerModelName', 'Modality', 'PatientAge',\n", " 'PatientID', 'PatientName', 'PatientSex', 'PatientWeight',\n", " 'SOPClassUID', 'SOPInstanceUID', 'SeriesDate', 'SeriesDescription',\n", " 'SeriesInstanceUID', 'SeriesTime', 'SoftwareVersions', 'StudyDate',\n", " 'StudyDescription', 'StudyInstanceUID', 'StudyTime'],\n", " dtype='object')" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "CompositeContext=pd.read_csv(tablesPath+'/CompositeContext.tsv', sep='\\t',low_memory=False)\n", "CompositeContext.columns" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60531, 27)" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged.shape" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60531, 45)" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged = pd.merge(Measurements_merged, CompositeContext, on=\"SOPInstanceUID\")\n", "Measurements_merged.shape" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['SOPInstanceUID', 'TrackingUniqueIdentifier',\n", " 'derivationModifier_CodeMeaning', 'derivationModifier_CodeValue',\n", " 'derivationModifier_CodingSchemeDesignator', 'quantity_CodeMeaning',\n", " 'quantity_CodeValue', 'quantity_CodingSchemeDesignator',\n", " 'units_CodeMeaning', 'units_CodeValue', 'units_CodingSchemeDesignator',\n", " 'value', 'DeviceObserverName', 'FindingSite_CodeMeaning',\n", " 'FindingSite_CodeValue', 'FindingSite_CodingSchemeDesignator',\n", " 'Finding_CodeMeaning', 'Finding_CodeValue',\n", " 'Finding_CodingSchemeDesignator', 'ObserverType', 'PersonObserverName',\n", " 'TrackingIdentifier', 'activitySession',\n", " 'measurementMethod_CodeMeaning', 'measurementMethod_CodeValue',\n", " 'measurementMethod_CodingSchemeDesignator', 'timePoint',\n", " 'BodyPartExamined', 'ManufacturerModelName', 'Modality', 'PatientAge',\n", " 'PatientID', 'PatientName', 'PatientSex', 'PatientWeight',\n", " 'SOPClassUID', 'SeriesDate', 'SeriesDescription', 'SeriesInstanceUID',\n", " 'SeriesTime', 'SoftwareVersions', 'StudyDate', 
'StudyDescription',\n", " 'StudyInstanceUID', 'StudyTime'],\n", " dtype='object')" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are finally ready to make the plot that summarizes the variability of segmentation across different users and sessions! Let's recap: we will prepare the plot based on the contents of the **`Measurements_merged`** pandas data frame, and will use the following items (all of these details of the dataset are explained in the accompanying [manuscript](https://peerj.com/articles/2057/) that we mentioned in the opening of this notebook):\n", "\n", "* `PatientID` associates the measurements with the subjects in the dataset\n", "* the actual measurements are stored in the `value` column\n", "* we will consider the measurements of the lesion volume; these correspond to the rows that have the value `Volume` in the `quantity_CodeMeaning` column\n", "* we will consider the measurements corresponding to the primary lesion; those correspond to the rows that have the value `Neoplasm, Primary` in the `Finding_CodeMeaning` column\n", "* the identifier of the user performing the segmentation is in the `PersonObserverName` column\n", "* the identifier of the segmentation session is in the `activitySession` column\n", "\n", "All we have to do now is subset the data frame and make the plot!" ] },
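{ "cell_type": "markdown", "metadata": {}, "source": [ "Before the interactive plot, here is a quick numeric preview of the same variability (a sketch, using the column names introduced above): it summarizes, per patient, the spread of the primary tumor volume across readers, sessions and tools.\n", "\n", "```python\n", "primary = Measurements_merged[\n", "    (Measurements_merged['Finding_CodeMeaning'] == 'Neoplasm, Primary') &\n", "    (Measurements_merged['quantity_CodeMeaning'] == 'Volume')]\n", "spread = primary.groupby('PatientID')['value'].agg(['min', 'max', 'mean'])\n", "spread['range'] = spread['max'] - spread['min']\n", "print(spread.sort_values('range', ascending=False).head())  # patients with the largest disagreement\n", "```" ] },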
\n", " \n", " Loading BokehJS ...\n", "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/javascript": [ "\n", "(function(global) {\n", " function now() {\n", " return new Date();\n", " }\n", "\n", " var force = true;\n", "\n", " if (typeof (window._bokeh_onload_callbacks) === \"undefined\" || force === true) {\n", " window._bokeh_onload_callbacks = [];\n", " window._bokeh_is_loading = undefined;\n", " }\n", "\n", "\n", " \n", " if (typeof (window._bokeh_timeout) === \"undefined\" || force === true) {\n", " window._bokeh_timeout = Date.now() + 5000;\n", " window._bokeh_failed_load = false;\n", " }\n", "\n", " var NB_LOAD_WARNING = {'data': {'text/html':\n", " \"
\\n\"+\n", " \"

\\n\"+\n", " \"BokehJS does not appear to have successfully loaded. If loading BokehJS from CDN, this \\n\"+\n", " \"may be due to a slow or bad network connection. Possible fixes:\\n\"+\n", " \"

\\n\"+\n", " \"\\n\"+\n", " \"\\n\"+\n", " \"from bokeh.resources import INLINE\\n\"+\n", " \"output_notebook(resources=INLINE)\\n\"+\n", " \"\\n\"+\n", " \"
\"}};\n", "\n", " function display_loaded() {\n", " if (window.Bokeh !== undefined) {\n", " var el = document.getElementById(\"00c6aaa7-ec74-4436-9e34-167281a5e089\");\n", " el.textContent = \"BokehJS \" + Bokeh.version + \" successfully loaded.\";\n", " } else if (Date.now() < window._bokeh_timeout) {\n", " setTimeout(display_loaded, 100)\n", " }\n", " }\n", "\n", " function run_callbacks() {\n", " try {\n", " window._bokeh_onload_callbacks.forEach(function(callback) { callback() });\n", " }\n", " finally {\n", " delete window._bokeh_onload_callbacks\n", " }\n", " console.info(\"Bokeh: all callbacks have finished\");\n", " }\n", "\n", " function load_libs(js_urls, callback) {\n", " window._bokeh_onload_callbacks.push(callback);\n", " if (window._bokeh_is_loading > 0) {\n", " console.log(\"Bokeh: BokehJS is being loaded, scheduling callback at\", now());\n", " return null;\n", " }\n", " if (js_urls == null || js_urls.length === 0) {\n", " run_callbacks();\n", " return null;\n", " }\n", " console.log(\"Bokeh: BokehJS not loaded, scheduling load and callback at\", now());\n", " window._bokeh_is_loading = js_urls.length;\n", " for (var i = 0; i < js_urls.length; i++) {\n", " var url = js_urls[i];\n", " var s = document.createElement('script');\n", " s.src = url;\n", " s.async = false;\n", " s.onreadystatechange = s.onload = function() {\n", " window._bokeh_is_loading--;\n", " if (window._bokeh_is_loading === 0) {\n", " console.log(\"Bokeh: all BokehJS libraries loaded\");\n", " run_callbacks()\n", " }\n", " };\n", " s.onerror = function() {\n", " console.warn(\"failed to load library \" + url);\n", " };\n", " console.log(\"Bokeh: injecting script tag for BokehJS library: \", url);\n", " document.getElementsByTagName(\"head\")[0].appendChild(s);\n", " }\n", " };var element = document.getElementById(\"00c6aaa7-ec74-4436-9e34-167281a5e089\");\n", " if (element == null) {\n", " console.log(\"Bokeh: ERROR: autoload.js configured with elementid '00c6aaa7-ec74-4436-9e34-167281a5e089' but no matching script tag was found. 
\")\n", " return false;\n", " }\n", "\n", " var js_urls = [\"https://cdn.pydata.org/bokeh/release/bokeh-0.12.6.min.js\", \"https://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.6.min.js\"];\n", "\n", " var inline_js = [\n", " function(Bokeh) {\n", " Bokeh.set_log_level(\"info\");\n", " },\n", " \n", " function(Bokeh) {\n", " \n", " },\n", " \n", " function(Bokeh) {\n", " \n", " document.getElementById(\"00c6aaa7-ec74-4436-9e34-167281a5e089\").textContent = \"BokehJS is loading...\";\n", " },\n", " function(Bokeh) {\n", " console.log(\"Bokeh: injecting CSS: https://cdn.pydata.org/bokeh/release/bokeh-0.12.6.min.css\");\n", " Bokeh.embed.inject_css(\"https://cdn.pydata.org/bokeh/release/bokeh-0.12.6.min.css\");\n", " console.log(\"Bokeh: injecting CSS: https://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.6.min.css\");\n", " Bokeh.embed.inject_css(\"https://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.6.min.css\");\n", " }\n", " ];\n", "\n", " function run_inline_js() {\n", " \n", " if ((window.Bokeh !== undefined) || (force === true)) {\n", " for (var i = 0; i < inline_js.length; i++) {\n", " inline_js[i](window.Bokeh);\n", " }if (force === true) {\n", " display_loaded();\n", " }} else if (Date.now() < window._bokeh_timeout) {\n", " setTimeout(run_inline_js, 100);\n", " } else if (!window._bokeh_failed_load) {\n", " console.log(\"Bokeh: BokehJS failed to load within specified timeout.\");\n", " window._bokeh_failed_load = true;\n", " } else if (force !== true) {\n", " var cell = $(document.getElementById(\"00c6aaa7-ec74-4436-9e34-167281a5e089\")).parents('.cell').data().cell;\n", " cell.output_area.append_execute_result(NB_LOAD_WARNING)\n", " }\n", "\n", " }\n", "\n", " if (window._bokeh_is_loading === 0) {\n", " console.log(\"Bokeh: BokehJS loaded, going straight to plotting\");\n", " run_inline_js();\n", " } else {\n", " load_libs(js_urls, function() {\n", " console.log(\"Bokeh: BokehJS plotting callback run at\", now());\n", " run_inline_js();\n", " });\n", " }\n", "}(this));" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Identifiers of the users: ['User1' 'User2' 'User3']\n", "Identifiers of the activity sessions: [1 2]\n" ] } ], "source": [ "from bokeh.models import ColumnDataSource, OpenURL, TapTool\n", "from bokeh.plotting import figure, output_file, show\n", "from bokeh.io import output_notebook\n", "from bokeh.colors import RGB\n", "\n", "from bokeh.models import HoverTool, PanTool, WheelZoomTool, BoxZoomTool, ResetTool, TapTool\n", "\n", "output_notebook()\n", "\n", "volume = []\n", "user = []\n", "method = []\n", "sesssion = []\n", "subject = []\n", "\n", "#SR_merged = pd.merge(SR_merged, segReferences)\n", "\n", "\n", "#subset = SR_merged[SR_merged[\"PersonObserverName\"]==\"User1\"]\n", "subset = Measurements_merged[Measurements_merged[\"Finding_CodeMeaning\"]==\"Neoplasm, Primary\"]\n", "subset = subset[subset[\"quantity_CodeMeaning\"]==\"Volume\"]\n", "\n", "print(\"Identifiers of the users: \"+str(subset[\"PersonObserverName\"].unique()))\n", "print(\"Identifiers of the activity sessions: \"+str(subset[\"activitySession\"].unique()))\n", "\n", "#subset = subset[subset[\"activitySession\"]==1]\n", "#subset = subset[subset[\"segmentationToolType\"]==\"SemiAuto\"]\n", "\n", "#subset.sort_values(\"value\", inplace=True)\n", "\n", "#subset=subset[subset[\"PatientID\"]==\"QIN-HEADNECK-01-0003\"]" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", 
"\n", "
\n", "
\n", "
\n", "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "volumes = subset[\"value\"].values\n", "observers = subset[\"PersonObserverName\"].values\n", "subjects = subset[\"PatientID\"].values\n", "\n", "#subset[\"segmentationToolType\"].unique()\n", "\n", "colormap = {'User1': 'red', 'User2': 'green', 'User3': 'blue'}\n", "colors = [colormap[x] for x in subset['PersonObserverName'].tolist()]\n", "\n", "source = ColumnDataSource(data=dict(\n", " x=volumes,\n", " y=subjects,\n", " color=colors,\n", " labels = subset[\"PersonObserverName\"].tolist()\n", " ))\n", "\n", "hover = HoverTool(tooltips=[\n", " (\"(Volume, Subject)\", \"($x, $y)\")\n", "])\n", "\n", "wZoom = WheelZoomTool()\n", "bZoom = BoxZoomTool()\n", "reset = ResetTool()\n", "pan = PanTool()\n", "\n", "p = figure(x_range=[np.min(volumes),np.max(volumes)], y_range=subjects.tolist(), \\\n", " tools = [hover, wZoom, bZoom, reset, pan], \\\n", " title=\"Variability of primary neoplasm volume by reader\")\n", "p.yaxis.axis_label = \"PatientID\"\n", "p.xaxis.axis_label = subset[\"quantity_CodeMeaning\"].values[0]+', '+subset['units_CodeMeaning'].values[0]\n", "\n", "p.circle('x','y',color='color',source=source, legend='labels')\n", "\n", "p.legend.location = \"bottom_right\"\n", "\n", "show(p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Mouse over the dots in the scatter plot to see the PatientID and the volume measurement for the specific segmentation. Check out the tools on the right hand side of the plot.\n", "\n", "Note that there are 4, not 2, circles for each reader. The reason is that each lesion was segmented on 2 occasions using both manual and automated segmentation tools. \n", "\n", "You can figure the type of tool used and highlight the segmentations produced by the automated tools as a challenge - it is all in DICOM ;-)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Linking individual measurements with the images\n", "\n", "One of the cool things about DICOM is that it the storage objects are inherently cross-linked with each other. There are attributes that allow us to keep track of the relationships of the objects in the study and series, identification of the objects that belong to the same patient, dates that allow to track evolution of the disease over time.\n", "\n", "Derived DICOM objects, such as segmentations and measurements, also have the capability to store pointers to the images that were used to derive those segmentations/measurements. Given the measurements SR document, we can trace the related evidence via the unique identifiers it contains!\n", "\n", "For the dataset in hand, information about the references was stored by the `dcm2tables` conversion script into the **`References`** table." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['ReferencedSOPClassUID', 'ReferencedSOPInstanceUID', 'SOPInstanceUID',\n", " 'SeriesInstanceUID'],\n", " dtype='object')" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "References=pd.read_csv(tablesPath+'/References.tsv', sep='\\t', low_memory=False)\n", "References.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This table allows to identify all DICOM instances (`ReferencedSOPInstanceUID`) for a given instance (`SOPInstanceUID`) and its class UID, which uniquely identifies the type of DICOM object. 
"\n", "Using these references, we can find all DICOM Segmentations referenced from the DICOM Structured Report containing the individual measurements, given the DICOM Structured Report `SOPInstanceUID`.\n", "\n", "**NOTE**: these references are not always mandated! You may not find them in the DICOM objects you encounter \"in the wild\"! :-(" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# 1.2.840.10008.5.1.4.1.1.66.4 is the SOPClassUID corresponding to the DICOM Segmentation image object\n", "segReferences = References[References[\"ReferencedSOPClassUID\"]=='1.2.840.10008.5.1.4.1.1.66.4']\n", "segReferences = segReferences[[\"SOPInstanceUID\",\"SeriesInstanceUID\"]].rename(columns={\"SeriesInstanceUID\":\"ReferencedSeriesInstanceUID\"})" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60531, 45)" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# I am not a pandas expert, so just to be safe, I check that the dimensions of the data frame \n", "# do not change after the merge operation ...\n", "Measurements_merged.shape" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(60531, 46)" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Measurements_merged = pd.merge(Measurements_merged, segReferences)\n", "Measurements_merged.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, for each measurement, we have a pointer to the `SeriesInstanceUID` of the segmentation used to calculate it!\n", "\n", "To complete the integration, we will use two more magic ingredients:\n", "\n", "1. A web application based on the open source zero-footprint Cornerstone web viewer developed by the [OHIF](https://github.com/ohif) project: https://pieper.github.io/dcmjs/examples/qiicr/index-dev.html. This application allows browsing the content of the QIN-HEADNECK dataset, and rendering the PET images with the segmentation overlay. A version of this application takes the `SeriesInstanceUID` of the DICOM Segmentation, dereferences it to get the corresponding PET series, downloads everything to your browser, and does the rendering. Kudos to [Steve Pieper](https://github.com/pieper), [Erik Ziegler](https://github.com/swederik), [OHIF](https://github.com/ohif) and the [Cornerstone](https://github.com/chafey/cornerstone) team for developing this app!\n", "\n", "2. The \"tap tool\" provided by Bokeh out of the box, which can be configured to redirect clicks on the plot to open a URL: http://bokeh.pydata.org/en/latest/docs/user_guide/interaction/callbacks.html#openurl. I am very impressed by Bokeh!" ] },
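{ "cell_type": "markdown", "metadata": {}, "source": [ "To see how the pieces fit together, here is a sketch that builds such a viewer link for a single measurement row (the URL pattern matches the one used in the cell below):\n", "\n", "```python\n", "base = 'http://pieper.github.com/dcmjs/examples/qiicr/?seriesUID='\n", "row = Measurements_merged.iloc[0]\n", "# paste the printed URL into a browser to view the referenced segmentation\n", "print(base + str(row['ReferencedSeriesInstanceUID']))\n", "```" ] },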
\n", "
\n", "
\n", "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "subset = Measurements_merged[Measurements_merged[\"Finding_CodeMeaning\"]==\"Neoplasm, Primary\"]\n", "subset = subset[subset[\"quantity_CodeMeaning\"]==\"Volume\"]\n", "\n", "volumes = subset[\"value\"].values\n", "observers = subset[\"PersonObserverName\"].values\n", "subjects = subset[\"PatientID\"].values\n", "\n", "colormap = {'User1': 'red', 'User2': 'green', 'User3': 'blue'}\n", "colors = [colormap[x] for x in subset['PersonObserverName'].tolist()]\n", "\n", "source = ColumnDataSource(data=dict(\n", " x=volumes,\n", " y=subjects,\n", " color=colors,\n", " labels = subset[\"PersonObserverName\"].tolist(),\n", " seriesUID=subset[\"ReferencedSeriesInstanceUID\"]\n", " ))\n", "\n", "hover = HoverTool(tooltips=[\n", " (\"(Volume, Subject)\", \"($x, $y)\")\n", "])\n", "\n", "wZoom = WheelZoomTool()\n", "bZoom = BoxZoomTool()\n", "reset = ResetTool()\n", "tap = TapTool()\n", "pan = PanTool()\n", "\n", "p = figure(x_range=[np.min(volumes),np.max(volumes)], \\\n", " y_range=subjects.tolist(), \\\n", " tools = [hover, wZoom, bZoom, reset, tap, pan],\n", " title=\"Variability of primary neoplasm volume by reader\")\n", "\n", "p.circle('x','y',color='color',source=source, legend='labels')\n", "\n", "url = \"http://pieper.github.com/dcmjs/examples/qiicr/?seriesUID=@seriesUID\"\n", "taptool = p.select(type=TapTool)\n", "taptool.callback = OpenURL(url=url)\n", "\n", "p.xaxis.axis_label = subset[\"quantity_CodeMeaning\"].values[0]+', '+subset['units_CodeMeaning'].values[0]\n", "p.legend.location = \"bottom_right\"\n", "\n", "show(p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As before you can mouse over the individual points of the plot, but you can also click on any of the dots, which will open a new browser window, and will show the overlay of the segmented structures over the PET image for the specific segmentation you clicked!\n", "\n", "You can check how the segmentation you did in the first part of the tutorial for subject QIN-HEADNECK-01-0024 agrees with those done by the domain experts!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Further reading\n", "\n", "* Excellent introductory book about DICOM: Pianykh, Oleg S. _Digital imaging and communications in medicine (DICOM): a practical introduction and survival guide_. Springer Science & Business Media, 2009.\n", "* DICOM4MICCAI tutorial at MICCAI 2017 where this notebook was first presented: http://qiicr.org/dicom4miccai\n", "* C++ library and command line conversion tools between research formats and DICOM: https://github.com/qiicr/dcmqi\n", "* TCIA QIN-HEADNECK collection explored in this notebook: https://wiki.cancerimagingarchive.net/display/Public/QIN-HEADNECK\n", "* Manuscript covering everything DICOM about QIN-HEADNECK: Fedorov A., Clunie D., Ulrich E., Bauer C., Wahle A., Brown B., Onken M., Riesmeier J., Pieper S., Kikinis R., Buatti J., Beichel RR. _DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research_. PeerJ 4:e2057, 2016. 
DOI: [10.7717/peerj.2057](https://dx.doi.org/10.7717/peerj.2057)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.3" } }, "nbformat": 4, "nbformat_minor": 2 }