{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "%config InlineBackend.figure_format = 'retina'\n", "%gui qt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "import time\n", "time.sleep(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from matplotlib import pyplot as plt\n", "from scipy import ndimage as ndi\n", "from skimage import (exposure, feature, filters, io, measure,\n", " morphology, restoration, segmentation, transform,\n", " util)\n", "import napari" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to three-dimensional image processing\n", "\n", "Images are represented as `numpy` arrays. A single-channel, or grayscale, image is a 2D matrix of pixel intensities of shape `(row, column)`. We can construct a 3D volume as a series of 2D `planes`, giving 3D images the shape `(plane, row, column)`. Multichannel data adds a `channel` dimension in the final position containing color information. \n", "\n", "These conventions are summarized in the table below:\n", "\n", "\n", "|Image type|Coordinates|\n", "|:---|:---|\n", "|2D grayscale|(row, column)|\n", "|2D multichannel|(row, column, channel)|\n", "|3D grayscale|(plane, row, column)|\n", "|3D multichannel|(plane, row, column, channel)|\n", "\n", "Some 3D images are constructed with equal resolution in each dimension; e.g., a computer generated rendering of a sphere. Most experimental data captures one dimension at a lower resolution than the other two; e.g., photographing thin slices to approximate a 3D structure as a stack of 2D images. 
The distance between pixels in each dimension, called `spacing`, is encoded as a tuple. It is accepted as a parameter by some `skimage` functions and can be used to adjust the contribution of each dimension to a filter.\n", "\n", "## Input/Output and display\n", "\n", "Three-dimensional data can be loaded with `skimage.io.imread`. The data for this tutorial was provided by the Allen Institute for Cell Science. It has been downsampled by a factor of 4 in the `row` and `column` dimensions to reduce computational time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nuclei = io.imread('../images/cells.tif')\n", "membranes = io.imread('../images/cells_membrane.tif')\n", "\n", "print(\"shape: {}\".format(nuclei.shape))\n", "print(\"dtype: {}\".format(nuclei.dtype))\n", "print(\"range: ({}, {})\".format(np.min(nuclei), np.max(nuclei)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The distance between pixels was reported by the microscope used to image the cells. This `spacing` information will be used to adjust contributions to filters and helps decide when to apply operations planewise. We've chosen to normalize it to `1.0` in the `row` and `column` dimensions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The microscope reports the following spacing (in µm)\n", "original_spacing = np.array([0.2900000, 0.0650000, 0.0650000])\n", "\n", "# We downsampled each slice 4x to make the data smaller\n", "rescaled_spacing = original_spacing * [1, 4, 4]\n", "\n", "# Normalize the spacing so that pixels are a distance of 1 apart\n", "spacing = rescaled_spacing / rescaled_spacing[2]\n", "\n", "print(f'microscope spacing: {original_spacing}')\n", "print(f'after rescaling images: {rescaled_spacing}')\n", "print(f'normalized spacing: {spacing}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can view the 3D image using napari."
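] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before opening the viewer, here is a quick sanity check of the shape conventions in the table above, using small dummy arrays (the sizes here are arbitrary, chosen only for illustration):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Dummy arrays illustrating the shape conventions; sizes are arbitrary\n", "grayscale_2d = np.zeros((64, 64))           # (row, column)\n", "multichannel_2d = np.zeros((64, 64, 3))     # (row, column, channel)\n", "grayscale_3d = np.zeros((8, 64, 64))        # (plane, row, column)\n", "multichannel_3d = np.zeros((8, 64, 64, 3))  # (plane, row, column, channel)\n", "\n", "print(grayscale_3d.shape, multichannel_3d.shape)"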
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "viewer = napari.view_image(nuclei, contrast_limits=[0, 1],\n", " scale=spacing)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "from napari.utils.notebook_display import nbscreenshot\n", "\n", "center = nuclei.shape[0] // 2\n", "\n", "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exposure" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.exposure` contains a number of functions for adjusting image contrast. These functions operate on pixel values. Generally, image dimensionality or pixel spacing does not need to be considered.\n", "\n", "[Gamma correction](https://en.wikipedia.org/wiki/Gamma_correction), also known as Power Law Transform, brightens or darkens an image. The function $O = I^\\gamma$ is applied to each pixel in the image. A `gamma < 1` will brighten an image, while a `gamma > 1` will darken an image.\n", "\n", "napari has a built-in gamma correction slider for image layers. Try playing with the gamma slider to see its effect on the image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Helper function for plotting histograms.\n", "def plot_hist(ax, data, title=None):\n", " ax.hist(data.ravel(), bins=256)\n", " ax.ticklabel_format(axis=\"y\", style=\"scientific\", scilimits=(0, 0))\n", " \n", " if title:\n", " ax.set_title(title)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Histogram equalization](https://en.wikipedia.org/wiki/Histogram_equalization) improves contrast in an image by redistributing pixel intensities. The most common pixel intensities are spread out, allowing areas of lower local contrast to gain a higher contrast. This may enhance background noise." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "equalized = exposure.equalize_hist(nuclei)\n", "\n", "fig, ((a, b), (c, d)) = plt.subplots(nrows=2, ncols=2)\n", "\n", "plot_hist(a, nuclei, title=\"Original\")\n", "plot_hist(b, equalized, title=\"Histogram equalization\")\n", "\n", "cdf, bins = exposure.cumulative_distribution(nuclei.ravel())\n", "c.plot(bins, cdf, \"r\")\n", "c.set_title(\"Original CDF\")\n", "\n", "cdf, bins = exposure.cumulative_distribution(equalized.ravel())\n", "d.plot(bins, cdf, \"r\")\n", "d.set_title(\"Histogram equalization CDF\");\n", "\n", "fig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can look at the image in our napari viewer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "viewer.add_image(equalized, contrast_limits=[0, 1], name='histeq')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Most experimental images are affected by salt and pepper noise. A few bright artifacts can decrease the relative intensity of the pixels of interest. A simple way to improve contrast is to clip the pixel values on the lowest and highest extremes. Clipping the darkest and brightest 0.5% of pixels will increase the overall contrast of the image." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vmin, vmax = np.quantile(nuclei, q=(0.005, 0.995))\n", "\n", "stretched = exposure.rescale_intensity(\n", " nuclei, \n", " in_range=(vmin, vmax), \n", " out_range=np.float32\n", ")\n", "\n", "viewer.add_image(stretched, contrast_limits=[0, 1], name='stretched')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Edge detection\n", "\n", "[Edge detection](https://en.wikipedia.org/wiki/Edge_detection) highlights regions in the image where a sharp change in contrast occurs. The intensity of an edge corresponds to the steepness of the transition from one intensity to another. A gradual shift from bright to dark intensity results in a dim edge. An abrupt shift results in a bright edge.\n", "\n", "We saw the [Sobel operator](https://en.wikipedia.org/wiki/Sobel_operator) in the filters lesson. It is an edge detection algorithm that approximates the gradient of the image intensity, and is fast to compute. `skimage.filters.sobel` has not been adapted for 3D images, but it can be readily generalised (see the linked Wikipedia entry). Let's try it!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "viewer.close()\n", "del viewer" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "edges = filters.sobel(nuclei)\n", "\n", "viewer = napari.view_image(nuclei, blending='additive', colormap='green', name='nuclei')\n", "viewer.add_image(edges, blending='additive', colormap='magenta', name='edges')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "denoised = ndi.median_filter(nuclei, size=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Thresholding](https://en.wikipedia.org/wiki/Thresholding_%28image_processing%29) is used to create binary images. A threshold value determines the intensity value separating foreground pixels from background pixels. Foregound pixels are pixels brighter than the threshold value, background pixels are darker. Thresholding is a form of image segmentation.\n", "\n", "Different thresholding algorithms produce different results. [Otsu's method](https://en.wikipedia.org/wiki/Otsu%27s_method) and Li's minimum cross entropy threshold are two common algorithms. Below, we use Li. You can use `skimage.filters.threshold_` to find different thresholding methods." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "li_thresholded = denoised > filters.threshold_li(denoised)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "viewer.add_image(li_thresholded, name='thresholded', opacity=0.3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see holes due to variations of the image intensity inside the nuclei. We can actually fill them with `scipy.ndimage.binary_fill_holes`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filled = ndi.binary_fill_holes(li_thresholded)\n", "\n", "viewer.add_image(filled, name='filled', opacity=0.3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Morphological operations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Mathematical morphology](https://en.wikipedia.org/wiki/Mathematical_morphology) operations and structuring elements are defined in `skimage.morphology`. Structuring elements are shapes which define areas over which an operation is applied. The response to the filter indicates how well the neighborhood corresponds to the structuring element's shape.\n", "\n", "There are a number of two and three dimensional structuring elements defined in `skimage.morphology`. Not all 2D structuring element have a 3D counterpart. The simplest and most commonly used structuring elements are the `disk`/`ball` and `square`/`cube`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Morphology operations can be chained together to denoise an image. 
For example, a `closing` applied to an `opening` can remove salt and pepper noise from an image." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Functions operating on [connected components](https://en.wikipedia.org/wiki/Connected_space) can remove small undesired elements while preserving larger shapes.\n", "\n", "`skimage.morphology.remove_small_holes` fills holes and `skimage.morphology.remove_small_objects` removes bright regions. Both functions accept a `min_size` parameter, which is the minimum size (in pixels) of accepted holes or objects. The `min_size` can be approximated by a cube." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "width = 20\n", "\n", "remove_holes = morphology.remove_small_holes(\n", " filled, \n", " area_threshold=width ** 3\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "width = 20\n", "\n", "remove_objects = morphology.remove_small_objects(\n", " remove_holes, \n", " min_size=width ** 3\n", ")\n", "\n", "viewer.add_image(remove_objects, name='cleaned', opacity=0.3);" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Segmentation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Image segmentation](https://en.wikipedia.org/wiki/Image_segmentation) partitions images into regions of interest. Interger labels are assigned to each region to distinguish regions of interest." 
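] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see how `skimage.measure.label` assigns integer labels, here is a toy example on a tiny 2D array (the values are made up purely for illustration):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Toy binary image with two separate foreground components\n", "toy = np.array([[0, 1, 0, 0],\n", "                [1, 1, 0, 0],\n", "                [0, 0, 0, 1]])\n", "\n", "# Each connected component receives its own integer label\n", "print(measure.label(toy))"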
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "labels = measure.label(remove_objects)\n", "\n", "viewer.add_labels(labels, name='labels')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Connected components of the binary image are assigned the same label via `skimage.measure.label`. Tightly packed cells connected in the binary image are assigned the same label." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A better segmentation would assign different labels to disjoint regions in the original image. \n", "\n", "[Watershed segmentation](https://en.wikipedia.org/wiki/Watershed_%28image_processing%29) can distinguish touching objects. Markers are placed at local minima/maxima and expanded outward until there is a collision with markers from another region, with the image intensity serving as a guide for the marker boundaries." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It can be quite challenging to find markers with the right location. A slight amount of noise in the image can result in very wrong point locations. Here is a common approach: find the distance from the object boundaries, then place points at the maximal distance." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "transformed = ndi.distance_transform_edt(remove_objects, sampling=spacing)\n", "\n", "maxima = morphology.local_maxima(transformed)\n", "viewer.add_points(np.transpose(np.nonzero(maxima)), name='bad points')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "viewer.dims.ndisplay = 3\n", "viewer.dims.set_point(0, center)\n", "nbscreenshot(viewer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "viewer.camera.angles" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With napari, we can combine interactive point selections with the automated watershed algorithm from `skimage.morphology`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "viewer.layers['bad points'].visible = False\n", "points = viewer.add_points(name='interactive points', ndim=3)\n", "points.mode = 'add'\n", "\n", "# now, annotate the centers of the nuclei in your image" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "points.data = np.array(\n", " [[ 30. , 14.2598685 , 27.7741219 ],\n", " [ 30. , 30.10416663, 81.36513029],\n", " [ 30. , 13.32785096, 144.27631406],\n", " [ 30. , 46.8804823 , 191.80920846],\n", " [ 30. , 43.15241215, 211.84758551],\n", " [ 30. , 94.87938547, 160.12061219],\n", " [ 30. , 72.97697335, 112.58771779],\n", " [ 30. , 138.21820096, 189.01315585],\n", " [ 30. , 144.74232372, 242.60416424],\n", " [ 30. , 98.14144685, 251.92433962],\n", " [ 30. , 153.59649032, 112.58771779],\n", " [ 30. , 134.49013081, 40.35635865],\n", " [ 30. , 182.95504275, 48.74451649],\n", " [ 30. , 216.04166532, 80.89912152],\n", " [ 30. , 235.14802483, 130.296051 ],\n", " [ 30. , 196.00328827, 169.44078757],\n", " [ 30. , 245.86622651, 202.06140137],\n", " [ 30. 
, 213.71162148, 250.52631331],\n", " [ 28. , 87.42324517, 52.00657787]],\n", " dtype=float,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once you have marked all the points, you can grab the data back, and make a markers image for `skimage.segmentation.watershed`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "marker_locations = points.data\n", "\n", "markers = np.zeros(nuclei.shape, dtype=np.uint32)\n", "marker_indices = tuple(np.round(marker_locations).astype(int).T)\n", "markers[marker_indices] = np.arange(len(marker_locations)) + 1\n", "markers_big = morphology.dilation(markers, morphology.ball(5))\n", "\n", "segmented = segmentation.watershed(\n", " edges,\n", " markers_big, \n", " mask=remove_objects\n", ")\n", "\n", "viewer.add_labels(segmented, name='segmented')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After watershed, we have better disambiguation between internal cells!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Making measurements" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we have defined our objects, we can make measurements on them using `skimage.measure.regionprops` and the new `skimage.measure.regionprops_table`. These measurements include features such as area or volume, bounding boxes, and intensity statistics.\n", "\n", "Before measuring objects, it helps to clear objects from the image border. Measurements should only be collected for objects entirely contained in the image.\n", "\n", "Given the layer-like structure of our data, we only want to clear the objects touching the sides of the volume, but not the top and bottom, so we pad and crop the volume along the 0th axis to avoid clearing the mitotic nucleus." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "segmented_padded = np.pad(\n", " segmented,\n", " ((1, 1), (0, 0), (0, 0)),\n", " mode='constant',\n", " constant_values=0,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "interior_labels = segmentation.clear_border(segmented_padded)[1:-1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After clearing the border, the object labels are no longer sequentially increasing. Optionally, the labels can be renumbered such that there are no jumps in the list of image labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "relabeled, fw_map, inv_map = segmentation.relabel_sequential(interior_labels)\n", "\n", "print(\"relabeled labels: {}\".format(np.unique(relabeled)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.measure.regionprops` automatically measures many labeled image features. Optionally, an `intensity_image` can be supplied and intensity features are extracted per object. It's good practice to make measurements on the original image.\n", "\n", "Not all properties are supported for 3D data. Below are lists of supported and unsupported 3D measurements." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "regionprops = measure.regionprops(relabeled, intensity_image=nuclei)\n", "\n", "supported = [] \n", "unsupported = []\n", "\n", "for prop in regionprops[0]:\n", " try:\n", " regionprops[0][prop]\n", " supported.append(prop)\n", " except NotImplementedError:\n", " unsupported.append(prop)\n", "\n", "print(\"Supported properties:\")\n", "print(\" \" + \"\\n \".join(supported))\n", "print()\n", "print(\"Unsupported properties:\")\n", "print(\" \" + \"\\n \".join(unsupported))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.measure.regionprops` ignores the 0 label, which represents the background." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f'measured regions: {[regionprop.label for regionprop in regionprops]}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`regionprops_table` returns a dictionary of columns compatible with creating a pandas dataframe of properties of the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "\n", "info_table = pd.DataFrame(\n", " measure.regionprops_table(\n", " relabeled, nuclei,\n", " properties=['label', 'slice', 'area', 'mean_intensity', 'solidity'],\n", " )\n", ").set_index('label')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "info_table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now use pandas and seaborn for some analysis!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "\n", "sns.scatterplot(x='area', y='solidity', data=info_table, hue='mean_intensity')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the mitotic nucleus is a clear outlier from the others in terms of solidity and intensity." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Challenge problems\n", "\n", "Put your 3D image processing skills to the test by working through these challenge problems.\n", "\n", "### Improve the segmentation\n", "\n", "A few objects were oversegmented in the declumping step. Try to improve the segmentation and assign each object a single, unique label. You can try to use [calibrated denoising](https://scikit-image.org/docs/dev/auto_examples/filters/plot_j_invariant.html) to get smoother nuclei and membrane images.\n", "\n", "### Segment cell membranes\n", "\n", "Try segmenting the accompanying membrane channel. In the membrane image, the membrane walls are the bright web-like regions. This channel is difficult due to a high amount of noise in the image. Additionally, it can be hard to determine where the membrane ends in the image (it's not the first and last planes).\n", "\n", "Below is a 2D segmentation of the membrane:\n", "\n", "![](../_images/membrane_segmentation.png)\n", "\n", "Hint: there should only be one nucleus per membrane.\n", "\n", "### Measure the area in µm³ of the cells\n", "\n", "Once you have segmented the cell membranes, use regionprops to measure the distribution of cell areas." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 2 }