{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib inline\n", "from matplotlib import cm\n", "from matplotlib import pyplot as plt\n", "from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection\n", "\n", "import numpy as np\n", "\n", "from scipy import ndimage as ndi\n", "from scipy import stats\n", "\n", "from skimage import (exposure, feature, filters, io, measure,\n", " morphology, restoration, segmentation, transform,\n", " util)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to three-dimensional image processing\n", "\n", "Images are represented as `numpy` arrays. A single-channel, or grayscale, image is a 2D matrix of pixel intensities of shape `(row, column)`. We can construct a 3D volume as a series of 2D `planes`, giving 3D images the shape `(plane, row, column)`. Multichannel data adds a `channel` dimension in the final position containing color information. \n", "\n", "These conventions are summarized in the table below:\n", "\n", "\n", "|Image type|Coordinates|\n", "|:---|:---|\n", "|2D grayscale|(row, column)|\n", "|2D multichannel|(row, column, channel)|\n", "|3D grayscale|(plane, row, column)|\n", "|3D multichannel|(plane, row, column, channel)|\n", "\n", "Some 3D images are constructed with equal resolution in each dimension; e.g., a computer generated rendering of a sphere. Most experimental data captures one dimension at a lower resolution than the other two; e.g., photographing thin slices to approximate a 3D structure as a stack of 2D images. The distance between pixels in each dimension, called `spacing`, is encoded in a tuple and is accepted as a parameter by some `skimage` functions and can be used to adjust contributions to filters.\n", "\n", "## Input/Output and display\n", "\n", "Three dimensional data can be loaded with `skimage.io.imread`. The data for this tutorial was provided by the Allen Institute for Cell Science. It has been downsampled by a factor of 4 in the `row` and `column` dimensions to reduce computational time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = io.imread(\"../images/cells.tif\")\n", "\n", "print(\"shape: {}\".format(data.shape))\n", "print(\"dtype: {}\".format(data.dtype))\n", "print(\"range: ({}, {})\".format(data.min(), data.max()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The distance between pixels was reported by the microscope used to image the cells. This `spacing` information will be used to adjust contributions to filters and helps decide when to apply operations planewise. We've chosen to normalize it to `1.0` in the `row` and `column` dimensions." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The microscope reports the following spacing\n", "original_spacing = np.array([0.2900000, 0.0650000, 0.0650000])\n", "\n", "# We downsampled each slice 4x to make the data smaller\n", "rescaled_spacing = original_spacing * [1, 4, 4]\n", "\n", "# Normalize the spacing so that pixels are a distance of 1 apart\n", "spacing = rescaled_spacing / rescaled_spacing[2]\n", "\n", "print(\"microscope spacing: {}\\n\".format(original_spacing))\n", "print(\"after rescaling images: {}\\n\".format(rescaled_spacing))\n", "print(\"normalized spacing: {}\\n\".format(spacing))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To illustrate (no need to read the following cell; execute to generate illustration)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# To make sure we all see the same thing\n", "np.random.seed(0)\n", "\n", "image = np.random.random((8, 8))\n", "image_rescaled = transform.downscale_local_mean(image, (4, 4))\n", "\n", "f, (ax0, ax1) = plt.subplots(1, 2)\n", "\n", "ax0.imshow(image, cmap='gray')\n", "ax0.set_xticks([])\n", "ax0.set_yticks([])\n", "centers = np.indices(image.shape).reshape(2, -1).T\n", "ax0.plot(centers[:, 0], centers[:, 1], '.r')\n", "\n", "ax1.imshow(image_rescaled, cmap='gray')\n", "ax1.set_xticks([])\n", "ax1.set_yticks([])\n", "centers = np.indices(image_rescaled.shape).reshape(2, -1).T\n", "ax1.plot(centers[:, 0], centers[:, 1], '.r');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Back to our original data, let's try visualizing the image with `skimage.io.imshow`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " io.imshow(data, cmap=\"gray\")\n", "except TypeError as e:\n", " print(str(e))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.io.imshow` can only display grayscale and RGB(A) 2D images. We can use `skimage.io.imshow` to visualize 2D planes. By fixing one axis, we can observe three different views of the image." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def show_plane(ax, plane, cmap=\"gray\", title=None):\n", " ax.imshow(plane, cmap=cmap)\n", " ax.set_xticks([])\n", " ax.set_yticks([])\n", " \n", " if title:\n", " ax.set_title(title)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "_, (a, b, c) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))\n", "\n", "show_plane(a, data[32], title=\"Plane = 32\")\n", "show_plane(b, data[:, 128, :], title=\"Row = 128\")\n", "show_plane(c, data[:, :, 128], title=\"Column = 128\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Three-dimensional images can be viewed as a series of two-dimensional functions. The `display` helper function displays 30 planes of the provided image. By default, every other plane is displayed." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def slice_in_3D(ax, i):\n", " # From:\n", " # https://stackoverflow.com/questions/44881885/python-draw-3d-cube\n", "\n", " import numpy as np\n", " from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection\n", "\n", " Z = np.array([[0, 0, 0],\n", " [1, 0, 0],\n", " [1, 1, 0],\n", " [0, 1, 0],\n", " [0, 0, 1],\n", " [1, 0, 1],\n", " [1, 1, 1],\n", " [0, 1, 1]])\n", "\n", " Z = Z * data.shape\n", "\n", " r = [-1,1]\n", "\n", " X, Y = np.meshgrid(r, r)\n", " # plot vertices\n", " ax.scatter3D(Z[:, 0], Z[:, 1], Z[:, 2])\n", "\n", " # list of sides' polygons of figure\n", " verts = [[Z[0], Z[1], Z[2], Z[3]],\n", " [Z[4], Z[5], Z[6], Z[7]], \n", " [Z[0], Z[1], Z[5], Z[4]], \n", " [Z[2], Z[3], Z[7], Z[6]], \n", " [Z[1], Z[2], Z[6], Z[5]],\n", " [Z[4], Z[7], Z[3], Z[0]], \n", " [Z[2], Z[3], Z[7], Z[6]]]\n", "\n", " # plot sides\n", " ax.add_collection3d(\n", " Poly3DCollection(verts, facecolors=(0, 1, 1, 0.25), linewidths=1,\n", " edgecolors='darkblue')\n", " )\n", "\n", " verts = np.array([[[0, 0, 0],\n", " [0, 0, 1],\n", " [0, 1, 1],\n", " [0, 1, 0]]])\n", " verts = verts * (60, 256, 256)\n", " verts += [i, 0, 0]\n", "\n", " ax.add_collection3d(Poly3DCollection(verts, \n", " facecolors='magenta', linewidths=1, edgecolors='black'))\n", "\n", " ax.set_xlabel('plane')\n", " ax.set_ylabel('col')\n", " ax.set_zlabel('row')\n", "\n", " # Auto-scale plot axes\n", " scaling = np.array([getattr(ax, 'get_{}lim'.format(dim))() for dim in 'xyz'])\n", " ax.auto_scale_xyz(*[[np.min(scaling), np.max(scaling)]] * 3)\n", "\n", " #plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from ipywidgets import interact\n", "\n", "def slice_explorer(data, cmap='gray'):\n", " N = len(data)\n", " \n", " @interact(plane=(0, N - 1))\n", " def display_slice(plane=34):\n", " fig, ax = plt.subplots(figsize=(20, 5))\n", " \n", " ax_3D = fig.add_subplot(133, projection='3d')\n", " \n", " show_plane(ax, data[plane], title=\"Plane {}\".format(plane), cmap=cmap)\n", " slice_in_3D(ax_3D, plane)\n", " \n", " plt.show()\n", "\n", " return display_slice" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slice_explorer(data);" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def display(im3d, cmap=\"gray\", step=2):\n", " _, axes = plt.subplots(nrows=5, ncols=6, figsize=(16, 14))\n", " \n", " vmin = im3d.min()\n", " vmax = im3d.max()\n", " \n", " for ax, image in zip(axes.flatten(), im3d[::step]):\n", " ax.imshow(image, cmap=cmap, vmin=vmin, vmax=vmax)\n", " ax.set_xticks([])\n", " ax.set_yticks([])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exposure" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.exposure` contains a number of functions for adjusting image contrast. These functions operate on pixel values. Generally, image dimensionality or pixel spacing does not need to be considered.\n", "\n", "[Gamma correction](https://en.wikipedia.org/wiki/Gamma_correction), also known as Power Law Transform, brightens or darkens an image. The function $O = I^\\gamma$ is applied to each pixel in the image. 
A `gamma < 1` will brighten an image, while a `gamma > 1` will darken an image. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Helper function for plotting histograms.\n", "def plot_hist(ax, data, title=None):\n", " ax.hist(data.ravel(), bins=256)\n", " ax.ticklabel_format(axis=\"y\", style=\"scientific\", scilimits=(0, 0))\n", " \n", " if title:\n", " ax.set_title(title)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gamma_low_val = 0.5\n", "gamma_low = exposure.adjust_gamma(data, gamma=gamma_low_val)\n", "\n", "gamma_high_val = 1.5\n", "gamma_high = exposure.adjust_gamma(data, gamma=gamma_high_val)\n", "\n", "_, ((a, b, c), (d, e, f)) = plt.subplots(nrows=2, ncols=3, figsize=(12, 8))\n", "\n", "show_plane(a, data[32], title=\"Original\")\n", "show_plane(b, gamma_low[32], title=\"Gamma = {}\".format(gamma_low_val))\n", "show_plane(c, gamma_high[32], title=\"Gamma = {}\".format(gamma_high_val))\n", "\n", "plot_hist(d, data)\n", "plot_hist(e, gamma_low)\n", "plot_hist(f, gamma_high)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Histogram equalization](https://en.wikipedia.org/wiki/Histogram_equalization) improves contrast in an image by redistributing pixel intensities. The most common pixel intensities are spread out, allowing areas of lower local contrast to gain a higher contrast. This may enhance background noise." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "equalized = exposure.equalize_hist(data)\n", "\n", "slice_explorer(equalized)\n", "\n", "_, ((a, b), (c, d)) = plt.subplots(nrows=2, ncols=2, figsize=(16, 8))\n", "\n", "plot_hist(a, data, title=\"Original\")\n", "plot_hist(b, equalized, title=\"Histogram equalization\")\n", "\n", "cdf, bins = exposure.cumulative_distribution(data.ravel())\n", "c.plot(bins, cdf, \"r\")\n", "c.set_title(\"Original CDF\")\n", "\n", "cdf, bins = exposure.cumulative_distribution(equalized.ravel())\n", "d.plot(bins, cdf, \"r\")\n", "d.set_title(\"Histogram equalization CDF\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Many experimental images are affected by salt-and-pepper noise: a few bright artifacts can decrease the relative intensity of the pixels of interest. A simple way to improve contrast is to clip the pixel values on the lowest and highest extremes. Clipping the darkest and brightest 0.5% of pixels will increase the overall contrast of the image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vmin, vmax = stats.scoreatpercentile(data, (0.5, 99.5))\n", "\n", "clipped = exposure.rescale_intensity(\n", " data, \n", " in_range=(vmin, vmax), \n", " out_range=np.float32\n", ").astype(np.float32)\n", "\n", "slice_explorer(clipped)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# We'll call our dataset \"rescaled\" from here on\n", "# In this cell, you can choose any of the previous results\n", "# to continue working with.\n", "#\n", "# We'll use the `clipped` version\n", "#\n", "rescaled = clipped" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Edge detection\n", "\n", "[Edge detection](https://en.wikipedia.org/wiki/Edge_detection) highlights regions in the image where a sharp change in contrast occurs. The intensity of an edge corresponds to the steepness of the transition from one intensity to another. 
A gradual shift from bright to dark intensity results in a dim edge. An abrupt shift results in a bright edge.\n", "\n", "The [Sobel operator](https://en.wikipedia.org/wiki/Sobel_operator) is an edge detection algorithm that approximates the gradient of the image intensity and is fast to compute. `skimage.filters.sobel` has not been adapted for 3D images. It can be applied planewise to approximate a 3D result." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sobel = np.empty_like(rescaled)\n", "\n", "for plane, image in enumerate(rescaled):\n", " sobel[plane] = filters.sobel(image)\n", " \n", "slice_explorer(sobel);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, ((a, b), (c, d)) = plt.subplots(nrows=2, ncols=2, figsize=(16, 4))\n", "\n", "show_plane(a, sobel[:, 128, :], title=\"3D sobel, row = 128\")\n", "\n", "row_sobel = filters.sobel(rescaled[:, 128, :])\n", "show_plane(b, row_sobel, title=\"2D sobel, row=128\")\n", "\n", "show_plane(c, sobel[:, :, 128], title=\"3D sobel, column = 128\")\n", "\n", "column_sobel = filters.sobel(rescaled[:, :, 128])\n", "show_plane(d, column_sobel, title=\"2D sobel, column=128\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Filters\n", "\n", "In addition to edge detection, `skimage.filters` provides functions for filtering and thresholding images.\n", "\n", "[Gaussian filter](https://en.wikipedia.org/wiki/Gaussian_filter) applies a Gaussian function to an image, creating a smoothing effect. `skimage.filters.gaussian` takes as input `sigma`, which can be a scalar or a sequence of scalars. This `sigma` determines the standard deviation of the Gaussian along each axis. When the resolution in the `plane` dimension is much worse than in the `row` and `column` dimensions, dividing `base_sigma` by the image `spacing` will balance the contribution to the filter along each axis." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "base_sigma = 3.0\n", "\n", "sigma = base_sigma / spacing\n", "\n", "gaussian = filters.gaussian(rescaled, multichannel=False, sigma=sigma)\n", "\n", "slice_explorer(gaussian);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Median filter](https://en.wikipedia.org/wiki/Median_filter) is a noise removal filter. It is particularly effective against salt and pepper noise. An additional feature of the median filter is its ability to preserve edges. This is helpful in segmentation because the original shape of regions of interest will be preserved.\n", "\n", "`skimage.filters.median` does not support three-dimensional images and needs to be applied planewise." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rescaled_uint8 = util.img_as_ubyte(rescaled)\n", "\n", "median = np.empty_like(rescaled_uint8)\n", "\n", "for plane, image in enumerate(rescaled_uint8):\n", " median[plane] = filters.median(image)\n", " \n", "median = util.img_as_float(median)\n", " \n", "slice_explorer(median);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A [bilateral filter](https://en.wikipedia.org/wiki/Bilateral_filter) is another edge-preserving, denoising filter. Each pixel is assigned a weighted average based on neighboring pixels. The weight is determined by spatial and radiometric similarity (e.g., distance between two colors).\n", "\n", "`skimage.restoration.denoise_bilateral` requires a `multichannel` parameter. 
This determines whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. While the function does not yet support 3D data, the `multichannel` parameter will help distinguish multichannel 2D data from grayscale 3D data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bilateral = np.empty_like(rescaled)\n", "\n", "for index, plane in enumerate(rescaled):\n", " bilateral[index] = restoration.denoise_bilateral(\n", " plane, \n", " multichannel=False\n", " )\n", "\n", "slice_explorer(bilateral);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, (a, b, c, d) = plt.subplots(nrows=1, ncols=4, figsize=(16, 4))\n", "\n", "show_plane(a, rescaled[32], title=\"Original\")\n", "show_plane(b, gaussian[32], title=\"Gaussian\")\n", "show_plane(c, median[32], title=\"Median\")\n", "show_plane(d, bilateral[32], title=\"Bilateral\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "denoised = median" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Thresholding](https://en.wikipedia.org/wiki/Thresholding_%28image_processing%29) is used to create binary images. A threshold value determines the intensity value separating foreground pixels from background pixels. Foreground pixels are those brighter than the threshold value; background pixels are darker. Thresholding is a form of image segmentation.\n", "\n", "Different thresholding algorithms produce different results. [Otsu's method](https://en.wikipedia.org/wiki/Otsu%27s_method) and Li's minimum cross entropy threshold are two common algorithms. The example below demonstrates how a small difference in the threshold value can visibly alter the binarized image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "threshold_li = filters.threshold_li(denoised)\n", "li = denoised >= threshold_li\n", "\n", "threshold_otsu = filters.threshold_otsu(denoised)\n", "otsu = denoised >= threshold_otsu\n", "\n", "_, (a, b, c) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))\n", "\n", "plot_hist(a, denoised, \"Thresholds (Li: red, Otsu: blue)\")\n", "a.axvline(threshold_li, c=\"r\")\n", "a.axvline(threshold_otsu, c=\"b\")\n", "\n", "show_plane(b, li[32], title=\"Li's threshold = {:0.3f}\".format(threshold_li))\n", "show_plane(c, otsu[32], title=\"Otsu's threshold = {:0.3f}\".format(threshold_otsu))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "binary = li\n", "\n", "slice_explorer(binary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Morphological operations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Mathematical morphology](https://en.wikipedia.org/wiki/Mathematical_morphology) operations and structuring elements are defined in `skimage.morphology`. Structuring elements are shapes which define areas over which an operation is applied. The response to the filter indicates how well the neighborhood corresponds to the structuring element's shape.\n", "\n", "There are a number of two- and three-dimensional structuring elements defined in `skimage.morphology`. Not all 2D structuring elements have a 3D counterpart. The simplest and most commonly used structuring elements are the `disk`/`ball` and `square`/`cube`."
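] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To build intuition for how a structuring element defines a neighborhood, it helps to print a small one as an array of zeros and ones (a brief illustrative aside):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A 2D disk of radius 2, shown as a 0/1 neighborhood mask\n", "print(morphology.disk(radius=2))\n", "\n", "# The middle plane of a 3D ball of radius 2 is the same disk\n", "print(morphology.ball(radius=2)[2])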
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ball = morphology.ball(radius=5)\n", "print(\"ball shape: {}\".format(ball.shape))\n", "\n", "cube = morphology.cube(width=5)\n", "print(\"cube shape: {}\".format(cube.shape))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most basic mathematical morphology operations are `dilation` and `erosion`. Dilation enlarges bright regions and shrinks dark regions. Erosion shrinks bright regions and enlarges dark regions. Other morphological operations are composed of `dilation` and `erosion`.\n", "\n", "The `closing` of an image is defined as a `dilation` followed by an `erosion`. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features. Morphological `opening` on an image is defined as an `erosion` followed by a `dilation`. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features.\n", "\n", "These operations in `skimage.morphology` are compatible with 3D images and structuring elements. A 2D structuring element cannot be applied to a 3D image, nor can a 3D structuring element be applied to a 2D image.\n", "\n", "These four operations (`closing`, `dilation`, `erosion`, `opening`) have binary counterparts which are faster to compute than the grayscale algorithms." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "selem = morphology.ball(radius=3)\n", "\n", "closing = morphology.closing(rescaled, selem=selem)\n", "dilation = morphology.dilation(rescaled, selem=selem)\n", "erosion = morphology.erosion(rescaled, selem=selem)\n", "opening = morphology.opening(rescaled, selem=selem)\n", "\n", "binary_closing = morphology.binary_closing(binary, selem=selem)\n", "binary_dilation = morphology.binary_dilation(binary, selem=selem)\n", "binary_erosion = morphology.binary_erosion(binary, selem=selem)\n", "binary_opening = morphology.binary_opening(binary, selem=selem)\n", "\n", "_, ((a, b, c, d), (e, f, g, h)) = plt.subplots(nrows=2, ncols=4, figsize=(16, 8))\n", "\n", "show_plane(a, erosion[32], title=\"Erosion\")\n", "show_plane(b, dilation[32], title=\"Dilation\")\n", "show_plane(c, closing[32], title=\"Closing\")\n", "show_plane(d, opening[32], title=\"Opening\")\n", "\n", "show_plane(e, binary_erosion[32], title=\"Binary erosion\")\n", "show_plane(f, binary_dilation[32], title=\"Binary dilation\")\n", "show_plane(g, binary_closing[32], title=\"Binary closing\")\n", "show_plane(h, binary_opening[32], title=\"Binary opening\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Morphology operations can be chained together to denoise an image. For example, a `closing` applied to an `opening` can remove salt and pepper noise from an image." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "binary_equalized = equalized >= filters.threshold_li(equalized)\n", "\n", "despeckled1 = morphology.closing(\n", " morphology.opening(binary_equalized, selem=morphology.ball(1)),\n", " selem=morphology.ball(1)\n", ")\n", "\n", "despeckled3 = morphology.closing(\n", " morphology.opening(binary_equalized, selem=morphology.ball(3)),\n", " selem=morphology.ball(3)\n", ")\n", "\n", "_, (a, b, c) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))\n", "\n", "show_plane(a, binary_equalized[32], title=\"Noisy data\")\n", "show_plane(b, despeckled1[32], title=\"Despeckled, r = 1\")\n", "show_plane(c, despeckled3[32], title=\"Despeckled, r = 3\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Functions operating on [connected components](https://en.wikipedia.org/wiki/Connected_space) can remove small undesired elements while preserving larger shapes.\n", "\n", "`skimage.morphology.remove_small_holes` fills holes and `skimage.morphology.remove_small_objects` removes bright regions. Both functions accept a `min_size` parameter, which is the minimum size (in pixels) of accepted holes or objects. The `min_size` can be approximated by a cube." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "width = 20\n", "\n", "remove_holes = morphology.remove_small_holes(\n", " binary, \n", " min_size=width ** 3\n", ")\n", "\n", "slice_explorer(remove_holes);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "width = 20\n", "\n", "remove_objects = morphology.remove_small_objects(\n", " remove_holes, \n", " min_size=width ** 3\n", ")\n", "\n", "slice_explorer(remove_objects);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Segmentation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Image segmentation](https://en.wikipedia.org/wiki/Image_segmentation) partitions images into regions of interest. Interger labels are assigned to each region to distinguish regions of interest." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "labels = measure.label(remove_objects)\n", "\n", "slice_explorer(labels, cmap='inferno');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Connected components of the binary image are assigned the same label via `skimage.measure.label`. Tightly packed cells connected in the binary image are assigned the same label." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, (a, b, c) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))\n", "\n", "show_plane(a, rescaled[32, :100, 125:], title=\"Rescaled\")\n", "show_plane(b, labels[32, :100, 125:], cmap='inferno', title=\"Labels\")\n", "show_plane(c, labels[32, :100, 125:] == 8, title=\"Labels = 8\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A better segmentation would assign different labels to disjoint regions in the original image. \n", "\n", "[Watershed segmentation](https://en.wikipedia.org/wiki/Watershed_%28image_processing%29) can distinguish touching objects. Markers are placed at local minima and expanded outward until there is a collision with markers from another region. The inverse intensity image transforms bright cell regions into basins which should be filled.\n", "\n", "In declumping, markers are generated from the distance function. 
Points furthest from an edge have the highest intensity and should be identified as markers using `skimage.feature.peak_local_max`. Regions with pinch points should be assigned multiple markers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "distance = ndi.distance_transform_edt(remove_objects)\n", "\n", "slice_explorer(distance, cmap='viridis');" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "peak_local_max = feature.peak_local_max(\n", " distance,\n", " footprint=np.ones((15, 15, 15), dtype=bool),\n", " indices=False,\n", " labels=measure.label(remove_objects)\n", ")\n", "\n", "markers = measure.label(peak_local_max)\n", "\n", "labels = morphology.watershed(\n", " rescaled, \n", " markers, \n", " mask=remove_objects\n", ")\n", "\n", "slice_explorer(labels, cmap='inferno');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After watershed, we have better disambiguation between internal cells.\n", "\n", "When cells simultaneously touch the border of the image, they may be assigned the same label. In pre-processing, we typically remove these cells.\n", "\n", "*Note:* This is 3D data---you may not always be able to see connections in 2D!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, (a, b) = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))\n", "\n", "show_plane(a, labels[39, 156:, 20:150], cmap='inferno')\n", "show_plane(b, labels[34, 90:190, 126:], cmap='inferno')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The watershed algorithm falsely detected subregions in a few cells. This is referred to as oversegmentation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots()\n", "show_plane(ax, labels[38, 50:100, 20:100], cmap='inferno', title=\"Labels\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plotting the markers on the distance image reveals the reason for oversegmentation. Cells with multiple markers will be assigned multiple labels and will be oversegmented. Cells with a uniformly increasing distance map are assigned a single marker near their center. Cells with uneven distance maps are assigned multiple markers, indicating the presence of multiple local maxima."
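] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before inspecting this visually, we can quantify it with a short diagnostic sketch (an illustrative aside, reusing the boolean `peak_local_max` mask from above): count the markers inside each connected component of the binary image; any component with more than one marker is a candidate for oversegmentation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Diagnostic sketch: count markers per connected component.\n", "component_labels = measure.label(remove_objects)\n", "\n", "marker_counts = ndi.sum(\n", "    peak_local_max,  # boolean mask of markers\n", "    labels=component_labels,\n", "    index=np.arange(1, component_labels.max() + 1)\n", ").astype(int)\n", "\n", "print(\"markers per component: {}\".format(marker_counts))\n", "print(\"components with multiple markers: {}\".format(np.sum(marker_counts > 1)))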
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, axes = plt.subplots(nrows=3, ncols=4, figsize=(16, 12))\n", "\n", "vmin = distance.min()\n", "vmax = distance.max()\n", "\n", "offset = 31\n", "\n", "for index, ax in enumerate(axes.flatten()):\n", " ax.imshow(\n", " distance[offset + index],\n", " cmap=\"gray\",\n", " vmin=vmin,\n", " vmax=vmax\n", " )\n", " \n", " peaks = np.nonzero(peak_local_max[offset + index])\n", " \n", " ax.plot(peaks[1], peaks[0], \"r.\")\n", " ax.set_xticks([])\n", " ax.set_yticks([])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, (a, b, c) = plt.subplots(nrows=1, ncols=3, figsize=(16, 8))\n", "\n", "\n", "show_plane(a, remove_objects[10:, 193:253, 74])\n", "show_plane(b, distance[10:, 193:253, 74])\n", "\n", "features = feature.peak_local_max(distance[10:, 193:253, 74])\n", "b.plot(features[:, 1], features[:, 0], 'r.')\n", "\n", "# Improve feature selection by blurring, using a larger footprint\n", "# in `peak_local_max`, etc.\n", "\n", "smooth_distance = filters.gaussian(distance[10:, 193:253, 74], sigma=5)\n", "show_plane(c, smooth_distance)\n", "features = feature.peak_local_max(\n", " smooth_distance\n", ")\n", "c.plot(features[:, 1], features[:, 0], 'bx');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature extraction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Feature extraction](https://en.wikipedia.org/wiki/Feature_extraction) reduces data required to describe an image or objects by measuring informative features. These include features such as area or volume, bounding boxes, and intensity statistics.\n", "\n", "Before measuring objects, it helps to clear objects from the image border. Measurements should only be collected for objects entirely contained in the image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "interior_labels = segmentation.clear_border(labels)\n", "interior_labels = morphology.remove_small_objects(interior_labels, min_size=200)\n", "\n", "print(\"interior labels: {}\".format(np.unique(interior_labels)))\n", "\n", "slice_explorer(interior_labels, cmap='inferno');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After clearing the border, the object labels are no longer sequentially increasing. The labels can be renumbered such that there are no jumps in the list of image labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "relabeled, _, _ = segmentation.relabel_sequential(interior_labels)\n", "\n", "print(\"relabeled labels: {}\".format(np.unique(relabeled)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.measure.regionprops` automatically measures many labeled image features. Optionally, an `intensity_image` can be supplied and intensity features are extracted per object. It's good practice to make measurements on the original image.\n", "\n", "Not all properties are supported for 3D data. Below are lists of supported and unsupported 3D measurements." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "regionprops = measure.regionprops(relabeled, intensity_image=data)\n", "\n", "supported = [] \n", "unsupported = []\n", "\n", "for prop in regionprops[0]:\n", " try:\n", " regionprops[0][prop]\n", " supported.append(prop)\n", " except NotImplementedError:\n", " unsupported.append(prop)\n", "\n", "print(\"Supported properties:\")\n", "print(\" \" + \"\\n \".join(supported))\n", "print()\n", "print(\"Unsupported properties:\")\n", "print(\" \" + \"\\n \".join(unsupported))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`skimage.measure.regionprops` ignores the 0 label, which represents the background." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"measured regions: {}\".format([regionprop.label for regionprop in regionprops]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "volumes = [regionprop.area for regionprop in regionprops]\n", "\n", "print(\"total pixels: {}\".format(volumes))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Collected measurements can be further reduced by computing per-image statistics such as total, minimum, maximum, mean, and standard deviation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "max_volume = np.max(volumes)\n", "mean_volume = np.mean(volumes)\n", "min_volume = np.min(volumes)\n", "sd_volume = np.std(volumes)\n", "total_volume = np.sum(volumes)\n", "\n", "print(\"Volume statistics\")\n", "print(\"total: {}\".format(total_volume))\n", "print(\"min: {}\".format(min_volume))\n", "print(\"max: {}\".format(max_volume))\n", "print(\"mean: {:0.2f}\".format(mean_volume))\n", "print(\"standard deviation: {:0.2f}\".format(sd_volume))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perimeter measurements are not computed for 3D objects. The 3D extension of perimeter is surface area. We can measure the surface of an object by generating a surface mesh with `skimage.measure.marching_cubes` and computing the surface area of the mesh with `skimage.measure.mesh_surface_area`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "selected_cell = 3\n", "\n", "# skimage.measure.marching_cubes expects ordering (row, col, pln)\n", "volume = (relabeled == regionprops[selected_cell].label).transpose(1, 2, 0)\n", "\n", "verts_px, faces_px, _, _ = measure.marching_cubes_lewiner(volume, level=0, spacing=(1.0, 1.0, 1.0))\n", "surface_area_pixels = measure.mesh_surface_area(verts_px, faces_px)\n", "\n", "verts, faces, _, _ = measure.marching_cubes_lewiner(volume, level=0, spacing=tuple(spacing))\n", "surface_area_actual = measure.mesh_surface_area(verts, faces)\n", "\n", "print(\"surface area (total pixels): {:0.2f}\".format(surface_area_pixels))\n", "print(\"surface area (actual): {:0.2f}\".format(surface_area_actual))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The volume can be visualized using the mesh vertexes and faces." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = plt.figure(figsize=(10, 10))\n", "ax = fig.add_subplot(111, projection=\"3d\")\n", "\n", "mesh = Poly3DCollection(verts_px[faces_px])\n", "mesh.set_edgecolor(\"k\")\n", "ax.add_collection3d(mesh)\n", "\n", "ax.set_xlabel(\"col\")\n", "ax.set_ylabel(\"row\")\n", "ax.set_zlabel(\"pln\")\n", "\n", "min_pln, min_row, min_col, max_pln, max_row, max_col = regionprops[selected_cell].bbox\n", "\n", "ax.set_xlim(min_row, max_row)\n", "ax.set_ylim(min_col, max_col)\n", "ax.set_zlim(min_pln, max_pln)\n", "\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Challenge problems\n", "\n", "Put your 3D image processing skills to the test by working through these challenge problems.\n", "\n", "### Improve the segmentation\n", "A few objects were oversegmented in the declumping step. Try to improve the segmentation and assign each object a single, unique label. You can try:\n", "\n", "1. generating a smoother image by modifying the `win_size` parameter in `skimage.restoration.denoise_bilateral`, or try another filter. Many filters are available in `skimage.filters` and `skimage.filters.rank`.\n", "1. adjusting the threshold value by trying another threshold algorithm such as `skimage.filters.threshold_otsu` or entering one manually.\n", "1. generating different markers by changing the size of the footprint passed to `skimage.feature.peak_local_max`. Alternatively, try another metric for placing markers or limit the planes on which markers can be placed.\n", "\n", "\n", "### Segment cell membranes\n", "Try segmenting the accompanying membrane channel. In the membrane image, the membrane walls are the bright web-like regions. This channel is difficult due to a high amount of noise in the image. Additionally, it can be hard to determine where the membrane ends in the image (it's not the first and last planes).\n", "\n", "Below is a 2D segmentation of the membrane:\n", "\n", "![](../_images/membrane_segmentation.png)\n", "\n", "The membrane image can be loaded using `skimage.io.imread(\"../images/cells_membrane.tif\")`. \n", "\n", "Hint: there should only be one nucleus per membrane." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.0" } }, "nbformat": 4, "nbformat_minor": 2 }