{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# NIR Tutorial \n", "\n", "When starting an image-based phenotyping project it is important to consider what the end goals of the project are.\n", "This is important because the goals of the project will determine the the camera type, imaging layout, and will help to \n", "guide downstream analysis. If it was \n", "an experiment focused on drought of maize plants and your goal was to get information about water content of plants you\n", "might want to take side-view and top-view images of a single plant with a near-infrared camera.\n", "\n", "To run a NIR workflow over a single NIR image there are three required inputs:\n", "\n", "1. **Image:** NIR images are grayscale matrices (1 signal dimension).\n", "In principle, image processing will work on any grayscale image with adjustments if images are well lit and\n", "there is appreciable contrast difference between the object of interest and the background.\n", "2. **Output directory:** If debug mode is set to 'print' output images from each intermediate step are produced.\n", "3. **Image of estimated background:** Right now this is hardcoded into the workflow (different background at each zoom level) and not implemented as an argument.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import cv2\n", "from plantcv import plantcv as pcv\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class options:\n", " def __init__(self):\n", " self.image = \"img/tutorial_images/nir/original_image.jpg\"\n", " self.debug = \"plot\"\n", " self.writeimg= False\n", " self.result = \"./nir_tutorial_results\"\n", " self.outdir = \".\" # Store the output to the current directory\n", " \n", "# Get options\n", "args = options()\n", "\n", "# Set debug to the global parameter \n", "pcv.params.debug = args.debug\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read image\n", "\n", "# Inputs:\n", "# filename - Image file to be read in \n", "# mode - How to read in the image; either 'native' (default), 'rgb', 'gray', or 'csv'\n", "img, path, filename = pcv.readimage(filename=args.image)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read in the background image \n", "img_bkgrd, bkgrd_path, bkgrd_filename = pcv.readimage(filename=\"img/tutorial_images/nir/background_average.jpg\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Subtract the background image from the image with the plant. \n", "\n", "# Inputs: \n", "# gray_img1 - Grayscale image data from which gray_img2 will be subtracted\n", "# gray_img2 - Grayscale image data which will be subtracted from gray_img1\n", "bkg_sub_img = pcv.image_subtract(gray_img1=img, gray_img2=img_bkgrd)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There is very low contrast in the subtracted image produced \n", "# above so normalizing the histogram might help.\n", "\n", "# Inputs:\n", "# gray_img - Grayscale image data \n", "equalized_img = pcv.hist_equalization(gray_img=bkg_sub_img)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There are often multiple ways to process images. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There are often multiple ways to process images. There is also \n", "# a background subtraction function that creates a binary image \n", "# by performing background subtraction on the foreground image.\n", "\n", "# Inputs:\n", "# foreground_image - RGB or grayscale image data\n", "# background_image - RGB or grayscale image data \n", "fgmask = pcv.background_subtraction(foreground_image=img, background_image=img_bkgrd)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Threshold the image of interest using the two-sided thresholding function (keep what is between 50 and 190) \n", "\n", "# Inputs:\n", "# rgb_img - RGB image data \n", "# lower_thresh - List of lower threshold values \n", "# upper_thresh - List of upper threshold values\n", "# channel - Color-space channels of interest (either 'RGB', 'HSV', 'LAB', or 'gray')\n", "bkg_sub_thres_img, masked_img = pcv.threshold.custom_range(rgb_img=bkg_sub_img, lower_thresh=[50], \n", "                                                           upper_thresh=[190], channel='gray')\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Laplace filtering (identify edges based on the 2nd derivative)\n", "\n", "# Inputs:\n", "# gray_img - Grayscale image data \n", "# ksize - Aperture size used to calculate the second derivative filter; \n", "#         specifies the size of the kernel (must be an odd integer)\n", "# scale - Scaling factor applied (multiplied) to computed Laplacian values \n", "#         (scale = 1 is unscaled) \n", "lp_img = pcv.laplace_filter(gray_img=img, ksize=1, scale=1)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Laplacian image sharpening; this step will enhance the darkness of the edges detected\n", "lp_shrp_img = pcv.image_subtract(gray_img1=img, gray_img2=lp_img)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sobel filtering\n", "# 1st derivative Sobel filtering along the horizontal axis with ksize = 1\n", "\n", "# Inputs: \n", "# gray_img - Grayscale image data \n", "# dx - Derivative of x to analyze \n", "# dy - Derivative of y to analyze \n", "# ksize - Aperture size used to calculate the derivative; specifies the size of the kernel and must be an odd integer\n", "# NOTE: Aperture size must be greater than the largest derivative (ksize > dx & ksize > dy) \n", "sbx_img = pcv.sobel_filter(gray_img=img, dx=1, dy=0, ksize=1)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 1st derivative Sobel filtering along the vertical axis with ksize = 1\n", "\n", "# Inputs:\n", "# gray_img - Grayscale image data\n", "# dx - Derivative of x to analyze\n", "# dy - Derivative of y to analyze \n", "# ksize - Aperture size used to calculate the derivative filter; \n", "#         specifies the size of the kernel (must be an odd integer)\n", "sby_img = pcv.sobel_filter(gray_img=img, dx=0, dy=1, ksize=1)\n" ] },
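{ "cell_type": "markdown", "metadata": {}, "source": [ "Below, the two directional Sobel responses are combined with `pcv.image_add`. Another common way to combine directional derivatives is the gradient magnitude, sqrt(dx^2 + dy^2). The following cell is an optional sketch, not part of the original workflow; it computes the magnitude in floating point to avoid 8-bit overflow.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Optional sketch: combine the x and y Sobel responses as a gradient\n", "# magnitude instead of a simple sum. Working in float avoids uint8 overflow.\n", "grad_mag = np.hypot(sbx_img.astype(np.float64), sby_img.astype(np.float64))\n", "\n", "# Rescale to 8-bit so the result could feed into downstream PlantCV steps\n", "# (assumes the image contains at least one nonzero response).\n", "grad_mag_8u = np.uint8(255 * grad_mag / grad_mag.max())\n" ] },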
\n", "\n", "# Inputs:\n", "# img - RGB or grayscale image data \n", "# dx - Derivative of x to analyze (0 or 1)\n", "# dy - Derivative of y to analyze (0 or 1)\n", "# scale - scaling factor applied (multiplied) to computed \n", "# Scharr values (scale = 1 is unscaled)\n", "scharrx_img = pcv.scharr_filter(img=img, dx=1, dy=0, scale=1)\n", "scharry_img = pcv.scharr_filter(img=img, dx=0, dy=1, scale=1)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Combine the effects of both x and y filters through matrix addition\n", "# This will capture edges identified within each plane and emphasize edges found in both images\n", "\n", "# Inputs:\n", "# gray_img1 - Grayscale image data to be added to gray_img2\n", "# gray_img2 - Grayscale image data to be added to gray_img1\n", "sb_img = pcv.image_add(gray_img1=sbx_img, gray_img2=sby_img)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Use a lowpass (blurring) filter to smooth sobel image\n", "\n", "# Inputs:\n", "# gray_img - Grayscale image data \n", "# ksize - Kernel size (integer or tuple), (ksize, ksize) box if integer input,\n", "# (n, m) box if tuple input \n", "mblur_img = pcv.median_blur(gray_img=sb_img, ksize=1)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Invert the image so our background is white \n", "\n", "# Inputs:\n", "# gray_img - Grayscale image data \n", "mblur_invert_img = pcv.invert(gray_img=mblur_img)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Combine the smoothed sobel image with the laplacian sharpened image\n", "# combines the best features of both methods as described in \"Digital Image Processing\" by Gonzalez and Woods pg. 169\n", "edge_shrp_img = pcv.image_add(gray_img1=mblur_invert_img, gray_img2=lp_shrp_img)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Perform thresholding to generate a binary image\n", "\n", "# Inputs: \n", "# gray_img - Grayscale image data \n", "# threshold - Threshold value (0-255)\n", "# max_value - Value to apply above the threshold (255 = white)\n", "# object_type - 'light' (default) or 'dark'. If the object is lighter than \n", "# the background then standard thresholding is done. If the \n", "# object is darker than the background then inverse thresholding. \n", "tr_es_img = pcv.threshold.binary(gray_img=edge_shrp_img, threshold=165, \n", " max_value=255, object_type='dark')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Use the fill function to get rid of small salt & pepper noise in the image \n", "\n", "# Inputs: \n", "# bin_img - Binary image data \n", "# size - Minimum object area size in pixels (must be an integer), and smaller objects will be filled\n", "f_img = pcv.fill(bin_img=tr_es_img, size=5)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There is another PlantCV function that can help reduce background \n", "# noise, and can be used with `pcv.dilate` to avoid losing plant \n", "\n", "# Inputs:\n", "# gray_img - Grayscale (usually binary) image data \n", "# ksize - The size used to build a ksize x ksize \n", "# matrix using np.ones. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There is another PlantCV function that can help reduce background \n", "# noise; it can be paired with `pcv.dilate` to avoid losing plant material \n", "\n", "# Inputs:\n", "# gray_img - Grayscale (usually binary) image data \n", "# ksize - The size used to build a ksize x ksize \n", "#         matrix using np.ones. Must be greater than 1 to have an effect \n", "# i - An integer for the number of iterations \n", "eroded_img = pcv.erode(gray_img=tr_es_img, ksize=2, i=1)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Bring the two object identification approaches together.\n", "# Use a logical OR to combine the object identified by background subtraction with the object identified by the derivative filters.\n", "\n", "# Inputs: \n", "# bin_img1 - Binary image data to be compared with bin_img2\n", "# bin_img2 - Binary image data to be compared with bin_img1\n", "comb_img = pcv.logical_or(bin_img1=f_img, bin_img2=bkg_sub_thres_img)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get a masked image; essentially, identify the pixels corresponding to plant and keep those.\n", "\n", "# Inputs: \n", "# rgb_img - RGB image data \n", "# mask - Binary mask image data \n", "# mask_color - 'black' or 'white'\n", "masked_erd = pcv.apply_mask(rgb_img=img, mask=comb_img, mask_color='black')\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We need to remove the edges of the image; we do that by generating a set of rectangles to mask the edges\n", "# img is 254 x 320 (rows x columns)\n", "# Mask for the bottom of the image\n", "\n", "# Inputs:\n", "# img - RGB or grayscale image data \n", "# p1 - Point at the top left corner of the rectangle (tuple)\n", "# p2 - Point at the bottom right corner of the rectangle (tuple) \n", "# color - 'black' (default), 'gray', or 'white'\n", "masked1, box1_img, rect_contour1, hierarchy1 = pcv.rectangle_mask(img=img, p1=(110, 185), p2=(215, 252))\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Mask for the left side of the image\n", "\n", "masked2, box2_img, rect_contour2, hierarchy2 = pcv.rectangle_mask(img=img, p1=(1, 1), p2=(60, 252))\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Mask for the right side of the image\n", "\n", "masked3, box3_img, rect_contour3, hierarchy3 = pcv.rectangle_mask(img=img, p1=(240, 1), p2=(320, 254))\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Mask the bottom edge\n", "\n", "masked4, box4_img, rect_contour4, hierarchy4 = pcv.rectangle_mask(img=img, p1=(0, 251), p2=(320, 254))\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Combine the boxes to filter the edges and the car out of the photo\n", "\n", "# Inputs: \n", "# bin_img1 - Binary image data to be compared with bin_img2\n", "# bin_img2 - Binary image data to be compared with bin_img1\n", "bx12_img = pcv.logical_and(bin_img1=box1_img, bin_img2=box2_img)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bx123_img = pcv.logical_and(bin_img1=bx12_img, bin_img2=box3_img)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bx1234_img = pcv.logical_and(bin_img1=bx123_img, bin_img2=box4_img)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "edge_masked_img = pcv.apply_mask(rgb_img=masked_erd, mask=bx1234_img, mask_color='black')\n" ] },
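{ "cell_type": "markdown", "metadata": {}, "source": [ "The four rectangle masks above can also be built and combined in a loop. The cell below is an optional sketch that is equivalent to the cells above (the coordinates are the same); it is shown only as a more compact pattern, not an additional processing step. The names `regions` and `combined_box` are illustrative.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch: build and combine the four edge masks in a loop.\n", "# The coordinates match the four pcv.rectangle_mask calls above.\n", "regions = [((110, 185), (215, 252)),  # bottom of the image\n", "           ((1, 1), (60, 252)),       # left side\n", "           ((240, 1), (320, 254)),    # right side\n", "           ((0, 251), (320, 254))]    # bottom edge\n", "combined_box = None\n", "for p1, p2 in regions:\n", "    _, box_img, _, _ = pcv.rectangle_mask(img=img, p1=p1, p2=p2)\n", "    combined_box = box_img if combined_box is None else pcv.logical_and(bin_img1=combined_box, bin_img2=box_img)\n" ] },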
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Identify objects\n", "\n", "# Inputs:\n", "# img - RGB or grayscale image data for plotting\n", "# mask - Binary mask used for detecting contours\n", "id_objects, obj_hierarchy = pcv.find_objects(img=img, mask=edge_masked_img)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define a region of interest (ROI)\n", "\n", "# Inputs: \n", "# img - RGB or grayscale image to plot the ROI on \n", "# x - The x-coordinate of the upper left corner of the rectangle \n", "# y - The y-coordinate of the upper left corner of the rectangle \n", "# h - The height of the rectangle \n", "# w - The width of the rectangle \n", "roi1, roi_hierarchy = pcv.roi.rectangle(img=edge_masked_img, x=100, y=45, h=130, w=100)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Decide which objects to keep\n", "\n", "# Inputs:\n", "# img - Image to display the kept objects on\n", "# roi_contour - Contour of the ROI, output from any ROI function\n", "# roi_hierarchy - Hierarchy of the ROI contour, output from any ROI function\n", "# object_contour - Contours of objects, output from the pcv.find_objects function\n", "# obj_hierarchy - Hierarchy of objects, output from the pcv.find_objects function\n", "# roi_type - 'partial' (default, keep objects partially inside the ROI), 'cutto', or \n", "#            'largest' (keep only the largest contour)\n", "roi_objects, hierarchy5, kept_mask, obj_area = pcv.roi_objects(img=edge_masked_img, roi_contour=roi1, \n", "                                                               roi_hierarchy=roi_hierarchy, object_contour=id_objects, \n", "                                                               obj_hierarchy=obj_hierarchy, roi_type='partial')\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We can perform a distance transform on a binary image \n", "# to assist with object segmentation \n", "\n", "# Inputs:\n", "# bin_img - Binary image data \n", "# distance_type - Type of distance: CV_DIST_L1, CV_DIST_L2, \n", "#                 or CV_DIST_C, which are 1, 2, and 3, respectively.\n", "# mask_size - Size of the distance transform mask (3 or 5). For the CV_DIST_L1 or \n", "#             CV_DIST_C distance types, the parameter is forced to 3 because \n", "#             a 3 x 3 mask gives the same result as 5 x 5 or any larger aperture.\n", "dist_img = pcv.distance_transform(bin_img=kept_mask, distance_type=1, mask_size=3)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Change the image to have 3 channels \n", "rgb_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)\n", "\n", "# Use the object_composition function to outline the plant \n", "\n", "# Inputs:\n", "# img - RGB or grayscale image data for plotting \n", "# contours - Contour list \n", "# hierarchy - Contour hierarchy array \n", "grp_object, img_mask = pcv.object_composition(img=rgb_img, contours=roi_objects, \n", "                                              hierarchy=hierarchy5)\n" ] },
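{ "cell_type": "markdown", "metadata": {}, "source": [ "At this point `kept_mask` should contain only plant pixels. A quick optional check (a sketch, not part of the original workflow) is to report what fraction of the frame the mask covers; an unexpectedly large value usually means background leaked through the segmentation.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Optional sanity check: fraction of the frame covered by the plant mask.\n", "plant_px = int(np.count_nonzero(kept_mask))\n", "print('Plant pixels: {} ({:.1f}% of the image)'.format(plant_px, 100.0 * plant_px / kept_mask.size))\n" ] },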
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The outline is too thick for an image this size. Change line_thickness from its default of 5 and rerun\n", "pcv.params.line_thickness = 2\n", "\n", "grp_object, img_mask = pcv.object_composition(img=rgb_img, contours=roi_objects, \n", "                                              hierarchy=hierarchy5)\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we can analyze the pixelwise signal values and the object shape attributes.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Perform NIR signal analysis \n", "\n", "# Inputs: \n", "# gray_img - 8- or 16-bit grayscale image data \n", "# mask - Binary mask made from selected contours \n", "# bins - Number of classes to divide the spectrum into \n", "# histplot - If True, plots the histogram of intensity values \n", "nir_hist_img = pcv.analyze_nir_intensity(gray_img=img, mask=kept_mask, \n", "                                         bins=256, histplot=True)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Print out the NIR histogram that gets returned.\n", "pcv.print_image(img=nir_hist_img, filename=\"nir_hist.jpg\")\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inputs:\n", "# gray_img - Grayscale image data\n", "# obj - Single or grouped contour object (optional); if provided, the pseudocolored image gets cropped down to the region of interest\n", "# mask - Binary mask (optional) \n", "# background - Background color/type. Options are \"image\" (gray_img), \"white\", or \"black\". A mask must be supplied.\n", "# cmap - Colormap\n", "# min_value - Minimum value for the range of interest\n", "# max_value - Maximum value for the range of interest\n", "# dpi - Dots per inch for the image if printed out (optional; if dpi=None then the default is 100 dpi)\n", "# axes - If False then the title, x-axis, and y-axis won't be displayed (default axes=True)\n", "# colorbar - If False then the colorbar won't be displayed (default colorbar=True)\n", "\n", "# Pseudocolor the NIR grayscale image \n", "pseudocolored_img = pcv.visualize.pseudocolor(gray_img=img, obj=None, mask=kept_mask, cmap='viridis')\n", "\n", "# Plot with the option background='image'\n", "simple_pseudo_img = pcv.visualize.pseudocolor(gray_img=img, obj=None, mask=kept_mask, background=\"image\", \n", "                                              axes=False, colorbar=False, cmap='viridis')\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Perform shape analysis\n", "\n", "# Inputs:\n", "# img - RGB or grayscale image data \n", "# obj - Single or grouped contour object\n", "# mask - Binary image mask to use for moments analysis \n", "shape_imgs = pcv.analyze_object(img=rgb_img, obj=grp_object, mask=img_mask)\n" ] },
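{ "cell_type": "markdown", "metadata": {}, "source": [ "Before writing the results to disk (next cell), the stored measurements can also be inspected in memory through the `Outputs` class. The cell below is an optional sketch; because the exact nesting of `pcv.outputs.observations` varies between PlantCV versions, it only lists the available keys.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch: inspect the stored measurements in memory. The structure\n", "# of pcv.outputs.observations depends on the PlantCV version (traits may be\n", "# nested under a sample name), so only the top-level keys are printed here.\n", "print(list(pcv.outputs.observations.keys()))\n" ] },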
\n", "pcv.print_results(filename='nir_tutorial_results.txt')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To view and/or download the text file output (saved in JSON format)...\n", "1) To see the text file with data that got saved out, click “File” tab in top left corner.\n", "2) Click “Open…”\n", "3) Open the file named “nir_tutorial_results.txt”\n", "\n", "Check out documentation on how to [convert JSON](https://plantcv.readthedocs.io/en/latest/tools/#convert-output-json-data-files-to-csv-tables) format output into table formatted output. Depending on the analysis steps a PlantCV user may have two CSV files (single value traits and multivalue traits). \n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 0 }