{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Watershed Segmentation \n", "\n", "There is a PlantCV function is based on code contributed by Suxing Liu, Arkansas State University.\n", "For more information see [https://github.com/lsx1980/Leaf_count](https://github.com/lsx1980/Leaf_count).\n", "This function uses the watershed algorithm to detect boundary of objects.\n", "Needs a mask file which specifies area which is object is white, and background is black.\n", "Requires cv2 version 3.0+" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from plantcv import plantcv as pcv\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class options:\n", " def __init__(self):\n", " self.image = \"img/tutorial_images/segmentation/arabidopsis.jpg\"\n", " self.debug = \"plot\"\n", " self.writeimg= False \n", " self.outdir = \".\"\n", "# Get options\n", "args = options()\n", "\n", "# Set debug to the global parameter \n", "pcv.params.debug = args.debug\n", "\n", "# Read image\n", "\n", "# Inputs:\n", "# filename - Image file to be read in \n", "# mode - Return mode of image; either 'native' (default), 'rgb', 'gray', or 'csv' \n", "\n", "img, path, filename = pcv.readimage(filename=args.image)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Crop the image down to focus on just one plant \n", "crop_img = img[525:900,2030:2400]\n", "# Print it out to see \n", "pcv.plot_image(crop_img)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Convert image from RGB color space to LAB and keep only the \n", "# green-magenta channel \n", "\n", "# Inputs:\n", "# rgb_img = image object, RGB color space\n", "# channel = color subchannel ('l' = lightness, 'a' = green-magenta , 'b' = blue-yellow)\n", "\n", "a = pcv.rgb2gray_lab(rgb_img=crop_img, channel='a')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set a binary threshold on the image \n", "\n", "# Inputs:\n", "# gray_img = img object, grayscale\n", "# threshold = threshold value (0-255)\n", "# max_value = value to apply above threshold (usually 255 = white)\n", "# object_type = light or dark\n", "# - If object is light then standard thresholding is done\n", "# - If object is dark then inverse thresholding is done\n", "\n", "img_binary = pcv.threshold.binary(gray_img=a, threshold=110, max_value=255, object_type='dark')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Find objects\n", "\n", "# Inputs:\n", "# img = image that the objects will be overlayed\n", "# mask = what is used for object detection\n", "\n", "id_objects, obj_hierarchy = pcv.find_objects(img=crop_img, mask=img_binary)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Combine objects \n", "\n", "# Inputs:\n", "# img = RGB or grayscale image data for plotting \n", "# contours = Contour list \n", "# hierarchy = Contour hierarchy array \n", "\n", "obj, mask = pcv.object_composition(img=crop_img, contours=id_objects, hierarchy=obj_hierarchy)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Appy mask\n", "\n", "# Inputs:\n", "# rgb_img = RGB image data \n", "# mask = Binary mask image data \n", "# mask_color = 'white' or 'black' \n", "\n", "masked = pcv.apply_mask(rgb_img=crop_img, mask=mask, mask_color=\"black\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Use watershed segmentation \n", "\n", "# Inputs:\n", "# rgb_img = RGB image data \n", "# mask = Binary image, single channel, object in white and background black\n", "# distance = Minimum distance of local maximum, lower values are more sensitive, \n", "# and segments more objects (default: 10)\n", "\n", "analysis_images = pcv.watershed_segmentation(rgb_img=masked, mask=mask, distance=15)\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The print_results function will take the measurements stored when running any (or all) of these functions, format, \n", "# and print an output text file for data analysis. The Outputs class stores data whenever any of the following functions\n", "# are ran: analyze_bound_horizontal, analyze_bound_vertical, analyze_color, analyze_nir_intensity, analyze_object, \n", "# fluor_fvfm, report_size_marker_area, watershed. If no functions have been run, it will print an empty text file \n", "pcv.print_results(filename='segmentation_tutorial_results.txt')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To view and/or download the text file output (saved in JSON format)...\n", "1) To see the text file with data that got saved out, click “File” tab in top left corner.\n", "2) Click “Open…”\n", "3) Open the file named “segmentation_tutorial_results.txt”\n", "\n", "Check out documentation on how to [convert JSON](https://plantcv.readthedocs.io/en/latest/tools/#convert-output-json-data-files-to-csv-tables) format output into table formatted output. Depending on the analysis steps a PlantCV user may have two CSV files (single value traits and multivalue traits). \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 0 }