{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ImageJ Ops\n", "\n", "[ImageJ Ops](https://imagej.net/ImageJ_Ops) is a library for N-dimensional image processing. The primary design goals of Ops are:\n", "\n", "1. __Ease of use.__ Ops provides a wealth of easy-to-use image processing operations (\"ops\").\n", "2. __Reusability.__ Ops extends Java's mantra of \"[write once, run anywhere](https://en.wikipedia.org/wiki/Write_once,_run_anywhere)\" to image processing algorithms. Algorithms written in the Ops framework are usable as-is from any [SciJava](https://imagej.net/SciJava)-compatible software project, such as [ImageJ](https://imagej.net/ImageJ), [CellProfiler](https://imagej.net/CellProfiler), [KNIME](https://imagej.net/KNIME), [OMERO](https://imagej.net/OMERO) and [Alida](https://imagej.net/Alida).\n", "3. __Reproducibility.__ Ops are deterministic: calling the same op twice with the same arguments yields the same result, always. And all ops are versioned.\n", "4. __Power.__ An op may consist of any number of typed input and output parameters. Ops may operate on arbitrary data structures, including images of N dimensions stored in a myriad of different ways: as files on disk, programmatically generated in memory, or in remote databases.\n", "5. __Extensibility.__ Ops provides a robust framework for writing new ops, and for extending existing ops in new directions. See the \"Extending ImageJ - Ops\" tutorial notebook for details.\n", "6. __Speed.__ The Ops framework provides a means to override any general-but-slow op with a faster-but-more-specific alternative, fully transparently to the user." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quick Start Guide" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Getting started" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Added new repo: imagej.public\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "method": "display_data" }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "29b9b0dd-d6c0-4f63-9627-988362f2c53d", "version_major": 2, "version_minor": 0 }, "method": "display_data" }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "ImageJ 2.0.0-rc-71 is ready to go." ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%classpath config resolver imagej.public https://maven.imagej.net/content/groups/public\n", "%classpath add mvn net.imagej imagej 2.0.0-rc-71\n", "ij = new net.imagej.ImageJ()\n", "\"ImageJ ${ij.getVersion()} is ready to go.\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's open up the friendly Clown image to use for our experiments. (For the [coulrophobic](https://en.wikipedia.org/wiki/Coulrophobia), feel free to use the [Fluorescent Cells](https://imagej.net/images/FluorescentCells.jpg) instead.)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "clown = ij.io().open(\"https://imagej.net/images/clown.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's a bit large, so let's scale it down. We'll use the `transform.scaleView` op. 
For succinctness, we'll write only `scaleView` rather than the fully qualified op name `transform.scaleView`. We'll use N-linear interpolation, since it tends to look pretty nice." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory\n", "scaleFactors = [0.5, 0.5, 1] // Reduce X and Y to 50%; leave C dimension alone.\n", "interpolationStrategy = new NLinearInterpolatorFactory()\n", "image = ij.op().run(\"scaleView\", clown, scaleFactors, interpolationStrategy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also make a single-channel 32-bit floating point version:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.FinalInterval\n", "w = image.dimension(0); h = image.dimension(1)\n", "slice = FinalInterval.createMinSize(0, 0, 0, w, h, 1)\n", "grayImage = ij.op().run(\"crop\", image, slice, true)\n", "image32 = ij.op().convert().float32(grayImage)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's define a function for showing multiple uniformly scaled images:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "script1547140967863$_run_closure1@24f40593" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.RandomAccessibleInterval\n", "tile = { images ->\n", " int[] gridLayout = images[0] in List ?\n", " [images[0].size, images.size] : // 2D images list\n", " [images.size] // 1D images list\n", " RandomAccessibleInterval[] rais = images.flatten()\n", " ij.notebook().mosaic(gridLayout, rais)\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Learning about available ops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can get a complete list of available ops using the `ops()` method. But it is a bit overwhelming, so here is a structured version organized by namespace:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
colocicq, kendallTau, maxTKendallTau, pValue, pearsons
convertbit, cfloat32, cfloat64, clip, copy, float32, float64, imageType, int16, int32, int64, int8, normalizeScale, scale, uint12, uint128, uint16, uint2, uint32, uint4, uint64, uint8
copyimg, imgLabeling, iterableInterval, labelingMapping, rai, type
createimg, imgFactory, imgLabeling, imgPlus, integerType, kernel, kernel2ndDerivBiGauss, kernelBiGauss, kernelDiffraction, kernelGabor, kernelGaborComplexDouble, kernelGaborDouble, kernelGauss, kernelLog, kernelSobel, labelingMapping, nativeType, object
deconvolveaccelerate, firstGuess, normalizationFactor, richardsonLucy, richardsonLucyCorrection, richardsonLucyTV, richardsonLucyUpdate
filteraddNoise, addPoissonNoise, allPartialDerivatives, bilateral, convolve, correlate, createFFTOutput, derivativeGauss, dog, fft, fftSize, frangiVesselness, gauss, hessian, ifft, linearFilter, max, mean, median, min, padFFTInput, padInput, padShiftFFTKernel, paddingIntervalCentered, paddingIntervalOrigin, partialDerivative, sigma, sobel, tubeness, variance
geomboundarySize, boundarySizeConvexHull, boundingBox, boxivity, centerOfGravity, centroid, circularity, compactness, contour, convexHull, convexity, eccentricity, feretsAngle, feretsDiameter, mainElongation, majorAxis, marchingCubes, maximumFeret, maximumFeretsAngle, maximumFeretsDiameter, medianElongation, minimumFeret, minimumFeretsAngle, minimumFeretsDiameter, minorAxis, roundness, secondMoment, size, sizeConvexHull, smallestEnclosingBoundingBox, solidity, spareness, sphericity, vertexInterpolator, verticesCount, verticesCountConvexHull, voxelization
haralickasm, clusterPromenence, clusterShade, contrast, correlation, differenceEntropy, differenceVariance, entropy, icm1, icm2, ifdm, maxProbability, sumAverage, sumEntropy, sumVariance, textureHomogeneity, variance
hoghog
imageascii, cooccurrenceMatrix, distancetransform, equation, fill, histogram, integral, invert, normalize, squareIntegral, watershed
imagemomentscentralMoment00, centralMoment01, centralMoment02, centralMoment03, centralMoment10, centralMoment11, centralMoment12, centralMoment20, centralMoment21, centralMoment30, huMoment1, huMoment2, huMoment3, huMoment4, huMoment5, huMoment6, huMoment7, moment00, moment01, moment10, moment11, normalizedCentralMoment02, normalizedCentralMoment03, normalizedCentralMoment11, normalizedCentralMoment12, normalizedCentralMoment20, normalizedCentralMoment21, normalizedCentralMoment30
labelingcca, merge
lbplbp2D
linalgrotate
logicand, conditional, equal, greaterThan, greaterThanOrEqual, lessThan, lessThanOrEqual, not, notEqual, or, xor
mathabs, add, and, arccos, arccosh, arccot, arccoth, arccsc, arccsch, arcsec, arcsech, arcsin, arcsinh, arctan, arctanh, assign, ceil, complement, complexConjugateMultiply, cos, cosh, cot, coth, csc, csch, cubeRoot, divide, exp, expMinusOne, floor, gamma, invert, leftShift, log, log10, log2, logOnePlusX, max, min, multiply, nearestInt, negate, or, power, randomGaussian, randomUniform, reciprocal, remainder, rightShift, round, sec, sech, signum, sin, sinc, sincPi, sinh, sqr, sqrt, step, subtract, tan, tanh, ulp, unsignedRightShift, xor, zero
morphologyblackTopHat, close, dilate, erode, extractHoles, fillHoles, floodFill, open, outline, thinGuoHall, thinHilditch, thinMorphological, thinZhangSuen, topHat
segmentdetectJunctions, detectRidges
statsgeometricMean, harmonicMean, integralMean, integralSum, integralVariance, kurtosis, leastSquares, max, mean, median, min, minMax, moment1AboutMean, moment2AboutMean, moment3AboutMean, moment4AboutMean, percentile, quantile, size, skewness, stdDev, sum, sumOfInverses, sumOfLogs, sumOfSquares, variance
tamuracoarseness, contrast, directionality
threadchunker
thresholdapply, huang, ij1, intermodes, isoData, li, localBernsenThreshold, localContrastThreshold, localMeanThreshold, localMedianThreshold, localMidGreyThreshold, localNiblackThreshold, localPhansalkarThreshold, localSauvolaThreshold, maxEntropy, maxLikelihood, mean, minError, minimum, moments, otsu, percentile, renyiEntropy, rosin, shanbhag, triangle, yen
topologyboxCount, eulerCharacteristic26N, eulerCharacteristic26NFloating, eulerCorrection
transformaddDimensionView, collapseNumericView, collapseRealView, collapseView, concatenateView, crop, dropSingletonDimensionsView, extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView, flatIterableView, hyperSliceView, interpolateView, intervalView, invertAxisView, offsetView, permuteCoordinatesInverseView, permuteCoordinatesView, permuteView, project, rasterView, rotateView, scaleView, shearView, stackView, subsampleView, translateView, unshearView, zeroMinView
zernikemagnitude, phase
<global>eval, help, identity, join, loop, map, op, run, slice
" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.OpUtils\n", "opsByNS = [:]\n", "ij.op().ops().each{op ->\n", " ns = OpUtils.getNamespace(op)\n", " name = OpUtils.stripNamespace(op)\n", " if (!opsByNS.containsKey(ns)) opsByNS.put(ns, name)\n", " else opsByNS.put(ns, opsByNS.get(ns) + ', ' + name)\n", "}\n", "opsByNS.put('', opsByNS.remove(null))\n", "ij.notebook().display(opsByNS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is a helpful `help` op which prints out information about an op. We can use it to better understand what sorts of operations are possible, and what kinds of arguments they expect:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(RandomAccessibleInterval out) =\n", "\tnet.imagej.ops.filter.gauss.DefaultGaussRA(\n", "\t\tRandomAccessibleInterval out,\n", "\t\tRandomAccessible in,\n", "\t\tdouble[] sigmas)\n", "\t(RandomAccessibleInterval out?) =\n", "\tnet.imagej.ops.filter.gauss.GaussRAISingleSigma(\n", "\t\tRandomAccessibleInterval out?,\n", "\t\tRandomAccessibleInterval in,\n", "\t\tdouble sigma,\n", "\t\tOutOfBoundsFactory outOfBounds?)\n", "\t(RandomAccessibleInterval out?) =\n", "\tnet.imagej.ops.filter.gauss.DefaultGaussRAI(\n", "\t\tRandomAccessibleInterval out?,\n", "\t\tRandomAccessibleInterval in,\n", "\t\tdouble[] sigmas,\n", "\t\tOutOfBoundsFactory outOfBounds?)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help('gauss')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic Example of Using Ops - Gaussian Filter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The information above tells us that there are two available `gauss` operations: one which takes a list (`double[]`) of sigmas, and another which takes a single (`double`) sigma.\n", "\n", "There are a couple of important subtleties here:\n", "\n", "1. Some arguments have a `?` suffix, which means they are optional. If you leave them off, something reasonable will be done by default. In this case, the `out` and `outOfBounds` arguments are optional. You can leave them off ___in right-to-left order___. E.g., in the case above, calling `gauss(a, b, c)` means `gauss(out, in, sigmas)` and _not_ `gauss(in, sigmas, outOfBounds)`. If you want to omit the `out` argument while passing the `outOfBounds` argument, you can pass `null` for `out`—i.e., `gauss(null, in, sigmas, outOfBounds)`.\n", "\n", "2. The `out` argument is both an input _and_ an output. Hence, whatever object you pass as the `out` parameter will also be returned as the output. In this case, since `out` is optional, if you do not pass the `out` parameter (or you pass `null`), then one will be synthesized and returned.\n", "\n", "Lastly, you might be wondering what a `RandomAccessibleInterval` is. For now, it is enough to know it is an ImageJ image. See the \"N-dimensional image processing\" tutorial for a more detailed introduction.\n", "\n", "Let's try calling the first `gauss` op:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
imagehorizontal blurvertical blurchannel blur
" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// Smudge it up horizontally.\n", "double[] horizSigmas = [8, 0, 0]\n", "horizGauss = ij.op().filter().gauss(image, horizSigmas)\n", "\n", "// And now vertically.\n", "double[] vertSigmas = [0, 8, 0]\n", "vertGauss = ij.op().filter().gauss(image, vertSigmas)\n", "\n", "// We can also blur the channels.\n", "double[] channelSigmas = [0, 0, 1]\n", "channelGauss = ij.op().filter().gauss(image, channelSigmas)\n", "\n", "ij.notebook().display([[\"image\":image, \"horizontal blur\":horizGauss, \"vertical blur\":vertGauss, \"channel blur\":channelGauss]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are two main ways to execute an op:\n", "\n", "1. The _type-safe_ way using a built-in method:\n", " ```java\n", " ij.op().foo().bar(...)\n", " ```\n", " This way tends to be nicer from type-safe languages like Java.\n", " \n", "2. The _dynamic_ way using the `run` method:\n", " ```java\n", " ij.op().run(\"foo.bar\", ...)\n", " ```\n", " This way tends to be nicer from dynamically typed scripting languages.\n", "\n", "With the dynamic approach, passing the namespace is optional, so you can also write:\n", "```java\n", "ij.op().run(\"bar\", ...)\n", "```\n", "If there are ops with the same name across multiple namespaces (e.g., `math.and` and `logic.and`), then passing the short name will consider the op signatures across all namespaces.\n", "\n", "You will see both syntaxes used throughout these tutorials." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generating Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Images have to come from somewhere. You can use `ij.io().open(...)` to read one in from an external source such as files on disk, like we did with previous notebooks. But there are other ways, too." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Creating an empty image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can create an empty (zero-filled) image using the `create.img` op." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(Img out) =\n", "\tnet.imagej.ops.create.img.CreateImgFromImg(\n", "\t\tImg in)\n", "\t(Img out) =\n", "\tnet.imagej.ops.create.img.CreateImgFromII(\n", "\t\tIterableInterval in)\n", "\t(Img out) =\n", "\tnet.imagej.ops.create.img.CreateImgFromRAI(\n", "\t\tRandomAccessibleInterval in)\n", "\t(Img out) =\n", "\tnet.imagej.ops.create.img.CreateImgFromDimsAndType(\n", "\t\tDimensions in1,\n", "\t\tNativeType in2,\n", "\t\tImgFactory factory?)\n", "\t(Img out) =\n", "\tnet.imagej.ops.create.img.CreateImgFromInterval(\n", "\t\tInterval in)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"create.img\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we will use the clown image as a reference image." 
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Original = 160 x 100 x 3 : UnsignedByteType\n", "Empty one = 160 x 100 x 3 : UnsignedByteType\n", "Bit image = 160 x 100 x 3 : BitType\n", "Small float = 25 x 17 x 2 : FloatType\n" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def info(name, image) {\n", " imageType = image.firstElement().getClass().getSimpleName()\n", " name + \" = \" + image.dimension(0) + \" x \" + image.dimension(1) + \" x \" + image.dimension(2) + \" : \" +\n", " imageType + \"\\n\"\n", "}\n", "\n", "// Create an empty image of the same size as an existing image.\n", "empty = ij.op().run(\"create.img\", image)\n", "\n", "// Create an empty image based on another image, but of different type.\n", "import net.imglib2.type.logic.BitType\n", "bitType = ij.op().run(\"create.img\", image, new BitType())\n", "\n", "// Create an image from scratch.\n", "import net.imglib2.type.numeric.real.FloatType\n", "smallFloat = ij.op().run(\"create.img\", [25, 17, 2], new FloatType())\n", "\n", "info(\"Original \", image) +\n", "info(\"Empty one\", empty) +\n", "info(\"Bit image\", bitType) +\n", "info(\"Small float\", smallFloat)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Copying an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can easily make a copy of an image using ops from the `copy` namespace." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(RandomAccessibleInterval out?) =\n", "\tnet.imagej.ops.copy.CopyRAI(\n", "\t\tRandomAccessibleInterval out?,\n", "\t\tRandomAccessibleInterval in)" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"copy.rai\")" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "copy = ij.op().run(\"copy.rai\", image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generating an image from a formula" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `equation` op allows you to generate an image from a formula in JavaScript syntax.\n", "\n", "Such images can be useful for testing without needing an external image source, or a long and bulky list of numbers.\n", "This `Op` will execute the equation on each pixel of the image. Within the equation the current location can be retrieved via the variable `p[]` (see example equation below)." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(IterableInterval out) =\n", "\tnet.imagej.ops.image.equation.DefaultCoordinatesEquation(\n", "\t\tIterableInterval out,\n", "\t\tUnaryFunctionOp in)\n", "\t(IterableInterval out?) 
=\n", "\tnet.imagej.ops.image.equation.DefaultEquation(\n", "\t\tIterableInterval out?,\n", "\t\tString in)\n", "\t(IterableInterval out) =\n", "\tnet.imagej.ops.image.equation.DefaultXYEquation(\n", "\t\tIterableInterval out,\n", "\t\tDoubleBinaryOperator in)" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"equation\")" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.type.numeric.integer.UnsignedByteType\n", "sinusoid = ij.op().run(\"create.img\", [150, 100], new UnsignedByteType())\n", "formula = \"63 * (Math.cos(0.3*p[0]) + Math.sin(0.3*p[1])) + 127\"\n", "ij.op().image().equation(sinusoid, formula)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wrapping an array as an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is no op yet to create an image by wrapping an existing array of values. For now, you can use ImgLib2 utility methods for that:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// Here we have a gradient ramp in an array.\n", "w = 160; h = 96\n", "byte[] pixels = new byte[w * h]\n", "for (y in 0..h-1) {\n", " for (x in 0..w-1) {\n", " pixels[y * w + x] = x + y\n", " }\n", "}\n", "\n", "// Wrap the array into an image.\n", "import net.imglib2.img.array.ArrayImgs\n", "ramp = ArrayImgs.unsignedBytes(pixels, w, h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computing Statistics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `stats` package has routines for computing numerical statistics.\n", "\n", "Here is a rundown of many of them:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
geometricMean106.86402515788897
harmonicMean60.81451530914875
kurtosis2.250389182944452
max252.9996057999923
mean130.35596075261444
median129.34019425677636
min1.293813403205192
moment1AboutMean-2.877214910768089E-13
moment2AboutMean3982.1563522625042
moment3AboutMean-10221.960619118927
moment4AboutMean3.569046079615442E7
size15000.0
skewness-0.04067366532821499
stdDev63.106432691542885
sum1955339.4112892165
sumOfInverses246.65164103911627
sumOfLogs70073.35850104403
sumOfSquares3.146224928399938E8
variance3982.4218470523483
" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sinusoid32 = ij.op().run(\"create.img\", [150, 100])\n", "formula = \"63 * (Math.cos(0.3*p[0]) + Math.sin(0.3*p[1])) + 127\"\n", "ij.op().image().equation(sinusoid32, formula)\n", "\n", "ij.notebook().display([\n", " \"geometricMean\": ij.op().stats().geometricMean(sinusoid32).toString(),\n", " \"harmonicMean\": ij.op().stats().harmonicMean(sinusoid32).toString(),\n", " \"kurtosis\": ij.op().stats().kurtosis(sinusoid32).toString(),\n", " \"max\": ij.op().stats().max(sinusoid32).toString(),\n", " \"mean\": ij.op().stats().mean(sinusoid32).toString(),\n", " \"median\": ij.op().stats().median(sinusoid32).toString(),\n", " \"min\": ij.op().stats().min(sinusoid32).toString(),\n", " \"moment1AboutMean\": ij.op().stats().moment1AboutMean(sinusoid32).toString(),\n", " \"moment2AboutMean\": ij.op().stats().moment2AboutMean(sinusoid32).toString(),\n", " \"moment3AboutMean\": ij.op().stats().moment3AboutMean(sinusoid32).toString(),\n", " \"moment4AboutMean\": ij.op().stats().moment4AboutMean(sinusoid32).toString(),\n", " \"size\": ij.op().stats().size(sinusoid32).toString(),\n", " \"skewness\": ij.op().stats().skewness(sinusoid32).toString(),\n", " \"stdDev\": ij.op().stats().stdDev(sinusoid32).toString(),\n", " \"sum\": ij.op().stats().sum(sinusoid32).toString(),\n", " \"sumOfInverses\": ij.op().stats().sumOfInverses(sinusoid32).toString(),\n", " \"sumOfLogs\": ij.op().stats().sumOfLogs(sinusoid32).toString(),\n", " \"sumOfSquares\": ij.op().stats().sumOfSquares(sinusoid32).toString(),\n", " \"variance\": ij.op().stats().variance(sinusoid32).toString()\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Math on Image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the `math` namespace, Ops provides traditional mathematical operations such as `add`, `subtract`, `multiply` and `divide`. These operations are overloaded in several ways:\n", "\n", "* Operate pixelwise between two images—e.g., `math.add(image1, image2)` when `image1` and `image2` have the same dimensions.\n", "* Operate between an image and a constant—e.g., `math.add(image, 5)` to add 5 to each sample of `image`.\n", "* Operate between two numerical values—e.g., `math.add(3, 5)` to compute the sum of 3 and 5.\n", "\n", "Some `math` ops are also already heavily optimized, since we used the `math.add` op as a testbed to validate that Ops could perform as well or better than ImageJ 1.x does." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "image1 range = (0.0, 254.0)\n", "image2 range = (0.020272091031074524, 255.97271728515625)\n" ] }, { "data": { "text/html": [ "
image1image2
" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// Prepare a couple of equally sized images.\n", "import net.imglib2.type.numeric.real.FloatType\n", "image1 = ij.op().run(\"create.img\", [160, 96], new FloatType())\n", "image2 = ij.op().run(\"copy.rai\", image1)\n", "\n", "// Gradient toward bottom right.\n", "ij.op().image().equation(image1, \"p[0] + p[1]\")\n", "minMax1 = ij.op().stats().minMax(image1)\n", "println(\"image1 range = (\" + minMax1.getA() + \", \" + minMax1.getB() + \")\")\n", "\n", "// Sinusoid.\n", "ij.op().image().equation(image2, \"64 * (Math.sin(0.1 * p[0]) + Math.cos(0.1 * p[1])) + 128\")\n", "minMax2 = ij.op().stats().minMax(image2)\n", "println(\"image2 range = (\" + minMax2.getA() + \", \" + minMax2.getB() + \")\")\n", "\n", "ij.notebook().display([[\"image1\":image1, \"image2\":image2]])" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "Let's test `math.add(image, number)`:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "addImage = image1 // Try also with image2!\n", "tile([\n", " addImage,\n", " ij.op().run(\"math.add\", ij.op().run(\"copy.rai\", addImage), 60),\n", " ij.op().run(\"math.add\", ij.op().run(\"copy.rai\", addImage), 120),\n", " ij.op().run(\"math.add\", ij.op().run(\"copy.rai\", addImage), 180)\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how we had to make a copy of the source image for each `add(image, number)` above? This is because right now, the best-matching `math.add` op is an _inplace_ operation, modifying the source image. Ops is still young, and needs more fine tuning! 
In the meantime, watch out for details like this.\n", "\n", "Now we'll try `math.add(image1, image2)` and `math.subtract(image1, image2)`:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sum = ij.op().run(\"math.add\", image1, image2)\n", "diff = ij.op().run(\"math.subtract\", image1, image2)\n", "tile([sum, diff])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is `math.multiply(image1, image2)`:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().run(\"math.multiply\", image1, image2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally `math.divide(image1, image2)`:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().run(\"math.divide\", image1, image2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluating expressions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ImageJ Ops offers a powerful expression evaluation op, built on SciJava's [Parsington](https://github.com/scijava/parsington) library:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(Object out) =\n", "\tnet.imagej.ops.eval.DefaultEval(\n", "\t\tString in,\n", "\t\tMap vars?)" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"eval\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Operators in the expression map to ops as follows:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
OperatorOp name
||logic.or
>>>math.unsignedRightShift
+math.add
==logic.equal
&&logic.and
>logic.greaterThan
>=logic.greaterThanOrEqual
<<math.leftShift
!=logic.notEqual
-math.negate
+identity
%math.remainder
/math.divide
&math.and
|math.or
-math.subtract
>>math.rightShift
^math.power
*math.multiply
<logic.lessThan
<=logic.lessThanOrEqual
" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.eval.OpEvaluator\n", "ij.notebook().display(new OpEvaluator().getOpMap().entrySet().stream().map{\n", " entry -> [\"Operator\": entry.getKey().toString(), \"Op name\": entry.getValue()]\n", "}.collect())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also call any op in an `eval` statement as a function, using familiar function syntax.\n", "\n", "The following is an example of the `eval` op being used to compute a [Difference of Gaussians](https://en.wikipedia.org/wiki/Difference_of_Gaussians). Inputs within the formula may be accessed via the `Map vars?` argument of the eval function and the key of the map corresponds to the name of the variable that can be used in the formula." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dogFormula = \"gauss(image, [10, 10, 1]) - gauss(image, [5, 5, 1])\"\n", "dog = ij.op().eval(dogFormula, [\"image\": image32])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Converting between image types" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial illustrates ways to convert between pixel types." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dogFormula = \"gauss(image, [10, 10, 1]) - gauss(image, [5, 5, 1])\"\n", "dog = ij.op().eval(dogFormula, [\"image\": image32])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Did you notice in the previous cell that we computed the gauss on a `float32` version of the image? If we do not do this, the result will be wrong:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
dogWrongdogConverted
" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dogWrong = ij.op().eval(dogFormula, [\"image\": grayImage])\n", "dogConverted = ij.op().convert().uint8(dog)\n", "ij.notebook().display([[\"dogWrong\":dogWrong, \"dogConverted\":dogConverted]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, a DoG computed on the original `uint8` version of the image looks essentially the same as the `float32` DoG converted to `uint8`.\n", "\n", "The issue is that `uint8` images have only 8 bits per channel, with values ranging from 0 to 255. So while the equation is dutifully evaluated, non-integer values are truncated, and then only the lowest 8 bits are retained.\n", "\n", "Using a richer image type like `float32` avoids the problem. The `float32` type uses 32-bit [IEEE floating point](https://en.wikipedia.org/wiki/IEEE_floating_point) numbers for its samples, which an approximate range of\n", "$-3.4 \\times 10^{38}$\n", "to\n", "$3.4 \\times 10^{38}$.\n", "This type requires four times as much memory as `uint8`, but math errors will accumulate much less severely in common cases.\n", "\n", "
\n", "\n", "For those familiar with [ImageJ 1.x](https://imagej.net/ImageJ1), `float32` corresponds to ImageJ 1.x's \"32-bit\" type in the Image → Type menu, and is typically what people choose when performing sequences of mathematical operations.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Built-in image types" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In ImageJ Ops, many image types are available:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[bit, cfloat32, cfloat64, float32, float64, int16, int32, int64, int8, uint12, uint128, uint16, uint2, uint32, uint4, uint64, uint8]" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().ops().findAll{op ->\n", " op.startsWith(\"convert.big\") ||\n", " op.startsWith(\"convert.bit\") ||\n", " op.startsWith(\"convert.cfloat\") ||\n", " op.startsWith(\"convert.float\") ||\n", " op.startsWith(\"convert.int\") ||\n", " op.startsWith(\"convert.uint\")\n", "}.collect{op -> op[8..-1]}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In some circumstances, one of the above types is more appropriate than `float32`. For example, if we are squaring an image, we could use `uint16`, since the square of a number from 0 to 255 will always be in the range of 0 to 65535. It may sometimes make sense to use a high-precision integer type such as `uint128` rather than a potentially lossy floating point type like `float32`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Methods of type conversion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ops provides four built-in ways of converting image values across types:\n", "\n", "* `convert.copy` tranfers the values directly. This is the fastest method, but there is no bounds checking, so you may see overflow, or in some cases even end up with invalid sample values.\n", "* `convert.clip` is like `copy` but includes bounds checking. Values less than the minimum are _clipped_ (also called _clamped_) to the minimum value, and values greater than the maximum are clipped to the maximum.\n", "* `convert.scale` multiplies sample values by a scale factor, such that the intensities fall into the same _relative histogram_ over the whole target type:\n", " $[sourceTypeMin, sourceTypeMax] \\to [destTypeMin, destTypeMax]$.\n", " E.g., converting from `uint8` to `uint16` linearly scales\n", " $[0, 255] \\to [0, 65535]$.\n", "* `convert.normalizeScale` multiplies sample values by a scale factor, such that intensities are distributed across the _full range_ of the target type:\n", " $[sourceDataMin, sourceDataMax] \\to [destTypeMin, destTypeMax]$.\n", " E.g., converting a `uint8` image with an actual minimum sample value of 16 and actual maximum sample value of 127 to `uint16` linearly scales\n", " $[16, 127] \\to [0, 65535]$.\n", "\n", "Below, we show how to convert images using these approaches." 
] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Original: min = 129, max = 640\n" ] }, { "data": { "text/html": [ "" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// Make a handy method for printing the image min/max.\n", "printMinMax = { prefix, image ->\n", " minMax = ij.op().stats().minMax(image)\n", " println(prefix + \": min = \" + minMax.getA() + \", max = \" + minMax.getB())\n", "}\n", "\n", "// Create an image with an illustrative range of intensities.\n", "import net.imglib2.type.numeric.integer.UnsignedShortType\n", "imageToConvert = ij.op().run(\"create.img\", [96, 64], new UnsignedShortType())\n", "formula = \"128 * (Math.cos(0.3*p[0]) + Math.sin(0.3*p[1])) + 384\"\n", "ij.op().image().equation(imageToConvert, formula)\n", "printMinMax(\"Original\", imageToConvert)\n", "imageToConvert" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will call the various convert ops. Please be aware that these ops currently have architectural issues, which we plan to address in a future version of ImageJ Ops. The code below works, but is subject to change until the ImageJ Ops 1.0.0 release." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "convert.copy: min = 0, max = 255\n", "convert.clip: min = 129, max = 255\n", "convert.normalizeScale: min = 0, max = 255\n" ] }, { "data": { "text/html": [ "" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.Ops\n", "import net.imagej.ops.special.computer.Computers\n", "import net.imglib2.type.numeric.integer.UnsignedByteType\n", "\n", "convertMethods = [\n", " Ops.Convert.Copy.class,\n", " Ops.Convert.Clip.class,\n", " // HACK: convert.scale op is disabled here due to a bug in Ops.\n", "// Ops.Convert.Scale.class,\n", " Ops.Convert.NormalizeScale.class\n", "]\n", "\n", "convertedImages = []\n", "for (convertMethod in convertMethods) {\n", " // Create a uint8 destination image for the conversion.\n", " convertedImage = ij.op().run(\"create.img\", imageToConvert, new UnsignedByteType())\n", "\n", " // Look up the needed convert op for converting from source to destination.\n", " inType = imageToConvert.firstElement() // Type from which we are converting.\n", " outType = convertedImage.firstElement() // Type to which we are converting.\n", " convertOp = Computers.unary(ij.op(), convertMethod, outType, inType)\n", " \n", " // NB: Prepare the convert op to behave properly.\n", " // The need for these calls is hacky, and will be fixed in a future version of Ops.\n", " convertOp.checkInput(inType, outType)\n", " convertOp.checkInput(imageToConvert)\n", "\n", " // Run this convert op on every sample of our source image.\n", " ij.op().run(\"map\", convertedImage, imageToConvert, convertOp)\n", " methodName = convertMethod.getField(\"NAME\").get(null)\n", " printMinMax(methodName, convertedImage)\n", " convertedImages.add(convertedImage)\n", "}\n", "\n", "tile(convertedImages)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code above makes use of the `map` op to execute the appropriate `convert` op on each element of a collection. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Filtering" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook demonstrates various image filtering operations. 
There are lots of image filtering operations available in the `filter` namespace:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[addNoise, addPoissonNoise, allPartialDerivatives, bilateral, convolve, correlate, createFFTOutput, derivativeGauss, dog, fft, fftSize, frangiVesselness, gauss, hessian, ifft, linearFilter, max, mean, median, min, padFFTInput, padInput, padShiftFFTKernel, paddingIntervalCentered, paddingIntervalOrigin, partialDerivative, sigma, sobel, tubeness, variance]" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().ops().findAll{op ->\n", " op.startsWith(\"filter.\")\n", "}.collect{op -> op[7..-1]}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a demonstration of some of them:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(RealType out) =\n", "\tnet.imagej.ops.filter.addPoissonNoise.AddPoissonNoiseRealType(\n", "\t\tRealType out,\n", "\t\tRealType in,\n", "\t\tlong seed?)\n", "\t(IterableInterval out) =\n", "\tnet.imagej.ops.filter.addPoissonNoise.AddPoissonNoiseMap(\n", "\t\tIterableInterval out,\n", "\t\tIterableInterval in)" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"filter.addPoissonNoise\")" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "
imagepoissonNoisegaussSobel
medianminmax
" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "imageToFilter = image32\n", "radius = 3\n", "\n", "// We will use a 3x3 diamond as our neighborhood here.\n", "import net.imglib2.algorithm.neighborhood.DiamondShape\n", "shape = new DiamondShape(radius)\n", "\n", "// Add Poisson noise.\n", "addPoissonNoise = ij.op().run(\"create.img\", imageToFilter)\n", "ij.op().filter().addPoissonNoise(addPoissonNoise, imageToFilter)\n", "\n", "// Gaussian blur.\n", "gauss = ij.op().filter().gauss(imageToFilter, radius)\n", "\n", "// Median filter.\n", "median = ij.op().run(\"create.img\", imageToFilter)\n", "ij.op().filter().median(median, imageToFilter, shape)\n", "\n", "// Min filter.\n", "min = ij.op().run(\"create.img\", imageToFilter)\n", "ij.op().filter().min(min, imageToFilter, shape)\n", "\n", "// Max filter.\n", "max = ij.op().run(\"create.img\", imageToFilter)\n", "ij.op().filter().max(max, imageToFilter, shape)\n", "\n", "// Sobel filter.\n", "sobel = ij.op().filter().sobel(imageToFilter)\n", "\n", "// Display the results side by side.\n", "ij.notebook().display([\n", " [[\"image\":imageToFilter, \"poissonNoise\":addPoissonNoise, \"gauss\":gauss, \"Sobel\":sobel]],\n", " [[\"median\":median,\"min\":min,\"max\":max]]\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's take a special look at the Difference of Gaussians op, `filter.dog`:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Image features" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
dogevalDoG
" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "imageToProcess = image32\n", "sigma1 = 5\n", "sigma2 = 10\n", "\n", "// Difference of Gaussians.\n", "dog = ij.op().filter().dog(imageToProcess, sigma1, sigma2) // gauss(sigma2) - gauss(sigma1)\n", "\n", "// We can also use eval to perform the DoG.\n", "vars = [\n", " \"image\": imageToProcess,\n", " \"sigma1\": [sigma1, sigma1],\n", " \"sigma2\": [sigma2, sigma2]\n", "]\n", "evalDoG = ij.op().eval(\"gauss(image, sigma2) - gauss(image, sigma1)\", vars)\n", "\n", "ij.notebook().display([[\"dog\":dog, \"evalDoG\":evalDoG]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Frangi vesselness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This plugin implements the algorithm for detection of vessel- or tube-like structures in 2D and 3D images described Frangi et al 1998. Example is shown below. Here we use an [MRA image of the brain](https://commons.wikimedia.org/w/index.php?curid=5930234) (By SBarnes - Own work, CC BY-SA 3.0)." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[INFO] Populating metadata\n", "[INFO] Populating metadata\n" ] }, { "data": { "text/html": [ "" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "input = ij.io().open(\"https://upload.wikimedia.org/wikipedia/commons/6/66/Mra-mip.jpg\")" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(RandomAccessibleInterval out) =\n", "\tnet.imagej.ops.filter.vesselness.DefaultFrangi(\n", "\t\tRandomAccessibleInterval out,\n", "\t\tRandomAccessibleInterval in,\n", "\t\tdouble[] spacing,\n", "\t\tint scale)" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"filter.frangiVesselness\")" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
Scale 2Scale 5Scale 8Scale 13Scale 21
" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.img.array.ArrayImgs\n", "\n", "// spacing refers to the physical distance between pixels. The default setting is {1, 1, 1...}\n", "spacings = [1, 1, 1]\n", "\n", "//parse the strings\n", "dims = new long[input.numDimensions()]\n", "input.dimensions(dims)\n", "\n", "// scale refers the the pixel distance at which the filter operates. Larger scales measure larger structures.\n", "filtered = [:]\n", "for (scale in [2, 5, 8, 13, 21])\n", " filtered.put(\"Scale \" + scale, ij.op().run(\"filter.frangiVesselness\", ArrayImgs.doubles(dims), input, spacings, scale))\n", "ij.notebook().display([filtered])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convolution, deconvolution and Fourier transforms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Convolution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Convolution](https://en.wikipedia.org/wiki/Convolution) is a very helpful filter for many circumstances. Below is an example of how to use the convolution filter." ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
basesmall kernelbig kernelsmall convolvedbig convolved
" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.type.numeric.real.FloatType;\n", "\n", "// create the sample image\n", "base = ij.op().run(\"create.img\", [150, 100], new FloatType())\n", "formula = \"p[0]^2 * p[1]\"\n", "ij.op().image().equation(base, formula)\n", "\n", "// create kernel\n", "kernel_small = ij.op().run(\"create.img\", [3,3], new FloatType())\n", "kernel_big = ij.op().run(\"create.img\", [20,20], new FloatType())\n", "\n", "ij.op().image().equation(kernel_small, \"p[0]\")\n", "ij.op().image().equation(kernel_big, \"p[0]\")\n", "\n", "// convolved\n", "convolved_small = ij.op().filter().convolve(base, kernel_small)\n", "convolved_big = ij.op().filter().convolve(base, kernel_big)\n", "\n", "ij.notebook().display([[\n", " \"base\":base,\n", " \"small kernel\":kernel_small,\n", " \"big kernel\":kernel_big,\n", " \"small convolved\":convolved_small,\n", " \"big convolved\":convolved_big\n", "]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ImageJ can automatically select which filter to use according to the size of kernel. If kernel is small enough, then NaiveF (brute force way, less efficient, but useful when kernel is small) is used, otherwise, fast fourier transforms is used." ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[net.imagej.ops.filter.convolve.ConvolveNaiveF, net.imagej.ops.filter.convolve.ConvolveFFTF]" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "op = ij.op().op(\"filter.convolve\", base, kernel_small)\n", "op2 = ij.op().op(\"filter.convolve\", base, kernel_big)\n", "\n", "[op.getClass().getName(), op2.getClass().getName()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Deconvolution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deconvolution in ImageJ is implemented using [Richardson–Lucy deconvolution algorithm](https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution)." ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
small kernelbig kernel
" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.util.Util\n", "import net.imglib2.outofbounds.OutOfBoundsConstantValueFactory\n", "import net.imagej.ops.deconvolve.RichardsonLucyF\n", "\n", "base_deconvolved = ij.op().run(RichardsonLucyF.class, convolved_small, kernel_small, null, new OutOfBoundsConstantValueFactory<>(Util.getTypeFromInterval(kernel_small).createVariable()), 10)\n", "base_deconvolved_big = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, null, new OutOfBoundsConstantValueFactory<>(Util.getTypeFromInterval(kernel_small).createVariable()), 10)\n", "\n", "ij.notebook().display([[\n", " \"small kernel\":base_deconvolved,\n", " \"big kernel\":base_deconvolved_big\n", "]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A small kernel can give a fairly accurate result. However, if the kernel is a big one, the result would be much better if we run the deconvolution process for many iterations. Here we are going to use the deconvolution filter for 50 times." ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.deconvolve.RichardsonLucyF\n", "base_deconvoled_big_iterate = ij.op().run(RichardsonLucyF.class,convolved_big,kernel_big,50)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to better dealing with edges, we can try to use non-circulant mode. With the same 50 iterations, the result shown below is apparently better than the one above" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.deconvolve.RichardsonLucyF\n", "base_deconvolved_big_noncirc = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, null, null,null, null,null,50,true,false )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What if the iterations take too long? In that case, we can try to minimize the time we spend by taking larger steps, hence saving the time by using accelerated algorithms. " ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.deconvolve.RichardsonLucyF\n", "base_deconvolved_big_acc_noncirc = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, null, null,null, null, null,50,true,true)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Fourier transforms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ImageJ Ops has forward and inverse Fourier transforms." ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
realimaginary
" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.converter.Converters\n", "import net.imglib2.type.numeric.real.FloatType\n", "\n", "// create image\n", "fft_in = ij.op().run(\"create.img\", [150, 100], new FloatType())\n", "formula = \"p[0]^2 + p[1]\"\n", "ij.op().image().equation(fft_in, formula)\n", "\n", "// apply fft\n", "fft_out = ij.op().run(\"filter.fft\", fft_in)\n", "\n", "// display the complex-valued result\n", "// Quantitatively, it is not very useful to do this, but we like pictures. :-)\n", "import net.imglib2.RandomAccessibleInterval\n", "fft_out_real = Converters.convert((RandomAccessibleInterval) fft_out, {i, o -> o.setReal(i.getRealFloat())}, new FloatType())\n", "fft_out_imaginary = Converters.convert((RandomAccessibleInterval) fft_out, {i, o -> o.setReal(i.getImaginaryFloat())}, new FloatType())\n", "ij.notebook().display([[\n", " \"real\": ij.notebook().display(fft_out_real, -1, 1),\n", " \"imaginary\": ij.notebook().display(fft_out_imaginary, -1, 1)\n", "]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The usage of inverse FFT is very similar to FFT. Here we are going to invert the result we got from the above example. The result is the same as the original image." ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.type.numeric.real.FloatType\n", "\n", "inverted = ij.op().run(\"create.img\", [150,100], new FloatType())\n", "ij.op().filter().ifft(inverted, fft_out)\n", "inverted" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Transforming Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Image transformations such as rotation, scaling and cropping are accomplished using ops of the `transform` namespace.\n", "\n", "Most ops of this namespace have the nice property of being _views_: they do not actually copy image samples, but rather wrap the image, offering a modified \"window\" into the original data. Using views helps to greatly reduce computer memory usage, at a very minimal time performance cost. If you need a deep copy of the image for some reason (e.g., if time performance is paramount, or if you want to modify the transformed image samples in-place without affecting other transformed versions of the image), you can still copy it using the `copy` namespace; see [Generating images](03_-_Generating_Images.ipynb) notebook for details." 
] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[addDimensionView, collapseNumericView, collapseRealView, collapseView, concatenateView, crop, dropSingletonDimensionsView, extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView, flatIterableView, hyperSliceView, interpolateView, intervalView, invertAxisView, offsetView, permuteCoordinatesInverseView, permuteCoordinatesView, permuteView, project, rasterView, rotateView, scaleView, shearView, stackView, subsampleView, translateView, unshearView, zeroMinView]" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().ops().findAll{op ->\n", " op.startsWith(\"transform.\")\n", "}.collect{op -> op[10..-1]}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Rotating an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `transform.rotateView` op rotates the image 90 degrees from one dimensional axis to another. " ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(IntervalView out) =\n", "\tnet.imagej.ops.transform.rotateView.IntervalRotateView(\n", "\t\tRandomAccessibleInterval in,\n", "\t\tint fromAxis,\n", "\t\tint toAxis)\n", "\t(MixedTransformView out) =\n", "\tnet.imagej.ops.transform.rotateView.DefaultRotateView(\n", "\t\tRandomAccessible in,\n", "\t\tint fromAxis,\n", "\t\tint toAxis)" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"rotateView\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is an example usage of the `transform.rotateView` op." ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Old bounds: (0, 0) - (159, 99)\n", "New bounds: (-99, 0) - (0, 159)\n" ] }, { "data": { "text/html": [ "" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// set parameter for rotate\n", "x = 0; y = 1; c = 2\n", "\n", "// define functions to see the bounds of an image\n", "bounds = {interval ->\n", " return \"(\" + interval.min(0) + \", \" + interval.min(1) + \") - \" +\n", " \"(\" + interval.max(0) + \", \" + interval.max(1) + \")\"\n", "}\n", "\n", "// Rotate the image (image, fromAxis, toAxis)\n", "rotated = ij.op().run(\"rotateView\", image, x, y) // 90 degrees clockwise\n", "//rotated = ij.op().run(\"rotateView\", image, y, x)// 90 degrees counter-clockwise\n", "//rotated = ij.op().run(\"rotateView\", image, x, c) // rotate through channels! WAT\n", "\n", "// The interval bounds have automatically changed!\n", "println(\"Old bounds: \" + bounds(image))\n", "println(\"New bounds: \" + bounds(rotated))\n", "\n", "rotated" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Cropping an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `transform.crop` op crops an image N-dimensionally. E.g., you can use it to create a substack of a 3D dataset, cut out irrelevant channels, or crop the XY planes as with 2D image processing software." 
] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(ImgPlus out) =\n", "\tnet.imagej.ops.transform.crop.CropImgPlus(\n", "\t\tImgPlus in1,\n", "\t\tInterval in2,\n", "\t\tboolean dropSingleDimensions?)\n", "\t(RandomAccessibleInterval out) =\n", "\tnet.imagej.ops.transform.crop.CropRAI(\n", "\t\tRandomAccessibleInterval in1,\n", "\t\tInterval in2,\n", "\t\tboolean dropSingleDimensions?)" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"crop\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below, we show two ways to crop an image: 1) with `transform.crop`; and 2) using `transform.intervalView`. The former translates the image back to the origin, while the latter does not." ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Eye bounds: (0, 0) - (39, 27)\n", "EyeView bounds: (75, 27) - (114, 54)\n" ] }, { "data": { "text/html": [ "
[image mosaic: eye | view]
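If you want the lazily evaluated view but re-anchored at the origin, the `transform.zeroMinView` op should do the trick; this is a sketch, assuming it behaves like ImgLib2's `Views.zeroMin`:

```groovy
// Wrap the offset view so that its minimum becomes (0, 0, 0) again, without copying pixels.
eyeViewZeroed = ij.op().run("zeroMinView", eyeView)
```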
" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.FinalInterval\n", "region = FinalInterval.createMinSize(75, 27, 0, 40, 28, 3)\n", "\n", "eye = ij.op().run(\"crop\", image, region)\n", "eyeView = ij.op().run(\"intervalView\", image, region)\n", "\n", "println(\"Eye bounds: \" + bounds(eye))\n", "println(\"EyeView bounds: \" + bounds(eyeView))\n", "ij.notebook().display([[\"eye\":eye, \"view\":eyeView]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scaling an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To perform [image scaling](https://en.wikipedia.org/wiki/Image_scaling), use the `transform.scaleView` op. You already saw it in action in the \"Getting started\" section, but here it is again—this time enlarging an image rather than reducing it.\n", "\n", "Just for fun, we compare three different interpolation strategies: [nearest neighbor](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation), [N-linear](https://en.wikipedia.org/wiki/Linear_interpolation), and [Lanczos](https://en.wikipedia.org/wiki/Lanczos_resampling)." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: nearest neighbor | N-linear | Lanczos]
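As an aside, if you only need integer-factor downsampling, no interpolation is required at all; here is a sketch assuming `transform.subsampleView` wraps ImgLib2's `Views.subsample`:

```groovy
// Keep every 2nd sample along every dimension (including channels), without interpolating.
subsampled = ij.op().run("subsampleView", image, 2)
```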
" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.interpolation.randomaccess.NearestNeighborInterpolatorFactory\n", "import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory\n", "import net.imglib2.interpolation.randomaccess.LanczosInterpolatorFactory\n", "\n", "scaleFactors = [4, 4, 1] // Enlarge X and Y by 4x; leave channel count the same.\n", "\n", "nearestNeighborEye = ij.op().run(\"scaleView\", eye, scaleFactors, new NearestNeighborInterpolatorFactory())\n", "nLinearEye = ij.op().run(\"scaleView\", eye, scaleFactors, new NLinearInterpolatorFactory())\n", "lanczosEye = ij.op().run(\"scaleView\", eye, scaleFactors, new LanczosInterpolatorFactory())\n", "\n", "ij.notebook().display([[\"nearest neighbor\":nearestNeighborEye, \"N-linear\":nLinearEye, \"Lanczos\":lanczosEye]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, some detail from the original image has been lost, since we scaled down and then back up again." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Padding an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `transform.intervalView` can also be used to expand the bounding box of an image. However, there is one catch: you must first decide what the out-of-bounds sample values will be. The `transform.extend` ops achieve this goal. If you forget to extend the image before padding it via `intervalView`, you will receive an exception when attempting to query any out-of-bounds samples.\n", "\n", "Note that the various `transform.extend` ops explicitly remove the bounding box of an image, expanding the defined sample values to infinity in all directions. In most circumstances, you will want to use `transform.intervalView` to rebound the image after extending it." ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView]" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().ops().findAll{op ->\n", " op.startsWith(\"transform.extend\")\n", "}.collect{op -> op[10..-1]}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are some side-by-side examples of what happens when you pad an image with these various approaches:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: border | mirror double | mirror single | periodic | random | value | zero]
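To see what "removing the bounding box" means in practice, here is a small sketch (assuming the `eye` image from above): the extended image happily answers for coordinates outside the original interval, which is exactly what `intervalView` relies on when padding.

```groovy
// The extended image is unbounded, so sampling outside the original region is fine.
extendedZero = ij.op().run("transform.extendZeroView", eye)
ra = extendedZero.randomAccess()
ra.setPosition([-5, -5, 0] as long[])
println(ra.get()) // prints the zero out-of-bounds value
```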
" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def pad(image, extended, t, r, b, l) {\n", " min = new long[image.numDimensions()]\n", " max = new long[image.numDimensions()]\n", " image.min(min)\n", " image.max(max)\n", " min[0] -= l; min[1] -= t; max[0] += r; max[1] += b\n", " return ij.op().run(\"intervalView\", extended, min, max)\n", "}\n", "\n", "// Define the top, right, bottom and left padding amounts.\n", "t = r = b = l = 20\n", "\n", "// Pad the image with different out-of-bounds strategies.\n", "\n", "extendedBorder = ij.op().run(\"transform.extendBorderView\", eye)\n", "paddedBorder = pad(eye, extendedBorder, t, r, b, l)\n", "\n", "extendedMirrorDouble = ij.op().run(\"transform.extendMirrorDoubleView\", eye)\n", "paddedMirrorDouble = pad(eye, extendedMirrorDouble, t, r, b, l)\n", "\n", "extendedMirrorSingle = ij.op().run(\"transform.extendMirrorSingleView\", eye)\n", "paddedMirrorSingle = pad(eye, extendedMirrorSingle, t, r, b, l)\n", "\n", "extendedPeriodic = ij.op().run(\"transform.extendPeriodicView\", eye)\n", "paddedPeriodic = pad(eye, extendedPeriodic, t, r, b, l)\n", "\n", "minValue = eye.firstElement().getMinValue().doubleValue()\n", "maxValue = eye.firstElement().getMaxValue().doubleValue()\n", "extendedRandom = ij.op().run(\"transform.extendRandomView\", eye, minValue, maxValue)\n", "paddedRandom = pad(eye, extendedRandom, t, r, b, l)\n", "\n", "value = eye.firstElement().copy(); value.set(100)\n", "extendedValue = ij.op().run(\"transform.extendValueView\", eye, value)\n", "paddedValue = pad(eye, extendedValue, t, r, b, l)\n", "\n", "extendedZero = ij.op().run(\"transform.extendZeroView\", eye)\n", "paddedZero = pad(eye, extendedZero, t, r, b, l)\n", "\n", "ij.notebook().display([[\n", " \"border\":paddedBorder,\n", " \"mirror double\":paddedMirrorDouble,\n", " \"mirror single\":paddedMirrorSingle,\n", " \"periodic\":paddedPeriodic,\n", " \"random\":paddedRandom,\n", " \"value\":paddedValue,\n", " \"zero\":paddedZero\n", "]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Slicing an image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `transform.hyperSliceView` op is used to cut out a portion of an image's dimensions—for example, to slice out a single plane. This notebook demonstrates how to use it in conjunction with a Hessian filter." ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4\n" ] }, { "data": { "text/html": [ "
[image mosaic: Derivative #0 | Derivative #1 | Derivative #2 | Derivative #3]
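The same op is handy for plain channel extraction, too. For instance, a quick sketch assuming the 3-channel `image` from earlier:

```groovy
// Fix dimension 2 (the channel axis) at position 1 to get a 2D view of the green channel.
greenChannel = ij.op().transform().hyperSliceView(image, 2, 1)
```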
" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// Hessian filter\n", "hessianComposite = ij.op().filter().hessian(image32)\n", "\n", "hessian = hessianComposite.interval\n", "println(hessian.dimension(2))\n", "\n", "stackIndex = hessian.numDimensions() - 1\n", "stackLength = hessian.dimension(stackIndex)\n", "\n", "hessianSlices = [:]\n", "for (i = 0; i < stackLength; i++)\n", " hessianSlices.put(\"Derivative #\" + i, ij.op().transform().hyperSliceView(hessian, stackIndex, i))\n", "\n", "ij.notebook().display([hessianSlices])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Thresholding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Thresholding](https://en.wikipedia.org/wiki/Thresholding_(image_processing)) is a technique for dividing an image into two (or more) classes of pixels, which are typically called \"foreground\" and \"background.\" " ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "threshImage = ij.op().run(\"crop\", image, slice, true)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ops has all the same global thresholding methods as ImageJ 1.x, as well as some local thresholding methods." ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[apply, huang, ij1, intermodes, isoData, li, localBernsenThreshold, localContrastThreshold, localMeanThreshold, localMedianThreshold, localMidGreyThreshold, localNiblackThreshold, localPhansalkarThreshold, localSauvolaThreshold, maxEntropy, maxLikelihood, mean, minError, minimum, moments, otsu, percentile, renyiEntropy, rosin, shanbhag, triangle, yen]" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().ops().findAll{op ->\n", " op.startsWith(\"threshold.\")\n", "}.collect{op -> op[10..-1]}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check out the global thresholding algorithms:" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic, row 1: huang | ij1 | intermodes | isodata | li | max entropy]
[image mosaic, row 2: max likelihood | mean | min error | minimum | moments | otsu]
[image mosaic, row 3: percentile | renyi entropy | rosin | shanbhag | triangle | yen]
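Each of these calls returns a binary mask directly. If you instead want the numeric threshold value a method computes, one way is to feed it an intensity histogram; this is a sketch, assuming the value-returning overload of `threshold.otsu` accepts a `Histogram1d`:

```groovy
// Compute the Otsu threshold value from the image's histogram.
histogram = ij.op().image().histogram(threshImage)
otsuValue = ij.op().threshold().otsu(histogram)
println(otsuValue)
```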
" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tHuang = ij.op().threshold().huang(threshImage)\n", "tIJ1 = ij.op().threshold().ij1(threshImage) // ImageJ 1.x calls this \"Default\"\n", "tIntermodes = ij.op().threshold().intermodes(threshImage)\n", "tIsoData = ij.op().threshold().isoData(threshImage)\n", "tLi = ij.op().threshold().li(threshImage)\n", "tMaxEntropy = ij.op().threshold().maxEntropy(threshImage)\n", "tMaxLikelihood = ij.op().threshold().maxLikelihood(threshImage)\n", "tMean = ij.op().threshold().mean(threshImage)\n", "tMinError = ij.op().threshold().minError(threshImage)\n", "tMinimum = ij.op().threshold().minimum(threshImage)\n", "tMoments = ij.op().threshold().moments(threshImage)\n", "tOtsu = ij.op().threshold().otsu(threshImage)\n", "tPercentile = ij.op().threshold().percentile(threshImage)\n", "tRenyiEntropy = ij.op().threshold().renyiEntropy(threshImage)\n", "tRosin = ij.op().threshold().rosin(threshImage)\n", "tShanbhag = ij.op().threshold().shanbhag(threshImage)\n", "tTriangle = ij.op().threshold().triangle(threshImage)\n", "tYen = ij.op().threshold().yen(threshImage)\n", "\n", "ij.notebook().display([\n", " [[\"huang\":tHuang, \"ij1\":tIJ1, \"intermodes\":tIntermodes, \"isodata\":tIsoData, \"li\":tLi, \"max entropy\":tMaxEntropy]],\n", " [[\"max likelihood\":tMaxLikelihood, \"mean\":tMean, \"min error\":tMinError, \"minimum\":tMinimum, \"moments\":tMoments, \"otsu\":tOtsu]],\n", " [[\"percentile\":tPercentile, \"renyi entropy\":tRenyiEntropy, \"rosin\":tRosin, \"shanbhag\":tShanbhag, \"triangle\":tTriangle, \"yen\":tYen]]\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try some local thresholding. This is a bit more complex, because local thresholding algorithms typically need a neighborhood, and often some other secondary parameters. We will take the Bernsen algorithm for a spin:" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Available operations:\n", "\t(IterableInterval out) =\n", "\tnet.imagej.ops.threshold.localBernsen.LocalBernsenThreshold(\n", "\t\tIterableInterval out,\n", "\t\tRandomAccessibleInterval in,\n", "\t\tShape shape,\n", "\t\tOutOfBoundsFactory outOfBoundsFactory?,\n", "\t\tdouble contrastThreshold,\n", "\t\tdouble halfMaxValue)" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ij.op().help(\"threshold.localBernsenThreshold\")" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: radius 1 | radius 3 | radius 5 | radius 8 | radius 12 | radius 15]
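The other `threshold.local*` ops follow the same pattern as Bernsen: an output mask, the input image, a neighborhood `Shape`, plus method-specific secondary parameters. For example, here is a sketch of a local mean threshold over a rectangular neighborhood (the trailing `0.0` is assumed to be the constant subtracted from the local mean):

```groovy
import net.imglib2.algorithm.neighborhood.RectangleShape
import net.imglib2.type.logic.BitType

localMeanBinary = ij.op().run("create.img", threshImage, new BitType())
ij.op().threshold().localMeanThreshold(localMeanBinary, threshImage,
    new RectangleShape(5, false), 0.0)
```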
" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.algorithm.neighborhood.HyperSphereShape\n", "\n", "// Secondary parameter values. These are wild guesses.\n", "contrastThreshold = 10\n", "halfMaxValue = 10\n", "\n", "// Let's try with a variety of neighborhood radii and compare side-by-side.\n", "import net.imglib2.type.logic.BitType\n", "radii = [1, 3, 5, 8, 12, 15]\n", "bernsenImages = [:]\n", "for (radius in radii) {\n", " binaryImage = ij.op().run(\"create.img\", threshImage, new BitType())\n", " ij.op().threshold().localBernsenThreshold(binaryImage, threshImage, new HyperSphereShape(radius),\n", " contrastThreshold, halfMaxValue)\n", " bernsenImages.put(\"radius \" + radius, binaryImage)\n", "}\n", "ij.notebook().display([bernsenImages])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Nice illustration of how much better local thresholding can be, eh? The limited scope when thresholding each pixel can go a long way toward correcting for issues like uneven illumination." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mathematical morphology" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Get ImageJ ready and prepare both grey image and binary image for processing." ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: gray | binary]
" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imglib2.algorithm.neighborhood.HyperSphereShape\n", "\n", "// get binary image\n", "threshImage = grayImage\n", "\n", "// Secondary parameter values. These are wild guesses.\n", "contrastThreshold = 10\n", "halfMaxValue = 10\n", "\n", "import net.imglib2.type.logic.BitType\n", "radius = 15\n", "binaryImage = ij.op().run(\"create.img\", threshImage, new BitType())\n", "ij.op().threshold().localBernsenThreshold(binaryImage, threshImage, new HyperSphereShape(radius),\n", " contrastThreshold, halfMaxValue)\n", "ij.notebook().display([[\"gray\": grayImage, \"binary\": binaryImage]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ops includes operators for mathematical morphology in both binary and grayscale. The two most basic morphological operators are:\n", "\n", "* erosion – reducing bright pixels at the edges of the image\n", "* dilation - growing bright pixels at the edges of the image\n", "\n", "The next two most basic are:\n", "\n", "* opening - dilation of the erosion\n", "* closing - erosion of the dilation\n", "\n", "These latter two are not strictly necessary, since you can simply call erode followed by dilate or vice versa, but they are convenient as a shorthand.\n" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: erode | dilate | open | close | manual open | manual close]
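Beyond these four operators, the `morphology` namespace also lists binary utilities such as `fillHoles`, `outline`, `floodFill` and `extractHoles` (see the op index at the end of this notebook). A small sketch, assuming they accept a binary image directly:

```groovy
// Fill enclosed background holes in the binary mask.
filled = ij.op().morphology().fillHoles(binaryImage)
// Trace object outlines; the boolean flag is assumed to control how edge-touching structures are handled.
outlined = ij.op().morphology().outline(binaryImage, false)
```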
" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "// HINT: Try different binary images here!\n", "morphImage = binaryImage\n", "\n", "// We will use the smallest radius: a single pixel.\n", "import net.imglib2.algorithm.neighborhood.HyperSphereShape\n", "shape = new HyperSphereShape(1)\n", "\n", "erode = ij.op().morphology().erode(morphImage, shape)\n", "dilate = ij.op().morphology().dilate(morphImage, shape)\n", "open = ij.op().morphology().open(morphImage, [shape])\n", "close = ij.op().morphology().close(morphImage, [shape])\n", "// Let's also check whether open and close visually match what we expect.\n", "manualOpen = ij.op().morphology().dilate(erode, shape)\n", "manualClose = ij.op().morphology().erode(dilate, shape)\n", "\n", "ij.notebook().display([[\"erode\":erode, \"dilate\":dilate, \"open\":open, \"close\":close,\n", " \"manual open\":manualOpen, \"manual close\":manualClose]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The open and close look good—they match their respective definitions.\n", "\n", "Let's quickly try some [grayscale morphological operations](https://en.wikipedia.org/wiki/Mathematical_morphology#Grayscale_morphology) as well. This time we also invoke the [top-hat transform](https://en.wikipedia.org/wiki/Top-hat_transform)." ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: erode | dilate | open | close | black top-hat | white top-hat]
" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "morphImage = grayImage\n", "\n", "// We will use a larger radius of 3 for our neighborhood this time.\n", "import net.imglib2.algorithm.neighborhood.HyperSphereShape\n", "shape = new HyperSphereShape(3)\n", "\n", "erode = ij.op().morphology().erode(morphImage, shape)\n", "dilate = ij.op().morphology().dilate(morphImage, shape)\n", "open = ij.op().morphology().open(morphImage, [shape])\n", "close = ij.op().morphology().close(morphImage, [shape])\n", "blackTopHat = ij.op().morphology().blackTopHat(morphImage, [shape])\n", "whiteTopHat = ij.op().morphology().topHat(morphImage, [shape])\n", "\n", "ij.notebook().display([[\"erode\":erode, \"dilate\":dilate, \"open\":open, \"close\":close,\n", " \"black top-hat\":blackTopHat, \"white top-hat\":whiteTopHat]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ops also provides hit-and-miss operations, more specifically, approaches for thinning:" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
[image mosaic: GuoHall | Hilditch | Morphological | zhangSuen]
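As the op index below suggests, these thinning algorithms are also exposed as `morphology` namespace methods, which avoids importing the op classes by hand; a sketch assuming they take a binary image:

```groovy
// Equivalent to running ThinZhangSuen.class, just written via the namespace method.
thinned = ij.op().morphology().thinZhangSuen(binaryImage)
```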
" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import net.imagej.ops.morphology.thin.ThinGuoHall\n", "import net.imagej.ops.morphology.thin.ThinHilditch\n", "import net.imagej.ops.morphology.thin.ThinMorphological\n", "import net.imagej.ops.morphology.thin.ThinZhangSuen\n", "\n", "// HINT: Try different binary images here!\n", "morphImage = binaryImage\n", "\n", "GuoHall = ij.op().run(ThinGuoHall.class, morphImage)\n", "Hilditch = ij.op().run(ThinHilditch.class, morphImage)\n", "Morphological = ij.op().run(ThinMorphological.class, morphImage)\n", "zhangSuen = ij.op().run(ThinZhangSuen.class, morphImage)\n", "\n", "ij.notebook().display([[\"GuoHall\":GuoHall, \"Hilditch\":Hilditch,\n", " \"Morphological\":Morphological, \"zhangSuen\":zhangSuen]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Ops Index" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "

Index of available ops

<global>: eval, help, identity, join, loop, map, op, run, slice
geom: boundarySize, boundarySizeConvexHull, boundingBox, boxivity, centerOfGravity, centroid, circularity, compactness, contour, convexHull, convexity, eccentricity, feretsAngle, feretsDiameter, mainElongation, majorAxis, marchingCubes, maximumFeret, maximumFeretsAngle, maximumFeretsDiameter, medianElongation, minimumFeret, minimumFeretsAngle, minimumFeretsDiameter, minorAxis, roundness, secondMoment, size, sizeConvexHull, smallestEnclosingBoundingBox, solidity, spareness, sphericity, vertexInterpolator, verticesCount, verticesCountConvexHull, voxelization
stats: geometricMean, harmonicMean, integralMean, integralSum, integralVariance, kurtosis, leastSquares, max, mean, median, min, minMax, moment1AboutMean, moment2AboutMean, moment3AboutMean, moment4AboutMean, percentile, quantile, size, skewness, stdDev, sum, sumOfInverses, sumOfLogs, sumOfSquares, variance
coloc: icq, kendallTau, maxTKendallTau, pValue, pearsons
deconvolve: accelerate, firstGuess, normalizationFactor, richardsonLucy, richardsonLucyCorrection, richardsonLucyTV, richardsonLucyUpdate
haralick: asm, clusterPromenence, clusterShade, contrast, correlation, differenceEntropy, differenceVariance, entropy, icm1, icm2, ifdm, maxProbability, sumAverage, sumEntropy, sumVariance, textureHomogeneity, variance
labeling: cca, merge
linalg: rotate
math: abs, add, and, arccos, arccosh, arccot, arccoth, arccsc, arccsch, arcsec, arcsech, arcsin, arcsinh, arctan, arctanh, assign, ceil, complement, complexConjugateMultiply, cos, cosh, cot, coth, csc, csch, cubeRoot, divide, exp, expMinusOne, floor, gamma, invert, leftShift, log, log10, log2, logOnePlusX, max, min, multiply, nearestInt, negate, or, power, randomGaussian, randomUniform, reciprocal, remainder, rightShift, round, sec, sech, signum, sin, sinc, sincPi, sinh, sqr, sqrt, step, subtract, tan, tanh, ulp, unsignedRightShift, xor, zero
convert: bit, cfloat32, cfloat64, clip, copy, float32, float64, imageType, int16, int32, int64, int8, normalizeScale, scale, uint12, uint128, uint16, uint2, uint32, uint4, uint64, uint8
imagemoments: centralMoment00, centralMoment01, centralMoment02, centralMoment03, centralMoment10, centralMoment11, centralMoment12, centralMoment20, centralMoment21, centralMoment30, huMoment1, huMoment2, huMoment3, huMoment4, huMoment5, huMoment6, huMoment7, moment00, moment01, moment10, moment11, normalizedCentralMoment02, normalizedCentralMoment03, normalizedCentralMoment11, normalizedCentralMoment12, normalizedCentralMoment20, normalizedCentralMoment21, normalizedCentralMoment30
thread: chunker
topology: boxCount, eulerCharacteristic26N, eulerCharacteristic26NFloating, eulerCorrection
zernike: magnitude, phase
copy: img, imgLabeling, iterableInterval, labelingMapping, rai, type
filter: addNoise, addPoissonNoise, allPartialDerivatives, bilateral, convolve, correlate, createFFTOutput, derivativeGauss, dog, fft, fftSize, frangiVesselness, gauss, hessian, ifft, linearFilter, max, mean, median, min, padFFTInput, padInput, padShiftFFTKernel, paddingIntervalCentered, paddingIntervalOrigin, partialDerivative, sigma, sobel, tubeness, variance
morphology: blackTopHat, close, dilate, erode, extractHoles, fillHoles, floodFill, open, outline, thinGuoHall, thinHilditch, thinMorphological, thinZhangSuen, topHat
threshold: apply, huang, ij1, intermodes, isoData, li, localBernsenThreshold, localContrastThreshold, localMeanThreshold, localMedianThreshold, localMidGreyThreshold, localNiblackThreshold, localPhansalkarThreshold, localSauvolaThreshold, maxEntropy, maxLikelihood, mean, minError, minimum, moments, otsu, percentile, renyiEntropy, rosin, shanbhag, triangle, yen
create: img, imgFactory, imgLabeling, imgPlus, integerType, kernel, kernel2ndDerivBiGauss, kernelBiGauss, kernelDiffraction, kernelGabor, kernelGaborComplexDouble, kernelGaborDouble, kernelGauss, kernelLog, kernelSobel, labelingMapping, nativeType, object
hog: hog
image: ascii, cooccurrenceMatrix, distancetransform, equation, fill, histogram, integral, invert, normalize, squareIntegral, watershed
lbp: lbp2D
logic: and, conditional, equal, greaterThan, greaterThanOrEqual, lessThan, lessThanOrEqual, not, notEqual, or, xor
segment: detectJunctions, detectRidges
tamura: coarseness, contrast, directionality
transform: addDimensionView, collapseNumericView, collapseRealView, collapseView, concatenateView, crop, dropSingletonDimensionsView, extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView, flatIterableView, hyperSliceView, interpolateView, intervalView, invertAxisView, offsetView, permuteCoordinatesInverseView, permuteCoordinatesView, permuteView, project, rasterView, rotateView, scaleView, shearView, stackView, subsampleView, translateView, unshearView, zeroMinView

" ] }, "execution_count": 63, "metadata": {}, "output_type": "execute_result" } ], "source": [ "colCount = 5 // Change it if you wish.\n", "\n", "// Define some helper functions.\n", "notebookPath = { op ->\n", " tokens = op.split(\"\\\\.\")\n", " tokens[-1] += \".ipynb\"\n", " opNotebook = java.nio.file.Paths.get(\"Ops\", tokens)\n", " if (opNotebook.toFile().exists()) return opNotebook\n", " ns = net.imagej.ops.OpUtils.getNamespace(op)\n", " if (ns == null) return null\n", " nsNotebook = java.nio.file.Paths.get(\"Ops\", ns, ns + \".ipynb\")\n", " if (nsNotebook.toFile().exists()) return nsNotebook\n", "}\n", "ns = { net.imagej.ops.OpUtils.getNamespace(it) ?: \"<global>\" }\n", "\n", "// Organize op namespaces by column.\n", "opsByNS = [:]; ij.op().ops().each{ opsByNS.get(ns(it), []).add(it) }\n", "cols = []; (1..colCount).each { cols.add([]) }\n", "for (ns in opsByNS.keySet().sort()) {\n", " col = cols.min { it.flatten().size() } // Add to shortest column.\n", " item = [\"

${ns}

\"]\n", " item.addAll(opsByNS.get(ns))\n", " col.add(item)\n", "}\n", "\n", "// Generate HTML table.\n", "s = \"

Index of available ops

\"\n", "s += \"\"\n", "s += \"\"\n", "for (col in cols) {\n", " s += \"\"\n", "}\n", "s += \"
\"\n", " for (ns in col) {\n", " // Nested tables! Great plan? Or the greatest plan?\n", " s += \"\"\n", " for (op in ns) {\n", " label = net.imagej.ops.OpUtils.stripNamespace(op)\n", " url = notebookPath(op)\n", " html = url == null ? label : \"${label}\"\n", " s += \"\"\n", " }\n", " s += \"
${html}

\"\n", " }\n", " s += \"
\"\n", "(net.imagej.notebook.mime.HTMLObject) { s }" ] } ], "metadata": { "kernelspec": { "display_name": "Groovy", "language": "groovy", "name": "groovy" }, "language_info": { "codemirror_mode": "groovy", "file_extension": ".groovy", "mimetype": "", "name": "Groovy", "nbconverter_exporter": "", "version": "2.4.3" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": true, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "307px" }, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 2 }