{ "cells": [ { "cell_type": "markdown", "id": "0", "metadata": {}, "source": [ "# M3C2-EP: Point cloud-based change analysis using error propagation" ] }, { "cell_type": "markdown", "id": "1", "metadata": {}, "source": [ "In this notebook we will perform point cloud-based change analysis using error propagation on two point clouds of the rock glacier Äußres Hochebenkar in Austria. An introduction of the study site can be found [here](https://3dgeo-heidelberg.github.io/etrainee/module3/06_casestudy_rockglacier/06_casestudy_rockglacier.html).\n", "\n", "The objective is to compute distances between two point clouds using the M3C2-EP algorithm ([Winiwarter et al., 2021](#References)).\n", "\n", "The workflow is introduced throughout this notebook. You can also make use of the software documentations!\n", "\n", "## Software and data\n", "This task is solved using Python with the [`py4dgeo`](https://github.com/3dgeo-heidelberg/py4dgeo) library.\n", "The dataset consits of two point cloud captured in 2017 (epoch 1) and 2018 (epoch 2) in the lower tongue area of the rock glacier." ] }, { "cell_type": "markdown", "id": "2", "metadata": {}, "source": [ "## Loading data\n", "As a first step, we import the required packages:" ] }, { "cell_type": "code", "execution_count": null, "id": "3", "metadata": {}, "outputs": [], "source": [ "import py4dgeo\n", "import os\n", "import numpy as np\n", "import shutil\n", "import requests\n", "import tempfile\n", "import zipfile" ] }, { "cell_type": "markdown", "id": "4", "metadata": {}, "source": [ "Next, we need to load two datasets that cover the same scene at two different points in time. Point cloud datasets are represented by `numpy` arrays of shape `n x 3` using a 64 bit floating point type (`np.float64`)." 
] }, { "cell_type": "markdown", "id": "5", "metadata": {}, "source": [ "We need to ensure that the two datasets include scan positions which are specified by attribute name `sp_name` and scan positions configuration information in `sp_file`:" ] }, { "cell_type": "markdown", "id": "6", "metadata": {}, "source": [ "Now we work with a rather small data set:" ] }, { "cell_type": "code", "execution_count": null, "id": "7", "metadata": {}, "outputs": [], "source": [ "epoch1, epoch2 = py4dgeo.read_from_las(\n", " \"ahk_2017_652900_5189100_gnd_subarea.laz\",\n", " \"ahk_2018A_652900_5189100_gnd_subarea.laz\",\n", " additional_dimensions={\"point_source_id\": \"scanpos_id\"},\n", ")" ] }, { "cell_type": "markdown", "id": "8", "metadata": {}, "source": [ "## Extract sensor orientation details\n", "M3C2-EP leverages knowledge regarding the data acquisition sensor. We extract the sensor orientation details from a JSON configuration file and assign them to a dictionary. These settings are then applied to each epoch of the data since both epochs share the same sensor configuration:" ] }, { "cell_type": "code", "execution_count": null, "id": "9", "metadata": {}, "outputs": [], "source": [ "with open(py4dgeo.find_file(\"sps.json\"), \"r\") as load_f:\n", " scanpos_info_dict = eval(load_f.read())\n", "epoch1.scanpos_info = scanpos_info_dict\n", "epoch2.scanpos_info = scanpos_info_dict" ] }, { "cell_type": "markdown", "id": "10", "metadata": {}, "source": [ "## Load corepoints\n", "The analysis of point cloud distances is executed on so-called *core points* (cf. [Lague et al., 2013](#References)). These could be, e.g., one of the input point clouds, a subsampled version thereof, points in an equidistant grid, or something else. 
Here, we choose a subsampling by taking every 50th point of the reference point cloud:" ] }, { "cell_type": "code", "execution_count": null, "id": "11", "metadata": {}, "outputs": [], "source": [ "corepoints = py4dgeo.read_from_las(\"ahk_cp_652900_5189100_subarea.laz\").cloud" ] }, { "cell_type": "markdown", "id": "12", "metadata": {}, "source": [ "## Provide transformation information\n", "The algorithm needs an alignment covariance matrix of shape `12 x 12`, an affine transformation matrix $T$ of shape `3 x 4`, and a reduction point $(x_0, y_0, z_0)^T$ (rotation origin, 3 parameters) obtained from aligning the two point clouds. The transformation follows this notation:\n", "\n", "$$\n", "T = \\begin{pmatrix}\n", "t_1 & t_2 & t_3 & t_4 \\\\\n", "t_5 & t_6 & t_7 & t_8 \\\\\n", "t_9 & t_{10} & t_{11} & t_{12} \\\\\n", "\\end{pmatrix}\n", "$$\n", "\n", "The transformation is applied as follows:\n", "\n", "$$\n", "y = \\begin{pmatrix}\n", "t_1 & t_2 & t_3 \\\\\n", "t_5 & t_6 & t_7 \\\\\n", "t_9 & t_{10} & t_{11} \\\\\n", "\\end{pmatrix} \\left( x-\\begin{pmatrix} x_0 \\\\ y_0 \\\\ z_0 \\\\\n", "\\end{pmatrix} \\right) + \\begin{pmatrix}\n", " t_4 \\\\\n", " t_8 \\\\\n", " t_{12} \\\\\n", "\\end{pmatrix}\n", "+ \\begin{pmatrix} x_0 \\\\ y_0 \\\\ z_0 \\\\\n", "\\end{pmatrix}\n", "$$\n", "\n", "The order of the elements in the covariance matrix is:\n", "\n", "$$\n", "t_1, t_2, t_3, t_4, t_5, t_6, t_7, t_8, t_9, t_{10}, t_{11}, t_{12}\n", "$$\n", "\n", "i.e., the translation parameters ($t_4$, $t_8$, $t_{12}$) are interleaved with the rotation/scaling parameters.\n", "\n", "Whether the transformation is applied can be controlled via the boolean flag `perform_trans`; it is performed by default:" ] }, { "cell_type": "code", "execution_count": null, "id": "13", "metadata": {}, "outputs": [], "source": [ "Cxx = np.loadtxt(py4dgeo.find_file(\"Cxx.csv\"), dtype=np.float64, delimiter=\",\")\n", "tfM = np.loadtxt(py4dgeo.find_file(\"tfM.csv\"), dtype=np.float64, delimiter=\",\")\n", "refPointMov = 
np.loadtxt(\n", " py4dgeo.find_file(\"redPoint.csv\"), dtype=np.float64, delimiter=\",\"\n", ")" ] }, { "cell_type": "markdown", "id": "14", "metadata": {}, "source": [ "## Run distance calculation\n", "Next, we instantiate the algorithm class and run the distance calculation. The parameters are very similar to the base `M3C2` implementation, but extended to work for M3C2-EP:" ] }, { "cell_type": "code", "execution_count": null, "id": "15", "metadata": {}, "outputs": [], "source": [ "m3c2_ep = py4dgeo.M3C2EP(\n", " epochs=(epoch1, epoch2),\n", " corepoints=corepoints,\n", " normal_radii=[0.5, 1.0, 2.0],\n", " cyl_radius=0.5,\n", " max_distance=3.0,\n", " Cxx=Cxx,\n", " tfM=tfM,\n", " refPointMov=refPointMov,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "16", "metadata": {}, "outputs": [], "source": [ "distances, uncertainties, covariance = m3c2_ep.run()" ] }, { "cell_type": "markdown", "id": "17", "metadata": {}, "source": [ "The calculated result is an array with one distance per core point. The order of distances corresponds exactly to the order of input core points. 
In contrast to the base `M3C2`, additional `covariance` information is returned as a third element of the result tuple:" ] }, { "cell_type": "code", "execution_count": null, "id": "18", "metadata": {}, "outputs": [], "source": [ "distances[0:8]" ] }, { "cell_type": "markdown", "id": "19", "metadata": {}, "source": [ "Corresponding to the derived distances, an uncertainty array is returned which contains several quantities that can be accessed individually: the level of detection `lodetection`, the spread of the distance across points in either cloud (`spread1` and `spread2`, by default measured as the standard deviation of distances), and the total number of points taken into consideration in either cloud (`num_samples1` and `num_samples2`):" ] }, { "cell_type": "code", "execution_count": null, "id": "20", "metadata": {}, "outputs": [], "source": [ "uncertainties[\"lodetection\"][0:8]" ] }, { "cell_type": "code", "execution_count": null, "id": "21", "metadata": {}, "outputs": [], "source": [ "uncertainties[\"spread1\"][0:8]" ] }, { "cell_type": "code", "execution_count": null, "id": "22", "metadata": {}, "outputs": [], "source": [ "uncertainties[\"num_samples1\"][0:8]" ] }, { "cell_type": "markdown", "id": "23", "metadata": {}, "source": [ "Corresponding to the derived distances, 3D covariance information is returned for each core point:" ] }, { "cell_type": "code", "execution_count": null, "id": "24", "metadata": {}, "outputs": [], "source": [ "covariance[\"cov1\"][0, :, :]" ] }, { "cell_type": "markdown", "id": "25", "metadata": {}, "source": [ "## Visualize results\n", "Finally, we can visualize our distance results:" ] }, { "cell_type": "code", "execution_count": null, "id": "26", "metadata": {}, "outputs": [], "source": [ "import matplotlib.cm as cm\n", "import matplotlib.pyplot as plt\n", "\n", "\n", "def plt_3d(corepoints, distances):\n", " fig, ax = plt.subplots(figsize=(10, 10), subplot_kw={\"projection\": \"3d\"})\n", "\n", " # Add axis labels\n", " 
ax.set_xlabel(\"X [m]\")\n", " ax.set_ylabel(\"Y [m]\")\n", " ax.set_zlabel(\"Z [m]\")\n", "\n", " # Plot the corepoints colored by their distance\n", " x, y, z = np.transpose(corepoints)\n", " vmin = np.min(distances)\n", " vmax = np.max(distances)\n", " pts = ax.scatter(\n", " x, y, z, s=10, c=distances, vmin=vmin, vmax=vmax, cmap=cm.seismic_r\n", " )\n", "\n", " # Add colorbar\n", " cmap = plt.colorbar(pts, shrink=0.5, label=\"Distance [m]\", ax=ax)\n", "\n", " # Add title\n", " ax.set_title(\"Visualize Changes\")\n", "\n", " ax.set_aspect(\"equal\")\n", " ax.view_init(22, 113)\n", " plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "27", "metadata": {}, "outputs": [], "source": [ "plt_3d(corepoints, distances)" ] }, { "cell_type": "markdown", "id": "28", "metadata": {}, "source": [ "### References\n", "* Winiwarter, L., Anders, K., & Höfle, B. (2021). M3C2-EP: Pushing the limits of 3D topographic point cloud change detection by error propagation. ISPRS Journal of Photogrammetry and Remote Sensing, 178, 240-258. doi: [10.1016/j.isprsjprs.2021.06.011](https://doi.org/10.1016/j.isprsjprs.2021.06.011).\n", "\n", "* Lague, D., Brodu, N., & Leroux, J. (2013). Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS Journal of Photogrammetry and Remote Sensing, 82, pp. 10-26. doi: [10.1016/j.isprsjprs.2013.04.009](https://doi.org/10.1016/j.isprsjprs.2013.04.009)." ] } ], "metadata": { "kernelspec": { "display_name": "py12", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.8" } }, "nbformat": 4, "nbformat_minor": 5 }