{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Inference Time\n", "This notebook contains details on the inference time measurements reported in Table V of the paper.\n", "\n", "### *Optional Config and Installation*\n", "\n", "Simply skip any steps you have already set up.\n", "\n", "**1. Configuration**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import xview\n", "from os import path\n", "\n", "# PLEASE EDIT THE FOLLOWING PATHS FOR YOUR LOCAL SETTINGS\n", "\n", "# path where the image data will be / is stored\n", "DATA_BASEPATH = '/tmp/test'\n", "\n", "# path where experiment configs will be stored\n", "EXPERIMENT_STORAGE_FOLDER = '/tmp/exp'\n", "\n", "# only change this if you want to use tensorboard for model training; otherwise it is not relevant\n", "EXP_OUT = '/tmp'\n", "\n", "print('writing settings to %s' % path.join(path.dirname(xview.__file__), 'settings.py'))\n", "with open(path.join(path.dirname(xview.__file__), 'settings.py'), 'w') as settings_file:\n", "    settings_file.write(\"DATA_BASEPATH = '%s'\\n\" % DATA_BASEPATH)\n", "    settings_file.write(\"EXPERIMENT_STORAGE_FOLDER = '%s'\\n\" % EXPERIMENT_STORAGE_FOLDER)\n", "    settings_file.write(\"EXP_OUT = '%s'\\n\" % EXP_OUT)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2. Downloading Experimental Data**  \n", "All training and measurement experiments are bundled into an archive that is downloaded and unpacked here. This gives you access to pre-trained models and all experimental configurations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! 
wget http://robotics.ethz.ch/~asl-datasets/2018_modular_semantic_segmentation/experimental_data.tar.gz -O /tmp/experimental_data.tar.gz\n", "import tarfile\n", "from os import path\n", "from xview.settings import EXPERIMENT_STORAGE_FOLDER\n", "tar = tarfile.open('/tmp/experimental_data.tar.gz')\n", "tar.extractall(path=EXPERIMENT_STORAGE_FOLDER)\n", "tar.close()\n", "! rm /tmp/experimental_data.tar.gz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Inference Time\n", "The following cells load the results from our timing experiments.\n", "\n", "Inference time was measured on a single GPU that did not have enough memory to exploit any parallelisation. The measurements also exclude the time required to load data onto the GPU, as this heavily depends on code optimization and the hardware used. We evaluated inference time on a constant input of the same size as the actual RGB and depth images. For more details, please have a look at [the implementation of the experiment](https://github.com/ethz-asl/modular_semantic_segmentation/blob/publish/experiments/timing.py)." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "from experiments.utils import ExperimentData\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "\n", "# Load the relevant experiments. If this fails, your config from above is not set up properly.\n", "rgb = ExperimentData(1062)\n", "depth = ExperimentData(1063)\n", "bayes = ExperimentData(1071)\n", "dirichlet = ExperimentData(1067)\n", "variance = ExperimentData(1065)\n", "fusionfcn = ExperimentData(1059)\n", "average = ExperimentData(1064)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<table>\n", "<thead><tr><th></th><th>Fusion_Fcn</th><th>Dirichlet</th><th>Bayes</th><th>Average</th><th>Variance</th><th>RGB</th><th>Depth</th></tr></thead>\n", "<tbody>\n", "<tr><th>mean</th><td>0.0720</td><td>0.0517</td><td>0.0461</td><td>0.0432</td><td>0.3064</td><td>0.0219</td><td>0.0218</td></tr>\n", "<tr><th>std</th><td>0.0221</td><td>0.0238</td><td>0.0156</td><td>0.0113</td><td>0.0183</td><td>0.0114</td><td>0.0121</td></tr>\n", "</tbody></table>