{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# File-access Experiments on TIFF File" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## TIFF File Structure\n", "\n", "The following is the structure of the TIFF file.\n", "\n", "Each Image File Directory (IFD) has the information of a sub-resolution (including the main/highest resolution) image as TAGs.\n", "\n", "![](static_images/File-access_Experiments_on_TIFF_FileFormat.png)\n", "\n", "(Above image is from http://paulbourke.net/dataformats/tiff/tiff_summary.pdf [accessed Dec 9th, 2020])\n", "\n", "\n", "For a tiled-multi-resolution TIFF image, `TileWidth` and `TileLength` TAGs of an IFD have tile size information, and `TileOffsets` and `TileByteCounts` TAGs include the information on each tile's the byte offset and the number of (compressed) bytes in the tile.\n", "\n", "([This link](https://libtiff.gitlab.io/libtiff/man/TIFFGetField.3tiff.html) shows all the TAGs available through the `libtiff` library.)\n", "\n", "\n", "![](static_images/File-access_Experiments_on_TIFF_FileFormat2.png)\n", "\n", "(Above image is from https://www.blackice.com/images/Cisco.GIF and https://docs.nframes.com/input-%2526-output/output-formats/ [accessed July 30th, 2020]])\n", "\n", "\n", "Since `TileOffsets` and `TileByteCounts` are an array of numbers to access each tile's raw(compressed) data, it is important to fast-read relevant tiles' compressed RAW image data from the file in any access patterns.\n", "\n", "### Access patterns\n", "\n", "#### 1. Accessing tiles sequentially (left to right, top to bottom) from one TIFF file\n", "\n", "This can happen when a TIFF file is read from a single or multi threads/processes to convert/inference without any optimization.\n", "\n", "#### 2. Accessing tiles randomly from one TIFF file\n", "\n", "This access pattern can happen usually on DeepLearning model **inference** use cases.\n", "For inference, only part of images are used, and accessing each tile is not done sequentially.\n", "\n", "For example, a list of regions to be loaded/processed can be split into multiple threads/processes so accessing tiles can be out of order.\n", "Forthermore, (internal) tiles to be read for a specific region (patch) are not necessarily contiguous (e.g., tile index for position[x, y] (0, 0) and (0, 1) wouldn't be contiguous).\n", "\n", "#### 3. Accessing partial tiles randomly from multiple TIFF files\n", "\n", "This access pattern usually happens on DeepLearning model **training** use cases.\n", "\n", "To get unbiased weights of the neural network, it is necessary to provide *randomized* augmented training data during the model training, which means a random partial image region(patch) with the label needs to be picked from possible patch positions and file paths.\n", "\n", "\n", "In the following experiment, we are exploring the implication of the various file access methods on reading partial images with different access patterns.\n", "We didn't experiment with access pattern #3 yet but experiment results for #1 and #2 would give us some insight about the possible improvements.\n", "\n", "## Experiment Setup\n", "\n", "### TIFF File Information\n", "\n", "Information on the TIFF file under experiment:\n", "```bash\n", "# 92344 x 81017 pixels (@highest resolution) JPEG-compressed RGB image. 
"## Experiment Setup\n", "\n",
"### TIFF File Information\n", "\n",
"Information on the TIFF file under experiment:\n",
"```bash\n",
"# 92344 x 81017 pixels (at the highest resolution), JPEG-compressed RGB image. Tile size: 256x256\n",
"# (input/image2.tif)\n", "\n",
"- file_size : 3,253,334,884\n",
"- tile_count : 114,437 (at the highest resolution)\n",
"- min_tile_bytecount: 1,677\n",
"- max_tile_bytecount: 31,361\n",
"- avg_tile_bytecount: 17,406.599404038905\n",
"- min_tile_offset : 1,373,824\n",
"- max_tile_offset : 1,993,332,840\n",
"```\n", "\n",
"### System Information\n", "\n",
"- OS: Ubuntu 18.04\n",
"- CPU: [Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz](https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-7800X+%40+3.50GHz&id=3037)\n",
"- Memory: 64GB (G-Skill DDR4 2133 16GB X 4)\n",
"- Storage\n",
"  - NVMe SSD: [Samsung SSD 970 PRO 1TB](https://www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-970-pro-nvme-m2-1tb-mz-v7p1t0bw/)\n",
"  - SATA SSD: [Samsung SSD 850 EVO 1TB](https://www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-850-evo-2-5-sata-iii-1tb-mz-75e1t0b-am/)\n",
"  - HDD: [WDC WD40EZRX-00SPEB0 4TB](http://products.wdc.com/library/SpecSheet/ENG/2879-771438.pdf)\n", "\n",
"### Procedure\n", "\n",
"We tried to load all tiles' raw data in the 3GB TIFF image 1) sequentially and 2) randomly, using the following methods:\n", "\n",
"#### 1) Regular POSIX\n", "\n",
"Using [pread()](https://man7.org/linux/man-pages/man2/pread.2.html) with a regular file descriptor, read each tile's raw (compressed) data into CPU memory.\n", "\n", "\n",
"```python\n",
"import cucim.clara.filesystem as fs\n",
"\n",
"fd = fs.open(\"image2.tif\", \"rnp\")\n",
"...\n",
"fd.close()\n",
"```\n", "\n",
"#### 2) O_DIRECT\n", "\n",
"Using [pread()](https://man7.org/linux/man-pages/man2/pread.2.html) with a file descriptor opened with the `O_DIRECT` flag, read each tile's raw (compressed) data into CPU memory.\n", "\n",
"cuCIM's filesystem API handles unaligned memory/file offsets for direct access (`O_DIRECT`).\n", "\n",
"```python\n",
"import cucim.clara.filesystem as fs\n",
"\n",
"fd = fs.open(\"image2.tif\", \"rp\")\n",
"...\n",
"fd.close()\n",
"```\n", "\n",
"#### 3) O_DIRECT pre-load\n", "\n",
"Load, in one large read (with the `O_DIRECT` flag), the whole data block that is necessary to access all tiles at the highest resolution into temporary CPU memory.\n",
"Then, copy the necessary data for each tile from that block into the target buffer (a sketch is shown at the end of this section).\n", "\n",
"#### 4) mmap\n", "\n",
"Use [mmap()](https://man7.org/linux/man-pages/man2/mmap.2.html) internally.\n", "\n",
"```python\n",
"import cucim.clara.filesystem as fs\n",
"\n",
"fd = fs.open(\"image2.tif\", \"rm\")\n",
"...\n",
"fd.close()\n",
"```\n", "\n",
"Note: The actual experiment was done with the C++ implementation/APIs.\n", "\n",
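"The following sketch illustrates the `O_DIRECT pre-load` approach (method 3) in Python. It is illustrative only: the measurements were done with the C++ code in the Appendix, the tile TAGs are read here with the third-party `tifffile` package, and it assumes that the object returned by `fs.open()` exposes a `pread(buf, count, offset)` method mirroring the C++ API.\n", "\n",
"```python\n",
"# Illustrative sketch of the O_DIRECT pre-load approach (see assumptions above).\n",
"import numpy as np\n",
"import tifffile\n",
"\n",
"import cucim.clara.filesystem as fs\n",
"\n",
"BLOCK_SECTOR_SIZE = 4096\n",
"\n",
"# Tile offsets/byte counts from the IFD TAGs (TileOffsets/TileByteCounts).\n",
"with tifffile.TiffFile(\"image2.tif\") as tif:\n",
"    page = tif.pages[0]\n",
"    tile_offsets, tile_bytecounts = page.dataoffsets, page.databytecounts\n",
"\n",
"# One sector-aligned range that covers all tiles at the highest resolution.\n",
"start = (min(tile_offsets) // BLOCK_SECTOR_SIZE) * BLOCK_SECTOR_SIZE\n",
"end = max(o + c for o, c in zip(tile_offsets, tile_bytecounts))\n",
"end = ((end + BLOCK_SECTOR_SIZE - 1) // BLOCK_SECTOR_SIZE) * BLOCK_SECTOR_SIZE\n",
"\n",
"block = np.empty(end - start, dtype=np.uint8)  # temporary CPU buffer\n",
"fd = fs.open(\"image2.tif\", \"rp\")  # O_DIRECT\n",
"fd.pread(block, end - start, start)  # one large pre-load read\n",
"fd.close()\n",
"\n",
"# Each tile is then a plain memory copy (slice) out of the pre-loaded block.\n",
"tile0 = block[tile_offsets[0] - start : tile_offsets[0] - start + tile_bytecounts[0]]\n",
"```\n", "\n",
"The trade-off, discussed in the Analysis section below, is the extra CPU memory held by the pre-loaded block.\n", "\n",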
"## Results\n", "\n",
"Link to the spreadsheet: https://docs.google.com/spreadsheets/d/1DbPe0m2KRqlEFbZZTmP9rhDLZdG6mn_Uv8_DrZy97Uc/edit#gid=1257255419\n", "\n",
"### NVMe\n", "\n",
"![](static_images/File-access_Experiments_on_TIFF_NVMe.png)\n", "\n",
"### SSD\n", "\n",
"![](static_images/File-access_Experiments_on_TIFF_SSD.png)\n", "\n",
"### HDD\n", "\n",
"![](static_images/File-access_Experiments_on_TIFF_HDD.png)\n", "\n", "\n",
"## Analysis & Implication\n", "\n",
"- Reading tile data sequentially doesn't show much difference across configurations (except `O_DIRECT`)\n",
"  - Using `O_DIRECT` doesn't perform well due to its unaligned memory access\n",
"- The `O_DIRECT pre-load` approach performs best, and `mmap` performs better than the `Regular POSIX` or `O_DIRECT` methods\n",
"  - However, the `O_DIRECT pre-load` approach requires more CPU memory for pre-loading the data, so it may not be a good fit when only a very small number of patches is needed from the file, or when the list of patches to load is not available in advance.\n",
"- Using `mmap` to access TIFF tiles is a viable solution for improving cuCIM's performance (currently, both OpenSlide and cuCIM use regular POSIX APIs to access tile data), and we can leverage the `O_DIRECT pre-load` approach depending on the workflow.\n",
" " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Appendix\n", "\n",
"### Code used to measure performance\n", "\n",
"The following variables were changed according to the configuration.\n", "\n",
"```C++\n",
" constexpr bool SHUFFLE_LIST = true;\n",
" constexpr int iter_max = 32;\n",
" constexpr int skip_count = 2;\n",
"```\n", "\n",
"```C++\n",
"/*\n",
" * Copyright (c) 2020, NVIDIA CORPORATION.\n",
" *\n",
" * Licensed under the Apache License, Version 2.0 (the \"License\");\n",
" * you may not use this file except in compliance with the License.\n",
" * You may obtain a copy of the License at\n",
" *\n",
" * http://www.apache.org/licenses/LICENSE-2.0\n",
" *\n",
" * Unless required by applicable law or agreed to in writing, software\n",
" * distributed under the License is distributed on an \"AS IS\" BASIS,\n",
" * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
" * See the License for the specific language governing permissions and\n",
" * limitations under the License.\n",
" */\n",
"\n",
"#include \"cuslide/tiff/tiff.h\"\n",
"#include \"config.h\"\n",
"\n",
"#include <catch2/catch.hpp>\n",
"#include <cucim/filesystem/cufile_driver.h>\n",
"#include <cucim/logger/timer.h>\n",
"#include <cuda_runtime.h>\n",
"#include <fcntl.h>\n",
"#include <fmt/format.h>\n",
"#include <sys/mman.h>\n",
"#include <sys/stat.h>\n",
"#include <unistd.h>\n",
"#include <algorithm>\n",
"#include <cstdint>\n",
"#include <cstdlib>\n",
"#include <cstring>\n",
"#include <ctime>\n",
"#include <fstream>\n",
"#include <memory>\n",
"#include <utility>\n",
"\n",
"#define ALIGN_UP(x, align_to) (((uint64_t)(x) + ((uint64_t)(align_to)-1)) & ~((uint64_t)(align_to)-1))\n",
"#define ALIGN_DOWN(x, align_to) ((uint64_t)(x) & ~((uint64_t)(align_to)-1))\n",
"\n",
"static void shuffle_offsets(uint32_t count, uint64_t* offsets, uint64_t* bytecounts)\n",
"{\n",
" // Fisher-Yates shuffle\n",
" for (int i = 0; i < count; ++i)\n",
" {\n",
" int j = (std::rand() % (count - i)) + i;\n",
" std::swap(offsets[i], offsets[j]);\n",
" std::swap(bytecounts[i], bytecounts[j]);\n",
" }\n",
"}\n",
"\n",
"TEST_CASE(\"Verify raw tiff read\", \"[test_read_rawtiff.cpp]\")\n",
"{\n",
" cudaError_t cuda_status;\n",
" int err;\n",
" constexpr int BLOCK_SECTOR_SIZE = 4096;\n",
" constexpr bool SHUFFLE_LIST = true;\n",
" constexpr int iter_max = 32;\n",
" constexpr int skip_count = 2;\n",
"\n",
" std::srand(std::time(nullptr));\n",
"\n",
" auto input_file = g_config.input_file.c_str(); // \"/nvme/image2.tif\"\n",
"\n",
" struct stat sb;\n",
" auto fd_temp = ::open(input_file, O_RDONLY);\n",
" fstat(fd_temp, &sb);\n",
" uint64_t test_file_size = sb.st_size;\n",
" ::close(fd_temp);\n",
"\n",
" auto tif = std::make_shared<cuslide::tiff::TIFF>(input_file, O_RDONLY);\n",
" tif->construct_ifds();\n",
" tif->ifd(0)->write_offsets_(input_file);\n",
"\n",
"\n",
" std::ifstream offsets(fmt::format(\"{}.offsets\", input_file), std::ios::in | std::ios::binary);\n",
" std::ifstream bytecounts(fmt::format(\"{}.bytecounts\", input_file), std::ios::in | std::ios::binary);\n",
"\n",
" // Read image piece count\n",
" uint32_t image_piece_count_ = 0;\n",
" offsets.read(reinterpret_cast<char*>(&image_piece_count_), sizeof(image_piece_count_));\n",
" bytecounts.read(reinterpret_cast<char*>(&image_piece_count_), sizeof(image_piece_count_));\n",
"\n",
" uint64_t image_piece_offsets_[image_piece_count_];\n",
" uint64_t image_piece_bytecounts_[image_piece_count_];\n",
" uint64_t min_bytecount = 9999999999;\n",
" uint64_t max_bytecount = 0;\n",
" uint64_t sum_bytecount = 0;\n",
"\n",
" uint64_t min_offset = 9999999999;\n",
" uint64_t max_offset = 0;\n",
" for (uint32_t i = 0; i < image_piece_count_; i++)\n",
" {\n",
" offsets.read((char*)&image_piece_offsets_[i], sizeof(image_piece_offsets_[i]));\n",
" bytecounts.read((char*)&image_piece_bytecounts_[i], sizeof(image_piece_bytecounts_[i]));\n",
"\n",
" min_bytecount = std::min(min_bytecount, image_piece_bytecounts_[i]);\n",
" max_bytecount = std::max(max_bytecount, image_piece_bytecounts_[i]);\n",
" sum_bytecount += image_piece_bytecounts_[i];\n",
"\n",
" min_offset = std::min(min_offset, image_piece_offsets_[i]);\n",
" max_offset = std::max(max_offset, image_piece_offsets_[i] + image_piece_bytecounts_[i]);\n",
" }\n",
" bytecounts.close();\n",
" offsets.close();\n",
"\n",
" fmt::print(\"file_size : {}\\n\", test_file_size);\n",
" fmt::print(\"min_bytecount: {}\\n\", min_bytecount);\n",
" fmt::print(\"max_bytecount: {}\\n\", max_bytecount);\n",
" fmt::print(\"avg_bytecount: {}\\n\", static_cast<double>(sum_bytecount) / image_piece_count_);\n",
" fmt::print(\"min_offset : {}\\n\", min_offset);\n",
" fmt::print(\"max_offset : {}\\n\", max_offset);\n",
"\n",
" uint64_t test_size = max_offset + max_bytecount;\n",
"\n",
" // Shuffle offsets\n",
" if (SHUFFLE_LIST)\n",
" {\n",
" shuffle_offsets(image_piece_count_, image_piece_offsets_, image_piece_bytecounts_);\n",
" }\n",
"\n",
" // Allocate memory\n",
" uint8_t* unaligned_host = static_cast<uint8_t*>(malloc(test_file_size + BLOCK_SECTOR_SIZE * 2));\n",
" uint8_t* buffer_host = static_cast<uint8_t*>(malloc(test_file_size + BLOCK_SECTOR_SIZE * 2));\n",
" uint8_t* aligned_host = reinterpret_cast<uint8_t*>(ALIGN_UP(unaligned_host, BLOCK_SECTOR_SIZE));\n",
"\n",
" cucim::filesystem::discard_page_cache(input_file);\n",
"\n",
" fmt::print(\"count:{} \\n\", image_piece_count_);\n",
"\n",
" SECTION(\"Regular POSIX\")\n",
" {\n",
" fmt::print(\"Regular POSIX\\n\");\n",
"\n",
" double total_elapsed_time = 0;\n",
" for (int iter = 0; iter < iter_max; ++iter)\n",
" {\n",
" cucim::filesystem::discard_page_cache(input_file);\n",
" auto fd = cucim::filesystem::open(input_file, \"rpn\");\n",
" {\n",
" cucim::logger::Timer timer(\"- read whole : {:.7f}\\n\", true, false);\n",
"\n",
" ssize_t read_cnt = fd->pread(aligned_host, test_file_size, 0);\n",
"\n",
" double elapsed_time = timer.stop();\n",
" if (iter >= skip_count)\n",
" {\n",
" total_elapsed_time += elapsed_time;\n",
" }\n",
" timer.print();\n",
" }\n",
" }\n",
" fmt::print(\"- Read whole average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n",
"\n",
" total_elapsed_time = 0;\n",
" for (int iter = 0; iter < iter_max; ++iter)\n",
" {\n",
" cucim::filesystem::discard_page_cache(input_file);\n",
" auto fd = cucim::filesystem::open(input_file, \"rpn\");\n",
" {\n",
" cucim::logger::Timer timer(\"- read tiles : {:.7f}\\n\", true, false);\n",
"\n",
" for (uint32_t i = 0; i < image_piece_count_; ++i)\n",
" {\n",
" ssize_t read_cnt = fd->pread(aligned_host, image_piece_bytecounts_[i], image_piece_offsets_[i]);\n",
" }\n",
"\n",
" double elapsed_time = timer.stop();\n",
" if (iter >= skip_count)\n",
" {\n",
" total_elapsed_time += elapsed_time;\n",
" }\n",
" timer.print();\n",
" }\n",
" }\n",
" fmt::print(\"- Read tiles average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n",
" }\n",
"\n",
" SECTION(\"O_DIRECT\")\n",
" {\n",
" fmt::print(\"O_DIRECT\\n\");\n",
"\n",
" double total_elapsed_time = 0;\n",
" for (int iter = 0; iter < iter_max; ++iter)\n",
" {\n", " 
cucim::filesystem::discard_page_cache(input_file);\n", " auto fd = cucim::filesystem::open(input_file, \"rp\");\n", " {\n", " cucim::logger::Timer timer(\"- read whole : {:.7f}\\n\", true, false);\n", "\n", " ssize_t read_cnt = fd->pread(aligned_host, test_file_size, 0);\n", "\n", " double elapsed_time = timer.stop();\n", " if (iter >= skip_count)\n", " {\n", " total_elapsed_time += elapsed_time;\n", " }\n", " timer.print();\n", " }\n", " }\n", " fmt::print(\"- Read whole average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n", "\n", " total_elapsed_time = 0;\n", " for (int iter = 0; iter < iter_max; ++iter)\n", " {\n", " cucim::filesystem::discard_page_cache(input_file);\n", " auto fd = cucim::filesystem::open(input_file, \"rp\");\n", " {\n", " cucim::logger::Timer timer(\"- read tiles : {:.7f}\\n\", true, false);\n", "\n", " for (uint32_t i = 0; i < image_piece_count_; ++i)\n", " {\n", " ssize_t read_cnt = fd->pread(buffer_host, image_piece_bytecounts_[i], image_piece_offsets_[i]);\n", " }\n", "\n", " double elapsed_time = timer.stop();\n", " if (iter >= skip_count)\n", " {\n", " total_elapsed_time += elapsed_time;\n", " }\n", " timer.print();\n", " }\n", " }\n", " fmt::print(\"- Read tiles average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n", " }\n", "\n", " SECTION(\"O_DIRECT pre-load\")\n", " {\n", " fmt::print(\"O_DIRECT pre-load\\n\");\n", "\n", " size_t file_start_offset = ALIGN_DOWN(min_offset, BLOCK_SECTOR_SIZE);\n", " size_t end_boundary_offset = ALIGN_UP(max_offset + max_bytecount, BLOCK_SECTOR_SIZE);\n", " size_t large_block_size = end_boundary_offset - file_start_offset;\n", "\n", " fmt::print(\"- size:{}\\n\", end_boundary_offset - file_start_offset);\n", "\n", " double total_elapsed_time = 0;\n", " for (int iter = 0; iter < iter_max; ++iter)\n", " {\n", " cucim::filesystem::discard_page_cache(input_file);\n", " auto fd = cucim::filesystem::open(input_file, \"rp\");\n", " {\n", " cucim::logger::Timer timer(\"- preload : {:.7f}\\n\", true, false);\n", "\n", " ssize_t read_cnt = fd->pread(aligned_host, large_block_size, file_start_offset);\n", "\n", " double elapsed_time = timer.stop();\n", " if (iter >= skip_count)\n", " {\n", " total_elapsed_time += elapsed_time;\n", " }\n", " timer.print();\n", " }\n", " }\n", " fmt::print(\"- Preload average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n", "\n", " total_elapsed_time = 0;\n", " for (int iter = 0; iter < iter_max; ++iter)\n", " {\n", " cucim::filesystem::discard_page_cache(input_file);\n", " auto fd = cucim::filesystem::open(input_file, \"rp\");\n", " {\n", " cucim::logger::Timer timer(\"- read tiles : {:.7f}\\n\", true, false);\n", "\n", " for (uint32_t i = 0; i < image_piece_count_; ++i)\n", " {\n", " memcpy(buffer_host, aligned_host + image_piece_offsets_[i] - file_start_offset,\n", " image_piece_bytecounts_[i]);\n", " }\n", "\n", " double elapsed_time = timer.stop();\n", " if (iter >= skip_count)\n", " {\n", " total_elapsed_time += elapsed_time;\n", " }\n", " timer.print();\n", " }\n", " }\n", " fmt::print(\"- Read tiles average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n", " }\n", "\n", " SECTION(\"mmap\")\n", " {\n", " fmt::print(\"mmap\\n\");\n", "\n", " double total_elapsed_time = 0;\n", " for (int iter = 0; iter < iter_max; ++iter)\n", " {\n", " cucim::filesystem::discard_page_cache(input_file);\n", " auto fd_mmap = open(input_file, O_RDONLY);\n", " {\n", " cucim::logger::Timer timer(\"- open/close : {:.7f}\\n\", true, false);\n", "\n", " void* mmap_host = 
mmap((void*)0, test_file_size, PROT_READ, MAP_SHARED, fd_mmap, 0);\n", "\n", " REQUIRE(mmap_host != MAP_FAILED);\n", "\n", " if (mmap_host != MAP_FAILED)\n", " {\n", " REQUIRE(munmap(mmap_host, test_file_size) != -1);\n", " close(fd_mmap);\n", " }\n", "\n", " double elapsed_time = timer.stop();\n", " if (iter >= skip_count)\n", " {\n", " total_elapsed_time += elapsed_time;\n", " }\n", " timer.print();\n", " }\n", " }\n", " fmt::print(\"- mmap/munmap average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n", "\n", " total_elapsed_time = 0;\n", " for (int iter = 0; iter < iter_max; ++iter)\n", " {\n", " cucim::filesystem::discard_page_cache(input_file);\n", " auto fd = cucim::filesystem::open(input_file, \"rm\");\n", " {\n", " cucim::logger::Timer timer(\"- read tiles : {:.7f}\\n\", true, false);\n", "\n", " for (uint32_t i = 0; i < image_piece_count_; ++i)\n", " {\n", " ssize_t read_cnt = fd->pread(buffer_host, image_piece_bytecounts_[i], image_piece_offsets_[i]);\n", " }\n", "\n", " double elapsed_time = timer.stop();\n", " if (iter >= skip_count)\n", " {\n", " total_elapsed_time += elapsed_time;\n", " }\n", " timer.print();\n", " }\n", " }\n", " fmt::print(\"- Read tiles average: {}\\n\", total_elapsed_time / (iter_max - skip_count));\n", " }\n", "\n", " free(unaligned_host);\n", " free(buffer_host);\n", "}\n", "```\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }