{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# \"Benchmarking Image Retrieval for Visual Localization\"\n", "> \"Short review\"\n", "- toc: false\n", "- image: images/retrieval-for-loc.png\n", "- branch: master\n", "- badges: true\n", "- comments: true\n", "- hide: false\n", "- search_exclude: false" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I would like to share my thoughts on 3DV 2020 paper \"[Benchmarking Image Retrieval for Visual Localization](https://arxiv.org/pdf/2011.11946.pdf)\" by Pion et.al. \n", "\n", "\n", "![](2020-11-25-review-of-retrieval-for-localization_files/att_00000.png \"Overview of the benchmark pipeline\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# What is the paper about? \n", "\n", "How one would approach visual localization? The most viable way to do it is hierarchical approach, similar to image retrieval with spatial verification. \n", "\n", "You get the query image, retrieve the most similar images to it from some database by some efficient method, e.g. global descriptor search. Then given the top-k images you estimate the pose of the query image by doing two-view matching, or some other method. \n", "\n", "The question is -- how much influence the quality of the image retrieval has? Should you spend more or less time improving it? That is the questions, paper trying to answer. \n", "\n", "Authors design 3 re-localization systems. \n", "\n", "1. \"Task 1\" system estimates the query image pose as average of the short-list images pose, weighted by the similarity to the query image.\n", "\n", "2. \"Task 2a\" system performs pairwise two-view matching between short-list images and query triangulates the query image pose using 3d map built from the successully matches images.\n", "\n", "3. \"Task 2b\" pre-builds the 3D map from the database images offline. At the inference time, local feature 2d-3d matching is done on the shortlist images. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# What is benchmarked?\n", "\n", "\n", "The paper compares \n", "\n", "- SIFT-based DenseVLAD\n", "\n", "and CNN-based\n", "\n", "- [NetVLAD](https://arxiv.org/abs/1511.07247)\n", "- [APGeM](https://arxiv.org/pdf/1906.07589.pdf)\n", "- [DELG](https://arxiv.org/abs/2001.05027)\n", "\n", "What is important (and adequately mentioned in the paper, although I would prefer the disclamer in the each figure) is that **all CNN-based methods have different architectures AND training data**. Basically, the paper uses author-released models. Thus one cannot say if APGeM is better or worse than NetVLAD as method, because they were trained on the very different data. However, I also understand that one cannot easily afford to re-implement and re-train everything.\n", "\n", "As the sanity check paper provides the results on the [Revisited Oxford and Paris](http://cmp.felk.cvut.cz/revisitop/) image retrieval benchmark.\n", "![](2020-11-25-review-of-retrieval-for-localization_files/att_00001.png \"Retrieval metrics on Revised Oxford and Paris\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Summary of results\n", "\n", "Paper contains a lot of information and I definitely recommend you to read it. Nevertheless, let me try to summarize paper messages and then my take on it.\n", "\n", "1. For the task1 (similarity-weighted pose) there is no clear winner. (SIFT)-DenseVLAD works the best for the daytime datasets. Probably DenseVLAD is good because it is not invariant and if it can match images, they are really close -> high pose accuracy. For the night both DeLG and AP-GeM are good. As paper guesses, that it because they are only ones, which were trained on night images as well.\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00002.png)\n", "\n", "\n", "2. There is almost no difference between CNN-based methods for the task2a and task2b (retrieval -> local features matching). This indicates that the limit is the mostly in the number of images and local features.\n", "\n", "\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00003.png)\n", "\n", "\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00004.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# My take-away messages\n", "\n", "### Image Relocalization seems to be is more real-world and engineering task, than image retrieval. \n", "\n", "And that it why it actually ALREADY WORKS, because if there some weak spot, it is compensated by the system design. Thhe same conclusion from our [IMC paper](https://arxiv.org/abs/2003.01587), experiment with ground truth -- if you have 1k images for the 3d model, you can use as bad features, as you want. The COLMAP will recover anyway\n", "\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00009.png)\n", "\n", "The retrieval, on the other hand is more interesting to work on, because it is kind of deliberately hard and you can do some fancy stuff, which do not matter in the real world.\n", "\n", "\n", "### Task1 (global descriptor-only) system are quite useless now\n", "\n", "\n", "Unless we are speaking about the quite dense image representation. I mean, top-accuracy is 35% vs almost 100% for those, which include local features.\n", "\n", "Good news: it has a LOT of space for the improvement to work on.\n", "\n", "### For the task 2a and 2b, robust global descriptors are a way to do the retrieval, sorry VLAD.\n", "\n", "The precision will come from the local features. Which I like a lot, because VLAD is more complex to train and initalize, I never liked it (nothing personal).\n", "\n", "\n", "### For the task2a and 2b we need new metrics, e.g. precisition @ X Mb memory footprint\n", "\n", "Because otherwise, the task is easily solved by the brute force -- either by photo taking, or, at least with image syntesis, see [24/7 place recognition by view synthesis](https://hal.inria.fr/hal-01616660/document).\n", "\n", "Such steps are already taken in the paper [Learning and aggregating deep local descriptors for instance-level recognition](https://arxiv.org/abs/2007.13172) -- see the table with memory footprint.\n", "\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00006.png)\n", "\n", "That is how one could have an interesting research challenge, also having some grounds in the real-world -- to work in mobile phones. Otherwise, any method would work, if the database is dense enough.\n", "\n", "\n", "\n", "\n", "### Robust local features matter for illumination changes\n", "\n", "It is a bit hidden in the Appendix, so go directly to the Figure 9. It clearly shows that localization performance is bounded by SIFT, if it is used for two view matching, making retrieval improvements irrelevant. When R2D2 or D2Net are used for matching instead, the overall results for night-time are much better. \n", "\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00007.png)\n", "\n", "That is in line with my small visual benchmark I did recently.\n", "\n", "https://twitter.com/ducha_aiki/status/1330495426865344515\n", "\n", "![image.png](2020-11-25-review-of-retrieval-for-localization_files/att_00008.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's all, folks! Now please, check the [paper](https://arxiv.org/abs/2011.11946.pdf) and the [code](https://github.com/naver/kapture-localization) they provided." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false } }, "nbformat": 4, "nbformat_minor": 4 }