{"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"pygments_lexer":"ipython3","nbconvert_exporter":"python","version":"3.6.4","file_extension":".py","codemirror_mode":{"name":"ipython","version":3},"name":"python","mimetype":"text/x-python"}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"markdown","source":"*The data, concept, and initial implementation of this notebook was done in Colab by Ross Wightman, the creator of timm. I (Jeremy Howard) did some refactoring, curating, and expanding of the analysis, and added prose.*","metadata":{}},{"cell_type":"markdown","source":"## timm\n\n[PyTorch Image Models](https://timm.fast.ai/) (timm) is a wonderful library by Ross Wightman which provides state-of-the-art pre-trained computer vision models. It's like Huggingface Transformers, but for computer vision instead of NLP (and it's not restricted to transformers-based models)!\n\nRoss has been kind enough to help me understand how to best take advantage of this library by identifying the top models. I'm going to share here so of what I've learned from him, plus some additional ideas.","metadata":{}},{"cell_type":"markdown","source":"## The data\n\nRoss regularly benchmarks new models as they are added to timm, and puts the results in a CSV in the project's GitHub repo. To analyse the data, we'll first clone the repo:","metadata":{}},{"cell_type":"code","source":"! git clone --depth 1 https://github.com/rwightman/pytorch-image-models.git\n%cd pytorch-image-models/results","metadata":{"execution":{"iopub.status.busy":"2022-05-21T21:57:16.844988Z","iopub.execute_input":"2022-05-21T21:57:16.845484Z","iopub.status.idle":"2022-05-21T21:57:19.074407Z","shell.execute_reply.started":"2022-05-21T21:57:16.845369Z","shell.execute_reply":"2022-05-21T21:57:19.073343Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"Using Pandas, we can read the two CSV files we need, and merge them together.","metadata":{}},{"cell_type":"code","source":"import pandas as pd\ndf_results = pd.read_csv('results-imagenet.csv')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:32:00.836794Z","iopub.execute_input":"2022-05-21T22:32:00.838168Z","iopub.status.idle":"2022-05-21T22:32:00.84788Z","shell.execute_reply.started":"2022-05-21T22:32:00.838091Z","shell.execute_reply":"2022-05-21T22:32:00.846328Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"We'll also add a \"family\" column that will allow us to group architectures into categories with similar characteristics:\n\nRoss has told me which models he's found the most usable in practice, so I'll limit the charts to just look at these. (I also include VGG, not because it's good, but as a comparison to show how far things have come in the last few years.)","metadata":{}},{"cell_type":"code","source":"def get_data(part, col):\n df = pd.read_csv(f'benchmark-{part}-amp-nhwc-pt111-cu113-rtx3090.csv').merge(df_results, on='model')\n df['secs'] = 1. / df[col]\n df['family'] = df.model.str.extract('^([a-z]+?(?:v2)?)(?:\\d|_|$)')\n df = df[~df.model.str.endswith('gn')]\n df.loc[df.model.str.contains('in22'),'family'] = df.loc[df.model.str.contains('in22'),'family'] + '_in22'\n df.loc[df.model.str.contains('resnet.*d'),'family'] = df.loc[df.model.str.contains('resnet.*d'),'family'] + 'd'\n return df[df.family.str.contains('^re[sg]netd?|beit|convnext|levit|efficient|vit|vgg')]","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:40.166722Z","iopub.execute_input":"2022-05-21T22:28:40.167131Z","iopub.status.idle":"2022-05-21T22:28:40.175037Z","shell.execute_reply.started":"2022-05-21T22:28:40.1671Z","shell.execute_reply":"2022-05-21T22:28:40.174201Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"df = get_data('infer', 'infer_samples_per_sec')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:40.334426Z","iopub.execute_input":"2022-05-21T22:28:40.335394Z","iopub.status.idle":"2022-05-21T22:28:40.363688Z","shell.execute_reply.started":"2022-05-21T22:28:40.335349Z","shell.execute_reply":"2022-05-21T22:28:40.362455Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"## Inference results","metadata":{}},{"cell_type":"markdown","source":"Here's the results for inference performance (see the last section for training performance). In this chart:\n\n- the x axis shows how many seconds it takes to process one image (**note**: it's a log scale)\n- the y axis is the accuracy on Imagenet\n- the size of each bubble is proportional to the size of images used in testing\n- the color shows what \"family\" the architecture is from.\n\nHover your mouse over a marker to see details about the model. Double-click in the legend to display just one family. Single-click in the legend to show or hide a family.\n\n**Note**: on my screen, Kaggle cuts off the family selector and some plotly functionality -- to see the whole thing, collapse the table of contents on the right by clicking the little arrow to the right of \"*Contents*\".","metadata":{}},{"cell_type":"code","source":"import plotly.express as px\nw,h = 1000,800\n\ndef show_all(df, title, size):\n return px.scatter(df, width=w, height=h, size=df[size]**2, title=title,\n x='secs', y='top1', log_x=True, color='family', hover_name='model', hover_data=[size])","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:40.792348Z","iopub.execute_input":"2022-05-21T22:28:40.792677Z","iopub.status.idle":"2022-05-21T22:28:40.79978Z","shell.execute_reply.started":"2022-05-21T22:28:40.79264Z","shell.execute_reply":"2022-05-21T22:28:40.799001Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"show_all(df, 'Inference', 'infer_img_size')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:40.927292Z","iopub.execute_input":"2022-05-21T22:28:40.927563Z","iopub.status.idle":"2022-05-21T22:28:41.145146Z","shell.execute_reply.started":"2022-05-21T22:28:40.927535Z","shell.execute_reply":"2022-05-21T22:28:41.144313Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"That number of families can be a bit overwhelming, so I'll just pick a subset which represents a single key model from each of the families that are looking best in our plot. I've also separated convnext models into those which have been pretrained on the larger 22,000 category imagenet sample (`convnext_in22`) vs those that haven't (`convnext`). (Note that many of the best performing models were trained on the larger sample -- see the papers for details before coming to conclusions about the effectiveness of these architectures more generally.)","metadata":{}},{"cell_type":"code","source":"subs = 'levit|resnetd?|regnetx|vgg|convnext.*|efficientnetv2|beit'","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:20:03.286446Z","iopub.execute_input":"2022-05-21T22:20:03.286855Z","iopub.status.idle":"2022-05-21T22:20:03.292554Z","shell.execute_reply.started":"2022-05-21T22:20:03.286818Z","shell.execute_reply":"2022-05-21T22:20:03.291071Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"In this chart, I'll add lines through the points of each family, to help see how they compare -- but note that we can see that a linear fit isn't actually ideal here! It's just there to help visually see the groups.","metadata":{}},{"cell_type":"code","source":"def show_subs(df, title, size):\n df_subs = df[df.family.str.fullmatch(subs)]\n return px.scatter(df_subs, width=w, height=h, size=df_subs[size]**2, title=title,\n trendline=\"ols\", trendline_options={'log_x':True},\n x='secs', y='top1', log_x=True, color='family', hover_name='model', hover_data=[size])","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:49.422877Z","iopub.execute_input":"2022-05-21T22:28:49.423771Z","iopub.status.idle":"2022-05-21T22:28:49.429789Z","shell.execute_reply.started":"2022-05-21T22:28:49.423737Z","shell.execute_reply":"2022-05-21T22:28:49.429193Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"show_subs(df, 'Inference', 'infer_img_size')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:49.622859Z","iopub.execute_input":"2022-05-21T22:28:49.623275Z","iopub.status.idle":"2022-05-21T22:28:49.825596Z","shell.execute_reply.started":"2022-05-21T22:28:49.623231Z","shell.execute_reply":"2022-05-21T22:28:49.824909Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"From this, we can see that the *levit* family models are extremely fast for image recognition, and clearly the most accurate amongst the faster models. That's not surprising, since these models are a hybrid of the best ideas from CNNs and transformers, so get the benefit of each. In fact, we see a similar thing even in the middle category of speeds -- the best is the ConvNeXt, which is a pure CNN, but which takes advantage of ideas from the transformers literature.\n\nFor the slowest models, *beit* is the most accurate -- although we need to be a bit careful of interpreting this, since it's trained on a larger dataset (ImageNet-21k, which is also used for *vit* models).\n\nI'll add one other plot here, which is of speed vs parameter count. Often, parameter count is used in papers as a proxy for speed. However, as we see, there is a wide variation in speeds at each level of parameter count, so it's really not a useful proxy.\n\n(Parameter count may be be useful for identifying how much memory a model needs, but even for that it's not always a great proxy.)","metadata":{}},{"cell_type":"code","source":"px.scatter(df, width=w, height=h,\n x='param_count_x', y='secs', log_x=True, log_y=True, color='infer_img_size',\n hover_name='model', hover_data=['infer_samples_per_sec', 'family']\n)","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:50.296775Z","iopub.execute_input":"2022-05-21T22:28:50.298941Z","iopub.status.idle":"2022-05-21T22:28:50.400432Z","shell.execute_reply.started":"2022-05-21T22:28:50.298868Z","shell.execute_reply":"2022-05-21T22:28:50.398417Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"## Training results","metadata":{}},{"cell_type":"markdown","source":"We'll now replicate the above analysis for training performance. First we grab the data:","metadata":{}},{"cell_type":"code","source":"tdf = get_data('train', 'train_samples_per_sec')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:52.606993Z","iopub.execute_input":"2022-05-21T22:28:52.607289Z","iopub.status.idle":"2022-05-21T22:28:52.63373Z","shell.execute_reply.started":"2022-05-21T22:28:52.607259Z","shell.execute_reply":"2022-05-21T22:28:52.632402Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"Now we can repeat the same *family* plot we did above:","metadata":{}},{"cell_type":"code","source":"show_all(tdf, 'Training', 'train_img_size')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:53.584779Z","iopub.execute_input":"2022-05-21T22:28:53.585093Z","iopub.status.idle":"2022-05-21T22:28:53.839268Z","shell.execute_reply.started":"2022-05-21T22:28:53.585056Z","shell.execute_reply":"2022-05-21T22:28:53.838135Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"...and we'll also look at our chosen subset of models:","metadata":{}},{"cell_type":"code","source":"show_subs(tdf, 'Training', 'train_img_size')","metadata":{"execution":{"iopub.status.busy":"2022-05-21T22:28:54.284202Z","iopub.execute_input":"2022-05-21T22:28:54.284502Z","iopub.status.idle":"2022-05-21T22:28:54.486759Z","shell.execute_reply.started":"2022-05-21T22:28:54.284471Z","shell.execute_reply":"2022-05-21T22:28:54.485781Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"Finally, we should remember that speed depends on hardware. If you're using something other than a modern NVIDIA GPU, your results may be different. In particular, I suspect that transformers-based models might have worse performance in general on CPUs (although I need to study this more to be sure).","metadata":{}},{"cell_type":"code","source":"","metadata":{},"execution_count":null,"outputs":[]}]}