{ "cells": [ { "cell_type": "code", "execution_count": 2, "id": "51f17fe5", "metadata": { "_kg_hide-input": true, "_kg_hide-output": true }, "outputs": [], "source": [ "# install fastkaggle if not available\n", "try: import fastkaggle\n", "except ModuleNotFoundError:\n", " !pip install -Uq fastkaggle\n", "\n", "from fastkaggle import *" ] }, { "cell_type": "markdown", "id": "394d59b3", "metadata": {}, "source": [ "This is part 3 of the [Road to the Top](https://www.kaggle.com/code/jhoward/first-steps-road-to-the-top-part-1) series, in which I show the process I used to tackle the [Paddy Doctor](https://www.kaggle.com/competitions/paddy-disease-classification) competition, leading to four 1st place submissions. The previous notebook is available here: [part 2](https://www.kaggle.com/code/jhoward/first-steps-road-to-the-top-part-1)." ] }, { "cell_type": "markdown", "id": "75eadfab", "metadata": {}, "source": [ "## Memory and gradient accumulation" ] }, { "cell_type": "markdown", "id": "e7535345", "metadata": {}, "source": [ "First we'll repeat the steps we used last time to access the data and ensure all the latest libraries are installed, and we'll also grab the files we'll need for the test set:" ] }, { "cell_type": "code", "execution_count": 3, "id": "1c8e2512", "metadata": { "_kg_hide-output": true }, "outputs": [], "source": [ "comp = 'paddy-disease-classification'\n", "path = setup_comp(comp, install='fastai \"timm>=0.6.2.dev0\"')\n", "from fastai.vision.all import *\n", "set_seed(42)\n", "\n", "tst_files = get_image_files(path/'test_images').sorted()" ] }, { "cell_type": "markdown", "id": "d983950a", "metadata": {}, "source": [ "In this analysis our goal will be to train an ensemble of larger models with larger inputs. The challenge when training such models is generally GPU memory. Kaggle GPUs have 16280MiB of memory available, as at the time of writing. I like to try out my notebooks on my home PC, then upload them -- but I still need them to run OK on Kaggle (especially if it's a code competition, where this is required). My home PC has 24GiB cards, so just because it runs OK at home doesn't mean it'll run OK on Kaggle.\n", " \n", "It's really helpful to be able to quickly try a few models and image sizes and find out what will run successfully. To make this quick, we can just grab a small subset of the data for running short epochs -- the memory use will still be the same, but it'll be much faster.\n", "\n", "One easy way to do this is to simply pick a category with few files in it. Here's our options:" ] }, { "cell_type": "code", "execution_count": 4, "id": "16c330ca", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "normal 1764\n", "blast 1738\n", "hispa 1594\n", "dead_heart 1442\n", "tungro 1088\n", "brown_spot 965\n", "downy_mildew 620\n", "bacterial_leaf_blight 479\n", "bacterial_leaf_streak 380\n", "bacterial_panicle_blight 337\n", "Name: label, dtype: int64" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv(path/'train.csv')\n", "df.label.value_counts()" ] }, { "cell_type": "markdown", "id": "2350bf5f", "metadata": {}, "source": [ "Let's use *bacterial_panicle_blight* since it's the smallest:" ] }, { "cell_type": "code", "execution_count": 5, "id": "2231c84a", "metadata": {}, "outputs": [], "source": [ "trn_path = path/'train_images'/'bacterial_panicle_blight'" ] }, { "cell_type": "markdown", "id": "51cf7cc3", "metadata": {}, "source": [ "Now we'll set up a `train` function which is very similar to the steps we used for training in the last notebook. But there's a few significant differences...\n", "\n", "The first is that I'm using a `finetune` argument to pick whether we are going to run the `fine_tune()` method, or the `fit_one_cycle()` method -- the latter is faster since it doesn't do an initial fine-tuning of the head. When we fine tune in this function I also have it calculate and return the TTA predictions on the test set, since later on we'll be ensembling the TTA results of a number of models. Note also that we no longer have `seed=42` in the `ImageDataLoaders` line -- that means we'll have different training and validation sets each time we call this. That's what we'll want for ensembling, since it means that each model will use slightly different data.\n", "\n", "The more important change is that I've added an `accum` argument to implement *gradient accumulation*. As you'll see in the code below, this does two things:\n", "\n", "1. Divide the batch size by `accum`\n", "1. Add the `GradientAccumulation` callback, passing in `accum`." ] }, { "cell_type": "code", "execution_count": 6, "id": "73c61814", "metadata": {}, "outputs": [], "source": [ "def train(arch, size, item=Resize(480, method='squish'), accum=1, finetune=True, epochs=12):\n", " dls = ImageDataLoaders.from_folder(trn_path, valid_pct=0.2, item_tfms=item,\n", " batch_tfms=aug_transforms(size=size, min_scale=0.75), bs=64//accum)\n", " cbs = GradientAccumulation(64) if accum else []\n", " learn = vision_learner(dls, arch, metrics=error_rate, cbs=cbs).to_fp16()\n", " if finetune:\n", " learn.fine_tune(epochs, 0.01)\n", " return learn.tta(dl=dls.test_dl(tst_files))\n", " else:\n", " learn.unfreeze()\n", " learn.fit_one_cycle(epochs, 0.01)" ] }, { "cell_type": "markdown", "id": "0eb24c71", "metadata": {}, "source": [ "*Gradient accumulation* refers to a very simple trick: rather than updating the model weights after every batch based on that batch's gradients, instead keep *accumulating* (adding up) the gradients for a few batches, and them update the model weights with those accumulated gradients. In fastai, the parameter you pass to `GradientAccumulation` defines how many batches of gradients are accumulated. Since we're adding up the gradients over `accum` batches, we therefore need to divide the batch size by that same number. The resulting training loop is nearly mathematically identical to using the original batch size, but the amount of memory used is the same as using a batch size `accum` times smaller!\n", "\n", "For instance, here's a basic example of a single epoch of a training loop without gradient accumulation:\n", "\n", "```python\n", "for x,y in dl:\n", " calc_loss(coeffs, x, y).backward()\n", " coeffs.data.sub_(coeffs.grad * lr)\n", " coeffs.grad.zero_()\n", "```\n", "\n", "Here's the same thing, but with gradient accumulation added (assuming a target effective batch size of 64):\n", "\n", "```python\n", "count = 0 # track count of items seen since last weight update\n", "for x,y in dl:\n", " count += len(x) # update count based on this minibatch size\n", " calc_loss(coeffs, x, y).backward()\n", " if count>64: # count is greater than accumulation target, so do weight update\n", " coeffs.data.sub_(coeffs.grad * lr)\n", " coeffs.grad.zero_()\n", " count=0 # reset count\n", "```\n", "\n", "The full implementation in fastai is only a few lines of code -- here's the [source code](https://github.com/fastai/fastai/blob/master/fastai/callback/training.py#L26).\n", "\n", "To see the impact of gradient accumulation, consider this small model:" ] }, { "cell_type": "code", "execution_count": 6, "id": "0e656c0d", "metadata": {}, "outputs": [], "source": [ "train('convnext_small_in22k', 128, epochs=1, accum=1, finetune=False)" ] }, { "cell_type": "markdown", "id": "8f39ee28", "metadata": {}, "source": [ "Let's create a function to find out how much memory it used, and also to then clear out the memory for the next run:" ] }, { "cell_type": "code", "execution_count": 7, "id": "5d6b0e03", "metadata": {}, "outputs": [], "source": [ "import gc\n", "def report_gpu():\n", " print(torch.cuda.list_gpu_processes())\n", " gc.collect()\n", " torch.cuda.empty_cache()" ] }, { "cell_type": "code", "execution_count": 8, "id": "5c0b2441", "metadata": {}, "outputs": [], "source": [ "report_gpu()" ] }, { "cell_type": "markdown", "id": "6f4c2d43", "metadata": {}, "source": [ "So with `accum=1` the GPU used around 5GB RAM. Let's try `accum=2`:" ] }, { "cell_type": "code", "execution_count": 9, "id": "d0a5f501", "metadata": {}, "outputs": [], "source": [ "train('convnext_small_in22k', 128, epochs=1, accum=2, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "markdown", "id": "9f03cc62", "metadata": {}, "source": [ "As you see, the RAM usage has now gone down to 4GB. It's not halved since there's other overhead involved (for larger models this overhead is likely to be relatively lower).\n", "\n", "Let's try `4`:" ] }, { "cell_type": "code", "execution_count": 10, "id": "5afc86bd", "metadata": {}, "outputs": [], "source": [ "train('convnext_small_in22k', 128, epochs=1, accum=4, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "markdown", "id": "77f6c5b8", "metadata": {}, "source": [ "The memory use is even lower!" ] }, { "cell_type": "markdown", "id": "7376e138", "metadata": {}, "source": [ "## Checking memory use" ] }, { "cell_type": "markdown", "id": "3f19e2ff", "metadata": {}, "source": [ "We'll now check the memory use for each of the architectures and sizes we'll be training later, to ensure they all fit in 16GB RAM. For each of these, I tried `accum=1` first, and then doubled it any time the resulting memory use was over 16GB. As it turns out, `accum=2` was what I needed for every case.\n", "\n", "First, `convnext_large`:" ] }, { "cell_type": "code", "execution_count": 11, "id": "e7bf9f72", "metadata": {}, "outputs": [], "source": [ "train('convnext_large_in22k', 224, epochs=1, accum=2, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "code", "execution_count": 12, "id": "adf6d642", "metadata": {}, "outputs": [], "source": [ "train('convnext_large_in22k', (320,240), epochs=1, accum=2, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "markdown", "id": "11bba1f4", "metadata": {}, "source": [ "Here's `vit_large`. This one is very close to going over the 16280MiB we've got on Kaggle!" ] }, { "cell_type": "code", "execution_count": 13, "id": "dbb9f38c", "metadata": {}, "outputs": [], "source": [ "train('vit_large_patch16_224', 224, epochs=1, accum=2, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "markdown", "id": "49ec997c", "metadata": {}, "source": [ "Then finally our `swinv2` and `swin` models:" ] }, { "cell_type": "code", "execution_count": 14, "id": "dc92938b", "metadata": { "scrolled": false }, "outputs": [], "source": [ "train('swinv2_large_window12_192_22k', 192, epochs=1, accum=2, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "code", "execution_count": 15, "id": "3e8cefbb", "metadata": { "scrolled": false }, "outputs": [], "source": [ "train('swin_large_patch4_window7_224', 224, epochs=1, accum=2, finetune=False)\n", "report_gpu()" ] }, { "cell_type": "markdown", "id": "c5a55b7f", "metadata": {}, "source": [ "## Running the models" ] }, { "cell_type": "markdown", "id": "aa5f9c00", "metadata": {}, "source": [ "Using the previous notebook, I tried a bunch of different architectures and preprocessing approaches on small models, and picked a few which looked good. We'll using a `dict` to list our the preprocessing approaches we'll use for each architecture of interest based on that analysis:" ] }, { "cell_type": "code", "execution_count": null, "id": "c58260ff", "metadata": {}, "outputs": [], "source": [ "res = 640,480" ] }, { "cell_type": "code", "execution_count": 16, "id": "a8dc102b", "metadata": {}, "outputs": [], "source": [ "models = {\n", " 'convnext_large_in22k': {\n", " (Resize(res), 224),\n", " (Resize(res), (320,224)),\n", " }, 'vit_large_patch16_224': {\n", " (Resize(480, method='squish'), 224),\n", " (Resize(res), 224),\n", " }, 'swinv2_large_window12_192_22k': {\n", " (Resize(480, method='squish'), 192),\n", " (Resize(res), 192),\n", " }, 'swin_large_patch4_window7_224': {\n", " (Resize(480, method='squish'), 224),\n", " (Resize(res), 224),\n", " }\n", "}" ] }, { "cell_type": "markdown", "id": "83706c47", "metadata": {}, "source": [ "We'll need to switch to using the full training set of course!" ] }, { "cell_type": "code", "execution_count": 17, "id": "b418ece4", "metadata": {}, "outputs": [], "source": [ "trn_path = path/'train_images'" ] }, { "cell_type": "markdown", "id": "2e4dc386", "metadata": {}, "source": [ "Now we're ready to train all these models. Remember that each is using a different training and validation set, so the results aren't directly comparable.\n", "\n", "We'll append each set of TTA predictions on the test set into a list called `tta_res`." ] }, { "cell_type": "code", "execution_count": 18, "id": "91ed5157", "metadata": { "scrolled": false }, "outputs": [], "source": [ "tta_res = []\n", "\n", "for arch,details in models.items():\n", " for item,size in details:\n", " print('---',arch)\n", " print(size)\n", " print(item.name)\n", " tta_res.append(train(arch, size, item=item, accum=2)) #, epochs=1))\n", " gc.collect()\n", " torch.cuda.empty_cache()" ] }, { "cell_type": "markdown", "id": "f3bd2d9b", "metadata": {}, "source": [ "## Ensembling" ] }, { "cell_type": "markdown", "id": "1d720dc3", "metadata": {}, "source": [ "Since this has taken quite a while to run, let's save the results, just in case something goes wrong!" ] }, { "cell_type": "code", "execution_count": 19, "id": "b370e7e3", "metadata": {}, "outputs": [], "source": [ "save_pickle('tta_res.pkl', tta_res)" ] }, { "cell_type": "markdown", "id": "4d1fcf43", "metadata": {}, "source": [ "`Learner.tta` returns predictions and targets for each rows. We just want the predictions:" ] }, { "cell_type": "code", "execution_count": 20, "id": "8de40a4d", "metadata": {}, "outputs": [], "source": [ "tta_prs = first(zip(*tta_res))" ] }, { "cell_type": "markdown", "id": "22a890f4", "metadata": {}, "source": [ "Originally I just used the above predictions, but later I realised in my experiments on smaller models that `vit` was a bit better than everything else, so I decided to give those double the weight in my ensemble. I did that by simply adding the to the list a second time (we could also do this by using a weighted average):" ] }, { "cell_type": "code", "execution_count": 21, "id": "3093b5a6", "metadata": {}, "outputs": [], "source": [ "tta_prs += tta_prs[2:4]" ] }, { "cell_type": "markdown", "id": "383ef283", "metadata": {}, "source": [ "An *ensemble* simply refers to a model which is itself the result of combining a number of other models. The simplest way to do ensembling is to take the average of the predictions of each model:" ] }, { "cell_type": "code", "execution_count": 22, "id": "64d79d4c", "metadata": {}, "outputs": [], "source": [ "avg_pr = torch.stack(tta_prs).mean(0)\n", "avg_pr.shape" ] }, { "cell_type": "markdown", "id": "31a34e9c", "metadata": {}, "source": [ "That's all that's needed to create an ensemble! Finally, we copy the steps we used in the last notebook to create a submission file:" ] }, { "cell_type": "code", "execution_count": 23, "id": "4d71eab6", "metadata": {}, "outputs": [], "source": [ "dls = ImageDataLoaders.from_folder(trn_path, valid_pct=0.2, item_tfms=Resize(480, method='squish'),\n", " batch_tfms=aug_transforms(size=224, min_scale=0.75))" ] }, { "cell_type": "code", "execution_count": 24, "id": "8d8bad75", "metadata": {}, "outputs": [], "source": [ "idxs = avg_pr.argmax(dim=1)\n", "vocab = np.array(dls.vocab)\n", "ss = pd.read_csv(path/'sample_submission.csv')\n", "ss['label'] = vocab[idxs]\n", "ss.to_csv('subm.csv', index=False)" ] }, { "cell_type": "markdown", "id": "a4e5be6b", "metadata": {}, "source": [ "Now we can submit:" ] }, { "cell_type": "code", "execution_count": 31, "id": "8e244be1", "metadata": {}, "outputs": [], "source": [ "if not iskaggle:\n", " from kaggle import api\n", " api.competition_submit_cli('subm.csv', 'part 3 v2', comp)" ] }, { "cell_type": "markdown", "id": "1acaf9aa", "metadata": {}, "source": [ "That's it -- at the time of creating this analysis, that got easily to the top of the leaderboard! Here are the four submissions I entered, each of which was better than the last, and each of which was ranked #1:\n", "\n", "\n", "\n", "*Edit: Actually the one that got to the top of the leaderboard timed out when I ran it on Kaggle Notebooks, so I had to remove two of the runs from the ensemble. There's only a very small difference in accuracy however.*" ] }, { "cell_type": "markdown", "id": "ef8f6ced", "metadata": {}, "source": [ "Going from bottom to top, here's what each one was:\n", "\n", "1. `convnext_small` trained for 12 epochs, with TTA\n", "1. `convnext_large` trained the same way\n", "1. The ensemble in this notebook, with `vit` models not over-weighted\n", "1. The ensemble in this notebook, with `vit` models over-weighted." ] }, { "cell_type": "markdown", "id": "e9ed54c9", "metadata": {}, "source": [ "## Conclusion" ] }, { "cell_type": "markdown", "id": "e5a4838e", "metadata": {}, "source": [ "The key takeaway I hope to get across from this series so far is that you can get great results in image recognition using very little code and a very standardised approach, and that with a rigorous process you can improve in significant steps. Our training function, including data processing and TTA, is just half a dozen lines of code, plus another 7 lines of code to ensemble the models and create a submission file!\n", "\n", "If you found this notebook useful, please remember to click the little up-arrow at the top to upvote it, since I like to know when people have found my work useful, and it helps others find it too. If you have any questions or comments, please pop them below -- I read every comment I receive!" ] }, { "cell_type": "code", "execution_count": 8, "id": "c5b966c6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Kernel version 6 successfully pushed. Please check progress at https://www.kaggle.com/code/jhoward/scaling-up-road-to-the-top-part-3\n" ] } ], "source": [ "# This is what I use to push my notebook from my home PC to Kaggle\n", "\n", "if not iskaggle:\n", " push_notebook('jhoward', 'scaling-up-road-to-the-top-part-3',\n", " title='Scaling Up: Road to the Top, Part 3',\n", " file='10-scaling-up-road-to-the-top-part-3.ipynb',\n", " competition=comp, private=False, gpu=True)" ] }, { "cell_type": "code", "execution_count": null, "id": "ef58b48e", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.10" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": false, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 5 }