{ "cells": [ { "cell_type": "markdown", "id": "88852514-d89d-4fed-9a47-e4083ad7b575", "metadata": {}, "source": [ "# 1. Use the NestMLP model defined in Section 6.1 and access the parameters of the various layers." ] }, { "cell_type": "code", "execution_count": 5, "id": "71dc7c41-d805-4f90-8d5b-40414fd7b150", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "['0.weight', '0.bias', '2.weight', '2.bias']" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch.nn as nn\n", "class NestMLP(nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " self.net = nn.Sequential(nn.LazyLinear(64), nn.ReLU(),\n", " nn.LazyLinear(32), nn.ReLU())\n", " self.linear = nn.LazyLinear(16)\n", "\n", " def forward(self, X):\n", " return self.linear(self.net(X))\n", "\n", "model = NestMLP()\n", "[name for name, param in model.net.named_parameters()]" ] }, { "cell_type": "markdown", "id": "ed12e022-9afc-4f75-a818-d296d9d4a867", "metadata": {}, "source": [ "# 2. Construct an MLP containing a shared parameter layer and train it. During the training process, observe the model parameters and gradients of each layer." ] }, { "cell_type": "code", "execution_count": 5, "id": "0eeb9ae2-e3c9-4c0e-8881-18db312b2fda", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sys\n", "import torch.nn as nn\n", "import torch\n", "import warnings\n", "sys.path.append('/home/jovyan/work/d2l_solutions/notebooks/exercises/d2l_utils/')\n", "import d2l\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "class PlotParameterMLP(d2l.Classifier):\n", " def __init__(self, num_outputs, num_hiddens, lr, dropouts):\n", " super().__init__()\n", " self.save_hyperparameters()\n", " layers = [nn.Flatten(),nn.LazyLinear(num_hiddens[0]),nn.ReLU()]\n", " shared = nn.LazyLinear(num_hiddens[1])\n", " self.activations = []\n", " for i in range(1,len(num_hiddens)):\n", " layers.append(shared)\n", " layers.append(nn.ReLU())\n", " layers.append(nn.Dropout(dropouts[i]))\n", " self.activations.append(i*3)\n", " layers.append(nn.LazyLinear(num_outputs))\n", " self.net = nn.Sequential(*layers)\n", " \n", " def training_step(self, batch, plot_flag=True):\n", " y_hat = self(*batch[:-1])\n", " # auc = torch.tensor(roc_auc_score(batch[-1].detach().numpy() , y_hat[:,1].detach().numpy()))\n", " if plot_flag:\n", " for i in self.activations:\n", " # print(self.net[i].weight.data,self.net[i].weight.grad)\n", " self.plot(f'layer_{i}_weight',self.net[i].weight.data.mean(),train=True)\n", " # self.plot(f'layer_{i}_weight',self.net[i].weight.grad.mean(),train=True)\n", " return self.loss(y_hat, batch[-1])\n", " \n", " def validation_step(self, batch, plot_flag=True):\n", " y_hat = self(*batch[:-1])\n", " # auc = torch.tensor(roc_auc_score(batch[-1].detach().numpy() , y_hat[:,1].detach().numpy()))\n", " if plot_flag:\n", " for i in self.activations:\n", " # self.plot(f'layer_{i}_weight',self.net[i].weight.data.mean(),train=True)\n", " self.plot(f'layer_{i}_weight',self.net[i].weight.grad.mean(),train=True)\n", " return self.loss(y_hat, batch[-1])\n", " \n", " def stat_activation_variance(self, i, X):\n", " activation = self.net[:i](X)\n", " return ((activation-activation.mean(axis=0,keepdim=True))**2).mean()" ] }, { "cell_type": "code", "execution_count": 2, "id": "30dbf993-8ea0-4261-9b7a-2c24e065c0f3", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "(118.06766620278358, 24.128756165504456)" ] }, "execution_count": 2, "metadata": {}, "output_type": 
"execute_result" }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", " \n", " 2023-08-23T14:25:22.060862\n", " image/svg+xml\n", " \n", " \n", " Matplotlib v3.4.0, https://matplotlib.org/\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "data = d2l.FashionMNIST(batch_size=256)\n", "hparams = {'num_outputs':10,'num_hiddens':[8]*3,\n", " 'dropouts':[0]*3,'lr':0.1}\n", "model = PlotParameterMLP(**hparams)\n", "trainer = d2l.Trainer(max_epochs=10)\n", "trainer.fit(model, data)" ] }, { "cell_type": "markdown", "id": "90a58609-6140-4649-a042-f7a38552d30e", "metadata": {}, "source": [ "# 3. Why is sharing parameters a good idea?" ] }, { "cell_type": "markdown", "id": "513d2d7a-85d4-4fc2-a0f0-7a0f3e9b3f53", "metadata": {}, "source": [ "Sharing parameters in a neural network can be a good idea for several reasons:\n", "\n", "1. **Reduced Model Size:** Sharing parameters reduces the number of unique parameters in the model, which can lead to a more compact model representation. This is especially important when dealing with limited computational resources or memory constraints.\n", "\n", "2. **Improved Generalization:** Sharing parameters encourages weight sharing across different parts of the network, promoting regularization and preventing overfitting. It enforces a form of parameter tying, which helps the model generalize better to unseen data.\n", "\n", "3. **Transfer Learning:** Sharing parameters enables transfer learning, where a pre-trained model on one task can be fine-tuned or adapted for a related task with less labeled data. The shared parameters capture general features that can be useful for multiple tasks.\n", "\n", "4. **Learning from Limited Data:** When training data is limited, sharing parameters allows the model to leverage information from multiple similar examples, leading to improved learning from a small dataset.\n", "\n", "5. **Invariance and Abstraction:** Shared parameters can capture high-level features that are invariant across different parts of the input space, leading to the extraction of abstract representations that are beneficial for various tasks.\n", "\n", "6. **Faster Convergence:** Sharing parameters can help the model converge faster because it can learn common patterns more effectively. This can be particularly helpful in cases where training resources are limited.\n", "\n", "7. **Interpretable Representations:** Shared parameters can lead to learned features that are more interpretable and meaningful, making it easier to understand what the model is learning.\n", "\n", "8. **Simpler Architectures:** Sharing parameters can simplify the architecture of the model by reducing the need for separate weights for similar tasks or components. This can lead to easier model design and maintenance.\n", "\n", "9. **Efficient Resource Usage:** Shared parameters allow you to use the same set of weights for multiple instances of a module, saving memory and computation during inference.\n", "\n", "However, it's important to carefully consider which parameters to share and under what conditions. Not all parts of a neural network can or should share parameters. It depends on the task, the data, and the architectural choices. Sharing too many parameters or sharing inappropriately can lead to poor performance or failed convergence. Therefore, it's crucial to analyze the problem, experiment with different parameter-sharing strategies, and monitor the model's performance to ensure that parameter sharing is indeed beneficial." 
] } ], "metadata": { "kernelspec": { "display_name": "Python [conda env:d2l]", "language": "python", "name": "conda-env-d2l-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }