{ "cells": [ { "cell_type": "code", "execution_count": 23, "id": "91c1514c-8ab4-4f25-814f-fae34e461359", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sys\n", "import torch.nn as nn\n", "import torch\n", "import warnings\n", "sys.path.append('/home/jovyan/work/d2l_solutions/notebooks/exercises/d2l_utils/')\n", "import d2l\n", "from torchsummary import summary\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "def nin_block(out_channels, kernel_size, strides, padding, conv1s=[[1,0],[1,0]]):\n", " layers = [nn.LazyConv2d(out_channels, kernel_size=kernel_size, stride=strides, padding=padding),nn.ReLU()]\n", " for conv1_size,conv1_padding in conv1s:\n", " layers.append(nn.LazyConv2d(out_channels, kernel_size=conv1_size,padding=conv1_padding))\n", " layers.append(nn.ReLU())\n", " return nn.Sequential(*layers)\n", "\n", "class Nin(d2l.Classifier):\n", " def __init__(self, arch, lr=0.1):\n", " super().__init__()\n", " self.save_hyperparameters()\n", " layers = []\n", " for i in range(len(arch)-1):\n", " layers.append(nin_block(*arch[i]))\n", " layers.append(nn.MaxPool2d(3, stride=2))\n", " layers.append(nn.Dropout(0.5))\n", " layers.append(nin_block(*arch[-1]))\n", " layers.append(nn.AdaptiveAvgPool2d((1, 1)))\n", " layers.append(nn.Flatten())\n", " self.net = nn.Sequential(*layers)\n", " self.net.apply(d2l.init_cnn)" ] }, { "cell_type": "code", "execution_count": 13, "id": "6f94a8fb-7eb0-4a5f-80f8-b9af28e03fd2", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "Nin(\n", " (net): Sequential(\n", " (0): Sequential(\n", " (0): LazyConv2d(0, 96, kernel_size=(11, 11), stride=(4, 4))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 96, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 96, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (2): Sequential(\n", " (0): LazyConv2d(0, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (4): Sequential(\n", " (0): LazyConv2d(0, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 384, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 384, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (6): Dropout(p=0.5, inplace=False)\n", " (7): Sequential(\n", " (0): LazyConv2d(0, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 10, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 10, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (8): AdaptiveAvgPool2d(output_size=(1, 1))\n", " (9): Flatten(start_dim=1, end_dim=-1)\n", " )\n", ")" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = d2l.FashionMNIST(batch_size=128, resize=(224, 224))\n", "arch = ((96,11,4,0),(256,5,1,2),(384,3,1,1),(10,3,1,1))\n", "model = Nin(arch, lr=0.03)\n", "model.apply_init([next(iter(data.get_dataloader(True)))[0]], d2l.init_cnn)\n", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "# Initialize memory 
counters\n", "torch.cuda.reset_peak_memory_stats()\n", "torch.cuda.empty_cache()\n", "trainer = d2l.Trainer(max_epochs=10, num_gpus=1)\n", "trainer.fit(model, data)\n", "memory_stats = torch.cuda.memory_stats(device=device)\n", "# Print peak memory usage and other memory statistics\n", "print(\"Peak memory usage:\", memory_stats[\"allocated_bytes.all.peak\"] / (1024 ** 2), \"MB\")\n", "print(\"Current memory usage:\", memory_stats[\"allocated_bytes.all.current\"] / (1024 ** 2), \"MB\")\n", "X,y = next(iter(data.get_dataloader(False)))\n", "X = X.to('cuda')\n", "y = y.to('cuda')\n", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "torch.cuda.reset_peak_memory_stats()\n", "torch.cuda.empty_cache()\n", "y_hat = model(X) \n", "print(f'acc: {model.accuracy(y_hat,y).item():.2f}')\n", "memory_stats = torch.cuda.memory_stats(device=device)\n", "print(\"Peak memory usage:\", memory_stats[\"allocated_bytes.all.peak\"] / (1024 ** 2), \"MB\")\n", "print(\"Current memory usage:\", memory_stats[\"allocated_bytes.all.current\"] / (1024 ** 2), \"MB\")" ] }, { "cell_type": "markdown", "id": "4989934d-2b05-406c-a62e-d90b5a09bd29", "metadata": {}, "source": [ "# 1. Why are there two $1\\times1$ convolutional layers per NiN block? Increase their number to three. Reduce their number to one. What changes?" ] }, { "cell_type": "markdown", "id": "e0ffa8f3-8c08-4b4b-91cf-ac1cdf1e93ee", "metadata": {}, "source": [ "In Network in Network (NiN) architecture, $1\\times1$ convolutional layers are used to introduce additional non-linearity and increase the capacity of the network without introducing too many parameters. The inclusion of these $1\\times1$ convolutions has specific effects on the network's expressiveness and complexity:\n", "\n", "1. **Two $1\\times1$ Convolutional Layers per NiN Block**:\n", " - When there are two $1\\times1$ convolutional layers per NiN block, it creates multiple pathways for feature transformation. Each $1\\times1$ convolution performs its own set of operations, allowing the network to capture complex relationships between features and enable better representation learning.\n", " - Having two $1\\times1$ convolutions can increase the model's capacity and non-linearity, potentially leading to improved accuracy and more expressive features.\n", "\n", "2. **Three $1\\times1$ Convolutional Layers per NiN Block**:\n", " - Increasing the number of $1\\times1$ convolutional layers further amplifies the network's capacity. Each additional convolutional layer introduces more non-linearity and the possibility of capturing more complex interactions between features.\n", " - However, increasing the number of $1\\times1$ convolutions also increases the number of parameters and computations, potentially leading to overfitting and higher computational costs.\n", "\n", "3. **One $1\\times1$ Convolutional Layer per NiN Block**:\n", " - Using only one $1\\times1$ convolutional layer reduces the complexity of each NiN block. It limits the capacity of the network to capture complex feature interactions, and may lead to underfitting if the dataset and task are complex.\n", " - Reducing the number of $1\\times1$ convolutions also decreases the number of parameters and computations, which can be beneficial for faster training and reduced memory usage.\n", "\n", "Overall, the number of $1\\times1$ convolutional layers in NiN blocks impacts the network's capacity, complexity, and computational requirements. 
The optimal choice depends on factors such as the dataset's complexity, available computational resources, and desired trade-off between accuracy and efficiency. Experimentation and validation on a specific task are necessary to determine the most suitable configuration for the network." ] }, { "cell_type": "code", "execution_count": 9, "id": "3be56373-dbba-4733-8eda-53ae418d812b", "metadata": { "tags": [] }, "outputs": [], "source": [ "arch = ((96,11,4,0,3),(256,5,1,2,3),(384,3,1,1,3),(10,3,1,1,3))\n", "model = Nin(arch)\n", "trainer = d2l.Trainer(max_epochs=10, num_gpus=1)\n", "trainer.fit(model, data)\n", "X,y = next(iter(data.get_dataloader(False)))\n", "X = X.to('cuda')\n", "y = y.to('cuda')\n", "y_hat = model(X) \n", "print(f'acc: {model.accuracy(y_hat,y).item():.2f}')" ] }, { "cell_type": "code", "execution_count": null, "id": "c15157aa-285b-4048-bd03-40fc3d4b92a9", "metadata": {}, "outputs": [], "source": [ "data = d2l.FashionMNIST(batch_size=128, resize=(224, 224))\n", "arch = ((96,11,4,0,[[1,0]]),(256,5,1,2,[[1,0]]),(384,3,1,1,[[1,0]]),(10,3,1,1,[[1,0]]))\n", "model = Nin(arch, lr=0.05)\n", "model.apply_init([next(iter(data.get_dataloader(True)))[0]], d2l.init_cnn)\n", "trainer = d2l.Trainer(max_epochs=10, num_gpus=1)\n", "trainer.fit(model, data)\n", "X,y = next(iter(data.get_dataloader(False)))\n", "X = X.to('cuda')\n", "y = y.to('cuda')\n", "y_hat = model(X) \n", "print(f'acc: {model.accuracy(y_hat,y).item():.2f}')" ] }, { "cell_type": "markdown", "id": "cccad96b-760c-4e10-a20b-0ae800640c8b", "metadata": {}, "source": [ "# 2. What changes if you replace the $1\\times1$ convolutions by $3\\times3$ convolutions?" ] }, { "cell_type": "code", "execution_count": 15, "id": "04de0a08-6c85-4cf2-bdb9-2622b440365e", "metadata": { "tags": [] }, "outputs": [], "source": [ "arch = ((96,11,4,0,[[3,1],[3,1]]),(256,5,1,2,[[3,1],[3,1]]),(384,3,1,1,[[3,1],[3,1]]),(10,3,1,1,[[3,1],[3,1]]))\n", "model = Nin(arch)\n", "model.apply_init([next(iter(data.get_dataloader(True)))[0]], d2l.init_cnn)\n", "trainer = d2l.Trainer(max_epochs=10, num_gpus=1)\n", "trainer.fit(model, data)\n", "X,y = next(iter(data.get_dataloader(False)))\n", "X = X.to('cuda')\n", "y = y.to('cuda')\n", "y_hat = model(X) \n", "print(f'acc: {model.accuracy(y_hat,y).item():.2f}')" ] }, { "cell_type": "markdown", "id": "3048f1be-7b68-427a-b910-a20d6e22ef35", "metadata": {}, "source": [ "# 3. What happens if you replace the global average pooling by a fully connected layer (speed, accuracy, number of parameters)?" ] }, { "cell_type": "code", "execution_count": null, "id": "2f6705bb-38cc-425c-bc41-d6fb62f42162", "metadata": {}, "outputs": [], "source": [ "class MLPNin(d2l.Classifier):\n", " def __init__(self, arch, lr=0.1, num_classes=10):\n", " super().__init__()\n", " self.save_hyperparameters()\n", " layers = []\n", " for i in range(len(arch)-1):\n", " layers.append(nin_block(*arch[i]))\n", " layers.append(nn.MaxPool2d(3, stride=2))\n", " layers.append(nn.Dropout(0.5))\n", " layers.append(nin_block(*arch[-1]))\n", " layers.append(nn.Flatten())\n", " layers.append(nn.LazyLinear(num_classes))\n", " self.net = nn.Sequential(*layers)\n", " self.net.apply(d2l.init_cnn)" ] }, { "cell_type": "markdown", "id": "e6d951a1-f0e4-4069-a11c-4075a3be3fe7", "metadata": {}, "source": [ "# 4. Calculate the resource usage for NiN." ] }, { "cell_type": "markdown", "id": "d3f9c0d2-05c4-4e33-adad-9cd567408607", "metadata": {}, "source": [ "## 4.1 What is the number of parameters?" 
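] }, { "cell_type": "markdown", "id": "f2a9c3d1-parameter-count-by-hand", "metadata": {}, "source": [
"A quick hand count as a cross-check (a sketch assuming the 3-channel $224\\times224$ input used in the cell below; a convolution with $c_i$ input channels, $c_o$ output channels and kernel size $k$ has $c_i c_o k^2 + c_o$ parameters, while pooling, dropout and flatten layers have none):\n",
"\n",
"- Block 1: $3\\to96$, $11\\times11$: 34,944; two $1\\times1$ ($96\\to96$): 18,624; subtotal 53,568\n",
"- Block 2: $96\\to256$, $5\\times5$: 614,656; two $1\\times1$ ($256\\to256$): 131,584; subtotal 746,240\n",
"- Block 3: $256\\to384$, $3\\times3$: 885,120; two $1\\times1$ ($384\\to384$): 295,680; subtotal 1,180,800\n",
"- Block 4: $384\\to10$, $3\\times3$: 34,570; two $1\\times1$ ($10\\to10$): 220; subtotal 34,790\n",
"\n",
"Total: $53{,}568 + 746{,}240 + 1{,}180{,}800 + 34{,}790 = 2{,}015{,}398$, matching the count computed below."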
] }, { "cell_type": "code", "execution_count": 17, "id": "51d14dda-da4e-4b90-949b-d714fbdaaa4e", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total parameters: 2015398\n" ] } ], "source": [ "arch = ((96,11,4,0,2),(256,5,1,2,2),(384,3,1,1,2),(10,3,1,1,2))\n", "model = Nin(arch)\n", "X = torch.randn(1,3, 224, 224)\n", "_ = model(X)\n", "total_params = sum(p.numel() for p in model.parameters())\n", "print(\"Total parameters:\", total_params)" ] }, { "cell_type": "markdown", "id": "fca4f6d1-eca1-44c5-8f5b-760351ae39e0", "metadata": {}, "source": [ "## 4.2 What is the amount of computation?" ] }, { "cell_type": "code", "execution_count": 19, "id": "6757f550-0806-411f-8ca1-3a05957a5e5d", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[INFO] Register count_convNd() for .\n", "[INFO] Register zero_ops() for .\n", "[INFO] Register zero_ops() for .\n", "[INFO] Register zero_ops() for .\n", "[INFO] Register zero_ops() for .\n", "[INFO] Register count_adap_avgpool() for .\n", "Total FLOPs: 830042124.0\n" ] } ], "source": [ "from thop import profile\n", "flops, params = profile(model, inputs=(X,))\n", "print(\"Total FLOPs:\", flops)" ] }, { "cell_type": "markdown", "id": "cad13106-f5bd-418f-b4aa-2c467decee8e", "metadata": {}, "source": [ "## 4.3 What is the amount of memory needed during training?" ] }, { "cell_type": "code", "execution_count": null, "id": "8c0650fd-1ee0-45af-85f5-11c4beecd5fe", "metadata": {}, "outputs": [], "source": [ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "# Initialize memory counters\n", "torch.cuda.reset_peak_memory_stats()\n", "torch.cuda.empty_cache()\n", "trainer = d2l.Trainer(max_epochs=10, num_gpus=1)\n", "trainer.fit(model, data)\n", "memory_stats = torch.cuda.memory_stats(device=device)\n", "# Print peak memory usage and other memory statistics\n", "print(\"Peak memory usage:\", memory_stats[\"allocated_bytes.all.peak\"] / (1024 ** 2), \"MB\")\n", "print(\"Current memory usage:\", memory_stats[\"allocated_bytes.all.current\"] / (1024 ** 2), \"MB\")" ] }, { "cell_type": "markdown", "id": "dd3ba416-6d59-4b7e-a51c-a3dd01ea660a", "metadata": {}, "source": [ "## 4.4 What is the amount of memory needed during prediction?" ] }, { "cell_type": "code", "execution_count": 30, "id": "e61536c7-4ef5-4442-99fa-b58d37309fd5", "metadata": { "tags": [] }, "outputs": [], "source": [ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "torch.cuda.reset_peak_memory_stats()\n", "torch.cuda.empty_cache()\n", "_ = model(X)\n", "memory_stats = torch.cuda.memory_stats(device=device)\n", "print(\"Peak memory usage:\", memory_stats[\"allocated_bytes.all.peak\"] / (1024 ** 2), \"MB\")\n", "print(\"Current memory usage:\", memory_stats[\"allocated_bytes.all.current\"] / (1024 ** 2), \"MB\")" ] }, { "cell_type": "markdown", "id": "b8e30da1-d8b8-454d-beec-3837afbe7a38", "metadata": {}, "source": [ "# 5. What are possible problems with reducing the $384\\times5\\times5$ representation to a $10\\times5\\times5$ representation in one step?" ] }, { "cell_type": "markdown", "id": "267e19f9-9794-413c-9841-ca44d23cf3af", "metadata": {}, "source": [ "Reducing the 384×5×5 representation to a 10×5×5 representation in one step in Network in Network (NiN) architecture can lead to several potential problems:\n", "\n", "1. 
**Loss of information**: compressing 384 channels into 10 with a single convolution discards a large fraction of the features learned by the earlier layers at once; features that helped tell classes apart may simply have no channel left to represent them.\n",
"\n",
"2. **Information bottleneck**: everything downstream of this step, including the block's two 10-channel $1\\times1$ convolutions and the global average pooling, can only work with those 10 channels, so whatever the reduction throws away cannot be recovered later.\n",
"\n",
"3. **Underfitting and reduced expressiveness**: with so few channels, the final block has little capacity left to model interactions between high-level features, which can leave the classifier too weak for a complex task and hurt accuracy.\n",
"\n",
"4. **Weak use of spatial information**: the 5×5 spatial grid is kept, but with only 10 channels each spatial position carries very little information, so fine-grained spatial patterns contribute little to the final class scores.\n",
"\n",
"5. **Harder optimization**: the 10 output channels must line up exactly with the 10 classes; such an abrupt projection gives the network little room to form useful intermediate representations, which can slow down or destabilize training.\n",
"\n",
"Note that NiN itself accepts this trade-off: its last block really does map 384 channels to the number of classes in one step, relying on the earlier blocks and the follow-up $1\\times1$ convolutions to make the projection manageable. If it becomes a problem, the reduction can be made more gradual, for example by inserting an extra block with an intermediate number of channels, or the network can keep a wider final block and map to the 10 classes with a small fully connected layer after pooling (as in exercise 3), at the cost of extra parameters." ] }, { "cell_type": "markdown", "id": "97824f2b-28aa-4a42-9f16-00d294825b49", "metadata": {}, "source": [ "# 6. Use the structural design decisions in VGG that led to VGG-11, VGG-16, and VGG-19 to design a family of NiN-like networks."
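] }, { "cell_type": "markdown", "id": "vgg-style-nin-family-note", "metadata": {}, "source": [
"VGG-11, VGG-16 and VGG-19 share the same five-stage layout and differ only in how many $3\\times3$ convolutions each stage contains. The analogous knob in NiN is the number of $1\\times1$ convolutions per block (and, for the deeper variant further below, the number of blocks). The next cell is a small sketch of such a family; it assumes the `Nin` class defined above (with `conv1s` given as an integer), and the helper name `nin_family` and the chosen depths are illustrative rather than taken from the book." ] }, { "cell_type": "code", "execution_count": null, "id": "vgg-style-nin-family-sketch", "metadata": {}, "outputs": [], "source": [
"# Sketch: a VGG-style family of NiN networks. Keep the stage layout fixed and\n",
"# scale depth via the number of 1x1 convolutions per block (2, 3, or 4),\n",
"# analogous to how VGG-11/16/19 vary the number of 3x3 convolutions per stage.\n",
"def nin_family(num_1x1):\n",
"    return ((96, 11, 4, 0, num_1x1),\n",
"            (256, 5, 1, 2, num_1x1),\n",
"            (384, 3, 1, 1, num_1x1),\n",
"            (10, 3, 1, 1, num_1x1))\n",
"\n",
"nin_a = Nin(nin_family(2))  # shallowest, analogous to VGG-11\n",
"nin_b = Nin(nin_family(3))  # analogous to VGG-16\n",
"nin_c = Nin(nin_family(4))  # deepest, analogous to VGG-19"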
] }, { "cell_type": "code", "execution_count": 21, "id": "deeb49e3-9554-439b-bd67-ced533eae8e1", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "Nin(\n", " (net): Sequential(\n", " (0): Sequential(\n", " (0): LazyConv2d(0, 96, kernel_size=(11, 11), stride=(4, 4))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 96, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 96, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (2): Sequential(\n", " (0): LazyConv2d(0, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (4): Sequential(\n", " (0): LazyConv2d(0, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 384, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 384, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (6): Dropout(p=0.5, inplace=False)\n", " (7): Sequential(\n", " (0): LazyConv2d(0, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 10, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 10, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (8): AdaptiveAvgPool2d(output_size=(1, 1))\n", " (9): Flatten(start_dim=1, end_dim=-1)\n", " )\n", ")" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "arch = ((96,11,4,0,2),(256,5,1,2,2),(384,3,1,1,2),(10,3,1,1,2))\n", "nin = Nin(arch)\n", "nin" ] }, { "cell_type": "code", "execution_count": 34, "id": "644a0e7b-8b78-48bf-b16c-7ebf0f59fd7b", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "Nin(\n", " (net): Sequential(\n", " (0): Sequential(\n", " (0): LazyConv2d(0, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 64, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 64, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (2): Sequential(\n", " (0): LazyConv2d(0, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (4): Sequential(\n", " (0): LazyConv2d(0, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 256, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (6): Sequential(\n", " (0): LazyConv2d(0, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 384, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 384, kernel_size=(1, 1), 
stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (7): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n", " (8): Dropout(p=0.5, inplace=False)\n", " (9): Sequential(\n", " (0): LazyConv2d(0, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n", " (1): ReLU()\n", " (2): LazyConv2d(0, 10, kernel_size=(1, 1), stride=(1, 1))\n", " (3): ReLU()\n", " (4): LazyConv2d(0, 10, kernel_size=(1, 1), stride=(1, 1))\n", " (5): ReLU()\n", " )\n", " (10): AdaptiveAvgPool2d(output_size=(1, 1))\n", " (11): Flatten(start_dim=1, end_dim=-1)\n", " )\n", ")" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "arch15 = ((64,3,2,1),\n", " (256,3,1,1),\n", " (256,3,1,1),\n", " (384,3,1,1),\n", " (10,3,1,1))\n", "nin15 = Nin(arch15)\n", "nin15" ] } ], "metadata": { "kernelspec": { "display_name": "Python [conda env:d2l]", "language": "python", "name": "conda-env-d2l-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }