{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import numpy as np\n", "#import mplcyberpunk\n", "import torch\n", "from torch import nn\n", "from torch.nn import functional as F\n", "from torch.utils.data import Dataset, DataLoader\n", "from matplotlib import pyplot as plt\n", "from matplotlib import rcParams\n", "from matplotlib.colors import LinearSegmentedColormap\n", "from sklearn.datasets import make_moons\n", "\n", "plt.style.use(\"fivethirtyeight\")\n", "\n", "#rcParams[\"font.sans-serif\"] = \"Roboto\"\n", "rcParams[\"xtick.labelsize\"] = 14.\n", "rcParams[\"ytick.labelsize\"] = 14.\n", "rcParams[\"axes.labelsize\"] = 14.\n", "rcParams[\"legend.fontsize\"] = 14\n", "rcParams[\"axes.titlesize\"] = 16.\n", "\n", "np.random.seed(42)\n", "\n", "_ = torch.manual_seed(42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to PyTorch\n", "\n", "In this notebook, we're going to be taking the core ideas from the first notebook, and abstracting all of the computation away using the relatively high-level interface by PyTorch. \n", "\n", "I highly recommend using the PyTorch documentation as a reference, in particular [the page on the `torch.nn` module](https://pytorch.org/docs/stable/nn.html) is where I frequently go back to, as it provides instructions (and equations) on relevant implementations in PyTorch. When in doubt, you can also go one layer deeper into the problem, and start looking into the source code.\n", "\n", "## A Guide to the PyTorch Interface\n", "\n", "Before we dive into any coding, we should assess how PyTorch as an API is laid out, so you will know where to look for things as you need to. There's also a level of underlying design choices to consider, so that when it comes to routine model building we can take advantage of PyTorch _and_ Python as languages as much as possible. As eluded to in the first notebook, PyTorch is very much an object-oriented and [Pythonic](https://www.python.org/dev/peps/pep-0020/) library, and so there's a strong emphasis on not needing to repeat code by taking advantage of things like inheritance. For the most part, building models in PyTorch is essentially building classes in Python—all of the standard neural network layers are built off the same class, and so will your own models/layers. This substantially reduces the need for a lot of different boiler plate codes, for example when you come to implement training loops.\n", "\n", "For most purposes, the `torch.nn` module provides access to high-level, object-oriented implementations of standard neural network building blocks, including your standard fully-connected layer (`nn.Linear`), convolution layers (e.g. `nn.Conv1d`), and recurrent layers (e.g. `nn.LSTMCell`). All of these objects inherit from the `nn.Module` base class, and abstracts away a lot of concepts like nesting multiple layers, saving and loading models, moving your code onto GPUs, and hopping between training and evaluation phases. The `torch.nn` module also provides access to other neural network tools, such as standard activation functions and losses (see notebook 2)—these also inherit from `nn.Module`. 
\n", "\n", "While `torch.nn` modules give you access to objects that represent the building blocks, when it comes to computation they actually just call routines in their functional `torch.nn.functional` interface; for example, `nn.ReLU()` creates an object that, at runtime, calls `torch.nn.functional.relu()` on your inputs. In principle, you could build entire models using only the functional interface, but because you're giving up the benefits of class inheritance, you will have to write a lot of boiler plate code. An intermediate solution is to write a mixture of `torch.nn` and `torch.nn.functional` calls, but if you want the code to be simple and unified you should keep this to a minimum and use the recommended object-oriented interface. To go down yet another layer, `torch.nn.functional` ends up using low-level C/C++/CUDA routines, including those from Caffe2. The other submodule of `torch` you will frequently access is `torch.optim`, which provides access to various optimization algorithms. Each implemented optimizer inherits from `torch.optim.BaseOptimizer`, which has a very similar implementation to `torch.nn.Module`.\n", "\n", "Finally, at the top level of `torch` are standard math functions and random number generation. So for the most part, this is a drop and replace of NumPy routines, like `torch.exp` and `torch.rand`. One key difference between the two interfaces is the keyword argument `axis` in NumPy is actually `dim` in PyTorch. So when performing operations on nominally the type of array, you will have to swap the kwargs.\n", "\n", "---\n", "\n", "## `torch.Tensor` — the Heart of PyTorch\n", "\n", "The `torch.Tensor` is at the core of everything to do with PyTorch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = torch.Tensor([0.])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensors more or less behave like `numpy.ndarray`, doing all the fancy broadcasting and vectorization stuff. There's two big differences with `torch.Tensor`, however, that make it the true centerpiece of PyTorch:\n", "\n", "1. You can move it onto your GPU" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if torch.cuda.is_available():\n", " torch.cuda.empty_cache()\n", " X.to(\"cuda\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "which it will then subsequently do all its computation on your GPU. This is an easy way to do extremely scalable calculations with minimal change in the way that you code (if you can't write CUDA, for example). The other big difference is:\n", "\n", "2. Automatic differentiation\n", "\n", "This is definitely more of a general PyTorch thing, but the support for autograd in PyTorch is amazing. The idea is you can perform any combination of computations you want, and you can simply backpropagate through all your calculations to compute the gradient with respect to your tensor. For example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create a PyTorch Variable from a tensor, and ask it to track gradients\n", "X = torch.autograd.Variable(torch.Tensor([10.]), requires_grad=True)\n", "\n", "# Do some computation\n", "Y = 20. 
+ X\n", "# Do some probabilistic computation\n", "Y *= torch.rand(1)\n", "Y = torch.sin(Y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# if you inspect X, it shows that you've specified gradients\n", "print(f\"Tensor X: {X}\")\n", "print(f\"Tensor Y: {Y}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute gradients\n", "Y.backward()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Calculate the dY/dX\n", "X.grad" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The important thing about this easy access to auto differentiation is for all of our neural network building: because we work with tensors, all of which support this feature, we can perform backpropagation for training without even needing to think about the problem as it does a bunch of chain rule computations in the background. In general, this is also a nifty feature because it means if you ever need to compute derivatives of complicated functions, you know you can do it numerically with PyTorch quite easily.\n", "\n", "Instead of calling `Y.backward()`, you can also access the automatic differentiation interface with the `torch.autograd` module. This gives you some freedom to perform the computation, say if you wanted to take partial derivatives with respect to specific variables." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "A = torch.autograd.Variable(torch.Tensor([3.]), requires_grad=True)\n", "B = torch.autograd.Variable(torch.Tensor([23.]), requires_grad=True)\n", "\n", "C = (A * B) / (A + B)\n", "C = torch.sigmoid(C)\n", "\n", "print(f\"Tensor C: {C}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Take derivative of C with respect to B\n", "torch.autograd.backward([C], [B])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# dC/dB\n", "B.grad" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to do the same computation for `A`, you'll have to ask it to `retain_graphs=True`: by default once autograd is done, the computational graph used for the chain of calculations is freed up.\n", "\n", "## Building a PyTorch model\n", "\n", "We now have all the ingredients necessary to start doing deep learning with PyTorch. As mentioned previously, everything is built using pieces that work with `torch.Tensor` objects and `torch.nn.Module` objects. But in addition to these, PyTorch also provides convenience classes for working with datasets, including such features as minibatching and random test/validation splitting.\n", "\n", "To start, we will create a [`torch.utils.data.Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) class that provides an interface for accessing data from our moon samples. Subclasses of the `Dataset` class must implement a `__len__()` function that returns the total number of data points in the dataset and a `__getitem__(idx)` function that returns one data point (in the form of a tensor) and its target value (label), also in the form of a tensor." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MoonData(Dataset):\n", "    def __init__(self, n_samples: int, noise: float):\n", "        self.n_samples = n_samples\n", "        self.noise = noise\n", "        \n", "        self.data, self.labels = make_moons(n_samples=self.n_samples, noise=self.noise)\n", "    \n", "    def __len__(self):\n", "        return self.n_samples\n", "    \n", "    def __getitem__(self, idx):\n", "        # since self.labels[idx] returns a single number, enclose it in a list when creating the tensor\n", "        return torch.Tensor(self.data[idx]), torch.Tensor([self.labels[idx]])\n", "\n", "train_data = MoonData(1000, 0.1)\n", "test_data = MoonData(200, 0.1)\n", "\n", "fig, ax = plt.subplots()\n", "ax.scatter(train_data.data[:,0], train_data.data[:,1], c=train_data.labels, cmap=\"Spectral\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we've created two `MoonData` datasets: one that will be used for training the model, and another that will be used for testing the performance of the model. This is a standard and important practice in machine learning. The idea is to ensure that the model isn't \"overfitting\" the training data and memorizing exactly how to reproduce the examples it's seen. We want the model to also be able to provide meaningful predictions for data that it hasn't previously seen. If the model performs well (has a low loss) for the training data, but performs poorly for the test/validation data, then it is likely that the model is either overtrained or requires additional [regularization](https://en.wikipedia.org/wiki/Regularization_(mathematics)).\n", "\n", "PyTorch also provides a [`torch.utils.data.DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) class that is designed to work with `Dataset` objects. The `DataLoader` class supports minibatching and randomizing the order of samples drawn from the set. In particular, minibatching is a widely used method for training models: instead of computing the loss for the entire dataset and updating the model, a \"mini-batch\" of samples is drawn from the dataset and the model is updated based only on those samples. The process is repeated until all/most samples have been used, and this comprises one training \"epoch.\" This algorithm is beneficial both from a computational standpoint (it allows for speedups in the underlying calculations by using parallel algorithms without requiring excessive memory use) and from a practical standpoint: the model is less likely to get stuck in a local minimum of the loss function. Because of the computational efficiency, mini-batching is often used for computing validation losses as well.\n", "\n", "Here we create one `DataLoader` object for our training data, and another for our test data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_dataloader = DataLoader(train_data, batch_size=20, shuffle=True)\n", "test_dataloader = DataLoader(test_data, batch_size=20, shuffle=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the data in place, we can create a classifier model for the Moon Data. As mentioned before, PyTorch is written such that its models/layers are reusable: each inherits from `torch.nn.Module`, which makes it interoperable with the rest of the library. At its simplest, a `torch.nn.Module` object needs to implement a `forward` function that takes an input tensor `X` and performs a forward pass of the calculation. 
This may involve one or more layers of calculations, activations, transformations, or other operations. Any calculations should be performed using `torch` functions so that gradients can later be passed through the network by backpropagation.\n", "\n", "There are many, many ways to build models with PyTorch. Here we will show one strategy that makes use of [`torch.nn.Sequential`](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html), which is a convenience class that stores a series of `torch.nn.Module` objects that will be called in sequence. We will simply use a sequence of linear layers followed by activation functions. Since our input `MoonData` has 2 features, the first linear layer must have an input size of 2. The [`torch.nn.Linear`](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) class is an implementation of a fully-connected linear layer like we've used before: it takes the number of input features and the number of output features as arguments to its constructor (along with an optional `bias` parameter that controls whether or not to include bias parameters; the default is `True`). Internally, the object will initialize all of the parameters needed for the linear layer calculation. We don't have to keep track of those ourselves! Of course, PyTorch does provide mechanisms for allowing you to access/initialize the parameters manually if you need to do so.\n", "\n", "After each linear layer, we will include an activation layer. Many standard types are [available](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity) as `torch.nn.Module` objects for convenience, and they are also available through PyTorch's [functional interface](https://pytorch.org/docs/stable/nn.functional.html#non-linear-activation-functions) if you wish to use them as functions directly instead of as modules. Just to illustrate, we'll use ReLU and Tanh along the way.\n", "\n", "Our goal with the `MoonClassifier` is to determine which moon distribution a given datapoint comes from. There are 2 categories, which are represented as 0 and 1. Therefore, for every input, we want to produce 1 number that ideally is 0 or 1. As we saw previously, the sigmoid activation function is useful for this purpose. Our final layer will have an output size of 1, and we will run it through a sigmoid layer to compress that number into the range \[0,1\]." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MoonClassifier(nn.Module):\n", "    def __init__(self):\n", "        super(MoonClassifier, self).__init__()\n", "        self.layers = nn.Sequential(\n", "            nn.Linear(2,4),\n", "            nn.ReLU(),\n", "            nn.Linear(4,20),\n", "            nn.ReLU(),\n", "            nn.Linear(20,5),\n", "            nn.Tanh(),\n", "            nn.Linear(5,1),\n", "            nn.Sigmoid()\n", "        )\n", "    \n", "    def forward(self, X):\n", "        X = self.layers(X)\n", "        return X" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the code above, we initialize the `torch.nn.Module` superclass and create a `torch.nn.Sequential` module that contains the calculations we wish to perform. Inside the `forward` call, we simply pass the input `X` to this `self.layers` object, which in turn will perform a forward calculation through the sequence of layers we defined before. As we saw in the first notebook, PyTorch modules all come with a predefined `__call__` function which invokes the `forward` function when the module itself is called like a function. 
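\n", "\n", "As a quick check (purely illustrative; the `demo` instance below is throwaway), calling the module like a function gives the same result as calling `forward` directly, and we can peek at the parameters that the `Linear` layers registered for us:\n", "\n", "```python\n", "demo = MoonClassifier()\n", "\n", "# __call__ runs forward() (plus some extra bookkeeping hooks)\n", "x = torch.rand(4, 2)   # a fake batch of 4 points with 2 features\n", "print(torch.allclose(demo(x), demo.forward(x)))\n", "\n", "# every Linear layer registered its own weights and biases\n", "for name, p in demo.named_parameters():\n", "    print(name, tuple(p.shape))\n", "```\n", "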
\n", "\n", "With our completely untrained model, we can pass data through it and see what it outputs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = MoonClassifier()\n", "X,y = train_data[0]\n", "model(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When using the `Dataset` classes, PyTorch will automatically concatenate samples along the appropriate axis so that the model can evaluate many samples in a single pass using broadcasting. Here we select 10 samples and show that we can run them though the model the same way." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X,y = train_data[0:10]\n", "print(model(X),y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we need to train the model, which requires the use of a loss function (e.g., mean squared error) and an optimization strategy (e.g., gradient descent). PyTorch provides a variety of optimizers in the [`torch.optim`](https://pytorch.org/docs/stable/optim.html) module. The key idea is that the optimizer operates on the model's parameters, each of which has a gradient that PyTorch can calculate. The workflow is:\n", "1. Calculate the output from the model\n", "2. Use the model output and the target values to compute the loss (which is a `torch.Tensor`)\n", "3. Zero out the gradients of the model parameters.\n", "4. Compute the gradients of the loss tensor by back-propagation. Since they depend on the model parameters, the model parameter tensors' gradients will be updated.\n", "5. Use the optimizer to update the model parameters by performing some kind of calculation with the parameters and their gradients. In PyTorch, this is done by the `optimizer.step` function.\n", "\n", "In the code below, we define a training loop that will perform a training cycle over a single epoch: it will loop over all minibatches in the dataset, and for each mini-batch, it will follow the workflow above. In addition, it keeps track of the total loss for the entire dataset, which in the end is normalized by dividing by the number of samples." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train_loop(dataloader, model, loss_fn, optimizer):\n", " size = len(dataloader.dataset)\n", " loss_total = 0\n", " for batch, (X, y) in enumerate(dataloader):\n", " # Compute prediction and loss\n", " pred = model(X)\n", " loss = loss_fn(pred, y)\n", "\n", " # Backpropagation\n", " optimizer.zero_grad()\n", " loss.backward()\n", " optimizer.step()\n", "\n", " # Print some output every 5 batches\n", " if batch % 5 == 0:\n", " loss, current = loss.item(), batch * len(X)\n", " print(f\"loss: {loss:>7f} [{current:>5d}/{size:>5d}]\")\n", " loss_total += loss\n", " \n", " print(f\"Training epoch complete! Loss = {loss_total/size:>7f}\")\n", " return loss_total/size\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our code also needs to compute the validation loss, which is done in the `test_loop` function below. Here it keeps track of the total loss and the number of correct predictions from the model, which we'll use to calculate the model's accuracy. Here we loop over the data in the minibatch as when we trained, but this time we don't do anything with an optimizer: we just calculate the loss and the number of correct guesses. 
In the code below, we consider a guess to be correct if it rounds to the true label (equivalently, if the absolute value of the difference between the prediction and the label is less than 0.5)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_loop(dataloader, model, loss_fn):\n", "    size = len(dataloader.dataset)\n", "    test_loss, correct = 0, 0\n", "\n", "    with torch.no_grad():\n", "        for X, y in dataloader:\n", "            pred = model(X)\n", "            # loss_fn returns the mean loss over the minibatch; item() extracts the Python number\n", "            test_loss += loss_fn(pred, y).item() * len(X)\n", "\n", "            # since X is a minibatch, pred and y are tensors with several elements.\n", "            # Add up to get the number of correct predictions in this minibatch\n", "            correct += (torch.abs(pred-y) < 0.5).type(torch.float).sum().item()\n", "\n", "    # normalize the loss by the number of samples, and convert correct to a fraction\n", "    test_loss /= size\n", "    correct /= size\n", "    print(f\"Test Error: \\n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \\n\")\n", "    return correct" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we'll wrap these two functions into a single function that performs a training run and a test run for a given number of epochs, then plots the training loss and accuracy as a function of epoch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train_and_test(model,epochs,train_loader,test_loader,loss_fn,optimizer):\n", "    epoch = []\n", "    loss = []\n", "    acc = []\n", "    for t in range(epochs):\n", "        print(f\"Epoch {t+1}\\n-------------------------------\")\n", "        l = train_loop(train_loader, model, loss_fn, optimizer)\n", "        a = test_loop(test_loader, model, loss_fn)\n", "        epoch.append(t+1)\n", "        loss.append(l)\n", "        acc.append(a)\n", "\n", "    print(\"Done!\")\n", "\n", "    fig,axes = plt.subplots(1,2,figsize=(10,4))\n", "    axes[0].plot(epoch,loss,label=\"Training loss\")\n", "    axes[0].set_ylabel(\"Training loss\")\n", "    axes[0].set_xlabel(\"Epoch\")\n", "    axes[1].plot(epoch,acc,label=\"Testing accuracy\")\n", "    axes[1].set_ylabel(\"Testing accuracy\")\n", "    axes[1].set_xlabel(\"Epoch\")\n", "    fig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now all we need to do is define our loss function and optimizer. We'll use PyTorch's standard implementation of Mean Squared Error loss and its stochastic gradient descent optimizer. There are many [optimizers available](https://pytorch.org/docs/stable/optim.html#algorithms), but they all take an iterable of model parameters as their first argument. As long as our model is a `torch.nn.Module`, we can access its parameters easily with the `parameters()` function. Each algorithm may also take other optional arguments, which are hyperparameters of the optimization itself. Here we set the learning rate of the gradient descent algorithm to 0.3.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "loss_fn = nn.MSELoss() \n", "optimizer = torch.optim.SGD(model.parameters(), lr = 0.3) #lr is learning rate\n", "train_and_test(model,10,train_dataloader,test_dataloader,loss_fn,optimizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can visualize the results of the model. We'll run the entire test set through the model (`test_data[:][0]` returns the `X` tensor for the entire test data set), then visualize the results, color coding by the model's prediction. Note the use of `with torch.no_grad()` in the next cell: this context manager disables gradient tracking for any tensor operations performed inside the block. Any time you are not training the model, tracking gradients is unnecessary, and it only adds memory and compute overhead. 
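\n", "\n", "As a small standalone illustration (separate from the notebook's workflow), anything computed inside the block comes out with gradient tracking switched off:\n", "\n", "```python\n", "x = torch.rand(3, requires_grad=True)\n", "\n", "with torch.no_grad():\n", "    y = x * 2\n", "print(y.requires_grad)   # False: no graph is built inside the block\n", "\n", "z = x * 2\n", "print(z.requires_grad)   # True: outside the block, the operation is tracked\n", "```\n", "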
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with torch.no_grad():\n", "    pred = model(test_data[:][0])\n", "\n", "fig,axes = plt.subplots(1,3,figsize=(15,4))\n", "\n", "axes[0].scatter(test_data.data[:,0],test_data.data[:,1],c=test_data.labels,cmap=\"Spectral\")\n", "axes[0].set_title(\"Truth\")\n", "axes[1].scatter(test_data.data[:,0],test_data.data[:,1],c=pred,cmap=\"Spectral\")\n", "axes[1].set_title(\"Predictions\")\n", "axes[2].scatter(test_data.data[:,0],test_data.data[:,1],c=pred.round(),cmap=\"Spectral\")\n", "axes[2].set_title(\"Rounded\")\n", "fig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With these pieces in place, it is pretty easy to modify the model, assess its performance, and try new architectures. The code below shows a new version of the `MoonClassifier` with the Tanh activation layer replaced by ReLU. You can make other modifications as well. It is possible to define your own layers and combine them into a model, add normalization, dropout, pooling, or other features, and so on. The PyTorch interface is very flexible and allows you to mix and match pieces in nearly any way you want." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MoonClassifierV2(nn.Module):\n", "    def __init__(self):\n", "        super(MoonClassifierV2, self).__init__()\n", "        self.layers = nn.Sequential(\n", "            nn.Linear(2,4),\n", "            nn.ReLU(),\n", "            nn.Linear(4,20),\n", "            nn.ReLU(),\n", "            nn.Linear(20,5),\n", "            nn.ReLU(),\n", "            nn.Linear(5,1),\n", "            nn.Sigmoid()\n", "        )\n", "    \n", "    def forward(self, X):\n", "        X = self.layers(X)\n", "        return X" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the optimizer and loss function are defined independently of the model, it is very easy to change either of these if you wish. Here they are kept the same as before, and we can see that this code snippet can be used to quickly train and test a new model with our desired optimization strategy." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = MoonClassifierV2()\n", "loss_fn = nn.MSELoss() \n", "optimizer = torch.optim.SGD(model.parameters(), lr = 0.3)\n", "train_and_test(model,10,train_dataloader,test_dataloader,loss_fn,optimizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you didn't change the random seed and are running this notebook straight through, you may notice that the testing accuracy still seems to be climbing. 
We can run the training for another 5 epochs to see if the improvement continues:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "train_and_test(model,5,train_dataloader,test_dataloader,loss_fn,optimizer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with torch.no_grad():\n", " pred = model(torch.Tensor(test_data.data))\n", " \n", "fig,axes = plt.subplots(1,3,figsize=(15,4))\n", "\n", "axes[0].scatter(test_data.data[:,0],test_data.data[:,1],c=test_data.labels,cmap=\"Spectral\")\n", "axes[0].set_title(\"Truth\")\n", "axes[1].scatter(test_data.data[:,0],test_data.data[:,1],c=pred,cmap=\"Spectral\")\n", "axes[1].set_title(\"Predictions\")\n", "axes[2].scatter(test_data.data[:,0],test_data.data[:,1],c=pred.round(),cmap=\"Spectral\")\n", "axes[2].set_title(\"Rounded\")\n", "fig.tight_layout()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 4 }