{ "cells": [ { "metadata": {}, "cell_type": "markdown", "source": [ "# CS6493 - Tutorial 1\n", "## Introduction to Google Colab and PyTorch\n", "\n", "Welcome to the CS6493 tutorial. In this session, you will become familiar with our experimental environment and practice some basic PyTorch operations.\n", "\n", "## 1. Google Colab\n", "\n", "You can use Google Colab to run the toy models. Here are some important notes for using Colab:\n", "\n", "- You are expected to be familiar with Python and Jupyter.\n", "- We will use **Google Colab** for the following experiments. Please run the experiments in [Google Colab](https://colab.research.google.com/). \n", " (If you do not have a Google Account, please register for one.)\n", "- Please go to **Edit -> Notebook Settings** and select Python 3 and GPU as the hardware accelerator.\n", "- Before running a specific model, check the resources you need and compare them with the available resources by using **!nvidia-smi**.\n", "- We will be happy to assist you throughout the tutorial, so please feel free to ask any questions you may have.\n", "\n", "\n", "## 2. PyTorch\n", "\n", "We use [PyTorch](https://pytorch.org/) framework to finish the implementations. In this section, we will introduce the installation, the basic operations of PyTorch.\n", "\n", "### 2.1 Installation\n", "Since the Colab has installed the PyTorch by default, you can check the version of PyTorch and whether it supports to GPUs by the following command. " ] }, { "metadata": {}, "cell_type": "code", "outputs": [], "execution_count": null, "source": [ "# check the GPU resource in the Colab\n", "!nvidia-smi" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "print(\"PyTorch version: \", torch.__version__)" ] }, { "metadata": {}, "cell_type": "markdown", "source": [ "Additionally, if a specific version of PyTorch is required by some of the repositories, you can visit the official PyTorch website to find the appropriate version. It is recommended to use a full command with the exact version details, as shown below:\n", "```\n", "# CUDA 12.1\n", "pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121\n", "```\n", "\n" ] }, { "metadata": {}, "cell_type": "code", "outputs": [], "execution_count": null, "source": [ "# you can try this if you want to install a specific version of PyTorch\n", "!pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121" ] }, { "metadata": {}, "cell_type": "markdown", "source": "You can use the following code to check more details about the information of GPUs." }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "print(\"PyTorch version: \", torch.__version__)\n", "print(\"GPU support: \", torch.cuda.is_available())\n", "print(\"Available devices count: \", torch.cuda.device_count())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2 Quick start - Tensor in PyTorch\n", "\n", "In this section, we introcue some basic concepts and operations of Tensor." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "metadata": {}, "cell_type": "markdown", "source": [ "Tensors are a specialized data structure that are very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.\n", "\n", "Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other hardware accelerators. \n", "\n", "One simple way to understand and utilize tensors is to know what each dimension represents.\n", "\n", "### Create Tensors\n", "\n", "Tensors can be created directly from data or NumPy arrays. You can assign the data type to the tensor. Otherwise, the data type would be automatically inferred." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = [[0,1], [2,3]]\n", "tensor_data = torch.tensor(data)\n", "tensor_data_float = torch.tensor(data).float()\n", "print(f\"Long Tensor: \\n {tensor_data} \\n\") # the data type is LongTensor\n", "print(f\"Float Tensor: \\n {tensor_data_float} \\n\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np_data = np.array(data)\n", "tensor_np_data = torch.tensor(np_data)\n", "tensor_np_data_float = torch.tensor(np_data).float()\n", "print(f\"Long Tensor: \\n {tensor_np_data} \\n\") # the data type is LongTensor\n", "print(f\"Float Tensor: \\n {tensor_np_data_float} \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also create the tensors filled with constant (e.g., 0 and 1) or random values," ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "zeros_tensor = torch.zeros((2,3))\n", "ones_tensor = torch.ones((2,3))\n", "random_tensor = torch.rand((2,3))\n", "print(f\"Zeros Tensor: \\n {zeros_tensor} \\n\")\n", "print(f\"Ones Tensor: \\n {ones_tensor} \\n\")\n", "print(f\"Random Tensor: \\n {random_tensor} \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attributes of a Tensor\n", "\n", "Tensor attributes describe their shape, datatype, and the device on which they are stored." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tensor = torch.rand(2,3)\n", "\n", "print(f\"Shape of tensor: {tensor.shape}\")\n", "print(f\"Datatype of tensor: {tensor.dtype}\")\n", "print(f\"Device tensor is stored on: {tensor.device}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Operations on Tensors\n", "\n", "There are over 100 tensor operations, including arthmetic, linear algebra, matrix manipulation and more. In this section, we only introduce some frequently used operations in our later tutorials and projects.\n", "\n", "**Move Tensor to Device**\n", "\n", "By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using `.to()` method (after checking for GPU availability). Keep in mind that copying large tensors across devices can be expensive in terms of time and memory!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# move tensor to GPU if available\n", "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n", "tensor = tensor.to(device)\n", "print(f\"Device tensor is stored on: {tensor.device}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Tensor indexing, slicing and reshape**" ] }, { "cell_type": "code", "metadata": { "ExecuteTime": { "end_time": "2025-01-06T14:53:42.140075Z", "start_time": "2025-01-06T14:53:29.975453Z" } }, "source": [ "import torch\n", "tensor = torch.rand(4, 6)\n", "tensor" ], "outputs": [ { "data": { "text/plain": [ "tensor([[0.7335, 0.7830, 0.2709, 0.9149, 0.0203, 0.6076],\n", " [0.1010, 0.8982, 0.8029, 0.2490, 0.0831, 0.7048],\n", " [0.6930, 0.3721, 0.4666, 0.3729, 0.9197, 0.5189],\n", " [0.7800, 0.2683, 0.0027, 0.6551, 0.1588, 0.7311]])" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "execution_count": 2 }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# let take a look at its first row and column\n", "print(f\"First row: {tensor[0]}\")\n", "print(f\"First column: {tensor[:,0]}\")\n", "print(f\"Last column: {tensor[:, -1]}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# reshape\n", "print(f\"Reshape to (2,12): \\n {tensor.view(2, 12)} \\n\")\n", "print(f\"Reshape to (2,2,6): \\n {tensor.view(-1, 2, 6)} \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Joining tensors.** You can use torch.cat to concatenate a sequence of tensors along a given dimension." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t1 = torch.zeros(4, 2)\n", "new_t = torch.cat([tensor, t1, t1], dim=1)\n", "new_t" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Arithmetic operations**\n", "\n", "The basic arithmetic operations of Pytorch are similar with those in Numpy, such as `.pow()`, `.div()`, `.sum()` and more. Here we talk more about multiplication in Pytorch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This computes the matrix multiplication between two tensors. y1, y2 will have the same value\n", "print(f\"Shape of original tensor: {tensor.shape}\")\n", "y1 = tensor @ tensor.T\n", "y2 = tensor.matmul(tensor.T)\n", "\n", "print(f\"Shape of matrix multiplication resulting tensor: {y1.shape}\")\n", "\n", "# This computes the element-wise product. z1, z2, z3 will have the same value\n", "z1 = tensor * tensor\n", "z2 = tensor.mul(tensor)\n", "\n", "print(f\"Shape of element-wise product resulting tensor: {z1.shape}\")" ] }, { "metadata": {}, "cell_type": "markdown", "source": [ "## 2.3 Practice\n", "\n", "In NLP, we have a very popular and famous techique, termed **Attention** which is used to measure the improtance among each components. Formally, we define the attention mechanism as:\n", "\n", "$Attention(\\mathbf{Q},\\mathbf{K},\\mathbf{V}) = \\text{Softmax}(\\frac{\\mathbf{Q}\\mathbf{K}^T}{\\sqrt{d_k}})\\mathbf{V}$\n", "\n", "$\\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}$,\n", "\n", "you can attempt to implement softmax function and attention by yourself.\n", "\n", "Hint: you can decompose the equation into some basic components and check how to achieve these basic components. Google your questions and you can find the answers on stackoverflow or official documentations of Numpy or Pytorch." ] }, { "metadata": {}, "cell_type": "markdown", "source": "" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# please notice that we have a batch size of 2, \n", "# and the dimension of Q, K, V is 4x8 \n", "# you can regard them as 4 queries, 4 keys and 4 values with the representation of 8 dimensional vectors\n", "v= torch.rand((2,4,8))\n", "k = v\n", "q = torch.rand((2,4,8))\n", "d_k = 8" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# insert your code\n", "def attention(q, k, v):\n", " pass\n", "\n", "def softmax(x):\n", " pass\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 4 }