{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/duoan/TorchCode/blob/master/templates/40_linear_regression.ipynb)\n", "\n", "# 🟡 Medium: Linear Regression\n", "\n", "Implement **linear regression** using three different approaches — all in pure PyTorch.\n", "\n", "Given data `X` of shape `(N, D)` and targets `y` of shape `(N,)`, find a weight vector `w` of shape `(D,)` and a scalar bias `b` such that:\n", "\n", "$$\\hat{y} = Xw + b$$\n", "\n", "### Signature\n", "```python\n", "class LinearRegression:\n", "    def closed_form(self, X: Tensor, y: Tensor) -> tuple[Tensor, Tensor]: ...\n", "    def gradient_descent(self, X: Tensor, y: Tensor, lr=0.01, steps=1000) -> tuple[Tensor, Tensor]: ...\n", "    def nn_linear(self, X: Tensor, y: Tensor, lr=0.01, steps=1000) -> tuple[Tensor, Tensor]: ...\n", "```\n", "\n", "All methods return `(w, b)`, where `w` has shape `(D,)` and `b` has shape `()`.\n", "\n", "### Method 1 — Closed-Form (Normal Equation)\n", "Augment `X` with a column of ones, then solve:\n", "\n", "$$\\theta = (X_{aug}^T X_{aug})^{-1} X_{aug}^T y$$\n", "\n", "Or use `torch.linalg.lstsq` / `torch.linalg.solve`.\n", "\n", "### Method 2 — Gradient Descent from Scratch\n", "Initialize `w` and `b` to zeros. 
Repeat for `steps` iterations:\n", "```\n", "pred = X @ w + b\n", "error = pred - y\n", "grad_w = (2/N) * X.T @ error\n", "grad_b = (2/N) * error.sum()\n", "w -= lr * grad_w\n", "b -= lr * grad_b\n", "```\n", "\n", "### Method 3 — PyTorch nn.Linear\n", "Create `nn.Linear(D, 1)`, use `nn.MSELoss` and an optimizer (e.g., `torch.optim.SGD`).\n", "After training, extract `w` and `b` from the layer: its `weight` has shape `(1, D)` and its `bias` has shape `(1,)`, so reshape them to `(D,)` and `()`.\n", "\n", "### Rules\n", "- All inputs and outputs must be **PyTorch tensors**\n", "- Do **NOT** use numpy or sklearn\n", "- `closed_form` must not use iterative optimization\n", "- `gradient_descent` must manually compute gradients (no `autograd`)\n", "- `nn_linear` should use `torch.nn.Linear` and `loss.backward()`" ] }, { "cell_type": "code", "metadata": {}, "source": [ "# Install torch-judge in Colab (no-op in JupyterLab/Docker)\n", "try:\n", "    import google.colab\n", "    get_ipython().run_line_magic('pip', 'install -q torch-judge')\n", "except ImportError:\n", "    pass\n" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": {}, "outputs": [], "source": [ "import torch\n", "import torch.nn as nn" ], "execution_count": null }, { "cell_type": "code", "metadata": {}, "outputs": [], "source": [ "# ✍️ YOUR IMPLEMENTATION HERE\n", "\n", "class LinearRegression:\n", "    def closed_form(self, X: torch.Tensor, y: torch.Tensor):\n", "        \"\"\"Normal equation on the ones-augmented X: theta = (X^T X)^{-1} X^T y\"\"\"\n", "        pass  # Return (w, b)\n", "\n", "    def gradient_descent(self, X: torch.Tensor, y: torch.Tensor,\n", "                         lr: float = 0.01, steps: int = 1000):\n", "        \"\"\"Manual gradient descent loop (no autograd)\"\"\"\n", "        pass  # Return (w, b)\n", "\n", "    def nn_linear(self, X: torch.Tensor, y: torch.Tensor,\n", "                  lr: float = 0.01, steps: int = 1000):\n", "        \"\"\"Train nn.Linear with autograd\"\"\"\n", "        pass  # Return (w, b)" ], "execution_count": null }, { "cell_type": "code", "metadata": {}, "outputs": [], "source": [ "# 🧪 Debug\n", "torch.manual_seed(42)\n", "X = torch.randn(100, 3)\n",
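"\n",
"# Sketch of the ones-column augmentation described in Method 1 (one possible\n",
"# construction, shown here as a shape sanity check; your closed_form may\n",
"# build the augmented matrix differently):\n",
"X_aug = torch.cat([X, torch.ones(X.shape[0], 1)], dim=1)\n",
"assert X_aug.shape == (100, 4)  # (N, D+1)\n",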
"true_w = torch.tensor([2.0, -1.0, 0.5])\n", "y = X @ true_w + 3.0\n", "\n", "model = LinearRegression()\n", "\n", "w_cf, b_cf = model.closed_form(X, y)\n", "print(f\"Closed-form: w={w_cf}, b={b_cf.item():.4f}\")\n", "\n", "w_gd, b_gd = model.gradient_descent(X, y, lr=0.05, steps=2000)\n", "print(f\"Grad descent: w={w_gd}, b={b_gd.item():.4f}\")\n", "\n", "w_nn, b_nn = model.nn_linear(X, y, lr=0.05, steps=2000)\n", "print(f\"nn.Linear: w={w_nn}, b={b_nn.item():.4f}\")\n", "\n", "print(f\"\\nTrue: w={true_w}, b=3.0\")" ], "execution_count": null }, { "cell_type": "code", "metadata": {}, "outputs": [], "source": [ "# ✅ SUBMIT\n", "from torch_judge import check\n", "check(\"linear_regression\")" ], "execution_count": null } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.11.0" } }, "nbformat": 4, "nbformat_minor": 4 }