{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Intro to Data Science\n", "## Part VIII. - Deep Learning and it's applications\n", "\n", "### Table of contents\n", "\n", "- #### Deep learning basics\n", " - Theory\n", " - Layer Architecture types\n", " - Dense Neural Networks\n", " - Activision and Loss Functions\n", " - Convolutional Neural Networks\n", " - Recurrent Neural Networks\n", " - Word Embeddings\n", " - Regularization\n", " \n", "---\n", "\n", "# I. Deep learning basics\n", "\n", "## What is Deep Learning?\n", "\n", "> _Deep learning consists of neural networks with multiple hidden layers that learn increasingly abstract representations of input data._ [source](https://elitedatascience.com/keras-tutorial-deep-learning-in-python)\n", "\n", "> _Deep learning is a class of neural network algorithms that:_\n", "> - _Use a cascade of __multiple layers__ of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input._\n", "> - _Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) settings._\n", "> - _Learn __multiple levels of representations__ that correspond to __different levels of abstraction__; the levels form a hierarchy of concepts._ \n", "[source](https://en.wikipedia.org/wiki/Deep_learning#Definition)\n", "\n", "## Why is it important?\n", "\n", "Deep learning is widely used in our daily lives. It powers web search engines, recommender systems, image recognition systems, and self-driving cars. It enables the generation of realistic sound, images, and text, as well as the development of advanced AI agents. \n", "\n", "It represents the current state-of-the-art in machine learning for many tasks, including image recognition, text mining, and classification.\n", "\n", "## Tools\n", "- TensorFlow\n", "- PyTorch\n", "- Keras\n", "- Gensim (for word embeddings)\n", "- Hugging Face `transformers` (for pre-trained deep learning models)\n", "- (Optional: Scikit-Learn, primarily for simpler neural networks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# II. Deep Neural Network Architectures\n", "\n", "## [Dense Feedforward Network](https://keras.io/api/layers/core_layers/dense/)\n", "\n", "A **dense layer** is a fully connected neural network layer where each neuron receives input from all the neurons in the previous layer. This makes it **densely connected**. The layer has a **weight matrix (W)**, a **bias vector (b)**, and the activations of the previous layer (a). \n", "\n", "The following is the definition from the Keras documentation:\n", "\n", "> `output = activation(dot(input, kernel) + bias)`\n", "\n", "where:\n", "- **activation** is the non-linear activation function passed as an argument.\n", "- **kernel** is the weight matrix learned by the layer.\n", "- **bias** is the bias vector.\n", "\n", "---\n", "\n", "### [Activation Functions](https://keras.io/api/layers/activations/)\n", "\n", "Activation functions introduce **non-linearity** into neural networks, allowing them to learn complex patterns. Here are some commonly used activation functions:\n", "\n", "- **Sigmoid**: $\\frac{{\\rm e}^x}{{\\rm e}^x + 1}$ \n", " - Maps input values to the range (0,1). Useful for binary classification.\n", "- **Tanh**: $\\tanh(x)$ \n", " - Maps input values to the range (-1,1), making it zero-centered.\n", "- **ReLU (Rectified Linear Unit)**: $\\max(0, x)$ \n", " - Sets negative values to 0 while keeping positive values unchanged. 
Helps mitigate vanishing gradient problems.\n", "- **Softmax**: $\\frac{{\\rm e}^{x_i}}{\\sum{{\\rm e}^{x_j}}}$ \n", " - Converts raw scores into probabilities for multi-class classification.\n", "- **Hierarchical Softmax** \n", " - Used for large output spaces, speeding up computations.\n", "\n", "#### Further Reading:\n", "- [Deep Learning: Neurons and Activation Functions](https://medium.com/@srnghn/deep-learning-overview-of-neurons-and-activation-functions-1d98286cf1e4)\n", "- [Choosing the Right Activation Function](https://www.analyticsvidhya.com/blog/2017/10/fundamentals-deep-learning-activation-functions-when-to-use-them/)\n", "- [Stanford CS231n: Activation Functions](http://cs231n.github.io/neural-networks-1/#actfun)\n", "\n", "---\n", "\n", "### [Loss Functions](https://keras.io/api/losses/)\n", "\n", "Loss functions measure how well a model's predictions match the actual values. Some commonly used loss functions include:\n", "\n", "- **Mean Squared Error (MSE)**: \n", " $$ MSE = \\frac{1}{n} \\sum (y_{\\text{true}} - y_{\\text{pred}})^2 $$ \n", " - Penalizes large errors more than small ones. Used for regression tasks.\n", " \n", "- **Mean Absolute Error (MAE)**: \n", " $$ MAE = \\frac{1}{n} \\sum \\left| y_{\\text{true}} - y_{\\text{pred}} \\right| $$ \n", " - Measures absolute differences. More robust to outliers than MSE.\n", "\n", "- **Categorical Hinge Loss**: \n", " $$ \\max(0, 1 - t \\cdot y) $$ \n", " - Used for multi-class classification with hinge loss.\n", "\n", "- **Cross-Entropy Loss**: \n", " $$ V(f(x), t) = -t \\ln(f(x)) - (1 - t) \\ln(1 - f(x)) $$ \n", " - Used for binary and multi-class classification tasks.\n", "\n", "- **Cosine Proximity** \n", " - Measures similarity between predicted and true vectors.\n", "\n", "#### Further Reading:\n", "- [Stanford CS231n: Loss Functions](http://cs231n.github.io/neural-networks-2/#loss-functions)\n", "- [Choosing Loss and Activation Functions](https://towardsdatascience.com/deep-learning-which-loss-and-activation-functions-should-i-use-ac02f1c56aa8)\n", "\n", "---\n", "\n", "### [Regularization](https://chatbotslife.com/regularization-in-deep-learning-f649a45d6e0)\n", "\n", "Regularization techniques help **prevent overfitting** by constraining the model’s complexity.\n", "\n", "#### **Early Stopping**\n", "Early stopping monitors a validation loss criterion during training. If the loss stops improving for a specified number of iterations (patience parameter), training is halted to prevent overfitting.\n", "\n", "#### **Dropout**\n", " \n", "\n", "*By [Srivastava et al. (2014)](http://jmlr.org/papers/volume15/srivastava14a.old/srivastava14a.pdf)* \n", "\n", "Dropout is a **regularization technique** that randomly drops a fraction of neurons during training. This prevents co-adaptation of neurons and reduces overfitting. \n", "\n", "The Dropout method in Keras (`keras.layers.Dropout`) takes a float between 0 and 1, representing the fraction of neurons to drop. 
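\n", "\n", "As a quick, hedged illustration (the layer sizes and the 64-feature input are arbitrary, chosen to match the digits example later in this notebook), a `Dropout` layer is simply placed between other layers of a `Sequential` model:\n", "\n", "```python\n", "from keras.models import Sequential\n", "from keras.layers import Input, Dense, Dropout\n", "\n", "model = Sequential([\n", "    Input((64,)),\n", "    Dense(32, activation='relu'),\n", "    Dropout(0.2),  # randomly zero 20% of the hidden units at each training update\n", "    Dense(10, activation='softmax'),\n", "])\n", "```\n", "At inference time the layer is inactive, so predictions need no extra rescaling on the user's side.\n", "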
\n", "\n", "From the Keras documentation:\n", "\n", "> Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.\n", "\n", "#### **Weight Penalty (L1 / L2 Regularization)**\n", "- **L1 Regularization (Lasso)**: Adds an absolute value penalty, promoting sparsity in weights.\n", "- **L2 Regularization (Ridge)**: Adds a squared value penalty, reducing large weights but keeping all features.\n", "\n", "#### Further Reading:\n", "- [Stanford CS231n: Regularization](http://cs231n.github.io/neural-networks-2/#reg)\n", "\n", "---\n", "\n", "### In Practice\n", "#### Building a simple dense network to classify hand-written digits\n", "\n", "#### 1. Loading data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "from sklearn.datasets import load_digits\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import OneHotEncoder" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X, y = load_digits(return_X_y=True)\n", "yt = OneHotEncoder(categories='auto', sparse_output=False).fit_transform(y.reshape(-1, 1))\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, yt, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2. Model Construction\n", "\n", "We'll use the [`TensorFlow`](https://www.tensorflow.org/) library to define neural networks. \n", "\n", "To install TensorFlow, activate your environment and run:\n", "\n", "```bash\n", "conda activate szisz_ds_2025\n", "conda install tensorflow\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "from keras.models import Sequential\n", "from keras.layers import Input, Dense\n", "from keras.callbacks import EarlyStopping, TensorBoard" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Sequential([\n", " Input((64,)),\n", " Dense(8, activation='relu'),\n", " Dense(10, activation='softmax'),\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3. Assembly" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### 3.a Model validation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4. Creating callback functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# stopping early to prevent overfitting\n", "earlystopping = EarlyStopping(patience=3)\n", "\n", "# monitor training process through an UI\n", "tensorboard = TensorBoard(\n", " log_dir='tensor', \n", " histogram_freq=0, \n", " write_graph=True, \n", " write_images=True, \n", " update_freq='epoch'\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5. 
Model training" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model.fit(\n", " X_train, y_train, # training data\n", " batch_size=16, # number of data points to use in a training round\n", " epochs=100, # number of full training cycle \n", " validation_data=(X_test, y_test), # validation dataset\n", " callbacks=[earlystopping, tensorboard] # function to execute at the end of each epoch\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Follow training process throuch [TensorBoard UI](https://www.tensorflow.org/tensorboard/get_started)\n", "\n", "Run at your terminal:\n", "```bash\n", "tensorboard --logdir tensor\n", "```\n", "\n", "Then open in browser:\n", "```bash\n", "http://localhost:6007\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6. Model evaluation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "loss, acc = model.evaluate(X_test, y_test)\n", "print(f'test loss: {loss}, test acc: {acc}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 7. Prediction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def predict_classes(model, X):\n", " predictions = model.predict(X)\n", " predicted_classes = np.argmax(predictions, axis=-1)\n", " return predicted_classes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predict_classes(model, X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Build a classification model for the iris dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "### [Convolutional Neural Network (CNN)](https://keras.io/layers/convolutional/)\n", "\n", "\"Typical
By Aphex34 - Own work, CC BY-SA 4.0, Link\n", "\n", "> _Convolutional Neural Networks are very similar to ordinary Neural Networks: they are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply._ \n", "> _So what changes? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduce the amount of parameters in the network._ - [source](http://cs231n.github.io/convolutional-networks/)\n", "\n", "> _Convolutional Neural Networks have a different architecture than regular Neural Networks. Regular Neural Networks transform an input by putting it through a series of hidden layers. Every layer is made up of a set of neurons, where each layer is fully connected to all neurons in the layer before. Finally, there is a last fully-connected layer — the output layer — that represent the predictions._ \n", "> _Convolutional Neural Networks are a bit different. First of all, the layers are organised in 3 dimensions: width, height and depth. Further, the neurons in one layer do not connect to all the neurons in the next layer but only to a small region of it. Lastly, the final output will be reduced to a single vector of probability scores, organized along the depth dimension._ - [source](https://medium.freecodecamp.org/an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050)\n", "\n", "### **Key Components of a CNN**\n", "A Convolutional Neural Network consists of several building blocks:\n", "- **Convolutional layers**: Feature extraction\n", "- **Pooling layers**: Feature selection and dimensionality reduction\n", "- **Fully connected layers**: Classification\n", "\n", "#### **Convolutional Layer**\n", "\n", "A convolutional layer applies a set of learnable filters (kernels) to detect patterns in the input. In image processing, these patterns range from simple edges to complex textures and objects.\n", "\n", "> _In mathematics convolution is a mathematical operation on two functions (f and g) to produce a third function that expresses how the shape of one is modified by the other._ - [source](https://en.wikipedia.org/wiki/Convolution)\n", "\n", "\"Convolution
By Brian Amberg, derivative work: Tinos, CC BY-SA 3.0, Link\n", "\n", "The key parameters of a convolutional layer are:\n", "- **Depth**: Number of filters used in the layer\n", "- **Stride**: Step size when moving the filter over the input\n", "- **Padding**: Adding zero-padding to retain spatial dimensions\n", "\n", "During convolution, the filter slides over the input, performing matrix multiplications at each step, creating a feature map.\n", "> _We execute a convolution by sliding the filter over the input. At every location, a matrix multiplication is performed and sums the result onto the feature map._ \n", "> _In the animation below, you can see the convolution operation. You can see the filter (the green square) is sliding over our input (the blue square) and the sum of the convolution goes into the feature map (the red square)._ - [source](https://medium.freecodecamp.org/an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050)\n", "\n", "
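To make the sliding-window operation concrete, here is a minimal NumPy sketch (strictly speaking cross-correlation, i.e. without flipping the kernel, which is what CNN layers compute in practice; the toy image and filter values are made up for illustration):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Slide a kernel over an image (stride 1, no padding) and sum the elementwise products.\n", "def conv2d_valid(image, kernel):\n", "    kh, kw = kernel.shape\n", "    out_h = image.shape[0] - kh + 1\n", "    out_w = image.shape[1] - kw + 1\n", "    feature_map = np.zeros((out_h, out_w))\n", "    for i in range(out_h):\n", "        for j in range(out_w):\n", "            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)\n", "    return feature_map\n", "\n", "image = np.arange(25, dtype=float).reshape(5, 5)\n", "kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # a simple vertical-edge filter\n", "print(conv2d_valid(image, kernel))         # -> 3x3 feature map\n", "```\n", "A `Conv2D` layer does the same thing with many learned filters in parallel, which is where the depth of the output comes from.\n", "\n", "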
\n", "\n", "\n", "
\n", "\n", "
\n", "
\n", "Animation by Arden Dertat, Link \n", "Image by Arden Dertat, Link\n", "
\n", "\n", "#### **Pooling Layer**\n", "\n", "
By Andrej Karpathy, Link\n", "\n", "Pooling reduces the spatial dimensions of the input, making the network computationally efficient while retaining important features. It helps prevent overfitting and reduces the number of parameters.\n", "\n", "The most common pooling operation is **max pooling**, which selects the highest value in a given region. \n", "\n", "#### **Fully Connected Layer**\n", "A standard fully connected (dense) layer follows the convolutional and pooling layers. This layer performs classification using an appropriate loss function, such as cross-entropy loss for multi-class problems.\n", "\n", "---\n", "\n", "### **Further Reading**\n", "- [An Intuitive Guide to CNNs](https://medium.freecodecamp.org/an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050)\n", "- [CS231n: Convolutional Networks](http://cs231n.github.io/convolutional-networks/)\n", "- [Applied Deep Learning: CNNs](https://towardsdatascience.com/applied-deep-learning-part-4-convolutional-neural-networks-584bc134c1e2)\n", "- [Beginner’s Guide to CNNs](https://www.analyticsvidhya.com/blog/2018/12/guide-convolutional-neural-network-cnn/)\n", "- [Keras CNN Tutorial](https://github.com/ardendertat/Applied-Deep-Learning-with-Keras/blob/master/notebooks/Part%204%20%28GPU%29%20-%20Convolutional%20Neural%20Networks.ipynb)\n", "\n", "---\n", "\n", "### In Practice\n", "\n", "#### Build a CNN classifier for the hand digits dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "from keras.layers import Dense, Flatten\n", "from keras.layers import Conv2D, MaxPooling2D" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X, y = load_digits(return_X_y=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# number of cases, width, height, channels (rgb)\n", "Xt = X.reshape((X.shape[0], 8, 8, 1))\n", "yt = OneHotEncoder(categories='auto', sparse_output=False).fit_transform(y.reshape(-1, 1))\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(Xt, yt, random_state=42)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.heatmap(Xt[1, :, :, 0], cmap=\"gray\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Sequential([\n", " Input((8, 8, 1)),\n", " Conv2D(32, kernel_size=(3, 3), strides=(1, 1), activation='relu'),\n", " MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),\n", " Flatten(),\n", " Dense(10, activation='softmax')\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model.fit(\n", " X_train, y_train, # training data\n", " batch_size=16, # number of data points to use in a training round\n", " epochs=100, # number of full training cycle \n", " validation_data=(X_test, y_test), # validation dataset\n", " callbacks=[earlystopping], # function to execute at the end of each epoch\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "loss, acc = model.evaluate(X_test, 
y_test)\n", "print(f'test loss: {loss}, test acc: {acc}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Build a CNN for the MNIST classification problem\n", "\n", "If you get stuck, use [this example code](https://github.com/adventuresinML/adventures-in-ml-code/blob/master/keras_cnn.py) and the accompanying [tutorial](https://adventuresinmachinelearning.com/keras-tutorial-cnn-11-lines/)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from keras.datasets import mnist\n", "from keras.utils import to_categorical\n", "\n", "num_classes = 10\n", "\n", "# input image dimensions\n", "img_x, img_y = 28, 28\n", "\n", "# load the MNIST data set, which is already split into train and test sets for us\n", "(X_train, y_train), (X_test, y_test) = mnist.load_data()\n", "\n", "# because MNIST is greyscale, we only have a single channel\n", "X_train = X_train.reshape()  # TODO: fill in the required shape\n", "X_test = X_test.reshape()    # TODO: fill in the required shape\n", "input_shape = ()             # TODO: fill in the required shape\n", "\n", "# Keras' built-in alternative to the OneHotEncoder used earlier\n", "y_train = to_categorical(y_train, num_classes)\n", "y_test = to_categorical(y_test, num_classes)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# plot the first image in X_train with sns.heatmap\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# define model here\n", "model = Sequential([\n", "    \n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# compile model here\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# fit model\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# evaluate model\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "### [Recurrent Neural Networks (RNN)](https://keras.io/layers/recurrent/)\n", "\n", "
By François Deloche - Own work, CC BY-SA 4.0, Link\n", "\n", "\n", "> _A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition._ (or time-series forecasting). - [source](https://en.wikipedia.org/wiki/Recurrent_neural_network)\n", "\n", "RNNs differ from traditional neural networks because they have memory, allowing them to retain and utilize information from previous time steps. This is achieved by passing outputs from one step as inputs to the next, effectively creating a chain-like structure.\n", "\n", "> _A recurrent neural network can be thought of as multiple copies of the same network, each passing a message to a successor. Consider what happens if we unroll the loop: as you can see above, this chain-like nature reveals that recurrent neural networks are intimately related to sequences and lists._ - [source](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)\n", "\n", "#### The Challenge: Long-Term Dependencies\n", "\n", "A major issue with standard RNNs is their difficulty in capturing long-term dependencies in sequences. For example, in the sentence: _\"I grew up in **France**... that's why I speak fluent **French**.\"_, the model must remember \"France\" to correctly infer \"French.\" However, as the gap between relevant information increases, RNNs struggle to retain context due to the vanishing gradient problem.\n", "\n", "To address this, more advanced architectures such as Long Short-Term Memory (LSTM) networks were developed.\n", "\n", "---\n", "\n", "### [Long Short-Term Memory (LSTM) Networks](https://keras.io/layers/recurrent/#lstm)\n", "\n", "LSTMs are designed to handle long-term dependencies more effectively. While they follow the same chain-like structure as RNNs, their internal mechanisms are different, allowing them to selectively retain or discard information.\n", "\n", "
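For reference, one common formulation of the LSTM update equations (a sketch following the notation of the Understanding LSTMs post linked below; $\sigma$ is the logistic sigmoid, $\odot$ is element-wise multiplication, and $[h_{t-1}, x_t]$ is the concatenation of the previous hidden state and the current input):\n", "\n", "$$ f_t = \sigma(W_f [h_{t-1}, x_t] + b_f), \quad i_t = \sigma(W_i [h_{t-1}, x_t] + b_i), \quad o_t = \sigma(W_o [h_{t-1}, x_t] + b_o) $$\n", "\n", "$$ \tilde{C}_t = \tanh(W_C [h_{t-1}, x_t] + b_C), \quad C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t, \quad h_t = o_t \odot \tanh(C_t) $$\n", "\n", "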
By Christopher Olah, Link\n", "\n", "> _LSTMs are a specialized form of recurrent neural networks with feedback connections that enable them to process sequences of data. Unlike standard RNNs, LSTMs include mechanisms that prevent the vanishing gradient problem, allowing them to learn long-term dependencies. LSTMs are widely used in applications like speech recognition, language modeling, and time-series prediction._ - [source](https://en.wikipedia.org/wiki/Long_short-term_memory)\n", "\n", "#### Key Components of LSTMs:\n", "\n", "- **Cell states** – Remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell.\n", "- **Forget gates** – Decide what information to discard from the previous state, by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding.\n", "- **Input gates** – Decide which pieces of new information to store in the current cell state, using the same system as forget gates.\n", "- **Output gates** – Control which pieces of information in the current cell state to output, by assigning a value from 0 to 1 to the information, considering the previous and current states.\n", "\n", "By using these gates, LSTMs can learn which information is important and maintain it across long sequences, making them superior to standard RNNs in many tasks.\n", "\n", "---\n", "\n", "### Further Reading:\n", "\n", "- [Understanding LSTMs](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)\n", "- [A High-Level Introduction to LSTMs](https://medium.com/datadriveninvestor/a-high-level-introduction-to-lstms-34f81bfa262d)\n", "- [LSTM Overview by Skymind](https://skymind.ai/wiki/lstm)\n", "- [Keras: Understanding `return_state` and `return_sequences`](https://www.dlology.com/blog/how-to-use-return_state-or-return_sequences-in-keras/)\n", "\n", "---\n", "\n", "### In Practice\n", "\n", "#### Build a sentiment predictor on movie reviews\n", "\n", "Based on [this](https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/) tutorial." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from keras.layers import Embedding\n", "from keras.layers import LSTM\n", "\n", "from keras.datasets import imdb\n", "\n", "from keras.utils import pad_sequences" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "top_words = 5000\n", "(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "max_review_length = 500\n", "\n", "# pad sequences will fill every doc in the corpus to a given length\n", "X_train = pad_sequences(X_train, maxlen=max_review_length)\n", "X_test = pad_sequences(X_test, maxlen=max_review_length)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "embedding_vector_length = 32\n", "\n", "model = Sequential([\n", " Input((max_review_length,)),\n", " Embedding(input_dim=top_words, # number of words in the vocab\n", " output_dim=embedding_vector_length), # size of the embedding vector)\n", " LSTM(units=100),\n", " Dense(1, activation='sigmoid')\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.compile(\n", " loss='binary_crossentropy', \n", " optimizer='adam', \n", " metrics=['accuracy']\n", ")\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "score = model.evaluate(X_test, y_test, batch_size=16)\n", "print('test loss: {}, test accuracy: {}'.format(*score))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Predict simulated stock prices\n", "\n", "Follow this [tutorial](https://stackabuse.com/time-series-analysis-with-lstm-using-pythons-keras-library/)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "### [Word](https://keras.io/layers/embeddings/) [Embeddings](https://radimrehurek.com/gensim/models/word2vec.html)\n", "\n", "
\n", " \n", " \n", "
\n", "\n", "
\n", "
Images from The Morning Paper\n", "\n", "> _Word embeddings refer to a set of techniques in natural language processing (NLP) that map words or phrases from a vocabulary into continuous vector spaces with much lower dimensions. These embeddings capture semantic relationships between words, enabling models to understand context and meaning._ \n", "> _Methods for generating word embeddings include neural networks, dimensionality reduction techniques applied to word co-occurrence matrices, probabilistic models, and explicit representations based on word contexts._ - [source](https://en.wikipedia.org/wiki/Word_embedding)\n", "\n", "The key intuition behind word embeddings is that words appearing in similar contexts tend to have similar meanings.\n", "\n", "### Training Word Embeddings\n", "\n", "
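Before looking at the two training architectures below, here is a minimal, hedged sketch of training embeddings with Gensim (listed in the tools above); the toy corpus is made up and the parameter names follow the gensim 4.x API:\n", "\n", "```python\n", "from gensim.models import Word2Vec\n", "\n", "# Toy corpus: a list of tokenised sentences (a real model needs far more text).\n", "sentences = [\n", "    ['the', 'king', 'rules', 'the', 'kingdom'],\n", "    ['the', 'queen', 'rules', 'the', 'kingdom'],\n", "    ['the', 'man', 'walks', 'in', 'the', 'city'],\n", "    ['the', 'woman', 'walks', 'in', 'the', 'city'],\n", "]\n", "\n", "# sg=1 selects the skip-gram architecture; sg=0 would train CBOW instead\n", "model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1, epochs=200)\n", "\n", "print(model.wv['king'])               # the learned 16-dimensional vector\n", "print(model.wv.most_similar('king'))  # nearest words in the embedding space\n", "```\n", "\n", "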
\n", " \n", " \n", " \n", "
\n", "\n", "
\n", "
Images from Word2Vec Tutorial - The Skip-Gram Model, by Chris McCormick\n", "\n", "There are two primary architectures for learning word embeddings:\n", "\n", "- **Continuous Bag-of-Words (CBOW)**: Predicts a target word based on surrounding context words in a given window. The model is order-invariant, meaning word order in the context does not affect predictions.\n", "- **Skip-Gram Model**: Predicts context words based on a given target word. Context words closer to the target word are weighted more heavily than those further away.\n", "\n", "#### How Neural Network Weights Become Embedding Vectors\n", "\n", "The Word2Vec model is trained as a shallow neural network with one hidden layer. It learns to predict words based on their surrounding context (CBOW) or vice versa (Skip-Gram). The key insight is that after training, we **discard the output layer** and use the **weights of the hidden layer** as the word embeddings. \n", "\n", "1. **Input Representation**: Each word in the vocabulary is assigned a unique one-hot vector (a sparse vector where only one element is 1, and the rest are 0).\n", "2. **Projection Layer (Hidden Layer Weights)**: The one-hot input is multiplied by a weight matrix \\( W \\), mapping it into a dense vector space of lower dimensionality.\n", "3. **Output Layer (Discarded After Training)**: The model is trained to predict either a target word (CBOW) or context words (Skip-Gram) using another weight matrix \\( W' \\). However, once training is complete, we do not need this layer for embeddings.\n", "4. **Final Embedding Extraction**: The trained weights of the **first weight matrix \\( W \\) (input-to-hidden layer)** become the word embeddings. Each row in this matrix corresponds to a word's dense vector representation.\n", "\n", "This process allows similar words (based on their contexts) to have similar vector representations, capturing semantic relationships like **\"King - Man + Woman = Queen\"**.\n", "\n", "### Further Reading\n", "\n", "- [Vector Representations of Words (TensorFlow)](https://www.tensorflow.org/tutorials/representation/word2vec#vector-representations-of-words)\n", "- [How Does Word2Vec Work? 
(Quora)](https://www.quora.com/How-does-word2vec-work-Can-someone-walk-through-a-specific-example)\n", "- [Word2Vec Tutorial - Skip-Gram Model](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/)\n", "- [Introduction to Word Embeddings and Word2Vec](https://towardsdatascience.com/introduction-to-word-embedding-and-word2vec-652d0c2060fa)\n", "- [Neural Network Embeddings Explained](https://towardsdatascience.com/neural-network-embeddings-explained-4d028e6f0526)\n", "- [Word Embeddings in NLP and Their Applications](https://hackernoon.com/word-embeddings-in-nlp-and-its-applications-fab15eaf7430)\n", "- [Build Your Own Embedding and Use It in a Neural Network](https://blog.cambridgespark.com/tutorial-build-your-own-embedding-and-use-it-in-a-neural-network-e9cde4a81296)\n", "- [Word2Vec Wiki - Skymind AI](https://skymind.ai/wiki/word2vec)\n", "- [Word2Vec Graph Visualization](https://github.com/anvaka/word2vec-graph)\n", "- [Using Word Embedding Layers in Deep Learning with Keras](https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/)\n", "- [Handling Text Data Using Keras Embedding Layer](https://heartbeat.fritz.ai/using-a-keras-embedding-layer-to-handle-text-data-2c88dc019600)\n", "- [Google Word2Vec Archive](https://code.google.com/archive/p/word2vec/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "from tensorflow.keras.preprocessing.text import one_hot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "docs = ['Well done!',\n", " 'Good work',\n", " 'Great effort',\n", " 'nice work',\n", " 'Excellent!',\n", " 'Weak',\n", " 'Poor effort!',\n", " 'not good',\n", " 'poor work',\n", " 'Could have done better.']\n", "\n", "labels = np.array([1, 1, 1, 1, 1,\n", " 0, 0, 0, 0, 0])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# integer encode the documents\n", "vocab_size = 50\n", "encoded_docs = [one_hot(d, vocab_size) for d in docs]\n", "print(encoded_docs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# pad documents to a max length of 4 words\n", "max_length = 4\n", "padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')\n", "print(padded_docs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# define the model\n", "model = Sequential([\n", " Input((max_length,)),\n", " Embedding(vocab_size, 8),\n", " Flatten(),\n", " Dense(1, activation='sigmoid')\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# compile the model\n", "model.compile(\n", " optimizer='adam', \n", " loss='binary_crossentropy', \n", " metrics=['accuracy']\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# summarize the model\n", "print(model.summary())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# fit the model\n", "model.fit(padded_docs, labels, epochs=50, verbose=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# evaluate the model\n", "score = model.evaluate(padded_docs, labels, verbose=0)\n", "print('loss: {}, accuracy: {}'.format(*score))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Test model with an example:" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "text = \"good effort\"\n", "enc_text = [one_hot(text, vocab_size)]\n", "pad_text = pad_sequences(enc_text, maxlen=max_length, padding='post')\n", "pred_text = predict_classes(model, pad_text)\n", "\n", "text, enc_text, pad_text, pred_text" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: News classification\n", "\n", "Classify the 20newsgroups dataset while building an embedding. As a first step, try to separate the atheism documents (`alt.atheism`) from the christian documents (`soc.religion.christian`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "### Further tutorials:\n", "- https://www.pyimagesearch.com/2018/09/10/keras-tutorial-how-to-get-started-with-keras-deep-learning-and-python/\n", "- https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/\n", "- https://www.datacamp.com/community/tutorials/deep-learning-python\n", "- https://elitedatascience.com/keras-tutorial-deep-learning-in-python\n", "- https://www.guru99.com/keras-tutorial.html\n", "- https://github.com/adventuresinML/adventures-in-ml-code" ] } ], "metadata": { "kernelspec": { "display_name": "szisz_ds_2025", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.9" } }, "nbformat": 4, "nbformat_minor": 2 }