{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "2021-07-26-online-learning-concept.ipynb", "provenance": [], "toc_visible": true, "authorship_tag": "ABX9TyOInylA/7l8JBwp9/GWRPk3" }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "uFnpg9coKEpG" }, "source": [ "# From Batch to Online/Stream (Concept)\n", "> Learning and validating the concept of incremental (online) learning and comparing it to its counterpart, batch learning\n", "\n", "- toc: true\n", "- badges: true\n", "- comments: true\n", "- categories: [Concept, OnlineLearning]\n", "- image:" ] }, { "cell_type": "markdown", "metadata": { "id": "PasR1CuXJxNv" }, "source": [ "## Setup" ] }, { "cell_type": "code", "metadata": { "id": "It-vcW2kIjMr" }, "source": [ "!pip install river\n", "!pip install -U numpy" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "sw24c-Z0JpdE" }, "source": [ "import numpy as np\n", "\n", "from sklearn import datasets, linear_model, metrics\n", "from sklearn import model_selection, pipeline, preprocessing\n", "\n", "from river import preprocessing, linear_model\n", "from river import optim, compat, compose, stream" ], "execution_count": 9, "outputs": [] }, { "cell_type": "code", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "anFBvNsfJ3_d", "outputId": "d20d9671-5613-4455-d4ce-f4bb6456d045" }, "source": [ "!pip install -q watermark\n", "%reload_ext watermark\n", "%watermark -m -iv -u -t -d" ], "execution_count": 22, "outputs": [ { "output_type": "stream", "text": [ "Last updated: 2021-07-26 14:34:20\n", "\n", "Compiler : GCC 7.5.0\n", "OS : Linux\n", "Release : 5.4.104+\n", "Machine : x86_64\n", "Processor : x86_64\n", "CPU cores : 2\n", "Architecture: 64bit\n", "\n", "river : 0.7.1\n", "IPython: 5.5.0\n", "sklearn: 0.0\n", "numpy : 1.21.1\n", "\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "gIOe7-dQNXAO" }, "source": [ "## A quick overview of batch learning\n", "\n", "If you've already delved into machine learning, then you shouldn't have any difficulty getting started with incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook in particular is to introduce simple notions. We'll also start to show how `river` fits in and explain how to use it.\n", "\n", "The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile, in *unsupervised learning* there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically means that all of the data is available to our learning algorithm at once. Multiple libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/).\n", "\n", "As a simple example of batch learning let's say we want to learn to predict whether a woman has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html). ",
"We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logistic regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using 5-fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)). " ] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:34.816222Z", "iopub.status.busy": "2021-04-16T18:31:34.814857Z", "iopub.status.idle": "2021-04-16T18:31:36.003613Z", "shell.execute_reply": "2021-04-16T18:31:36.004126Z" }, "id": "aC6f3NOsNXAR", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "8e9f9eed-3062-4516-bffc-6e863aa84403" }, "source": [ "# Load the data\n", "dataset = datasets.load_breast_cancer()\n", "X, y = dataset.data, dataset.target\n", "\n", "# Define the steps of the model\n", "model = pipeline.Pipeline([\n", " ('scale', preprocessing.StandardScaler()),\n", " ('log_reg', linear_model.LogisticRegression(solver='lbfgs'))\n", "])\n", "\n", "# Define a deterministic cross-validation procedure\n", "cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)\n", "\n", "# Compute the ROC AUC scores via cross-validation\n", "scorer = metrics.make_scorer(metrics.roc_auc_score)\n", "scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)\n", "\n", "# Display the average score and its standard deviation\n", "print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')" ], "execution_count": 2, "outputs": [ { "output_type": "stream", "text": [ "ROC AUC: 0.975 (± 0.011)\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "1vCeknVRNXAX" }, "source": [ "This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to:\n", "\n", "1. Loading (and preprocessing) the data\n", "2. Fitting a model to the data\n", "3. Computing the performance of the model on unseen data\n", "\n", "This is pretty standard and is maybe how most people imagine a machine learning pipeline. However, this way of proceeding has certain downsides. First of all, your laptop would crash if the `load_breast_cancer` function returned a dataset whose size exceeded your available amount of RAM. Sometimes you can use some tricks to get around this. For example, by optimizing the data types and by using sparse representations where applicable you can potentially save precious gigabytes of RAM. However, like many tricks this only goes so far. If your dataset weighs hundreds of gigabytes then you won't go far without some special hardware. One solution is to do out-of-core learning; that is, to use algorithms that can learn by being presented the data in chunks or mini-batches. If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/).\n"
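, "\n",
"As a rough sketch of the mini-batch idea, some scikit-learn estimators expose a `partial_fit` method that lets you feed the data in chunks. The snippet below is purely illustrative and isn't used in the rest of this notebook; the number of chunks is arbitrary.\n",
"\n",
"```python\n",
"import numpy as np\n",
"from sklearn.linear_model import SGDClassifier\n",
"\n",
"# Illustrative sketch: present the data to the model in chunks rather than all at once\n",
"sgd = SGDClassifier()\n",
"\n",
"for i, (X_chunk, y_chunk) in enumerate(zip(np.array_split(X, 10), np.array_split(y, 10))):\n",
"    if i == 0:\n",
"        # The first call must declare every class that can appear in the stream\n",
"        sgd.partial_fit(X_chunk, y_chunk, classes=np.unique(y))\n",
"    else:\n",
"        sgd.partial_fit(X_chunk, y_chunk)\n",
"```\n"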
, "\n", "Another issue with the batch learning regime is that it can't elegantly learn from new data. Indeed, if new data is made available, then the model has to learn from scratch with a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even second. For example, if you're building a recommendation engine for an e-commerce app, then you're probably retraining your model from scratch every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade.\n", "\n", "A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Every time you want to train your model you first have to extract features. The catch is that some features might not be accessible at the particular point in time you need them. For example, maybe some attributes in your data warehouse get overwritten over time. In other words, maybe not all the features pertaining to a particular observation are still available, whereas they were a week ago. This happens more often than not in real scenarios, and unless you have a sophisticated data engineering pipeline you will encounter these issues at some point. " ] }, { "cell_type": "markdown", "metadata": { "id": "jVsH2HaqNXAd" }, "source": [ "## A hands-on introduction to incremental learning\n", "\n", "Incremental learning is also often called *online learning* or *stream learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence, the terms \"incremental learning\" and \"stream learning\" (from which `river` derives its name) are preferred. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in its entirety, but rather the observations are provided one by one. As an example let's stream through the dataset used previously." ] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.008164Z", "iopub.status.busy": "2021-04-16T18:31:36.007514Z", "iopub.status.idle": "2021-04-16T18:31:36.009192Z", "shell.execute_reply": "2021-04-16T18:31:36.009790Z" }, "id": "8LyFqjlMNXAe" }, "source": [ "for xi, yi in zip(X, y):\n", " # This is where the model learns\n", " pass" ], "execution_count": 3, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "z3BvgUFuNXAf" }, "source": [ "In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc."
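, "\n", "\n",
"For instance, here is what streaming observations one by one from a CSV file could look like, using nothing but Python's standard library. This is purely illustrative: `breast_cancer.csv` is a hypothetical file with one column per feature plus a `target` column.\n",
"\n",
"```python\n",
"import csv\n",
"\n",
"# Illustrative sketch: 'breast_cancer.csv' is a hypothetical file\n",
"with open('breast_cancer.csv') as f:\n",
"    for row in csv.DictReader(f):\n",
"        yi = int(row.pop('target'))\n",
"        xi = {name: float(value) for name, value in row.items()}\n",
"        # This is where the model would learn from (xi, yi)\n",
"        pass\n",
"```\n",
"\n",
"Coming back to the in-memory loop over `(X, y)`: if we look at `xi` we can notice that it is a `numpy.ndarray`."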
] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.018136Z", "iopub.status.busy": "2021-04-16T18:31:36.017467Z", "iopub.status.idle": "2021-04-16T18:31:36.019901Z", "shell.execute_reply": "2021-04-16T18:31:36.020316Z" }, "id": "ZTpjG4dRNXAg", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "8a30e71e-a1a2-4776-f56b-e5e87f1f759e" }, "source": [ "xi" ], "execution_count": 4, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "array([7.760e+00, 2.454e+01, 4.792e+01, 1.810e+02, 5.263e-02, 4.362e-02,\n", " 0.000e+00, 0.000e+00, 1.587e-01, 5.884e-02, 3.857e-01, 1.428e+00,\n", " 2.548e+00, 1.915e+01, 7.189e-03, 4.660e-03, 0.000e+00, 0.000e+00,\n", " 2.676e-02, 2.783e-03, 9.456e+00, 3.037e+01, 5.916e+01, 2.686e+02,\n", " 8.996e-02, 6.444e-02, 0.000e+00, 0.000e+00, 2.871e-01, 7.039e-02])" ] }, "metadata": { "tags": [] }, "execution_count": 4 } ] }, { "cell_type": "markdown", "metadata": { "id": "jTHGTFTzNXAi" }, "source": [ "`river` by design works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least when single observations are concerned. `dict`s bring the added benefit that each feature can be accessed by name rather than by position." ] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.032784Z", "iopub.status.busy": "2021-04-16T18:31:36.032007Z", "iopub.status.idle": "2021-04-16T18:31:36.034496Z", "shell.execute_reply": "2021-04-16T18:31:36.034908Z" }, "id": "faTWoh1nNXAk", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "950e22b7-29e6-4b64-971e-ba7ff9069269" }, "source": [ "for xi, yi in zip(X, y):\n", " xi = dict(zip(dataset.feature_names, xi))\n", " pass\n", "\n", "xi" ], "execution_count": 5, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "{'area error': 19.15,\n", " 'compactness error': 0.00466,\n", " 'concave points error': 0.0,\n", " 'concavity error': 0.0,\n", " 'fractal dimension error': 0.002783,\n", " 'mean area': 181.0,\n", " 'mean compactness': 0.04362,\n", " 'mean concave points': 0.0,\n", " 'mean concavity': 0.0,\n", " 'mean fractal dimension': 0.05884,\n", " 'mean perimeter': 47.92,\n", " 'mean radius': 7.76,\n", " 'mean smoothness': 0.05263,\n", " 'mean symmetry': 0.1587,\n", " 'mean texture': 24.54,\n", " 'perimeter error': 2.548,\n", " 'radius error': 0.3857,\n", " 'smoothness error': 0.007189,\n", " 'symmetry error': 0.02676,\n", " 'texture error': 1.428,\n", " 'worst area': 268.6,\n", " 'worst compactness': 0.06444,\n", " 'worst concave points': 0.0,\n", " 'worst concavity': 0.0,\n", " 'worst fractal dimension': 0.07039,\n", " 'worst perimeter': 59.16,\n", " 'worst radius': 9.456,\n", " 'worst smoothness': 0.08996,\n", " 'worst symmetry': 0.2871,\n", " 'worst texture': 30.37}" ] }, "metadata": { "tags": [] }, "execution_count": 5 } ] }, { "cell_type": "markdown", "metadata": { "id": "ZiCA406mNXAn" }, "source": [ "Conveniently, `river`'s `stream` module has an `iter_sklearn_dataset` function that we can use instead."
] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.038486Z", "iopub.status.busy": "2021-04-16T18:31:36.037807Z", "iopub.status.idle": "2021-04-16T18:31:36.959471Z", "shell.execute_reply": "2021-04-16T18:31:36.959892Z" }, "id": "w3WbsuX0NXAo" }, "source": [ "for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):\n", " pass" ], "execution_count": 6, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "77_fWU-UNXAo" }, "source": [ "The simple fact that we are getting the data as a stream means that we can't do a lot of things the same way as in a batch setting. For example let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature from each value and then divide the result by the standard deviation of the feature. The problem is that we can't possibly know the values of the mean and the standard deviation before actually going through all the data! One way to proceed would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that this defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road.\n", "\n", "The way we do feature scaling in `river` involves computing *running statistics* (also known as *moving statistics*). The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done like so:\n", "\n", "$$\n", "\\begin{cases}\n", "n_{t+1} = n_t + 1 \\\\\n", "\\mu_{t+1} = \\mu_t + \\frac{x - \\mu_t}{n_{t+1}}\n", "\\end{cases}\n", "$$\n", "\n", "Likewise, the running variance can be computed like so:\n", "\n", "$$\n", "\\begin{cases}\n", "n_{t+1} = n_t + 1 \\\\\n", "\\mu_{t+1} = \\mu_t + \\frac{x - \\mu_t}{n_{t+1}} \\\\\n", "s_{t+1} = s_t + (x - \\mu_t) \\times (x - \\mu_{t+1}) \\\\\n", "\\sigma_{t+1}^2 = \\frac{s_{t+1}}{n_{t+1}}\n", "\\end{cases}\n", "$$\n", "\n", "where $s_t$ is a running sum of squares and $\\sigma_t^2$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult."
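, "\n", "\n",
"For instance, the update rule above can be wrapped in a small self-contained class. This is just a sketch for illustration, not `river`'s implementation:\n",
"\n",
"```python\n",
"class RunningMeanVar:\n",
"    # Running estimate of the mean and variance of a stream of values\n",
"\n",
"    def __init__(self):\n",
"        self.n = 0\n",
"        self.mean = 0.0\n",
"        self.sum_of_squares = 0.0\n",
"\n",
"    def update(self, x):\n",
"        self.n += 1\n",
"        old_mean = self.mean\n",
"        self.mean += (x - old_mean) / self.n\n",
"        self.sum_of_squares += (x - old_mean) * (x - self.mean)\n",
"\n",
"    @property\n",
"    def variance(self):\n",
"        return self.sum_of_squares / self.n if self.n else 0.0\n",
"```\n",
"\n",
"As an example, let's apply the same update rule inline to compute the running mean and variance of the `'mean area'` variable."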
] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.964579Z", "iopub.status.busy": "2021-04-16T18:31:36.963945Z", "iopub.status.idle": "2021-04-16T18:31:36.980938Z", "shell.execute_reply": "2021-04-16T18:31:36.981403Z" }, "id": "-yUEM0MtNXAq", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "572bd9cd-0c23-4eda-b392-c8d128508bd3" }, "source": [ "n, mean, sum_of_squares, variance = 0, 0, 0, 0\n", "\n", "for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):\n", " n += 1\n", " old_mean = mean\n", " mean += (xi['mean area'] - mean) / n\n", " sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean)\n", " variance = sum_of_squares / n\n", " \n", "print(f'Running mean: {mean:.3f}')\n", "print(f'Running variance: {variance:.3f}')" ], "execution_count": 7, "outputs": [ { "output_type": "stream", "text": [ "Running mean: 654.889\n", "Running variance: 123625.903\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "F8vbyBe4NXAs" }, "source": [ "Let's compare this with `numpy`. But remember, `numpy` requires access to \"all\" the data." ] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.985590Z", "iopub.status.busy": "2021-04-16T18:31:36.984910Z", "iopub.status.idle": "2021-04-16T18:31:36.986896Z", "shell.execute_reply": "2021-04-16T18:31:36.987268Z" }, "id": "uLrCoJG5NXAt", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "b8d62011-06cc-437b-f200-7e2acbc05e31" }, "source": [ "i = list(dataset.feature_names).index('mean area')\n", "print(f'True mean: {np.mean(X[:, i]):.3f}')\n", "print(f'True variance: {np.var(X[:, i]):.3f}')" ], "execution_count": 10, "outputs": [ { "output_type": "stream", "text": [ "True mean: 654.889\n", "True variance: 123625.903\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "9BrrseaBNXAt" }, "source": [ "The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this discrepancy is beneficial and acts as some sort of regularization...\n", "\n", "Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `river` is to use the `StandardScaler` class from the `preprocessing` module, like so:" ] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:36.990825Z", "iopub.status.busy": "2021-04-16T18:31:36.990238Z", "iopub.status.idle": "2021-04-16T18:31:37.034936Z", "shell.execute_reply": "2021-04-16T18:31:37.035331Z" }, "id": "n6P3W4JVNXAu" }, "source": [ "scaler = preprocessing.StandardScaler()\n", "\n", "for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):\n", " scaler = scaler.learn_one(xi)" ], "execution_count": 12, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "ZVTLgC5LNXAv" }, "source": [ "Now that we are scaling the data, we can start doing some actual machine learning. We're going to train an online logistic regression. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent* (SGD), a popular optimization method with many variants that is also commonly used to train neural networks. The idea is that at each step we compute the loss between the current prediction and the truth, then calculate the gradient, which is simply the set of partial derivatives of the loss with respect to each weight of the model. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitly.\n"
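, "\n",
"To make this concrete, here is a minimal sketch of a single SGD step for a logistic regression trained with the log loss. It is purely illustrative and is not `river`'s internal implementation:\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def sgd_step(weights, x, y, lr=0.01):\n",
"    # x is a dict of features, y is a binary label (0 or 1)\n",
"    # Predict with the current weights (sigmoid of the dot product)\n",
"    z = sum(weights.get(name, 0.0) * value for name, value in x.items())\n",
"    p = 1.0 / (1.0 + math.exp(-z))\n",
"    # For the log loss, the derivative w.r.t. each weight is (p - y) * feature value,\n",
"    # so each weight takes a small step in the opposite direction\n",
"    for name, value in x.items():\n",
"        weights[name] = weights.get(name, 0.0) - lr * (p - y) * value\n",
"    return weights\n",
"```\n"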
, "\n", "Online logistic regression can be done in `river` with the `LogisticRegression` class from the `linear_model` module. We'll be using plain and simple SGD via the `SGD` optimizer from the `optim` module. During training we'll record the predicted probability of the positive class for each observation, and evaluate the ROC AUC over the whole stream at the end." ] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:37.041671Z", "iopub.status.busy": "2021-04-16T18:31:37.040278Z", "iopub.status.idle": "2021-04-16T18:31:37.151131Z", "shell.execute_reply": "2021-04-16T18:31:37.151533Z" }, "id": "__4AHNo4NXAw", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "f9cb90ed-11eb-4e70-974a-43fa0c6d84af" }, "source": [ "scaler = preprocessing.StandardScaler()\n", "optimizer = optim.SGD(lr=0.01)\n", "log_reg = linear_model.LogisticRegression(optimizer)\n", "\n", "y_true = []\n", "y_pred = []\n", "\n", "for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42):\n", " \n", " # Scale the features\n", " xi_scaled = scaler.learn_one(xi).transform_one(xi)\n", " \n", " # Test the current model on the new \"unobserved\" sample\n", " yi_pred = log_reg.predict_proba_one(xi_scaled)\n", " # Train the model with the new sample\n", " log_reg.learn_one(xi_scaled, yi)\n", " \n", " # Store the truth and the prediction\n", " y_true.append(yi)\n", " y_pred.append(yi_pred[True])\n", " \n", "print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}')" ], "execution_count": 14, "outputs": [ { "output_type": "stream", "text": [ "ROC AUC: 0.990\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "WS4_NQO_NXAx" }, "source": [ "The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logistic regression. However, to make things really comparable it would be nice to use the same cross-validation procedure. `river` has a `compat` module that contains utilities for making `river` compatible with other Python libraries. Because we're working with a classifier here, we'll use the `convert_river_to_sklearn` function. We'll also use `compose.Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in one single object."
] }, { "cell_type": "code", "metadata": { "execution": { "iopub.execute_input": "2021-04-16T18:31:37.156707Z", "iopub.status.busy": "2021-04-16T18:31:37.156129Z", "iopub.status.idle": "2021-04-16T18:31:37.451580Z", "shell.execute_reply": "2021-04-16T18:31:37.452139Z" }, "id": "HK1UEAP5NXAy", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "d726c81b-49cc-46c1-c241-10331b24c135" }, "source": [ "# We define a Pipeline, exactly like we did earlier for sklearn \n", "model = compose.Pipeline(\n", " ('scale', preprocessing.StandardScaler()),\n", " ('log_reg', linear_model.LogisticRegression())\n", ")\n", "\n", "# We make the Pipeline compatible with sklearn\n", "model = compat.convert_river_to_sklearn(model)\n", "\n", "# We compute the CV scores using the same CV scheme and the same scoring\n", "scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)\n", "\n", "# Display the average score and its standard deviation\n", "print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')" ], "execution_count": 17, "outputs": [ { "output_type": "stream", "text": [ "ROC AUC: 0.964 (± 0.016)\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "S97lPgTxNXA0" }, "source": [ "This time the ROC AUC score is lower, which is what we would expect. Indeed, online learning isn't as accurate as batch learning. However, it all depends on what you're interested in. If you're only interested in predicting the next observation then the online learning regime would be better. That's why it's a bit hard to compare both approaches: they're suited to different scenarios." ] }, { "cell_type": "markdown", "metadata": { "id": "CPzjeR21NXA1" }, "source": [ "## Going further" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" }, "id": "HGK4NVpvNXA3" }, "source": [ "Here are a few resources if you want to do some reading:\n", "\n", "- [Online learning -- Wikipedia](https://www.wikiwand.com/en/Online_machine_learning)\n", "- [What is online machine learning? -- Max Pagels](https://medium.com/value-stream-design/online-machine-learning-515556ff72c5)\n", "- [Introduction to Online Learning -- USC course](http://www-bcf.usc.edu/~haipengl/courses/CSCI699/)\n", "- [Online Methods in Machine Learning -- MIT course](http://www.mit.edu/~rakhlin/6.883/)\n", "- [Online Learning: A Comprehensive Survey](https://arxiv.org/pdf/1802.02871.pdf)\n", "- [Streaming 101: The world beyond batch](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101)\n", "- [Machine learning for data streams](https://www.cms.waikato.ac.nz/~abifet/book/contents.html)\n", "- [Data Stream Mining: A Practical Approach](https://www.cs.waikato.ac.nz/~abifet/MOA/StreamMining.pdf)" ] } ] }