{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Hyperparameter optimization with Dask\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Every machine learning model has some values that are specified before training begins. These values help adapt the model to the data but must be given before any training data is seen. For example, this might be `penalty` or `C` in Scikit-learn's [LogisiticRegression]. These values that come before any training data and are called \"hyperparameters\". Typical usage looks something like:\n", "\n", "``` python\n", "from sklearn.linear_model import LogisiticRegression\n", "from sklearn.datasets import make_classification\n", "\n", "X, y = make_classification()\n", "est = LogisiticRegression(C=10, penalty=\"l2\")\n", "est.fit(X, y)\n", "```\n", "\n", "These hyperparameters influence the quality of the prediction. For example, if `C` is too small in the example above, the output of the estimator will not fit the data well.\n", "\n", "Determining the values of these hyperparameters is difficult. In fact, Scikit-learn has an entire documentation page on finding the best values: https://scikit-learn.org/stable/modules/grid_search.html\n", "\n", "[LogisiticRegression]:https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dask enables some new techniques and opportunities for hyperparameter optimization. One of these opportunities involves stopping training early to limit computation. Naturally, this requires some way to stop and restart training (`partial_fit` or `warm_start` in Scikit-learn parlance).\n", "\n", "This is especially useful when the search is complex and has many search parameters. Good examples are most deep learning models, which has specialized algorithms for handling many data but have difficulty providing basic hyperparameters (e.g., \"learning rate\", \"momentum\" or \"weight decay\").\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**This notebook will walk through**\n", "\n", "* setting up a realistic example\n", "* how to use `HyperbandSearchCV`, including\n", " * understanding the input parameters to `HyperbandSearchCV`\n", " * running the hyperparameter optimization\n", " * how to access informantion from `HyperbandSearchCV`\n", " \n", "This notebook will specifically *not* show a performance comparison motivating `HyperbandSearchCV` use. `HyperbandSearchCV` finds high scores with minimal training; however, this is a tutorial on how to *use* it. All performance comparisons are relegated to section [*Learn more*](#Learn-more)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup Dask" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from distributed import Client\n", "client = Client(processes=False, threads_per_worker=4,\n", " n_workers=1, memory_limit='2GB')\n", "client" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import make_circles\n", "import numpy as np\n", "import pandas as pd\n", "\n", "X, y = make_circles(n_samples=30_000, random_state=0, noise=0.09)\n", "\n", "pd.DataFrame({0: X[:, 0], 1: X[:, 1], \"class\": y}).sample(4_000).plot.scatter(\n", " x=0, y=1, alpha=0.2, c=\"class\", cmap=\"bwr\"\n", ");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Add random dimensions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.utils import check_random_state\n", "\n", "rng = check_random_state(42)\n", "random_feats = rng.uniform(-1, 1, size=(X.shape[0], 4))\n", "X = np.hstack((X, random_feats))\n", "X.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split and scale data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5_000, random_state=42)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler\n", "from sklearn.model_selection import train_test_split\n", "scaler = StandardScaler().fit(X_train)\n", "\n", "X_train = scaler.transform(X_train)\n", "X_test = scaler.transform(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dask.utils import format_bytes\n", "\n", "for name, X in [(\"train\", X_train), (\"test\", X_test)]:\n", " print(\"dataset =\", name)\n", " print(\"shape =\", X.shape)\n", " print(\"bytes =\", format_bytes(X.nbytes))\n", " print(\"-\" * 20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have our train and test sets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create model and search space" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use Scikit-learn's MLPClassifier as our model (for convenience). Let's use this model with 24 neurons and tune some of the other basic hyperparameters.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from sklearn.neural_network import MLPClassifier\n", "\n", "model = MLPClassifier()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deep learning libraries can be used as well. 
, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "params = {\n", "    \"hidden_layer_sizes\": [\n", "        (24, ),\n", "        (12, 12),\n", "        (6, 6, 6, 6),\n", "        (4, 4, 4, 4, 4, 4),\n", "        (12, 6, 3, 3),\n", "    ],\n", "    \"activation\": [\"relu\", \"logistic\", \"tanh\"],\n", "    \"alpha\": np.logspace(-6, -3, num=1000),  # continuous (log-spaced) values\n", "    \"batch_size\": [16, 32, 64, 128, 256, 512],\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hyperparameter optimization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`HyperbandSearchCV` is Dask-ML's meta-estimator for finding the best hyperparameters. It can be used as an alternative to `RandomizedSearchCV`: it finds similar hyperparameters in less time by not wasting time on unpromising hyperparameters. Specifically, it is almost certain to find high-performing models with minimal training.\n", "\n", "This section will focus on\n", "\n", "1. Understanding the input parameters to `HyperbandSearchCV`\n", "2. Using `HyperbandSearchCV` to find the best hyperparameters\n", "3. Seeing other use cases of `HyperbandSearchCV`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dask_ml.model_selection import HyperbandSearchCV" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Determining input parameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A rule-of-thumb to determine `HyperbandSearchCV`'s input parameters requires knowing:\n", "\n", "1. the number of examples the longest trained model will see\n", "2. the number of hyperparameters to evaluate" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's write down what these should be for this example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For quick response\n", "n_examples = 4 * len(X_train)\n", "n_params = 8\n", "\n", "# In practice, HyperbandSearchCV is most useful for longer searches\n", "# n_examples = 15 * len(X_train)\n", "# n_params = 15" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, the models that are trained the longest will see `n_examples` examples. This is how much data is required, and it is normally set by the difficulty of the problem. Simple problems may only need 10 passes through the dataset; more complex problems may need 100 passes.\n", "\n", "`n_params` hyperparameter combinations will be sampled, so `n_params` models will be evaluated. Models with low scores will be terminated before they see `n_examples` examples. This helps conserve computation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How can we use these values to determine the inputs for `HyperbandSearchCV`?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "max_iter = n_params  # number of times partial_fit will be called\n", "chunks = n_examples // n_params  # number of examples each call sees\n", "\n", "max_iter, chunks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This means that the longest trained estimator will see about `n_examples` examples (specifically `n_params * (n_examples // n_params)`)." ] }
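, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, we can confirm this arithmetic with the variables defined above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The longest-trained model calls partial_fit `max_iter` times and sees\n", "# `chunks` examples per call. Here len(X_train) == 25_000, so n_examples\n", "# is 100_000 and chunks is 12_500; in general the product recovers\n", "# n_examples only up to integer division.\n", "assert max_iter * chunks == n_params * (n_examples // n_params)\n", "max_iter * chunks, n_examples" ] }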
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Applying input parameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a Dask array with this chunk size:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask.array as da\n", "X_train2 = da.from_array(X_train, chunks=chunks)\n", "y_train2 = da.from_array(y_train, chunks=chunks)\n", "X_train2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each `partial_fit` call will receive one chunk." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That means the number of exmaples in each chunk should be (about) the same, and `n_examples` and `n_params` should be chosen to make that happen. (e.g., with 100 examples, shoot for chunks with `(33, 33, 34)` examples not `(48, 48, 4)` examples)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's use `max_iter` to create our `HyperbandSearchCV` object:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search = HyperbandSearchCV(\n", " model,\n", " params,\n", " max_iter=max_iter,\n", " patience=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How much computation will be performed?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It isn't clear how to determine how much computation is done from `max_iter` and `chunks`. Luckily, `HyperbandSearchCV` has a `metadata` attribute to determine this beforehand:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search.metadata[\"partial_fit_calls\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This shows how many `partial_fit` calls will be performed in the computation. `metadata` also includes information on the number of models created." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far, all that's been done is getting the search ready for computation (and seeing how much computation will be performed). So far, all the computation has been quick and easy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Performing the computation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's do the model selection search and find the best hyperparameters. This is the real core of this notebook. This computation will be take place on all the hardware Dask has available.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "search.fit(X_train2, y_train2, classes=[0, 1, 2, 3])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dashboard will be active while this is running. It will show which workers are running `partial_fit` and `score` calls.\n", "This takes about 10 seconds." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Integration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`HyperbandSearchCV` follows the Scikit-learn API and mirrors Scikit-learn's `RandomizedSearchCV`. This means that it \"just works\". 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Integration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`HyperbandSearchCV` follows the Scikit-learn API and mirrors Scikit-learn's `RandomizedSearchCV`. This means that it \"just works\". All the Scikit-learn attributes and methods are available:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search.best_score_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search.best_estimator_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv_results = pd.DataFrame(search.cv_results_)\n", "cv_results.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search.score(X_test, y_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search.predict(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "search.predict(X_test).compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It also has some other attributes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hist = pd.DataFrame(search.history_)\n", "hist.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This illustrates the history after every `partial_fit` call. There's also an attribute `model_history_` that records the history for each model (it's a reorganization of `history_`)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learn more" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook covered basic usage of `HyperbandSearchCV`. The following documentation and resources might be useful to learn more about `HyperbandSearchCV`, including some of the finer use cases:\n", "\n", "* [A talk](https://www.youtube.com/watch?v=x67K9FiPFBQ) introducing `HyperbandSearchCV` to the SciPy 2019 audience and the [corresponding paper](https://conference.scipy.org/proceedings/scipy2019/pdfs/scott_sievert.pdf)\n", "* [HyperbandSearchCV's documentation](https://ml.dask.org/modules/generated/dask_ml.model_selection.HyperbandSearchCV.html)\n", "\n", "Performance comparisons can be found in the SciPy 2019 talk/paper." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 4 }