{ "cells": [ { "cell_type": "markdown", "source": [ "# MLJ for Data Scientists in Two Hours" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "An application of the [MLJ\n", "toolbox](https://alan-turing-institute.github.io/MLJ.jl/dev/) to the\n", "Telco Customer Churn dataset, aimed at practicing data scientists\n", "new to MLJ (Machine Learning in Julia). This tutorial does not\n", "cover exploratory data analysis." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "MLJ is a *multi-paradigm* machine learning toolbox (i.e., not just\n", "deep-learning)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For other MLJ learning resources see the [Learning\n", "MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_mlj/)\n", "section of the\n", "[manual](https://alan-turing-institute.github.io/MLJ.jl/dev/)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "**Topics covered**: Grabbing and preparing a dataset, basic\n", "fit/predict workflow, constructing a pipeline to include data\n", "pre-processing, estimating performance metrics, ROC curves, confusion\n", "matrices, feature importance, basic feature selection, controlling iterative\n", "models, hyper-parameter optimization (tuning)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "**Prerequisites for this tutorial.** Previous experience building,\n", "evaluating, and optimizing machine learning models using\n", "scikit-learn, caret, MLR, weka, or similar tool. No previous\n", "experience with MLJ. Only fairly basic familiarity with Julia is\n", "required. Uses\n", "[DataFrames.jl](https://dataframes.juliadata.org/stable/) but in a\n", "minimal way ([this\n", "cheatsheet](https://ahsmart.com/pub/data-wrangling-with-data-frames-jl-cheat-sheet/index.html)\n", "may help)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "**Time.** Between two and three hours, first time through." 
], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Summary of methods and types introduced" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "|code | purpose|\n", "|:-------|:-------------------------------------------------------|\n", "| `OpenML.load(id)` | grab a dataset from [OpenML.org](https://www.openml.org)|\n", "| `scitype(X)` | inspect the scientific type (scitype) of object `X`|\n", "| `schema(X)` | inspect the column scitypes (scientific types) of a table `X`|\n", "| `coerce(X, ...)` | fix column encodings to get appropriate scitypes|\n", "| `partition(data, frac1, frac2, ...; rng=...)` | vertically split `data`, which can be a table, vector or matrix|\n", "| `unpack(table, f1, f2, ...)` | horizontally split `table` based on conditions `f1`, `f2`, ..., applied to column names|\n", "| `@load ModelType pkg=...` | load code defining a model type|\n", "| `input_scitype(model)` | inspect the scitype that a model requires for features (inputs)|\n", "| `target_scitype(model)`| inspect the scitype that a model requires for the target (labels)|\n", "| `ContinuousEncoder` | built-in model type for re-encoding all features as `Continuous`|\n", "| `model1 ∣> model2 ∣> ...` | combine multiple models into a pipeline|\n", "| `measures(\"under curve\")` | list all measures (metrics) with string \"under curve\" in documentation|\n", "| `accuracy(yhat, y)` | compute accuracy of predictions `yhat` against ground truth observations `y`|\n", "| `auc(yhat, y)`, `brier_loss(yhat, y)` | evaluate two probabilistic measures (`yhat` a vector of probability distributions)|\n", "| `machine(model, X, y)` | bind `model` to training data `X` (features) and `y` (target)|\n", "| `fit!(mach, rows=...)` | train machine using specified rows (observation indices)|\n", "| `predict(mach, rows=...)`, | make in-sample model predictions given specified rows|\n", "| `predict(mach, Xnew)` | make predictions given new features `Xnew`|\n", "| `fitted_params(mach)` | inspect learned parameters|\n", "| `report(mach)` | inspect other outcomes of training|\n", "| `confmat(yhat, y)` | confusion matrix for predictions `yhat` and ground truth `y`|\n", "| `roc(yhat, y)` | compute points on the receiver-operator Characteristic|\n", "| `StratifiedCV(nfolds=6)` | 6-fold stratified cross-validation resampling strategy|\n", "| `Holdout(fraction_train=0.7)` | holdout resampling strategy|\n", "| `evaluate(model, X, y; resampling=..., options...)` | estimate performance metrics `model` using the data `X`, `y`|\n", "| `FeatureSelector()` | transformer for selecting features|\n", "| `Step(3)` | iteration control for stepping 3 iterations|\n", "| `NumberSinceBest(6)`, `TimeLimit(60/5), InvalidValue()` | iteration control stopping criteria|\n", "| `IteratedModel(model=..., controls=..., options...)` | wrap an iterative `model` in control strategies|\n", "| `range(model, :some_hyperparam, lower=..., upper=...)` | define a numeric range|\n", "| `RandomSearch()` | random search tuning strategy|\n", "| `TunedModel(model=..., tuning=..., options...)` | wrap the supervised `model` in specified `tuning` strategy|" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Instantiate a Julia environment" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "The following code replicates precisely the set of Julia packages\n", "used to develop this tutorial. If this is your first time running\n", "the notebook, package instantiation and pre-compilation may take a\n", "minute or so to complete. 
**This step will fail** if the [correct\n", "Manifest.toml and Project.toml\n", "files](https://github.com/alan-turing-institute/MLJ.jl/tree/dev/examples/telco)\n", "are not in the same directory as this notebook." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "using Pkg\n", "Pkg.activate(@__DIR__) # get env from TOML files in same directory as this notebook\n", "Pkg.instantiate()" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Warm up: Building a model for the iris dataset" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Before turning to the Telco Customer Churn dataset, we very quickly\n", "build a predictive model for Fisher's well-known iris data set, as a way of\n", "introducing the main actors in any MLJ workflow. Details that you\n", "don't fully grasp should become clearer in the Telco study." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "This section is a condensed adaptation of the [Getting Started\n", "example](https://alan-turing-institute.github.io/MLJ.jl/dev/getting_started/#Fit-and-predict)\n", "in the MLJ documentation." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "First, using the built-in iris dataset, we load and inspect the features\n", "`X_iris` (a table) and target variable `y_iris` (a vector):" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "using MLJ" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "const X_iris, y_iris = @load_iris;\n", "schema(X_iris)" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "y_iris[1:4]" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "levels(y_iris)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We load a decision tree model from the package DecisionTree.jl:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "DecisionTree = @load DecisionTreeClassifier pkg=DecisionTree # model type\n", "model = DecisionTree(min_samples_split=5) # model instance" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "In MLJ, a *model* is just a container for the hyper-parameters of\n", "some learning algorithm. It does not store learned parameters." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Next, we bind the model together with the available data in what's\n", "called a *machine*:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach = machine(model, X_iris, y_iris)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "A machine is essentially just a model (ie, hyper-parameters) plus data, but\n", "it additionally stores *learned parameters* (the tree) once it is\n", "trained on some view of the data:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "train_rows = vcat(1:60, 91:150); # some row indices (observations are rows not columns)\n", "fit!(mach, rows=train_rows)\n", "fitted_params(mach)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "A machine stores some other information enabling [warm\n", "restart](https://alan-turing-institute.github.io/MLJ.jl/dev/machines/#Warm-restarts)\n", "for some models, but we won't go into that here. 
You are allowed to\n", "access and mutate the `model` parameter:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach.model.min_samples_split = 10\n", "fit!(mach, rows=train_rows) # re-train with new hyper-parameter" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Now we can make predictions on some other view of the data, as in" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "predict(mach, rows=71:73)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "or on completely new data, as in" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "Xnew = (sepal_length = [5.1, 6.3],\n", "        sepal_width = [3.0, 2.5],\n", "        petal_length = [1.4, 4.9],\n", "        petal_width = [0.3, 1.5])\n", "yhat = predict(mach, Xnew)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "These are probabilistic predictions, which can be manipulated using a\n", "widely adopted interface defined in the Distributions.jl\n", "package. For example, we can get raw probabilities like this:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "pdf.(yhat, \"virginica\")" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We now turn to the Telco dataset." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Getting the Telco data" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "import DataFrames" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "data = OpenML.load(42178) # data set from OpenML.org\n", "df0 = DataFrames.DataFrame(data)\n", "first(df0, 4)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "The object of this tutorial is to build and evaluate supervised\n", "learning models to predict the `:Churn` variable, a binary variable\n", "indicating whether or not a customer left the company (churned), based\n", "on other relevant variables." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "In the table, observations correspond to rows, and features to\n", "columns, which is the convention for representing all\n", "two-dimensional data in MLJ." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Type coercion" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `scitype`, `schema`, `coerce`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "A [\"scientific\n", "type\"](https://juliaai.github.io/ScientificTypes.jl/dev/) or\n", "*scitype* indicates how MLJ will *interpret* data. For example,\n", "`typeof(3.14) == Float64`, while `scitype(3.14) == Continuous` and\n", "also `scitype(3.14f0) == Continuous`. In MLJ, model data\n", "requirements are articulated using scitypes." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Here are common \"scalar\" scitypes:" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "![](assets/scitypes.png)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "There are also container scitypes. 
For example, the scitype of any\n", "`N`-dimensional array is `AbstractArray{S, N}`, where `S` is the scitype of the\n", "elements:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "scitype([\"cat\", \"mouse\", \"dog\"])" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "The `schema` operator summarizes the column scitypes of a table:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "schema(df0) |> DataFrames.DataFrame # converted to DataFrame for better display" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "All of the fields being interpreted as `Textual` are really\n", "something else, either `Multiclass` or, in the case of\n", "`:TotalCharges`, `Continuous`. In fact, `:TotalCharges` is\n", "mostly floats wrapped as strings. However, it needs special\n", "treatment because some elements consist of a single space, \" \",\n", "which we'll treat as \"0.0\"." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "fix_blanks(v) = map(v) do x\n", "    if x == \" \"\n", "        return \"0.0\"\n", "    else\n", "        return x\n", "    end\n", "end\n", "\n", "df0.TotalCharges = fix_blanks(df0.TotalCharges);" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Coercing the `:TotalCharges` type to ensure a `Continuous` scitype:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "coerce!(df0, :TotalCharges => Continuous);" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Coercing all remaining `Textual` data to `Multiclass`:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "coerce!(df0, Textual => Multiclass);" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Finally, we'll coerce our target variable `:Churn` to be\n", "`OrderedFactor`, rather than `Multiclass`, to enable a reliable\n", "interpretation of metrics like \"true positive rate\". By convention,\n", "the first class is the negative one:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "coerce!(df0, :Churn => OrderedFactor)\n", "levels(df0.Churn) # to check order" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Re-inspecting the scitypes:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "schema(df0) |> DataFrames.DataFrame" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Preparing a holdout set for final testing" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `partition`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "To reduce training times for the purposes of this tutorial, we're\n", "going to dump 90% of observations (after shuffling) and split off\n", "30% of the remainder for use as a lock-and-throw-away-the-key\n", "holdout set:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "df, df_test, df_dumped = partition(df0, 0.07, 0.03, # in ratios 7:3:90\n", "                                   stratify=df0.Churn,\n", "                                   rng=123);" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "The reader interested in including all data can instead do\n", "`df, df_test = partition(df0, 0.7, stratify=df0.Churn, rng=123)`."
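], "metadata": {} }, { "cell_type": "markdown", "source": [ "As a quick, optional sanity check, we can verify that stratification has\n", "roughly preserved the class balance in each split. (The helper\n", "`churn_fraction` below is our own, not part of MLJ.)" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "churn_fraction(d) = count(==(\"Yes\"), d.Churn) / DataFrames.nrow(d) # fraction of churners\n", "@show churn_fraction(df0) churn_fraction(df) churn_fraction(df_test);" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "The three fractions should be approximately equal, which is precisely\n", "what passing `stratify=df0.Churn` above arranges."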
], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Splitting data into target and features" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `unpack`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "In the following call, the column with name `:Churn` is copied over\n", "to a vector `y`, and every remaining column, except `:customerID`\n", "(which contains no useful information) goes into a table `X`. Here\n", "`:Churn` is the target variable for which we seek predictions, given\n", "new versions of the features `X`." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "const y, X = unpack(df, ==(:Churn), !=(:customerID));\n", "schema(X).names" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "intersect([:Churn, :customerID], schema(X).names)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We'll do the same for the holdout data:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "const ytest, Xtest = unpack(df_test, ==(:Churn), !=(:customerID));" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Loading a model and checking type requirements" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `@load`, `input_scitype`, `target_scitype`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For tools helping us to identify suitable models, see the [Model\n", "Search](https://alan-turing-institute.github.io/MLJ.jl/dev/model_search/#model_search)\n", "section of the manual. We will build a gradient tree-boosting model,\n", "a popular first choice for structured data like we have here. Model\n", "code is contained in a third-party package called\n", "[EvoTrees.jl](https://github.com/Evovest/EvoTrees.jl) which is\n", "loaded as follows:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "Booster = @load EvoTreeClassifier pkg=EvoTrees" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Recall that a *model* is just a container for some algorithm's\n", "hyper-parameters. Let's create a `Booster` with default values for\n", "the hyper-parameters:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "booster = Booster()" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "This model is appropriate for the kind of target variable we have because of\n", "the following passing test:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "scitype(y) <: target_scitype(booster)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "However, our features `X` cannot be directly used with `booster`:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "scitype(X) <: input_scitype(booster)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "As it turns out, this is because `booster`, like the majority of MLJ\n", "supervised models, expects the features to be `Continuous`. (With\n", "some experience, this can be gleaned from `input_scitype(booster)`.)\n", "So we need categorical feature encoding, discussed next." 
], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Building a model pipeline to incorporate feature encoding" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `ContinuousEncoder`, pipeline operator `|>`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "The built-in `ContinuousEncoder` model transforms an arbitrary table\n", "to a table whose features are all `Continuous` (dropping any fields\n", "it does not know how to encode). In particular, all `Multiclass`\n", "features are one-hot encoded." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "A *pipeline* is a stand-alone model that internally combines one or\n", "more models in a linear (non-branching) pipeline. Here's a pipeline\n", "that adds the `ContinuousEncoder` as a pre-processor to the\n", "gradient tree-boosting model above:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "pipe = ContinuousEncoder() |> booster" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Note that the component models appear as hyper-parameters of\n", "`pipe`. Pipelines are an implementation of a more general [model\n", "composition](https://alan-turing-institute.github.io/MLJ.jl/dev/composing_models/#Composing-Models)\n", "interface provided by MLJ that advanced users may want to learn about." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "From the above display, we see that component model hyper-parameters\n", "are now *nested*, but they are still accessible (important in hyper-parameter\n", "optimization):" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "pipe.evo_tree_classifier.max_depth" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Evaluating the pipeline model's performance" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `measures` (function), **measures:** `brier_loss`, `auc`, `accuracy`;\n", "> `machine`, `fit!`, `predict`, `fitted_params`, `report`, `roc`, **resampling strategy** `StratifiedCV`, `evaluate`, `FeatureSelector`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Without touching our test set `Xtest`, `ytest`, we will estimate the\n", "performance of our pipeline model, with default hyper-parameters, in\n", "two different ways:" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "**Evaluating by hand.** First, we'll do this \"by hand\" using the `fit!` and `predict`\n", "workflow illustrated for the iris data set above, using a\n", "holdout resampling strategy. At the same time we'll see how to\n", "generate a **confusion matrix**, **ROC curve**, and inspect\n", "**feature importances**." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "**Automated performance evaluation.** Next we'll apply the more\n", "typical and convenient `evaluate` workflow, but using `StratifiedCV`\n", "(stratified cross-validation) which is more informative." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "In any case, we need to choose some measures (metrics) to quantify\n", "the performance of our model. For a complete list of measures, one\n", "does `measures()`. Or we also can do:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "measures(\"Brier\")" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We will be primarily using `brier_loss`, but also `auc` (area under\n", "the ROC curve) and `accuracy`." 
], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Evaluating by hand (with a holdout set)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Our pipeline model can be trained just like the decision tree model\n", "we built for the iris data set. Binding all non-test data to the\n", "pipeline model:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach_pipe = machine(pipe, X, y)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We already encountered the `partition` method above. Here we apply\n", "it to row indices, instead of data containers, as `fit!` and\n", "`predict` only need a *view* of the data to work." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "train, validation = partition(1:length(y), 0.7)\n", "fit!(mach_pipe, rows=train)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We note in passing that we can access two kinds of information from a trained machine:" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "- The **learned parameters** (eg, coefficients of a linear model): We use `fitted_params(mach_pipe)`\n", "- Other **by-products of training** (eg, feature importances): We use `report(mach_pipe)`" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "fp = fitted_params(mach_pipe);\n", "keys(fp)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "For example, we can check that the encoder did not actually drop any features:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "Set(fp.continuous_encoder.features_to_keep) == Set(schema(X).names)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "And, from the report, extract feature importances:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "rpt = report(mach_pipe)\n", "keys(rpt.evo_tree_classifier)" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "fi = rpt.evo_tree_classifier.feature_importances\n", "feature_importance_table =\n", " (feature=Symbol.(first.(fi)), importance=last.(fi)) |> DataFrames.DataFrame" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "For models not reporting feature importances, we recommend the\n", "[Shapley.jl](https://expandingman.gitlab.io/Shapley.jl/) package." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Returning to predictions and evaluations of our measures:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "ŷ = predict(mach_pipe, rows=validation);\n", "@info(\"Measurements\",\n", " brier_loss(ŷ, y[validation]) |> mean,\n", " auc(ŷ, y[validation]),\n", " accuracy(mode.(ŷ), y[validation])\n", " )" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Note that we need `mode` in the last case because `accuracy` expects\n", "point predictions, not probabilistic ones. 
(One can alternatively\n", "use `predict_mode` to generate the predictions.)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "While we're here, let's also generate a **confusion matrix** and\n", "[receiver operating\n", "characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)\n", "(ROC) curve:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "confmat(mode.(ŷ), y[validation])" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Note: Importing the plotting package and calling the plotting\n", "functions for the first time can take a minute or so." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "using Plots" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "roc_curve = roc(ŷ, y[validation])\n", "plt = scatter(roc_curve, legend=false)\n", "plot!(plt, xlab=\"false positive rate\", ylab=\"true positive rate\")\n", "plot!([0, 1], [0, 1], linewidth=2, linestyle=:dash, color=:black)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "### Automated performance evaluation (more typical workflow)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "We can also get performance estimates with a single call to the\n", "`evaluate` function, which also allows for more complicated\n", "resampling - in this case stratified cross-validation. To make this\n", "more comprehensive, we set `repeats=3` below to make our\n", "cross-validation \"Monte Carlo\" (3 random size-6 partitions of the\n", "observation space, for a total of 18 folds) and set\n", "`acceleration=CPUThreads()` to parallelize the computation." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "We choose a `StratifiedCV` resampling strategy; the complete list of options is\n", "[here](https://alan-turing-institute.github.io/MLJ.jl/dev/evaluating_model_performance/#Built-in-resampling-strategies)." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "e_pipe = evaluate(pipe, X, y,\n", "                  resampling=StratifiedCV(nfolds=6, rng=123),\n", "                  measures=[brier_loss, auc, accuracy],\n", "                  repeats=3,\n", "                  acceleration=CPUThreads())" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "(There is also a version of `evaluate` for machines. Query the\n", "`evaluate` and `evaluate!` doc-strings to learn more about these\n", "functions and what the `PerformanceEvaluation` object `e_pipe` records.)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "While [less than ideal](https://arxiv.org/abs/2104.00673), let's\n", "adopt the common practice of using the standard error of a\n", "cross-validation score as an estimate of the uncertainty of a\n", "performance measure's expected value. 
Here's a utility function to\n", "calculate 95% confidence intervals for our performance estimates based\n", "on this practice, and its application to the current evaluation:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "using Measurements" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "function confidence_intervals(e)\n", "    factor = 2.0 # to get level of 95%\n", "    measure = e.measure\n", "    nfolds = length(e.per_fold[1])\n", "    measurement = [e.measurement[j] ± factor*std(e.per_fold[j])/sqrt(nfolds - 1)\n", "                   for j in eachindex(measure)]\n", "    table = (measure=measure, measurement=measurement)\n", "    return DataFrames.DataFrame(table)\n", "end\n", "\n", "const confidence_intervals_basic_model = confidence_intervals(e_pipe)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Filtering out unimportant features" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `FeatureSelector`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Before continuing, we'll modify our pipeline to drop those features\n", "with low feature importance, to speed up later optimization:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "unimportant_features = filter(:importance => <(0.005), feature_importance_table).feature\n", "\n", "pipe2 = ContinuousEncoder() |>\n", "    FeatureSelector(features=unimportant_features, ignore=true) |> booster" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Wrapping our iterative model in control strategies" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: **control strategies:** `Step`, `NumberSinceBest`, `TimeLimit`, `InvalidValue`, **model wrapper** `IteratedModel`, **resampling strategy:** `Holdout`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "We want to optimize the hyper-parameters of our model. Since our\n", "model is iterative, these parameters include the (nested) iteration\n", "parameter `pipe.evo_tree_classifier.nrounds`. Sometimes this\n", "parameter is optimized first, fixed, and then maybe optimized again\n", "after the other parameters. Here we take a more principled approach,\n", "**wrapping our model in a control strategy** that makes it\n", "\"self-iterating\". The strategy applies a stopping criterion to\n", "*out-of-sample* estimates of the model performance, computed on an\n", "internally constructed holdout set. In this way, we avoid\n", "some data hygiene issues and, when we subsequently optimize other\n", "parameters, we will always be using an optimal number of\n", "iterations." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Note that this approach can be applied to any iterative MLJ model,\n", "eg, the neural network models provided by\n", "[MLJFlux.jl](https://github.com/FluxML/MLJFlux.jl)."
], "metadata": {} }, { "cell_type": "markdown", "source": [ "First, we select appropriate controls from [this\n", "list](https://alan-turing-institute.github.io/MLJ.jl/dev/controlling_iterative_models/#Controls-provided):" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "controls = [\n", " Step(1), # to increment iteration parameter (`pipe.nrounds`)\n", " NumberSinceBest(4), # main stopping criterion\n", " TimeLimit(2/3600), # never train more than 2 sec\n", " InvalidValue() # stop if NaN or ±Inf encountered\n", "]" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Now we wrap our pipeline model using the `IteratedModel` wrapper,\n", "being sure to specify the `measure` on which internal estimates of\n", "the out-of-sample performance will be based:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "iterated_pipe = IteratedModel(model=pipe2,\n", " controls=controls,\n", " measure=brier_loss,\n", " resampling=Holdout(fraction_train=0.7))" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We've set `resampling=Holdout(fraction_train=0.7)` to arrange that\n", "data attached to our model should be internally split into a train\n", "set (70%) and a holdout set (30%) for determining the out-of-sample\n", "estimate of the Brier loss." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For demonstration purposes, let's bind `iterated_model` to all data\n", "not in our don't-touch holdout set, and train on all of that data:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach_iterated_pipe = machine(iterated_pipe, X, y)\n", "fit!(mach_iterated_pipe);" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "To recap, internally this training is split into two separate steps:" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "- A controlled iteration step, training on the holdout set, with the total number of iterations determined by the specified stopping criteria (based on the out-of-sample performance estimates)\n", "- A final step that trains the atomic model on *all* available\n", " data using the number of iterations determined in the first step. Calling `predict` on `mach_iterated_pipe` means using the learned parameters of the second step." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Hyper-parameter optimization (model tuning)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `range`, **model wrapper** `TunedModel`, `RandomSearch`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "We now turn to hyper-parameter optimization. A tool not discussed\n", "here is the `learning_curve` function, which can be useful when\n", "wanting to visualize the effect of changes to a *single*\n", "hyper-parameter (which could be an iteration parameter). See, for\n", "example, [this section of the\n", "manual](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_curves/)\n", "or [this\n", "tutorial](https://github.com/ablaom/MLJTutorial.jl/blob/dev/notebooks/04_tuning/notebook.ipynb)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Fine tuning the hyper-parameters of a gradient booster can be\n", "somewhat involved. Here we settle for simultaneously optimizing two\n", "key parameters: `max_depth` and `η` (learning_rate)." 
], "metadata": {} }, { "cell_type": "markdown", "source": [ "Like iteration control, **model optimization in MLJ is implemented as\n", "a model wrapper**, called `TunedModel`. After wrapping a model in a\n", "tuning strategy and binding the wrapped model to data in a machine\n", "called `mach`, calling `fit!(mach)` instigates a search for optimal\n", "model hyperparameters, within a specified range, and then uses all\n", "supplied data to train the best model. To predict using that model,\n", "one then calls `predict(mach, Xnew)`. In this way the wrapped model\n", "may be viewed as a \"self-tuning\" version of the unwrapped\n", "model. That is, wrapping the model simply transforms certain\n", "hyper-parameters into learned parameters (just as `IteratedModel`\n", "does for an iteration parameter)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "To start with, we define ranges for the parameters of\n", "interest. Since these parameters are nested, let's force a\n", "display of our model to a larger depth:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "show(iterated_pipe, 2)" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "p1 = :(model.evo_tree_classifier.η)\n", "p2 = :(model.evo_tree_classifier.max_depth)\n", "\n", "r1 = range(iterated_pipe, p1, lower=-2, upper=-0.5, scale=x->10^x)\n", "r2 = range(iterated_pipe, p2, lower=2, upper=6)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Nominal ranges are defined by specifying `values` instead of `lower`\n", "and `upper`." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Next, we choose an optimization strategy from [this\n", "list](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models/#Tuning-Models):" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "tuning = RandomSearch(rng=123)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Then we wrap the model, specifying a `resampling` strategy and a\n", "`measure`, as we did for `IteratedModel`. In fact, we can include a\n", "battery of `measures`; by default, optimization is with respect to\n", "performance estimates based on the first measure, but estimates for\n", "all measures can be accessed from the model's `report`." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "The keyword `n` specifies the total number of models (sets of\n", "hyper-parameters) to evaluate." ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "tuned_iterated_pipe = TunedModel(model=iterated_pipe,\n", " range=[r1, r2],\n", " tuning=tuning,\n", " measures=[brier_loss, auc, accuracy],\n", " resampling=StratifiedCV(nfolds=6, rng=123),\n", " acceleration=CPUThreads(),\n", " n=40)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "To save time, we skip the `repeats` here." 
], "metadata": {} }, { "cell_type": "markdown", "source": [ "Binding our final model to data and training:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach_tuned_iterated_pipe = machine(tuned_iterated_pipe, X, y)\n", "fit!(mach_tuned_iterated_pipe)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "As explained above, the training we have just performed was split\n", "internally into two separate steps:" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "- A step to determine the parameter values that optimize the aggregated cross-validation scores\n", "- A final step that trains the optimal model on *all* available data. Future predictions `predict(mach_tuned_iterated_pipe, ...)` are based on this final training step." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "From `report(mach_tuned_iterated_pipe)` we can extract details about\n", "the optimization procedure. For example:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "rpt2 = report(mach_tuned_iterated_pipe);\n", "best_booster = rpt2.best_model.model.evo_tree_classifier" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "@info \"Optimal hyper-parameters:\" best_booster.max_depth best_booster.η;" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Using the `confidence_intervals` function we defined earlier:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "e_best = rpt2.best_history_entry\n", "confidence_intervals(e_best)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Digging a little deeper, we can learn what stopping criterion was\n", "applied in the case of the optimal model, and how many iterations\n", "were required:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "rpt2.best_report.controls |> collect" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "Finally, we can visualize the optimization results:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "plot(mach_tuned_iterated_pipe, size=(600,450))" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "## Saving our model" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Introduces: `MLJ.save`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Here's how to serialize our final, trained self-iterating,\n", "self-tuning pipeline machine:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "MLJ.save(\"tuned_iterated_pipe.jlso\", mach_tuned_iterated_pipe)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We'll deserialize this in \"Testing the final model\" below." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Final performance estimate" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Finally, to get an even more accurate estimate of performance, we\n", "can evaluate our model using stratified cross-validation and all the\n", "data attached to our machine. 
Because this evaluation implies\n", "[nested\n", "resampling](https://mlr.mlr-org.com/articles/tutorial/nested_resampling.html),\n", "this computation takes quite a bit longer than the previous one\n", "(which is being repeated six times, using 5/6th of the data each\n", "time):" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "e_tuned_iterated_pipe = evaluate(tuned_iterated_pipe, X, y,\n", "                                 resampling=StratifiedCV(nfolds=6, rng=123),\n", "                                 measures=[brier_loss, auc, accuracy])" ], "metadata": {}, "execution_count": null }, { "outputs": [], "cell_type": "code", "source": [ "confidence_intervals(e_tuned_iterated_pipe)" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "For comparison, here are the confidence intervals for the basic\n", "pipeline model (no feature selection and default hyper-parameters):" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "confidence_intervals_basic_model" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "As each pair of intervals overlaps, it's doubtful the small changes\n", "here can be assigned statistical significance. Default `booster`\n", "hyper-parameters do a pretty good job." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Testing the final model" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "We now determine the performance of our model on our\n", "lock-and-throw-away-the-key holdout set. To demonstrate\n", "deserialization, we'll pretend we're in a new Julia session (but\n", "have called `import`/`using` on the same packages). Then the\n", "following should suffice to recover our model trained under\n", "\"Hyper-parameter optimization\" above:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach_restored = machine(\"tuned_iterated_pipe.jlso\")" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "We compute predictions on the holdout set:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "ŷ_tuned = predict(mach_restored, Xtest);\n", "ŷ_tuned[1]" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "And we can compute the final performance measures:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "@info(\"Tuned model measurements on test:\",\n", "      brier_loss(ŷ_tuned, ytest) |> mean,\n", "      auc(ŷ_tuned, ytest),\n", "      accuracy(mode.(ŷ_tuned), ytest)\n", "      )" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "For comparison, here's the performance of the basic pipeline model:" ], "metadata": {} }, { "outputs": [], "cell_type": "code", "source": [ "mach_basic = machine(pipe, X, y)\n", "fit!(mach_basic, verbosity=0)\n", "\n", "ŷ_basic = predict(mach_basic, Xtest);\n", "\n", "@info(\"Basic model measurements on test set:\",\n", "      brier_loss(ŷ_basic, ytest) |> mean,\n", "      auc(ŷ_basic, ytest),\n", "      accuracy(mode.(ŷ_basic), ytest)\n", "      )" ], "metadata": {}, "execution_count": null }, { "cell_type": "markdown", "source": [ "---\n", "\n", "*This notebook was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*" ], "metadata": {} } ], "nbformat_minor": 3, "metadata": { "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.6.5" }, "kernelspec": { "name": "julia-1.6", "display_name": "Julia 1.6.5", "language": "julia" } }, "nbformat": 4 }