{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Uncomment the following line to install [geemap](https://geemap.org) if needed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !pip install geemap scikit-learn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to use locally trained machine learning models with GEE\n", "\n", "This notebook illustrates how to train a random forest (or any other ensemble tree estimator) locally using scikit-learn, how to convert the estimator into a string representation that Earth Engine can interpret, and how to apply the model within Earth Engine. **The notebook and the geemap machine learning module ([ml.py](https://geemap.org/ml/)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import ee\n", "import geemap\n", "import pandas as pd\n", "\n", "from geemap import ml\n", "from sklearn import ensemble" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "geemap.ee_initialize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train a model locally using scikit-learn\n", "\n", "In this demo, we are going to use the training data from [here](https://github.com/giswqs/geemap/blob/master/examples/data/rf_example.csv). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# read the feature table to train our RandomForest model\n", "# data taken from ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels')\n", "\n", "url = \"https://raw.githubusercontent.com/giswqs/geemap/master/examples/data/rf_example.csv\"\n", "df = pd.read_csv(url)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# specify the names of the features (i.e., band names) and the label\n", "# the feature names are used to extract the predictor columns and to select the corresponding image bands\n", "\n", "feature_names = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']\n", "label = \"landcover\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get the features and labels into separate variables\n", "X = df[feature_names]\n", "y = df[label]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create a classifier and fit it to the training data\n", "n_trees = 10\n", "rf = ensemble.RandomForestClassifier(n_trees).fit(X, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convert an sklearn classifier object to a list of strings" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# convert the estimator into a list of strings\n", "# this function also works with the ensemble.ExtraTreesClassifier estimator\n", "trees = ml.rf_to_strings(rf, feature_names)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# print the first tree to see the result\n", "print(trees[0])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(trees[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# the number of trees we converted should 
equal the number of trees we defined for the model\n", "len(trees) == n_trees" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convert sklearn classifier to GEE classifier\n", "\n", "At this point, you can save the list of strings locally to avoid retraining. However, since we want to use the model with Earth Engine, we need to create an ee.Classifier from the strings and, for best results, persist the data as an Earth Engine asset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create an ee.Classifier from the tree strings to use with ee objects\n", "ee_classifier = ml.strings_to_classifier(trees)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ee_classifier.getInfo()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Classify image using GEE classifier" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Make a cloud-free Landsat 8 TOA composite (from raw imagery).\n", "l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1')\n", "\n", "image = ee.Algorithms.Landsat.simpleComposite(\n", "    collection=l8.filterDate('2018-01-01', '2018-12-31'),\n", "    asFloat=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# classify the image using the classifier we created from the local training\n", "# note: we select the feature_names bands from the image so the classifier knows which bands to use\n", "classified = image.select(feature_names).classify(ee_classifier)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# display results\n", "Map = geemap.Map(center=(37.75, -122.25), zoom=11)\n", "\n", "Map.addLayer(image, {\"bands\": ['B7', 'B5', 'B3'], \"min\": 0.05, \"max\": 0.55, \"gamma\": 1.5}, 'image')\n", "Map.addLayer(classified, {\"min\": 0, \"max\": 2, \"palette\": ['red', 'green', 'blue']}, 'classification')\n", "\n", "Map" ] }, { 
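"cell_type": "markdown", "metadata": {}, "source": [ "As an optional sanity check before relying on the Earth Engine result, we can look at the training accuracy of the locally trained model. This is a minimal sketch that uses only scikit-learn and the `rf`, `X`, and `y` variables defined in the cells above; it is an extra step, not part of the original workflow." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# optional sanity check (sketch): training accuracy of the local model\n", "# a very low score here would suggest a problem before any EE conversion\n", "rf.score(X, y)" ] }, { 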
"cell_type": "markdown", "metadata": {}, "source": [ "Yay!! 🎉 Looks like our example works. Don't party too much, though, because there is a catch...\n", "\n", "This workflow has several limitations, particularly in how much data you can pass from the client to the server and how large a model Earth Engine can actually handle. Earth Engine only accepts about 40 MB of data passed to the server, so if you have many large decision tree strings this will not work. Creating a classifier from strings has further limitations (see this Earth Engine developers forum discussion: https://groups.google.com/g/google-earth-engine-developers/c/lFFU1GBPzi8/m/6MewQk1FBwAJ); it is again constrained by string length when Earth Engine builds the computation graph.\n", "\n", "So, you can use this workflow, but know that you will probably run into errors with large models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Save trees to the cloud\n", "\n", "Now that we have the trees in a format that Earth Engine can use, we want to save them for later use. There is a function to export a list of tree strings to a feature collection. 
The feature collection will store each tree string as a feature property." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "user_id = geemap.ee_user_id()\n", "user_id" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# specify the asset id where the trees will be saved\n", "# be sure to change this to your own ee user name\n", "asset_id = user_id + \"/random_forest_strings_test\"\n", "asset_id" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# kick off an export process so the trees will be saved as an ee asset\n", "ml.export_trees_to_fc(trees, asset_id)\n", "\n", "# this will kick off an export task, so wait a few minutes before moving on" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# read the exported tree feature collection\n", "rf_fc = ee.FeatureCollection(asset_id)\n", "\n", "# convert it to a classifier, very similar to the `ml.strings_to_classifier` function used earlier\n", "another_classifier = ml.fc_to_classifier(rf_fc)\n", "\n", "# classify the image again, but with the classifier built from the persisted trees\n", "classified = image.select(feature_names).classify(another_classifier)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# display results\n", "# we should get the exact same results as before\n", "Map = geemap.Map(center=(37.75, -122.25), zoom=11)\n", "\n", "Map.addLayer(image, {\"bands\": ['B7', 'B5', 'B3'], \"min\": 0.05, \"max\": 0.55, \"gamma\": 1.5}, 'image')\n", "Map.addLayer(classified, {\"min\": 0, \"max\": 2, \"palette\": ['red', 'green', 'blue']}, 'classification')\n", "\n", "Map" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Save trees locally" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "out_csv = os.path.expanduser(\"~/Downloads/trees.csv\")" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": {}, "outputs": [], "source": [ "ml.trees_to_csv(trees, out_csv)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "another_classifier = ml.csv_to_classifier(out_csv)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "classified = image.select(feature_names).classify(another_classifier)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "# display results\n", "# we should get the exact same results as before\n", "Map = geemap.Map(center=(37.75, -122.25), zoom=11)\n", "\n", "Map.addLayer(image, {\"bands\": ['B7', 'B5', 'B3'], \"min\": 0.05, \"max\": 0.55, \"gamma\": 1.5}, 'image')\n", "Map.addLayer(classified, {\"min\": 0, \"max\": 2, \"palette\": ['red', 'green', 'blue']}, 'classification')\n", "\n", "Map" ] } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "384px" }, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }