{ "cells": [ { "cell_type": "markdown", "id": "36d44eb9", "metadata": {}, "source": [ "## Model selection \n", "\n", "Here, we demonstrate the modeling framework of experimenting the performance of many ML models to decide the best for the weather stations considered here. We follow the previous exercise on predictor selection method and the other related steps. The only thing we are adding here is to define many learning models, trained them individually to evaluate their performance. " ] }, { "cell_type": "code", "execution_count": 2, "id": "d74ebba9", "metadata": {}, "outputs": [], "source": [ "# import all the models required\n", "import os \n", "import sys \n", "import pandas as pd \n", "import numpy as np \n", "from collections import OrderedDict\n", "import socket\n", "\n", "# modules related to pyESD\n", "\n", "from pyESD.Weatherstation import read_station_csv\n", "from pyESD.standardizer import MonthlyStandardizer\n", "from pyESD.ESD_utils import store_pickle, store_csv\n", "from pyESD.splitter import KFold\n", "from pyESD.ESD_utils import Dataset\n", "from pyESD.Weatherstation import read_weatherstationnames" ] }, { "cell_type": "code", "execution_count": 3, "id": "5e4e3c42", "metadata": {}, "outputs": [], "source": [ "# define the predictors without teleconnection indices\n", "predictors = [\"t2m\", \"tp\",\"msl\", \"v10\", \"u10\", \"NAO\", \"EA\", \"SCAN\", \"EAWR\",\n", " \"u250\", \"u850\", \"u500\",\"u700\", \"u1000\",\"v250\", \"v850\", \"v500\",\"v700\", \"v1000\",\n", " \"r250\", \"r850\", \"r500\",\"r700\", \"r1000\", \"z250\", \"z500\", \"z700\", \"z850\", \"z1000\", \n", " \"t250\", \"t850\", \"t500\",\"t700\", \"t1000\",\"dtd250\", \"dtd850\", \"dtd500\",\"dtd700\", \"dtd1000\"\n", " ]\n", "\n", "# date-range for model training and validation\n", "from1958to2010 = pd.date_range(start=\"1958-01-01\", end=\"2010-12-31\", freq=\"MS\")\n", "\n", "# date-range for testing model\n", "from2011to2020 = pd.date_range(start=\"2011-01-01\", end=\"2020-12-31\", freq=\"MS\")\n", "\n", "#full-time range\n", "from1958to2020 = pd.date_range(start=\"1958-01-01\", end=\"2020-12-31\", freq=\"MS\")" ] }, { "cell_type": "markdown", "id": "22e0d002", "metadata": {}, "source": [ "### control function\n", "\n", "Define the control function that performs the predictor selection and model training.\n", "1. read the station data as object that would apply all the ESD routines\n", "2. set predictors with the list of predictors defined and the radius to construct the regional means\n", "3. standardize the data with any of the standardizers. Here we use the MonthlyStandardizer method\n", "4. defined the scoring metrics to be used for the validation\n", "5. set the model to be used for the ESD training (here we will use the LassoLarsCV model)\n", "6. fit the model, here we have to define the predictor selector method (here: Recursive ) to be used for selecting the predictors\n", "7. get the selected predictors \n", "8. use the cross_validate_predict to get the cross-validation metrics of the model training \n", "9. store the selected predictors \n", "10. 
{ "cell_type": "code", "execution_count": 13, "id": "33135164", "metadata": {}, "outputs": [], "source": [ "def run_model_selection(variable, estimator, cachedir, stationnames,\n", "                        station_datadir, radius, base_estimators=None,\n", "                        final_estimator=None):\n", "    \"\"\"\n", "    Run a pyESD experiment that trains and evaluates one learning model for all stations.\n", "\n", "    Args:\n", "        variable (str): The target variable to predict, here Precipitation.\n", "        estimator (str): Name of the learning model to train (e.g. \"LassoLarsCV\", \"RandomForest\", \"Stacking\").\n", "        cachedir (str): Directory where the scores and predictions are stored (set in the read_data file).\n", "        stationnames (list): List of station names (loaded from the read_data file).\n", "        station_datadir (str): Directory containing the station data files (also set in the read_data file).\n", "        radius (float): Radius used to construct the regional means of the predictors (defined in the read_data file).\n", "        base_estimators (list): Base models used when estimator is the stacking regressor.\n", "        final_estimator (str): Meta-learner used when estimator is the stacking regressor.\n", "\n", "    The predictor names (predictors), the predictor directory (predictordir) and the ERA5\n", "    predictor dataset (ERA5Data) are taken from the variables defined above and imported\n", "    from read_data.\n", "    \"\"\"\n", "    num_of_stations = len(stationnames)\n", "\n", "    # Loop through all stations\n", "    for i in range(num_of_stations):\n", "        stationname = stationnames[i]\n", "\n", "        # set the exact path for the station data\n", "        station_dir = os.path.join(station_datadir, stationname + \".csv\")\n", "\n", "        # 1. create the station object with read_station_csv; all ESD methods are applied on this object\n", "        SO_instance = read_station_csv(filename=station_dir, varname=variable)\n", "\n", "        # 2. set the predictors (generated from the defined predictor names)\n", "        SO_instance.set_predictors(variable, predictors, predictordir, radius)\n", "\n", "        # 3. set the standardizer\n", "        SO_instance.set_standardizer(variable, standardizer=MonthlyStandardizer(detrending=False, scaling=False))\n", "\n", "        # 4. define the scoring metrics\n", "        scoring = [\"neg_root_mean_squared_error\", \"r2\", \"neg_mean_absolute_error\"]\n", "\n", "        # 5. set the model with cross-validation\n", "        if estimator == \"Stacking\":\n", "            SO_instance.set_model(variable, method=estimator, ensemble_learning=True,\n", "                                  estimators=base_estimators, final_estimator_name=final_estimator,\n", "                                  daterange=from1958to2010, predictor_dataset=ERA5Data,\n", "                                  cv=KFold(n_splits=10), scoring=scoring)\n", "        else:\n", "            SO_instance.set_model(variable, method=estimator, cv=KFold(n_splits=10),\n", "                                  scoring=scoring)\n", "\n", "        # 6. fit the model with the predictor selector option\n", "        SO_instance.fit(variable, from1958to2010, ERA5Data, fit_predictors=True, predictor_selector=True,\n", "                        selector_method=\"Recursive\", selector_regressor=\"ARD\",\n", "                        cal_relative_importance=False)\n", "\n", "        # 7. cross-validate and predict on the training period\n", "        score_fit, ypred_fit = SO_instance.cross_validate_and_predict(variable, from1958to2010, ERA5Data)\n", "\n", "        # 8. evaluate the model on the test period\n", "        score_test = SO_instance.evaluate(variable, from2011to2020, ERA5Data)\n", "\n", "        # 9. make predictions for the training and test periods\n", "        ypred_train = SO_instance.predict(variable, from1958to2010, ERA5Data)\n", "        ypred_test = SO_instance.predict(variable, from2011to2020, ERA5Data)\n", "\n", "        # get the observed datasets for comparison\n", "        y_obs_train = SO_instance.get_var(variable, from1958to2010, anomalies=True)\n", "        y_obs_test = SO_instance.get_var(variable, from2011to2020, anomalies=True)\n", "        y_obs_full = SO_instance.get_var(variable, from1958to2020, anomalies=True)\n", "\n", "        # 10. store the validation and test scores (pickle) and the predictions (CSV)\n", "        predictions = pd.DataFrame({\n", "            \"obs_full\": y_obs_full,\n", "            \"obs_train\": y_obs_train,\n", "            \"obs_test\": y_obs_test,\n", "            \"ERA5 1958-2010\": ypred_train,\n", "            \"ERA5 2011-2020\": ypred_test})\n", "\n", "        store_pickle(stationname, \"validation_score_\" + estimator, score_fit, cachedir)\n", "        store_pickle(stationname, \"test_score_\" + estimator, score_test, cachedir)\n", "        store_csv(stationname, \"predictions_\" + estimator, predictions, cachedir)" ] },
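{ "cell_type": "markdown", "id": "5d0c7e44", "metadata": {}, "source": [ "The `Stacking` option above is handled through pyESD's `ensemble_learning` interface, which combines the base models with a meta-learner. As a rough illustration of the underlying idea (plain scikit-learn on synthetic data, not pyESD's internal implementation), a stacking regressor trains the final estimator on the cross-validated predictions of the base models:" ] }, { "cell_type": "code", "execution_count": null, "id": "8b4f6c19", "metadata": {}, "outputs": [], "source": [ "# Rough illustration of the stacking idea behind the \"Stacking\" estimator above.\n", "# This is plain scikit-learn on synthetic data, not pyESD's internal implementation.\n", "import numpy as np\n", "from sklearn.ensemble import StackingRegressor, RandomForestRegressor\n", "from sklearn.linear_model import LassoLarsCV, ARDRegression\n", "\n", "base_models = [(\"lassolars\", LassoLarsCV()),\n", "               (\"ard\", ARDRegression()),\n", "               (\"forest\", RandomForestRegressor(n_estimators=100, random_state=0))]\n", "\n", "# the meta-learner is fitted on the cross-validated predictions of the base models\n", "stack = StackingRegressor(estimators=base_models, final_estimator=LassoLarsCV(), cv=10)\n", "\n", "rng = np.random.default_rng(0)\n", "X = rng.normal(size=(200, 5))\n", "y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)\n", "\n", "stack.fit(X, y)\n", "print(stack.predict(X[:3]))" ] },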
{ "cell_type": "code", "execution_count": 14, "id": "b17f938b", "metadata": {}, "outputs": [], "source": [ "from read_data import radius, station_prec_datadir, stationnames_prec, ERA5Data, predictordir, cachedir_prec" ] }, { "cell_type": "markdown", "id": "4e5b4c61", "metadata": {}, "source": [ "## Perform the experiment for all models" ] }, { "cell_type": "code", "execution_count": 15, "id": "b359af3e", "metadata": {}, "outputs": [], "source": [ "final_estimator = \"LassoLarsCV\"\n", "\n", "base_estimators = [\"LassoLarsCV\", \"ARD\", \"MLP\", \"RandomForest\", \"XGBoost\", \"Bagging\"]\n", "\n", "estimators = [\"LassoLarsCV\", \"ARD\", \"MLP\", \"RandomForest\", \"XGBoost\", \"Bagging\", \"Stacking\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "e42d3630", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Freiburg 48.0232 7.8343 236.0\n", "13 : optimal number of predictors and selected variables are Index(['t2m', 'tp', 'v10', 'u10', 'NAO', 'SCAN', 'u850', 'v1000', 'r1000',\n", "       't500', 'dtd250', 'dtd700', 'dtd1000'],\n", "      dtype='object')\n", "Regenerating predictor data for t2m using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for tp using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for msl using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v10 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u10 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for NAO using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "The EoF Package implementation of EOF analysis is used! for the teleconnections\n", "Regenerating predictor data for EA using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "The EoF Package implementation of EOF analysis is used! for the teleconnections\n", "Regenerating predictor data for SCAN using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "The sklearn implementation of EOF analysis is used! for the teleconnections\n", "Regenerating predictor data for EAWR using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "The sklearn implementation of EOF analysis is used! 
for the teleconnections\n", "Regenerating predictor data for u250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "RMSE: 24.100856\n", "Nach-Sutcliffe Efficiency(NSE): 0.573038\n", "Mean Squared Error): 24.10\n", "Mean Absolute 
Error): 18.68\n", "Explained Variance: 0.62\n", "R² (Coefficient of determinaiton): 0.57\n", "Maximum error: 80.265120\n", "Adjusted R²: 0.63\n", "Konstanz 47.6952 9.1307 428.0\n", "6 : optimal number of predictors and selected variables are Index(['t2m', 'tp', 'u10', 'EAWR', 'v700', 'r700'], dtype='object')\n", "Regenerating predictor data for t2m using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for tp using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for msl using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v10 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u10 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t700 using dataset ERA5 with loading patterns 
and params from ERA5 and ERA5\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Regenerating predictor data for t1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "RMSE: 22.891658\n", "Nach-Sutcliffe Efficiency(NSE): 0.554393\n", "Mean Squared Error): 22.89\n", "Mean Absolute Error): 17.14\n", "Explained Variance: 0.55\n", "R² (Coefficient of determinaiton): 0.55\n", "Maximum error: 88.742237\n", "Adjusted R²: 0.63\n", "Mannheim 49.5063 8.5584 98.0\n", "34 : optimal number of predictors and selected variables are Index(['tp', 'msl', 'v10', 'u10', 'NAO', 'EA', 'SCAN', 'EAWR', 'u250', 'u850',\n", " 'u500', 'u700', 'u1000', 'v250', 'v850', 'v500', 'v700', 'v1000',\n", " 'r250', 'r850', 'r500', 'r700', 'r1000', 'z250', 't250', 't850', 't500',\n", " 't700', 't1000', 'dtd250', 'dtd850', 'dtd500', 'dtd700', 'dtd1000'],\n", " dtype='object')\n", "Regenerating predictor data for t2m using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for tp using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for msl using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v10 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u10 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for u1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for v1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r700 using dataset 
ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for r1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for z1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for t1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd250 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd850 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd500 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd700 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "Regenerating predictor data for dtd1000 using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n", "RMSE: 15.838567\n", "Nach-Sutcliffe Efficiency(NSE): 0.640846\n", "Mean Squared Error): 15.84\n", "Mean Absolute Error): 12.54\n", "Explained Variance: 0.66\n", "R² (Coefficient of determinaiton): 0.64\n", "Maximum error: 40.988448\n", "Adjusted R²: 0.68\n", "Nürnberg 49.503 11.0549 314.0\n", "28 : optimal number of predictors and selected variables are Index(['t2m', 'tp', 'v10', 'u10', 'NAO', 'EA', 'SCAN', 'EAWR', 'u250', 'u850',\n", " 'u500', 'u700', 'u1000', 'v250', 'v850', 'v500', 'v700', 'v1000',\n", " 'r850', 'r500', 'r700', 't250', 't500', 't700', 't1000', 'dtd850',\n", " 'dtd500', 'dtd700'],\n", " dtype='object')\n", "Regenerating predictor data for t2m using dataset ERA5 with loading patterns and params from ERA5 and ERA5\n" ] } ], "source": [ "for estimator in estimators:\n", " run_model_selection(variable=\"Precipitation\", estimator=estimator, cachedir=cachedir_prec, stationnames=stationnames_prec,\n", " station_datadir=station_prec_datadir,\n", " radius=radius, base_estimators=base_estimators,\n", " final_estimator=final_estimator)" ] }, { "cell_type": "code", "execution_count": null, "id": "34903710", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "0e74ee83", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "6c74918b", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "9c3eb9aa", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "4cee5ff4", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", 
"execution_count": null, "id": "a0936863", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "c41425f5", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "49ca0ddd", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "e220b4a0", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "2b92a720", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 5 }