{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Model API Example\n", "\n", "In this notebook, we'll explore some functionality of the models of this package. We'll work with the coupled CemaneigeGR4j model that is implemented in `rrmpg.models` module. The data we'll use, comes from the CAMELS [1] data set. For some basins, the data is provided within this Python library and can be easily imported using the `CAMELSLoader` class implemented in the `rrmpg.data` module.\n", "\n", "In summary we'll look at:\n", "- How you can create a model instance.\n", "- How we can use the CAMELSLoader.\n", "- How you can fit the model parameters to observed discharge by:\n", " - Using one of SciPy's global optimizer\n", " - Monte-Carlo-Simulation\n", "- How you can use a fitted model to calculate the simulated discharge.\n", "\n", "\n", "[1] Addor, N., A.J. Newman, N. Mizukami, and M.P. Clark, 2017: The CAMELS data set: catchment attributes and meteorology for large-sample studies. version 2.0. Boulder, CO: UCAR/NCAR. doi:10.5065/D6G73C3Q" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Imports and Notebook setup\n", "from timeit import timeit\n", "\n", "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "from rrmpg.models import CemaneigeGR4J\n", "from rrmpg.data import CAMELSLoader\n", "from rrmpg.tools.monte_carlo import monte_carlo\n", "from rrmpg.utils.metrics import calc_nse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a model\n", "\n", "As a first step let us have a look how we can create one of the models implemented in `rrmpg.models`. Basically, for all models we have two different options:\n", "1. Initialize a model **without** specific model parameters.\n", "2. Initialize a model **with** specific model parameters.\n", "\n", "The [documentation](http://rrmpg.readthedocs.io) provides a list of all model parameters. Alternatively we can look at `help()` for the model (e.g. `help(CemaneigeGR4J)`).\n", "\n", "If no specific model parameters are provided upon intialization, random parameters will be generated that are in between the default parameter bounds. We can look at these bounds by calling `.get_param_bounds()` method on the model object and check the current parameter values by calling `.get_params()` method.\n", "\n", "For now we don't know any specific parameter values, so we'll create one with random parameters." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'CTG': 0.3399735717656279,\n", " 'Kf': 0.8724652383290821,\n", " 'x1': 427.9652389107806,\n", " 'x2': 0.9927197563086638,\n", " 'x3': 288.20205223188475,\n", " 'x4': 1.4185137324914372}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = CemaneigeGR4J()\n", "model.get_params()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we can see the six model parameters of CemaneigeGR4J model and their current values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using the CAMELSLoader\n", "To have data to start with, we can use the `CAMELSLoader` class to load data of provided basins from the CAMELS dataset. To get a list of all available basins that are provided within this library, we can use the `.get_basin_numbers()` method. For now we will use the provided basin number `01031500`." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "(HTML table rendering of df.head() omitted; the equivalent text/plain output is shown below)
" ], "text/plain": [ " dayl(s) prcp(mm/day) srad(W/m2) swe(mm) tmax(C) tmin(C) \\\n", "1980-10-01 41050.80 0.00 286.90 0.0 16.19 4.31 \n", "1980-10-02 40780.81 2.08 195.94 0.0 13.46 5.72 \n", "1980-10-03 40435.21 5.57 172.60 0.0 17.84 8.61 \n", "1980-10-04 40435.21 23.68 170.45 0.0 16.28 7.32 \n", "1980-10-05 40089.58 3.00 113.83 0.0 10.51 5.01 \n", "\n", " vp(Pa) PET QObs(mm/d) \n", "1980-10-01 825.78 1.5713 0.5550 \n", "1980-10-02 920.18 1.2619 0.4979 \n", "1980-10-03 1128.70 1.2979 0.5169 \n", "1980-10-04 1027.91 1.2251 1.5634 \n", "1980-10-05 881.61 0.9116 2.8541 " ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = CAMELSLoader().load_basin('01031500')\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we will split the data into a calibration period, which we will use to find a set of good model parameters, and a validation period, we will use the see how good our model works on unseen data. As in the CAMELS data set publication, we will use the first 15 hydrological years for calibration. The rest of the data will be used for validation.\n", "\n", "Because the index of the dataframe is in pandas Datetime format, we can easily split the dataframe into two parts" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# calcute the end date of the calibration period\n", "end_cal = pd.to_datetime(f\"{df.index[0].year + 15}/09/30\", yearfirst=True)\n", "\n", "# validation period starts one day later\n", "start_val = end_cal + pd.DateOffset(days=1)\n", "\n", "# split the data into two parts\n", "cal = df[:end_cal].copy()\n", "val = df[start_val:].copy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fit the model to observed discharge\n", "\n", "As already said above, we'll look at two different methods implemented in this library:\n", "1. Using one of SciPy's global optimizer\n", "2. Monte-Carlo-Simulation\n", "\n", "### Using one of SciPy's global optimizer\n", "\n", "Each model has a `.fit()` method. This function uses the global optimizer [differential evolution](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html) from the scipy package to find the set of model parameters that produce the best simulation, regarding the provided observed discharge array.\n", "The inputs for this function can be found in the [documentation](http://rrmpg.readthedocs.io) or the `help()`." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Help on method fit in module rrmpg.models.cemaneigegr4j:\n", "\n", "fit(obs, prec, mean_temp, min_temp, max_temp, etp, met_station_height, snow_pack_init=0, thermal_state_init=0, s_init=0, r_init=0, altitudes=[]) method of rrmpg.models.cemaneigegr4j.CemaneigeGR4J instance\n", " Fit the Cemaneige + GR4J coupled model to a observed timeseries\n", " \n", " This functions uses scipy's global optimizer (differential evolution)\n", " to find a good set of parameters for the model, so that the observed \n", " timeseries is simulated as good as possible. 
\n", " \n", " Args:\n", " obs: Array of the observed timeseries [mm]\n", " prec: Array of daily precipitation sum [mm]\n", " mean_temp: Array of the mean temperature [C]\n", " min_temp: Array of the minimum temperature [C]\n", " max_temp: Array of the maximum temperature [C]\n", " etp: Array of mean potential evapotranspiration [mm]\n", " met_station_height: Height of the meteorological station [m]. \n", " Needed to calculate the fraction of solid precipitation and\n", " optionally for the extrapolation of the meteorological inputs.\n", " snow_pack_init: (optional) Initial value of the snow pack storage\n", " thermal_state_init: (optional) Initial value of the thermal state\n", " of the snow pack\n", " s_init: (optional) Initial value of the production storage as \n", " fraction of x1. \n", " r_init: (optional) Initial value of the routing storage as fraction\n", " of x3.\n", " altitudes: (optional) List of median altitudes of each elevation\n", " layer [m]\n", " \n", " Returns:\n", " res: A scipy OptimizeResult class object.\n", " \n", " Raises:\n", " ValueError: If one of the inputs contains invalid values.\n", " TypeError: If one of the inputs has an incorrect datatype.\n", " RuntimeErrror: If there is a size mismatch between the \n", " precipitation and the pot. evapotranspiration input.\n", "\n" ] } ], "source": [ "help(model.fit)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We don't know any values for the initial states of the storages, so we will ignore them for now. For the missing mean temperature, we calculate a proxy from the minimum and maximum daily temperature. The station height can be retrieved from the `CAMELSLoader` class via the `.get_station_height()` method." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# calculate mean temp for calibration and validation period\n", "cal['tmean'] = (cal['tmin(C)'] + cal['tmax(C)']) / 2\n", "val['tmean'] = (val['tmin(C)'] + val['tmax(C)']) / 2\n", "\n", "# load the gauge station height\n", "height = CAMELSLoader().get_station_height('01031500')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to fit the model and retrieve a good set of model parameters from the optimizer. Again, this will be done with the calibration data. Because the model methods also except pandas Series, we can call the function as follows." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# We don't have an initial value for the snow storage, so we omit this input\n", "result = model.fit(cal['QObs(mm/d)'], cal['prcp(mm/day)'], cal['tmean'], \n", " cal['tmin(C)'], cal['tmax(C)'], cal['PET'], height)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`result` is an object defined by the scipy library and contains the optimized model parameters, as well as some more information on the optimization process. 
Let us have a look at this object:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " fun: 1.6435277126036711\n", " jac: array([ 0.00000000e+00, 3.68594044e-06, -1.36113343e-05, -6.66133815e-06,\n", " -4.21884749e-07, 7.35368810e-01])\n", " message: 'Optimization terminated successfully.'\n", " nfev: 2452\n", " nit: 25\n", " success: True\n", " x: array([7.60699105e-02, 4.22084687e+00, 1.45653881e+02, 1.14318020e+00,\n", " 5.87237837e+01, 1.10000000e+00])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The relevant information here is:\n", "- `fun` is the final value of our optimization criterion (the mean-squared-error in this case)\n", "- `message` describes the cause of the optimization termination\n", "- `nfev` is the number of model simulations\n", "- `success` is a flag indicating whether or not the optimization was successful\n", "- `x` are the optimized model parameters\n", "\n", "Next, let us set the model parameters to the optimized ones found by the search. For this, we need to create a dictionary that contains one key for each model parameter and the corresponding optimized value. As mentioned before, the list of model parameter names can be retrieved with the `model.get_parameter_names()` function. We can then create the needed dictionary with the following lines of code:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'CTG': 0.07606991045128364,\n", " 'Kf': 4.220846873695767,\n", " 'x1': 145.6538807127758,\n", " 'x2': 1.143180196835088,\n", " 'x3': 58.723783711432226,\n", " 'x4': 1.1}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "params = {}\n", "\n", "param_names = model.get_parameter_names()\n", "\n", "for i, param in enumerate(param_names):\n", "    params[param] = result.x[i]\n", "\n", "# This line sets the model parameters to the ones specified in the dict\n", "model.set_params(params)\n", "\n", "# To be sure, let's look at the current model parameters\n", "model.get_params()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although it might not be clear at first glance, these are the same parameters as the ones specified in `result.x`. In `result.x` they are ordered according to the ordering of the `_param_list` specified in each model class, whereas the dictionary output here is sorted alphabetically." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Monte-Carlo-Simulation\n", "\n", "Now let us have a look at how we can use the Monte-Carlo-Simulation implemented in `rrmpg.tools.monte_carlo`. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Help on function monte_carlo in module rrmpg.tools.monte_carlo:\n", "\n", "monte_carlo(model, num, qobs=None, **kwargs)\n", " Perform Monte-Carlo-Simulation.\n", " \n", " This function performs a Monte-Carlo-Simulation for any given hydrological\n", " model of this repository.\n", " \n", " Args:\n", " model: Any instance of a hydrological model of this repository.\n", " num: Number of simulations.\n", " qobs: (optional) Array of observed streamflow.\n", " **kwargs: Keyword arguments, matching the inputs the model needs to\n", " perform a simulation (e.g. 
qobs, precipitation, temperature etc.).\n", " See help(model.simulate) for model input requirements.\n", " \n", " Returns:\n", " A dictonary containing the following two keys ['params', 'qsim']. The \n", " key 'params' contains a numpy array with the model parameter of each \n", " simulation. 'qsim' is a 2D numpy array with the simulated streamflow \n", " for each simulation. If an array of observed streamflow is provided,\n", " one additional key is returned in the dictonary, being 'mse'. This key\n", " contains an array of the mean-squared-error for each simulation.\n", " \n", " Raises:\n", " ValueError: If any input contains invalid values.\n", " TypeError: If any of the inputs has a wrong datatype.\n", "\n" ] } ], "source": [ "help(monte_carlo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As specified in the help text, all model inputs needed for a simulation must be provided as keyword arguments. The keywords need to match the names specified in the `model.simulate()` function. Let us create a new model instance and see how this works for the CemaneigeGR4J model." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "model2 = CemaneigeGR4J()\n", "\n", "# Let us run the Monte-Carlo-Simulation with 10000 random parameter sets\n", "result_mc = monte_carlo(model2, num=10000, qobs=cal['QObs(mm/d)'], \n", " prec=cal['prcp(mm/day)'], mean_temp=cal['tmean'],\n", " min_temp=cal['tmin(C)'], max_temp=cal['tmax(C)'],\n", " etp=cal['PET'], met_station_height=height)\n", "\n", "# Get the index of the best fit (smallest mean squared error, ignoring NaNs)\n", "idx = np.nanargmin(result_mc['mse'])\n", "\n", "# Get the optimal parameters and set them as model parameters\n", "optim_params = result_mc['params'][idx]\n", "\n", "params = {}\n", "\n", "for i, param in enumerate(param_names):\n", "    params[param] = optim_params[i]\n", "\n", "# This line sets the model parameters to the ones specified in the dict\n", "model2.set_params(params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Calculate simulated discharge\n", "\n", "We now have two models, optimized by different methods. Let's calculate the simulated streamflow of each model and compare the results! Each model has a `.simulate()` method that returns the simulated discharge for the inputs we provide to this function." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "NSE of the .fit() optimization: 0.8075\n", "NSE of the Monte-Carlo-Simulation: 0.7332\n" ] } ], "source": [ "# simulated discharge of the model optimized by the .fit() function\n", "val['qsim_fit'] = model.simulate(val['prcp(mm/day)'], val['tmean'], \n", " val['tmin(C)'], val['tmax(C)'], \n", " val['PET'], height)\n", "\n", "# simulated discharge of the model optimized by monte-carlo-sim\n", "val['qsim_mc'] = model2.simulate(val['prcp(mm/day)'], val['tmean'], \n", " val['tmin(C)'], val['tmax(C)'], \n", " val['PET'], height)\n", "\n", "# Calculate and print the Nash-Sutcliffe efficiency for both simulations\n", "nse_fit = calc_nse(val['QObs(mm/d)'], val['qsim_fit'])\n", "nse_mc = calc_nse(val['QObs(mm/d)'], val['qsim_mc'])\n", "\n", "print(\"NSE of the .fit() optimization: {:.4f}\".format(nse_fit))\n", "print(\"NSE of the Monte-Carlo-Simulation: {:.4f}\".format(nse_mc))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What do these numbers mean? 
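The Nash-Sutcliffe efficiency (NSE) compares the squared simulation errors to the variance of the observed discharge: a value of 1 is a perfect fit, while a value of 0 means the simulation is no better than simply predicting the mean observed discharge. A minimal sketch of the metric (not necessarily the exact implementation of `calc_nse`):\n", "\n", "```python\n", "import numpy as np\n", "\n", "def nse(obs, sim):\n", "    # NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)\n", "    obs, sim = np.asarray(obs), np.asarray(sim)\n", "    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)\n", "```\n", "\n", "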
Let us have a look at some window of the simulated timeseries and compare them to the observed discharge:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "application/javascript": [ "/* Put everything inside the global mpl namespace */\n", "window.mpl = {};\n", "\n", "\n", "mpl.get_websocket_type = function() {\n", " if (typeof(WebSocket) !== 'undefined') {\n", " return WebSocket;\n", " } else if (typeof(MozWebSocket) !== 'undefined') {\n", " return MozWebSocket;\n", " } else {\n", " alert('Your browser does not have WebSocket support.' +\n", " 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n", " 'Firefox 4 and 5 are also supported but you ' +\n", " 'have to enable WebSockets in about:config.');\n", " };\n", "}\n", "\n", "mpl.figure = function(figure_id, websocket, ondownload, parent_element) {\n", " this.id = figure_id;\n", "\n", " this.ws = websocket;\n", "\n", " this.supports_binary = (this.ws.binaryType != undefined);\n", "\n", " if (!this.supports_binary) {\n", " var warnings = document.getElementById(\"mpl-warnings\");\n", " if (warnings) {\n", " warnings.style.display = 'block';\n", " warnings.textContent = (\n", " \"This browser does not support binary websocket messages. \" +\n", " \"Performance may be slow.\");\n", " }\n", " }\n", "\n", " this.imageObj = new Image();\n", "\n", " this.context = undefined;\n", " this.message = undefined;\n", " this.canvas = undefined;\n", " this.rubberband_canvas = undefined;\n", " this.rubberband_context = undefined;\n", " this.format_dropdown = undefined;\n", "\n", " this.image_mode = 'full';\n", "\n", " this.root = $('
');\n", " this._root_extra_style(this.root)\n", " this.root.attr('style', 'display: inline-block');\n", "\n", " $(parent_element).append(this.root);\n", "\n", " this._init_header(this);\n", " this._init_canvas(this);\n", " this._init_toolbar(this);\n", "\n", " var fig = this;\n", "\n", " this.waiting = false;\n", "\n", " this.ws.onopen = function () {\n", " fig.send_message(\"supports_binary\", {value: fig.supports_binary});\n", " fig.send_message(\"send_image_mode\", {});\n", " if (mpl.ratio != 1) {\n", " fig.send_message(\"set_dpi_ratio\", {'dpi_ratio': mpl.ratio});\n", " }\n", " fig.send_message(\"refresh\", {});\n", " }\n", "\n", " this.imageObj.onload = function() {\n", " if (fig.image_mode == 'full') {\n", " // Full images could contain transparency (where diff images\n", " // almost always do), so we need to clear the canvas so that\n", " // there is no ghosting.\n", " fig.context.clearRect(0, 0, fig.canvas.width, fig.canvas.height);\n", " }\n", " fig.context.drawImage(fig.imageObj, 0, 0);\n", " };\n", "\n", " this.imageObj.onunload = function() {\n", " fig.ws.close();\n", " }\n", "\n", " this.ws.onmessage = this._make_on_message_function(this);\n", "\n", " this.ondownload = ondownload;\n", "}\n", "\n", "mpl.figure.prototype._init_header = function() {\n", " var titlebar = $(\n", " '
');\n", " var titletext = $(\n", " '
');\n", " titlebar.append(titletext)\n", " this.root.append(titlebar);\n", " this.header = titletext[0];\n", "}\n", "\n", "\n", "\n", "mpl.figure.prototype._canvas_extra_style = function(canvas_div) {\n", "\n", "}\n", "\n", "\n", "mpl.figure.prototype._root_extra_style = function(canvas_div) {\n", "\n", "}\n", "\n", "mpl.figure.prototype._init_canvas = function() {\n", " var fig = this;\n", "\n", " var canvas_div = $('
');\n", "\n", " canvas_div.attr('style', 'position: relative; clear: both; outline: 0');\n", "\n", " function canvas_keyboard_event(event) {\n", " return fig.key_event(event, event['data']);\n", " }\n", "\n", " canvas_div.keydown('key_press', canvas_keyboard_event);\n", " canvas_div.keyup('key_release', canvas_keyboard_event);\n", " this.canvas_div = canvas_div\n", " this._canvas_extra_style(canvas_div)\n", " this.root.append(canvas_div);\n", "\n", " var canvas = $('');\n", " canvas.addClass('mpl-canvas');\n", " canvas.attr('style', \"left: 0; top: 0; z-index: 0; outline: 0\")\n", "\n", " this.canvas = canvas[0];\n", " this.context = canvas[0].getContext(\"2d\");\n", "\n", " var backingStore = this.context.backingStorePixelRatio ||\n", "\tthis.context.webkitBackingStorePixelRatio ||\n", "\tthis.context.mozBackingStorePixelRatio ||\n", "\tthis.context.msBackingStorePixelRatio ||\n", "\tthis.context.oBackingStorePixelRatio ||\n", "\tthis.context.backingStorePixelRatio || 1;\n", "\n", " mpl.ratio = (window.devicePixelRatio || 1) / backingStore;\n", "\n", " var rubberband = $('');\n", " rubberband.attr('style', \"position: absolute; left: 0; top: 0; z-index: 1;\")\n", "\n", " var pass_mouse_events = true;\n", "\n", " canvas_div.resizable({\n", " start: function(event, ui) {\n", " pass_mouse_events = false;\n", " },\n", " resize: function(event, ui) {\n", " fig.request_resize(ui.size.width, ui.size.height);\n", " },\n", " stop: function(event, ui) {\n", " pass_mouse_events = true;\n", " fig.request_resize(ui.size.width, ui.size.height);\n", " },\n", " });\n", "\n", " function mouse_event_fn(event) {\n", " if (pass_mouse_events)\n", " return fig.mouse_event(event, event['data']);\n", " }\n", "\n", " rubberband.mousedown('button_press', mouse_event_fn);\n", " rubberband.mouseup('button_release', mouse_event_fn);\n", " // Throttle sequential mouse events to 1 every 20ms.\n", " rubberband.mousemove('motion_notify', mouse_event_fn);\n", "\n", " rubberband.mouseenter('figure_enter', mouse_event_fn);\n", " rubberband.mouseleave('figure_leave', mouse_event_fn);\n", "\n", " canvas_div.on(\"wheel\", function (event) {\n", " event = event.originalEvent;\n", " event['data'] = 'scroll'\n", " if (event.deltaY < 0) {\n", " event.step = 1;\n", " } else {\n", " event.step = -1;\n", " }\n", " mouse_event_fn(event);\n", " });\n", "\n", " canvas_div.append(canvas);\n", " canvas_div.append(rubberband);\n", "\n", " this.rubberband = rubberband;\n", " this.rubberband_canvas = rubberband[0];\n", " this.rubberband_context = rubberband[0].getContext(\"2d\");\n", " this.rubberband_context.strokeStyle = \"#000000\";\n", "\n", " this._resize_canvas = function(width, height) {\n", " // Keep the size of the canvas, canvas container, and rubber band\n", " // canvas in synch.\n", " canvas_div.css('width', width)\n", " canvas_div.css('height', height)\n", "\n", " canvas.attr('width', width * mpl.ratio);\n", " canvas.attr('height', height * mpl.ratio);\n", " canvas.attr('style', 'width: ' + width + 'px; height: ' + height + 'px;');\n", "\n", " rubberband.attr('width', width);\n", " rubberband.attr('height', height);\n", " }\n", "\n", " // Set the figure to an initial 600x600px, this will subsequently be updated\n", " // upon first draw.\n", " this._resize_canvas(600, 600);\n", "\n", " // Disable right mouse context menu.\n", " $(this.rubberband_canvas).bind(\"contextmenu\",function(e){\n", " return false;\n", " });\n", "\n", " function set_focus () {\n", " canvas.focus();\n", " canvas_div.focus();\n", " }\n", "\n", " 
window.setTimeout(set_focus, 100);\n", "}\n", "\n", "mpl.figure.prototype._init_toolbar = function() {\n", " var fig = this;\n", "\n", " var nav_element = $('
')\n", " nav_element.attr('style', 'width: 100%');\n", " this.root.append(nav_element);\n", "\n", " // Define a callback function for later on.\n", " function toolbar_event(event) {\n", " return fig.toolbar_button_onclick(event['data']);\n", " }\n", " function toolbar_mouse_event(event) {\n", " return fig.toolbar_button_onmouseover(event['data']);\n", " }\n", "\n", " for(var toolbar_ind in mpl.toolbar_items) {\n", " var name = mpl.toolbar_items[toolbar_ind][0];\n", " var tooltip = mpl.toolbar_items[toolbar_ind][1];\n", " var image = mpl.toolbar_items[toolbar_ind][2];\n", " var method_name = mpl.toolbar_items[toolbar_ind][3];\n", "\n", " if (!name) {\n", " // put a spacer in here.\n", " continue;\n", " }\n", " var button = $('