{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Assignment 6: Floods\n", "\n", "*Due date: March 16*\n", "\n", "*4 pts + 2 final project points*\n", "\n", "In this assignment, we will look at flow data from the Feather River just downstream of the Oroville Dam, in order to figure out the statistical frequency of the recent high-flow event.\n", "\n", "Questions are interspersed among the \"tutorial\"-type material below. Please **submit your assignment as an html export**, and for written responses, please type them in a cell that is of type `Markdown.`" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Import numerical tools\n", "import numpy as np\n", "\n", "#Import pandas for reading in and managing data\n", "import pandas as pd\n", "\n", "# Import pyplot for plotting\n", "import matplotlib.pyplot as plt\n", "\n", "#Import seaborn (useful for plotting)\n", "import seaborn as sns\n", "\n", "# Magic function to make matplotlib inline; other style specs must come AFTER\n", "%matplotlib inline\n", "\n", "%config InlineBackend.figure_formats = {'svg',}\n", "#%config InlineBackend.figure_formats = {'png', 'retina'}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Importing data\n", "Unfortunately, the USGS does not have any data from this year available from the Feather River or its tributaries. (For this river, it only makes data available after staff hydrologists have had a chance to review it.) I was, however, able to find recent data available from a station maintained by the CA Department of Water Resources (DWR) on [this website](https://cdec.water.ca.gov/queryCSV.html). The station is the Feather River at Gridley, which is just downstream of Oroville. However, the period of record only goes back to 1984 (in contrast to some of the USGS stations with periods of record back to the early 1900s.) The [DWR data](https://drive.google.com/file/d/0BzoZUD3hISA4U0FPWHdna3pCZFU/view?usp=sharing) is downloadable as a CSV, in which the columns represent data collected on the hour over the 24 hours of the day, and the rows represent days in the period of record. Let's import it and begin to work with it. First, make sure the data file is saved to your computer." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
'station''sensor'year-month-day0100200300400500600...1400150016001700180019002000210022002300
0GRL20.019840101.0-9998.037400.037100.036900.036800.036500.036100.0...31700.031600.031400.031300.031300.031100.030900.030900.030800.030700.0
1GRL20.019840102.030600.030600.0-9998.0-9998.0-9998.030200.029900.0...25000.024000.023100.022200.021200.020600.020100.019800.019700.019500.0
2GRL20.019840103.019400.019300.019200.019100.019000.018900.018900.0...18900.018900.018900.018800.018800.018800.018800.018700.018600.018600.0
3GRL20.019840104.018600.018600.018600.018600.018600.018500.018500.0...18400.018400.018400.018400.018400.018400.018300.018300.018300.018300.0
4GRL20.019840105.018200.018200.018200.018200.018200.018100.018100.0...18100.018100.018100.018100.018100.018100.018100.018100.018100.0-9998.0
\n", "

5 rows × 27 columns

\n", "
" ], "text/plain": [ " 'station' 'sensor' year-month-day 0 100 200 300 \\\n", "0 GRL 20.0 19840101.0 -9998.0 37400.0 37100.0 36900.0 \n", "1 GRL 20.0 19840102.0 30600.0 30600.0 -9998.0 -9998.0 \n", "2 GRL 20.0 19840103.0 19400.0 19300.0 19200.0 19100.0 \n", "3 GRL 20.0 19840104.0 18600.0 18600.0 18600.0 18600.0 \n", "4 GRL 20.0 19840105.0 18200.0 18200.0 18200.0 18200.0 \n", "\n", " 400 500 600 ... 1400 1500 1600 1700 \\\n", "0 36800.0 36500.0 36100.0 ... 31700.0 31600.0 31400.0 31300.0 \n", "1 -9998.0 30200.0 29900.0 ... 25000.0 24000.0 23100.0 22200.0 \n", "2 19000.0 18900.0 18900.0 ... 18900.0 18900.0 18900.0 18800.0 \n", "3 18600.0 18500.0 18500.0 ... 18400.0 18400.0 18400.0 18400.0 \n", "4 18200.0 18100.0 18100.0 ... 18100.0 18100.0 18100.0 18100.0 \n", "\n", " 1800 1900 2000 2100 2200 2300 \n", "0 31300.0 31100.0 30900.0 30900.0 30800.0 30700.0 \n", "1 21200.0 20600.0 20100.0 19800.0 19700.0 19500.0 \n", "2 18800.0 18800.0 18800.0 18700.0 18600.0 18600.0 \n", "3 18400.0 18400.0 18300.0 18300.0 18300.0 18300.0 \n", "4 18100.0 18100.0 18100.0 18100.0 18100.0 -9998.0 \n", "\n", "[5 rows x 27 columns]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Use pd.read_csv() to read in the data and store in a DataFrame\n", "fname = '/Users/lglarsen/Desktop/Laurel Google Drive/Terrestrial hydrology Spr2017/Assignments/Assignment 6/FeatherRiveratGridley.csv'\n", "df = pd.read_csv(fname)\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that discharge is in units of cfs, or cubic feet per second. You'll notice that missing data has been assigned the value -9998. Below we will correct the blanks to nan and then convert the discharge data to an array using the `values` function in the pandas package." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [], "source": [ "hoursbyday = df[['0', '100', '200', '300', '400', '500', '600', '700', '800', '900', '1000', '1100', '1200', '1300', '1400', '1500', '1600', '1700', '1800', '1900', '2000', '2100', '2200', '2300']].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will embark on a sophisticated approach for removing blank values. We'll interpolate them using pandas' interpolate function, but only when half the day's data values are valid. Finally, we will add all of the hourly values to create a daily value. If there were more than 12 blanks in a day, the daily value will end up as an NaN.\n", "\n", "So that you can see how this loop works, I suggest changing the first line to `for i in range(2):` and uncommenting the two `print` statements at the end of the code. The two questions below require you to do this.\n", "\n", "1. **Concept check**: In the first row of the data table, how many NaNs do you end up with after the blank interpolation procedure? Why? [1/3 pt]\n", "\n", "2. **Concept check**: What is the big difference in the daily sum between the first and second rows of the data table? Why? 
{ "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [], "source": [
 "#First, initialize the daily discharge array\n",
 "dailyQ = np.zeros(np.size(hoursbyday,0)) #This makes the daily discharge array have the same length\n",
 "#as the number of rows in hoursbyday.\n",
 "\n",
 "for i in range(np.size(hoursbyday,0)): #This means to iterate over the number of rows in hoursbyday\n",
 "    dayvals = hoursbyday[i,:] #Grab the row of hourly values for each day you are iterating over.\n",
 "    \n",
 "    #Now we're going to pad this array with the last 12 values from the day before and the first 12 values from the day after.\n",
 "    #This is in case we need to get rid of blanks at the beginning or end of the day.\n",
 "    \n",
 "    if i>0:\n",
 "        dayvals = np.append(hoursbyday[i-1,12:24], dayvals) #Last 12 hours of the previous day\n",
 "    else: #If this is the first row, pad the values with blanks\n",
 "        dayvals = np.append(np.ones(12)*-9998, dayvals)\n",
 "    if i==np.size(hoursbyday,0)-1: #If this is the last line in the data file, pad with blanks\n",
 "        dayvals = np.append(dayvals, np.ones(12)*-9998)\n",
 "    else: #If this is not the last row\n",
 "        dayvals = np.append(dayvals, hoursbyday[i+1,0:12]) #First 12 hours of the next day\n",
 "\n",
 "    #Now we interpolate the missing values.\n",
 "    dayvals[dayvals<0]=np.nan #Convert blanks to NaN values\n",
 "    pandas_dayvals = pd.Series(dayvals) #Convert back to a pandas object because pandas has a useful interpolate function\n",
 "\n",
 "    #Linearly interpolate across the blanks, but only if there are no more than 12 blanks in a row\n",
 "    pandas_dayvals = pandas_dayvals.interpolate(limit=12)\n",
 "    \n",
 "    #Last, we'll convert back to a numpy array and grab just the original 24 hours for that day.\n",
 "    dayvals = pandas_dayvals[12:36].values\n",
 "    # print(dayvals)\n",
 "    \n",
 "    #Finally, add all of the values to acquire a daily discharge.\n",
 "    dailyQ[i] = np.sum(dayvals)\n",
 "    # print(dailyQ[i])" ] },
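{ "cell_type": "markdown", "metadata": {}, "source": [
 "As an optional sanity check on the loop above, you can count how many days ended up as NaN (I leave the output for you to generate):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
 "# Optional sanity check: number of days whose daily total could not be computed\n",
 "print(np.sum(np.isnan(dailyQ)))" ] },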
" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "plt.plot(dailyQ)\n", "plt.xlabel('Day in sequence')\n", "plt.ylabel('Daily discharge, cfs')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So you can see that the recent high flow coming out of the Oroville Dam, while pretty darn high, was *not* the highest on record. Let's now look at the data another way..." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "plt.hist(dailyQ[~np.isnan(dailyQ)],20, normed=True) #creates a 20-bin, normalized histogram.\n", "plt.title('Daily discharge')\n", "plt.xlabel('Discharge, cfs')\n", "plt.ylabel('Probability density')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Very skewed! Notice that above, we needed to tell the histogram function only to look at those values of dailyQ that were *not* NaNs, otherwise we get an error." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For extreme flows, we usually want to know the average amount of time that elapses between flows of particular magnitudes. Thus, for a flood frequency cacluation, the next step is to determine the peak daily flow for the year. This is very similar to calculations we made in the precipitation tutorial, which you can find [here](https://github.com/LaurelOak/hydro-teaching-resources/blob/master/PrecipFrequencyAnalyses.ipynb). \n", "\n", "One difference is that the dataset we downloaded is missing some days altogether. Rather than going back and finding the missing days (an onorous task), we are just going to use a `for` loop to calculate the peak value in each year. \n", "\n", "First we need to separate the year from the single date column in the input data file. We can do this in pandas using \"date-time\" functions, as shown below." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 1984. 1984. 1984. ..., 2017. 2017. 
{ "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 1984.  1984.  1984. ...,  2017.  2017.    nan]\n" ] } ], "source": [
 "df['year'] = pd.to_datetime(df['year-month-day'], format='%Y%m%d').dt.year #Tell pandas that \n",
 "#this column is a date, and then extract just the year and save it to a new column.\n",
 "year = df['year'].values #Convert this to an array\n",
 "print(year)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [
 "So now let's go through and find the maximum value of discharge per year.\n",
 "\n",
 "**3) Concept check**: What does the first line inside the `for` loop below do? [1/3 pt]" ] },
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "these_years = np.unique(year) #This creates an array of \"unique\" years in the 'year' array (i.e.,\n", "#with no repeats)\n", "#Initialize the maximum value per year:\n", "maxQ = np.zeros(len(these_years)-1) #One peak discharge per year. Subtracted 1 because the last\n", "#year is a row of NaNs.\n", "\n", "for i in range(len(these_years)-1): #Loop over unique years\n", " these_Q = dailyQ[year==these_years[i]] \n", " maxQ[i] = max(these_Q[~np.isnan(these_Q)])\n", " \n", "plt.plot(these_years[0:len(these_years)-1], maxQ)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "plt.hist(maxQ,10, normed=True) #creates a 10-bin, normalized histogram.\n", "plt.title('Annual peak discharge')\n", "plt.xlabel('Discharge, cfs')\n", "plt.ylabel('Probability density')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**4)** Now, using the Feb 2 class Notebook as a model, develop a Weibull probability plot of the return period vs. annual peakflow value for this period of record. What is the return period of this year's storm according to this method? [1 pt]" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#Code for Weibull Probability Analysis goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**5)** Now use the Gumbel method to produce a plot of return period vs. peak discharge. What is the return period of this year's storm according to this method? [1 pt]" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#Code for Gumbel Probability Analysis goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**6)** Provide a critical analysis (1-2 paragraphs) of these two approaches for calculating the return period of this storm, or the so-called 100-year storm for the Feather River. Think about the procedure used for selecting peak flows, as well as that used for calculating their frequencies. If you were a water resource manager, how would you go about generating a reliable estimate of the 1% annual-exceedence-threshold flow? [1 pt]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**7)** Last, go back to the habitatManip.py file that you worked with in the Evapotranspiration assignment. Provide a narrative description of how the seasonal stream habitat model functions. 
{ "cell_type": "markdown", "metadata": {}, "source": [
 "**6)** Provide a critical analysis (1-2 paragraphs) of these two approaches for calculating the return period of this storm, or the so-called 100-year storm for the Feather River. Think about the procedure used for selecting peak flows, as well as that used for calculating their frequencies. If you were a water resource manager, how would you go about generating a reliable estimate of the 1% annual-exceedance-probability flow? [1 pt]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [
 "**7)** Last, go back to the habitatManip.py file that you worked with in the Evapotranspiration assignment. Provide a narrative description of how the seasonal stream habitat model functions. In particular, describe the processes through which streamflow increases and decreases, as well as the processes through which streamside habitat increases and decreases. What happens to streamside habitat as flow goes up? [1 pt]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [
 "**8)** Comment on whether you \"buy\" the way streamside habitat is simulated now. Describe one or more ways through which it might be improved. [1 pt]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [Root]", "language": "python", "name": "Python [Root]" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }