{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"2020-11-19 - class #14 - activities.ipynb","provenance":[],"collapsed_sections":["8d6J9xZiFtU2","54uYvvLhQvrE","LFTevNkDXn2W","eB0a9B4uacC8","w0-gURapd_-G"]},"kernelspec":{"name":"python3","display_name":"Python 3"}},"cells":[{"cell_type":"markdown","metadata":{"id":"BRAp37uklN9X"},"source":["# Class \\#14 activities"]},{"cell_type":"markdown","metadata":{"id":"8d6J9xZiFtU2"},"source":["# **Global temperature trends**\n","### Practice with SciPy linear regression and 1-D interpolation"]},{"cell_type":"markdown","metadata":{"id":"0xyc3kUDwnm0"},"source":["\n","\n","*Image: Atmospheric CO2 from 1700 to yesterday. Source: [Scripps/UCSD](https://sioweb.ucsd.edu/programs/keelingcurve/).*\n"]},{"cell_type":"markdown","metadata":{"id":"2d8TX_spON9b"},"source":["1. Run this cell to import libraries (note the new `SciPy` and `Matplotlib.dates` import statements!) and give Colab access to Google Drive.\n","\n","2. When prompted, click the link to give Colab access to Google Drive, copy the code, and paste back into here."]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"cTEogBL7Nqia","executionInfo":{"status":"ok","timestamp":1605851323779,"user_tz":480,"elapsed":21635,"user":{"displayName":"Ethan C Campbell","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GjCBYTiuomqOsCakND1k_5wj0kYvFY53Jt7kunt=s64","userId":"11255944928409084259"}},"outputId":"1cbd7c24-499d-4cb5-b8bf-56868ad86c5a"},"source":["# Import NumPy, Pandas, SciPy, Matplotlib, Datetime\n","import numpy as np\n","import pandas as pd\n","from scipy import stats, interpolate\n","import matplotlib.pyplot as plt\n","import matplotlib.dates as mdates\n","from datetime import datetime, timedelta\n","\n","# Give Colab access to Google Drive\n","from google.colab import drive\n","drive.mount('/content/drive')"],"execution_count":1,"outputs":[{"output_type":"stream","text":["Mounted at 
/content/drive\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"GdSewIIHNqia"},"source":["3. Next, update the filepath below. To find the correct filepath, you can click on the left sidebar (folder icon), navigate to the CSV file in the Class #14 data folder, click the \"...\" on the file, and select \"Copy path.\"\n","\n","4. Add backslashes (`\\`) in front of any apostrophes (single quotation marks) in the filepath, as necessary."]},{"cell_type":"code","metadata":{"id":"J0NhDnI8Nqia","executionInfo":{"status":"ok","timestamp":1605851325440,"user_tz":480,"elapsed":826,"user":{"displayName":"Ethan C Campbell","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GjCBYTiuomqOsCakND1k_5wj0kYvFY53Jt7kunt=s64","userId":"11255944928409084259"}}},"source":["# Filepath for NASA GISS global temperature CSV file\n","filepath = '/content/drive/MyDrive/OCEAN 215 - Autumn \\'20/OCEAN 215 - Autumn \\'20 - Course documents/Zoom class slides and notebooks/2020-11-19 - class #14 - data/NASA_GISS_global_temp.csv'"],"execution_count":2,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yEnKAIINNqia"},"source":["5. Run the code cells to load and display the CSV file using `Pandas`. First, we'll parse the years as integer numbers. Then, we'll create a second version with the years parsed as `datetime` objects.\n","\n","6. 
Notice that there are three columns: **Year** (which serves as the `index`, so Pandas reports only 2 data columns), **Temp_anomaly** (which has units of °C), and **Temp_anomaly_smooth** (a smoothed, or filtered, version of **Temp_anomaly**)."]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":883},"id":"ppZ9-lz0Nqia","executionInfo":{"status":"ok","timestamp":1605851328011,"user_tz":480,"elapsed":3381,"user":{"displayName":"Ethan C Campbell","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GjCBYTiuomqOsCakND1k_5wj0kYvFY53Jt7kunt=s64","userId":"11255944928409084259"}},"outputId":"eaa028aa-3052-4cfe-974e-bd6203fb5b6e"},"source":["# Load CSV file using Pandas with 2 different settings\n","global_temp = pd.read_csv(filepath,index_col='Year') # years loaded as integer numbers\n","global_temp_dt = pd.read_csv(filepath,index_col='Year',parse_dates=['Year']) # years loaded as Datetime objects\n","\n","# Display both versions of the data\n","display(global_temp)\n","display(global_temp_dt)"],"execution_count":3,"outputs":[{"output_type":"display_data","data":{"text/html":["
| Year | Temp_anomaly | Temp_anomaly_smooth |
|---|---|---|
| 1880 | -0.16 | -0.08 |
| 1881 | -0.07 | -0.12 |
| 1882 | -0.10 | -0.16 |
| 1883 | -0.16 | -0.19 |
| 1884 | -0.27 | -0.23 |
| ... | ... | ... |
| 2015 | 0.90 | 0.83 |
| 2016 | 1.01 | 0.87 |
| 2017 | 0.92 | 0.91 |
| 2018 | 0.85 | 0.95 |
| 2019 | 0.98 | 0.98 |

140 rows × 2 columns
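The difference between the two `read_csv` calls above can be checked on a small inline sample. This sketch uses `io.StringIO` as a stand-in for the Drive file (the rows are copied from the table above), so it runs without mounting Google Drive:

```python
import io
import pandas as pd

# Stand-in for NASA_GISS_global_temp.csv, using the first rows shown above
csv_text = (
    "Year,Temp_anomaly,Temp_anomaly_smooth\n"
    "1880,-0.16,-0.08\n"
    "1881,-0.07,-0.12\n"
    "1882,-0.10,-0.16\n"
)

# Years kept as plain integers
df_int = pd.read_csv(io.StringIO(csv_text), index_col='Year')

# Years parsed into datetime objects (e.g. 1880 becomes 1880-01-01)
df_dt = pd.read_csv(io.StringIO(csv_text), index_col='Year', parse_dates=['Year'])

print(df_int.index.dtype)  # int64
print(df_dt.index.dtype)   # datetime64[ns]
```

The datetime version is what lets `Matplotlib.dates` place the years sensibly on a time axis later on.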
| Year | Temp_anomaly | Temp_anomaly_smooth |
|---|---|---|
| 1880-01-01 | -0.16 | -0.08 |
| 1881-01-01 | -0.07 | -0.12 |
| 1882-01-01 | -0.10 | -0.16 |
| 1883-01-01 | -0.16 | -0.19 |
| 1884-01-01 | -0.27 | -0.23 |
| ... | ... | ... |
| 2015-01-01 | 0.90 | 0.83 |
| 2016-01-01 | 1.01 | 0.87 |
| 2017-01-01 | 0.92 | 0.91 |
| 2018-01-01 | 0.85 | 0.95 |
| 2019-01-01 | 0.98 | 0.98 |
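As a preview of the two tools this activity practices, here is a minimal sketch applying `scipy.stats.linregress` and `scipy.interpolate.interp1d` to just the 1880–1884 anomaly values from the table above; the actual activity would use the full `global_temp` DataFrame instead of these five hand-copied points:

```python
import numpy as np
from scipy import stats, interpolate

# Years and temperature anomalies (°C) from the first rows of the table above
years = np.array([1880, 1881, 1882, 1883, 1884])
anomaly = np.array([-0.16, -0.07, -0.10, -0.16, -0.27])

# Linear regression: the slope is the temperature trend in °C per year
result = stats.linregress(years, anomaly)
print(f"slope = {result.slope:.4f} °C/yr, r^2 = {result.rvalue**2:.3f}")

# 1-D interpolation: estimate the anomaly at a year between data points
f = interpolate.interp1d(years, anomaly)
print(f"anomaly at 1881.5 ≈ {float(f(1881.5)):.3f} °C")  # midway between 1881 and 1882
```

`linregress` also returns `rvalue`, `pvalue`, and `stderr`, which are useful for judging how well a straight line describes the 140-year record.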