{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Time series: Processing Notebook\n", " \n", "
This Notebook is part of the Time series Data Package of Open Power System Data.\n", "
" ] }, { "cell_type": "markdown", "metadata": { "toc": "true" }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "bba7a58e-f3b2-4d3b-9617-d734c369084f" } }, "source": [ "# Introductory Notes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This Notebook handles missing data, performs calculations and aggragations and creates the output files." ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "ba6b62da-6cee-476b-a563-c945f3fd0f79" } }, "source": [ "# Settings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set version number and recent changes\n", "Executing this script till the end will create a new version of the data package.\n", "The Version number specifies the local directory for the data
\n", "We include a note on what has been changed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true }, "outputs": [], "source": [ "version = '2018-03-13'\n", "changes = '''Include data for 2017 where available from primary sources and\n", "include hourly load from ENTSO-E Power Statistics'''" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "2b838df4-f987-4ae4-a132-9c898e3ffab1" } }, "source": [ "## Import Python libraries" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "d4e82608-1ad1-4c51-8285-929a5a92c5b6" } }, "source": [ "This section: load libraries and set up a log.\n", "\n", "Note that the download module makes use of the [pycountry](https://pypi.python.org/pypi/pycountry) library that is not part of Anaconda. Install it with with `pip install pycountry` from the command line." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "init_cell": true, "nbpresent": { "id": "c0035fc6-ff1d-44d8-a3fd-b4c08f53be71" }, "scrolled": false }, "outputs": [], "source": [ "# Python modules\n", "from datetime import datetime, date, timedelta, time\n", "import pandas as pd\n", "import numpy as np\n", "import logging\n", "import json\n", "import sqlite3\n", "import yaml\n", "import itertools\n", "import os\n", "import pytz\n", "from shutil import copyfile\n", "\n", "# Skripts from time-series repository\n", "from timeseries_scripts.read import read\n", "from timeseries_scripts.download import download\n", "from timeseries_scripts.imputation import find_nan\n", "from timeseries_scripts.imputation import resample_markers, glue_markers, mark_own_calc\n", "from timeseries_scripts.make_json import make_json, get_sha_hash\n", "\n", "# Reload modules with execution of any code, to avoid having to restart\n", "# the kernel after editing timeseries_scripts\n", "%load_ext autoreload\n", "%autoreload 2\n", "\n", "# speed up tab completion in Jupyter Notebook\n", "%config Completer.use_jedi = False" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Display options" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true }, "outputs": [], "source": [ "# Allow pretty-display of multiple variables\n", "from IPython.core.interactiveshell import InteractiveShell\n", "InteractiveShell.ast_node_interactivity = \"all\"\n", "\n", "# Adjust the way pandas DataFrames a re displayed to fit more columns\n", "pd.reset_option('display.max_colwidth')\n", "pd.options.display.max_columns=60\n", "#pd.options.display.max_colwidth=5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set directories" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "init_cell": true }, "outputs": [], "source": [ "# make sure the working directory is this file's directory\n", "try:\n", " os.chdir(home_path)\n", "except NameError:\n", " home_path = os.path.realpath('.')\n", " \n", "# optionally, set a different directory to store outputs and raw data,\n", "# which will take up around 15 GB of disk space\n", "use_external_dir = True\n", "if use_external_dir:\n", " save_path = os.path.join('C:', os.sep, 'time series')\n", "else:\n", " save_path = home_path\n", "\n", "sources_yaml_path = os.path.join(home_path, 'input', 'sources.yml')\n", "areas_csv_path = os.path.join(home_path, 'input', 'areas.csv')\n", "data_path = os.path.join(save_path, version, 'original_data')\n", "out_path = os.path.join(save_path, version)\n", "temp_path = os.path.join(save_path, 
'temp')\n", "os.makedirs(out_path, exist_ok=True)\n", "os.makedirs(temp_path, exist_ok=True)\n", "\n", "# change to temp directory\n", "os.chdir(temp_path)\n", "os.getcwd()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up a log" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true }, "outputs": [], "source": [ "today = datetime.today().strftime('%Y-%m-%d')\n", "\n", "FORMAT = '%(asctime)s %(levelname)s %(message)s'\n", "DATEFORMAT = '%Y-%m-%d %H:%M:%S'\n", "formatter = logging.Formatter(fmt=FORMAT, datefmt=DATEFORMAT)\n", "logging.basicConfig(level=logging.INFO,\n", " format=FORMAT,\n", " datefmt=DATEFORMAT)\n", "\n", "logfile = logging.FileHandler(today + '.log')\n", "logfile.setFormatter(formatter)\n", "logfile.setLevel(logging.INFO)\n", "logger = logging.getLogger(__name__)\n", "logger.addHandler(logfile)\n", "\n", "logger.setLevel(logging.INFO)\n", "# For more detailed logging messages, replace 'INFO' with 'DEBUG'\n", "# (May slow down computation)." ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "8ed60fdb-284d-43db-b802-8e5c405b8e84" } }, "source": [ "## Select timerange" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This section: select the time range and the data sources for download and read. Default: all data sources implemented, full time range available.\n", "\n", "**Source parameters** are specified in [input/sources.yml](input/sources.yml), which describes, for each source, the datasets (such as wind and solar generation) alongside all the parameters necessary to execute the downloads.\n", "\n", "The option to perform downloading and reading of subsets is for testing only. To be able to run the script succesfully until the end, all sources have to be included, or otherwise the script will run into errors (i.e. the step where aggregate German timeseries are caculated requires data from all four German TSOs to be loaded)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to do this, specify the beginning and end of the interval for which to attempt the download.\n", "\n", "Type `None` to download all available data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true, "nbpresent": { "id": "f3008d47-ec89-40d0-85ee-776d110f3bb4" }, "scrolled": false }, "outputs": [], "source": [ "start_from_user = date(2006, 1, 1)\n", "end_from_user = date(2017, 12, 31)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Select download source" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instead of downloading from the sources, the complete raw data can be downloaded as a zip file from the OPSD Server. Advantages are:\n", "- much faster download\n", "- back up of raw data in case it is deleted from the server at the original source\n", "\n", "In order to do this, specify an archive version to use the raw data from that version that has been cached on the OPSD server as input. All data from that version will be downloaded - timerange and subset will be ignored.\n", "\n", "Type `None` to download directly from the original sources." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true }, "outputs": [], "source": [ "archive_version = None # i.e. '2016-07-14'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Select subset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optionally, specify a subset to download/read.
\n", "The next cell prints the available sources and datasets.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "scrolled": true }, "outputs": [], "source": [ "with open(sources_yaml_path, 'r') as f:\n", " sources = yaml.load(f.read())\n", "for k, v in sources.items():\n", " print(yaml.dump({k: list(v.keys())}, default_flow_style=False))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Copy from its output and paste to following cell to get the right format.
\n", "Type `subset = None` to include all data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true, "scrolled": false }, "outputs": [], "source": [ "subset = yaml.load('''\n", "insert_source_here:\n", "- insert_dataset1_from_that_source_here\n", "- insert_dataset2_here\n", "more_sources:\n", "- more_data_sets\n", "''')\n", "subset = None # to include all sources\n", "\n", "# need to exclude Elia data due to unclear copyright situation\n", "exclude = yaml.load('''\n", "- Elia\n", "''')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now eliminate sources and variables not in subset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true, "scrolled": true }, "outputs": [], "source": [ "with open(sources_yaml_path, 'r') as f:\n", " sources = yaml.load(f.read())\n", "if subset: # eliminate sources and variables not in subset\n", " sources = {source_name: {k: v\n", " for k, v in sources[source_name].items()\n", " if k in variable_list}\n", " for source_name, variable_list in subset.items()}\n", "if exclude: # eliminate sources and variables in exclude\n", " sources = {source_name: variable_dict\n", " for source_name, variable_dict in sources.items()\n", " if not source_name in exclude}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Download" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This section: download data. Takes about 1 hour to run for the complete data set (`subset=None`).\n", "\n", "First, a data directory is created on your local computer. Then, download parameters for each data source are defined, including the URL. These parameters are then turned into a YAML-string. Finally, the download is executed file by file.\n", "\n", "Each file is saved under it's original filename. Note that the original file names are often not self-explanatory (called \"data\" or \"January\"). The files content is revealed by its place in the directory structure." 
] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "nbpresent": { "id": "e08368cb-021f-453a-b201-a8d48bc8e4c4" } }, "source": [ "## Automatic download (for most sources)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "hidden": true, "nbpresent": { "id": "0c1eb987-6d5f-4e3d-9248-df80b9f37a49" }, "scrolled": false }, "outputs": [], "source": [ "download(sources, data_path,\n", " archive_version=archive_version,\n", " start_from_user=start_from_user,\n", " end_from_user=end_from_user,\n", " testmode=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "hidden": true, "nbpresent": { "id": "0c1eb987-6d5f-4e3d-9248-df80b9f37a49" }, "scrolled": false }, "outputs": [], "source": [ "download(sources, data_path,\n", " archive_version=archive_version,\n", " start_from_user=start_from_user,\n", " end_from_user=end_from_user,\n", " testmode=False)" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "bae7810a-d4af-4021-a9ab-c4b772a2bd53" } }, "source": [ "## Manual download" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Energinet.dk" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "a69f8271-e1fc-4e64-814c-4210fb80c006" } }, "source": [ "Go to http://osp.energinet.dk/_layouts/Markedsdata/framework/integrations/markedsdatatemplate.aspx.\n", "\n", "\n", "**Check The Boxes as specified below:**\n", "- Periode\n", " - Hent udtræk fra perioden: **01-01-2005** Til: **01-01-2018**\n", " - Select all months\n", "- Datakolonner\n", " - Elspot Pris, Valutakode/MWh: **Select all**\n", " - Produktion og forbrug, MWh/h: **Select all**\n", "- Udtræksformat\n", " - Valutakode: **EUR**\n", " - Decimalformat: **Engelsk talformat (punktum som decimaltegn**\n", " - Datoformat: **Andet datoformat (ÅÅÅÅ-MM-DD)**\n", " - Hent Udtræk: **Til Excel**\n", "\n", "Click **Hent Udtræk**\n", "\n", "You will receive a file `Markedsata.xls` of about 50 MB. Open the file in Excel. There will be a warning from Excel saying that file extension and content are in conflict. 
Select \"open anyways\" and and save the file as `.xlsx`.\n", "\n", "In order to be found by the read-function, place the downloaded file in the following subdirectory: \n", "**`{{data_path}}{{os.sep}}Energinet.dk{{os.sep}}prices_wind_solar{{os.sep}}2005-01-01_2017-12-31`**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### CEPS\n", "\n", "Go to http://www.ceps.cz/en/all-data#GenerationRES\n", "\n", "**check boxes as specified below:**\n", "\n", "DISPLAY DATA FOR: **Generation RES** \n", "TURN ON FILTER **checked** \n", "FILTER SETTINGS: \n", "- Set the date range\n", " - interval\n", " - from: **2012** to: **2018**\n", "- Agregation and data version\n", " - Aggregation: **Hour**\n", " - Agregation function: **average (AVG)**\n", " - Data version: **real data**\n", "- Filter\n", " - Type of power plant: **ALL**\n", "- Click **USE FILTER**\n", "- DOWNLOAD DATA: **DATA V TXT**\n", "\n", "You will receive a file `data.txt` of about 1.5 MB.\n", "\n", "In order to be found by the read-function, place the downloaded file in the following subdirectory: \n", "**`{{data_path}}{{os.sep}}CEPS{{os.sep}}wind_pv{{os.sep}}2012-01-01_2018-01-31`**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ENTSO-E Power Statistics\n", "\n", "Go to https://www.entsoe.eu/data/statistics/Pages/monthly_hourly_load.aspx\n", "\n", "**check boxes as specified below:**\n", "\n", "- Date From: **01-01-2016** Date To: **30-04-2016**\n", "- Country: **(Select All)**\n", "- Scale values to 100% using coverage ratio: **NO**\n", "- **View Report**\n", "- Click the Save symbol and select **Excel**\n", "\n", "You will receive a file `1.01 Monthly%5FHourly Load%5FValues%5FStatistical.xlsx` of about 1 MB.\n", "\n", "In order to be found by the read-function, place the downloaded file in the following subdirectory: \n", "**`{{os.sep}}original_data{{os.sep}}ENTSO-E Power Statistics{{os.sep}}load{{os.sep}}2016-01-01_2016-04-30`**\n", "\n", "The data covers the period from 01-01-2016 up to the present, but 4 months of data seems to be the maximum that interface supports for a single download request, so you have to repeat the download procedure for 4-Month periods to cover the whole period until the present." ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "d0b353e7-179d-4556-bdd2-270192c830fb" } }, "source": [ "# Read" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "d223207e-2b0e-49ec-8bf6-af2969ee5b28" } }, "source": [ "This section: Read each downloaded file into a pandas-DataFrame and merge data from different sources if it has the same time resolution. Takes ~15 minutes to run." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparations" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "1300cbb2-efde-4844-b08c-fed092023e38" } }, "source": [ "Set the title of the rows at the top of the data used to store metadata internally. The order of this list determines the order of the levels in the resulting output." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "init_cell": true, "nbpresent": { "id": "4dc92cc3-c01d-4c83-9252-80958edbe0f9" }, "scrolled": false }, "outputs": [], "source": [ "headers = ['region', 'variable', 'attribute', 'source', 'web', 'unit']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Read a prepared table containing meta data on the geographical areas" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true, "scrolled": true }, "outputs": [], "source": [ "areas = pd.read_csv(areas_csv_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "View the areas table" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "areas.loc[areas['area ID'].notnull(),:'EIC'].fillna('')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading loop" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Loop through sources and variables to do the reading.\n", "First read the originla CSV, Excel etc. files into pandas DataFrames." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "areas = pd.read_csv(areas_csv_path)\n", "\n", "# For each source in the source dictionary\n", "for source_name, source_dict in sources.items():\n", " # For each variable from source_name\n", " for variable_name, param_dict in source_dict.items():\n", "# variable_dir = os.path.join(data_path, source_name, variable_name)\n", " res_list = param_dict['resolution']\n", " url = param_dict['web']\n", " for res_key in res_list:\n", " df = read(data_path, areas, source_name, variable_name,\n", " url, res_key, headers,\n", " start_from_user=start_from_user,\n", " end_from_user=end_from_user)\n", "\n", " os.makedirs(res_key, exist_ok=True)\n", " filename = '_'.join([source_name, variable_name]) + '.pickle'\n", " df.to_pickle(os.path.join(res_key, filename))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then combine the DataFrames that have the same temporal resolution" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Create a dictionary of empty DataFrames to be populated with data\n", "data_sets = {'15min': pd.DataFrame(), '30min': pd.DataFrame(), '60min': pd.DataFrame()}\n", "entso_e = {'15min': pd.DataFrame(), '30min': pd.DataFrame(), '60min': pd.DataFrame()}\n", "for res_key in data_sets.keys():\n", " for filename in os.listdir(res_key):\n", " source_name = filename.split('_')[0]\n", " if subset and not source_name in subset.keys():\n", " continue\n", " logger.info(filename)\n", " df_portion = pd.read_pickle(os.path.join(res_key, filename))\n", "\n", " if source_name == 'ENTSO-E Transparency FTP':\n", " dfs = entso_e\n", " else:\n", " dfs = data_sets\n", "\n", " if dfs[res_key].empty:\n", " dfs[res_key] = df_portion\n", " elif not df_portion.empty:\n", " dfs[res_key] = dfs[res_key].combine_first(df_portion)\n", " else:\n", " logger.warning(filename + ' WAS EMPTY')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "for res_key, df in data_sets.items():\n", " logger.info(res_key + ': %s', df.shape)\n", "for res_key, df 
in entso_e.items():\n", " logger.info('ENTSO-E ' + res_key + ': %s', df.shape)" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "be24bc5f-e281-43bf-853a-e7130d20d2e8" } }, "source": [ "Display some rows of the dataframes to get a first impression of the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "data_sets['15min'].head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Save raw data" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "019c8ae9-8934-4074-adfd-278acd212152" } }, "source": [ "Save the DataFrames created by the read function to disk. This way you have the raw data to fall back to if something goes wrong in the remainder of this notebook without having to repeat the previous steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "data_sets['15min'].to_pickle('raw_data_15.pickle')\n", "data_sets['30min'].to_pickle('raw_data_30.pickle')\n", "data_sets['60min'].to_pickle('raw_data_60.pickle')\n", "entso_e['15min'].to_pickle('raw_entso_e_15.pickle')\n", "entso_e['30min'].to_pickle('raw_entso_e_30.pickle')\n", "entso_e['60min'].to_pickle('raw_entso_e_60.pickle')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Load the DataFrames saved above" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "data_sets = {}\n", "data_sets['15min'] = pd.read_pickle('raw_data_15.pickle')\n", "data_sets['30min'] = pd.read_pickle('raw_data_30.pickle')\n", "data_sets['60min'] = pd.read_pickle('raw_data_60.pickle')\n", "entso_e = {}\n", "entso_e['15min'] = pd.read_pickle('raw_entso_e_15.pickle')\n", "entso_e['30min'] = pd.read_pickle('raw_entso_e_30.pickle')\n", "entso_e['60min'] = pd.read_pickle('raw_entso_e_60.pickle')" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "026c6ad8-0422-4887-9361-8e45ae33e0c6" } }, "source": [ "# Processing" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "c446157a-7872-491b-a9c4-0336b87568a6" } }, "source": [ "This section: missing data handling, aggregation of sub-national to national data, aggregate 15'-data to 60'-resolution. Takes 30 minutes to run." ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "0c13e5eb-d8e3-425d-982e-2a25c364cdc5" } }, "source": [ "## Missing data handling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Interpolation" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "81d4d772-d3c7-4e07-a0a7-49afb3bdf8f6" } }, "source": [ "Patch missing data. At this stage, only small gaps (up to 2 hours) are filled by linear interpolation. This catches most of the missing data due to daylight saving time transitions, while leaving bigger gaps untouched.\n", "\n", "The exact locations of missing data are stored in the `nan_table` DataFrames.\n", "\n", "Where data has been interpolated, it is marked in a new column `interpolated_values`. For example, the comment `solar_DE-transnetbw_generation;` means that in the original data, there is a gap in the solar generation timeseries from TransnetBW in the time period where the marker appears." ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "533cd53f-eb8f-4699-bbf7-72ad1cf90325" } }, "source": [ "Patch the datasets and display the location of missing data in the original data. Takes ~5 minutes to run."
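, "\n", "\n", "The actual patching is implemented in `find_nan` from `timeseries_scripts.imputation`; purely to illustrate the idea, filling short gaps by linear interpolation in pandas looks roughly like this (a sketch, not the code that is executed here):\n", "\n", "```python\n", "import numpy as np\n", "import pandas as pd\n", "\n", "s = pd.Series([1.0, np.nan, np.nan, 4.0],\n", "              index=pd.date_range('2017-01-01', periods=4, freq='15min'))\n", "\n", "# Fill at most 8 consecutive NaNs (= 2 hours at 15 min resolution) by\n", "# linear interpolation; anything beyond that limit stays NaN\n", "patched = s.interpolate(method='linear', limit=8)\n", "```"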
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "nbpresent": { "id": "33602355-28cb-4d5c-b97e-fe59b9b48883" }, "scrolled": true }, "outputs": [], "source": [ "nan_tables = {}\n", "for res_key, df in data_sets.items():\n", " data_sets[res_key], nan_tables[res_key] = find_nan(\n", " df, res_key, headers, patch=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "nbpresent": { "id": "33602355-28cb-4d5c-b97e-fe59b9b48883" }, "scrolled": true }, "outputs": [], "source": [ "for res_key, df in entso_e.items():\n", " entso_e[res_key], nan_tables[res_key + ' ENTSO-E'] = find_nan(\n", " df, res_key, headers, patch=True)" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "345c31ec-fa0b-4859-a930-8b7c20f1e1bf" } }, "source": [ "Execute this to see an example of where the data has been patched." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "format": "row", "scrolled": true }, "outputs": [], "source": [ "data_sets['15min'][data_sets['15min']['interpolated_values'].notnull()].tail()" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "4c5c4893-5098-47df-8e68-51b13f5fd484" } }, "source": [ "Display the table of regions of missing values" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "nbpresent": { "id": "b5207ca6-4a72-40b6-8afd-3123f66ae323" }, "scrolled": true }, "outputs": [], "source": [ "nan_tables['60min']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can export the NaN-tables to Excel in order to inspect where there are NaNs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "#writer = pd.ExcelWriter('NaN_table.xlsx') \n", "for res_key, df in nan_tables.items():\n", " # df.to_excel(writer, res_key)\n", " # For some reason, output to Excel doesn't work currently, we resort to csv\n", " df.to_csv('NaN_table' + res_key + '.csv')\n", "#writer.save()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Save/Load the patched data sets" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "data_sets['15min'].to_pickle('patched_15.pickle')\n", "data_sets['30min'].to_pickle('patched_30.pickle')\n", "data_sets['60min'].to_pickle('patched_60.pickle')\n", "entso_e['15min'].to_pickle('patched_entso_e_15.pickle')\n", "entso_e['30min'].to_pickle('patched_entso_e_30.pickle')\n", "entso_e['60min'].to_pickle('patched_entso_e_60.pickle')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "data_sets = {}\n", "data_sets['15min'] = pd.read_pickle('patched_15.pickle')\n", "data_sets['30min'] = pd.read_pickle('patched_30.pickle')\n", "data_sets['60min'] = pd.read_pickle('patched_60.pickle')\n", "entso_e = {}\n", "entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')\n", "entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')\n", "entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "abd61a95-68e4-4bf2-9f1f-ef010b4cd288" } }, "source": [ "## Country specific calculations" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "0a2cc638-d6f5-4f7f-a044-fb61d20cf4ad" } }, "source": [ "### Calculate onshore wind generation for 
German TSOs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For 50 Hertz, it is already in the data.\n", "For TenneT, it is calculated by substracting offshore from total generation.\n", "For Amprion and TransnetBW, onshore wind generation is just total wind generation.\n", "Takes <1 second to run." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "# Some of the following operations require the Dataframes to be lexsorted in\n", "# the columns\n", "for res_key, df in data_sets.items():\n", " df.sort_index(axis=1, inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "for area, source, url in zip(\n", " ['DE_amprion', 'DE_tennet', 'DE_transnetbw'],\n", " ['Amprion', 'TenneT', 'TransnetBW'],\n", " ['http://www.amprion.net/en/wind-feed-in',\n", " 'http://www.tennettso.de/site/en/Transparency/publications/network-figures/actual-and-forecast-wind-energy-feed-in',\n", " 'https://www.transnetbw.com/en/transparency/market-data/key-figures']):\n", " \n", " new_col_header = {\n", " 'variable': 'wind_onshore',\n", " 'region': '{area}',\n", " 'attribute': 'generation_actual',\n", " 'source': '{source}',\n", " 'web': '{url}',\n", " 'unit': 'MW'\n", " }\n", "\n", " if area == 'DE_tennet':\n", " colname = ('DE_tennet', 'wind_offshore', 'generation_actual', 'TenneT')\n", " offshore = data_sets['15min'].loc[:, colname]\n", " else:\n", " offshore = 0\n", "\n", " data_sets['15min'][\n", " tuple(new_col_header[level].format(area=area, source=source, url=url)\n", " for level in headers)\n", " ] = (data_sets['15min'][(area, 'wind', 'generation_actual', source)] - offshore)\n", "\n", " # Sort again\n", " data_sets['15min'].sort_index(axis=1, inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Calculate aggregate wind capacity for Germany (on + offshore)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Apart from being interesting on it's own, this is also required to calculate an aggregated wind-profile for Germany" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "new_col_header = {\n", " 'variable': 'wind',\n", " 'region': 'DE',\n", " 'attribute': 'capacity',\n", " 'source': 'own calculation based on BNetzA and netztransparenz.de',\n", " 'web': 'http://data.open-power-system-data.org/renewable_power_plants',\n", " 'unit': 'MW'\n", "}\n", "new_col_header = tuple(new_col_header[level] for level in headers)\n", "\n", "data_sets['15min'][new_col_header] = (\n", " data_sets['15min']\n", " .loc[:, ('DE', ['wind_onshore', 'wind_offshore'], 'capacity')]\n", " .sum(axis=1, skipna=False))\n", "\n", "# Sort again\n", "data_sets['15min'].sort_index(axis=1, inplace=True)" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "cbcd1b7c-6c9c-4d1a-ab70-28276b421805" } }, "source": [ "### Aggregate German data from individual TSOs and calculate availabilities/profiles" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "9ab0116d-402b-444b-acf2-925388a9886b" } }, "source": [ "The wind and solar in-feed data for the 4 German balancing areas is summed up and stored in in new columns, which are then used to calculate profiles, that is, the share of wind/solar capacity producing at a given time. The column headers are created in the fashion introduced in the read script. 
Takes 5 seconds to run." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "nbpresent": { "id": "26a83b1c-682d-43ec-bb70-12a096089508" }, "scrolled": false }, "outputs": [], "source": [ "for tech in ['solar', 'wind', 'wind_onshore', 'wind_offshore']:\n", " for attribute in ['generation_actual']: # we could also include 'generation_forecast'\n", " sum_col = data_sets['15min'].loc(axis=1)[(\n", " ['DE_50hertz', 'DE_amprion', 'DE_tennet', 'DE_transnetbw'],\n", " tech, attribute)].sum(axis=1, skipna=False).to_frame()\n", "\n", " # Calculate aggregate German generation\n", " sum_col.columns = pd.MultiIndex.from_tuples(tuples, names=headers)\n", "\n", " if attribute == 'generation_actual':\n", " # Calculate the profile column\n", " profile_col = (sum_col.values /\n", " data_sets['15min']['DE', tech, 'capacity']).round(4)\n", "\n", " # Create a new MultiIndex and append profile to the dataset\n", " new_col_header = {\n", " 'variable': '{tech}',\n", " 'region': 'DE',\n", " 'attribute': 'profile',\n", " 'source': 'own calculation based on German TSOs, BNetzA and netztranzparenz.de',\n", " 'web': '',\n", " 'unit': 'fraction'\n", " }\n", " tuples = [tuple(new_col_header[level].format(tech=tech)\n", " for level in headers)]\n", " profile_col.columns = pd.MultiIndex.from_tuples(tuples, names=headers)\n", " data_sets['15min'] = data_sets['15min'].combine_first(profile_col)\n", " \n", " else:\n", " pass\n", "\n", " # Create a new MultiIndex \n", " new_col_header = {\n", " 'variable': '{tech}',\n", " 'region': 'DE',\n", " 'attribute': '{attribute}',\n", " 'source': 'own calculation based on German TSOs',\n", " 'web': '',\n", " 'unit': 'MW'\n", " }\n", " tuples = [tuple(new_col_header[level].format(\n", " tech=tech, attribute=attribute) for level in headers)]\n", " \n", " # append aggregate German generation to the dataset after rounding\n", " data_sets['15min'] = data_sets['15min'].combine_first(sum_col.round(0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another savepoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "data_sets['15min'].to_pickle('calc_15.pickle')\n", "data_sets['30min'].to_pickle('calc_30.pickle')\n", "data_sets['60min'].to_pickle('calc_60.pickle')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "data_sets = {}\n", "data_sets['15min'] = pd.read_pickle('calc_15.pickle')\n", "data_sets['30min'] = pd.read_pickle('calc_30.pickle')\n", "data_sets['60min'] = pd.read_pickle('calc_60.pickle')\n", "entso_e = {}\n", "entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')\n", "entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')\n", "entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fill columns not retrieved directly from TSO webites with ENTSO-E Transparency data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "nbpresent": { "id": "c19e3eb9-feca-4e94-9035-580aa07189ba" }, "scrolled": false }, "outputs": [], "source": [ "for res_key, df in entso_e.items():\n", " # Combine with TSO data\n", " \n", " # Copy entire 30min data from ENTSO-E\n", " if data_sets[res_key].empty:\n", " data_sets[res_key] = df\n", "\n", " else:\n", " # Keep only region, variable, attribute in MultiIndex for comparison \n", " data_cols = data_sets[res_key].columns.droplevel(['source', 'web', 'unit'])\n", 
" # Compare columns from ENTSO-E against ours, keep which we don't have yet\n", " tuples = [col for col in df.columns if not col[:3] in data_cols]\n", " add_cols = pd.MultiIndex.from_tuples(tuples, names=headers)\n", " data_sets[res_key] = data_sets[res_key].combine_first(df[add_cols])\n", "\n", " # Add the ENTSO-E markers (but only for the columns actually copied)\n", " add_cols = ['_'.join(col[:3]) for col in tuples]\n", " # Spread marker column out over a DataFrame for easiser comparison\n", " # Filter out everey secon colum, which contains the delimiter \"|\"\n", " # fromthe marker\n", " marker_table = (df['interpolated_values'].str.split(' | ', expand=True)\n", " .filter(regex='^\\d*[02468]$', axis='columns'))\n", " # Replace cells with markers marking columns not copied with NaNs\n", " marker_table[~marker_table.isin(add_cols)] = np.nan\n", " \n", " for col_name, col in marker_table.iteritems():\n", " if col_name == 0:\n", " marker_entso_e = col\n", " else:\n", " marker_entso_e = glue_markers(marker_entso_e, col)\n", " \n", " # Glue ENTSO-E marker onto our old marker\n", " marker = data_sets[res_key]['interpolated_values']\n", " data_sets[res_key].loc[:, 'interpolated_values'] = glue_markers(\n", " marker, df['interpolated_values'].reindex(marker.index))" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "4e311f14-5124-47fd-9f94-0a866e9a6f71" } }, "source": [ "## Resample higher frequencies to 60'" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "882bf163-9e93-44a4-9c51-a3ce0c32b4f4" } }, "source": [ "Some data comes in 15 or 30-minute intervals (i.e. German or British renewable generation), other in 60-minutes (i.e. load data from ENTSO-E and Prices). We resample the 15 and 30-minute data to hourly resolution and append it to the 60-minutes dataset.\n", "\n", "The marker column is resampled separately in such a way that all information on where data has been interpolated is preserved.\n", "\n", "The `.resample('H').mean()` methods calculates the means from the values for 4 quarter hours [:00, :15, :30, :45] of an hour values, inserts that for :00 and drops the other 3 entries. Takes 15 seconds to run." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "#marker_60 = data_sets['60min']['interpolated_values']\n", "for res_key, df in data_sets.items():\n", " if res_key == '60min':\n", " break\n", " # Resample first the marker column \n", " marker_resampled = df['interpolated_values'].groupby(\n", " pd.Grouper(freq='60Min', closed='left', label='left')\n", " ).agg(resample_markers, drop_region='DE_AT_LU')\n", " marker_resampled = marker_resampled.reindex(data_sets['60min'].index)\n", "\n", " # Glue condensed 15 min marker onto 60 min marker\n", " data_sets['60min'].loc[:, 'interpolated_values'] = glue_markers(\n", " data_sets['60min']['interpolated_values'],\n", " marker_resampled.reindex(data_sets['60min'].index))\n", "\n", " # Drop DE_AT_LU bidding zone data from the 15 minute resolution data to\n", " # be resampled since it is already provided in 60 min resolution by \n", " # ENTSO-E Transparency\n", " df = df.drop('DE_AT_LU', axis=1, errors='ignore')\n", " \n", " # Do the resampling\n", " resampled = df.resample('H').mean()\n", " resampled.columns = resampled.columns.map(mark_own_calc)\n", " resampled.columns.names = headers\n", " \n", " # Round the resampled columns\n", " for col in resampled.columns:\n", " if col[2] == 'profile': \n", " resampled.loc[:,col] = resampled.loc[:,col].round(4)\n", " else:\n", " resampled.loc[:,col] = resampled.loc[:,col].round(0)\n", " \n", " data_sets['60min'] = data_sets['60min'].combine_first(resampled)" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "dea911fb-326f-46a6-a009-57af147d4be4" } }, "source": [ "## Insert a column with Central European (Summer-)time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The index column of th data sets defines the start of the timeperiod represented by each row of that data set in **UTC** time. We include an additional column for the **CE(S)T** Central European (Summer-) Time, as this might help aligning the output data with other data sources." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "init_cell": true, "scrolled": false }, "outputs": [], "source": [ "info_cols = {'utc': 'utc_timestamp',\n", " 'cet': 'cet_cest_timestamp',\n", " 'marker': 'interpolated_values'}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "nbpresent": { "id": "b1550779-53cc-498d-980b-7aa253974c91" }, "scrolled": false }, "outputs": [], "source": [ "for res_key, df in data_sets.items():\n", " if df.empty:\n", " continue\n", " df.index.rename(info_cols['utc'], inplace=True)\n", " df.insert(0, info_cols['cet'],\n", " df.index.tz_localize('UTC').tz_convert('Europe/Brussels'))" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "4e625ea7-c448-45fe-85b2-026157ad24c0" } }, "source": [ "# Create a final savepoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "data_sets['15min'].to_pickle('final_15.pickle')\n", "data_sets['30min'].to_pickle('final_30.pickle')\n", "data_sets['60min'].to_pickle('final_60.pickle')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "os.chdir(temp_path)\n", "data_sets = {}\n", "data_sets['15min'] = pd.read_pickle('final_15.pickle')\n", "data_sets['30min'] = pd.read_pickle('final_30.pickle')\n", "data_sets['60min'] = pd.read_pickle('final_60.pickle')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Show the colum names contained inthe final DataFrame in a table" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "col_info = pd.DataFrame()\n", "df = data_sets['60min']\n", "for level in df.columns.names:\n", " col_info[level] = df.columns.get_level_values(level)\n", " \n", "col_info" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "fa919796-a7f6-4556-aeed-181ddc6028ac" } }, "source": [ "# Write data to disk" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "1b3c9199-ce14-4487-939d-656f60c14df3" } }, "source": [ "This section: Save as [Data Package](http://data.okfn.org/doc/tabular-data-package) (data in CSV, metadata in JSON file). All files are saved in the directory of this notebook. Alternative file formats (SQL, XLSX) are also exported. Takes about 1 hour to run." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Limit time range\n", "Cut off the data outside of `[start_from_user:end_from_user]`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "for res_key, df in data_sets.items():\n", " # In order to make sure that the respective time period is covered in both\n", " # UTC and CE(S)T, we set the start in CE(S)T, but the end in UTC\n", " if start_from_user:\n", " start_from_user = (\n", " pytz.timezone('Europe/Brussels')\n", " .localize(datetime.combine(start_from_user, time()))\n", " .astimezone(pytz.timezone('UTC')))\n", " if end_from_user:\n", " end_from_user = (\n", " pytz.timezone('UTC')\n", " .localize(datetime.combine(end_from_user, time()))\n", " # Appropriate offset to inlude the end of period\n", " + timedelta(days=1, minutes=-int(res_key[:2])))\n", " # Then cut off the data_set\n", " data_sets[res_key] = df.loc[start_from_user:end_from_user, :]" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "47c1efa2-d93f-4d13-81d7-8f64dadeff3f" } }, "source": [ "## Different shapes" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "a753ac43-a0f4-44bc-a89d-1ccaaf48289a" } }, "source": [ "Data are provided in three different \"shapes\": \n", "- SingleIndex (easy to read for humans, compatible with datapackage standard, small file size)\n", " - Fileformat: CSV, SQLite\n", "- MultiIndex (easy to read into GAMS, not compatible with datapackage standard, small file size)\n", " - Fileformat: CSV, Excel\n", "- Stacked (compatible with data package standard, large file size, many rows, too many for Excel) \n", " - Fileformat: CSV\n", "\n", "The different shapes need to be created internally befor they can be saved to files. Takes about 1 minute to run." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "nbpresent": { "id": "769225c6-31f5-4db8-8d91-32a3f983489c" }, "scrolled": true }, "outputs": [], "source": [ "data_sets_singleindex = {}\n", "data_sets_multiindex = {}\n", "data_sets_stacked = {}\n", "for res_key, df in data_sets.items():\n", " if df.empty:\n", " continue\n", "\n", "# # Round floating point numbers to 2 digits\n", "# for col_name, col in df.iteritems():\n", "# if col_name[0] in info_cols.values():\n", "# pass\n", "# elif col_name[2] == 'profile':\n", "# df[col_name] = col.round(4)\n", "# else:\n", "# df[col_name] = col.round(3)\n", " \n", " # MultIndex\n", " data_sets_multiindex[res_key + '_multiindex'] = df\n", " \n", " # SingleIndex\n", " df_singleindex = df.copy()\n", " # use first 3 levels of multiindex to create singleindex\n", " df_singleindex.columns = [\n", " col_name[0] if col_name[0] in info_cols.values()\n", " else '_'.join([level for level in col_name[0:3] if not level == ''])\n", " for col_name in df.columns.values]\n", "\n", " data_sets_singleindex[res_key + '_singleindex'] = df_singleindex\n", " \n", " # Stacked\n", " stacked = df.copy().drop(columns=info_cols['cet'], level=0)\n", " stacked.columns = stacked.columns.droplevel(['source', 'web', 'unit'])\n", " # Concatrenate all columns below each other (=\"stack\").\n", " # df.transpose().stack() is faster than stacking all column levels \n", " # seperately\n", " stacked = stacked.transpose().stack(dropna=True).to_frame(name='data')\n", " data_sets_stacked[res_key + '_stacked'] = stacked" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "84f1822e-3aa6-42c4-a424-5dc5ab6fa56f" } }, "source": [ "## Write to SQL-database" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "3bb78fa9-5309-46b6-b945-68dcb654a567" } }, "source": [ "This file format is required for the filtering function on the OPSD website. This takes ~3 minutes to complete." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "nbpresent": { "id": "fd35212c-ec5c-4fcf-9897-4608742d1bf8" }, "scrolled": false }, "outputs": [], "source": [ "for res_key, df in data_sets_singleindex.items():\n", " table = 'time_series_' + res_key\n", " df = df.copy()\n", " df.index = df.index.strftime('%Y-%m-%dT%H:%M:%SZ')\n", " cet_col_name = info_cols['cet']\n", " df[cet_col_name] = (df[cet_col_name].dt.strftime('%Y-%m-%dT%H:%M:%S%z'))\n", " df.to_sql(table, sqlite3.connect('time_series.sqlite'),\n", " if_exists='replace', index_label=info_cols['utc'])" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "143b39aa-dd2e-4923-be56-bb6c4706837d" } }, "source": [ "## Write to Excel" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "ef7763f5-9bbc-40b8-8cee-829131b40336" } }, "source": [ "Writing the full tables to Excel takes extremely long. As a workaround, only the first 5 rows are exported. The rest of the data can than be inserted manually from the `_multindex.csv` files." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "nbpresent": { "id": "9e84c62a-7bd6-4319-89dd-409574dda234" }, "scrolled": false }, "outputs": [], "source": [ "writer = pd.ExcelWriter('time_series1.xlsx')\n", "for res_key, df in data_sets_multiindex.items():\n", " df.head().to_excel(writer, res_key.split('_')[0], float_format='%.2f',\n", " merge_cells=True)\n", " # merge_cells=False doesn't work properly with multiindex\n", "writer.save()" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "454ee5f5-e8f1-4088-94e9-e846f48ee75b" } }, "source": [ "## Write to CSV" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "279e3306-6dea-454d-bec8-0c826509ecd1" } }, "source": [ "This takes about 10 minutes to complete." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "nbpresent": { "id": "89449c49-608d-488d-8bc8-077c64bc26c7" }, "scrolled": true }, "outputs": [], "source": [ "# itertoools.chain() allows iterating over multiple dicts at once\n", "for res_stacking_key, df in itertools.chain(\n", " data_sets_singleindex.items(),\n", " data_sets_multiindex.items(),\n", " data_sets_stacked.items()\n", " ):\n", " \n", " if not res_stacking_key.startswith('60min'):\n", " continue\n", " \n", " df = df.copy()\n", " \n", " # convert the format of the cet_cest-timestamp to ISO-8601\n", " if not (res_stacking_key.split('_')[1] == 'stacked':\n", "# or type(df.iloc[0,0]) == str):\n", " df.iloc[:,0] = df.iloc[:,0].dt.strftime('%Y-%m-%dT%H:%M:%S%z')\n", " filename = 'time_series_' + res_stacking_key + '.csv'\n", " df.to_csv(filename, float_format='%.4f',\n", " date_format='%Y-%m-%dT%H:%M:%SZ')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Create metadata" ] }, { "cell_type": "markdown", "metadata": { "nbpresent": { "id": "38b6a454-31c4-4112-971f-da8271131d54" } }, "source": [ "This section: create the metadata, both general and column-specific. All metadata we be stored as a JSON file. Takes 10s to run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Change to out_path directory\n", "os.chdir(out_path)\n", "os.getcwd()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "make_json(data_sets, info_cols, version, changes, headers, areas)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Write checksums.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We publish SHA-checksums for the outputfiles on GitHub to allow verifying the integrity of outputfiles on the OPSD server." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "os.chdir(out_path)\n", "files = os.listdir(out_path)\n", "\n", "# Create checksums.txt in the output directory\n", "with open('checksums.txt', 'w') as f:\n", " for file_name in files:\n", " if file_name.split('.')[-1] in ['csv', 'sqlite', 'xlsx']:\n", " file_hash = get_sha_hash(file_name)\n", " f.write('{},{}\\n'.format(file_name, file_hash))\n", "\n", "# Copy the file to root directory from where it will be pushed to GitHub,\n", "# leaving a copy in the version directory for reference\n", "copyfile('checksums.txt', os.path.join(home_path, 'checksums.txt'))" ] } ], "metadata": { "anaconda-cloud": {}, "celltoolbar": "Initialization Cell", "kernelspec": { "display_name": "Python [conda env:opsd_time_series]", "language": "python", "name": "conda-env-opsd_time_series-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "nbpresent": { "slides": { "f6b300bf-88b5-4dea-951e-c926a9ea8287": { "id": "f6b300bf-88b5-4dea-951e-c926a9ea8287", "prev": "f96dd4bc-93a6-4014-b85f-a43061cf5688", "regions": { "dc486e18-7547-4610-99c0-55dfb5553f62": { "attrs": { "height": 1, "width": 1, "x": 0, "y": 0 }, "content": { "cell": "c0035fc6-ff1d-44d8-a3fd-b4c08f53be71", "part": "source" }, "id": "dc486e18-7547-4610-99c0-55dfb5553f62" } } }, "f96dd4bc-93a6-4014-b85f-a43061cf5688": { "id": "f96dd4bc-93a6-4014-b85f-a43061cf5688", "prev": null, "regions": { "657c3ad3-2fcf-4c8e-a527-de3d0a46fa4e": { "attrs": { "height": 1, "width": 1, "x": 0, "y": 0 }, "content": { "cell": "1562965a-7d74-4c1c-8251-4d82847f294a", "part": "source" }, "id": "657c3ad3-2fcf-4c8e-a527-de3d0a46fa4e" } } } }, "themes": {} }, "notify_time": "10", "toc": { "nav_menu": { "height": "512px", "width": "252px" }, "number_sections": true, "sideBar": true, "skip_h1_title": false, "toc_cell": true, "toc_position": { "height": "1241px", "left": "0px", "right": "2127px", "top": "107px", "width": "433px" }, "toc_section_display": "block", "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "position": { "height": "1101px", "left": "2009px", "right": "67px", "top": "127px", "width": "484px" }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }