{ "cells": [ { "cell_type": "markdown", "id": "ddb7d472", "metadata": {}, "source": [ "# Financial services use cases notebook\n", "This notebook outlines relevant SDK solutions relevant to financial institutions. The use cases below include the following:\n", "\n", "1. Crude oil storage (seaborne and onshore)\n", "2. Imports and exports\n", "3. Freight rates\n", "4. Average vessel speed time series\n", "5. Post-route ballast distribution" ] }, { "cell_type": "markdown", "id": "979c418e", "metadata": {}, "source": [ "## Import libraries\n", "The first step is to import the libraries required to query the data in question. To do this, run the below cell." ] }, { "cell_type": "code", "execution_count": null, "id": "0f917fd5", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T10:15:07.462550Z", "start_time": "2023-08-15T10:15:05.834051Z" } }, "outputs": [], "source": [ "from vortexasdk import VoyagesTimeseries, VoyagesSearchEnriched, CargoTimeSeries, OnshoreInventoriesTimeseries, FreightPricingTimeseries, Vessels, Geographies, Products\n", "from datetime import datetime\n", "import pandas as pd\n", "import numpy as np\n", "import time\n", "import plotly.express as px" ] }, { "cell_type": "markdown", "id": "f44faac0", "metadata": {}, "source": [ "## Search for IDs\n", "In the Vortexa SDK, we cannot refer to products, vessels or geographies by name. Each product, vessel or geography in our data has its own unique ID. These IDs are used to refer to the products, vessels or geographies in our database when making queries.\n", "\n", "In the code below, remove the hash tags in lines 3&4 or in lines 14&15 in order to run a search for the desired product or geography. You can change the search term to refine your search.\n", "\n", "Once you find the id, you can copy it and assign it as an object. This way, you can refer to the product or geography by the name you have given to it. In the examples below, we assign names to geographies: US Gulf, United States and Europe, as well as to the products: crude, LPG and CPP." ] }, { "cell_type": "code", "execution_count": null, "id": "eb485715", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T10:15:07.468238Z", "start_time": "2023-08-15T10:15:07.465165Z" } }, "outputs": [], "source": [ "# search for geography ids (remove hashtags to search)\n", "\n", "# full_length_df = Geographies().search(term=[\"europe\"]).to_df()\n", "# print(full_length_df.to_string(index=False))\n", "\n", "# Store geography ids\n", "\n", "us_gulf='e0d68b7a4ac37c97e3387471644d8b5c2a4be16a50092676ec3bec08408a2ebb'\n", "united_states='2d92cc08f22524dba59f6a7e340f132a9da0ce9573cca968eb8e3752ef17a963'\n", "europe='f39d455f5d38907394d6da3a91da4e391f9a34bd6a17e826d6042761067e88f4'\n", "\n", "# search for product ids (remove hashtags below to search)\n", "\n", "# product_search = Products().search(term=['Crude']).to_df()\n", "# print (product_search.to_string(index=False))\n", "\n", "# Store product ids\n", "crude='54af755a090118dcf9b0724c9a4e9f14745c26165385ffa7f1445bc768f06f11'\n", "cpp='b68cbb746f8b9098c50e2ba36bcad83001a53bd362e9031fb49085d02c36659c'\n", "lpg='364ccbb996c944055b479810a8e74863267885dc1b01407cb0f00ab26dafe1e1'" ] }, { "cell_type": "markdown", "id": "0483736e", "metadata": {}, "source": [ "## 1. Crude oil storage (seaborne and onshore)\n", "The code below extracts onshore crude inventories per week, as well as floating storage volumes per week. These are then added together to provide a picture of total oil supply per week. 
{ "cell_type": "markdown", "id": "0483736e", "metadata": {}, "source": [ "## 1. Crude oil storage (seaborne and onshore)\n", "The code below extracts onshore crude inventories per week, as well as floating storage volumes per week. These are then added together to provide a picture of total oil supply per week. This can be used in conjunction with crude oil price data to anticipate when price trends might change." ] },
{ "cell_type": "code", "execution_count": null, "id": "c683bc78", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T10:27:28.500244Z", "start_time": "2023-08-15T10:27:28.484385Z" } }, "outputs": [], "source": [ "# Function\n", "def weekly_crude_supply(start_y, start_m, start_d, unit):\n", "\n", "    # Define constants\n", "    crude='54af755a090118dcf9b0724c9a4e9f14745c26165385ffa7f1445bc768f06f11'\n", "    today=datetime.today()\n", "\n", "    # Pull onshore crude inventory data\n", "    inventories = OnshoreInventoriesTimeseries().search(\n", "        time_min=datetime(start_y, start_m, start_d),\n", "        time_max=today,\n", "        crude_confidence=['confirmed', 'probable'],\n", "        timeseries_frequency=\"week\",\n", "        timeseries_unit=unit,\n", "        timeseries_unit_operator=\"fill\").to_df()\n", "\n", "    # Convert dates to weeks and years for merging ('Date' holds the week label used as the merge key)\n", "    inventories[\"weeks\"] = inventories['key'].dt.strftime('W%U-%Y')\n", "    inventories[\"date\"] = inventories['key'].dt.strftime('%d/%m/%Y')\n", "    inventories=pd.concat([inventories['date'], inventories['weeks'], inventories['value']], axis=1)\n", "    inventories.columns=['date', 'Date', 'Crude inventories']\n", "\n", "    # Pull floating storage data\n", "    floating_storage = CargoTimeSeries().search(\n", "        filter_time_min=datetime(start_y, start_m, start_d),\n", "        filter_time_max=today,\n", "        filter_products=crude,\n", "        filter_activity=\"storing_state\",\n", "        timeseries_frequency=\"day\",\n", "        timeseries_unit=unit).to_df()\n", "\n", "    # Convert dates to weeks and years for aggregation\n", "    floating_storage[\"weeks\"] = floating_storage['key'].dt.strftime('W%U-%Y')\n", "\n", "    # Get a unique list of the dates in weekly format\n", "    dates = list(floating_storage[\"weeks\"].unique())\n", "\n", "    # Get the average floating storage volume in each week\n", "    values = []\n", "    for i in range(len(dates)):\n", "        g = floating_storage.loc[floating_storage['weeks'] == dates[i]]\n", "        vols = pd.to_numeric(g[\"value\"])\n", "        values.append(vols.mean())\n", "\n", "    dates = pd.DataFrame(dates)\n", "    values = pd.DataFrame(values)\n", "\n", "    floating_storage = pd.concat([dates, values], axis = 1)\n", "    floating_storage.columns = ['Date', 'Floating storage']\n", "\n", "    # Merge floating storage and inventory data\n", "    merged_df = pd.merge(inventories, floating_storage, on='Date', how='inner')\n", "    merged_df['Crude supply']=merged_df['Crude inventories'] + merged_df['Floating storage']\n", "\n", "    # Plot data\n", "    fig = px.line(\n", "        merged_df,\n", "        title=\"Global crude storage\",\n", "        x=\"Date\",\n", "        y=\"Crude supply\",\n", "        labels={\n", "            \"Date\":\"Date\",\n", "            \"Crude supply\":\"bbl\"\n", "        },\n", "    )\n", "    fig.update_layout(xaxis_rangeslider_visible = True)\n", "    fig.show()\n", "\n", "    return merged_df" ] },
{ "cell_type": "markdown", "id": "73a61898", "metadata": {}, "source": [ "Run the cell below to see a plot of the data." ] },
{ "cell_type": "code", "execution_count": null, "id": "12799cdf", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T10:27:31.708331Z", "start_time": "2023-08-15T10:27:29.554903Z" } }, "outputs": [], "source": [ "supply=weekly_crude_supply(2021, 1, 1, \"b\")" ] },
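{ "cell_type": "markdown", "id": "3c4d5e6f", "metadata": {}, "source": [ "To act on the price comparison mentioned above, you can join a weekly crude price series of your own to the supply series. The cell below is a minimal sketch: the file name and its `date`/`price` columns are assumptions about data you would supply, not something the SDK provides. Remove the hash symbols to run it." ] },
{ "cell_type": "code", "execution_count": null, "id": "4d5e6f70", "metadata": {}, "outputs": [], "source": [ "# Sketch: overlay a user-supplied weekly price series on the supply series\n", "# 'crude_prices.csv' is a hypothetical file with columns ['date', 'price'],\n", "# where 'date' uses the same dd/mm/YYYY format as supply['date']\n", "\n", "# prices = pd.read_csv('crude_prices.csv')\n", "# supply_vs_price = pd.merge(supply, prices, on='date', how='inner')\n", "# fig = px.line(supply_vs_price, x='date', y=['Crude supply', 'price'], title='Crude supply vs price')\n", "# fig.show()" ] },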
{ "cell_type": "markdown", "id": "5c920bd5", "metadata": {}, "source": [ "## 2. Imports and exports\n", "The code below provides a way to view import and export data for all products/regions within the Vortexa database. This can be used to monitor the imports and exports of major oil producing/consuming nations." ] },
{ "cell_type": "code", "execution_count": null, "id": "a8812053", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T10:57:08.086779Z", "start_time": "2023-08-15T10:57:08.071086Z" } }, "outputs": [], "source": [ "# Import/export functions\n", "def exports(origin, destination, product, start_y, start_m, start_d, unit, freq, title):\n", "\n", "    # Define constants\n", "    today=datetime.today()\n", "\n", "    # Pull export data\n", "    exports = CargoTimeSeries().search(\n", "        filter_time_min=datetime(start_y, start_m, start_d),\n", "        filter_time_max=today,\n", "        filter_origins=origin,\n", "        filter_destinations=destination,\n", "        filter_products=product,\n", "        filter_activity=\"loading_end\",\n", "        timeseries_frequency=freq,\n", "        timeseries_unit=unit).to_df()\n", "\n", "    # Format and compile data\n", "    exports=pd.concat([exports['key'], exports['value']], axis=1)\n", "    exports.columns=['Date', 'Cargo volume']\n", "\n", "    # Plot data\n", "    fig = px.bar(\n", "        exports,\n", "        title=title,\n", "        x=\"Date\",\n", "        y=\"Cargo volume\",\n", "        labels={\n", "            \"Date\":\"Date\",\n", "            \"Cargo volume\":unit\n", "        },\n", "    )\n", "    fig.update_layout(xaxis_rangeslider_visible = True)\n", "    fig.show()\n", "\n", "    return exports\n", "\n", "def imports(origin, destination, product, start_y, start_m, start_d, unit, freq, title):\n", "\n", "    # Define constants\n", "    today=datetime.today()\n", "\n", "    # Pull import data\n", "    imports = CargoTimeSeries().search(\n", "        filter_time_min=datetime(start_y, start_m, start_d),\n", "        filter_time_max=today,\n", "        filter_origins=origin,\n", "        filter_destinations=destination,\n", "        filter_products=product,\n", "        filter_activity=\"unloading_start\",\n", "        timeseries_frequency=freq,\n", "        timeseries_unit=unit).to_df()\n", "\n", "    # Format and compile data\n", "    imports=pd.concat([imports['key'], imports['value']], axis=1)\n", "    imports.columns=['Date', 'Cargo volume']\n", "\n", "    # Plot data\n", "    fig = px.bar(\n", "        imports,\n", "        title=title,\n", "        x=\"Date\",\n", "        y=\"Cargo volume\",\n", "        labels={\n", "            \"Date\":\"Date\",\n", "            \"Cargo volume\":unit\n", "        },\n", "    )\n", "    fig.update_layout(xaxis_rangeslider_visible = True)\n", "    fig.show()\n", "\n", "    return imports" ] },
{ "cell_type": "markdown", "id": "9b18cbf3", "metadata": {}, "source": [ "Run the cell below to view examples for US imports, US exports and Europe imports for crude oil." ] },
{ "cell_type": "code", "execution_count": null, "id": "383d8140", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T10:57:10.578935Z", "start_time": "2023-08-15T10:57:09.801064Z" } }, "outputs": [], "source": [ "us_crude_imports=imports(None, united_states, crude, 2021, 1, 1, 'bpd', 'month', 'US crude imports')\n", "us_crude_exports=exports(united_states, None, crude, 2021, 1, 1, 'bpd', 'month', 'US crude exports')\n", "europe_crude_imports=imports(None, europe, crude, 2021, 1, 1, 'bpd', 'month', 'Europe crude imports')\n" ] },
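{ "cell_type": "markdown", "id": "6f708192", "metadata": {}, "source": [ "One way to combine the two US series returned above (a minimal sketch, not an SDK feature): merge them on their shared `Date` column and compute net imports." ] },
{ "cell_type": "code", "execution_count": null, "id": "708192a3", "metadata": {}, "outputs": [], "source": [ "# Sketch: net US crude imports = imports minus exports, month by month\n", "us_balance = pd.merge(\n", "    us_crude_imports, us_crude_exports,\n", "    on='Date', suffixes=(' imports', ' exports'))\n", "us_balance['Net imports'] = us_balance['Cargo volume imports'] - us_balance['Cargo volume exports']\n", "\n", "fig = px.bar(us_balance, x='Date', y='Net imports', title='US net crude imports (bpd)')\n", "fig.show()" ] },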
] }, { "cell_type": "code", "execution_count": null, "id": "a16ac9b5", "metadata": { "ExecuteTime": { "end_time": "2023-08-14T13:40:52.311740Z", "start_time": "2023-08-14T13:40:52.306213Z" } }, "outputs": [], "source": [ "# Define the rates you want to see\n", "all_rates = ['TC1', 'TC2_37', 'TC5', 'TC6', 'TC7', 'TC8', 'TC9', 'TC10', 'TC11', 'TC12', 'TC14', 'TC15', 'TC16', 'TC17', 'TC18', 'TC19', 'TD1', 'TD2', 'TD3C', 'TD6', 'TD7', 'TD8', 'TD9', 'TD14', 'TD15', 'TD17', 'TD18', 'TD19', 'TD20', 'TD21', 'TD22', 'TD23', 'TD24', 'TD25', 'TD26', 'BLPG1', 'BLPG2', 'BLPG3']\n", "usg_mr_rates=['TC14', 'TC18']" ] }, { "cell_type": "code", "execution_count": null, "id": "04bdfc5c", "metadata": { "ExecuteTime": { "end_time": "2023-08-14T13:40:52.819674Z", "start_time": "2023-08-14T13:40:52.807152Z" } }, "outputs": [], "source": [ "# Helper function to remove NAs and match values to correct dates\n", "def match_dates(data):\n", " \n", " # Replace blanks with pandas NA values\n", " data.replace('', pd.NA, inplace=True)\n", " \n", " mark = []\n", " \n", " # Mark rows containing any NA values as 'drop'\n", " for i in range(len(data)):\n", " if data.iloc[i].notna().all() == False:\n", " mark = mark + ['drop']\n", " else:\n", " mark = mark + ['keep']\n", " \n", " data['mark'] = mark\n", " \n", " data = data[data['mark'] == 'keep']\n", " \n", " data = data.iloc[:, :-1]\n", " \n", " return data\n", "\n", "# Function to query a given list of freight rates and remove NA values using the above helper function\n", "def freight_rates(start_y, start_m, start_d, rates, unit, freq):\n", " \n", " # Make an initial empty data frame\n", " final=pd.DataFrame()\n", " \n", " # Obtain just the dates\n", " dates=(FreightPricingTimeseries().search(\n", " time_min=datetime(start_y, start_m, start_d),\n", " time_max=datetime.today(),\n", " routes=rates[0],\n", " breakdown_property=unit,\n", " breakdown_frequency=freq))\n", " \n", " # Correctly label date column\n", " dates=dates.to_df()\n", " dates=pd.concat([dates[\"key\"]], axis=1)\n", " final=dates\n", " final.columns=['Date']\n", "\n", " # Obtain freight rate values for each route specified\n", " for i in range(len(rates)):\n", " df=(FreightPricingTimeseries().search(\n", " time_min=datetime(start_y, start_m, start_d),\n", " time_max=datetime.today(),\n", " routes=rates[i],\n", " breakdown_property=unit,\n", " breakdown_frequency=freq))\n", "\n", " df=df.to_df()\n", " df2=df[\"value\"]\n", " final=pd.concat([final, df2], axis = 1)\n", " \n", " # Define column names\n", " names=['Date'] + rates\n", " final.columns=names\n", " \n", " # Format dates\n", " final['Date']=pd.to_datetime(final['Date'])\n", " final['Date']=final['Date'].dt.strftime(\"%d-%m-%Y\")\n", " \n", " # Remove NAs\n", " final=match_dates(final)\n", " \n", " # Plot data\n", " fig = px.line(\n", " final, \n", " x=\"Date\", \n", " y=names[1:],\n", " labels={\n", " \"Date\":\"Date\",\n", " \"value\":\"$/ton\"\n", " },\n", " )\n", " fig.update_layout(xaxis_rangeslider_visible = True)\n", " fig.show()\n", " return final\n", "\n" ] }, { "cell_type": "markdown", "id": "fcc2990e", "metadata": {}, "source": [ "Run the cell below to see a plot of the specified freight rates." 
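{ "cell_type": "markdown", "id": "8192a3b4", "metadata": {}, "source": [ "Daily rate prints can be noisy. The cell below is a small illustrative sketch that smooths the two US Gulf MR routes with a 7-day rolling mean before plotting; the window length is an arbitrary choice, not a recommendation." ] },
{ "cell_type": "code", "execution_count": null, "id": "92a3b4c5", "metadata": {}, "outputs": [], "source": [ "# Sketch: 7-day rolling mean of the USG MR routes queried above\n", "smoothed = usg_rates.copy()\n", "for route in usg_mr_rates:\n", "    smoothed[route] = pd.to_numeric(smoothed[route]).rolling(7).mean()\n", "\n", "fig = px.line(smoothed, x='Date', y=usg_mr_rates, title='USG MR rates, 7-day rolling mean ($/ton)')\n", "fig.show()" ] },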
{ "cell_type": "markdown", "id": "200df00e", "metadata": {}, "source": [ "## 4. Average vessel speed time series\n", "The code below demonstrates how to query and plot vessel speed data for various regions, products, vessel classes or vessel statuses. Ballast speeds in particular can serve as an indicator of how busy freight markets are. The example below shows the vessel speeds for MR tankers (the 'handymax' vessel class) heading towards the US Gulf." ] },
{ "cell_type": "code", "execution_count": null, "id": "c6587f7b", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T14:38:36.906273Z", "start_time": "2023-08-15T14:38:36.901139Z" } }, "outputs": [], "source": [ "# Average speed function\n", "def average_speed(start_y, start_m, start_d, origin, destination, vessels, prod, status, freq):\n", "\n", "    # Define constants\n", "    today=datetime.today()\n", "\n", "    # Pull speeds data\n", "    speeds = VoyagesTimeseries().search(\n", "        time_min=datetime(start_y, start_m, start_d),\n", "        time_max=today,\n", "        voyage_status=status,\n", "        origins=origin,\n", "        destinations=destination,\n", "        vessels=vessels,\n", "        latest_products=prod,\n", "        breakdown_frequency=freq,\n", "        breakdown_property=\"avg_speed\",\n", "        breakdown_unit_operator=\"avg\").to_df()\n", "\n", "    # Compile data and rename columns\n", "    speeds=pd.concat([speeds['key'], speeds['value']], axis=1)\n", "    speeds.columns=['Date', 'Average speed (kn)']\n", "\n", "    # Plot data\n", "    fig = px.line(\n", "        speeds,\n", "        x=\"Date\",\n", "        y='Average speed (kn)',\n", "        labels={\n", "            \"Date\":\"Date\",\n", "            \"value\":\"Average speed (kn)\"\n", "        },\n", "    )\n", "    fig.update_layout(xaxis_rangeslider_visible = True)\n", "    fig.show()\n", "\n", "    return speeds" ] },
{ "cell_type": "markdown", "id": "0165407e", "metadata": {}, "source": [ "Run the cell below to view a plot of the vessel speed data." ] },
{ "cell_type": "code", "execution_count": null, "id": "11504498", "metadata": { "ExecuteTime": { "end_time": "2023-08-15T15:13:43.632963Z", "start_time": "2023-08-15T15:13:41.789819Z" } }, "outputs": [], "source": [ "ballast_speed_to_us_gulf=average_speed(2022, 8, 14, None, us_gulf, 'handymax', cpp, 'ballast', 'day')" ] },
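{ "cell_type": "markdown", "id": "a3b4c5d6", "metadata": {}, "source": [ "To put the ballast speeds in context, the sketch below re-runs the same query with the 'laden' voyage status and plots the two series together. It simply reuses the function defined above with the same route and date range; whether laden speeds are a useful benchmark for your analysis is a judgement call." ] },
{ "cell_type": "code", "execution_count": null, "id": "b4c5d6e7", "metadata": {}, "outputs": [], "source": [ "# Sketch: compare ballast and laden average speeds on the same route\n", "laden_speed_to_us_gulf = average_speed(2022, 8, 14, None, us_gulf, 'handymax', cpp, 'laden', 'day')\n", "\n", "comparison = pd.merge(\n", "    ballast_speed_to_us_gulf, laden_speed_to_us_gulf,\n", "    on='Date', suffixes=(' ballast', ' laden'))\n", "fig = px.line(comparison, x='Date', y=['Average speed (kn) ballast', 'Average speed (kn) laden'])\n", "fig.show()" ] },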
{ "cell_type": "markdown", "id": "aa6775db", "metadata": {}, "source": [ "## 5. Post-route ballast distribution\n", "The code below captures the next ballast voyage of each vessel operating on a specified route. The output consists of counts and percentages of ballast voyages per shipping region, as well as a dataframe of the ballast voyages that make up those counts and percentages. This can be used to anticipate changes in fleet supply and behaviour." ] },
{ "cell_type": "code", "execution_count": null, "id": "098d3afd", "metadata": { "ExecuteTime": { "end_time": "2023-08-14T13:43:30.296023Z", "start_time": "2023-08-14T13:43:30.246592Z" } }, "outputs": [], "source": [ "# Post-voyage ballast distribution retrieval\n", "\n", "def monthly_post_route_ballast_distribution(origin, origin_excl, destination, destination_excl, vessel_class, product, product_excl, start_y, start_m, start_d, end_y, end_m, end_d, end):\n", "\n", "    # Pull the laden voyages which occurred in the required timeframe\n", "    route = VoyagesSearchEnriched().search(\n", "        origins = origin,\n", "        origins_excluded = origin_excl,\n", "        destinations = destination,\n", "        destinations_excluded = destination_excl,\n", "        time_min = datetime(start_y, start_m, start_d),\n", "        time_max = datetime(end_y, end_m, end_d, 23, 59, 59),\n", "        vessels = vessel_class,\n", "        products = product,\n", "        products_excluded = product_excl\n", "    )\n", "\n", "    # Convert to dataframe\n", "    route = pd.DataFrame(route)\n", "\n", "    # Sort by end_timestamp\n", "    route[\"end_timestamp\"] = pd.to_datetime(route[\"end_timestamp\"])\n", "    route.sort_values(by='end_timestamp', ascending = True, inplace=True)\n", "\n", "    # Remove null end_timestamps\n", "    route.dropna(subset=['end_timestamp'], inplace=True)\n", "\n", "    # Remove voyages that end past the specified end date\n", "    route = route[(route['end_timestamp'] <= end)]\n", "\n", "    # Remove voyages with no recorded next voyage\n", "    route = route.dropna(subset=['next_voyage_id'])\n", "\n", "    # Get the next voyage IDs\n", "    next_voyage_id_list = route[\"next_voyage_id\"].unique()\n", "    next_voyage_id_list = next_voyage_id_list.tolist()\n", "\n", "    # Get voyages corresponding to the next voyage IDs\n", "    post_route = VoyagesSearchEnriched().search(\n", "        voyage_id = next_voyage_id_list,\n", "        columns = \"all\")\n", "\n", "    # Convert this to dataframe\n", "    df = post_route.to_df()\n", "\n", "    # Sort them by their start dates (end date of laden voyage/discharge date)\n", "    df[\"START DATE\"] = pd.to_datetime(df[\"START DATE\"])\n", "    df.sort_values(by='START DATE', ascending = True, inplace=True)\n", "\n", "    # Relabel blank destinations as Undetermined\n", "    df['DESTINATION SHIPPING REGION'] = df['DESTINATION SHIPPING REGION'].replace([''],'Undetermined')\n", "\n", "    # Remove laden results\n", "    df = df.loc[df[\"VOYAGE STATUS\"] == 'Ballast']\n", "\n", "    # Store the unique destinations\n", "    dests = df[\"DESTINATION SHIPPING REGION\"].unique()\n", "    dests = dests.tolist()\n", "\n", "    # Count the number of times each ballast destination is declared\n", "    dest_counts = []\n", "    for i in range(len(dests)):\n", "        g = len(df.loc[df['DESTINATION SHIPPING REGION'] == dests[i]])\n", "        dest_counts.append(g)\n", "\n", "    # Sort destinations by count\n", "    dests = pd.DataFrame(dests)\n", "    dest_counts = pd.DataFrame(dest_counts)\n", "\n", "    ranked = pd.concat([dests, dest_counts], axis = 1)\n", "    ranked.columns = ['Destination', 'Count']\n", "    ranked.sort_values(by='Count', ascending = False, inplace=True)\n", "\n", "    # Get a list of ranked destinations\n", "    dests = ranked[\"Destination\"].tolist()\n", "\n", "    # Convert dates of ballast voyages to months and years for counting purposes\n", "    df[\"months\"] = df['START DATE'].dt.strftime('%m-%Y')\n", "\n", "    # Get a unique list of the dates in month/year format\n", "    dates = df[\"months\"].unique()\n", "    dates = dates.tolist()\n", "\n", "    # Count the number of times each ballast destination is declared in each month\n", "    counts2 = []\n", "    for j in range(len(dests)):\n", "        for i in range(len(dates)):\n", "            g = ((df['DESTINATION SHIPPING REGION'] == dests[j]) & (df['months'] == dates[i])).sum()\n", "            counts2.append(g)\n", "\n", "    # Specify interval to select destination counts\n", "    k = int(len(counts2)/len(dests))\n", "\n", "    # Add ballast destination counts to dataframe\n", "    raw_counts = pd.DataFrame()\n", "    raw_counts[\"Date\"] = dates\n", "    for i in range(len(dests)):\n", "        raw_counts[dests[i]] = counts2[k*i : k*(i+1)]\n", "\n", "    # Turn ballast destination counts into an array so you can calculate percentages\n", "    arr = np.array(raw_counts)\n", "\n", "    # Delete the dates\n", "    arr = np.delete(arr, 0, axis=1)\n", "\n", "    # Calculate percentages from the ballast destination counts\n", "    for i in range(len(arr[:,0])):\n", "        total = np.sum(arr[i,:])\n", "        for j in range(len(arr[0,:])):\n", "            arr[i, j] = arr[i, j]/total\n", "\n", "    props = pd.DataFrame(arr)\n", "\n", "    # Label the columns\n", "    props.columns = dests\n", "\n", "    # Add in the date as the first column\n", "    props.insert(0, 'Date', dates)\n", "\n", "    # Check that voyages are in fact all ballast\n", "    final_check = df[\"VOYAGE STATUS\"].unique()\n", "    print(\"All voyages are\", final_check)\n", "\n", "    # Change names of the output files here (filepath may need to be edited on Windows)\n", "    raw_counts.to_csv(\"~/Desktop/Monthly ballast distribution counts.csv\", index=False)\n", "    props.to_csv(\"~/Desktop/Monthly ballast distribution percentages.csv\", index=False)\n", "    df.to_csv(\"~/Desktop/Monthly ballast dist voyages.csv\", index=False)\n", "\n", "    return raw_counts, props, df" ] },
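{ "cell_type": "markdown", "id": "c5d6e7f8", "metadata": {}, "source": [ "The cell below shows one possible call, as a sketch rather than a recommendation: MR (handymax) CPP voyages from the US Gulf to Europe over the first half of 2022, reusing the IDs stored in the Search for IDs section. Note that `end` is compared against the voyages' `end_timestamp`, so the cut-off may need to carry the same timezone information as the timestamps the API returns." ] },
{ "cell_type": "code", "execution_count": null, "id": "d6e7f809", "metadata": {}, "outputs": [], "source": [ "# Sketch: next-voyage ballast distribution for USG-to-Europe CPP MRs, Jan-Jun 2022\n", "# The UTC cut-off is an assumption; align it with the returned end_timestamp values if needed\n", "cutoff = pd.Timestamp('2022-06-30 23:59:59', tz='UTC')\n", "\n", "counts, percentages, ballast_voyages = monthly_post_route_ballast_distribution(\n", "    us_gulf, None, europe, None, 'handymax', cpp, None,\n", "    2022, 1, 1, 2022, 6, 30, cutoff)" ] },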
{ "cell_type": "code", "execution_count": null, "id": "6d00478d", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.8" } }, "nbformat": 4, "nbformat_minor": 5 }