{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Memory management utils" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Utility functions for memory management. Currently primarily for GPU." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.utils.mem import * " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get[source][test]

\n", "\n", "> gpu_mem_get(**`id`**=***`None`***)\n", "\n", "

Tests found for gpu_mem_get:

To run tests please refer to this guide.

\n", "\n", "get total, used and free memory (in MBs) for gpu `id`. if `id` is not passed, currently selected torch device is used " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`gpu_mem_get`](/utils.mem.html#gpu_mem_get)\n", "\n", "* for gpu returns `GPUMemory(total, free, used)`\n", "* for cpu returns `GPUMemory(0, 0, 0)`\n", "* for invalid gpu id returns `GPUMemory(0, 0, 0)`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get_all[source][test]

\n", "\n", "> gpu_mem_get_all()\n", "\n", "

Tests found for gpu_mem_get_all:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_all [source]

To run tests please refer to this guide.

\n", "\n", "get total, used and free memory (in MBs) for each available gpu " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get_all)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all)\n", "* for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`\n", "* for cpu returns `[]`\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get_free[source][test]

\n", "\n", "> gpu_mem_get_free()\n", "\n", "

No tests found for gpu_mem_get_free. To contribute a test please refer to this guide and this discussion.

\n", "\n", "get free memory (in MBs) for the currently selected gpu id, w/o emptying the cache " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get_free)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get_free_no_cache[source][test]

\n", "\n", "> gpu_mem_get_free_no_cache()\n", "\n", "

No tests found for gpu_mem_get_free_no_cache. To contribute a test please refer to this guide and this discussion.

\n", "\n", "get free memory (in MBs) for the currently selected gpu id, after emptying the cache " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get_free_no_cache)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get_used[source][test]

\n", "\n", "> gpu_mem_get_used()\n", "\n", "

Tests found for gpu_mem_get_used:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_measure_consumed_reclaimed [source]

To run tests please refer to this guide.

\n", "\n", "get used memory (in MBs) for the currently selected gpu id, w/o emptying the cache " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get_used)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get_used_no_cache[source][test]

\n", "\n", "> gpu_mem_get_used_no_cache()\n", "\n", "

No tests found for gpu_mem_get_used_no_cache. To contribute a test please refer to this guide and this discussion.

\n", "\n", "get used memory (in MBs) for the currently selected gpu id, after emptying the cache " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get_used_no_cache)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`gpu_mem_get_used_no_cache`](/utils.mem.html#gpu_mem_get_used_no_cache)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_mem_get_used_fast[source][test]

\n", "\n", "> gpu_mem_get_used_fast(**`gpu_handle`**)\n", "\n", "

No tests found for gpu_mem_get_used_fast. To contribute a test please refer to this guide and this discussion.

\n", "\n", "get used memory (in MBs) for the currently selected gpu id, w/o emptying the cache, and needing the `gpu_handle` arg " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_get_used_fast)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`gpu_mem_get_used_fast`](/utils.mem.html#gpu_mem_get_used_fast)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

gpu_with_max_free_mem[source][test]

\n", "\n", "> gpu_with_max_free_mem()\n", "\n", "

Tests found for gpu_with_max_free_mem:

  • pytest -sv tests/test_utils_mem.py::test_gpu_with_max_free_mem [source]

To run tests please refer to this guide.

\n", "\n", "get [gpu_id, its_free_ram] for the first gpu with highest available RAM " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_with_max_free_mem)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`gpu_with_max_free_mem`](/utils.mem.html#gpu_with_max_free_mem):\n", "* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`\n", "* for cpu returns: `None, 0`\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

preload_pytorch[source][test]

\n", "\n", "> preload_pytorch()\n", "\n", "

No tests found for preload_pytorch. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(preload_pytorch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`preload_pytorch`](/utils.mem.html#preload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by CUDA context." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class GPUMemory[test]

\n", "\n", "> GPUMemory(**`total`**, **`free`**, **`used`**) :: `tuple`\n", "\n", "

No tests found for GPUMemory. To contribute a test please refer to this guide and this discussion.

\n", "\n", "GPUMemory(total, free, used) " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemory, title_level=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`GPUMemory`](/utils.mem.html#GPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.html#gpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

b2mb[source][test]

\n", "\n", "> b2mb(**`num`**)\n", "\n", "

No tests found for b2mb. To contribute a test please refer to this guide and this discussion.

\n", "\n", "convert Bs to MBs and round down " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(b2mb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`b2mb`](/utils.mem.html#b2mb) is a helper utility that just does `int(bytes/2**20)`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Memory Tracing Utils" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class GPUMemTrace[source][test]

\n", "\n", "> GPUMemTrace(**`silent`**=***`False`***, **`ctx`**=***`None`***, **`on_exit_report`**=***`True`***)\n", "\n", "

Tests found for GPUMemTrace:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace [source]
  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace_ctx [source]

To run tests please refer to this guide.

\n", "\n", "Trace allocated and peaked GPU memory usage (deltas). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace, title_level=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Arguments**:\n", "\n", "* `silent`: a shortcut to make `report` and `report_n_reset` silent w/o needing to remove those calls - this can be done from the constructor, or alternatively you can call `silent` method anywhere to do the same.\n", "* `ctx`: default context note in reports\n", "* `on_exit_report`: auto-report on ctx manager exit (default `True`)\n", "\n", "**Definitions**:\n", "\n", "* **Delta Used** is the difference between current used memory and used memory at the start of the counter.\n", "\n", "* **Delta Peaked** is the memory overhead if any. It's calculated in two steps:\n", " 1. The base measurement is the difference between the peak memory and the used memory at the start of the counter.\n", " 2. Then if delta used is positive it gets subtracted from the base value.\n", " \n", " It indicates the size of the blip.\n", "\n", " **Warning**: currently the peak memory usage tracking is implemented using a python thread, which is very unreliable, since there is no guarantee the thread will get a chance at running at the moment the peak memory is occuring (or it might not get a chance to run at all). Therefore we need pytorch to implement multiple concurrent and resettable [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/cuda.html#torch.cuda.max_memory_allocated) counters. Please vote for this [feature request](https://github.com/pytorch/pytorch/issues/16266).\n", "\n", "**Usage Examples**:\n", "\n", "Setup:\n", "```\n", "from fastai.utils.mem import GPUMemTrace\n", "def some_code(): pass\n", "mtrace = GPUMemTrace()\n", "```\n", "\n", "Example 1: basic measurements via `report` (prints) and via [`data`](/tabular.data.html#tabular.data) (returns) accessors\n", "```\n", "some_code()\n", "mtrace.report()\n", "delta_used, delta_peaked = mtrace.data()\n", "\n", "some_code()\n", "mtrace.report('2nd run of some_code()')\n", "delta_used, delta_peaked = mtrace.data()\n", "```\n", "`report`'s optional `subctx` argument can be helpful if you have many `report` calls and you want to understand which is which in the outputs.\n", "\n", "Example 2: measure in a loop, resetting the counter before each run\n", "```\n", "for i in range(10):\n", " mtrace.reset()\n", " some_code()\n", " mtrace.report(f'i={i}')\n", "```\n", "`reset` resets all the counters.\n", "\n", "Example 3: like example 2, but having `report` automatically reset the counters\n", "```\n", "mtrace.reset()\n", "for i in range(10):\n", " some_code()\n", " mtrace.report_n_reset(f'i={i}')\n", "```\n", "\n", "The tracing starts immediately upon the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object creation, and stops when that object is deleted. 
But it can also be stopped and started manually, via the `stop` and `start` methods:\n", "```\n", "mtrace.start()\n", "mtrace.stop()\n", "```\n", "`stop` is particularly useful if you want to **freeze** the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object, so that you can query its data, as measured at the moment of `stop`, some time down the road.\n", "\n", "\n", "**Reporting**:\n", "\n", "In reports you can print a main context passed via the constructor:\n", "\n", "```\n", "mtrace = GPUMemTrace(ctx=\"foobar\")\n", "mtrace.report()\n", "```\n", "prints:\n", "```\n", "△Used Peaked MB: 0 0 (foobar)\n", "```\n", "\n", "and then add subcontext notes as needed:\n", "\n", "```\n", "mtrace = GPUMemTrace(ctx=\"foobar\")\n", "mtrace.report('1st try')\n", "mtrace.report('2nd try')\n", "\n", "```\n", "prints:\n", "```\n", "△Used Peaked MB: 0 0 (foobar: 1st try)\n", "△Used Peaked MB: 0 0 (foobar: 2nd try)\n", "```\n", "\n", "Both context and sub-context are optional, and are very useful if you sprinkle [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) in different places around the code.\n", "\n", "You can silence report calls w/o needing to remove them, either via the constructor or the `silent` method:\n", "\n", "```\n", "mtrace = GPUMemTrace(silent=True)\n", "mtrace.report() # nothing will be printed\n", "mtrace.silent(silent=False)\n", "mtrace.report() # printing resumed\n", "mtrace.silent(silent=True)\n", "mtrace.report() # nothing will be printed\n", "```\n", "\n", "**Context Manager**:\n", "\n", "[`GPUMemTrace`](/utils.mem.html#GPUMemTrace) can also be used as a context manager:\n", "\n", "Report the used and peaked deltas automatically:\n", "\n", "```\n", "with GPUMemTrace(): some_code()\n", "```\n", "\n", "If you wish to add context:\n", "\n", "```\n", "with GPUMemTrace(ctx='some context'): some_code()\n", "```\n", "\n", "The context manager uses the subcontext `exit` to indicate that the report comes after the context exited.\n", "\n", "The reporting is done automatically, which is especially useful inside functions, due to the `return` call:\n", "\n", "```\n", "def some_func():\n", " with GPUMemTrace(ctx='some_func'):\n", " # some code\n", " return 1\n", "some_func()\n", "```\n", "prints:\n", "```\n", "△Used Peaked MB: 0 0 (some_func: exit)\n", "```\n", "so you still get a complete report despite the `return` call here. `ctx` is useful for specifying the *context* in case you have many of those calls throughout your code and you want to know which is which.\n", "\n", "And, of course, instead of doing the above, you can use the [`gpu_mem_trace`](/utils.mem.html#gpu_mem_trace) decorator to do it automatically, including using the function or method name as the context. 
The example below accomplishes the same without modifying the function:\n", "\n", "```\n", "@gpu_mem_trace\n", "def some_func():\n", " # some code\n", " return 1\n", "some_func()\n", "```\n", "\n", "If you don't want the automatic reporting, just pass `on_exit_report=False` to the constructor:\n", "\n", "```\n", "with GPUMemTrace(ctx='some_func', on_exit_report=False) as mtrace:\n", " some_code()\n", "mtrace.report(\"measured in ctx\")\n", "```\n", "\n", "or the same w/o the context note:\n", "```\n", "with GPUMemTrace(on_exit_report=False) as mtrace: some_code()\n", "print(mtrace) # or mtrace.report()\n", "```\n", "\n", "And, of course, you can get the numerical data (in rounded MBs):\n", "```\n", "with GPUMemTrace() as mtrace: some_code()\n", "delta_used, delta_peaked = mtrace.data()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
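As a worked example of the **Delta Used** / **Delta Peaked** definitions above, with illustrative numbers: suppose used memory goes from 1000 to 1500 MBs while the measured code briefly peaks at 1900 MBs:

```
delta_used   = 1500 - 1000        # 500 MBs
base         = 1900 - 1000        # 900 MBs: peak - used at start
delta_peaked = base - delta_used  # 400 MBs: the size of the blip
```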

gpu_mem_trace[source][test]

\n", "\n", "> gpu_mem_trace(**`func`**)\n", "\n", "

Tests found for gpu_mem_trace:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace_decorator [source]

To run tests please refer to this guide.

\n", "\n", "A decorator that runs [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) w/ report on func " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(gpu_mem_trace)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This allows you to decorate any function or method with:\n", "\n", "```\n", "@gpu_mem_trace\n", "def my_function(): pass\n", "# run:\n", "my_function()\n", "```\n", "and it will automatically print the report including the function name as a context:\n", "```\n", "△Used Peaked MB: 0 0 (my_function: exit)\n", "```\n", "In the case of methods it'll print a fully qualified method, e.g.:\n", "```\n", "△Used Peaked MB: 0 0 (Class.function: exit)\n", "```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

report[source][test]

\n", "\n", "> report(**`subctx`**=***`None`***)\n", "\n", "

Tests found for report:

Some other tests where report is used:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace [source]

To run tests please refer to this guide.

\n", "\n", "Print delta used+peaked, and an optional context note, which can also be preset in constructor " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.report)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

silent[source][test]

\n", "\n", "> silent(**`silent`**=***`True`***)\n", "\n", "

No tests found for silent. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.silent)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

start[source][test]

\n", "\n", "> start()\n", "\n", "

Tests found for start:

Some other tests where start is used:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.start)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

reset[source][test]

\n", "\n", "> reset()\n", "\n", "

Tests found for reset:

Some other tests where reset is used:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.reset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

peak_monitor_stop[source][test]

\n", "\n", "> peak_monitor_stop()\n", "\n", "

No tests found for peak_monitor_stop. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.peak_monitor_stop)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

stop[source][test]

\n", "\n", "> stop()\n", "\n", "

Tests found for stop:

Some other tests where stop is used:

  • pytest -sv tests/test_utils_mem.py::test_gpu_mem_trace [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.stop)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

report_n_reset[source][test]

\n", "\n", "> report_n_reset(**`subctx`**=***`None`***)\n", "\n", "

No tests found for report_n_reset. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Print delta used+peaked, and an optional context note. Then reset counters " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.report_n_reset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

peak_monitor_func[source][test]

\n", "\n", "> peak_monitor_func()\n", "\n", "

No tests found for peak_monitor_func. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.peak_monitor_func)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

data_set[source][test]

\n", "\n", "> data_set()\n", "\n", "

No tests found for data_set. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.data_set)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

data[source][test]

\n", "\n", "> data()\n", "\n", "

No tests found for data. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

peak_monitor_start[source][test]

\n", "\n", "> peak_monitor_start()\n", "\n", "

No tests found for peak_monitor_start. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(GPUMemTrace.peak_monitor_start)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }