{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [], "source": [ "#|hide\n", "#|default_exp export\n", "#|default_cls_lvl 3\n", "from nbdev.showdoc import show_doc" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [], "source": [ "#|export\n", "from nbdev.imports import *\n", "from fastcore.script import *\n", "from fastcore.foundation import *\n", "from keyword import iskeyword\n", "import nbformat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Export to modules\n", "\n", "> The functions that transform notebooks in a library" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most important function defined in this module is `notebooks2script`, so you may want to jump to it before scrolling though the rest, which explain the details behind the scenes of the conversion from notebooks to library. The main things to remember are:\n", "- put `# export` on each cell you want exported\n", "- put `# exports` on each cell you want exported with the source code shown in the docs \n", "- put `# exporti` on each cell you want exported without it being added to `__all__`, and without it showing up in the docs.\n", "- one cell should contain `# default_exp` followed by the name of the module (with points for submodules and without the py extension) everything should be exported in (if one specific cell needs to be exported in a different module, just indicate it after `#export`: `#export special.module`)\n", "- all left members of an equality, functions and classes will be exported and variables that are not private will be put in the `__all__` automatically\n", "- to add something to `__all__` if it's not picked automatically, write an exported cell with something like `#add2all \"my_name\"`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Examples of `export`\n", "\n", "See [these examples](https://github.com/fastai/nbdev_export_demo/blob/master/demo.ipynb) on different ways to use `#export` to export code in notebooks to modules. These include:\n", "\n", "- How to specify a default for exporting cells\n", "- How to hide code and not export it at all\n", "- How to export different cells to specific modules" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic foundations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For bootstrapping `nbdev` we have a few basic foundations defined in imports, which we test a show here. First, a simple config file class, `Config` that read the content of your `settings.ini` file and make it accessible:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
class Config[source]
\n", "\n", "> Config(**`cfg_path`**, **`cfg_name`**, **`create`**=*`None`*)\n", "\n", "Reading and writing `ConfigParser` ini files" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Config, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_config(\"github\", \"nbdev\", user='fastai', path='..', tst_flags='tst', cfg_name='test_settings.ini', recursive='False')\n", "cfg = get_config(cfg_name='test_settings.ini')\n", "test_eq(cfg.lib_name, 'nbdev')\n", "test_eq(cfg.git_url, \"https://github.com/fastai/nbdev/tree/master/\")\n", "test_eq(cfg.path(\"lib_path\"), Path.cwd().parent/'nbdev')\n", "test_eq(cfg.path(\"nbs_path\"), Path.cwd())\n", "test_eq(cfg.path(\"doc_path\"), Path.cwd().parent/'docs')\n", "test_eq(cfg.custom_sidebar, 'False')\n", "test_eq(cfg.recursive, 'False')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading a notebook" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What's a notebook?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A jupyter notebook is a json file behind the scenes. We can just read it with the json module, which will return a nested dictionary of dictionaries/lists of dictionaries, but there are some small differences between reading the json and using the tools from `nbformat` so we'll use this one." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def read_nb(fname):\n", " \"Read the notebook in `fname`.\"\n", " with open(Path(fname),'r', encoding='utf8') as f: return nbformat.reads(f.read(), as_version=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`fname` can be a string or a pathlib object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_nb = read_nb('00_export.ipynb')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The root has four keys: `cells` contains the cells of the notebook, `metadata` some stuff around the version of python used to execute the notebook, `nbformat` and `nbformat_minor` the version of nbformat. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "dict_keys(['cells', 'metadata', 'nbformat', 'nbformat_minor'])" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_nb.keys()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'jupytext': {'split_at_heading': True},\n", " 'kernelspec': {'display_name': 'Python 3 (ipykernel)',\n", " 'language': 'python',\n", " 'name': 'python3'},\n", " 'language_info': {'codemirror_mode': {'name': 'ipython', 'version': 3},\n", " 'file_extension': '.py',\n", " 'mimetype': 'text/x-python',\n", " 'name': 'python',\n", " 'nbconvert_exporter': 'python',\n", " 'pygments_lexer': 'ipython3',\n", " 'version': '3.9.7'},\n", " 'toc': {'base_numbering': 1,\n", " 'nav_menu': {},\n", " 'number_sections': True,\n", " 'sideBar': True,\n", " 'skip_h1_title': False,\n", " 'title_cell': 'Table of Contents',\n", " 'title_sidebar': 'Contents',\n", " 'toc_cell': False,\n", " 'toc_position': {},\n", " 'toc_section_display': True,\n", " 'toc_window_display': True}}" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_nb['metadata']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'4.4'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "f\"{test_nb['nbformat']}.{test_nb['nbformat_minor']}\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cells key then contains a list of cells. Each one is a new dictionary that contains entries like the type (code or markdown), the source (what is written in the cell) and the output (for code cells)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'cell_type': 'code',\n", " 'execution_count': 1,\n", " 'metadata': {'hide_input': False},\n", " 'outputs': [],\n", " 'source': '#|hide\\n#|default_exp export\\n#|default_cls_lvl 3\\nfrom nbdev.showdoc import show_doc'}" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_nb['cells'][0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding patterns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following functions are used to catch the flags used in the code cells." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def check_re(cell, pat, code_only=True):\n", " \"Check if `cell` contains a line with regex `pat`\"\n", " if code_only and cell['cell_type'] != 'code': return\n", " if isinstance(pat, str): pat = re.compile(pat, re.IGNORECASE | re.MULTILINE)\n", " cell_source = cell['source'].replace('\\r', '') # Eliminate \\r\\n\n", " result = pat.search(cell_source)\n", " return result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`pat` can be a string or a compiled regex. If `code_only=True`, this function ignores non-code cells, such as markdown." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cell = test_nb['cells'][1].copy()\n", "assert check_re(cell, '#|export') is not None\n", "assert check_re(cell, re.compile('#|export')) is not None\n", "assert check_re(cell, '# bla') is None\n", "cell['cell_type'] = 'markdown'\n", "assert check_re(cell, '#|export') is None\n", "assert check_re(cell, '#|export', code_only=False) is not None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def check_re_multi(cell, pats, code_only=True):\n", " \"Check if `cell` contains a line matching any regex in `pats`, returning the first match found\"\n", " return L(pats).map_first(partial(check_re, cell, code_only=code_only))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cell = test_nb['cells'][0].copy()\n", "cell['source'] = \"a b c\"\n", "assert check_re(cell, 'a') is not None\n", "assert check_re(cell, 'd') is None\n", "# show that searching with patterns ['d','b','a'] will match 'b'\n", "# i.e. 'd' is not found and we don't search for 'a'\n", "assert check_re_multi(cell, ['d','b','a']).span() == (2,3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def _mk_flag_re(body, n_params, comment):\n", " \"Compiles a regex for finding nbdev flags\"\n", " assert body!=True, 'magics no longer supported'\n", " prefix = r\"\\s*\\#\\|?\\s*\"\n", " param_group = \"\"\n", " if n_params == -1: param_group = r\"[ \\t]+(.+)\"\n", " if n_params == 1: param_group = r\"[ \\t]+(\\S+)\"\n", " if n_params == (0,1): param_group = r\"(?:[ \\t]+(\\S+))?\"\n", " return re.compile(rf\"\"\"\n", "# {comment}:\n", "^ # beginning of line (since re.MULTILINE is passed)\n", "{prefix}\n", "{body}\n", "{param_group}\n", "[ \\t]* # any number of spaces and/or tabs\n", "$ # end of line (since re.MULTILINE is passed)\n", "\"\"\", re.MULTILINE | re.VERBOSE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function returns a regex object that can be used to find nbdev flags in multiline text\n", "- `body` regex fragment to match one or more flags,\n", "- `n_params` number of flag parameters to match and catch (-1 for any number of params; `(0,1)` for 0 for 1 params),\n", "- `comment` explains what the compiled regex should do." 
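, "\n", "\n", "A minimal sketch (the name \`_re_demo\` here is just for illustration, not part of the library):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# illustrative only: build a flag regex for #default_exp and check a matching and a non-matching line\n", "_re_demo = _mk_flag_re('default_exp', 1, 'match #default_exp plus a module name')\n", "test_eq(_re_demo.search('#|default_exp export').groups(), ('export',))\n", "assert _re_demo.search('# a plain comment') is None"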
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "re_blank_test = _mk_flag_re('export[si]?', 0, \"test\")\n", "re_mod_test = _mk_flag_re('export[si]?', 1, \"test\")\n", "re_opt_test = _mk_flag_re('export[si]?', (0,1), \"test\")\n", "for f in ['export', 'exports', 'exporti']:\n", " cell = nbformat.v4.new_code_cell(f'#{f} \\n some code')\n", " assert check_re(cell, re_blank_test) is not None\n", " assert check_re(cell, re_mod_test) is None\n", " assert check_re(cell, re_opt_test) is not None\n", " test_eq(check_re(cell, re_opt_test).groups()[0], None)\n", " cell.source = f'#{f} special.module \\n some code'\n", " assert check_re(cell, re_blank_test) is None\n", " assert check_re(cell, re_mod_test) is not None\n", " test_eq(check_re(cell, re_mod_test).groups()[0], 'special.module')\n", " assert check_re(cell, re_opt_test) is not None\n", " test_eq(check_re(cell, re_opt_test).groups()[0], 'special.module')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_blank_export = _mk_flag_re(\"export[si]?\", 0,\n", " \"Matches any line with #export, #exports or #exporti without any module name\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_mod_export = _mk_flag_re(\"export[si]?\", 1,\n", " \"Matches any line with #export, #exports or #exporti with a module name and catches it in group 1\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_internal_export = _mk_flag_re(\"exporti\", (0,1),\n", " \"Matches any line with #exporti with or without a module name\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#exporti\n", "def _is_external_export(tst):\n", " \"Check if a cell is an external or internal export. `tst` is an re match\"\n", " return _re_internal_export.search(tst.string) is None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def is_export(cell, default):\n", " \"Check if `cell` is to be exported and returns the name of the module to export it if provided\"\n", " tst = check_re(cell, _re_blank_export)\n", " if tst:\n", " if default is None:\n", " print(f\"No export destination, ignored:\\n{cell['source']}\")\n", " return default, _is_external_export(tst)\n", " tst = check_re(cell, _re_mod_export)\n", " if tst: return os.path.sep.join(tst.groups()[0].split('.')), _is_external_export(tst)\n", " else: return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`is_export` returns;\n", "- a tuple of (\"module name\", \"external boolean\" (`False` for an internal export)) if `cell` is to be exported or \n", "- `None` if `cell` will not be exported.\n", "\n", "The cells to export are marked with `#export`/`#exporti`/`#exports`, potentially with a module name where we want it exported. 
The default module is given in a cell of the form `#default_exp bla` inside the notebook (usually at the top), though in this function, it needs the be passed (the final script will read the whole notebook to find it).\n", "- a cell marked with `#export`/`#exporti`/`#exports` will be exported to the default module\n", "- an exported cell marked with `special.module` appended will be exported in `special.module` (located in `lib_name/special/module.py`)\n", "- a cell marked with `#export` will have its signature added to the documentation\n", "- a cell marked with `#exports` will additionally have its source code added to the documentation\n", "- a cell marked with `#exporti` will not show up in the documentation, and will also not be added to `__all__`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cell = test_nb['cells'][1].copy()\n", "test_eq(is_export(cell, 'export'), ('export', True))\n", "cell['source'] = \"# exports\"\n", "test_eq(is_export(cell, 'export'), ('export', True))\n", "cell['source'] = \"# exporti\"\n", "test_eq(is_export(cell, 'export'), ('export', False))\n", "cell['source'] = \"# export mod\"\n", "test_eq(is_export(cell, 'export'), ('mod', True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "cell['source'] = \"# export mod.file\"\n", "test_eq(is_export(cell, 'export'), (f'mod{os.path.sep}file', True))\n", "cell['source'] = \"# exporti mod.file\"\n", "test_eq(is_export(cell, 'export'), (f'mod{os.path.sep}file', False))\n", "cell['source'] = \"# expt mod.file\"\n", "assert is_export(cell, 'export') is None\n", "cell['source'] = \"# exportmod.file\"\n", "assert is_export(cell, 'export') is None\n", "cell['source'] = \"# exportsmod.file\"\n", "assert is_export(cell, 'export') is None\n", "cell['source'] = \"# exporti mod file\"\n", "assert is_export(cell, 'export') is None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_default_exp = _mk_flag_re('default_exp', 1, \"Matches any line with #default_exp with a module name\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def find_default_export(cells):\n", " \"Find in `cells` the default export module.\"\n", " res = L(cells).map_first(check_re, pat=_re_default_exp)\n", " return res.groups()[0] if res else None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Stops at the first cell containing `# default_exp` (if there are several) and returns the value behind. Returns `None` if there are no cell with that code." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_eq(find_default_export(test_nb['cells']), 'export')\n", "assert find_default_export(test_nb['cells'][2:]) is None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "mods = [f'mod{i}' for i in range(3)]\n", "cells = [{'cell_type': 'code', 'source': f'#default_exp {mod}'} for mod in mods]\n", "for i, mod in enumerate(mods): test_eq(mod, find_default_export(cells[i:]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Listing all exported objects" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following functions make a list of everything that is exported to prepare a proper `__all__` for our exported module." 
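, "\n", "\n", "For instance, once \`export_names\` is defined below, a cell source such as\n", "\n", "\`\`\`python\n", "def my_func(x): pass\n", "class MyClass(): pass\n", "\`\`\`\n", "\n", "yields the names \`my_func\` and \`MyClass\`, which are then written to the module's \`__all__\` (a sketch of the end result; see the \`export_names\` tests further down)."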
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_patch_func = re.compile(r\"\"\"\n", "# Catches any function decorated with @patch, its name in group 1 and the patched class in group 2\n", "@patch # At any place in the cell, something that begins with @patch\n", "(?:\\s*@.*)* # Any other decorator applied to the function\n", "\\s*def # Any number of whitespace (including a new line probably) followed by def\n", "\\s+ # One whitespace or more\n", "([^\\(\\s]+) # Catch a group composed of anything but whitespace or an opening parenthesis (name of the function)\n", "\\s*\\( # Any number of whitespace followed by an opening parenthesis\n", "[^:]* # Any number of character different of : (the name of the first arg that is type-annotated)\n", ":\\s* # A column followed by any number of whitespace\n", "(?: # Non-catching group with either\n", "([^,\\s\\(\\)]*) # a group composed of anything but a comma, a parenthesis or whitespace (name of the class)\n", "| # or\n", "(\\([^\\)]*\\))) # a group composed of something between parenthesis (tuple of classes)\n", "\\s* # Any number of whitespace\n", "(?:,|\\)) # Non-catching group with either a comma or a closing parenthesis\n", "\"\"\", re.VERBOSE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(,\n", " ('func', 'Class', None))" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "@log_args(a=1)\n", "def func(obj:Class):\"\"\")\n", "tst, tst.groups()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "def func(obj:Class):\"\"\")\n", "test_eq(tst.groups(), (\"func\", \"Class\", None))\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "def func (obj:Class, a)\"\"\")\n", "test_eq(tst.groups(), (\"func\", \"Class\", None))\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "def func (obj:Class1|Class2, a)\"\"\")\n", "test_eq(tst.groups(), (\"func\", \"Class1|Class2\", None))\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "def func (obj:Class1|Class2, a:int)->int:\"\"\")\n", "test_eq(tst.groups(), (\"func\", \"Class1|Class2\", None))\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "def func (obj:(Class1, Class2), a:int)->int:\"\"\")\n", "test_eq(tst.groups(), (\"func\", None, \"(Class1, Class2)\"))\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "@log_args(but='a,b')\n", "@funcs_kwargs\n", "def func (obj:Class1|Class2, a:int)->int:\"\"\")\n", "test_eq(tst.groups(), (\"func\", \"Class1|Class2\", None))\n", "tst = _re_patch_func.search(\"\"\"\n", "@patch\n", "@contextmanager\n", "def func (obj:Class, a:int)->int:\"\"\")\n", "test_eq(tst.groups(), (\"func\", \"Class\", None))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_typedispatch_func = re.compile(r\"\"\"\n", "# Catches any function decorated with @typedispatch\n", "(@typedispatch # At any place in the cell, catch a group with something that begins with @typedispatch\n", "\\s*def # Any number of whitespace (including a new line probably) followed by def\n", "\\s+ # One whitespace or more\n", "[^\\(]+ # Anything but whitespace or an opening parenthesis (name of the function)\n", "\\s*\\( # Any number of whitespace followed by an opening parenthesis\n", "[^\\)]* # Any 
number of character different of )\n", "\\)[\\s\\S]*:) # A closing parenthesis followed by any number of characters and whitespace (type annotation) and :\n", "\"\"\", re.VERBOSE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "assert _re_typedispatch_func.search(\"@typedispatch\\ndef func(a, b):\").groups() == ('@typedispatch\\ndef func(a, b):',)\n", "assert (_re_typedispatch_func.search(\"@typedispatch\\ndef func(a:str, b:bool)->int:\").groups() ==\n", " ('@typedispatch\\ndef func(a:str, b:bool)->int:',))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_class_func_def = re.compile(r\"\"\"\n", "# Catches any 0-indented function or class definition with its name in group 1\n", "^ # Beginning of a line (since re.MULTILINE is passed)\n", "(?:async\\sdef|def|class) # Non-catching group for def or class\n", "\\s+ # One whitespace or more\n", "([^\\(\\s]+) # Catching group with any character except an opening parenthesis or a whitespace (name)\n", "\\s* # Any number of whitespace\n", "(?:\\(|:) # Non-catching group with either an opening parenthesis or a : (classes don't need ())\n", "\"\"\", re.MULTILINE | re.VERBOSE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "test_eq(_re_class_func_def.search(\"class Class:\").groups(), ('Class',))\n", "test_eq(_re_class_func_def.search(\"def func(a, b):\").groups(), ('func',))\n", "test_eq(_re_class_func_def.search(\"def func(a:str, b:bool)->int:\").groups(), ('func',))\n", "test_eq(_re_class_func_def.search(\"async def func(a, b):\").groups(), ('func',))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_obj_def = re.compile(r\"\"\"\n", "# Catches any 0-indented object definition (bla = thing) with its name in group 1\n", "^ # Beginning of a line (since re.MULTILINE is passed)\n", "([_a-zA-Z]+[a-zA-Z0-9_\\.]*) # Catch a group which is a valid python variable name\n", "\\s* # Any number of whitespace\n", "(?::\\s*\\S.*|)= # Non-catching group of either a colon followed by a type annotation, or nothing; followed by an =\n", "\"\"\", re.MULTILINE | re.VERBOSE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "test_eq(_re_obj_def.search(\"a = 1\").groups(), ('a',))\n", "test_eq(_re_obj_def.search(\"a.b = 1\").groups(), ('a.b',))\n", "test_eq(_re_obj_def.search(\"_aA1=1\").groups(), ('_aA1',))\n", "test_eq(_re_obj_def.search(\"a : int =1\").groups(), ('a',))\n", "test_eq(_re_obj_def.search(\"a:f(':=')=1\").groups(), ('a',))\n", "assert _re_obj_def.search(\"@abc=2\") is None\n", "assert _re_obj_def.search(\"a a=2\") is None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def _not_private(n):\n", " for t in n.split('.'):\n", " if (t.startswith('_') and not t.startswith('__')) or t.startswith('@'): return False\n", " return '\\\\' not in t and '^' not in t and '[' not in t and t != 'else'\n", "\n", "def export_names(code, func_only=False):\n", " \"Find the names of the objects, functions or classes defined in `code` that are exported.\"\n", " #Format monkey-patches with @patch\n", " def _f(gps):\n", " nm, c, t = gps.groups()\n", " if c is None: c, delim = t[1:-1], ','\n", " elif '|' in c: delim = '\\|'\n", " else: delim = None\n", " if delim: cs = re.split(f'{delim} *', c)\n", " else: cs 
= [c]\n", " return '\\n'.join([f'def {c}.{nm}():' for c in cs])\n", "\n", " code = _re_typedispatch_func.sub('', code)\n", " code = _re_patch_func.sub(_f, code)\n", " names = _re_class_func_def.findall(code)\n", " if not func_only: names += _re_obj_def.findall(code)\n", " return [n for n in names if _not_private(n) and not iskeyword(n)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function only picks the zero-indented objects on the left side of an =, functions or classes (we don't want the class methods for instance) and excludes private names (that begin with `_`) but no dunder names. It only returns func and class names (not the objects) when `func_only=True`. \n", "\n", "To work properly with fastai added python functionality, this function ignores function decorated with `@typedispatch` (since they are defined multiple times) and unwraps properly functions decorated with `@patch`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_eq(export_names(\"def my_func(x):\\n pass\\nclass MyClass():\"), [\"my_func\", \"MyClass\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "#Indented funcs are ignored (funcs inside a class)\n", "test_eq(export_names(\" def my_func(x):\\n pass\\nclass MyClass():\"), [\"MyClass\"])\n", "\n", "#Private funcs are ignored, dunder are not\n", "test_eq(export_names(\"def _my_func():\\n pass\\nclass MyClass():\"), [\"MyClass\"])\n", "test_eq(export_names(\"__version__ = 1:\\n pass\\nclass MyClass():\"), [\"MyClass\", \"__version__\"])\n", "\n", "#trailing spaces\n", "test_eq(export_names(\"def my_func ():\\n pass\\nclass MyClass():\"), [\"my_func\", \"MyClass\"])\n", "\n", "#class without parenthesis\n", "test_eq(export_names(\"def my_func ():\\n pass\\nclass MyClass:\"), [\"my_func\", \"MyClass\"])\n", "\n", "#object and funcs\n", "test_eq(export_names(\"def my_func ():\\n pass\\ndefault_bla=[]:\"), [\"my_func\", \"default_bla\"])\n", "test_eq(export_names(\"def my_func ():\\n pass\\ndefault_bla=[]:\", func_only=True), [\"my_func\"])\n", "\n", "#Private objects are ignored\n", "test_eq(export_names(\"def my_func ():\\n pass\\n_default_bla = []:\"), [\"my_func\"])\n", "\n", "#Objects with dots are privates if one part is private\n", "test_eq(export_names(\"def my_func ():\\n pass\\ndefault.bla = []:\"), [\"my_func\", \"default.bla\"])\n", "test_eq(export_names(\"def my_func ():\\n pass\\ndefault._bla = []:\"), [\"my_func\"])\n", "\n", "#Monkey-path with @patch are properly renamed\n", "test_eq(export_names(\"@patch\\ndef my_func(x:Class):\\n pass\"), [\"Class.my_func\"])\n", "test_eq(export_names(\"@patch\\ndef my_func(x:Class):\\n pass\", func_only=True), [\"Class.my_func\"])\n", "test_eq(export_names(\"some code\\n@patch\\ndef my_func(x:Class, y):\\n pass\"), [\"Class.my_func\"])\n", "test_eq(export_names(\"some code\\n@patch\\ndef my_func(x:(Class1,Class2), y):\\n pass\"), [\"Class1.my_func\", \"Class2.my_func\"])\n", "test_eq(export_names(\"some code\\n@patch\\ndef my_func(x:Class1|Class2, y):\\n pass\"), [\"Class1.my_func\", \"Class2.my_func\"])\n", "\n", "#Check delegates\n", "test_eq(export_names(\"@delegates(keep=True)\\nclass someClass:\\n pass\"), [\"someClass\"])\n", "\n", "#Typedispatch decorated functions shouldn't be added\n", "test_eq(export_names(\"@patch\\ndef my_func(x:Class):\\n pass\\n@typedispatch\\ndef func(x: TensorImage): pass\"), [\"Class.my_func\"])\n", "\n", "#try, except and other keywords should not be 
picked up (these can look like object def with type annotation)\n", "test_eq(export_names(\"try:\\n a=1\\nexcept:\\n b=2\"), [])\n", "test_eq(export_names(\"try:\\n this_might_work\\nexcept:\\n b=2\"), [])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_all_def = re.compile(r\"\"\"\n", "# Catches a cell with defines \\_all\\_ = [\\*\\*] and get that \\*\\* in group 1\n", "^_all_ # Beginning of line (since re.MULTILINE is passed)\n", "\\s*=\\s* # Any number of whitespace, =, any number of whitespace\n", "\\[ # Opening [\n", "([^\\n\\]]*) # Catching group with anything except a ] or newline\n", "\\] # Closing ]\n", "\"\"\", re.MULTILINE | re.VERBOSE)\n", "\n", "#Same with __all__\n", "_re__all__def = re.compile(r'^__all__\\s*=\\s*\\[([^\\]]*)\\]', re.MULTILINE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def extra_add(flags, code):\n", " \"Catch adds to `__all__` required by a cell with `_all_=`\"\n", " m = check_re({'source': code}, _re_all_def, False)\n", " if m:\n", " code = m.re.sub('#nbdev_' + 'comment \\g<0>', code)\n", " code = re.sub(r'([^\\n]|^)\\n*$', r'\\1', code)\n", " if not m: return [], code\n", " def clean_quotes(s):\n", " \"Return `s` enclosed in single quotes, removing double quotes if needed\"\n", " if s.startswith(\"'\") and s.endswith(\"'\"): return s\n", " if s.startswith('\"') and s.endswith('\"'): s = s[1:-1]\n", " return f\"'{s}'\"\n", " return [clean_quotes(s) for s in parse_line(m.group(1))], code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sometimes objects are not picked to be automatically added to the `__all__` of the module so you will need to add them manually. To do so, create an exported cell with the following code `_all_ = [\"name\", \"name2\"]`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "for code, expected in [\n", " ['_all_ = [\"func\", \"func1\", \"func2\"]', \n", " ([\"'func'\", \"'func1'\", \"'func2'\"],'#nbdev_comment _all_ = [\"func\", \"func1\", \"func2\"]')],\n", " ['_all_=[func, func1, func2]', \n", " ([\"'func'\", \"'func1'\", \"'func2'\"],'#nbdev_comment _all_=[func, func1, func2]')],\n", " [\"_all_ = ['func', 'func1', 'func2']\", \n", " ([\"'func'\", \"'func1'\", \"'func2'\"],\"#nbdev_comment _all_ = ['func', 'func1', 'func2']\")],\n", " ['_all_ = [\"func\", \"func1\" , \"func2\"]', \n", " ([\"'func'\", \"'func1'\", \"'func2'\"],'#nbdev_comment _all_ = [\"func\", \"func1\" , \"func2\"]')],\n", " [\"_all_ = ['func','func1', 'func2']\\n\", \n", " ([\"'func'\", \"'func1'\", \"'func2'\"],\"#nbdev_comment _all_ = ['func','func1', 'func2']\")],\n", " [\"_all_ = ['func']\\n_all_ = ['func1', 'func2']\\n\", \n", " ([\"'func'\"],\"#nbdev_comment _all_ = ['func']\\n#nbdev_comment _all_ = ['func1', 'func2']\")],\n", " ['code\\n\\n_all_ = [\"func\", \"func1\", \"func2\"]', \n", " ([\"'func'\", \"'func1'\", \"'func2'\"],'code\\n\\n#nbdev_comment _all_ = [\"func\", \"func1\", \"func2\"]')],\n", " ['code\\n\\n_all_ = [func]\\nmore code', \n", " ([\"'func'\"],'code\\n\\n#nbdev_comment _all_ = [func]\\nmore code')]]:\n", " test_eq(extra_add('', code), expected)\n", " \n", "# line breaks within the list of names means _all_ is ignored\n", "test_eq(extra_add('', \"_all_ = ['func',\\n'func1', 'func2']\\n\"), ([],\"_all_ = ['func',\\n'func1', 'func2']\\n\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], 
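"source": [ "# A quick, illustrative check of the behaviour described above: the _all_ line is picked up\n", "# and the original line is commented out in the exported code\n", "test_eq(extra_add('', '_all_ = [\"some_name\"]'), ([\"'some_name'\"], '#nbdev_comment _all_ = [\"some_name\"]'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],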
"source": [ "#|export\n", "_re_from_future_import = re.compile(r\"^from[ \\t]+__future__[ \\t]+import.*$\", re.MULTILINE)\n", "\n", "def _from_future_import(fname, flags, code, to_dict=None):\n", " \"Write `__future__` imports to `fname` and return `code` with `__future__` imports commented out\"\n", " from_future_imports = _re_from_future_import.findall(code)\n", " if from_future_imports: code = _re_from_future_import.sub('#nbdev' + '_comment \\g<0>', code)\n", " else: from_future_imports = _re_from_future_import.findall(flags)\n", " if not from_future_imports or to_dict is not None: return code\n", " with open(fname, 'r', encoding='utf8') as f: text = f.read()\n", " start = _re__all__def.search(text).start()\n", " with open(fname, 'w', encoding='utf8') as f:\n", " f.write('\\n'.join([text[:start], *from_future_imports, '\\n', text[start:]]))\n", " return code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you need a `from __future__ import` in your library, you can export your cell with special comments:\n", "\n", "```python\n", "#|export\n", "from __future__ import annotations\n", "class ...\n", "```\n", "\n", "Notice that `#export` is after the `__future__` import. Because `__future__` imports must occur at the beginning of the file, nbdev allows `__future__` imports in the flags section of a cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "txt = \"\"\"\n", "# AUTOHEADER ... File to edit: mod.ipynb (unless otherwise specified).\n", "\n", "__all__ = [my_file, MyClas]\n", "# Cell\n", "def valid_code(): pass\"\"\"\n", "expected_txt = \"\"\"\n", "# AUTOHEADER ... File to edit: mod.ipynb (unless otherwise specified).\n", "\n", "\n", "from __future__ import annotations \n", "from __future__ import generator_stop\n", "\n", "\n", "__all__ = [my_file, MyClas]\n", "# Cell\n", "def valid_code(): pass\"\"\"\n", "flags=\"# export\"\n", "code = \"\"\"\n", "# comment\n", "from __future__ import annotations \n", "valid_code = False # but _from_future_import will work anyway\n", "from __future__ import generator_stop\n", " from __future__ import not_zero_indented\n", "valid_code = True\n", "\"\"\"\n", "expected_code = \"\"\"\n", "# comment\n", "#nbdev_comment from __future__ import annotations \n", "valid_code = False # but _from_future_import will work anyway\n", "#nbdev_comment from __future__ import generator_stop\n", " from __future__ import not_zero_indented\n", "valid_code = True\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def _run_from_future_import_test():\n", " fname = 'test_from_future_import.txt'\n", " with open(fname, 'w', encoding='utf8') as f: f.write(txt)\n", "\n", " actual_code=_from_future_import(fname, flags, code, {})\n", " test_eq(expected_code, actual_code)\n", " with open(fname, 'r', encoding='utf8') as f: test_eq(f.read(), txt)\n", "\n", " actual_code=_from_future_import(fname, flags, code)\n", " test_eq(expected_code, actual_code)\n", " with open(fname, 'r', encoding='utf8') as f: test_eq(f.read(), expected_txt)\n", "\n", " os.remove(fname)\n", "\n", "_run_from_future_import_test()\n", "\n", "flags=\"\"\"from __future__ import annotations \n", "from __future__ import generator_stop\n", "#export\"\"\"\n", "code = \"\"\n", "expected_code = \"\"\n", "fname = 'test_from_future_import.txt'\n", "\n", "_run_from_future_import_test()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ 
"#|export\n", "def _add2all(fname, names, line_width=120):\n", " if len(names) == 0: return\n", " with open(fname, 'r', encoding='utf8') as f: text = f.read()\n", " tw = TextWrapper(width=120, initial_indent='', subsequent_indent=' '*11, break_long_words=False)\n", " re_all = _re__all__def.search(text)\n", " start,end = re_all.start(),re_all.end()\n", " text_all = tw.wrap(f\"{text[start:end-1]}{'' if text[end-2]=='[' else ', '}{', '.join(names)}]\")\n", " with open(fname, 'w', encoding='utf8') as f: f.write(text[:start] + '\\n'.join(text_all) + text[end:])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "fname = 'test_add.txt'\n", "with open(fname, 'w', encoding='utf8') as f: f.write(\"Bla\\n__all__ = [my_file, MyClas]\\nBli\")\n", "_add2all(fname, ['new_function'])\n", "with open(fname, 'r', encoding='utf8') as f: \n", " test_eq(f.read(), \"Bla\\n__all__ = [my_file, MyClas, new_function]\\nBli\")\n", "_add2all(fname, [f'new_function{i}' for i in range(10)])\n", "with open(fname, 'r', encoding='utf8') as f: \n", " test_eq(f.read(), \"\"\"Bla\n", "__all__ = [my_file, MyClas, new_function, new_function0, new_function1, new_function2, new_function3, new_function4,\n", " new_function5, new_function6, new_function7, new_function8, new_function9]\n", "Bli\"\"\")\n", "os.remove(fname)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def relative_import(name, fname):\n", " \"Convert a module `name` to a name relative to `fname`\"\n", " mods = name.split('.')\n", " splits = str(fname).split(os.path.sep)\n", " if mods[0] not in splits: return name\n", " i=len(splits)-1\n", " while i>0 and splits[i] != mods[0]: i-=1\n", " splits = splits[i:]\n", " while len(mods)>0 and splits[0] == mods[0]: splits,mods = splits[1:],mods[1:]\n", " return '.' * (len(splits)) + '.'.join(mods)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we say from \n", "``` python\n", "from lib_name.module.submodule import bla\n", "``` \n", "in a notebook, it needs to be converted to something like \n", "```\n", "from .module.submodule import bla\n", "```\n", "or \n", "```from .submodule import bla``` \n", "depending on where we are. This function deals with those imports renaming.\n", "\n", "Note that import of the form\n", "```python\n", "import lib_name.module\n", "```\n", "are left as is as the syntax `import module` does not work for relative imports." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_eq(relative_import('nbdev.core', Path.cwd()/'nbdev'/'data.py'), '.core')\n", "test_eq(relative_import('nbdev.core', Path('nbdev')/'vision'/'data.py'), '..core')\n", "test_eq(relative_import('nbdev.vision.transform', Path('nbdev')/'vision'/'data.py'), '.transform')\n", "test_eq(relative_import('nbdev.notebook.core', Path('nbdev')/'data'/'external.py'), '..notebook.core')\n", "test_eq(relative_import('nbdev.vision', Path('nbdev')/'vision'/'learner.py'), '.')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_import = ReLibName(r'^(\\s*)from (LIB_NAME\\.\\S*) import (.*)$')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def _deal_import(code_lines, fname):\n", " def _replace(m):\n", " sp,mod,obj = m.groups()\n", " return f\"{sp}from {relative_import(mod, fname)} import {obj}\"\n", " return [_re_import.re.sub(_replace,line) for line in code_lines]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "lines = [\"from nbdev.core import *\", \n", " \"nothing to see\", \n", " \" from nbdev.vision import bla1, bla2\", \n", " \"from nbdev.vision import models\",\n", " \"import nbdev.vision\"]\n", "test_eq(_deal_import(lines, Path.cwd()/'nbdev'/'data.py'), [\n", " \"from .core import *\", \n", " \"nothing to see\", \n", " \" from .vision import bla1, bla2\", \n", " \"from .vision import models\",\n", " \"import nbdev.vision\"\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the library" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Saving an index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To be able to build back a correspondence between functions and the notebooks they are defined in, we need to store an index. It's done in the private module _nbdev inside your library, and the following function are used to define it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_index_custom = re.compile(r'def custom_doc_links\\(name\\):(.*)$', re.DOTALL)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def reset_nbdev_module():\n", " \"Create a skeleton for _nbdev\"\n", " fname = get_config().path(\"lib_path\")/'_nbdev.py'\n", " fname.parent.mkdir(parents=True, exist_ok=True)\n", " sep = '\\n' * (get_config().d.getint('cell_spacing', 1) + 1)\n", " if fname.is_file():\n", " with open(fname, 'r') as f: search = _re_index_custom.search(f.read())\n", " else: search = None\n", " prev_code = search.groups()[0] if search is not None else ' return None\\n'\n", " with open(fname, 'w') as f:\n", " f.write(f\"# AUTOGENERATED BY NBDEV! 
DO NOT EDIT!\")\n", " f.write('\\n\\n__all__ = [\"index\", \"modules\", \"custom_doc_links\", \"git_url\"]')\n", " f.write('\\n\\nindex = {}')\n", " f.write('\\n\\nmodules = []')\n", " f.write(f'\\n\\ndoc_url = \"{get_config().doc_host}{get_config().doc_baseurl}\"')\n", " f.write(f'\\n\\ngit_url = \"{get_config().git_url}\"')\n", " f.write(f'{sep}def custom_doc_links(name):{prev_code}')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "class _EmptyModule():\n", " def __init__(self):\n", " self.index,self.modules = {},[]\n", " try: self.doc_url,self.git_url = f\"{get_config().doc_host}{get_config().doc_baseurl}\",get_config().git_url\n", " except FileNotFoundError: self.doc_url,self.git_url = '',''\n", "\n", " def custom_doc_links(self, name): return None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def get_nbdev_module():\n", " \"Reads _nbdev\"\n", " try:\n", " spec = importlib.util.spec_from_file_location(f\"{get_config().lib_name}._nbdev\", get_config().path(\"lib_path\")/'_nbdev.py')\n", " mod = importlib.util.module_from_spec(spec)\n", " spec.loader.exec_module(mod)\n", " return mod\n", " except: return _EmptyModule()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_index_idx = re.compile(r'index\\s*=\\s*{[^}]*}')\n", "_re_index_mod = re.compile(r'modules\\s*=\\s*\\[[^\\]]*\\]')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def save_nbdev_module(mod):\n", " \"Save `mod` inside _nbdev\"\n", " fname = get_config().path(\"lib_path\")/'_nbdev.py'\n", " with open(fname, 'r') as f: code = f.read()\n", " t = r',\\n '.join([f'\"{k}\": \"{v}\"' for k,v in mod.index.items()])\n", " code = _re_index_idx.sub(\"index = {\"+ t +\"}\", code)\n", " t = r',\\n '.join(['\"' + f.replace('\\\\','/') + '\"' for f in mod.modules])\n", " code = _re_index_mod.sub(f\"modules = [{t}]\", code)\n", " with open(fname, 'w') as f: f.write(code)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "ind,ind_bak = get_config().path(\"lib_path\")/'_nbdev.py',get_config().path(\"lib_path\")/'_nbdev.bak'\n", "if ind.exists(): shutil.move(ind, ind_bak)\n", "try:\n", " reset_nbdev_module()\n", " mod = get_nbdev_module()\n", " test_eq(mod.index, {})\n", " test_eq(mod.modules, [])\n", "\n", " mod.index = {'foo':'bar'}\n", " mod.modules.append('lala.bla')\n", " save_nbdev_module(mod)\n", "\n", " mod = get_nbdev_module()\n", " test_eq(mod.index, {'foo':'bar'})\n", " test_eq(mod.modules, ['lala.bla'])\n", "finally:\n", " if ind_bak.exists(): shutil.move(ind_bak, ind)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the modules" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def split_flags_and_code(cell, return_type=list):\n", " \"Splits the `source` of a cell into 2 parts and returns (flags, code)\"\n", " source_str = cell['source'].replace('\\r', '')\n", " code_lines = source_str.split('\\n')\n", " split_pos = 0 if code_lines[0].strip().startswith('#') else -1\n", " for i, line in enumerate(code_lines):\n", " if not line.startswith('#') and line.strip() and not _re_from_future_import.match(line): break\n", " split_pos+=1\n", " res = code_lines[:split_pos], code_lines[split_pos:]\n", " if return_type is list: return res\n", " 
return tuple('\\n'.join(r) for r in res)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`return_type` tells us if the tuple returned will contain `list`s of lines or `str`ings with line breaks. \n", "\n", "We treat the first comment line as a flag" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"split_flags_and_code" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def _test_split_flags_and_code(expected_flags, expected_code):\n", " cell = nbformat.v4.new_code_cell('\\n'.join(expected_flags + expected_code))\n", " test_eq((expected_flags, expected_code), split_flags_and_code(cell))\n", " expected=('\\n'.join(expected_flags), '\\n'.join(expected_code))\n", " test_eq(expected, split_flags_and_code(cell, str))\n", " \n", "_test_split_flags_and_code([\n", " '#export'],\n", " ['# TODO: write this function',\n", " 'def func(x): pass'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def create_mod_file(fname, nb_path, bare=False):\n", " \"Create a module file for `fname`.\"\n", " try: bare = get_config().d.getboolean('bare', bare)\n", " except FileNotFoundError: pass\n", " fname.parent.mkdir(parents=True, exist_ok=True)\n", " try: dest = get_config().config_file.parent\n", " except FileNotFoundError: dest = nb_path\n", " file_path = os.path.relpath(nb_path, dest).replace('\\\\', '/')\n", " with open(fname, 'w') as f:\n", " if not bare: f.write(f\"# AUTOGENERATED! DO NOT EDIT! File to edit: {file_path} (unless otherwise specified).\")\n", " f.write('\\n\\n__all__ = []')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A new module filename is created each time a notebook has a cell marked with `#default_exp`. In your collection of notebooks, you should only have one notebook that creates a given module since they are re-created each time you do a library build (to ensure the library is clean). Note that any file you create manually will never be overwritten (unless it has the same name as one of the modules defined in a `#default_exp` cell) so you are responsible to clean up those yourself.\n", "\n", "`fname` is the notebook that contained the `#default_exp` cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def create_mod_files(files, to_dict=False, bare=False):\n", " \"Create mod files for default exports found in `files`\"\n", " modules = []\n", " try: lib_path = get_config().path(\"lib_path\")\n", " except FileNotFoundError: lib_path = Path()\n", " try: nbs_path = get_config().path(\"nbs_path\")\n", " except FileNotFoundError: nbs_path = Path()\n", " for f in sorted(files):\n", " fname = Path(f)\n", " nb = read_nb(fname)\n", " default = find_default_export(nb['cells'])\n", " if default:\n", " default = os.path.sep.join(default.split('.'))\n", " modules.append(default)\n", " if not to_dict: create_mod_file(lib_path/f'{default}.py', nbs_path/f'{fname}', bare=bare)\n", " return modules" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create module files for all `#default_export` flags found in `files` and return a list containing the names of modules created. 
\n", "\n", "Note: The number if modules returned will be less that the number of files passed in if files do not `#default_export`.\n", "\n", "By creating all module files before calling `_notebook2script`, the order of execution no longer matters - so you can now export to a notebook that is run \"later\".\n", "\n", "You might still have problems when\n", "- converting a subset of notebooks or\n", "- exporting to a module that does not have a `#default_export` yet\n", "\n", "in which case `_notebook2script` will print warnings like;\n", "```\n", "Warning: Exporting to \"core.py\" but this module is not part of this build\n", "```\n", "\n", "If you see a warning like this\n", "- and the module file (e.g. \"core.py\") does not exist, you'll see a `FileNotFoundError`\n", "- if the module file exists, the exported cell will be written - even if the exported cell is already in the module file" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def _notebook2script(fname, modules, silent=False, to_dict=None, bare=False):\n", " \"Finds cells starting with `#export` and puts them into a module created by `create_mod_files`\"\n", " try: bare = get_config().d.getboolean('bare', bare)\n", " except FileNotFoundError: pass\n", " if os.environ.get('IN_TEST',0): return # don't export if running tests\n", " try: spacing,has_setting = get_config().d.getint('cell_spacing', 1), True\n", " except FileNotFoundError: spacing,has_setting = 1, False\n", " sep = '\\n' * (spacing + 1)\n", " try: lib_path = get_config().path(\"lib_path\")\n", " except FileNotFoundError: lib_path = Path()\n", " fname = Path(fname)\n", " nb = read_nb(fname)\n", " default = find_default_export(nb['cells'])\n", " if default is not None: default = os.path.sep.join(default.split('.'))\n", " mod = get_nbdev_module()\n", " exports = [is_export(c, default) for c in nb['cells']]\n", " cells = [(i,c,e) for i,(c,e) in enumerate(zip(nb['cells'],exports)) if e is not None]\n", " for i,c,(e,a) in cells:\n", " if e not in modules: print(f'Warning: Exporting to \"{e}.py\" but this module is not part of this build')\n", " fname_out = lib_path/f'{e}.py'\n", " if bare: orig = \"\\n\"\n", " else: orig = (f'# {\"\" if a else \"Internal \"}C' if e==default else f'# Comes from {fname.name}, c') + 'ell\\n'\n", " flag_lines,code_lines = split_flags_and_code(c)\n", " if has_setting: code_lines = _deal_import(code_lines, fname_out)\n", " code = sep + orig + '\\n'.join(code_lines)\n", " names = export_names(code)\n", " flags = '\\n'.join(flag_lines)\n", " extra,code = extra_add(flags, code)\n", " code = _from_future_import(fname_out, flags, code, to_dict)\n", " if a:\n", " if to_dict is None: _add2all(fname_out, [f\"'{f}'\" for f in names if '.' 
not in f and len(f) > 0] + extra)\n", " mod.index.update({f: fname.name for f in names})\n", " code = re.sub(r' +$', '', code, flags=re.MULTILINE)\n", " if code != sep + orig[:-1]:\n", " if to_dict is not None: to_dict[e].append((i, fname, code))\n", " else:\n", " with open(fname_out, 'a', encoding='utf8') as f: f.write(code)\n", " if f'{e}.py' not in mod.modules: mod.modules.append(f'{e}.py')\n", " if has_setting: save_nbdev_module(mod)\n", "\n", " if not silent: print(f\"Converted {fname.name}.\")\n", " return to_dict" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Converted 00_export.ipynb.\n" ] } ], "source": [ "#|hide\n", "if not os.environ.get('IN_TEST',0):\n", " modules = create_mod_files(glob.glob('00_export.ipynb'))\n", " _notebook2script('00_export.ipynb', modules)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "with open(get_config().path(\"lib_path\")/('export.py')) as f: l = f.readline()\n", "test_eq(l, '# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/00_export.ipynb (unless otherwise specified).\\n')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def add_init(path):\n", " \"Add `__init__.py` in all subdirs of `path` containing python files if it's not there already\"\n", " for p,d,f in os.walk(path):\n", " for f_ in f:\n", " if f_.endswith('.py'):\n", " if not (Path(p)/'__init__.py').exists(): (Path(p)/'__init__.py').touch()\n", " break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with tempfile.TemporaryDirectory() as d:\n", " os.makedirs(Path(d)/'a', exist_ok=True)\n", " (Path(d)/'a'/'f.py').touch()\n", " os.makedirs(Path(d)/'a/b', exist_ok=True)\n", " (Path(d)/'a'/'b'/'f.py').touch()\n", " add_init(d)\n", " assert not (Path(d)/'__init__.py').exists()\n", " for e in [Path(d)/'a', Path(d)/'a/b']:\n", " assert (e/'__init__.py').exists()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_version = re.compile('^__version__\\s*=.*$', re.MULTILINE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def update_version():\n", " \"Add or update `__version__` in the main `__init__.py` of the library\"\n", " fname = get_config().path(\"lib_path\")/'__init__.py'\n", " if not fname.exists(): fname.touch()\n", " version = f'__version__ = \"{get_config().version}\"'\n", " with open(fname, 'r') as f: code = f.read()\n", " if _re_version.search(code) is None: code = version + \"\\n\" + code\n", " else: code = _re_version.sub(version, code)\n", " with open(fname, 'w') as f: f.write(code)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "_re_baseurl = re.compile('^baseurl\\s*:.*$', re.MULTILINE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def update_baseurl():\n", " \"Add or update `baseurl` in `_config.yml` for the docs\"\n", " fname = get_config().path(\"doc_path\")/'_config.yml'\n", " if not fname.exists(): return\n", " with open(fname, 'r') as f: code = f.read()\n", " if _re_baseurl.search(code) is None: code = code + f\"\\nbaseurl: {get_config().doc_baseurl}\"\n", " else: code = _re_baseurl.sub(f\"baseurl: {get_config().doc_baseurl}\", code)\n", " with open(fname, 'w') as 
f: f.write(code)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def nbglob(fname=None, recursive=None, extension='.ipynb', config_key='nbs_path') -> L:\n", " \"Find all files in a directory matching an extension given a `config_key`.\"\n", " fname = Path(fname or get_config().path(config_key))\n", " if fname.is_file(): return [fname]\n", " if recursive == None: recursive=get_config().get('recursive', 'False').lower() == 'true'\n", " if fname.is_dir(): pat = f'**/*{extension}' if recursive else f'*{extension}'\n", " else: fname,_,pat = str(fname).rpartition(os.path.sep)\n", " if str(fname).endswith('**'): fname,pat = fname[:-2],'**/'+pat\n", " fls = L(Path(fname).glob(pat)).map(Path)\n", " return fls.filter(lambda x: x.name[0]!='_' and '/.' not in str(x))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ignores hidden directories and filenames starting with `_`. If argument `recursive` is not set to `True` or `False`, this value is retreived from settings.ini with a default of `False`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "with tempfile.TemporaryDirectory() as d:\n", " os.makedirs(Path(d)/'a', exist_ok=True)\n", " (Path(d)/'a'/'a.ipynb').touch()\n", " (Path(d)/'a'/'fake_a.ipynb').touch()\n", " os.makedirs(Path(d)/'a/b', exist_ok=True)\n", " (Path(d)/'a'/'b'/'fake_b.ipynb').touch()\n", " os.makedirs(Path(d)/'a/b/c', exist_ok=True)\n", " (Path(d)/'a'/'b'/'c'/'fake_c.ipynb').touch()\n", " (Path(d)/'a'/'b'/'c'/'foo_c.ipynb').touch()\n", " \n", " if sys.platform != \"win32\":\n", " assert len(nbglob(f'{d}/**/foo*', recursive=True)) == 1\n", " assert len(nbglob(f'{d}/a/**/[f-g]*.*')) == 4\n", " assert len(nbglob(d, recursive=True)) == 5\n", " assert len(nbglob(d, recursive=False)) == 0\n", " assert len(nbglob(f'{d}/a', recursive=False)) == 2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "if sys.platform != \"win32\":\n", " assert len(nbglob('*')) > 1\n", " assert len(nbglob('*')) > len(nbglob('0*'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "assert not nbglob().filter(lambda x: '.ipynb_checkpoints' in str(x))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "fnames = nbglob()\n", "test_eq(len(fnames) > 0, True)\n", "\n", "fnames = nbglob(fnames[0])\n", "test_eq(len(fnames), 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optionally you can pass a `config_key` to dictate which directory you are pointing to. By default it's `nbs_path` as without any parameters passed in, it will check for notebooks. 
To have it instead find library files simply pass in `lib_path` instead.\n", "\n", "> Note: it will only search for paths in `get_config().path`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "fnames = nbglob(extension='.py', config_key='lib_path')\n", "test_eq(len(fnames) > 1, True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "def notebook2script(fname=None, silent=False, to_dict=False, bare=False):\n", " \"Convert notebooks matching `fname` to modules\"\n", " # initial checks\n", " if os.environ.get('IN_TEST',0): return # don't export if running tests\n", " if fname is None:\n", " reset_nbdev_module()\n", " update_version()\n", " update_baseurl()\n", " files = nbglob(fname=fname)\n", " d = collections.defaultdict(list) if to_dict else None\n", " modules = create_mod_files(files, to_dict, bare=bare)\n", " for f in sorted(files): d = _notebook2script(f, modules, silent=silent, to_dict=d, bare=bare)\n", " if to_dict: return d\n", " elif fname is None: add_init(get_config().path(\"lib_path\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finds cells starting with `#export` and puts them into the appropriate module. If `fname` is not specified, this will convert all notebook not beginning with an underscore in the `nb_folder` defined in `setting.ini`. Otherwise `fname` can be a single filename or a glob expression.\n", "\n", "`silent` makes the command not print any statement and `to_dict` is used internally to convert the library to a dictionary. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|export\n", "class DocsTestClass:\n", " \"for tests only\"\n", " def test(): pass\n", "\n", " def test_self(self, cls, arg): pass\n", "\n", " @classmethod\n", " def test_cls(cls, arg): pass\n", " \n", " @property\n", " def test_property(self): pass" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#|hide\n", "#exporti\n", "#for tests only\n", "def update_lib_with_exporti_testfn(): pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Export -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Converted 00_export.ipynb.\n", "Converted 01_sync.ipynb.\n", "Converted 02_showdoc.ipynb.\n", "Converted 03_export2html.ipynb.\n", "Converted 04_test.ipynb.\n", "Converted 05_merge.ipynb.\n", "Converted 06_cli.ipynb.\n", "Converted 07_clean.ipynb.\n", "Converted 99_search.ipynb.\n", "Converted example.ipynb.\n", "Converted index.ipynb.\n", "Converted nbdev_comments.ipynb.\n", "Converted tutorial.ipynb.\n", "Converted tutorial_colab.ipynb.\n" ] } ], "source": [ "#|hide\n", "notebook2script()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "jupytext": { "split_at_heading": true }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 4 }