{ "cells": [ { "cell_type": "markdown", "metadata": { "button": false, "new_sheet": false, "run_control": { "read_only": false }, "slideshow": { "slide_type": "slide" } }, "source": [ "# Debugging Performance Issues\n", "\n", "Most chapters of this book deal with _functional_ issues – that is, issues related to the _functionality_ (or its absence) of the code in question. However, debugging can also involve _nonfunctional_ issues, however – performance, usability, reliability, and more. In this chapter, we give a short introduction on how to debug such nonfunctional issues, notably _performance_ issues." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:41:59.115666Z", "iopub.status.busy": "2023-11-12T12:41:59.115573Z", "iopub.status.idle": "2023-11-12T12:41:59.152047Z", "shell.execute_reply": "2023-11-12T12:41:59.151721Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from bookutils import YouTubeVideo\n", "YouTubeVideo(\"0tMeB9G0uUI\")" ] }, { "cell_type": "markdown", "metadata": { "button": false, "new_sheet": false, "run_control": { "read_only": false }, "slideshow": { "slide_type": "subslide" } }, "source": [ "**Prerequisites**\n", "\n", "* This chapter leverages visualization capabilities from [the chapter on statistical debugging](StatisticalDebugger.ipynb).\n", "* We also show how to debug nonfunctional issues using [delta debugging](DeltaDebugger.ipynb)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "button": false, "execution": { "iopub.execute_input": "2023-11-12T12:41:59.172172Z", "iopub.status.busy": "2023-11-12T12:41:59.171989Z", "iopub.status.idle": "2023-11-12T12:41:59.174025Z", "shell.execute_reply": "2023-11-12T12:41:59.173757Z" }, "new_sheet": false, "run_control": { "read_only": false }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import bookutils.setup" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:41:59.175592Z", "iopub.status.busy": "2023-11-12T12:41:59.175489Z", "iopub.status.idle": "2023-11-12T12:41:59.855674Z", "shell.execute_reply": "2023-11-12T12:41:59.855352Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import StatisticalDebugger\n", "import DeltaDebugger" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "## Synopsis\n", "\n", "\n", "To [use the code provided in this chapter](Importing.ipynb), write\n", "\n", "```python\n", ">>> from debuggingbook.PerformanceDebugger import \n", "```\n", "\n", "and then make use of the following features.\n", "\n", "\n", "This chapter provides a class `PerformanceDebugger` that allows measuring and visualizing the time taken per line in a function.\n", "\n", "```python\n", ">>> with PerformanceDebugger(TimeCollector) as debugger:\n", ">>> for i in range(100):\n", ">>> s = remove_html_markup('foo')\n", "```\n", "The distribution of executed time within each function can be obtained by printing out the debugger:\n", "\n", "```python\n", ">>> print(debugger)\n", " 238 2% def remove_html_markup(s): # type: ignore\n", " 239 2% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 14% if c == '<' and not 
quote:\n", " 247 2% tag = True\n", " 248 11% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 8% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 5% out = out + c\n", " 254 0%\n", " 255 2% return out\n", "\n", "\n", "```\n", "The sum of all percentages in a function should always be 100%.\n", "\n", "These percentages can also be visualized, where darker shades represent higher percentage values:\n", "\n", "```python\n", ">>> debugger\n", "```\n", "
 238 def remove_html_markup(s):  # type: ignore
\n", "
 239     tag = False
\n", "
 240     quote = False
\n", "
 241     out = ""
\n", "
 242  
\n", "
 243     for c in s:
\n", "
 244         assert tag or not quote
\n", "
 245  
\n", "
 246         if c == '<' and not quote:
\n", "
 247             tag = True
\n", "
 248         elif c == '>' and not quote:
\n", "
 249             tag = False
\n", "
 250         elif (c == '"' or c == "'") and tag:
\n", "
 251             quote = not quote
\n", "
 252         elif not tag:
\n", "
 253             out = out + c
\n", "
 254  
\n", "
 255     return out
\n", "\n", "\n", "The abstract `MetricCollector` class allows subclassing to build more collectors, such as `HitCollector`.\n", "\n", "![](PICS/PerformanceDebugger-synopsis-1.svg)\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Measuring Performance\n", "\n", "The solution to debugging performance issues fits in two simple rules:\n", "\n", "1. _Measure_ performance\n", "2. _Break down_ how individual parts of your code contribute to performance.\n", "\n", "The first part, actually _measuring_ performance, is key here. Developers often take elaborated guesses on which aspects of their code impact performance, and think about all possible ways to optimize their code – and at the same time, making it harder to understand, harder to evolve, and harder to maintain. In most cases, such guesses are wrong. Instead, _measure_ performance of your program, _identify_ the very few parts that may need to get improved, and again _measure_ the impact of your changes." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Almost all programming languages offer a way to measure performance and breaking it down to individual parts of the code – a means also known as *profiling*. Profiling works by measuring the execution time for each function (or even more fine-grained location) in your program. This can be achieved by\n", "\n", "1. _Instrumenting_ or _tracing_ code such that the current time at entry and exit of each function (or line), thus determining the time spent. In Python, this is achieved by profilers like [profile or cProfile](https://docs.python.org/3/library/profile.html)\n", "\n", "2. _Sampling_ the current function call stack at regular intervals, and thus assessing which functions are most active (= take the most time) during execution. For Python, the [scalene](https://github.com/plasma-umass/scalene) profiler works this way.\n", "\n", "Pretty much all programming languages support profiling, either through measuring, sampling, or both. As a rule of thumb, _interpreted_ languages more frequently support measuring (as it is easy to implement in an interpreter), while _compiled_ languages more frequently support sampling (because instrumentation requires recompilation). Python is lucky to support both methods." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Tracing Execution Profiles\n", "\n", "Let us illustrate profiling in a simple example. The `ChangeCounter` class (which we will encounter in the [chapter on mining version histories](ChangeCounter.ipynb)) reads in a version history from a git repository. 
Yet, it takes more than a minute to read in the debugging book change history:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:41:59.857546Z", "iopub.status.busy": "2023-11-12T12:41:59.857401Z", "iopub.status.idle": "2023-11-12T12:42:00.992365Z", "shell.execute_reply": "2023-11-12T12:42:00.991990Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "from ChangeCounter import ChangeCounter, debuggingbook_change_counter # minor dependency" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:42:00.994752Z", "iopub.status.busy": "2023-11-12T12:42:00.994360Z", "iopub.status.idle": "2023-11-12T12:42:00.996591Z", "shell.execute_reply": "2023-11-12T12:42:00.996192Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import Timer" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:42:00.998427Z", "iopub.status.busy": "2023-11-12T12:42:00.998286Z", "iopub.status.idle": "2023-11-12T12:44:13.957334Z", "shell.execute_reply": "2023-11-12T12:44:13.955736Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "with Timer.Timer() as t:\n", " change_counter = debuggingbook_change_counter(ChangeCounter)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:44:13.987900Z", "iopub.status.busy": "2023-11-12T12:44:13.987700Z", "iopub.status.idle": "2023-11-12T12:44:13.990811Z", "shell.execute_reply": "2023-11-12T12:44:13.990515Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "132.9539235000002" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "t.elapsed_time()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The Python `profile` and `cProfile` modules offer a simple way to identify the most time-consuming functions. They are invoked using the `run()` function, whose argument is the command to be profiled. 
The output reports, for each function encountered:\n", "\n", "* How often it was called (`ncalls` column)\n", "* How much time was spent in the given function, _excluding_ time spent in calls to sub-functions (`tottime` column)\n", "* The fraction of `tottime` / `ncalls` (first `percall` column)\n", "* How much time was spent in the given function, _including_ time spent in calls to sub-functions (`cumtime` column)\n", "* The fraction of `cumtime` / `percall` (second `percall` column)\n", "\n", "Let us have a look at the profile we obtain:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:44:13.999609Z", "iopub.status.busy": "2023-11-12T12:44:13.999489Z", "iopub.status.idle": "2023-11-12T12:44:14.001119Z", "shell.execute_reply": "2023-11-12T12:44:14.000866Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import cProfile" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:44:14.002786Z", "iopub.status.busy": "2023-11-12T12:44:14.002676Z", "iopub.status.idle": "2023-11-12T12:47:16.435047Z", "shell.execute_reply": "2023-11-12T12:47:16.434323Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 20229584 function calls (20086427 primitive calls) in 182.081 seconds\n", "\n", " Ordered by: cumulative time\n", "\n", " ncalls tottime percall cumtime percall filename:lineno(function)\n", " 1 0.000 0.000 182.081 182.081 {built-in method builtins.exec}\n", " 1 0.000 0.000 182.081 182.081 :1()\n", " 1 0.000 0.000 182.081 182.081 ChangeCounter.ipynb:168(debuggingbook_change_counter)\n", " 1 0.000 0.000 182.081 182.081 ChangeCounter.ipynb:51(__init__)\n", " 1 1.061 1.061 182.081 182.081 ChangeCounter.ipynb:88(mine)\n", " 1172 0.025 0.000 180.790 0.154 ChangeCounter.ipynb:102(mine_commit)\n", " 1172 0.037 0.000 180.592 0.154 commit.py:680(modified_files)\n", " 1172 0.021 0.000 180.553 0.154 commit.py:696(_get_modified_files)\n", " 1122 0.208 0.000 154.573 0.138 diff.py:95(diff)\n", " 1165 0.084 0.000 82.444 0.071 cmd.py:1114(_call_process)\n", " 1165 0.101 0.000 82.342 0.071 cmd.py:726(execute)\n", " 1165 0.183 0.000 81.912 0.070 subprocess.py:753(__init__)\n", " 1165 0.229 0.000 81.688 0.070 subprocess.py:1682(_execute_child)\n", " 1165 79.249 0.068 79.249 0.068 {built-in method _posixsubprocess.fork_exec}\n", " 1123 0.058 0.000 79.224 0.071 cmd.py:638()\n", " 1122 14.849 0.013 75.150 0.067 diff.py:445(_index_from_patch_format)\n", " 1122 0.065 0.000 47.601 0.042 cmd.py:71(handle_process_output)\n", " 11320 47.058 0.004 47.058 0.004 {method 'acquire' of '_thread.lock' objects}\n", " 2246 0.014 0.000 46.996 0.021 threading.py:1057(join)\n", " 2246 0.019 0.000 46.980 0.021 threading.py:1095(_wait_for_tstate_lock)\n", " 1172 0.052 0.000 25.921 0.022 commit.py:730(_parse_diff)\n", " 23898 0.027 0.000 24.907 0.001 commit.py:759(_get_undecoded_content)\n", " 79263 0.087 0.000 20.358 0.000 cmd.py:528(read)\n", " 66944 0.022 0.000 20.301 0.000 base.py:137(read)\n", " 158504 20.264 0.000 20.264 0.000 {method 'read' of '_io.BufferedReader' objects}\n", " 11949 0.095 0.000 11.865 0.001 diff.py:290(__init__)\n", " 11147 0.013 0.000 11.561 0.001 base.py:363(submodules)\n", " 11147 0.047 0.000 11.548 0.001 util.py:1092(list_items)\n", "23621/23618 0.030 0.000 11.486 0.000 {method 'extend' of 'list' objects}\n", " 33441 0.135 0.000 11.456 0.000 base.py:1228(iter_items)\n", " 79263 0.119 0.000 
7.002 0.000 db.py:46(stream)\n", " 79263 0.188 0.000 6.803 0.000 cmd.py:1263(stream_object_data)\n", " 123851 0.388 0.000 5.095 0.000 cmd.py:1235(__get_object_header)\n", " 22356 0.019 0.000 4.700 0.000 base.py:129(data_stream)\n", " 58029 0.057 0.000 3.970 0.000 util.py:248(__getattr__)\n", " 204423 3.916 0.000 3.917 0.000 {method 'readline' of '_io.BufferedReader' objects}\n", " 33441 0.027 0.000 3.725 0.000 base.py:527(commit)\n", " 123851 0.029 0.000 3.320 0.000 cmd.py:1221(_get_persistent_cmd)\n", " 33441 0.018 0.000 3.289 0.000 symbolic.py:212(_get_commit)\n", " 33441 0.035 0.000 3.272 0.000 symbolic.py:203(_get_object)\n", "78029/22294 0.046 0.000 3.050 0.000 tree.py:347(__getitem__)\n", "78029/22294 0.159 0.000 3.025 0.000 tree.py:245(join)\n", " 44588 0.049 0.000 2.616 0.000 tree.py:224(_set_cache_)\n", " 44588 0.103 0.000 2.385 0.000 base.py:75(new_from_sha)\n", " 11147 0.049 0.000 2.202 0.000 base.py:196(_config_parser)\n", " 44588 0.034 0.000 2.167 0.000 symbolic.py:143(dereference_recursive)\n", " 89177 0.066 0.000 2.133 0.000 symbolic.py:196(_get_ref_info)\n", " 89177 0.233 0.000 2.067 0.000 symbolic.py:156(_get_ref_info_helper)\n", " 44588 0.074 0.000 2.023 0.000 db.py:42(info)\n", " 44588 0.036 0.000 1.901 0.000 cmd.py:1243(get_object_header)\n", " 1165 1.735 0.001 1.735 0.001 {built-in method posix.read}\n", " 11147 0.057 0.000 1.434 0.000 fun.py:191(rev_parse)\n", " 11147 0.038 0.000 1.369 0.000 fun.py:121(name_to_object)\n", " 13441 0.032 0.000 1.287 0.000 commit.py:196(_set_cache_)\n", " 103022 1.170 0.000 1.222 0.000 {built-in method io.open}\n", " 11147 0.022 0.000 1.048 0.000 util.py:72(__init__)\n", "89426/78112 0.070 0.000 1.040 0.000 config.py:104(assure_data_present)\n", " 11229 0.047 0.000 1.034 0.000 config.py:281(__init__)\n", " 11229 0.057 0.000 0.982 0.000 configparser.py:601(__init__)\n", " 44588 0.689 0.000 0.903 0.000 fun.py:59(tree_entries_from_data)\n", " 1006233 0.774 0.000 0.774 0.000 {method 'decode' of 'bytes' objects}\n", " 11229 0.186 0.000 0.771 0.000 configparser.py:1321(__init__)\n", "89426/78197 0.066 0.000 0.763 0.000 config.py:543(read)\n", " 1122 0.759 0.001 0.759 0.001 {method 'join' of 'bytes' objects}\n", " 11949 0.004 0.000 0.679 0.000 commit.py:749(_get_decoded_str)\n", " 11229 0.227 0.000 0.497 0.000 config.py:391(_read)\n", " 11229 0.406 0.000 0.406 0.000 {built-in method builtins.dir}\n", " 12319 0.096 0.000 0.392 0.000 commit.py:525(_deserialize)\n", " 89337 0.267 0.000 0.332 0.000 {method 'read' of '_io.TextIOWrapper' objects}\n", " 123851 0.280 0.000 0.281 0.000 {method 'flush' of '_io.BufferedWriter' objects}\n", " 1539795 0.259 0.000 0.259 0.000 {method 'match' of 're.Pattern' objects}\n", " 13121 0.006 0.000 0.242 0.000 commit.py:587(committer_date)\n", " 13121 0.007 0.000 0.236 0.000 commit.py:209(committed_datetime)\n", " 11949 0.038 0.000 0.236 0.000 commit.py:155(__init__)\n", " 1173 0.002 0.000 0.229 0.000 repository.py:207(traverse_commits)\n", " 131755 0.147 0.000 0.227 0.000 posixpath.py:71(join)\n", " 1165 0.077 0.000 0.226 0.000 subprocess.py:1222(_close_pipe_fds)\n", " 2246 0.013 0.000 0.225 0.000 threading.py:909(start)\n", " 2246 0.083 0.000 0.220 0.000 threading.py:820(__init__)\n", " 1165 0.039 0.000 0.214 0.000 os.py:711(copy)\n", " 123851 0.150 0.000 0.213 0.000 cmd.py:1207(_prepare_ref)\n", " 148539 0.158 0.000 0.200 0.000 base.py:50(__init__)\n", " 22610 0.018 0.000 0.198 0.000 pathlib.py:955(__new__)\n", " 44669 0.051 0.000 0.187 0.000 configparser.py:766(get)\n", " 390145 0.111 0.000 0.187 0.000 
compat.py:49(safe_decode)\n", " 777783 0.126 0.000 0.183 0.000 cmd.py:466(__getattr__)\n", " 123851 0.134 0.000 0.181 0.000 cmd.py:1185(_parse_object_header)\n", " 22612 0.017 0.000 0.179 0.000 pathlib.py:587(_from_parts)\n", " 1176 0.022 0.000 0.178 0.000 repository.py:246(_iter_commits)\n", " 90410 0.056 0.000 0.177 0.000 base.py:153(__init__)\n", " 22612 0.043 0.000 0.160 0.000 pathlib.py:567(_parse_args)\n", " 45760 0.036 0.000 0.154 0.000 tree.py:214(__init__)\n", " 24638 0.035 0.000 0.147 0.000 util.py:268(parse_actor_and_date)\n", " 100569 0.147 0.000 0.147 0.000 {method '__exit__' of '_io._IOBase' objects}\n", " 44588 0.079 0.000 0.144 0.000 util.py:85(get_object_type_by_name)\n", " 11229 0.071 0.000 0.141 0.000 configparser.py:1244(__init__)\n", " 2247 0.012 0.000 0.139 0.000 threading.py:582(wait)\n", "1016334/1016333 0.137 0.000 0.137 0.000 {built-in method builtins.getattr}\n", " 22608 0.092 0.000 0.131 0.000 util.py:69(mode_str_to_int)\n", " 58129 0.045 0.000 0.125 0.000 commit.py:84(__init__)\n", " 2248 0.010 0.000 0.121 0.000 threading.py:288(wait)\n", " 116581 0.050 0.000 0.108 0.000 os.py:674(__getitem__)\n", " 22612 0.059 0.000 0.108 0.000 pathlib.py:56(parse_parts)\n", " 1760567 0.102 0.000 0.106 0.000 {built-in method builtins.isinstance}\n", " 269115 0.065 0.000 0.106 0.000 os.py:804(fsencode)\n", " 13121 0.018 0.000 0.096 0.000 util.py:167(from_timestamp)\n", " 22294 0.014 0.000 0.086 0.000 base.py:335(index)\n", " 2372 0.051 0.000 0.085 0.000 contextlib.py:496(callback)\n", " 1165 0.014 0.000 0.081 0.000 util.py:412(remove_password_if_present)\n", " 78109 0.061 0.000 0.081 0.000 util.py:163(join_path)\n", " 44669 0.039 0.000 0.076 0.000 configparser.py:1143(_unify_values)\n", " 45566 0.075 0.000 0.076 0.000 config.py:183(add)\n", " 22294 0.016 0.000 0.072 0.000 base.py:117(__init__)\n", " 2246 0.072 0.000 0.072 0.000 {built-in method _thread.start_new_thread}\n", " 11232 0.013 0.000 0.067 0.000 config.py:489(_has_includes)\n", " 71961 0.067 0.000 0.067 0.000 {method 'search' of 're.Pattern' objects}\n", " 1123 0.010 0.000 0.066 0.000 util.py:383(finalize_process)\n", " 89337 0.049 0.000 0.065 0.000 codecs.py:319(decode)\n", " 2247 0.010 0.000 0.065 0.000 threading.py:538(__init__)\n", " 117665 0.013 0.000 0.063 0.000 _collections_abc.py:877(__iter__)\n", " 31455 0.013 0.000 0.061 0.000 subprocess.py:1770()\n", " 2245 0.039 0.000 0.061 0.000 cmd.py:470(wait)\n", " 1165 0.019 0.000 0.060 0.000 os.py:619(get_exec_path)\n", " 11352 0.033 0.000 0.056 0.000 parse.py:437(urlsplit)\n", " 22294 0.014 0.000 0.056 0.000 base.py:157(_index_path)\n", " 24638 0.021 0.000 0.054 0.000 util.py:665(_from_string)\n", " 11229 0.016 0.000 0.054 0.000 config.py:492(_included_paths)\n", " 22294 0.017 0.000 0.053 0.000 base.py:114(__init__)\n", " 44669 0.033 0.000 0.053 0.000 __init__.py:976(__getitem__)\n", " 1205 0.009 0.000 0.052 0.000 cmd.py:463(__del__)\n", " 89337 0.035 0.000 0.052 0.000 codecs.py:309(__init__)\n", " 89177 0.036 0.000 0.052 0.000 symbolic.py:43(_git_dir)\n", " 117665 0.027 0.000 0.050 0.000 os.py:697(__iter__)\n", " 23898 0.009 0.000 0.050 0.000 diff.py:432(_pick_best_path)\n", " 706030 0.050 0.000 0.050 0.000 {built-in method builtins.len}\n", " 509679 0.049 0.000 0.049 0.000 {method 'encode' of 'str' objects}\n", " 111678 0.040 0.000 0.049 0.000 config.py:192(__getitem__)\n", " 204764 0.047 0.000 0.047 0.000 {method 'split' of 'str' objects}\n", " 2252 0.047 0.000 0.047 0.000 threading.py:236(__init__)\n", " 233000 0.027 0.000 0.046 0.000 os.py:758(decode)\n", " 
33442 0.015 0.000 0.045 0.000 base.py:342(head)\n", " 11949 0.004 0.000 0.045 0.000 ChangeCounter.ipynb:112(include)\n", " 13121 0.040 0.000 0.042 0.000 {built-in method fromtimestamp}\n", " 22374 0.012 0.000 0.042 0.000 util.py:200(join_path_native)\n", " 11949 0.009 0.000 0.041 0.000 ChangeCounter.ipynb:176(filter)\n", " 79042 0.041 0.000 0.041 0.000 config.py:180(__setitem__)\n", " 2286 0.005 0.000 0.040 0.000 subprocess.py:1199(wait)\n", " 23898 0.025 0.000 0.040 0.000 diff.py:54(decode_path)\n", " 80 0.000 0.000 0.039 0.000 util.py:98(wrapper)\n", " 80 0.000 0.000 0.039 0.000 base.py:1103(module)\n", " 81 0.002 0.000 0.039 0.000 base.py:108(__init__)\n", " 281131 0.039 0.000 0.039 0.000 {method 'endswith' of 'str' objects}\n", " 79263 0.025 0.000 0.038 0.000 base.py:128(__new__)\n", " 134652 0.026 0.000 0.038 0.000 posixpath.py:41(_get_sep)\n", " 694235 0.037 0.000 0.037 0.000 {method 'append' of 'list' objects}\n", " 1205 0.010 0.000 0.037 0.000 cmd.py:421(_terminate)\n", " 229688 0.036 0.000 0.036 0.000 {method 'startswith' of 'str' objects}\n", " 116581 0.023 0.000 0.036 0.000 os.py:754(encode)\n", " 11949 0.019 0.000 0.036 0.000 commit.py:923(_from_change_to_modification_type)\n", " 1173 0.000 0.000 0.035 0.000 git.py:110(get_list_commits)\n", " 2286 0.004 0.000 0.035 0.000 subprocess.py:1906(_wait)\n", " 1165 0.026 0.000 0.034 0.000 contextlib.py:533(__exit__)\n", " 22294 0.027 0.000 0.033 0.000 symbolic.py:439(to_full_path)\n", " 44589 0.025 0.000 0.033 0.000 :1053(_handle_fromlist)\n", " 80423 0.033 0.000 0.033 0.000 typing.py:1408(_no_init_or_replace_init)\n", " 2246 0.033 0.000 0.033 0.000 threading.py:1294(_make_invoke_excepthook)\n", " 135648 0.032 0.000 0.032 0.000 typing.py:305(inner)\n", " 44589 0.024 0.000 0.032 0.000 :404(parent)\n", " 148539 0.031 0.000 0.031 0.000 {method 'split' of 'bytes' objects}\n", " 29340 0.009 0.000 0.031 0.000 commit.py:243(new_path)\n", " 33443 0.022 0.000 0.030 0.000 head.py:38(__init__)\n", " 86018 0.029 0.000 0.029 0.000 {built-in method sys.intern}\n", " 1164 0.004 0.000 0.029 0.000 subprocess.py:1893(_try_wait)\n", " 216655 0.029 0.000 0.029 0.000 {built-in method binascii.a2b_hex}\n", " 22294 0.019 0.000 0.028 0.000 configparser.py:878(has_option)\n", " 159625 0.028 0.000 0.028 0.000 {built-in method __new__ of type object at 0x100e370a0}\n", " 123983 0.026 0.000 0.026 0.000 {method 'write' of '_io.BufferedWriter' objects}\n", " 1246 0.026 0.000 0.026 0.000 {built-in method posix.waitpid}\n", " 44588 0.016 0.000 0.025 0.000 base.py:35(__new__)\n", " 40 0.000 0.000 0.025 0.001 base.py:1121(module_exists)\n", " 1165 0.025 0.000 0.025 0.000 cmd.py:416(__init__)\n", " 81 0.000 0.000 0.025 0.000 cmd.py:1273(clear_cache)\n", " 79263 0.024 0.000 0.024 0.000 cmd.py:517(__init__)\n", " 124315 0.024 0.000 0.024 0.000 {method 'group' of 're.Match' objects}\n", " 2244 0.013 0.000 0.023 0.000 threading.py:1191(daemon)\n", " 13121 0.019 0.000 0.022 0.000 {method 'astimezone' of 'datetime.datetime' objects}\n", " 28859 0.014 0.000 0.022 0.000 pathlib.py:619(__str__)\n", " 1165 0.021 0.000 0.022 0.000 warnings.py:458(__enter__)\n", " 427923 0.021 0.000 0.021 0.000 {built-in method posix.fspath}\n", " 128589 0.020 0.000 0.020 0.000 {built-in method binascii.b2a_hex}\n", " 1 0.000 0.000 0.020 0.020 base.py:560(iter_commits)\n", " 1 0.000 0.000 0.019 0.019 commit.py:246(iter_items)\n", " 2372 0.012 0.000 0.019 0.000 contextlib.py:514(_push_exit_callback)\n", " 24638 0.019 0.000 0.019 0.000 util.py:110(utctz_to_altz)\n", " 1165 0.007 0.000 0.018 0.000 
subprocess.py:1583(_get_handles)\n", " 238753 0.018 0.000 0.018 0.000 {method 'strip' of 'str' objects}\n", " 32698 0.014 0.000 0.018 0.000 base.py:105(__ne__)\n", " 2246 0.013 0.000 0.018 0.000 _weakrefset.py:86(add)\n", " 123944 0.018 0.000 0.018 0.000 {method 'startswith' of 'bytes' objects}\n", " 89337 0.018 0.000 0.018 0.000 codecs.py:260(__init__)\n", " 79263 0.017 0.000 0.017 0.000 base.py:132(__init__)\n", " 1165 0.017 0.000 0.017 0.000 contextlib.py:452(__init__)\n", " 89337 0.017 0.000 0.017 0.000 {built-in method _codecs.utf_8_decode}\n", " 79 0.000 0.000 0.017 0.000 base.py:240(__del__)\n", " 113145 0.013 0.000 0.016 0.000 {built-in method builtins.hasattr}\n", " 79 0.001 0.000 0.016 0.000 base.py:246(close)\n", " 44669 0.016 0.000 0.016 0.000 base.py:296(common_dir)\n", " 22376 0.014 0.000 0.016 0.000 configparser.py:644(sections)\n", " 135063 0.015 0.000 0.015 0.000 {method 'rstrip' of 'str' objects}\n", " 1173 0.001 0.000 0.015 0.000 commit.py:318(_iter_from_process_or_stream)\n", " 79263 0.015 0.000 0.015 0.000 cmd.py:604(__del__)\n", " 61225 0.015 0.000 0.015 0.000 {method 'groups' of 're.Match' objects}\n", " 3537 0.014 0.000 0.014 0.000 {built-in method posix.pipe}\n", " 11949 0.010 0.000 0.014 0.000 commit.py:549(committer)\n", " 44669 0.014 0.000 0.014 0.000 __init__.py:966(__init__)\n", " 2372 0.013 0.000 0.013 0.000 contextlib.py:446(_create_cb_wrapper)\n", " 56187 0.013 0.000 0.013 0.000 {built-in method builtins.setattr}\n", " 13121 0.013 0.000 0.013 0.000 util.py:147(__init__)\n", " 2 0.000 0.000 0.012 0.006 {built-in method builtins.next}\n", " 2 0.000 0.000 0.012 0.006 repository.py:173(_prep_repo)\n", " 22294 0.010 0.000 0.012 0.000 util.py:34(sm_name)\n", " 4703 0.012 0.000 0.012 0.000 {built-in method posix.close}\n", " 22926 0.009 0.000 0.011 0.000 diff.py:403(a_path)\n", "10144/1165 0.007 0.000 0.011 0.000 cmd.py:1069(__unpack_args)\n", " 745 0.002 0.000 0.011 0.000 ChangeCounter.ipynb:121(update_stats)\n", " 44588 0.011 0.000 0.011 0.000 base.py:38(__init__)\n", " 6736 0.011 0.000 0.011 0.000 threading.py:546(is_set)\n", " 1165 0.006 0.000 0.011 0.000 warnings.py:165(simplefilter)\n", " 322/162 0.001 0.000 0.011 0.000 fun.py:85(find_submodule_git_dir)\n", " 11352 0.004 0.000 0.010 0.000 parse.py:155(password)\n", " 10425 0.008 0.000 0.010 0.000 diff.py:426(renamed_file)\n", " 19583 0.010 0.000 0.010 0.000 {method 'get' of 'dict' objects}\n", " 172670 0.010 0.000 0.010 0.000 typing.py:1715(cast)\n", " 5663 0.010 0.000 0.010 0.000 {built-in method _thread.allocate_lock}\n", " 11147 0.008 0.000 0.010 0.000 util.py:970(__new__)\n", " 1222 0.009 0.000 0.010 0.000 commit.py:623(parents)\n", " 1 0.000 0.000 0.009 0.009 contextlib.py:139(__exit__)\n", " 2 0.000 0.000 0.009 0.005 git.py:77(clear)\n", " 4492 0.009 0.000 0.009 0.000 threading.py:1176(daemon)\n", " 111810 0.009 0.000 0.009 0.000 {function _OMD.__getitem__ at 0x12b4d5900}\n", " 55941 0.009 0.000 0.009 0.000 {method 'rpartition' of 'str' objects}\n", " 112530 0.009 0.000 0.009 0.000 config.py:387(optionxform)\n", " 23328 0.006 0.000 0.008 0.000 diff.py:407(b_path)\n", " 33443 0.008 0.000 0.008 0.000 symbolic.py:65(__init__)\n", " 4620 0.008 0.000 0.008 0.000 {method 'append' of 'collections.deque' objects}\n", " 11465 0.006 0.000 0.008 0.000 pathlib.py:606(_format_parsed_parts)\n", " 1122 0.008 0.000 0.008 0.000 {method 'finditer' of 're.Pattern' objects}\n", " 81 0.000 0.000 0.008 0.000 base.py:488(config_reader)\n", " 1165 0.007 0.000 0.007 0.000 {built-in method posix.access}\n", " 4492 0.007 0.000 
0.007 0.000 threading.py:1423(current_thread)\n", " 22608 0.007 0.000 0.007 0.000 {method 'sub' of 're.Pattern' objects}\n", " 11039 0.002 0.000 0.007 0.000 config.py:352(__del__)\n", " 24638 0.007 0.000 0.007 0.000 util.py:646(__init__)\n", " 2244 0.006 0.000 0.006 0.000 threading.py:775(_newname)\n", " 62282 0.006 0.000 0.006 0.000 {method 'readline' of '_io.BytesIO' objects}\n", " 11352 0.005 0.000 0.006 0.000 parse.py:189(_userinfo)\n", " 89176 0.006 0.000 0.006 0.000 {built-in method builtins.ord}\n", " 44588 0.006 0.000 0.006 0.000 base.py:52(type)\n", " 1164 0.006 0.000 0.006 0.000 subprocess.py:1060(__del__)\n", " 11147 0.006 0.000 0.006 0.000 util.py:973(__init__)\n", " 22454 0.006 0.000 0.006 0.000 base.py:290(working_tree_dir)\n", " 68332 0.006 0.000 0.006 0.000 {method 'lower' of 'str' objects}\n", " 2490 0.003 0.000 0.005 0.000 posixpath.py:150(dirname)\n", " 11229 0.004 0.000 0.005 0.000 configparser.py:1363(__iter__)\n", " 11147 0.004 0.000 0.005 0.000 base.py:99(__eq__)\n", " 44588 0.005 0.000 0.005 0.000 base.py:42(binsha)\n", " 2246 0.004 0.000 0.005 0.000 threading.py:1021(_stop)\n", " 22612 0.005 0.000 0.005 0.000 pathlib.py:239(splitroot)\n", " 1172 0.002 0.000 0.005 0.000 conf.py:257(is_commit_filtered)\n", " 403 0.001 0.000 0.005 0.000 fun.py:44(is_git_dir)\n", " 1165 0.005 0.000 0.005 0.000 warnings.py:437(__init__)\n", " 11352 0.004 0.000 0.005 0.000 parse.py:114(_coerce_args)\n", " 4738 0.004 0.000 0.005 0.000 base.py:123(hexsha)\n", " 1172 0.002 0.000 0.005 0.000 commit.py:529(hash)\n", " 11949 0.004 0.000 0.005 0.000 commit.py:614(msg)\n", " 11040 0.003 0.000 0.005 0.000 config.py:364(release)\n", " 745 0.005 0.000 0.005 0.000 ChangeCounter.ipynb:137(update_size)\n", " 2246 0.004 0.000 0.005 0.000 _weakrefset.py:39(_remove)\n", " 2260 0.004 0.000 0.005 0.000 threading.py:264(__enter__)\n", " 69408 0.004 0.000 0.004 0.000 {method 'replace' of 'str' objects}\n", " 2328 0.003 0.000 0.004 0.000 subprocess.py:1173(poll)\n", " 1165 0.002 0.000 0.004 0.000 warnings.py:181(_add_filter)\n", " 2244 0.004 0.000 0.004 0.000 threading.py:1162(is_alive)\n", " 44588 0.004 0.000 0.004 0.000 base.py:60(size)\n", " 11229 0.004 0.000 0.004 0.000 config.py:333(_acquire_lock)\n", " 13121 0.004 0.000 0.004 0.000 developer.py:27(__init__)\n", " 7066 0.003 0.000 0.004 0.000 conf.py:45(get)\n", " 1165 0.004 0.000 0.004 0.000 _collections_abc.py:828(keys)\n", " 11147 0.004 0.000 0.004 0.000 fun.py:180(to_commit)\n", " 3413 0.004 0.000 0.004 0.000 {method 'add' of 'set' objects}\n", " 1246 0.001 0.000 0.004 0.000 abc.py:117(__instancecheck__)\n", " 44669 0.004 0.000 0.004 0.000 configparser.py:363(before_get)\n", " 745 0.003 0.000 0.004 0.000 ChangeCounter.ipynb:145(update_changes)\n", " 1165 0.004 0.000 0.004 0.000 {built-in method sys.exc_info}\n", " 22374 0.003 0.000 0.003 0.000 util.py:194(to_native_path_linux)\n", " 2248 0.002 0.000 0.003 0.000 threading.py:279(_is_owned)\n", " 1246 0.003 0.000 0.003 0.000 {built-in method _abc._abc_instancecheck}\n", " 56145 0.003 0.000 0.003 0.000 {built-in method builtins.callable}\n", " 243 0.001 0.000 0.003 0.000 util.py:400(expand_path)\n", " 811 0.002 0.000 0.003 0.000 posixpath.py:337(normpath)\n", " 30290 0.003 0.000 0.003 0.000 {method 'endswith' of 'bytes' objects}\n", " 1 0.000 0.000 0.003 0.003 contextlib.py:130(__enter__)\n", " 2372 0.001 0.000 0.003 0.000 contextlib.py:448(_exit_wrapper)\n", " 52484 0.003 0.000 0.003 0.000 util.py:160(dst)\n", " 1136 0.003 0.000 0.003 0.000 {built-in method posix.stat}\n", " 2248 0.002 0.000 0.003 
0.000 threading.py:273(_release_save)\n", " 40535 0.003 0.000 0.003 0.000 util.py:154(utcoffset)\n", " 1 0.000 0.000 0.003 0.003 git.py:39(__init__)\n", " 1 0.000 0.000 0.002 0.002 git.py:86(_open_repository)\n", " 808 0.001 0.000 0.002 0.000 genericpath.py:39(isdir)\n", " 12277 0.002 0.000 0.002 0.000 {method 'join' of 'str' objects}\n", " 101 0.000 0.000 0.002 0.000 parse.py:88(clear_cache)\n", " 22376 0.002 0.000 0.002 0.000 {method 'keys' of 'collections.OrderedDict' objects}\n", " 202 0.002 0.000 0.002 0.000 {method 'clear' of 'dict' objects}\n", " 2244 0.001 0.000 0.002 0.000 base.py:115(__str__)\n", " 1166 0.002 0.000 0.002 0.000 {built-in method builtins.sorted}\n", " 1169 0.002 0.000 0.002 0.000 {method 'remove' of 'list' objects}\n", " 11229 0.002 0.000 0.002 0.000 configparser.py:1191(converters)\n", " 11043 0.002 0.000 0.002 0.000 config.py:707(read_only)\n", " 2245 0.001 0.000 0.002 0.000 encoding.py:1(force_bytes)\n", " 12319 0.002 0.000 0.002 0.000 {method 'read' of '_io.BytesIO' objects}\n", " 11227 0.002 0.000 0.002 0.000 base.py:309(bare)\n", " 10425 0.001 0.000 0.001 0.000 diff.py:411(rename_from)\n", " 81 0.000 0.000 0.001 0.000 configparser.py:827(getboolean)\n", " 3492 0.001 0.000 0.001 0.000 subprocess.py:1858(_internal_poll)\n", " 1165 0.001 0.000 0.001 0.000 cmd.py:1058(transform_kwargs)\n", " 1173 0.001 0.000 0.001 0.000 __init__.py:1467(info)\n", " 13548 0.001 0.000 0.001 0.000 {method 'strip' of 'bytes' objects}\n", " 2248 0.001 0.000 0.001 0.000 threading.py:276(_acquire_restore)\n", " 245 0.000 0.000 0.001 0.000 posixpath.py:376(abspath)\n", " 11949 0.001 0.000 0.001 0.000 {method 'end' of 're.Match' objects}\n", " 22613 0.001 0.000 0.001 0.000 {method 'reverse' of 'list' objects}\n", " 1164 0.001 0.000 0.001 0.000 subprocess.py:1846(_handle_exitstatus)\n", " 81 0.000 0.000 0.001 0.000 configparser.py:806(_get_conv)\n", " 1165 0.001 0.000 0.001 0.000 cmd.py:1148()\n", " 2260 0.001 0.000 0.001 0.000 threading.py:267(__exit__)\n", " 81 0.000 0.000 0.001 0.000 configparser.py:803(_get)\n", " 4661 0.001 0.000 0.001 0.000 {method 'items' of 'dict' objects}\n", " 1165 0.001 0.000 0.001 0.000 warnings.py:477(__exit__)\n", " 2015 0.001 0.000 0.001 0.000 :1()\n", " 1165 0.001 0.000 0.001 0.000 __init__.py:1455(debug)\n", " 1209 0.000 0.000 0.001 0.000 cmd.py:180(dashify)\n", " 11633 0.001 0.000 0.001 0.000 {method 'pop' of 'list' objects}\n", " 81 0.000 0.000 0.001 0.000 cmd.py:612(__init__)\n", " 10425 0.001 0.000 0.001 0.000 diff.py:415(rename_to)\n", " 1122 0.001 0.000 0.001 0.000 base.py:90(_set_cache_)\n", " 2338 0.001 0.000 0.001 0.000 __init__.py:1724(isEnabledFor)\n", " 1172 0.001 0.000 0.001 0.000 commit.py:538(author)\n", " 2245 0.001 0.000 0.001 0.000 cmd.py:632(__getattr__)\n", " 2328 0.001 0.000 0.001 0.000 {method 'close' of '_io.BufferedReader' objects}\n", " 11229 0.001 0.000 0.001 0.000 {built-in method builtins.iter}\n", " 10827 0.001 0.000 0.001 0.000 {method 'start' of 're.Match' objects}\n", " 2372 0.001 0.000 0.001 0.000 {method 'pop' of 'collections.deque' objects}\n", " 3 0.000 0.000 0.001 0.000 config.py:659(write)\n", " 11352 0.001 0.000 0.001 0.000 parse.py:103(_noop)\n", " 81 0.000 0.000 0.001 0.000 db.py:37(__init__)\n", " 1325 0.001 0.000 0.001 0.000 {method 'rfind' of 'str' objects}\n", " 1165 0.001 0.000 0.001 0.000 subprocess.py:246(_cleanup)\n", " 2/1 0.000 0.000 0.001 0.001 config.py:117(flush_changes)\n", " 1165 0.001 0.000 0.001 0.000 {method 'rfind' of 'bytes' objects}\n", " 1165 0.001 0.000 0.001 0.000 cmd.py:1156()\n", " 1165 
0.001 0.000 0.001 0.000 contextlib.py:530(__enter__)\n", " 4492 0.001 0.000 0.001 0.000 {built-in method _thread.get_ident}\n", " 41 0.000 0.000 0.001 0.000 subprocess.py:2093(terminate)\n", " 2015 0.001 0.000 0.001 0.000 {method 'find' of 'str' objects}\n", " 245 0.000 0.000 0.001 0.000 genericpath.py:27(isfile)\n", " 1 0.000 0.000 0.001 0.001 base.py:511(config_writer)\n", " 81 0.000 0.000 0.001 0.000 genericpath.py:16(exists)\n", " 81 0.000 0.000 0.001 0.000 loose.py:77(__init__)\n", " 4576 0.001 0.000 0.001 0.000 {method 'release' of '_thread.lock' objects}\n", " 1 0.000 0.000 0.001 0.001 config.py:791(set_value)\n", " 4 0.000 0.000 0.001 0.000 util.py:878(_obtain_lock)\n", " 4 0.000 0.000 0.001 0.000 util.py:856(_obtain_lock_or_raise)\n", " 41 0.001 0.000 0.001 0.000 {method 'close' of '_io.BufferedWriter' objects}\n", " 1 0.001 0.001 0.001 0.001 {built-in method posix.open}\n", " 1165 0.000 0.000 0.000 0.000 cmd.py:1149()\n", " 1172 0.000 0.000 0.000 0.000 git.py:134(get_commit_from_gitpython)\n", " 41 0.000 0.000 0.000 0.000 subprocess.py:2061(send_signal)\n", " 81 0.000 0.000 0.000 0.000 re.py:197(search)\n", " 2252 0.000 0.000 0.000 0.000 {method '__enter__' of '_thread.lock' objects}\n", " 4531 0.000 0.000 0.000 0.000 {method 'insert' of 'list' objects}\n", " 82 0.000 0.000 0.000 0.000 base.py:464(_get_config_path)\n", " 1165 0.000 0.000 0.000 0.000 _collections_abc.py:854(__init__)\n", " 3428 0.000 0.000 0.000 0.000 {method '__exit__' of '_thread.lock' objects}\n", " 2 0.000 0.000 0.000 0.000 pathlib.py:1062(resolve)\n", " 1 0.000 0.000 0.000 0.000 repository.py:240()\n", " 4 0.000 0.000 0.000 0.000 thread.py:161(submit)\n", " 2246 0.000 0.000 0.000 0.000 {method 'discard' of 'set' objects}\n", " 2254 0.000 0.000 0.000 0.000 {method '__exit__' of '_thread.RLock' objects}\n", " 2 0.000 0.000 0.000 0.000 posixpath.py:391(realpath)\n", " 2 0.000 0.000 0.000 0.000 posixpath.py:400(_joinrealpath)\n", " 81 0.000 0.000 0.000 0.000 re.py:288(_compile)\n", " 4 0.000 0.000 0.000 0.000 thread.py:180(_adjust_thread_count)\n", " 81 0.000 0.000 0.000 0.000 _collections_abc.py:820(__contains__)\n", " 407 0.000 0.000 0.000 0.000 posixpath.py:60(isabs)\n", " 41 0.000 0.000 0.000 0.000 {built-in method posix.kill}\n", " 8 0.000 0.000 0.000 0.000 {built-in method posix.lstat}\n", " 3495 0.000 0.000 0.000 0.000 {built-in method _warnings._filters_mutated}\n", " 1 0.000 0.000 0.000 0.000 repository.py:256(_split_in_chunks)\n", " 81 0.000 0.000 0.000 0.000 base.py:113(__init__)\n", " 1 0.000 0.000 0.000 0.000 {built-in method math.ceil}\n", " 3 0.000 0.000 0.000 0.000 config.py:615(_write)\n", " 2015 0.000 0.000 0.000 0.000 parse.py:419(_checknetloc)\n", " 45 0.000 0.000 0.000 0.000 config.py:618(write_section)\n", " 80 0.000 0.000 0.000 0.000 base.py:197(abspath)\n", " 79 0.000 0.000 0.000 0.000 mman.py:408(collect)\n", " 1172 0.000 0.000 0.000 0.000 commit.py:503(__init__)\n", " 243 0.000 0.000 0.000 0.000 posixpath.py:228(expanduser)\n", " 1207 0.000 0.000 0.000 0.000 {method 'update' of 'dict' objects}\n", " 1164 0.000 0.000 0.000 0.000 {built-in method posix.WIFSTOPPED}\n", " 1165 0.000 0.000 0.000 0.000 {built-in method sys.audit}\n", " 46 0.000 0.000 0.000 0.000 cmd.py:1042(transform_kwarg)\n", " 1 0.000 0.000 0.000 0.000 _base.py:636(__exit__)\n", " 1164 0.000 0.000 0.000 0.000 {built-in method posix.waitstatus_to_exitcode}\n", " 1122 0.000 0.000 0.000 0.000 util.py:257(_set_cache_)\n", " 79 0.000 0.000 0.000 0.000 mman.py:303(_collect_lru_region)\n", " 1 0.000 0.000 0.000 0.000 
thread.py:216(shutdown)\n", " 241 0.000 0.000 0.000 0.000 cmd.py:368(is_cygwin)\n", " 1122 0.000 0.000 0.000 0.000 diff.py:86(_process_diff_args)\n", " 243 0.000 0.000 0.000 0.000 posixpath.py:284(expandvars)\n", " 5 0.000 0.000 0.000 0.000 _base.py:201(as_completed)\n", " 2251 0.000 0.000 0.000 0.000 {method 'locked' of '_thread.lock' objects}\n", " 1165 0.000 0.000 0.000 0.000 {method 'pop' of 'dict' objects}\n", " 1 0.000 0.000 0.000 0.000 git.py:92(_discover_main_branch)\n", " 81 0.000 0.000 0.000 0.000 base.py:70(__init__)\n", " 1 0.000 0.000 0.000 0.000 base.py:792(active_branch)\n", " 81 0.000 0.000 0.000 0.000 configparser.py:1163(_convert_to_boolean)\n", " 1 0.000 0.000 0.000 0.000 symbolic.py:288(_get_reference)\n", " 745 0.000 0.000 0.000 0.000 ChangeCounter.ipynb:155(update_elems)\n", " 80 0.000 0.000 0.000 0.000 base.py:266(__ne__)\n", " 45 0.000 0.000 0.000 0.000 config.py:216(items_all)\n", " 1 0.000 0.000 0.000 0.000 thread.py:123(__init__)\n", " 45 0.000 0.000 0.000 0.000 config.py:218()\n", " 727 0.000 0.000 0.000 0.000 {built-in method _stat.S_ISDIR}\n", " 2 0.000 0.000 0.000 0.000 weakref.py:370(remove)\n", " 88 0.000 0.000 0.000 0.000 config.py:786(_value_to_string)\n", " 2 0.000 0.000 0.000 0.000 util.py:883(_release_lock)\n", " 1 0.000 0.000 0.000 0.000 util.py:139(rmfile)\n", " 80 0.000 0.000 0.000 0.000 base.py:261(__eq__)\n", " 4 0.000 0.000 0.000 0.000 threading.py:421(acquire)\n", " 241 0.000 0.000 0.000 0.000 util.py:347(is_cygwin_git)\n", " 1 0.000 0.000 0.000 0.000 {built-in method posix.remove}\n", " 132 0.000 0.000 0.000 0.000 config.py:209(getall)\n", " 6 0.000 0.000 0.000 0.000 _base.py:179(_yield_finished_futures)\n", " 1 0.000 0.000 0.000 0.000 repository.py:44(__init__)\n", " 1 0.000 0.000 0.000 0.000 symbolic.py:685(from_path)\n", " 1 0.000 0.000 0.000 0.000 conf.py:77(sanity_check_filters)\n", " 3 0.000 0.000 0.000 0.000 config.py:212(items)\n", " 3 0.000 0.000 0.000 0.000 config.py:214()\n", " 163 0.000 0.000 0.000 0.000 {built-in method _stat.S_ISREG}\n", " 1 0.000 0.000 0.000 0.000 threading.py:415(__init__)\n", " 2 0.000 0.000 0.000 0.000 threading.py:793(_maintain_shutdown_locks)\n", " 4 0.000 0.000 0.000 0.000 thread.py:47(__init__)\n", " 2 0.000 0.000 0.000 0.000 pathlib.py:1090(stat)\n", " 4 0.000 0.000 0.000 0.000 _base.py:418(result)\n", " 4 0.000 0.000 0.000 0.000 pathlib.py:629(__fspath__)\n", " 88 0.000 0.000 0.000 0.000 encoding.py:11(force_text)\n", " 4 0.000 0.000 0.000 0.000 _base.py:318(__init__)\n", " 1 0.000 0.000 0.000 0.000 _base.py:157(_create_and_install_waiters)\n", " 57 0.000 0.000 0.000 0.000 {method 'find' of 'bytes' objects}\n", " 1 0.000 0.000 0.000 0.000 conf.py:191(build_args)\n", " 57 0.000 0.000 0.000 0.000 {method 'rstrip' of 'bytes' objects}\n", " 1 0.000 0.000 0.000 0.000 _base.py:146(__init__)\n", " 1 0.000 0.000 0.000 0.000 _base.py:79(__init__)\n", " 1 0.000 0.000 0.000 0.000 conf.py:287(_check_timezones)\n", " 79 0.000 0.000 0.000 0.000 {method 'values' of 'dict' objects}\n", " 1 0.000 0.000 0.000 0.000 conf.py:24(__init__)\n", " 1 0.000 0.000 0.000 0.000 conf.py:293(_replace_timezone)\n", " 5 0.000 0.000 0.000 0.000 {method 'put' of '_queue.SimpleQueue' objects}\n", " 81 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass}\n", " 2 0.000 0.000 0.000 0.000 repository.py:148(_is_remote)\n", " 1 0.000 0.000 0.000 0.000 _base.py:63(__init__)\n", " 2 0.000 0.000 0.000 0.000 weakref.py:428(__setitem__)\n", " 7 0.000 0.000 0.000 0.000 conf.py:36(set_value)\n", " 1 0.000 0.000 0.000 0.000 {method 'replace' of 
'datetime.datetime' objects}\n", " 1 0.000 0.000 0.000 0.000 reference.py:46(__init__)\n", " 1 0.000 0.000 0.000 0.000 conf.py:65(_check_only_one_from_commit)\n", " 1 0.000 0.000 0.000 0.000 configparser.py:892(set)\n", " 1 0.000 0.000 0.000 0.000 _base.py:153(__exit__)\n", " 8 0.000 0.000 0.000 0.000 util.py:851(_has_lock)\n", " 1 0.000 0.000 0.000 0.000 git.py:322(__del__)\n", " 1 0.000 0.000 0.000 0.000 contextlib.py:279(helper)\n", " 1 0.000 0.000 0.000 0.000 _base.py:149(__enter__)\n", " 4 0.000 0.000 0.000 0.000 threading.py:90(RLock)\n", " 8 0.000 0.000 0.000 0.000 {method 'partition' of 'str' objects}\n", " 1 0.000 0.000 0.000 0.000 util.py:844(__del__)\n", " 8 0.000 0.000 0.000 0.000 {method '__enter__' of '_thread.RLock' objects}\n", " 2 0.000 0.000 0.000 0.000 threading.py:803()\n", " 2 0.000 0.000 0.000 0.000 conf.py:181(only_one_filter)\n", " 1 0.000 0.000 0.000 0.000 threading.py:572(clear)\n", " 1 0.000 0.000 0.000 0.000 contextlib.py:102(__init__)\n", " 1 0.000 0.000 0.000 0.000 conf.py:71(_check_only_one_to_commit)\n", " 4 0.000 0.000 0.000 0.000 _base.py:388(__get_result)\n", " 3 0.000 0.000 0.000 0.000 config.py:699(_assure_writable)\n", " 1 0.000 0.000 0.000 0.000 reference.py:100(name)\n", " 1 0.000 0.000 0.000 0.000 conf.py:54(_sanity_check_repos)\n", " 4 0.000 0.000 0.000 0.000 _base.py:225()\n", " 2 0.000 0.000 0.000 0.000 {method 'difference_update' of 'set' objects}\n", " 5 0.000 0.000 0.000 0.000 {method 'remove' of 'set' objects}\n", " 2 0.000 0.000 0.000 0.000 util.py:847(_lock_file_path)\n", " 3 0.000 0.000 0.000 0.000 git.py:63(repo)\n", " 4 0.000 0.000 0.000 0.000 {built-in method time.monotonic}\n", " 1 0.000 0.000 0.000 0.000 pathlib.py:708(name)\n", " 8 0.000 0.000 0.000 0.000 {built-in method _stat.S_ISLNK}\n", " 4 0.000 0.000 0.000 0.000 {method 'lstrip' of 'str' objects}\n", " 1 0.000 0.000 0.000 0.000 conf.py:114(_check_correct_filters_order)\n", " 2 0.000 0.000 0.000 0.000 pathlib.py:1430(expanduser)\n", " 1 0.000 0.000 0.000 0.000 conf.py:142(get_starting_commit)\n", " 1 0.000 0.000 0.000 0.000 conf.py:165(get_ending_commit)\n", " 2 0.000 0.000 0.000 0.000 conf.py:189()\n", " 4 0.000 0.000 0.000 0.000 {method 'acquire' of '_thread.RLock' objects}\n", " 1 0.000 0.000 0.000 0.000 util.py:840(__init__)\n", " 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}\n", " 2 0.000 0.000 0.000 0.000 {method 'remove' of 'collections.deque' objects}\n", " 1 0.000 0.000 0.000 0.000 configparser.py:663(has_section)\n", " 4 0.000 0.000 0.000 0.000 {method 'release' of '_thread.RLock' objects}\n", " 1 0.000 0.000 0.000 0.000 __init__.py:230(utcoffset)\n", " 1 0.000 0.000 0.000 0.000 _base.py:633(__enter__)\n", " 1 0.000 0.000 0.000 0.000 configparser.py:366(before_set)\n", "\n", "\n" ] } ], "source": [ "cProfile.run('debuggingbook_change_counter(ChangeCounter)', sort='cumulative')" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Yes, that's an awful lot of functions, but we can quickly narrow things down. The `cumtime` column is sorted by largest values first. We see that the `debuggingbook_change_counter()` method at the top takes up all the time – but this is not surprising, since it is the method we called in the first place. This calls a method `mine()` in the `ChangeCounter` class, which does all the work." 
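, "\n", "\n", "Incidentally, if the full list is overwhelming, the standard [pstats](https://docs.python.org/3/library/profile.html#module-pstats) module can restrict the report to the most expensive entries. Here is a minimal sketch (the file name `change_counter.prof` is just a name we make up for this example):\n", "\n", "```python\n", "import cProfile\n", "import pstats\n", "\n", "# Save the raw profiling data to a file instead of printing the full report\n", "cProfile.run('debuggingbook_change_counter(ChangeCounter)', 'change_counter.prof')\n", "\n", "# Load the data and show only the ten entries with the highest cumulative time\n", "stats = pstats.Stats('change_counter.prof')\n", "stats.sort_stats('cumulative').print_stats(10)\n", "```"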
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The next places are more interesting: almost all time is spent in a single method, named `modifications()`. This method determines the difference between two versions, which is an expensive operation; this is also supported by the observation that half of the time is spent in a `diff()` method." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "This profile thus already gets us a hint on how to improve performance: Rather than computing the diff between versions for _every_ version, we could do so _on demand_ (and possibly cache results so we don't have to compute them twice). Alas, this (slow) functionality is part of the \n", "underlying [PyDriller](https://pydriller.readthedocs.io/) Python package, so we cannot fix this within the `ChangeCounter` class. But we could file a bug with the developers, suggesting a patch to improve performance." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Sampling Execution Profiles\n", "\n", "Instrumenting code is _precise_, but it is also _slow_. An alternate way to measure performance is to _sample_ in regular intervals which functions are currently active – for instance, by examining the current function call stack. The more frequently a function is sampled as active, the more time is spent in that function." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "One profiler for Python that implements such sampling is [Scalene](https://github.com/plasma-umass/scalene) – a high-performance, high-precision CPU, GPU, and memory profiler for Python. We can invoke it on our example as follows:\n", "\n", "```sh\n", "$ scalene --html test.py > scalene-out.html\n", "```" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "where `test.py` is a script that again invokes\n", "\n", "```python\n", "debuggingbook_change_counter(ChangeCounter)\n", "```" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The output of `scalene` is sent to a HTML file (here, `scalene-out.html`) which is organized by _lines_ – that is, for each line, we see how much it contributed to overall execution time. Opening the output `scalene-out.html` in a HTML browser, we see these lines:" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "![](PICS/scalene-out.png)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "As with `cProfile`, above, we identify the `mine()` method in the `ChangeCounter` class as the main performance hog – and in the `mine()` method, it is the iteration over all modifications that takes all the time. Adding the option `--profile-all` to `scalene` would extend the profile to all executed code, including the `pydriller` third-party library." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Besides relying on sampling rather that tracing (which is more efficient) and breaking down execution time by line, `scalene` also provides additional information on memory usage and more. If `cProfile` is not sufficient, then `scalene` will bring profiling to the next level." 
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Improving Performance" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Identifying a culprit is not always that easy. Notably, when the first set of obvious performance hogs is fixed, it becomes more and more difficult to squeeze out additional performance – and, as stated above, such optimization may be in conflict with readability and maintainability of your code. Here are some simple ways to improve performance:" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* **Efficient algorithms**. For many tasks, the simplest algorithm is not always the best performing one. Consider alternatives that may be more efficient, and _measure_ whether they pay off.\n", "\n", "* **Efficient data types**. Remember that certain operations, such as looking up whether an element is contained, may take different amounts of time depending on the data structure. In Python, a query like `x in xs` takes (mostly) constant time if `xs` is a set, but linear time if `xs` is a list; these differences become significant as the size of `xs` grows.\n", "\n", "* **Efficient modules**. In Python, most frequently used modules (or at least parts of) are implemented in C, which is way more efficient than plain Python. Rely on existing modules whenever possible. Or implement your own, _after_ having measured that this may pay off." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "These are all things you can already use during programming – and also set up your code such that exchanging, say, one data type by another will still be possible later. This is best achieved by hiding implementation details (such as the used data types) behind an abstract interface used by your clients." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "But beyond these points, remember the famous words by [Donald J. Knuth](https://en.wikipedia.org/wiki/Donald_Knuth):" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.441776Z", "iopub.status.busy": "2023-11-12T12:47:16.441450Z", "iopub.status.idle": "2023-11-12T12:47:16.444050Z", "shell.execute_reply": "2023-11-12T12:47:16.443628Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "from bookutils import quiz" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.447028Z", "iopub.status.busy": "2023-11-12T12:47:16.446894Z", "iopub.status.idle": "2023-11-12T12:47:16.452803Z", "shell.execute_reply": "2023-11-12T12:47:16.452406Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", " \n", "
\n", "

Quiz

\n", "

\n", "

Donald J. Knuth said: \"Premature optimization...\"
\n", "

\n", "

\n", "

\n", " \n", " \n", "
\n", " \n", " \n", "
\n", " \n", " \n", "
\n", " \n", " \n", "
\n", " \n", "
\n", "

\n", " \n", " \n", "
\n", " " ], "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "quiz('Donald J. Knuth said: \"Premature optimization...\"',\n", " [\n", " \"... is the root of all evil\",\n", " \"... requires lots of experience\",\n", " \"... should be left to assembly programmers\",\n", " \"... is the reason why TeX is so fast\",\n", " ], 'len(\"METAFONT\") - len(\"TeX\") - len(\"CWEB\")')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "This quote should always remind us that after a good design, you should always _first_ measure and _then_ optimize." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Building a Profiler\n", "\n", "Having discussed profilers from a _user_ perspective, let us now dive into how they are actually implemented. It turns out we can use most of our existing infrastructure to implement a simple tracing profiler with only a few lines of code." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The program we will apply our profiler on is – surprise! – our ongoing example, `remove_html_markup()`. Our aim is to understand how much time is spent _in each line of the code_ (such that we have a new feature on top of Python `cProfile`)." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.456457Z", "iopub.status.busy": "2023-11-12T12:47:16.456294Z", "iopub.status.idle": "2023-11-12T12:47:16.458112Z", "shell.execute_reply": "2023-11-12T12:47:16.457747Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "from Intro_Debugging import remove_html_markup" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.460084Z", "iopub.status.busy": "2023-11-12T12:47:16.459849Z", "iopub.status.idle": "2023-11-12T12:47:16.461827Z", "shell.execute_reply": "2023-11-12T12:47:16.461474Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# ignore\n", "from typing import Any, Optional, Type, Dict, Tuple, List" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.463724Z", "iopub.status.busy": "2023-11-12T12:47:16.463511Z", "iopub.status.idle": "2023-11-12T12:47:16.465330Z", "shell.execute_reply": "2023-11-12T12:47:16.464976Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# ignore\n", "from bookutils import print_content" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.467484Z", "iopub.status.busy": "2023-11-12T12:47:16.467288Z", "iopub.status.idle": "2023-11-12T12:47:16.468977Z", "shell.execute_reply": "2023-11-12T12:47:16.468663Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "# ignore\n", "import inspect" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.471194Z", "iopub.status.busy": "2023-11-12T12:47:16.470993Z", "iopub.status.idle": "2023-11-12T12:47:16.599346Z", "shell.execute_reply": "2023-11-12T12:47:16.599057Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "238 \u001b[34mdef\u001b[39;49;00m \u001b[32mremove_html_markup\u001b[39;49;00m(s): \u001b[37m# 
type: ignore\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n", "239 tag = \u001b[34mFalse\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n", "240 quote = \u001b[34mFalse\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n", "241 out = \u001b[33m\"\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n", "242 \u001b[37m\u001b[39;49;00m\n", "243 \u001b[34mfor\u001b[39;49;00m c \u001b[35min\u001b[39;49;00m s:\u001b[37m\u001b[39;49;00m\n", "244 \u001b[34massert\u001b[39;49;00m tag \u001b[35mor\u001b[39;49;00m \u001b[35mnot\u001b[39;49;00m quote\u001b[37m\u001b[39;49;00m\n", "245 \u001b[37m\u001b[39;49;00m\n", "246 \u001b[34mif\u001b[39;49;00m c == \u001b[33m'\u001b[39;49;00m\u001b[33m<\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m \u001b[35mand\u001b[39;49;00m \u001b[35mnot\u001b[39;49;00m quote:\u001b[37m\u001b[39;49;00m\n", "247 tag = \u001b[34mTrue\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n", "248 \u001b[34melif\u001b[39;49;00m c == \u001b[33m'\u001b[39;49;00m\u001b[33m>\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m \u001b[35mand\u001b[39;49;00m \u001b[35mnot\u001b[39;49;00m quote:\u001b[37m\u001b[39;49;00m\n", "249 tag = \u001b[34mFalse\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n", "250 \u001b[34melif\u001b[39;49;00m (c == \u001b[33m'\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m \u001b[35mor\u001b[39;49;00m c == \u001b[33m\"\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m) \u001b[35mand\u001b[39;49;00m tag:\u001b[37m\u001b[39;49;00m\n", "251 quote = \u001b[35mnot\u001b[39;49;00m quote\u001b[37m\u001b[39;49;00m\n", "252 \u001b[34melif\u001b[39;49;00m \u001b[35mnot\u001b[39;49;00m tag:\u001b[37m\u001b[39;49;00m\n", "253 out = out + c\u001b[37m\u001b[39;49;00m\n", "254 \u001b[37m\u001b[39;49;00m\n", "255 \u001b[34mreturn\u001b[39;49;00m out\u001b[37m\u001b[39;49;00m" ] } ], "source": [ "print_content(inspect.getsource(remove_html_markup), '.py',\n", " start_line_number=238)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We introduce a class `PerformanceTracer` that tracks, for each line in the code:\n", "\n", "* how _often_ it was executed (`hits`), and\n", "* _how much time_ was spent during its execution (`time`).\n", "\n", "To this end, we make use of our `Timer` class, which measures time, and the `Tracer` class from [the chapter on tracing](Tracer.ipynb), which allows us to track every line of the program as it is being executed." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.601323Z", "iopub.status.busy": "2023-11-12T12:47:16.601197Z", "iopub.status.idle": "2023-11-12T12:47:16.603059Z", "shell.execute_reply": "2023-11-12T12:47:16.602760Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "from Tracer import Tracer" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "In `PerformanceTracker`, the attributes `hits` and `time` are mappings indexed by unique locations – that is, pairs of function name and line number." 
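, "\n", "\n", "For illustration, such a location and its associated counters could look as follows (the concrete values are made up, not actual measurements):\n", "\n", "```python\n", "location = ('remove_html_markup', 243)  # (function name, line number)\n", "hits = {location: 11}       # line 243 was executed 11 times\n", "time = {location: 2.5e-06}  # total seconds spent on line 243 (made-up value)\n", "```"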
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.604968Z", "iopub.status.busy": "2023-11-12T12:47:16.604843Z", "iopub.status.idle": "2023-11-12T12:47:16.606545Z", "shell.execute_reply": "2023-11-12T12:47:16.606284Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "Location = Tuple[str, int]" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.608343Z", "iopub.status.busy": "2023-11-12T12:47:16.608220Z", "iopub.status.idle": "2023-11-12T12:47:16.610581Z", "shell.execute_reply": "2023-11-12T12:47:16.610322Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class PerformanceTracer(Tracer):\n", " \"\"\"Trace time and #hits for individual program lines\"\"\"\n", "\n", " def __init__(self) -> None:\n", " \"\"\"Constructor.\"\"\"\n", " super().__init__()\n", " self.reset_timer()\n", " self.hits: Dict[Location, int] = {}\n", " self.time: Dict[Location, float] = {}\n", "\n", " def reset_timer(self) -> None:\n", " self.timer = Timer.Timer()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "As common in this book, we want to use `PerformanceTracer` in a `with`-block around the function call(s) to be tracked:\n", "\n", "```python\n", "with PerformanceTracer() as perf_tracer:\n", " function(...)\n", "```" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "When entering the `with` block (`__enter__()`), we reset all timers. Also, coming from the `__enter__()` method of the superclass `Tracer`, we enable tracing through the `traceit()` method." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.612120Z", "iopub.status.busy": "2023-11-12T12:47:16.611999Z", "iopub.status.idle": "2023-11-12T12:47:16.613897Z", "shell.execute_reply": "2023-11-12T12:47:16.613623Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "from types import FrameType" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.615798Z", "iopub.status.busy": "2023-11-12T12:47:16.615694Z", "iopub.status.idle": "2023-11-12T12:47:16.617601Z", "shell.execute_reply": "2023-11-12T12:47:16.617336Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class PerformanceTracer(PerformanceTracer):\n", " def __enter__(self) -> Any:\n", " \"\"\"Enter a `with` block.\"\"\"\n", " super().__enter__()\n", " self.reset_timer()\n", " return self" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The `traceit()` method extracts the current location. It increases the corresponding `hits` value by 1, and adds the elapsed time to the corresponding `time`." 
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.619024Z", "iopub.status.busy": "2023-11-12T12:47:16.618913Z", "iopub.status.idle": "2023-11-12T12:47:16.621288Z", "shell.execute_reply": "2023-11-12T12:47:16.621017Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class PerformanceTracer(PerformanceTracer):\n", " def traceit(self, frame: FrameType, event: str, arg: Any) -> None:\n", " \"\"\"Tracing function; called for every line.\"\"\"\n", " t = self.timer.elapsed_time()\n", " location = (frame.f_code.co_name, frame.f_lineno)\n", "\n", " self.hits.setdefault(location, 0)\n", " self.time.setdefault(location, 0.0)\n", " self.hits[location] += 1\n", " self.time[location] += t\n", "\n", " self.reset_timer()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "This is it already. We can now determine where most time is spent in `remove_html_markup()`. We invoke it 10,000 times such that we can average over runs:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:16.623432Z", "iopub.status.busy": "2023-11-12T12:47:16.623307Z", "iopub.status.idle": "2023-11-12T12:47:17.553303Z", "shell.execute_reply": "2023-11-12T12:47:17.552936Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "with PerformanceTracer() as perf_tracer:\n", " for i in range(10000):\n", " s = remove_html_markup('foo')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Here are the hits. For every line executed, we see how often it was executed. The most executed line is the `for` loop with 110,000 hits – once for each of the 10 characters in `foo`, once for the final check, and all of this 10,000 times." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.556981Z", "iopub.status.busy": "2023-11-12T12:47:17.556819Z", "iopub.status.idle": "2023-11-12T12:47:17.559901Z", "shell.execute_reply": "2023-11-12T12:47:17.559523Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "{('__init__', 17): 1,\n", " ('__init__', 19): 1,\n", " ('clock', 8): 1,\n", " ('clock', 12): 2,\n", " ('__init__', 20): 2,\n", " ('remove_html_markup', 238): 10000,\n", " ('remove_html_markup', 239): 10000,\n", " ('remove_html_markup', 240): 10000,\n", " ('remove_html_markup', 241): 10000,\n", " ('remove_html_markup', 243): 110000,\n", " ('remove_html_markup', 244): 100000,\n", " ('remove_html_markup', 246): 100000,\n", " ('remove_html_markup', 247): 20000,\n", " ('remove_html_markup', 248): 80000,\n", " ('remove_html_markup', 250): 60000,\n", " ('remove_html_markup', 252): 60000,\n", " ('remove_html_markup', 249): 20000,\n", " ('remove_html_markup', 253): 30000,\n", " ('remove_html_markup', 255): 20000}" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "perf_tracer.hits" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The `time` attribute collects how much time was spent in each line. Within the loop, again, the `for` statement takes the most time. The other lines show some variability, though." 
] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.562702Z", "iopub.status.busy": "2023-11-12T12:47:17.562563Z", "iopub.status.idle": "2023-11-12T12:47:17.565494Z", "shell.execute_reply": "2023-11-12T12:47:17.565148Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "{('__init__', 17): 2.2833000002719928e-05,\n", " ('__init__', 19): 1.2500004231696948e-06,\n", " ('clock', 8): 1.0000003385357559e-06,\n", " ('clock', 12): 1.7500005924375728e-06,\n", " ('__init__', 20): 1.8329992599319667e-06,\n", " ('remove_html_markup', 238): 0.012149665964898304,\n", " ('remove_html_markup', 239): 0.01087509404260345,\n", " ('remove_html_markup', 240): 0.010530847047448333,\n", " ('remove_html_markup', 241): 0.010054877981019672,\n", " ('remove_html_markup', 243): 0.09611047714861343,\n", " ('remove_html_markup', 244): 0.0851450361078605,\n", " ('remove_html_markup', 246): 0.08416355099780048,\n", " ('remove_html_markup', 247): 0.01722242398227536,\n", " ('remove_html_markup', 248): 0.06797252500382456,\n", " ('remove_html_markup', 250): 0.050401491005686694,\n", " ('remove_html_markup', 252): 0.05263906708023569,\n", " ('remove_html_markup', 249): 0.017042534977917967,\n", " ('remove_html_markup', 253): 0.025152493912173668,\n", " ('remove_html_markup', 255): 0.016864611041455646}" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "perf_tracer.time" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "For a full profiler, these numbers would now be sorted and printed in a table, much like `cProfile` does. However, we will borrow some material from previous chapters and annotate our code accordingly." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Visualizing Performance Metrics\n", "\n", "In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have encountered the `CoverageCollector` class, which collects line and function coverage during execution, using a `collect()` method that is invoked for every line. We will repurpose this class to collect arbitrary _metrics_ on the lines executed, notably time taken." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Collecting Time Spent" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.567591Z", "iopub.status.busy": "2023-11-12T12:47:17.567459Z", "iopub.status.idle": "2023-11-12T12:47:17.569262Z", "shell.execute_reply": "2023-11-12T12:47:17.568978Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "from StatisticalDebugger import CoverageCollector, SpectrumDebugger" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The `MetricCollector` class is an abstract superclass that provides an interface to access a particular metric." 
] }, { "cell_type": "code", "execution_count": 27, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.571062Z", "iopub.status.busy": "2023-11-12T12:47:17.570948Z", "iopub.status.idle": "2023-11-12T12:47:17.573258Z", "shell.execute_reply": "2023-11-12T12:47:17.572948Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "class MetricCollector(CoverageCollector):\n", " \"\"\"Abstract superclass for collecting line-specific metrics\"\"\"\n", "\n", " def metric(self, event: Any) -> Optional[float]:\n", " \"\"\"Return a metric for an event, or none.\"\"\"\n", " return None\n", "\n", " def all_metrics(self, func: str) -> List[float]:\n", " \"\"\"Return all metric for a function `func`.\"\"\"\n", " return []" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Given these metrics, we can also compute sums and maxima for a single function." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.575160Z", "iopub.status.busy": "2023-11-12T12:47:17.574949Z", "iopub.status.idle": "2023-11-12T12:47:17.577289Z", "shell.execute_reply": "2023-11-12T12:47:17.577001Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class MetricCollector(MetricCollector):\n", " def total(self, func: str) -> float:\n", " return sum(self.all_metrics(func))\n", "\n", " def maximum(self, func: str) -> float:\n", " return max(self.all_metrics(func))" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Let us instantiate this superclass into `TimeCollector` – a subclass that measures time. This is modeled after our `PerformanceTracer` class, above; notably, the `time` attribute serves the same role." ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.578884Z", "iopub.status.busy": "2023-11-12T12:47:17.578770Z", "iopub.status.idle": "2023-11-12T12:47:17.582110Z", "shell.execute_reply": "2023-11-12T12:47:17.581788Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class TimeCollector(MetricCollector):\n", " \"\"\"Collect time executed for each line\"\"\"\n", "\n", " def __init__(self) -> None:\n", " \"\"\"Constructor\"\"\"\n", " super().__init__()\n", " self.reset_timer()\n", " self.time: Dict[Location, float] = {}\n", " self.add_items_to_ignore([Timer.Timer, Timer.clock])\n", "\n", " def collect(self, frame: FrameType, event: str, arg: Any) -> None:\n", " \"\"\"Invoked for every line executed. 
Accumulate time spent.\"\"\"\n", " t = self.timer.elapsed_time()\n", " super().collect(frame, event, arg)\n", " location = (frame.f_code.co_name, frame.f_lineno)\n", "\n", " self.time.setdefault(location, 0.0)\n", " self.time[location] += t\n", "\n", " self.reset_timer()\n", "\n", " def reset_timer(self) -> None:\n", " self.timer = Timer.Timer()\n", "\n", " def __enter__(self) -> Any:\n", " super().__enter__()\n", " self.reset_timer()\n", " return self" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The `metric()` and `all_metrics()` methods accumulate the metric (time taken) for an individual function:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.583741Z", "iopub.status.busy": "2023-11-12T12:47:17.583630Z", "iopub.status.idle": "2023-11-12T12:47:17.586394Z", "shell.execute_reply": "2023-11-12T12:47:17.586088Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "class TimeCollector(TimeCollector):\n", " def metric(self, location: Any) -> Optional[float]:\n", " if location in self.time:\n", " return self.time[location]\n", " else:\n", " return None\n", "\n", " def all_metrics(self, func: str) -> List[float]:\n", " return [time\n", " for (func_name, lineno), time in self.time.items()\n", " if func_name == func]" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Here's how to use `TimeCollector()` – again, in a `with` block:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.588577Z", "iopub.status.busy": "2023-11-12T12:47:17.588426Z", "iopub.status.idle": "2023-11-12T12:47:17.607001Z", "shell.execute_reply": "2023-11-12T12:47:17.606683Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "with TimeCollector() as collector:\n", " for i in range(100):\n", " s = remove_html_markup('foo')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The `time` attribute holds the time spent in each line:" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.608880Z", "iopub.status.busy": "2023-11-12T12:47:17.608744Z", "iopub.status.idle": "2023-11-12T12:47:17.611079Z", "shell.execute_reply": "2023-11-12T12:47:17.610740Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('remove_html_markup', 238) 0.00021604699941235594\n", "('remove_html_markup', 239) 0.00018649800040293485\n", "('remove_html_markup', 240) 0.00017612200008443324\n", "('remove_html_markup', 241) 0.00017058100183930947\n", "('remove_html_markup', 243) 0.001600210991455242\n", "('remove_html_markup', 244) 0.0015725439980087685\n", "('remove_html_markup', 246) 0.0014476009782811161\n", "('remove_html_markup', 247) 0.0002855489965440938\n", "('remove_html_markup', 248) 0.0011488629961604602\n", "('remove_html_markup', 250) 0.0008527550062353839\n", "('remove_html_markup', 252) 0.0008805919842416188\n", "('remove_html_markup', 249) 0.0002848419944712077\n", "('remove_html_markup', 253) 0.0004203369908282184\n", "('remove_html_markup', 255) 0.0002832869949997985\n" ] } ], "source": [ "for location, time_spent in collector.time.items():\n", " print(location, time_spent)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": 
"subslide" } }, "source": [ "And we can also create a total for an entire function:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.613253Z", "iopub.status.busy": "2023-11-12T12:47:17.613094Z", "iopub.status.idle": "2023-11-12T12:47:17.616126Z", "shell.execute_reply": "2023-11-12T12:47:17.615843Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "0.009525828932964941" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "collector.total('remove_html_markup')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Visualizing Time Spent\n", "\n", "Let us now go and visualize these numbers in a simple form. The idea is to assign each line a _color_ whose saturation indicates the time spent in that line relative to the time spent in the function overall – the higher the fraction, the darker the line. We create a `MetricDebugger` class built as a specialization of `SpectrumDebugger`, in which `suspiciousness()` and `color()` are repurposed to show these metrics." ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.619656Z", "iopub.status.busy": "2023-11-12T12:47:17.619450Z", "iopub.status.idle": "2023-11-12T12:47:17.623766Z", "shell.execute_reply": "2023-11-12T12:47:17.623489Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class MetricDebugger(SpectrumDebugger):\n", " \"\"\"Visualize a metric\"\"\"\n", "\n", " def metric(self, location: Location) -> float:\n", " sum = 0.0\n", " for outcome in self.collectors:\n", " for collector in self.collectors[outcome]:\n", " assert isinstance(collector, MetricCollector)\n", " m = collector.metric(location)\n", " if m is not None:\n", " sum += m # type: ignore\n", "\n", " return sum\n", "\n", " def total(self, func_name: str) -> float:\n", " total = 0.0\n", " for outcome in self.collectors:\n", " for collector in self.collectors[outcome]:\n", " assert isinstance(collector, MetricCollector)\n", " total += sum(collector.all_metrics(func_name))\n", "\n", " return total\n", "\n", " def maximum(self, func_name: str) -> float:\n", " maximum = 0.0\n", " for outcome in self.collectors:\n", " for collector in self.collectors[outcome]:\n", " assert isinstance(collector, MetricCollector)\n", " maximum = max(maximum, \n", " max(collector.all_metrics(func_name)))\n", "\n", " return maximum\n", "\n", " def suspiciousness(self, location: Location) -> float:\n", " func_name, _ = location\n", " return self.metric(location) / self.total(func_name)\n", "\n", " def color(self, location: Location) -> str:\n", " func_name, _ = location\n", " hue = 240 # blue\n", " saturation = 100 # fully saturated\n", " darkness = self.metric(location) / self.maximum(func_name)\n", " lightness = 100 - darkness * 25\n", " return f\"hsl({hue}, {saturation}%, {lightness}%)\"\n", "\n", " def tooltip(self, location: Location) -> str:\n", " return f\"{super().tooltip(location)} {self.metric(location)}\"" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can now introduce `PerformanceDebugger` as a subclass of `MetricDebugger`, using an arbitrary `MetricCollector` (such as `TimeCollector`) to obtain the metric we want to visualize." 
] }, { "cell_type": "code", "execution_count": 35, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.625421Z", "iopub.status.busy": "2023-11-12T12:47:17.625269Z", "iopub.status.idle": "2023-11-12T12:47:17.627559Z", "shell.execute_reply": "2023-11-12T12:47:17.627179Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "class PerformanceDebugger(MetricDebugger):\n", " \"\"\"Collect and visualize a metric\"\"\"\n", "\n", " def __init__(self, collector_class: Type, log: bool = False):\n", " assert issubclass(collector_class, MetricCollector)\n", " super().__init__(collector_class, log=log)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "With `PerformanceDebugger`, we inherit all the capabilities of `SpectrumDebugger`, such as showing the (relative) percentage of time spent in a table. We see that the `for` condition and the following `assert` take most of the time, followed by the first condition." ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.630332Z", "iopub.status.busy": "2023-11-12T12:47:17.630097Z", "iopub.status.idle": "2023-11-12T12:47:17.651340Z", "shell.execute_reply": "2023-11-12T12:47:17.650937Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "with PerformanceDebugger(TimeCollector) as debugger:\n", " for i in range(100):\n", " s = remove_html_markup('foo')" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.653483Z", "iopub.status.busy": "2023-11-12T12:47:17.653359Z", "iopub.status.idle": "2023-11-12T12:47:17.655974Z", "shell.execute_reply": "2023-11-12T12:47:17.655684Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 238 2% def remove_html_markup(s): # type: ignore\n", " 239 1% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 15% if c == '<' and not quote:\n", " 247 3% tag = True\n", " 248 12% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 9% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 4% out = out + c\n", " 254 0%\n", " 255 3% return out\n", "\n" ] } ], "source": [ "print(debugger)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "However, we can also visualize these percentages, using shades of blue to indicate those lines most time spent in:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.657980Z", "iopub.status.busy": "2023-11-12T12:47:17.657762Z", "iopub.status.idle": "2023-11-12T12:47:17.661426Z", "shell.execute_reply": "2023-11-12T12:47:17.661078Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/html": [ "
 238 def remove_html_markup(s):  # type: ignore
\n", "
 239     tag = False
\n", "
 240     quote = False
\n", "
 241     out = ""
\n", "
 242  
\n", "
 243     for c in s:
\n", "
 244         assert tag or not quote
\n", "
 245  
\n", "
 246         if c == '<' and not quote:
\n", "
 247             tag = True
\n", "
 248         elif c == '>' and not quote:
\n", "
 249             tag = False
\n", "
 250         elif (c == '"' or c == "'") and tag:
\n", "
 251             quote = not quote
\n", "
 252         elif not tag:
\n", "
 253             out = out + c
\n", "
 254  
\n", "
 255     return out
\n" ], "text/markdown": [ "| `remove_html_markup` | `s='foo'` | \n", "| ---------------------- | ---- | \n", "| remove_html_markup:238 | X | \n", "| remove_html_markup:239 | X | \n", "| remove_html_markup:240 | X | \n", "| remove_html_markup:241 | X | \n", "| remove_html_markup:243 | X | \n", "| remove_html_markup:244 | X | \n", "| remove_html_markup:246 | X | \n", "| remove_html_markup:247 | X | \n", "| remove_html_markup:248 | X | \n", "| remove_html_markup:249 | X | \n", "| remove_html_markup:250 | X | \n", "| remove_html_markup:252 | X | \n", "| remove_html_markup:253 | X | \n", "| remove_html_markup:255 | X | \n" ], "text/plain": [ " 238 2% def remove_html_markup(s): # type: ignore\n", " 239 1% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 15% if c == '<' and not quote:\n", " 247 3% tag = True\n", " 248 12% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 9% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 4% out = out + c\n", " 254 0%\n", " 255 3% return out" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "debugger" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Other Metrics\n", "\n", "Our framework is flexible enough to collect (and visualize) arbitrary metrics. This `HitCollector` class, for instance, collects how often a line is being executed." ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.663818Z", "iopub.status.busy": "2023-11-12T12:47:17.663667Z", "iopub.status.idle": "2023-11-12T12:47:17.667108Z", "shell.execute_reply": "2023-11-12T12:47:17.666799Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "class HitCollector(MetricCollector):\n", " \"\"\"Collect how often a line is executed\"\"\"\n", "\n", " def __init__(self) -> None:\n", " super().__init__()\n", " self.hits: Dict[Location, int] = {}\n", "\n", " def collect(self, frame: FrameType, event: str, arg: Any) -> None:\n", " super().collect(frame, event, arg)\n", " location = (frame.f_code.co_name, frame.f_lineno)\n", "\n", " self.hits.setdefault(location, 0)\n", " self.hits[location] += 1\n", "\n", " def metric(self, location: Location) -> Optional[int]:\n", " if location in self.hits:\n", " return self.hits[location]\n", " else:\n", " return None\n", "\n", " def all_metrics(self, func: str) -> List[float]:\n", " return [hits\n", " for (func_name, lineno), hits in self.hits.items()\n", " if func_name == func]" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can plug in this class into `PerformanceDebugger` to obtain a distribution of lines executed:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.669242Z", "iopub.status.busy": "2023-11-12T12:47:17.669125Z", "iopub.status.idle": "2023-11-12T12:47:17.683378Z", "shell.execute_reply": "2023-11-12T12:47:17.683014Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "with PerformanceDebugger(HitCollector) as debugger:\n", " for i in range(100):\n", " s = remove_html_markup('foo')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "In total, during this call to 
`remove_html_markup()`, there are 6,400 lines executed:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.685232Z", "iopub.status.busy": "2023-11-12T12:47:17.685109Z", "iopub.status.idle": "2023-11-12T12:47:17.687754Z", "shell.execute_reply": "2023-11-12T12:47:17.687436Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "6400.0" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "debugger.total('remove_html_markup')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Again, we can visualize the distribution as a table and using colors. We can see how the shade gets lighter in the lower part of the loop as individual conditions have been met." ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.689640Z", "iopub.status.busy": "2023-11-12T12:47:17.689501Z", "iopub.status.idle": "2023-11-12T12:47:17.691884Z", "shell.execute_reply": "2023-11-12T12:47:17.691582Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 238 1% def remove_html_markup(s): # type: ignore\n", " 239 1% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 15% if c == '<' and not quote:\n", " 247 3% tag = True\n", " 248 12% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 9% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 4% out = out + c\n", " 254 0%\n", " 255 3% return out\n", "\n" ] } ], "source": [ "print(debugger)" ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.693739Z", "iopub.status.busy": "2023-11-12T12:47:17.693590Z", "iopub.status.idle": "2023-11-12T12:47:17.697061Z", "shell.execute_reply": "2023-11-12T12:47:17.696723Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/html": [ "
 238 def remove_html_markup(s):  # type: ignore
\n", "
 239     tag = False
\n", "
 240     quote = False
\n", "
 241     out = ""
\n", "
 242  
\n", "
 243     for c in s:
\n", "
 244         assert tag or not quote
\n", "
 245  
\n", "
 246         if c == '<' and not quote:
\n", "
 247             tag = True
\n", "
 248         elif c == '>' and not quote:
\n", "
 249             tag = False
\n", "
 250         elif (c == '"' or c == "'") and tag:
\n", "
 251             quote = not quote
\n", "
 252         elif not tag:
\n", "
 253             out = out + c
\n", "
 254  
\n", "
 255     return out
\n" ], "text/markdown": [ "| `remove_html_markup` | `s='foo'` | \n", "| ---------------------- | ---- | \n", "| remove_html_markup:238 | X | \n", "| remove_html_markup:239 | X | \n", "| remove_html_markup:240 | X | \n", "| remove_html_markup:241 | X | \n", "| remove_html_markup:243 | X | \n", "| remove_html_markup:244 | X | \n", "| remove_html_markup:246 | X | \n", "| remove_html_markup:247 | X | \n", "| remove_html_markup:248 | X | \n", "| remove_html_markup:249 | X | \n", "| remove_html_markup:250 | X | \n", "| remove_html_markup:252 | X | \n", "| remove_html_markup:253 | X | \n", "| remove_html_markup:255 | X | \n" ], "text/plain": [ " 238 1% def remove_html_markup(s): # type: ignore\n", " 239 1% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 15% if c == '<' and not quote:\n", " 247 3% tag = True\n", " 248 12% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 9% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 4% out = out + c\n", " 254 0%\n", " 255 3% return out" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "debugger" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Integrating with Delta Debugging\n", "\n", "Besides identifying causes for performance issues in the code, one may also search for causes in the _input_, using [Delta Debugging](DeltaDebugger.ipynb). This can be useful if one does not immediately want to embark into investigating the code, but maybe first determine external influences that are related to performance issues." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Here is a variant of `remove_html_markup()` that introduces a (rather obvious) performance issue." 
] }, { "cell_type": "code", "execution_count": 44, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.699735Z", "iopub.status.busy": "2023-11-12T12:47:17.699581Z", "iopub.status.idle": "2023-11-12T12:47:17.701637Z", "shell.execute_reply": "2023-11-12T12:47:17.701313Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import time" ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.703490Z", "iopub.status.busy": "2023-11-12T12:47:17.703339Z", "iopub.status.idle": "2023-11-12T12:47:17.706201Z", "shell.execute_reply": "2023-11-12T12:47:17.705875Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "def remove_html_markup_ampersand(s: str) -> str:\n", " tag = False\n", " quote = False\n", " out = \"\"\n", "\n", " for c in s:\n", " assert tag or not quote\n", "\n", " if c == '&':\n", " time.sleep(0.1) # <-- the obvious performance issue\n", "\n", " if c == '<' and not quote:\n", " tag = True\n", " elif c == '>' and not quote:\n", " tag = False\n", " elif (c == '\"' or c == \"'\") and tag:\n", " quote = not quote\n", " elif not tag:\n", " out = out + c\n", "\n", " return out" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can easily trigger this issue by measuring time taken:" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:17.708436Z", "iopub.status.busy": "2023-11-12T12:47:17.708277Z", "iopub.status.idle": "2023-11-12T12:47:18.024178Z", "shell.execute_reply": "2023-11-12T12:47:18.023845Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "0.3131859590002932" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "with Timer.Timer() as t:\n", " remove_html_markup_ampersand('&&&')\n", "t.elapsed_time()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Let us set up a test that checks whether the performance issue is present." 
] }, { "cell_type": "code", "execution_count": 47, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.026963Z", "iopub.status.busy": "2023-11-12T12:47:18.026823Z", "iopub.status.idle": "2023-11-12T12:47:18.029022Z", "shell.execute_reply": "2023-11-12T12:47:18.028740Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "def remove_html_test(s: str) -> None:\n", " with Timer.Timer() as t:\n", " remove_html_markup_ampersand(s)\n", " assert t.elapsed_time() < 0.1" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "We can now apply delta debugging to determine a minimum input that causes the failure:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.030608Z", "iopub.status.busy": "2023-11-12T12:47:18.030501Z", "iopub.status.idle": "2023-11-12T12:47:18.032243Z", "shell.execute_reply": "2023-11-12T12:47:18.031930Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "s_fail = 'foo&'" ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.034111Z", "iopub.status.busy": "2023-11-12T12:47:18.033990Z", "iopub.status.idle": "2023-11-12T12:47:18.137006Z", "shell.execute_reply": "2023-11-12T12:47:18.136660Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "with DeltaDebugger.DeltaDebugger() as dd:\n", " remove_html_test(s_fail)" ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.138944Z", "iopub.status.busy": "2023-11-12T12:47:18.138818Z", "iopub.status.idle": "2023-11-12T12:47:18.661950Z", "shell.execute_reply": "2023-11-12T12:47:18.661603Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "{'s': '&'}" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dd.min_args()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "For performance issues, however, a minimal input is often not enough to highlight the failure cause. This is because short inputs tend to take less processing time than longer inputs, which increases the risks of a spurious diagnosis. A better alternative is to compute a _maximum_ input where the issue does not occur:" ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.663670Z", "iopub.status.busy": "2023-11-12T12:47:18.663551Z", "iopub.status.idle": "2023-11-12T12:47:18.871680Z", "shell.execute_reply": "2023-11-12T12:47:18.871267Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "s_pass = dd.max_args()" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.873666Z", "iopub.status.busy": "2023-11-12T12:47:18.873534Z", "iopub.status.idle": "2023-11-12T12:47:18.876166Z", "shell.execute_reply": "2023-11-12T12:47:18.875834Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "{'s': 'fooamp;'}" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s_pass" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "We see that the culprit character (the `&`) is removed. 
This tells us the failure-inducing difference – or, more precisely, the cause for the performance issue." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Synopsis" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "This chapter provides a class `PerformanceDebugger` that allows measuring and visualizing the time taken per line in a function." ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.879022Z", "iopub.status.busy": "2023-11-12T12:47:18.878855Z", "iopub.status.idle": "2023-11-12T12:47:18.899307Z", "shell.execute_reply": "2023-11-12T12:47:18.898967Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "with PerformanceDebugger(TimeCollector) as debugger:\n", " for i in range(100):\n", " s = remove_html_markup('foo')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "The distribution of executed time within each function can be obtained by printing out the debugger:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.901570Z", "iopub.status.busy": "2023-11-12T12:47:18.901446Z", "iopub.status.idle": "2023-11-12T12:47:18.903947Z", "shell.execute_reply": "2023-11-12T12:47:18.903613Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 238 2% def remove_html_markup(s): # type: ignore\n", " 239 2% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 14% if c == '<' and not quote:\n", " 247 2% tag = True\n", " 248 11% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 8% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 5% out = out + c\n", " 254 0%\n", " 255 2% return out\n", "\n" ] } ], "source": [ "print(debugger)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The sum of all percentages in a function should always be 100%." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "These percentages can also be visualized, where darker shades represent higher percentage values:" ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.905585Z", "iopub.status.busy": "2023-11-12T12:47:18.905474Z", "iopub.status.idle": "2023-11-12T12:47:18.908944Z", "shell.execute_reply": "2023-11-12T12:47:18.908631Z" }, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/html": [ "
 238 def remove_html_markup(s):  # type: ignore
\n", "
 239     tag = False
\n", "
 240     quote = False
\n", "
 241     out = ""
\n", "
 242  
\n", "
 243     for c in s:
\n", "
 244         assert tag or not quote
\n", "
 245  
\n", "
 246         if c == '<' and not quote:
\n", "
 247             tag = True
\n", "
 248         elif c == '>' and not quote:
\n", "
 249             tag = False
\n", "
 250         elif (c == '"' or c == "'") and tag:
\n", "
 251             quote = not quote
\n", "
 252         elif not tag:
\n", "
 253             out = out + c
\n", "
 254  
\n", "
 255     return out
\n" ], "text/markdown": [ "| `remove_html_markup` | `s='foo'` | \n", "| ---------------------- | ---- | \n", "| remove_html_markup:238 | X | \n", "| remove_html_markup:239 | X | \n", "| remove_html_markup:240 | X | \n", "| remove_html_markup:241 | X | \n", "| remove_html_markup:243 | X | \n", "| remove_html_markup:244 | X | \n", "| remove_html_markup:246 | X | \n", "| remove_html_markup:247 | X | \n", "| remove_html_markup:248 | X | \n", "| remove_html_markup:249 | X | \n", "| remove_html_markup:250 | X | \n", "| remove_html_markup:252 | X | \n", "| remove_html_markup:253 | X | \n", "| remove_html_markup:255 | X | \n" ], "text/plain": [ " 238 2% def remove_html_markup(s): # type: ignore\n", " 239 2% tag = False\n", " 240 1% quote = False\n", " 241 1% out = \"\"\n", " 242 0%\n", " 243 17% for c in s:\n", " 244 15% assert tag or not quote\n", " 245 0%\n", " 246 14% if c == '<' and not quote:\n", " 247 2% tag = True\n", " 248 11% elif c == '>' and not quote:\n", " 249 3% tag = False\n", " 250 8% elif (c == '\"' or c == \"'\") and tag:\n", " 251 0% quote = not quote\n", " 252 9% elif not tag:\n", " 253 5% out = out + c\n", " 254 0%\n", " 255 2% return out" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "debugger" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The abstract `MetricCollector` class allows subclassing to build more collectors, such as `HitCollector`." ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.910553Z", "iopub.status.busy": "2023-11-12T12:47:18.910440Z", "iopub.status.idle": "2023-11-12T12:47:18.912126Z", "shell.execute_reply": "2023-11-12T12:47:18.911811Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# ignore\n", "from ClassDiagram import display_class_hierarchy" ] }, { "cell_type": "code", "execution_count": 57, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:18.913776Z", "iopub.status.busy": "2023-11-12T12:47:18.913623Z", "iopub.status.idle": "2023-11-12T12:47:19.505575Z", "shell.execute_reply": "2023-11-12T12:47:19.505127Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "PerformanceDebugger\n", "\n", "\n", "PerformanceDebugger\n", "\n", "\n", "\n", "__init__()\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "MetricDebugger\n", "\n", "\n", "MetricDebugger\n", "\n", "\n", "\n", "color()\n", "\n", "\n", "\n", "maximum()\n", "\n", "\n", "\n", "metric()\n", "\n", "\n", "\n", "suspiciousness()\n", "\n", "\n", "\n", "tooltip()\n", "\n", "\n", "\n", "total()\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "PerformanceDebugger->MetricDebugger\n", "\n", "\n", "\n", "\n", "\n", "SpectrumDebugger\n", "\n", "\n", "SpectrumDebugger\n", "\n", "\n", "\n", "\n", "\n", "MetricDebugger->SpectrumDebugger\n", "\n", "\n", "\n", "\n", "\n", "DifferenceDebugger\n", "\n", "\n", "DifferenceDebugger\n", "\n", "\n", "\n", "FAIL\n", "\n", "\n", "\n", "PASS\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "SpectrumDebugger->DifferenceDebugger\n", "\n", "\n", "\n", "\n", "\n", "StatisticalDebugger\n", "\n", "\n", "StatisticalDebugger\n", "\n", "\n", "\n", "\n", "\n", "DifferenceDebugger->StatisticalDebugger\n", "\n", "\n", "\n", "\n", "\n", "TimeCollector\n", "\n", "\n", "TimeCollector\n", "\n", "\n", "\n", "__enter__()\n", "\n", 
"\n", "\n", "__init__()\n", "\n", "\n", "\n", "all_metrics()\n", "\n", "\n", "\n", "collect()\n", "\n", "\n", "\n", "metric()\n", "\n", "\n", "\n", "reset_timer()\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "MetricCollector\n", "\n", "\n", "MetricCollector\n", "\n", "\n", "\n", "all_metrics()\n", "\n", "\n", "\n", "maximum()\n", "\n", "\n", "\n", "metric()\n", "\n", "\n", "\n", "total()\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "TimeCollector->MetricCollector\n", "\n", "\n", "\n", "\n", "\n", "CoverageCollector\n", "\n", "\n", "CoverageCollector\n", "\n", "\n", "\n", "\n", "\n", "MetricCollector->CoverageCollector\n", "\n", "\n", "\n", "\n", "\n", "Collector\n", "\n", "\n", "Collector\n", "\n", "\n", "\n", "\n", "\n", "CoverageCollector->Collector\n", "\n", "\n", "\n", "\n", "\n", "StackInspector\n", "\n", "\n", "StackInspector\n", "\n", "\n", "\n", "_generated_function_cache\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "CoverageCollector->StackInspector\n", "\n", "\n", "\n", "\n", "\n", "Tracer\n", "\n", "\n", "Tracer\n", "\n", "\n", "\n", "\n", "\n", "Collector->Tracer\n", "\n", "\n", "\n", "\n", "\n", "Tracer->StackInspector\n", "\n", "\n", "\n", "\n", "\n", "HitCollector\n", "\n", "\n", "HitCollector\n", "\n", "\n", "\n", "__init__()\n", "\n", "\n", "\n", "all_metrics()\n", "\n", "\n", "\n", "collect()\n", "\n", "\n", "\n", "metric()\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "HitCollector->MetricCollector\n", "\n", "\n", "\n", "\n", "\n", "Legend\n", "Legend\n", "• \n", "public_method()\n", "• \n", "private_method()\n", "• \n", "overloaded_method()\n", "Hover over names to see doc\n", "\n", "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# ignore\n", "display_class_hierarchy([PerformanceDebugger, TimeCollector, HitCollector],\n", " public_methods=[\n", " PerformanceDebugger.__init__,\n", " ],\n", " project='debuggingbook')" ] }, { "cell_type": "markdown", "metadata": { "button": false, "new_sheet": true, "run_control": { "read_only": false }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Lessons Learned\n", "\n", "* To measure performance,\n", " * instrument the code such that the time taken per function (or line) is collected; or\n", " * sample the execution that at regular intervals, the active call stack is collected.\n", "* To make code performant, focus on efficient algorithms, efficient data types, and sufficient abstraction such that you can replace them by alternatives.\n", "* Beyond efficient algorithms and data types, do _not_ optimize before measuring." ] }, { "cell_type": "markdown", "metadata": { "button": false, "new_sheet": false, "run_control": { "read_only": false }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Next Steps\n", "\n", "This chapter concludes the part on abstracting failures. The next part will focus on\n", "\n", "* [repairing code automatically](Repairer.ipynb)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Background\n", "\n", "[Scalene](https://github.com/plasma-umass/scalene) is a high-performance, high-precision CPU, GPU, and memory profiler for Python. In contrast to the standard Python `cProfile` profiler, it uses _sampling_ instead of instrumentation or relying on Python's tracing facilities; and it also supports line-by-line profiling. 
Scalene might be the tool of choice if you want to go beyond basic profiling.\n", "\n", "The Wikipedia articles on [profiling](https://en.wikipedia.org/wiki/Profiling_(computer_programming)) and [performance analysis tools](https://en.wikipedia.org/wiki/List_of_performance_analysis_tools) provide several additional resources on profiling tools and how to apply them in practice." ] }, { "cell_type": "markdown", "metadata": { "button": false, "new_sheet": true, "run_control": { "read_only": false }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Exercises" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "button": false, "new_sheet": false, "run_control": { "read_only": false }, "slideshow": { "slide_type": "subslide" } }, "source": [ "### Exercise 1: Profiling Memory Usage\n", "\n", "The Python [`tracemalloc` module](https://docs.python.org/3/library/tracemalloc.html) allows tracking memory usage during execution. Between `tracemalloc.start()` and `tracemalloc.stop()`, use `tracemalloc.get_traced_memory()` to obtain how much memory is currently being consumed:" ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:19.508770Z", "iopub.status.busy": "2023-11-12T12:47:19.508643Z", "iopub.status.idle": "2023-11-12T12:47:19.512683Z", "shell.execute_reply": "2023-11-12T12:47:19.512371Z" }, "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import tracemalloc" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:19.514532Z", "iopub.status.busy": "2023-11-12T12:47:19.514419Z", "iopub.status.idle": "2023-11-12T12:47:19.517251Z", "shell.execute_reply": "2023-11-12T12:47:19.516777Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "tracemalloc.start()" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:19.519110Z", "iopub.status.busy": "2023-11-12T12:47:19.518920Z", "iopub.status.idle": "2023-11-12T12:47:19.522105Z", "shell.execute_reply": "2023-11-12T12:47:19.521690Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "20256" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "current_size, peak_size = tracemalloc.get_traced_memory()\n", "current_size" ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "execution": { "iopub.execute_input": "2023-11-12T12:47:19.523975Z", "iopub.status.busy": "2023-11-12T12:47:19.523766Z", "iopub.status.idle": "2023-11-12T12:47:19.525833Z", "shell.execute_reply": "2023-11-12T12:47:19.525537Z" }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "tracemalloc.stop()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Create a subclass of `MetricCollector` named `MemoryCollector`. Make it measure the memory consumption before and after each line executed (0 if negative), and visualize the impact of individual lines on memory. Create an appropriate test program that (temporarily) consumes larger amounts of memory."
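,
 "\n",
 "\n",
 "One possible starting point is sketched below. It follows the pattern of `TimeCollector`; the class name is given by the exercise, but the details (attribute names, how memory growth is attributed to lines) are assumptions, not a reference solution. Such a collector could then be plugged into `PerformanceDebugger(MemoryCollector)` just like the collectors above.\n",
 "\n",
 "```python\n",
 "class MemoryCollector(MetricCollector):\n",
 "    \"\"\"Sketch: collect (non-negative) memory growth per line via tracemalloc\"\"\"\n",
 "\n",
 "    def __init__(self) -> None:\n",
 "        super().__init__()\n",
 "        self.memory: Dict[Location, float] = {}\n",
 "        self.last_size = 0\n",
 "\n",
 "    def __enter__(self) -> Any:\n",
 "        super().__enter__()\n",
 "        tracemalloc.start()\n",
 "        self.last_size, _ = tracemalloc.get_traced_memory()\n",
 "        return self\n",
 "\n",
 "    def __exit__(self, exc_tp: Any, exc_value: Any, exc_traceback: Any) -> Any:\n",
 "        ret = super().__exit__(exc_tp, exc_value, exc_traceback)\n",
 "        tracemalloc.stop()\n",
 "        return ret\n",
 "\n",
 "    def collect(self, frame: FrameType, event: str, arg: Any) -> None:\n",
 "        super().collect(frame, event, arg)\n",
 "        size, _ = tracemalloc.get_traced_memory()\n",
 "        growth = max(size - self.last_size, 0)  # 0 if negative, as requested\n",
 "        self.last_size = size\n",
 "\n",
 "        location = (frame.f_code.co_name, frame.f_lineno)\n",
 "        self.memory.setdefault(location, 0.0)\n",
 "        self.memory[location] += growth\n",
 "\n",
 "    def metric(self, location: Location) -> Optional[float]:\n",
 "        return self.memory.get(location)\n",
 "\n",
 "    def all_metrics(self, func: str) -> List[float]:\n",
 "        return [mem for (func_name, lineno), mem in self.memory.items()\n",
 "                if func_name == func]\n",
 "```"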
] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Exercise 2: Statistical Performance Debugging\n", "\n", "In a similar way as we integrated a binary \"performance test\" with delta debugging, we can also integrate such a test with other techniques. Combining a performance test with [Statistical Debugging](StatisticalDebugger.ipynb), for instance, will highlight those lines whose execution correlates with low performance. But then, the performance test need not be binary, as with functional pass/fail tests – you can also _weight_ individual lines by _how much_ they impact performance. Create a variant of `StatisticalDebugger` that reflects the impact of individual lines on an arbitrary (summarized) performance metric." ] } ], "metadata": { "ipub": { "bibliography": "fuzzingbook.bib", "toc": true }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.2" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": true, "title_cell": "", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": true }, "toc-autonumbering": false }, "nbformat": 4, "nbformat_minor": 4 }