{ "cells": [ { "metadata": {}, "cell_type": "markdown", "source": [ "# Compare Benchmarks with each other\n", "This notebook demonstrates how you can analyze results for the two benchmark functions, one measuring performance for some baseline implementation, and another for the proposed optimized implementation.\n", "\n", "Such an approach could be handy when it is possible to use two alternative implementations simultaneously (which is usually the case).\n", "\n", "The notebook assumes that benchmark functions for baseline implementation has the \"Baseline\" suffix in their names, and optimized (or changed) alternative implementations has the \"Optimized\" suffix in their names. For example, `invSqrtBaseline` and `invSqrtOptimized`.\n", "\n", "While this example uses a JVM-only project, the notebook could be applied to results collected from multiplatform benchmarks as well.\n", "\n", "First, you need to run benchmarks. This can be done by running the following command from the root of the project:\n", "\n", "```shell\n", "> ./gradlew :examples:kotlin-jvm-compare-hypothesis:benchmark\n", "```\n", "\n", "Once it is completed, run this notebook, and it will automatically find the latest result." ] }, { "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:54.464923Z", "start_time": "2025-11-12T23:20:49.670002Z" } }, "cell_type": "code", "source": "%use serialization, dataframe, kandy", "outputs": [], "execution_count": 1 }, { "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:54.900251Z", "start_time": "2025-11-12T23:20:54.471334Z" } }, "cell_type": "code", "source": [ "// Serialization classes matching the JMH-alike JSON format.\n", "// We define these classes manually so we can keep `params` as a JsonObject, as it means we can handle them\n", "// in a generic manner. If you benchmark have fixed params, using `\"\".deserializeThis()` is\n", "// faster and easier.\n", "\n", "@Serializable\n", "public data class Benchmark(\n", " public val benchmark: String,\n", " public val mode: String,\n", " public val warmupIterations: Int,\n", " public val warmupTime: String,\n", " public val measurementIterations: Int,\n", " public val measurementTime: String,\n", " public val primaryMetric: PrimaryMetric,\n", " public val secondaryMetrics: Map,\n", " public val params: JsonObject? 
= null\n", ")\n", "\n", "@Serializable\n", "public data class PrimaryMetric(\n", " public val score: Double,\n", " public val scoreError: Double,\n", " public val scoreConfidence: List,\n", " public val scorePercentiles: Map,\n", " public val scoreUnit: String,\n", " public val rawData: List>,\n", ")" ], "outputs": [], "execution_count": 2 }, { "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:55.023965Z", "start_time": "2025-11-12T23:20:54.947559Z" } }, "cell_type": "code", "source": [ "// Benchmarks for a \"baseline\" implementation have a \"Baseline\" suffix in their names,\n", "// while benchmarks for an \"opimized\" implementation have a \"Optimized\" suffix.\n", "val baselineSuffix = \"Baseline\"\n", "val optimizedSuffix = \"Optimized\"" ], "outputs": [], "execution_count": 3 }, { "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:55.512756Z", "start_time": "2025-11-12T23:20:55.032052Z" } }, "cell_type": "code", "source": [ "import kotlinx.serialization.json.Json\n", "import java.nio.file.Files\n", "import java.nio.file.attribute.BasicFileAttributes\n", "import kotlin.io.path.exists\n", "import kotlin.io.path.forEachDirectoryEntry\n", "import kotlin.io.path.isDirectory\n", "import kotlin.io.path.listDirectoryEntries\n", "import kotlin.io.path.readText\n", "\n", "// Find latest result file, based on the their timestamp.\n", "val runsDir = notebook.workingDir.resolve(\"kotlin-jvm-compare-hypothesis/build/reports/benchmarks/main\")\n", "val lastRunDir = runsDir.listDirectoryEntries()\n", " .filter { it.isDirectory() }\n", " .sortedByDescending { dir -> Files.readAttributes(dir, BasicFileAttributes::class.java).creationTime() }\n", " .first()\n", "val outputFile = lastRunDir.resolve(\"main.json\")\n", "val json = Json { ignoreUnknownKeys = true }\n", "val benchmarkData = json.decodeFromString>(outputFile.readText())" ], "outputs": [], "execution_count": 4 }, { "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:55.815900Z", "start_time": "2025-11-12T23:20:55.517639Z" } }, "cell_type": "code", "source": [ "import kotlinx.serialization.json.*\n", "\n", "// Helper class for tracking the information we need to use.\n", "data class Benchmark(val name: String, val params: String, val score: Double, val error: Double, val unit: String)\n", "\n", "// Split benchmark results into groups. Generally, each group consist of all tests from one test file,\n", "// except when it is an parameterized test. In this case, each test (with all its variants) are put\n", "// in its own group.\n", "val benchmarkGroups = benchmarkData\n", " .groupBy {\n", " if (it.benchmark.endsWith(optimizedSuffix))\n", " it.benchmark.removeSuffix(optimizedSuffix)\n", " else\n", " it.benchmark.removeSuffix(baselineSuffix)\n", " }\n", " .mapValues { group ->\n", " val benchmarks = group.value.map { benchmark ->\n", " // Parameters are specific to each test. `deserializeJson()` will generate the appropriate data classes,\n", " // but for generic handling of parameters we would need to fallback to reading the JSON. 
{ "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:55.815900Z", "start_time": "2025-11-12T23:20:55.517639Z" } }, "cell_type": "code", "source": [ "import kotlinx.serialization.json.*\n", "\n", "// Helper class for tracking the information we need.\n", "data class Benchmark(val name: String, val params: String, val score: Double, val error: Double, val unit: String)\n", "\n", "// Split benchmark results into groups, one group per benchmark method: each group contains the baseline\n", "// and optimized variants of that method, including all variants of a parameterized benchmark.\n", "// The baseline and optimized results are then joined on their parameters.\n", "val benchmarkGroups = benchmarkData\n", "    .groupBy {\n", "        if (it.benchmark.endsWith(optimizedSuffix))\n", "            it.benchmark.removeSuffix(optimizedSuffix)\n", "        else\n", "            it.benchmark.removeSuffix(baselineSuffix)\n", "    }\n", "    .mapValues { group ->\n", "        val benchmarks = group.value.map { benchmark ->\n", "            // Parameters are specific to each test. `deserializeJson()` would generate the appropriate data classes,\n", "            // but for generic handling we keep `params` as a JsonObject and read the values through the\n", "            // kotlinx.serialization.json accessors.\n", "            val paramInfo = benchmark.params?.entries.orEmpty()\n", "                .sortedBy { it.key }\n", "                .joinToString(\",\") { \"${it.key}=${it.value.jsonPrimitive.content}\" }\n", "            val name = benchmark.benchmark\n", "            Benchmark(\n", "                name,\n", "                paramInfo,\n", "                benchmark.primaryMetric.score,\n", "                benchmark.primaryMetric.scoreError,\n", "                benchmark.primaryMetric.scoreUnit\n", "            )\n", "        }\n", "        val baseline = benchmarks.filter { it.name.endsWith(baselineSuffix) }.toDataFrame()\n", "        val optimized = benchmarks.filter { it.name.endsWith(optimizedSuffix) }.toDataFrame()\n", "        baseline.join(optimized, \"params\")\n", "    }\n", "\n", "// Un-comment this to see the benchmark data as DataFrames\n", "// benchmarkGroups.forEach {\n", "//     DISPLAY(it.value)\n", "// }" ], "outputs": [], "execution_count": 5 }, { "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:56.182074Z", "start_time": "2025-11-12T23:20:55.820371Z" } }, "cell_type": "code", "source": [ "// Prepare the data frames for plotting by:\n", "// - adding calculated columns for errorMin / errorMax, for both the baseline and optimized \"versions\"\n", "// - using the parameter values as the label for tests with parameters\n", "// - using the test name as the label for tests without parameters\n", "val plotData = benchmarkGroups.mapValues {\n", "    it.value\n", "        .add(\"errorMin\") { it.getValue<Double>(\"score\") - it.getValue<Double>(\"error\") }\n", "        .add(\"errorMax\") { it.getValue<Double>(\"score\") + it.getValue<Double>(\"error\") }\n", "        .add(\"errorMin1\") { it.getValue<Double>(\"score1\") - it.getValue<Double>(\"error1\") }\n", "        .add(\"errorMax1\") { it.getValue<Double>(\"score1\") + it.getValue<Double>(\"error1\") }\n", "        .add(\"diff\") { (it.getValue<Double>(\"score1\") - it.getValue<Double>(\"score\")) / it.getValue<Double>(\"score\") * 100.0 }\n", "        .insert(\"label\") {\n", "            // Re-format the benchmark labels to make them look \"nicer\"\n", "            if (!it.getValue<String>(\"params\").isBlank()) {\n", "                it.getValue<String>(\"params\").replace(\",\", \"\\n\")\n", "            } else {\n", "                it.getValue<String>(\"name\").substringAfterLast(\".\").removeSuffix(baselineSuffix)\n", "            }\n", "        }.at(0)\n", "        .add(\"barColor\") {\n", "            val diff = get(\"diff\") as Double\n", "            val interval1 = (get(\"errorMin\") as Double)..(get(\"errorMax\") as Double)\n", "            val interval2 = (get(\"errorMin1\") as Double)..(get(\"errorMax1\") as Double)\n", "            val overlap = interval1.start <= interval2.endInclusive && interval2.start <= interval1.endInclusive\n", "            when {\n", "                overlap -> \"grey\"\n", "                diff > 0 -> \"green\"\n", "                else -> \"red\"\n", "            }\n", "        }\n", "        .remove(\"name\", \"params\")\n", "}" ], "outputs": [], "execution_count": 6 },
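{ "metadata": {}, "cell_type": "markdown", "source": [ "In the frames prepared above, `score` and `error` come from the baseline run, while `score1` and `error1` are the optimized run's columns renamed by the join. The `diff` column is the relative change in score, `(score1 - score) / score * 100`. With illustrative numbers: a baseline score of `10 ± 1` and an optimized score of `8 ± 0.5` give `diff = (8 - 10) / 10 * 100 = -20`, and the error intervals `[9, 11]` and `[7.5, 8.5]` do not overlap, so the bar is drawn in red. Bars are grey when the intervals overlap, i.e. when the difference is within the measurement error; whether red or green indicates an improvement depends on the benchmark mode (for throughput, a higher score is better).\n" ] },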
{ "metadata": { "ExecuteTime": { "end_time": "2025-11-12T23:20:56.633229Z", "start_time": "2025-11-12T23:20:56.186639Z" } }, "cell_type": "code", "source": [ "import org.jetbrains.letsPlot.Geom\n", "import org.jetbrains.letsPlot.core.spec.plotson.coord\n", "import org.jetbrains.letsPlot.themes.margin\n", "\n", "// Plot each group as a bar plot of the relative difference, with bars colored by whether the\n", "// error intervals of the baseline and optimized runs overlap.\n", "// This approach assumes that each group has tests roughly within the same \"scale\".\n", "// If this is not the case, some plots might look very squished. If this happens,\n", "// you can play around with using a LOG10 scale or modifying the limits to focus\n", "// on the changes.\n", "plotData.forEach { (fileName, dataframe) ->\n", "    val plot = dataframe.plot {\n", "        bars {\n", "            x(\"label\") {\n", "                axis.name = \"\"\n", "            }\n", "            y(\"diff\")\n", "            fillColor(\"barColor\") {\n", "                scale = categorical(\"red\" to Color.RED, \"green\" to Color.GREEN, \"grey\" to Color.GREY)\n", "                legend.type = LegendType.None\n", "            }\n", "        }\n", "        coordinatesTransformation = CoordinatesTransformation.cartesianFlipped()\n", "        layout {\n", "            this.yAxisLabel = \"Diff, %\"\n", "            style {\n", "                global {\n", "                    title {\n", "                        margin(10.0, -10.0)\n", "                    }\n", "                    text {\n", "                        fontFamily = FontFamily.MONO\n", "                    }\n", "                }\n", "            }\n", "            // Adjust the height of the Kandy plot based on the number of tests.\n", "            size = 800 to ((50 * dataframe.size().nrow) + 100)\n", "        }\n", "    }\n", "    DISPLAY(HTML(\"<h2>$fileName</h2>\"))\n", "    DISPLAY(plot)\n", "}" ], "outputs": [ { "data": { "text/html": [ "<h2>test.InverseSquareRootBenchmark.invSqrt</h2>
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " -20\n", " \n", " \n", " \n", " \n", " \n", " \n", " -18\n", " \n", " \n", " \n", " \n", " \n", " \n", " -16\n", " \n", " \n", " \n", " \n", " \n", " \n", " -14\n", " \n", " \n", " \n", " \n", " \n", " \n", " -12\n", " \n", " \n", " \n", " \n", " \n", " \n", " -10\n", " \n", " \n", " \n", " \n", " \n", " \n", " -8\n", " \n", " \n", " \n", " \n", " \n", " \n", " -6\n", " \n", " \n", " \n", " \n", " \n", " \n", " -4\n", " \n", " \n", " \n", " \n", " \n", " \n", " -2\n", " \n", " \n", " \n", " \n", " \n", " \n", " 0\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " invSqrt\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " Diff, %\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", " " ], "application/plot+json": { "output_type": "lets_plot_spec", "output": { "mapping": {}, "guides": { "y": { "title": "Diff, %" } }, "coord": { "name": "flip", "flip": true }, "data": { "diff": [ -19.336985717494702 ], "label": [ "invSqrt" ], "barColor": [ "red" ] }, "ggsize": { "width": 800.0, "height": 150.0 }, "kind": "plot", "scales": [ { "aesthetic": "x", "discrete": true, "name": "" }, { "aesthetic": "y", "limits": [ null, null ] }, { "aesthetic": "fill", "values": [ "#ee6666", "#3ba272", "#a39999" ], "limits": [ "red", "green", "grey" ], "guide": "none" } ], "layers": [ { "mapping": { "x": "label", "y": "diff", "fill": "barColor" }, "stat": "identity", "sampling": "none", "inherit_aes": false, "position": "dodge", "geom": "bar" } ], "theme": { "text": { "family": "mono", "blank": false }, "title": { "margin": [ 10.0, -10.0, 10.0, -10.0 ], "blank": false }, "axis_ontop": false, "axis_ontop_y": false, "axis_ontop_x": false }, "data_meta": { "series_annotations": [ { "type": "str", "column": "label" }, { "type": "float", "column": "diff" }, { "type": "str", "column": "barColor" } ] } }, "apply_color_scheme": true, "swing_enabled": true } }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<h2>test.PowerBenchmark.power</h2>
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 0\n", " \n", " \n", " \n", " \n", " \n", " \n", " 200\n", " \n", " \n", " \n", " \n", " \n", " \n", " 400\n", " \n", " \n", " \n", " \n", " \n", " \n", " 600\n", " \n", " \n", " \n", " \n", " \n", " \n", " 800\n", " \n", " \n", " \n", " \n", " \n", " \n", " 1,000\n", " \n", " \n", " \n", " \n", " \n", " \n", " 1,200\n", " \n", " \n", " \n", " \n", " \n", " \n", " 1,400\n", " \n", " \n", " \n", " \n", " \n", " \n", " 1,600\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " power=2.71 value=-3.0\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " power=2.71 value=421431.243214\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " Diff, %\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", " " ], "application/plot+json": { "output_type": "lets_plot_spec", "output": { "mapping": {}, "guides": { "y": { "title": "Diff, %" } }, "coord": { "name": "flip", "flip": true }, "data": { "diff": [ 132.5126341566779, 1549.7214684405087 ], "label": [ "power=2.71\nvalue=-3.0", "power=2.71\nvalue=421431.243214" ], "barColor": [ "green", "green" ] }, "ggsize": { "width": 800.0, "height": 200.0 }, "kind": "plot", "scales": [ { "aesthetic": "x", "discrete": true, "name": "" }, { "aesthetic": "y", "limits": [ null, null ] }, { "aesthetic": "fill", "values": [ "#ee6666", "#3ba272", "#a39999" ], "limits": [ "red", "green", "grey" ], "guide": "none" } ], "layers": [ { "mapping": { "x": "label", "y": "diff", "fill": "barColor" }, "stat": "identity", "sampling": "none", "inherit_aes": false, "position": "dodge", "geom": "bar" } ], "theme": { "text": { "family": "mono", "blank": false }, "title": { "margin": [ 10.0, -10.0, 10.0, -10.0 ], "blank": false }, "axis_ontop": false, "axis_ontop_y": false, "axis_ontop_x": false }, "data_meta": { "series_annotations": [ { "type": "str", "column": "label" }, { "type": "float", "column": "diff" }, { "type": "str", "column": "barColor" } ] } }, "apply_color_scheme": true, "swing_enabled": true } }, "metadata": {}, "output_type": "display_data" } ], "execution_count": 7 } ], "metadata": { "kernelspec": { "display_name": "Kotlin", "language": "kotlin", "name": "kotlin" }, "language_info": { "name": "kotlin", "version": "2.2.20-dev-4982", "mimetype": "text/x-kotlin", "file_extension": ".kt", "pygments_lexer": "kotlin", "codemirror_mode": "text/x-kotlin", "nbconvert_exporter": "" } }, "nbformat": 4, "nbformat_minor": 0 }