{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
Peter Norvig, 3 Jan 2020
\n", "\n", "# Spelling Bee Puzzle\n", "\n", "The [3 Jan. 2020 edition of the **538 Riddler**](https://fivethirtyeight.com/features/can-you-solve-the-vexing-vexillology/) concerns the popular NYTimes [**Spelling Bee**](https://www.nytimes.com/puzzles/spelling-bee) puzzle. In this game, seven letters are arranged in a **honeycomb** lattice, with one letter in the center:\n", "\n", "\n", "\n", "The goal is to identify as many valid words as possible. A valid word uses only letters from the honeycomb, may use a letter multiple times, must use the center letter, and must be at least 4 letters long. For example, AMALGAM is acceptable, but neither GAP (too short) nor PALM (no G) are allowed. Four-letter words are worth 1 point each, while words longer than that score one point for each letter. Words that use all seven letters in the honeycomb are known as **pangrams** and earn 7 bonus points in addition to the points for the length of the word. So in the above example, MEGAPLEX is worth 8 + 7 = 15 points. A valid honeycomb must contain at least one pangram, and must not contain the letter S (that would make it too easy, with all the plural words).\n", "\n", "***The puzzle is: Which seven-letter honeycomb results in the highest possible score?*** \n", "\n", "The 538 Riddler referenced a [word list](https://norvig.com/ngrams/enable1.txt) from my [web site](https://norvig.com/ngrams), so I felt compelled to solve the puzzle. (Note the word list is a standard public domain Scrabble® dictionary that I happen to host a copy of; I didn't curate it, Mendel Cooper and Alan Beale did.) \n", "\n", "I'll show you how I address the problem. First some imports:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from collections import defaultdict, Counter\n", "from dataclasses import dataclass\n", "from itertools import combinations, chain\n", "from typing import Iterable " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Letters, Words, Lettersets, and Pangrams\n", "\n", "Let's start by defining the most basic vocabulary terms:\n", "\n", "- **Letter**: the **valid letters** are uppercase 'A' to 'Z', but not 'S'.\n", "- **Word**: A string of letters.\n", "- **Letterset**: the distinct letters in a word; e.g. letterset('BOOBOO') = 'BO'.\n", "- **Word list**: a list of valid words.\n", "- **Valid word**: a word of at least 4 valid letters and not more than 7 distinct letters.\n", "- **Pangram**: a valid word with exactly 7 distinct letters." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "valid_letters = set('ABCDEFGHIJKLMNOPQR' + 'TUVWXYZ')\n", "Letter = str # A string of one letter\n", "Word = str # A string of 4 or more letters\n", "Letterset = str # A sorted string of distinct letters\n", "\n", "def word_list(text: str) -> list[Word]: \n", " \"\"\"All the valid words in a text.\"\"\"\n", " return [w for w in text.upper().split() if is_valid(w)]\n", "\n", "def is_valid(word) -> bool: \n", " \"\"\"A word with at least 4 letters, at most 7 distinct letters, and no 'S'.\"\"\"\n", " return len(word) >= 4 and len(set(word)) <= 7 and valid_letters.issuperset(word) \n", "\n", "def letterset(word) -> Letterset:\n", " \"\"\"The set of distinct letters in a word, represented as a sorted str.\"\"\"\n", " return ''.join(sorted(set(word)))\n", "\n", "def is_pangram(word) -> bool: return len(set(word)) == 7" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I chose to represent a `Letterset` as a sorted string of distinct letters, and not as a `set`. Why? Because:\n", "- A `set` can't be the key of a dict, and we'll need that capability.\n", "- A `frozenset` could be a key, and would be a good choice for `Letterset`, but a frozenset:\n", " - Takes up 2 to 4 times as much space in memory.\n", " - Is harder to read when debugging: `frozenset({'A', 'G', 'L', 'M'})` versus `'AGLM'`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a mini word list to experiment with:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['AMALGAM', 'CACCIATORE', 'EROTICA', 'GAME', 'GLAM', 'MEGAPLEX']" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mini = word_list('amalgam amalgamation cacciatore erotica em game gem gems glam megaplex')\n", "mini" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that `em` and `gem` are too short, `gems` has an `s`, and `amalgamation` has 8 distinct letters. We're left with six valid words out of the ten candidate words. Three of them are pangrams:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'CACCIATORE', 'EROTICA', 'MEGAPLEX'}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "{w for w in mini if is_pangram(w)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Honeycombs and Scoring\n", "\n", "Here are the main concepts for defining a honeycomb and determining a score:\n", "\n", "- A **honeycomb** lattice consists of two attributes: a letterset of seven distinct letters, and a single center letter.\n", "- The **word score** is 1 point for a 4-letter word, or the word length for longer words, plus 7 bonus points for a pangram.\n", "- The **total score** for a honeycomb is the sum of the word scores for the words that the honeycomb **can make**. \n", "- A honeycomb **can make** a word if the word contains the honeycomb's center, and every letter in the word is in the honeycomb. 
" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "@dataclass(frozen=True, order=True)\n", "class Honeycomb:\n", " \"\"\"A Honeycomb lattice, with 7 letters, 1 of which is the center.\"\"\"\n", " letters: Letterset \n", " center: Letter\n", " \n", "def word_score(word) -> int: \n", " \"\"\"The points for this word, including bonus for pangram.\"\"\"\n", " return 1 if len(word) == 4 else (len(word) + 7 * is_pangram(word))\n", "\n", "def total_score(honeycomb, wordlist) -> int:\n", " \"\"\"The total score for this honeycomb.\"\"\"\n", " return sum(word_score(w) for w in wordlist if can_make(honeycomb, w))\n", "\n", "def can_make(honeycomb, word) -> bool:\n", " \"\"\"Can the honeycomb make this word?\"\"\"\n", " return honeycomb.center in word and all(L in honeycomb.letters for L in word)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the honeycomb from the diagram at the top of the notebook:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Honeycomb(letters='AEGLMPX', center='G')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hc = Honeycomb(letterset('LAPGEMX'), 'G')\n", "hc" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The word scores, makeable words, and total score for this honeycomb on the `mini` word list are as follows:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'AMALGAM': 7,\n", " 'CACCIATORE': 17,\n", " 'EROTICA': 14,\n", " 'GAME': 1,\n", " 'GLAM': 1,\n", " 'MEGAPLEX': 15}" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "{w: word_score(w) for w in mini}" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'AMALGAM', 'GAME', 'GLAM', 'MEGAPLEX'}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "{w for w in mini if can_make(hc, w)}" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "24" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "total_score(hc, mini) # 7 + 1 + 1 + (8+7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Finding the Top-Scoring Honeycomb\n", "\n", "A simple strategy for finding the top-scoring honeycomb is:\n", " - Compile a list of all valid candidate honeycombs.\n", " - For each honeycomb, compute the total score.\n", " - Return a (score, honeycomb) tuple for a honeycomb with the maximum score." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "def top_honeycomb(wordlist) -> tuple[int, Honeycomb]: \n", " \"\"\"Find a (score, honeycomb) pair with a highest-scoring honeycomb.\"\"\"\n", " return max((total_score(h, wordlist), h) \n", " for h in candidate_honeycombs(wordlist))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What are the possible candidate honeycombs? We could try all letters in all slots, but that's a **lot** of honeycombs:\n", "- The center can be any valid letter (25 choices, because 'S' is not allowed).\n", "- The outside can be any six of the remaining 24 letters.\n", "- All together, that's 25 × (24 choose 6) = 3,364,900 candidate honeycombs.\n", "\n", "Fortunately, we can use the constraint that **a valid honeycomb must contain at least one pangram**. 
So the letters of any valid honeycomb must ***be*** the letterset of some pangram (and the center can be any one of the seven letters):" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def candidate_honeycombs(wordlist) -> list[Honeycomb]:\n", " \"\"\"Valid honeycombs have pangram letters, with any center.\"\"\"\n", " return [Honeycomb(letters, center) \n", " for letters in pangram_lettersets(wordlist)\n", " for center in letters]\n", "\n", "def pangram_lettersets(wordlist: list[Word]) -> set[Letterset]:\n", " \"\"\"All lettersets from the pangrams in wordlist.\"\"\"\n", " return {letterset(word) for word in wordlist if is_pangram(word)}" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Honeycomb(letters='ACEIORT', center='A'),\n", " Honeycomb(letters='ACEIORT', center='C'),\n", " Honeycomb(letters='ACEIORT', center='E'),\n", " Honeycomb(letters='ACEIORT', center='I'),\n", " Honeycomb(letters='ACEIORT', center='O'),\n", " Honeycomb(letters='ACEIORT', center='R'),\n", " Honeycomb(letters='ACEIORT', center='T'),\n", " Honeycomb(letters='AEGLMPX', center='A'),\n", " Honeycomb(letters='AEGLMPX', center='E'),\n", " Honeycomb(letters='AEGLMPX', center='G'),\n", " Honeycomb(letters='AEGLMPX', center='L'),\n", " Honeycomb(letters='AEGLMPX', center='M'),\n", " Honeycomb(letters='AEGLMPX', center='P'),\n", " Honeycomb(letters='AEGLMPX', center='X')]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "candidate_honeycombs(mini) # 7 candidates for each of the 2 pangram lettersets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're ready to find the highest-scoring honeycomb with respect to the `mini` word list:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(31, Honeycomb(letters='ACEIORT', center='T'))" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "top_honeycomb(mini)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The program appears to work. But that's just the mini word list. \n", "\n", "# Big Word list\n", "\n", "Here's the big word list:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "aa\n", "aah\n", "aahed\n", "aahing\n", "aahs\n", "aal\n", "aalii\n", "aaliis\n", "aals\n", "aardvark\n" ] } ], "source": [ "! [ -e enable1.txt ] || curl -O http://norvig.com/ngrams/enable1.txt\n", "! head enable1.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How big is it?" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 172820 enable1.txt\n" ] } ], "source": [ "! wc -w enable1.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "172,820 words." 
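] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the same count computed in Python, as a cross-check of `wc -w` (a quick sketch that assumes the download above succeeded):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Count the whitespace-separated words in the file, just as `wc -w` does.\n", "len(open('enable1.txt').read().split())"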
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load it up and print some statistics:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 44,585 valid Spelling Bee words\n", " 14,741 pangram words\n", " 7,986 distinct pangram lettersets\n", " 55,902 candidate honeycombs\n" ] } ], "source": [ "file = 'enable1.txt'\n", "big = word_list(open(file).read())\n", "\n", "print(f\"\"\"\\\n", "{len(big):7,d} valid Spelling Bee words\n", "{sum(map(is_pangram, big)):7,d} pangram words\n", "{len(pangram_lettersets(big)):7,d} distinct pangram lettersets\n", "{len(candidate_honeycombs(big)):7,d} candidate honeycombs\"\"\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How long will it take to run `top_honeycomb(big)`? Most of the computation time is in `total_score`, which is called once for each of the 55,902 candidate honeycombs, so let's estimate the total time by first checking how long it takes to compute the total score of a single honeycomb:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3.62 ms ± 38.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n" ] } ], "source": [ "%timeit total_score(hc, big)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Roughly 3.5 milliseconds for one honeycomb. For all 55,902 valid honeycombs how many seconds would that be?" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "195.657" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ ".0035 * 55902" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A little over 3 minutes. I could run `top_honeycomb(big)`, get a coffee, come back, and declare victory. \n", "\n", "But I think that a puzzle like this deserves a more elegant solution. \n", "\n", "# Faster Scoring: Points Table\n", "\n", "Here's an idea to make `total_score` faster by doing some precomputation:\n", "\n", "- Do the following computation only once:\n", " - Compute the `letterset` and `word_score` for each word in the word list. \n", " - Make a table of `{letterset: sum_of_word_scores}` giving the total score for each letterset. \n", " - I call this a **points table**.\n", "- For each of the 55,902 candidate honeycombs, do the following:\n", " - Consider every **letter subset** of the honeycomb's 7 letters that includes the center letter.\n", " - Sum the points table entries for each of these letter subsets.\n", "\n", "The resulting algorithm, `fast_total_score`, iterates over just 26 – 1 = 63 letter subsets; much fewer than 44,585 valid words. 
The function `top_honeycomb2` creates the points table and calls `fast_total_score`:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "def top_honeycomb2(wordlist) -> tuple[int, Honeycomb]: \n", " \"\"\"Find a (score, honeycomb) tuple with a highest-scoring honeycomb.\"\"\"\n", " table = points_table(wordlist)\n", " return max((fast_total_score(h, table), h) \n", " for h in candidate_honeycombs(wordlist))\n", "\n", "def points_table(wordlist) -> Counter:\n", " \"\"\"A Counter of {letterset: sum_of_word_scores} from all the words in wordlist.\"\"\"\n", " table = Counter()\n", " for word in wordlist:\n", " table[letterset(word)] += word_score(word)\n", " return table\n", "\n", "def fast_total_score(honeycomb, points_table) -> int:\n", " \"\"\"The total score for this honeycomb, using a points table.\"\"\"\n", " return sum(points_table[s] for s in letter_subsets(honeycomb))\n", "\n", "def letter_subsets(honeycomb) -> list[Letterset]:\n", " \"\"\"The 63 subsets of the letters in the honeycomb, each including the center letter.\"\"\"\n", " # range starts at 2, not 1, because (e.g.) 'MAMMA' is valid, but (e.g.) 'AAAA' is not.\n", " subsets = chain.from_iterable(combinations(honeycomb.letters, n) for n in range(2, 8))\n", " return [letters for letters in map(''.join, subsets) \n", " if honeycomb.center in letters]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the points table for the mini word list:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({'ACEIORT': 31, 'AEGLMPX': 15, 'AGLM': 8, 'AEGM': 1})" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "table = points_table(mini)\n", "table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The letterset `'ACEIORT'` gets 31 points (17 for CACCIATORE and 14 for EROTICA), \n", "`'AEGLMPX'` gets 15 for MEGAPLEX\n", "`'AGLM'` gets 8 points (7 for AMALGAM and 1 for GLAM), and\n", "`'AEGM'` gets 1 for GAME. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the honeycomb `hc` again, and its 63 letter subsets:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Honeycomb(letters='AEGLMPX', center='G')" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hc" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['AG', 'EG', 'GL', 'GM', 'GP', 'GX', 'AEG', 'AGL', 'AGM', 'AGP', 'AGX', 'EGL', 'EGM', 'EGP', 'EGX', 'GLM', 'GLP', 'GLX', 'GMP', 'GMX', 'GPX', 'AEGL', 'AEGM', 'AEGP', 'AEGX', 'AGLM', 'AGLP', 'AGLX', 'AGMP', 'AGMX', 'AGPX', 'EGLM', 'EGLP', 'EGLX', 'EGMP', 'EGMX', 'EGPX', 'GLMP', 'GLMX', 'GLPX', 'GMPX', 'AEGLM', 'AEGLP', 'AEGLX', 'AEGMP', 'AEGMX', 'AEGPX', 'AGLMP', 'AGLMX', 'AGLPX', 'AGMPX', 'EGLMP', 'EGLMX', 'EGLPX', 'EGMPX', 'GLMPX', 'AEGLMP', 'AEGLMX', 'AEGLPX', 'AEGMPX', 'AGLMPX', 'EGLMPX', 'AEGLMPX']\n" ] } ], "source": [ "print(letter_subsets(hc))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The total from `fast_total_score` is the sum of the scores from its letter subsets (only 3 of which are in `points_table(mini)`):" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "assert fast_total_score(hc, table) == 24 == table['AGLM'] + table['AEGM'] + table['AEGLMPX']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now solve the puzzle on the big word list:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 702 ms, sys: 3.74 ms, total: 706 ms\n", "Wall time: 707 ms\n" ] }, { "data": { "text/plain": [ "(3898, Honeycomb(letters='AEGINRT', center='R'))" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%time top_honeycomb2(big)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Wow! 3898 is a high score!** And the whole computation took **less than a second**!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Scoring Fewer Honeycombs: Branch and Bound\n", "\n", "A run time of less than a second to find the top possible honeycomb is pretty good! Can we do even better?\n", "\n", "The program would run faster if we scored fewer honeycombs. But if we want to be guaranteed of finding the top-scoring honeycomb, how can we skip any? Consider the pangram **JUKEBOX**. 
With the unusual letters **J**, **K**, and **X**, it scores poorly, regardless of the choice of center:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Honeycomb(letters='BEJKOUX', center='J') 26 points\n", "Honeycomb(letters='BEJKOUX', center='U') 32 points\n", "Honeycomb(letters='BEJKOUX', center='K') 26 points\n", "Honeycomb(letters='BEJKOUX', center='E') 37 points\n", "Honeycomb(letters='BEJKOUX', center='B') 49 points\n", "Honeycomb(letters='BEJKOUX', center='O') 39 points\n", "Honeycomb(letters='BEJKOUX', center='X') 15 points\n" ] } ], "source": [ "for C in 'JUKEBOX':\n", " h = Honeycomb(letterset('JUKEBOX'), C)\n", " print(h, total_score(h, big), 'points')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We might be able to dismiss **JUKEBOX** in one call to `fast_total_score`, rather than seven, with this approach:\n", "- Keep track of the top score found so far, on any previous pangram.\n", "- For each pangram letterset, ask \"if we weren't required to use the center letter, what would this letterset score?\"\n", "- Check if that score is higher than the top score so far.\n", " - If yes, then try the pangram letterset with each of the seven centers; \n", " - If not then dismiss it without trying *any* of the centers.\n", "- This is called a [**branch and bound**](https://en.wikipedia.org/wiki/Branch_and_bound) algorithm: prune a **branch** of 7 honeycombs if an upper **bound** can't beat the top score.\n", "\n", "*Note*: To represent a honeycomb with no center, I can just use `Honeycomb(p, '')`. This works because of a quirk of Python: `letter_subsets` checks if `honeycomb.center in letters`; normally in Python the expression `e in s` means \"*is* `e` *an element of the collection* `s`\", but when `s` is a string it means \"*is* `e` *a substring of* `s`\", and the empty string is a substring of every string. \n", "\n", "I can rewrite `top_honeycomb2` as follows:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "def top_honeycomb3(wordlist) -> tuple[int, Honeycomb]: \n", " \"\"\"Find a (score, honeycomb) tuple with a highest-scoring honeycomb.\"\"\"\n", " table = points_table(wordlist)\n", " top_score, top_honeycomb = 0, None\n", " pangrams = [s for s in table if len(s) == 7]\n", " for p in pangrams:\n", " if fast_total_score(Honeycomb(p, ''), table) > top_score:\n", " for center in p:\n", " honeycomb = Honeycomb(p, center)\n", " score = fast_total_score(honeycomb, table)\n", " if score > top_score:\n", " top_score, top_honeycomb = score, honeycomb\n", " return top_score, top_honeycomb" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 176 ms, sys: 2.31 ms, total: 179 ms\n", "Wall time: 178 ms\n" ] }, { "data": { "text/plain": [ "(3898, Honeycomb(letters='AEGINRT', center='R'))" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%time top_honeycomb3(big)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Awesome! We get the correct answer, and it runs four times faster than `top_honeycomb2`.\n", "\n", "# How many honeycombs does top_honeycomb3 examine? 
\n", "\n", "We can use `functools.lru_cache` to make `Honeycomb` keep track:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "CacheInfo(hits=0, misses=8084, maxsize=None, currsize=8084)" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import functools\n", "Honeycomb = functools.lru_cache(None)(Honeycomb)\n", "top_honeycomb3(big)\n", "Honeycomb.cache_info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`top_honeycomb3` examined 8,084 honeycombs; a 6.9× reduction from the 55,902 examined by `top_honeycomb2`. Since there are 7,986 pangram lettersets, that means we had to look at all 7 centers for only (8084-7986)/7 = 14 of them.\n", "\n", "How much faster is `fast_total_score` than `total_score`?" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "10.8 μs ± 88.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)\n" ] } ], "source": [ "table = points_table(big)\n", "\n", "%timeit fast_total_score(hc, table)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3.67 ms ± 49.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n" ] } ], "source": [ "%timeit total_score(hc, big)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that `fast_total_score` is about 300 times faster." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Fancy Report\n", "\n", "Here I show the top-scoring honeycomb, all the pangrams and other words it can make, and the counts:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "from textwrap import fill\n", "\n", "def report(wordlist: list[Word]) -> None:\n", " \"\"\"Print stats, words, and word scores for the top-scoring honeycomb.\"\"\"\n", " score, honeycomb = top_honeycomb3(wordlist)\n", " made = {w for w in wordlist if can_make(honeycomb, w)}\n", " pangrams = {w for w in made if is_pangram(w)}\n", " print(f'Top {honeycomb}:\\n\\nWords: {len(made):,d}, Pangrams: {len(pangrams)}, Points: {score:,d}.')\n", " for (title, words) in [('Pangrams:', pangrams), ('Other words:', made - pangrams)]:\n", " print('\\n' + title)\n", " print(fill(', '.join(sorted(words)), width=114))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the big report:" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Top Honeycomb(letters='AEGINRT', center='R'):\n", "\n", "Words: 537, Pangrams: 50, Points: 3,898.\n", "\n", "Pangrams:\n", "AERATING, AGGREGATING, ARGENTINE, ARGENTITE, ENTERTAINING, ENTRAINING, ENTREATING, GARNIERITE, GARTERING,\n", "GENERATING, GNATTIER, GRANITE, GRATINE, GRATINEE, GRATINEEING, GREATENING, INGRATE, INGRATIATE, INTEGRATE,\n", "INTEGRATING, INTENERATING, INTERAGE, INTERGANG, INTERREGNA, INTREATING, ITERATING, ITINERATING, NATTERING,\n", "RATTENING, REAGGREGATING, REATTAINING, REGENERATING, REGRANTING, REGRATING, REINITIATING, REINTEGRATE,\n", "REINTEGRATING, REITERATING, RETAGGING, RETAINING, RETARGETING, RETEARING, RETRAINING, RETREATING, TANGERINE,\n", "TANGIER, TARGETING, TATTERING, TEARING, TREATING\n", "\n", "Other words:\n", "AERATE, AERIE, AERIER, AGAR, AGER, AGGER, AGGREGATE, AGINNER, AGRARIAN, AGREE, AGREEING, AGRIA, AIGRET, AIGRETTE,\n", "AIRER, AIRIER, AIRING, AIRN, AIRT, AIRTING, 
ANEAR, ANEARING, ANERGIA, ANGARIA, ANGER, ANGERING, ANGRIER, ANTEATER,\n", "ANTIAIR, ANTIAR, ANTIARIN, ANTRA, ANTRE, AREA, AREAE, ARENA, ARENITE, ARETE, ARGENT, ARGININE, ARIA, ARIETTA,\n", "ARIETTE, ARRAIGN, ARRAIGNING, ARRANGE, ARRANGER, ARRANGING, ARRANT, ARREAR, ARREARAGE, ARTIER, ATRIA, ATTAINER,\n", "ATTAR, ATTIRE, ATTIRING, ATTRITE, EAGER, EAGERER, EAGRE, EARING, EARN, EARNER, EARNING, EARRING, EATER, EERIE,\n", "EERIER, EGER, EGGAR, EGGER, EGRET, ENGAGER, ENGINEER, ENGINEERING, ENGIRT, ENGRAIN, ENGRAINING, ENRAGE, ENRAGING,\n", "ENTER, ENTERA, ENTERER, ENTERING, ENTERTAIN, ENTERTAINER, ENTIRE, ENTRAIN, ENTRAINER, ENTRANT, ENTREAT, ENTREE,\n", "ERGATE, ERNE, ERRANT, ERRATA, ERRING, ETAGERE, ETERNE, GAGER, GAGGER, GAINER, GAITER, GANGER, GANGRENE,\n", "GANGRENING, GARAGE, GARAGING, GARGET, GARNER, GARNERING, GARNET, GARNI, GARRET, GARRING, GARTER, GEAR, GEARING,\n", "GENERA, GENERATE, GENRE, GERENT, GETTER, GETTERING, GINGER, GINGERING, GINNER, GINNIER, GIRN, GIRNING, GIRT,\n", "GIRTING, GITTERN, GNAR, GNARR, GNARRING, GRAIN, GRAINER, GRAINIER, GRAINING, GRAN, GRANA, GRANGE, GRANGER,\n", "GRANITA, GRANNIE, GRANT, GRANTEE, GRANTER, GRANTING, GRAT, GRATE, GRATER, GRATIN, GRATING, GREAT, GREATEN,\n", "GREATER, GREE, GREEGREE, GREEING, GREEN, GREENER, GREENGAGE, GREENIE, GREENIER, GREENING, GREET, GREETER,\n", "GREETING, GREGARINE, GREIGE, GRIG, GRIGRI, GRIN, GRINNER, GRINNING, GRIT, GRITTIER, GRITTING, IGNITER, INANER,\n", "INERRANT, INERT, INERTIA, INERTIAE, INGRAIN, INGRAINING, INGRATIATING, INNER, INTEGER, INTENERATE, INTER, INTERN,\n", "INTERNE, INTERNEE, INTERNING, INTERRING, INTERTIE, INTRANT, INTREAT, INTRIGANT, IRATE, IRATER, IRING, IRRIGATE,\n", "IRRIGATING, IRRITANT, IRRITATE, IRRITATING, ITERANT, ITERATE, ITINERANT, ITINERATE, NAGGER, NAGGIER, NAIRA,\n", "NARINE, NARRATE, NARRATER, NARRATING, NATTER, NATTIER, NEAR, NEARER, NEARING, NEATER, NEGATER, NETTER, NETTIER,\n", "NIGGER, NITER, NITERIE, NITRATE, NITRATING, NITRE, NITRITE, NITTIER, RAGA, RAGE, RAGEE, RAGGEE, RAGGING, RAGI,\n", "RAGING, RAGTAG, RAIA, RAIN, RAINIER, RAINING, RANEE, RANG, RANGE, RANGER, RANGIER, RANGING, RANI, RANT, RANTER,\n", "RANTING, RARE, RARER, RARING, RATAN, RATATAT, RATE, RATER, RATINE, RATING, RATITE, RATTAN, RATTEEN, RATTEN,\n", "RATTENER, RATTER, RATTIER, RATTING, REAGENT, REAGGREGATE, REAGIN, REAR, REARER, REARING, REARRANGE, REARRANGING,\n", "REATA, REATTAIN, REEARN, REEARNING, REENGAGE, REENGAGING, REENGINEER, REENGINEERING, REENTER, REENTERING,\n", "REENTRANT, REGAIN, REGAINER, REGAINING, REGATTA, REGEAR, REGEARING, REGENERATE, REGENT, REGGAE, REGINA, REGINAE,\n", "REGNA, REGNANT, REGRANT, REGRATE, REGREEN, REGREENING, REGREET, REGREETING, REGRET, REGRETTER, REGRETTING, REIGN,\n", "REIGNING, REIGNITE, REIGNITING, REIN, REINING, REINITIATE, REINTER, REINTERRING, REITERATE, RENEGE, RENEGER,\n", "RENEGING, RENIG, RENIGGING, RENIN, RENITENT, RENNET, RENNIN, RENT, RENTE, RENTER, RENTIER, RENTING, RERAN, RERIG,\n", "RERIGGING, RETAG, RETAIN, RETAINER, RETARGET, RETE, RETEAR, RETENE, RETIA, RETIARII, RETIE, RETINA, RETINAE,\n", "RETINE, RETINENE, RETINITE, RETINT, RETINTING, RETIRANT, RETIRE, RETIREE, RETIRER, RETIRING, RETRAIN, RETREAT,\n", "RETREATANT, RETREATER, RETTING, RIANT, RIATA, RIGGER, RIGGING, RING, RINGENT, RINGER, RINGGIT, RINGING, RINNING,\n", "RITE, RITTER, TAGGER, TAGRAG, TANAGER, TANNER, TANTARA, TANTRA, TARE, TARGE, TARGET, TARING, TARN, TARRE, TARRIER,\n", "TARRING, TART, TARTAN, TARTANA, TARTAR, TARTER, TARTING, TARTRATE, TATAR, TATER, TATTER, TATTIER, TEAR, TEARER,\n", 
"TEARIER, TEENAGER, TEENER, TEENIER, TEETER, TEETERING, TENNER, TENTER, TENTERING, TENTIER, TERAI, TERETE, TERGA,\n", "TERGITE, TERN, TERNATE, TERNE, TERRA, TERRAE, TERRAIN, TERRANE, TERRARIA, TERREEN, TERRENE, TERRET, TERRIER,\n", "TERRINE, TERRIT, TERTIAN, TETRA, TETTER, TIARA, TIER, TIERING, TIGER, TINIER, TINNER, TINNIER, TINTER, TIRE,\n", "TIRING, TITER, TITRANT, TITRATE, TITRATING, TITRE, TITTER, TITTERER, TITTERING, TRAGI, TRAIN, TRAINEE, TRAINER,\n", "TRAINING, TRAIT, TREAT, TREATER, TREE, TREEING, TREEN, TRET, TRIAGE, TRIAGING, TRIENE, TRIENNIA, TRIER, TRIG,\n", "TRIGGER, TRIGGERING, TRIGGING, TRINE, TRINING, TRINITARIAN, TRITE, TRITER\n" ] } ], "source": [ "report(wordlist=big)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 'S' Words\n", "\n", "What if we allowed honeycombs to have an 'S' in them? I'll make a new word list that doesn't exclude the 'S'-words, and report on it:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Top Honeycomb(letters='AEINRST', center='E'):\n", "\n", "Words: 1,179, Pangrams: 86, Points: 8,681.\n", "\n", "Pangrams:\n", "ANESTRI, ANTISERA, ANTISTRESS, ANTSIER, ARENITES, ARSENITE, ARSENITES, ARTINESS, ARTINESSES, ATTAINERS,\n", "ENTERTAINERS, ENTERTAINS, ENTRAINERS, ENTRAINS, ENTREATIES, ERRANTRIES, INERTIAS, INSTANTER, INTENERATES,\n", "INTERSTATE, INTERSTATES, INTERSTRAIN, INTERSTRAINS, INTRASTATE, INTREATS, IRATENESS, IRATENESSES, ITINERANTS,\n", "ITINERARIES, ITINERATES, NASTIER, NITRATES, RAINIEST, RATANIES, RATINES, REATTAINS, REINITIATES, REINSTATE,\n", "REINSTATES, RESINATE, RESINATES, RESISTANT, RESISTANTS, RESTRAIN, RESTRAINER, RESTRAINERS, RESTRAINS, RESTRAINT,\n", "RESTRAINTS, RETAINERS, RETAINS, RETINAS, RETIRANTS, RETRAINS, RETSINA, RETSINAS, SANITARIES, SEATRAIN, SEATRAINS,\n", "STAINER, STAINERS, STANNARIES, STEARIN, STEARINE, STEARINES, STEARINS, STRAINER, STRAINERS, STRAITEN, STRAITENS,\n", "STRAITNESS, STRAITNESSES, TANISTRIES, TANNERIES, TEARSTAIN, TEARSTAINS, TENANTRIES, TERNARIES, TERRAINS, TERTIANS,\n", "TRAINEES, TRAINERS, TRANSIENT, TRANSIENTS, TRISTEARIN, TRISTEARINS\n", "\n", "Other words:\n", "AERATE, AERATES, AERIE, AERIER, AERIES, AERIEST, AIRER, AIRERS, AIREST, AIRIER, AIRIEST, AIRINESS, AIRINESSES,\n", "ANATASE, ANATASES, ANEAR, ANEARS, ANENST, ANENT, ANES, ANISE, ANISES, ANISETTE, ANISETTES, ANNATES, ANSAE, ANSATE,\n", "ANSERINE, ANSERINES, ANTAE, ANTE, ANTEATER, ANTEATERS, ANTENNA, ANTENNAE, ANTENNAS, ANTES, ANTISENSE, ANTISTATE,\n", "ANTRE, ANTRES, ANTSIEST, AREA, AREAE, AREAS, ARENA, ARENAS, ARENITE, ARES, ARETE, ARETES, ARIETTA, ARIETTAS,\n", "ARIETTE, ARIETTES, ARISE, ARISEN, ARISES, ARISTAE, ARISTATE, ARREAR, ARREARS, ARREST, ARRESTANT, ARRESTANTS,\n", "ARRESTEE, ARRESTEES, ARRESTER, ARRESTERS, ARRESTS, ARRISES, ARSE, ARSENATE, ARSENATES, ARSES, ARSINE, ARSINES,\n", "ARTERIES, ARTERITIS, ARTIER, ARTIEST, ARTISTE, ARTISTES, ARTISTRIES, ARTSIER, ARTSIEST, ASEA, ASININE,\n", "ASININITIES, ASSASSINATE, ASSASSINATES, ASSENT, ASSENTER, ASSENTERS, ASSENTS, ASSERT, ASSERTER, ASSERTERS,\n", "ASSERTS, ASSES, ASSESS, ASSESSES, ASSET, ASSETS, ASSISTER, ASSISTERS, ASTATINE, ASTATINES, ASTER, ASTERIA,\n", "ASTERIAS, ASTERN, ASTERS, ATES, ATRESIA, ATRESIAS, ATTAINER, ATTENT, ATTEST, ATTESTER, ATTESTERS, ATTESTS, ATTIRE,\n", "ATTIRES, ATTRITE, EARN, EARNER, EARNERS, EARNEST, EARNESTNESS, EARNESTNESSES, EARNESTS, EARNS, EARS, EASE, EASES,\n", "EASIER, EASIES, EASIEST, EASINESS, EASINESSES, EAST, EASTER, EASTERN, EASTERNER, 
EASTERNERS, EASTERS, EASTS,\n", "EATEN, EATER, EATERIES, EATERS, EATS, EERIE, EERIER, EERIEST, EERINESS, EERINESSES, EINSTEIN, EINSTEINS, ENATE,\n", "ENATES, ENSNARE, ENSNARER, ENSNARERS, ENSNARES, ENTASES, ENTASIA, ENTASIAS, ENTASIS, ENTENTE, ENTENTES, ENTER,\n", "ENTERA, ENTERER, ENTERERS, ENTERITIS, ENTERITISES, ENTERS, ENTERTAIN, ENTERTAINER, ENTIA, ENTIRE, ENTIRENESS,\n", "ENTIRENESSES, ENTIRES, ENTIRETIES, ENTITIES, ENTRAIN, ENTRAINER, ENTRANT, ENTRANTS, ENTREAT, ENTREATS, ENTREE,\n", "ENTREES, ENTRIES, ERAS, ERASE, ERASER, ERASERS, ERASES, ERNE, ERNES, ERNS, ERRANT, ERRANTS, ERRATA, ERRATAS, ERRS,\n", "ERSES, ERST, ESERINE, ESERINES, ESES, ESSES, ESTATE, ESTATES, ESTER, ESTERASE, ESTERASES, ESTERS, ESTREAT,\n", "ESTREATS, ESTRIN, ESTRINS, ETAS, ETATIST, ETERNE, ETERNISE, ETERNISES, ETERNITIES, ETESIAN, ETESIANS, ETNA, ETNAS,\n", "INANE, INANENESS, INANENESSES, INANER, INANES, INANEST, INANITIES, INERRANT, INERT, INERTIA, INERTIAE, INERTNESS,\n", "INERTNESSES, INERTS, INITIATE, INITIATES, INNATE, INNATENESS, INNATENESSES, INNER, INNERS, INSANE, INSANENESS,\n", "INSANENESSES, INSANER, INSANEST, INSANITIES, INSATIATE, INSATIATENESS, INSATIATENESSES, INSENSATE, INSENTIENT,\n", "INSERT, INSERTER, INSERTERS, INSERTS, INSET, INSETS, INSETTER, INSETTERS, INSISTENT, INSISTER, INSISTERS, INSNARE,\n", "INSNARER, INSNARERS, INSNARES, INSTANTANEITIES, INSTANTIATE, INSTANTIATES, INSTANTNESS, INSTANTNESSES, INSTATE,\n", "INSTATES, INTENERATE, INTENSE, INTENSENESS, INTENSENESSES, INTENSER, INTENSEST, INTENSITIES, INTENT, INTENTNESS,\n", "INTENTNESSES, INTENTS, INTER, INTEREST, INTERESTS, INTERN, INTERNE, INTERNEE, INTERNEES, INTERNES, INTERNIST,\n", "INTERNISTS, INTERNS, INTERS, INTERTIE, INTERTIES, INTESTATE, INTESTATES, INTESTINE, INTESTINES, INTINE, INTINES,\n", "INTREAT, IRATE, IRATER, IRATEST, IRES, IRISES, IRITISES, IRRITATE, IRRITATES, ISATINE, ISATINES, ISSEI, ISSEIS,\n", "ITERANT, ITERATE, ITERATES, ITINERANT, ITINERATE, NANNIE, NANNIES, NARES, NARINE, NARRATE, NARRATER, NARRATERS,\n", "NARRATES, NASTIES, NASTIEST, NASTINESS, NASTINESSES, NATES, NATTER, NATTERS, NATTIER, NATTIEST, NATTINESS,\n", "NATTINESSES, NEAR, NEARER, NEAREST, NEARNESS, NEARNESSES, NEARS, NEAT, NEATEN, NEATENS, NEATER, NEATEST, NEATNESS,\n", "NEATNESSES, NEATS, NEIST, NENE, NEREIS, NERTS, NESS, NESSES, NEST, NESTER, NESTERS, NESTS, NETS, NETT, NETTER,\n", "NETTERS, NETTIER, NETTIEST, NETTS, NINE, NINES, NINETEEN, NINETEENS, NINETIES, NINNIES, NISEI, NISEIS, NITE,\n", "NITER, NITERIE, NITERIES, NITERS, NITES, NITRATE, NITRE, NITRES, NITRITE, NITRITES, NITTIER, NITTIEST, RAINIER,\n", "RAISE, RAISER, RAISERS, RAISES, RANEE, RANEES, RANTER, RANTERS, RARE, RARENESS, RARENESSES, RARER, RARES, RAREST,\n", "RARITIES, RASE, RASER, RASERS, RASES, RASTER, RASTERS, RATE, RATER, RATERS, RATES, RATINE, RATITE, RATITES,\n", "RATTEEN, RATTEENS, RATTEN, RATTENER, RATTENERS, RATTENS, RATTER, RATTERS, RATTIER, RATTIEST, REAR, REARER,\n", "REARERS, REARREST, REARRESTS, REARS, REASSERT, REASSERTS, REASSESS, REASSESSES, REATA, REATAS, REATTAIN, REEARN,\n", "REEARNS, REENTER, REENTERS, REENTRANT, REENTRANTS, REENTRIES, REES, REEST, REESTS, REIN, REINITIATE, REINS,\n", "REINSERT, REINSERTS, REINTER, REINTERS, REIS, REITERATE, REITERATES, RENEST, RENESTS, RENIN, RENINS, RENITENT,\n", "RENNASE, RENNASES, RENNET, RENNETS, RENNIN, RENNINS, RENT, RENTE, RENTER, RENTERS, RENTES, RENTIER, RENTIERS,\n", "RENTS, RERAISE, RERAISES, RERAN, RERISE, RERISEN, RERISES, RESEAT, RESEATS, RESEE, RESEEN, RESEES, RESENT,\n", "RESENTS, RESET, RESETS, RESETTER, 
RESETTERS, RESIN, RESINS, RESIST, RESISTER, RESISTERS, RESISTS, RESITE, RESITES,\n", "REST, RESTART, RESTARTS, RESTATE, RESTATES, RESTER, RESTERS, RESTRESS, RESTRESSES, RESTS, RETAIN, RETAINER,\n", "RETASTE, RETASTES, RETE, RETEAR, RETEARS, RETENE, RETENES, RETEST, RETESTS, RETIA, RETIARII, RETIE, RETIES,\n", "RETINA, RETINAE, RETINE, RETINENE, RETINENES, RETINES, RETINITE, RETINITES, RETINITIS, RETINT, RETINTS, RETIRANT,\n", "RETIRE, RETIREE, RETIREES, RETIRER, RETIRERS, RETIRES, RETRAIN, RETREAT, RETREATANT, RETREATANTS, RETREATER,\n", "RETREATERS, RETREATS, RETRIES, RETS, RINSE, RINSER, RINSERS, RINSES, RISE, RISEN, RISER, RISERS, RISES, RITE,\n", "RITES, RITTER, RITTERS, SANE, SANENESS, SANENESSES, SANER, SANES, SANEST, SANIES, SANITATE, SANITATES, SANITIES,\n", "SANITISE, SANITISES, SANSEI, SANSEIS, SAREE, SAREES, SARSEN, SARSENET, SARSENETS, SARSENS, SASSES, SASSIER,\n", "SASSIES, SASSIEST, SATE, SATEEN, SATEENS, SATES, SATIATE, SATIATES, SATIETIES, SATINET, SATINETS, SATIRE, SATIRES,\n", "SATIRISE, SATIRISES, SEAR, SEARER, SEAREST, SEARS, SEAS, SEAT, SEATER, SEATERS, SEATS, SEEN, SEER, SEERESS,\n", "SEERESSES, SEERS, SEES, SEINE, SEINER, SEINERS, SEINES, SEIS, SEISE, SEISER, SEISERS, SEISES, SEISIN, SEISINS,\n", "SENARII, SENATE, SENATES, SENE, SENITI, SENNA, SENNAS, SENNET, SENNETS, SENNIT, SENNITS, SENSA, SENSATE, SENSATES,\n", "SENSE, SENSES, SENSITISE, SENSITISES, SENT, SENTE, SENTENTIA, SENTENTIAE, SENTI, SENTIENT, SENTIENTS, SENTRIES,\n", "SERA, SERAI, SERAIS, SERE, SEREIN, SEREINS, SERENATA, SERENATAS, SERENATE, SERENE, SERENENESS, SERENENESSES,\n", "SERENER, SERENES, SERENEST, SERENITIES, SERER, SERES, SEREST, SERIATE, SERIATES, SERIES, SERIN, SERINE, SERINES,\n", "SERINS, SERRATE, SERRATES, SERRIES, SERS, SESTERTIA, SESTET, SESTETS, SESTINA, SESTINAS, SESTINE, SESTINES, SETA,\n", "SETAE, SETENANT, SETENANTS, SETS, SETT, SETTEE, SETTEES, SETTER, SETTERS, SETTS, SIENITE, SIENITES, SIENNA,\n", "SIENNAS, SIERRA, SIERRAN, SIERRAS, SIESTA, SIESTAS, SINE, SINES, SINISTER, SINISTERNESS, SINISTERNESSES, SINNER,\n", "SINNERS, SINTER, SINTERS, SIRE, SIREE, SIREES, SIREN, SIRENIAN, SIRENIANS, SIRENS, SIRES, SIRREE, SIRREES, SISES,\n", "SISSIER, SISSIES, SISSIEST, SISTER, SISTERS, SITE, SITES, SITTEN, SITTER, SITTERS, SNARE, SNARER, SNARERS, SNARES,\n", "SNEER, SNEERER, SNEERERS, SNEERS, STANE, STANES, STANINE, STANINES, STANNITE, STANNITES, STARE, STARER, STARERS,\n", "STARES, STARETS, STARRIER, STARRIEST, STARTER, STARTERS, STASES, STATE, STATER, STATERS, STATES, STEARATE,\n", "STEARATES, STEATITE, STEATITES, STEER, STEERER, STEERERS, STEERS, STEIN, STEINS, STERE, STERES, STERN, STERNA,\n", "STERNER, STERNEST, STERNITE, STERNITES, STERNNESS, STERNNESSES, STERNS, STET, STETS, STIES, STINTER, STINTERS,\n", "STIRRER, STIRRERS, STRAITER, STRAITEST, STRASSES, STREET, STREETS, STRESS, STRESSES, STRETTA, STRETTAS, STRETTE,\n", "STRETTI, STRIAE, STRIATE, STRIATES, TAENIA, TAENIAE, TAENIAS, TAENIASES, TAENIASIS, TANNATE, TANNATES, TANNER,\n", "TANNERS, TANNEST, TANSIES, TARANTASES, TARE, TARES, TARRE, TARRES, TARRIER, TARRIERS, TARRIES, TARRIEST, TARSIER,\n", "TARSIERS, TARTER, TARTEST, TARTNESS, TARTNESSES, TARTRATE, TARTRATES, TASSE, TASSES, TASSET, TASSETS, TASSIE,\n", "TASSIES, TASTE, TASTER, TASTERS, TASTES, TASTIER, TASTIEST, TASTINESS, TASTINESSES, TATE, TATER, TATERS, TATES,\n", "TATTER, TATTERS, TATTIE, TATTIER, TATTIES, TATTIEST, TATTINESS, TATTINESSES, TEAR, TEARER, TEARERS, TEARIER,\n", "TEARIEST, TEARS, TEAS, TEASE, TEASER, TEASERS, TEASES, TEAT, TEATS, TEEN, TEENER, TEENERS, 
TEENIER, TEENIEST,\n", "TEENS, TEENSIER, TEENSIEST, TEENTSIER, TEENTSIEST, TEES, TEETER, TEETERS, TENANT, TENANTS, TENET, TENETS, TENIA,\n", "TENIAE, TENIAS, TENIASES, TENIASIS, TENNER, TENNERS, TENNIES, TENNIS, TENNISES, TENNIST, TENNISTS, TENS, TENSE,\n", "TENSENESS, TENSENESSES, TENSER, TENSES, TENSEST, TENSITIES, TENT, TENTER, TENTERS, TENTIE, TENTIER, TENTIEST,\n", "TENTS, TERAI, TERAIS, TERETE, TERN, TERNATE, TERNE, TERNES, TERNS, TERRA, TERRAE, TERRAIN, TERRANE, TERRANES,\n", "TERRARIA, TERRAS, TERRASES, TERREEN, TERREENS, TERRENE, TERRENES, TERRET, TERRETS, TERRIER, TERRIERS, TERRIES,\n", "TERRINE, TERRINES, TERRIT, TERRITS, TERSE, TERSENESS, TERSENESSES, TERSER, TERSEST, TERTIAN, TERTIARIES, TESSERA,\n", "TESSERAE, TEST, TESTA, TESTAE, TESTATE, TESTATES, TESTEE, TESTEES, TESTER, TESTERS, TESTES, TESTIER, TESTIEST,\n", "TESTINESS, TESTINESSES, TESTIS, TESTS, TETANIES, TETANISE, TETANISES, TETRA, TETRAS, TETS, TETTER, TETTERS, TIER,\n", "TIERS, TIES, TINE, TINEA, TINEAS, TINES, TINIER, TINIEST, TININESS, TININESSES, TINNER, TINNERS, TINNIER,\n", "TINNIEST, TINNINESS, TINNINESSES, TINTER, TINTERS, TIRE, TIRES, TISANE, TISANES, TITANATE, TITANATES, TITANESS,\n", "TITANESSES, TITANITE, TITANITES, TITER, TITERS, TITRATE, TITRATES, TITRE, TITRES, TITTER, TITTERER, TITTERERS,\n", "TITTERS, TITTIE, TITTIES, TRAINEE, TRAINER, TRAITRESS, TRAITRESSES, TRASSES, TREAT, TREATER, TREATERS, TREATIES,\n", "TREATISE, TREATISES, TREATS, TREE, TREEN, TREENS, TREES, TRESS, TRESSES, TRESSIER, TRESSIEST, TRET, TRETS, TRIENE,\n", "TRIENES, TRIENNIA, TRIENS, TRIENTES, TRIER, TRIERS, TRIES, TRINE, TRINES, TRINITIES, TRISTATE, TRISTE, TRITE,\n", "TRITENESS, TRITENESSES, TRITER, TRITEST, TSETSE, TSETSES\n" ] } ], "source": [ "valid_letters.add('S') # Make 'S' a legal letter\n", "\n", "big_s = word_list(open(file).read())\n", "\n", "report(wordlist=big_s)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Allowing 'S' more than doubles the length of the word list and the score of the top honeycomb!\n", "\n", "# Summary\n", "\n", "Here are the highest-scoring honeycombs (with and without an S) with their stats and a pangram to remember them by:\n", "\n", "\n", "
\n",
    "  537 words            1,179 words \n",
    "   50 pangrams            86 pangrams\n",
    "3,898 points           8,681 points\n",
    "      RETAINING              ENTERTAINERS\n",
    "
\n", "\n", "This notebook explored four approaches to finding the highest-scoring honeycomb, with three big efficiency gains:\n", "\n", "1. **Brute Force Enumeration**: Compute total score for every possible honeycomb letter combination; return the highest-scoring. \n", "2. **Pangram Lettersets**: Compute total score only for pangram lettersets. **Reduces candidates by 60x.**\n", "3. **Points Table**: Precompute score for each letterset once; for each honeycomb, sum 63 letter subset scores. **Speeds up scoring time by 300x.**\n", "4. **Branch and Bound**: Try all 7 centers only for lettersets that score better than the top score so far. **Reduces candidates by nearly 7x.**\n", "\n", "All together that is about a 100,000-fold speedup!\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.3" } }, "nbformat": 4, "nbformat_minor": 4 }