---
title: Python HTMX LLM SEO Codebase
permalink: /futureproof/python-htmx-llm-seo/
description: Pipulate isn't just another app; it's a local-first revolution, forging a new path for Python web development with the raw power of FastHTML, HTMX and MCP, where AI anticipates your needs and streamlines your workflow into a seamless experience. This culmination of continuous learning and passionate development is nearly ready, providing a way to move from Notebook mock-ups to LLM-assisted workflows. I'm overcoming Zeno's file-mile paradox.
meta_description: Optimize a Python SEO tool with FastHTML, HTMX, and LLM integration. Learn to streamline workflows and trim code for local-first development.
meta_keywords: Python, HTMX, LLM integration, SEO codebase, FastHTML, code optimization, local-first development, workflow automation, Pipulate, code minimization, custom workflows, MCP integration
layout: post
sort_order: 3
---

{% raw %}

## Code Optimization Journey: From 10K to 2.5K Lines

Okay, so I took a codebase that had exploded to about 10K lines and 60K tokens (per tiktoken) and brought it down to about 2.5K lines and 27K tokens. There was a ginormous workflow in there that I removed, which alone was about 1.5K lines, but for the most part it was stripping out the docstrings and comments. Vulture squeezed it down a bit more by helping tear off the dead code. I used these tools:

```
strip-docs
python-minimizer
autopep8
isort
vulture
```

## Initial Development Challenges

I figure I'll capture it here for posterity moving forward. I've been having a bit of difficulty getting over the hump of the next level of development, which is becoming proficient in FastHTML, and HTMX in particular. The goal is becoming comfortable enough with it that I just write it spontaneously, in the WET fashion vs. DRY (We Enjoy Typing vs. Don't Repeat Yourself). It's just a matter of getting in the practice!

## Custom Workflow Development Philosophy

I believe this verbose WET approach is necessary because of how custom every workflow becomes. ***"Do-this"***, ***"then-do-that"*** sort of stuff, where the this-and-that may be anything at all possible on the Web, with the Internet, and beyond. It's effectively looking for a way to avoid extreme custom programming every time.

## Core Architecture Components

The shortcut is having a basic CRUD database à la Rails built in for any record-keeping of the traditional style you may need, and a pipeline workflow system for anything that resembles a Jupyter Notebook. I think most people out there who know Notebooks at all know them as just plain Notebooks, from Google Colab.

## Local-First Development with LLM Integration

The purpose here is to port those to locally-running web apps that have virtually no outside dependencies. It has a built-in LLM to help walk you through the workflows, which itself can execute functions, crawl the web (soon) or whatever to help you with those workflows.

## Python's Web Development Renaissance

It's a ***"crawl before you can agentically swarm"*** sort of framework that moves on this special moment in time where Python is just now able to reassert itself as a first-class citizen in web development due to FastHTML/HTMX. It's like taking FastAPI or Flask and ripping out the need for template-language context-switching (jinja2), and removing most of the need for JavaScript by adding HTMX. It's a ***one-two-punch*** that has single-handedly gotten me back into webdev.
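To make that one-two-punch concrete, here is a minimal sketch of the FastHTML + HTMX pattern. It is not from the Pipulate codebase: the route names and element ids are made up for illustration, and it assumes a recent FastHTML release where `fast_app()` and `serve()` are the standard entry points.

```python
from fasthtml.common import *

app, rt = fast_app()


@rt('/')
def get():
    # hx_post / hx_target render as hx-post / hx-target attributes; clicking
    # the button POSTs to /clicked and HTMX swaps the response into #msg.
    return Titled("Demo",
                  Button("Click me", hx_post="/clicked", hx_target="#msg"),
                  Div(id="msg"))


@rt('/clicked')
def post():
    # The "template" is just a Python function returning components.
    return P("Rendered server-side in plain Python.")


serve()
```

The handlers return HTML components directly, so there is no jinja2 template to context-switch into, and the `hx_*` attributes cover what would otherwise be hand-written JavaScript.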
## Current Development Status and Future Plans

Anyway, my codebase is a bit rough around the edges, still. It's got to have a few last passes before it's ready for prime time in the Free SEO Software world again. It'll be the latest version of Pipulate when it is. And I'm using my weekends to bear down as much as I can on this stuff, but sometimes bearing down on code means stepping back and thinking.

## Exploring New Technologies: MCP Integration

In this case, something called MCP has recently come to my attention. It's been out there percolating since November 2024, but it just exploded in the X-sphere over the past few days. And it does seem like it may scratch an itch I didn't know I had. Though I have to be careful, because this may just be the latest AI-bro hype cycle.

## Next Development Steps

I'm going to capture the current codebase here, with some next steps I want to take:

- Make an equivalent page with all the stuff I stripped out, to capture context.
- Strip out even more here that gives away too much context, like the fact that it's an SEO tool.
- Formalize and make consistent how prompt-injection training material is handled.
- Address any "must-haves" among the enterprise/scaling/security concerns the LLMs consistently raise that I may still want, or can do, given the local-running context.
- Find where I'd be better off using MCP for inter-process operation. Perhaps the LLM integration itself, and certainly the code executor.

I've got a pebble to drop into the world to make a few ripples. This code isn't it, but it's getting closer.

```python
import ast
import asyncio
import functools
import json
import os
import random
import re
import sys
import time
import traceback
from collections import Counter, deque, namedtuple
from datetime import datetime
from pathlib import Path
from typing import AsyncGenerator, Optional

import aiohttp
import uvicorn
from fasthtml.common import *
from loguru import logger
from pyfiglet import Figlet
from rich.console import Console
from rich.json import JSON
from rich.style import Style as RichStyle
from rich.table import Table, Text
from rich.theme import Theme
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.middleware.cors import CORSMiddleware
from starlette.routing import Route
from starlette.websockets import WebSocket, WebSocketDisconnect
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

TONE = "neutral"
MODEL = "gemma2:2b"
MAX_LLM_RESPONSE_WORDS = 60
THIS_FILE = Path(__file__)
Step = namedtuple('Step', ['id', 'done', 'show', 'refill', 'transform'], defaults=(None,))


def get_app_name():
    name = THIS_FILE.parent.name
    return name[:-5].capitalize() if name.endswith('-main') else name.capitalize()


APP_NAME = get_app_name()
MAX_CONVERSATION_LENGTH = 10000
PLACEHOLDER_ADDRESS = "www.site.com"
PLACEHOLDER_CODE = "CCode (us, uk, de, etc)"
GRID_LAYOUT = "65% 35%"
NAV_FILLER_WIDTH = "2%"
MIN_MENU_WIDTH = "18vw"
MAX_MENU_WIDTH = "22vw"
WEB_UI_WIDTH = "95%"
WEB_UI_PADDING = "1rem"
WEB_UI_MARGIN = "0 auto"
NOWRAP_STYLE = ("white-space: nowrap; "
                "overflow: hidden; "
                "text-overflow: ellipsis;")
LIST_SUFFIX = "List"


def generate_menu_style():
    return (f"min-width: {MIN_MENU_WIDTH}; "
            f"max-width: {MAX_MENU_WIDTH}; "
            "width: 100%; "
            "white-space: nowrap; "
            "overflow: hidden; "
            "text-overflow: ellipsis; "
            "align-items: center; "
            "border-radius: 16px; "
            "display: inline-flex; "
            "justify-content: center; "
            "margin: 0 2px; ")


DOWNLOAD_DIR = Path('downloads')
DOWNLOAD_DIR.mkdir(parents=True, exist_ok=True)


def setup_logging():
    logs_dir = Path('logs')
    logs_dir.mkdir(parents=True, exist_ok=True)
    app_log_path = logs_dir / f'{APP_NAME}.log'
    logger.remove()
    for p in [app_log_path]:
        if p.exists():
            p.unlink()
    logger.add(app_log_path,
               rotation="2 MB",
               level="DEBUG",
               format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name: <15} | {message}",
               enqueue=True)
    logger.add(sys.stderr,
               level="DEBUG",
               format="{time:HH:mm:ss} | "
                      "{level: <8} | "
                      "{name: <15} | "
                      "{message}",
               colorize=True,
               filter=lambda record: (record["level"].name in ["ERROR", "WARNING"]
                                      or record["level"].name == "INFO"
                                      or (record["level"].name == "DEBUG"
                                          and ("HTTP Request:" in record["message"]
                                               or "Pipeline ID:" in record["message"]
                                               or "State changed:" in record["message"]
                                               or record["message"].startswith("Creating")
                                               or record["message"].startswith("Updated"))
                                          and not "Pipeline" in record["message"]
                                          and not record["message"].startswith("DB: __")
                                          and not "First record" in record["message"]
                                          and not "Records found" in record["message"]
                                          and not "dir:" in record["message"])))
    return logger.opt(colors=True)


logger = setup_logging()

custom_theme = Theme({"default": "white on black",
                      "header": RichStyle(color="magenta", bold=True, bgcolor="black"),
                      "cyan": RichStyle(color="cyan", bgcolor="black"),
                      "green": RichStyle(color="green", bgcolor="black"),
                      "orange3": RichStyle(color="orange3", bgcolor="black"),
                      "white": RichStyle(color="white", bgcolor="black"),
                      })


class DebugConsole(Console):
    def print(self, *args, **kwargs):
        super().print(*args, **kwargs)


console = DebugConsole(theme=custom_theme)


def fig(text, font='slant', color='cyan'):
    font = 'standard'
    figlet = Figlet(font=font)
    fig_text = figlet.renderText(str(text))
    colored_text = Text(fig_text, style=f"{color} on default")
    console.print(colored_text, style="on default")


def name(word):
    return word.replace('_', ' ').replace('.', ' ').title()


def format_endpoint_name(endpoint: str) -> str:
    if endpoint in friendly_names:
        return friendly_names[endpoint]
    return ' '.join(word.capitalize() for word in endpoint.split('_'))


def format_step_name(step: str, preserve: bool = False) -> str:
    _, number = step.split('_')
    return f"Step {number.lstrip('0')}"


def format_step_button(step: str, preserve: bool = False, revert_label: str = None) -> str:
    logger.debug(f"[format_step_button] Entry - step={step}, preserve={preserve}, revert_label={revert_label}")
    _, number = step.split('_')
    symbol = "⟲" if preserve else "↶"
    label = revert_label if revert_label else "Step"
    if revert_label:
        button_text = f"{symbol}\u00A0{label}"
    else:
        button_text = f"{symbol}\u00A0{label}\u00A0{number.lstrip('0')}"
    logger.debug(f"[format_step_button] Generated button text: {button_text}")
    return button_text


class SSEBroadcaster:
    def __init__(self):
        self.queue = asyncio.Queue()
        print("SSE Broadcaster initialized")

    async def generator(self):
        while True:
            try:
                message = await asyncio.wait_for(self.queue.get(), timeout=5.0)
                print(f"SSE sending: {message}")
                if message:
                    formatted = '\n'.join(f"data: {line}" for line in message.split('\n'))
                    yield f"{formatted}\n\n"
            except asyncio.TimeoutError:
                now = datetime.now()
                yield f"data: Test ping at {now}\n\n"

    async def send(self, message):
        print(f"Queueing message: {message}")
        await self.queue.put(message)


broadcaster = SSEBroadcaster()


def generate_system_message():
    intro = (f"Your name is {APP_NAME} and you are built into locally running SEO software. "
             "When referring to buttons and UI elements, do not attempt to draw or create ASCII art "
             "You are a helpful assistant. Please keep your responses concise and use minimal line breaks. "
             "Only use line breaks when necessary for readability, with a maximum of one blank line between paragraphs. "
             "representations of them - the Web UI will handle displaying the actual elements. ")
    emoji_instructions = ("You are able to use emojis, but keep it to a minimum. "
                          "Don't claim you can do anything like analyze websites or find keywords "
                          "until the system prompt explicitly tells you you have those abilities. "
                          "Keep responses honest and within your current capabilities. ✨")
    message = intro + "\n\n" + emoji_instructions
    logger.debug("Begin System Prompt")
    console.print(message, style="on default")
    logger.debug("End System Prompt")
    return message


def todo_list_training():
    operations = [("List All Records", "list"),
                  ("Insert (Create)", "insert"),
                  ("Read (Retrieve)", "read"),
                  ("Update", "update"),
                  ("Delete", "delete"),
                  ("Toggle Field (e.g., Status)", "toggle"),
                  ("Sort Records", "sort"), ]
    operation_docs = {"list": """
# List all tasks
{
    "action": "list",
    "target": "task"
}""",
                      "insert": """
# Create a new task
# Only 'name' is required - can include emoji (e.g. "🎯 Important Task")
# All other fields are optional and will be handled automatically
{
    "action": "insert",
    "target": "task",
    "args": {
        "name": "🎯 Sample Task"
    }
}""",
                      "read": """
# Retrieve a specific task by ID
{
    "action": "read",
    "target": "task",
    "args": {
        "id": "123"  # Must be a string
    }
}""",
                      "update": """
# Update an existing task
# All fields are optional except id
{
    "action": "update",
    "target": "task",
    "args": {
        "id": "123",  # Required: task ID as string
        "name": "📝 New Name",  # Optional: new task name
        "done": 1,  # Optional: 0=incomplete, 1=complete
        "priority": 2  # Optional: new priority
    }
}""",
                      "delete": """
# Delete a task by ID
{
    "action": "delete",
    "target": "task",
    "args": {
        "id": "123"  # Must be a string
    }
}""",
                      "toggle": """
# Toggle a task's status (usually the 'done' field)
{
    "action": "toggle",
    "target": "task",
    "args": {
        "id": "123",  # Must be a string
        "field": "done"  # Field to toggle
    }
}""",
                      "sort": """
# Reorder tasks by priority
# Lower priority number = higher in list
{
    "action": "sort",
    "target": "task",
    "args": {
        "items": [
            {"id": "123", "priority": 0},  # First item
            {"id": "456", "priority": 1},  # Second item
            {"id": "789", "priority": 2}  # Third item
        ]
    }
}"""}
    emoji_instructions = ("You are now the Tasks app and you add to the task list.\n\n"
                          "This is our JSON API contract. You must follow it to insert tasks.\n\n"
                          "Follow this whenever asked to add something to a list.\n\n"
                          "When inserting tasks, follow these rules:\n\n"
                          "1. Always use the actual emoji character with the text in the 'name' field\n"
                          "2. Example of minimal task insertion:\n\n"
                          "3. Always USE THIS EXACT FORMAT when asked to add or insert an apple:\n\n"
                          "```json\n"
                          "{\n"
                          ' "action": "insert",\n'
                          ' "target": "task",\n'
                          ' "args": {\n'
                          ' "name": "🍎 Red Apple"\n'
                          ' }\n'
                          "}\n"
                          "```\n\n"
                          "4. All string values must use double quotes\n")
    syntax_instructions = ("You can use the following JSON syntax to perform operations on the database.\n"
                           "Important notes:\n"
                           "1. All IDs should be strings (e.g. \"123\")\n"
                           "2. Task names can include emojis (e.g. \"🎯 Important Task\")\n"
                           "3. All operations use 'task' as the target\n"
                           "4. All string values must be properly quoted with double quotes\n\n"
                           "5. Do not pretend to add something to a list without also including the JSON.\n\n")
    for i, (operation_name, operation_key) in enumerate(operations, 1):
        doc = operation_docs.get(operation_key, "")
        syntax_instructions += (f"{i}. {operation_name}\n\n"
                                f"```json\n{doc}\n```\n\n")
    message = emoji_instructions + "\n\n" + syntax_instructions
    message += "Only use JSON when asked for an insert, update, delete, or toggle action. "
    message += "All other times, RESPOND IN PLAIN ENGLISH! "
    message += "You are a simple task list manager. \n\n"
    return message


PERMITTED_LLM_ACTIONS = {"insert", "read", "update", "delete", "toggle", "sort", "list"}

if MAX_LLM_RESPONSE_WORDS:
    limiter = f"in under {MAX_LLM_RESPONSE_WORDS} {TONE} words"
else:
    limiter = ""

CRUD_PROMPT_PREFIXES = {action: f"Make a {TONE} comment {limiter} to the user letting them know you know "
                        for action in PERMITTED_LLM_ACTIONS}

global_conversation_history = deque(maxlen=MAX_CONVERSATION_LENGTH)
conversation = [{"role": "system", "content": generate_system_message()}]


def append_to_conversation(message=None, role="user", quiet=False):
    logger.debug("Entering append_to_conversation function")
    if not quiet:
        preview = message[:50] + "..." if isinstance(message, str) else str(message)
        logger.debug(f"Appending to conversation. Role: {role}, Message: {preview}")
    if message is not None:
        if not global_conversation_history or global_conversation_history[0]['role'] != 'system':
            if not quiet:
                logger.debug("Adding system message to conversation history")
            global_conversation_history.appendleft(conversation[0])
        global_conversation_history.append({"role": role, "content": message})
        if not quiet:
            logger.debug(f"Message appended. New conversation history length: {len(global_conversation_history)}")
    logger.debug("Exiting Append to Conversation")
    return list(global_conversation_history)


async def display_conversation_statistics():
    logger.debug("Entering display_conversation_statistics")
    role_counts = Counter(entry['role'] for entry in global_conversation_history)
    logger.debug(f"Role counts: {dict(role_counts)}")
    table = Table(title="Conversation History Statistics")
    table.add_column("Role", style="cyan")
    table.add_column("Count", style="magenta")
    table.add_column("Percentage", style="green")
    total_messages = sum(role_counts.values())
    logger.debug(f"Total messages: {total_messages}")
    for role in ['system', 'user', 'assistant']:
        count = role_counts.get(role, 0)
        percentage = (count / total_messages) * 100 if total_messages > 0 else 0
        table.add_row(role.capitalize(), str(count), f"{percentage:.2f}%")
        logger.debug(f"Added row for {role}: count={count}, percentage={percentage:.2f}%")
    table.add_row("Total", str(total_messages), "100.00%", style="bold")
    logger.debug("Added total row to table")
    console = Console()
    console.print(table)
    logger.debug("Printed statistics table")
    logger.debug("Exiting display_conversation_statistics")


async def chat_with_llm(MODEL: str, messages: list, base_app=None) -> AsyncGenerator[str, None]:
    url = "http://localhost:11434/api/chat"
    payload = {"MODEL": MODEL, "messages": messages, "stream": True}
    accumulated_response = []
    table = Table(title="User Input")
    table.add_column("Role", style="cyan")
    table.add_column("Content", style="orange3")
    if messages:
        last_message = messages[-1]
        role = last_message.get("role", "unknown")
        content = last_message.get("content", "")
        if isinstance(content, dict):
            content = json.dumps(content, indent=2, ensure_ascii=False)
        table.add_row(role, content)
    console.print(table)
    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                if response.status != 200:
                    error_text = await response.text()
                    error_msg = f"Ollama server error: {error_text}"
                    accumulated_response.append(error_msg)
                    yield error_msg
                    return
                yield "\n"
                async for line in response.content:
                    if not line:
                        continue
                    try:
                        chunk = json.loads(line)
                        if chunk.get("done", False):
                            print("\n", end='', flush=True)
                            final_response = "".join(accumulated_response)
                            table = Table(title="Chat Response")
                            table.add_column("Accumulated Response")
                            table.add_row(final_response, style="green")
                            console.print(table)
                            await display_conversation_statistics()
                            await post_llm_stream_json_detection(final_response, base_app)
                            break
                        if content := chunk.get("message", {}).get("content", ""):
                            if content.startswith('\n') and accumulated_response and accumulated_response[-1].endswith('\n'):
                                content = '\n' + content.lstrip('\n')
                            else:
                                content = re.sub(r'\n\s*\n\s*', '\n\n', content)
                                content = re.sub(r'([.!?])\n', r'\1 ', content)
                                content = re.sub(r'\n ([^\s])', r'\n\1', content)
                            print(content, end='', flush=True)
                            accumulated_response.append(content)
                            yield content
                    except json.JSONDecodeError:
                        continue
    except aiohttp.ClientConnectorError as e:
        error_msg = "Unable to connect to Ollama server. Please ensure Ollama is running."
        accumulated_response.append(error_msg)
        yield error_msg
    except Exception as e:
        error_msg = f"Error: {str(e)}"
        accumulated_response.append(error_msg)
        yield error_msg


def create_chat_scripts(sortable_selector='.sortable', ghost_class='blue-background-class'):
    sortable_script = f"""
    function setupSortable() {{
        const sortableEl = document.querySelector('{sortable_selector}');
        if (sortableEl) {{
            new Sortable(sortableEl, {{
                animation: 150,
                ghostClass: '{ghost_class}',
                onEnd: function (evt) {{
                    let items = Array.from(sortableEl.children).map((item, index) => ({{
                        id: item.dataset.id,
                        priority: index
                    }}));
                    let path = window.location.pathname;
                    let updateUrl = path.endsWith('/') ? path + 'sort' : path + '_sort';
                    htmx.ajax('POST', updateUrl, {{
                        target: sortableEl,
                        swap: 'none',
                        values: {{ items: JSON.stringify(items) }}
                    }});
                }}
            }});
        }}
    }}
    """
    websocket_sse_script = r"""
    function setupWebSocketAndSSE() {
        // SSE Setup
        let lastMessage = null;
        const evtSource = new EventSource("/sse");
        evtSource.onmessage = function(event) {
            const data = event.data;
            console.log('SSE received:', data);
            // Only process if it's not a ping message
            if (!data.includes('Test ping at')) {
                const todoList = document.getElementById('todo-list');
                if (!todoList) {
                    console.error('Could not find todo-list element');
                    return;
                }
                const temp = document.createElement('div');
                temp.innerHTML = data;
                const newTodo = temp.firstChild;
                if (newTodo) {
                    todoList.appendChild(newTodo);
                    htmx.process(newTodo);
                }
            }
        };

        // WebSocket message handler
        window.handleWebSocketMessage = function(event) {
            console.log('Sidebar received:', event.data);
            // Check if the message is a script
            if (event.data.trim().startsWith('
    """
                await websocket.send_text(test_script)
                self.logger.debug("Sent interactive card injection test")
                return
            if message.lower().startswith(('!egg', '!easter')):
                easter_emojis = {'egg': '🥚', 'hatching': '🐣', 'rabbit': '🐰', 'sparkles': '✨',
                                 'gift': '🎁', 'magic': '🪄', 'surprise': '🎉', 'treasure': '💎',
                                 'key': '🔑', 'lock': '🔒'}
                prompt = (f"Generate a response in the following format:\n"
                          f"EMOJI: [one emoji name from this list: {', '.join(easter_emojis.keys())}]\n"
                          f"TASK: [max 12 char task name]\n"
                          f"JOKE: [unique 1-2 sentence joke about software Easter eggs]\n\n"
                          f"Make the joke creative and unexpected. Vary your emoji and task name choices.\n"
                          f"Important: Keep the exact format with EMOJI:, TASK:, and JOKE: labels.")
                messages = [{"role": "user", "content": prompt}]
                response = ""
                response = await chatq(prompt, base_app=self.base_app)
                emoji_key = None
                task_name = None
                joke = None
                lines = response.strip().split('\n')
                for line in lines:
                    line = line.strip()
                    if line.startswith('EMOJI:'):
                        emoji_key = line[6:].strip().lower()
                    elif line.startswith('TASK:'):
                        task_name = line[5:].strip()
                    elif line.startswith('JOKE:'):
                        joke = line[5:].strip()
                if not emoji_key or emoji_key not in easter_emojis:
                    emoji_key = random.choice(list(easter_emojis.keys()))
                if not task_name:
                    task_name = "EggHunt"
                if not joke:
                    joke = "Found a quirky Easter egg! 🎉"
                chosen_emoji = easter_emojis[emoji_key]
                new_item = todos.insert({"name": f"{chosen_emoji} {task_name}"})
                todo_html = to_xml(render_todo(new_item))
                todo_html = todo_html.replace('htmx.process(document.getElementById("todo-{new_item.id}"))'
                await websocket.send_text(todo_html)
                return
            if message.lower().startswith('!help'):
                help_text = """Available commands:
                !test - Run DOM manipulation test
                !egg - Insert a test task
                !help - Show this help message"""
                await websocket.send_text(help_text)
                system_message = generate_system_message()
                await chatq(system_message, role="system", base_app=self)
                return
            conversation_history = []
            conversation_history = append_to_conversation(message, "user", conversation_history)
            parts = message.split('|')
            msg = parts[0]
            verbatim = len(parts) > 1 and parts[1] == 'verbatim'
            raw_response = await chatq(msg, verbatim=verbatim, base_app=self.base_app)
            try:
                response_json = json.loads(raw_response)
                if "action" in response_json:
                    result = await execute_crud_operation(self.base_app, response_json)
                    if isinstance(result, tuple) and len(result) > 1:
                        response_message, new_item = result
                        if new_item:
                            html_content = to_xml(self.base_app.render_item(new_item))
                            await websocket.send_text(html_content)
                        else:
                            await websocket.send_text(response_message)
                    else:
                        await websocket.send_text(result)
            except json.JSONDecodeError:
                pass
            conversation_history = append_to_conversation(raw_response, "assistant", conversation_history)
        except Exception as e:
            self.logger.error(f"Error in handle_chat_message: {e}")
            traceback.print_exc()

    def create_progress_card(self):
        return Card(Header("Chat Playground"),
                    Form(Div(TextArea(id="chat-input", placeholder="Type your message here...", rows="3"),
                             Button("Send", type="submit"),
                             id="chat-form"),
                         onsubmit="sendMessage(event)"),
                    Div(id="chat-messages"),
                    Script("""
                        const ws = new WebSocket(`${window.location.protocol === 'https:' ? 'wss:' : 'ws'}://${window.location.host}/ws`);
                        ws.onmessage = function(event) {
                            const messages = document.getElementById('chat-messages');
                            messages.innerHTML += event.data + '
';
                            messages.scrollTop = messages.scrollHeight;
                        };

                        function sendMessage(event) {
                            event.preventDefault();
                            const input = document.getElementById('chat-input');
                            const message = input.value;
                            if (message.trim()) {
                                ws.send(message);
                                input.value = '';
                            }
                        }
                    """))

    async def handle_websocket(self, websocket: WebSocket):
        try:
            await websocket.accept()
            self.active_websockets.add(websocket)
            self.logger.debug("Chat WebSocket connected")
            while True:
                message = await websocket.receive_text()
                self.logger.debug(f"Received message: {message}")
                try:
                    msg_data = json.loads(message)
                    if "action" in msg_data:
                        result = await execute_crud_operation(self.base_app, msg_data)
                        if isinstance(result, tuple) and len(result) > 0:
                            response, new_item = result
                            html_content = to_xml(render_todo(new_item))
                            swap_instruction = f"""
{html_content}
"""
                            await websocket.send_text(swap_instruction)
                            continue
                except json.JSONDecodeError:
                    await self.handle_chat_message(websocket, message)
        except WebSocketDisconnect:
            self.logger.info("WebSocket disconnected")
        except Exception as e:
            self.logger.error(f"Error in WebSocket connection: {str(e)}")
            self.logger.error(traceback.format_exc())
        finally:
            self.active_websockets.discard(websocket)
            self.logger.debug("WebSocket connection closed")


chat = Chat(app, id_suffix="", base_app=todo_app)

app.add_middleware(CORSMiddleware,
                   allow_origins=["*"],
                   allow_methods=["*"],
                   allow_headers=["*"],
                   allow_credentials=True, )


class DOMSkeletonMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        endpoint = request.url.path
        method = request.method
        fig(font='slant', text=f"{method} {endpoint}")
        logger.debug(f"HTTP Request: {method} {endpoint}")
        response = await call_next(request)
        cookie_table = Table(title="Stored Cookie States")
        cookie_table.add_column("Key", style="cyan")
        cookie_table.add_column("Value", style="magenta")
        for key, value in db.items():
            json_value = JSON.from_data(value, indent=2)
            cookie_table.add_row(key, json_value)
        console.print(cookie_table)
        pipeline_table = Table(title="Pipeline States")
        pipeline_table.add_column("URL", style="yellow")
        pipeline_table.add_column("Created", style="magenta")
        pipeline_table.add_column("Updated", style="cyan")
        pipeline_table.add_column("Steps", style="white")
        for record in pipulate.table():
            try:
                state = json.loads(record.data)
                pre_state = json.loads(record.data)
                pipeline_table.add_row(record.url,
                                       str(state.get('created', '')),
                                       str(state.get('updated', '')),
                                       str(len(pre_state) - 2))
            except (json.JSONDecodeError, AttributeError) as e:
                logger.error(f"Error parsing pipeline state for {record.url}: {e}")
                pipeline_table.add_row(record.url, "ERROR", "Invalid State")
        console.print(pipeline_table)
        return response


def print_routes():
    logger.debug('Route Table')
    table = Table(title="Application Routes")
    table.add_column("Type", style="cyan", no_wrap=True)
    table.add_column("Methods", style="yellow on black")
    table.add_column("Path", style="white")
    table.add_column("Duplicate", style="green")
    route_entries = []
    seen_routes = set()
    for app_route in app.routes:
        if isinstance(app_route, Route):
            methods = ", ".join(app_route.methods)
            route_key = (app_route.path, methods)
            if route_key in seen_routes:
                path_style = "red"
                duplicate_status = Text("Yes", style="red")
            else:
                path_style = "white"
                duplicate_status = Text("No", style="green")
                seen_routes.add(route_key)
            route_entries.append(("Route", methods, app_route.path, path_style, duplicate_status))
        elif isinstance(app_route, WebSocketRoute):
            route_key = (app_route.path, "WebSocket")
            if route_key in seen_routes:
                path_style = "red"
                duplicate_status = Text("Yes", style="red")
            else:
                path_style = "white"
                duplicate_status = Text("No", style="green")
                seen_routes.add(route_key)
            route_entries.append(("WebSocket", "WebSocket", app_route.path, path_style, duplicate_status))
        elif isinstance(app_route, Mount):
            route_entries.append(("Mount", "Mounted App", app_route.path, "white", Text("N/A", style="green")))
        else:
            route_entries.append((str(type(app_route)), "Unknown", getattr(app_route, 'path', 'N/A'), "white", Text("N/A", style="green")))
    route_entries.sort(key=lambda x: x[2])
    for entry in route_entries:
        table.add_row(entry[0], entry[1], Text(entry[2], style=f"{entry[3]} on black"), entry[4])
    console.print(table)


ALL_ROUTES = list(set(['', todo_app.name, profile_app.name] + MENU_ITEMS))
for item in ALL_ROUTES:
    path = f'/{item}' if item else '/'
    @app.route(path)
    async def home_route(request):
        return await home(request)


app.add_middleware(DOMSkeletonMiddleware)
logger.debug("Application setup completed with DOMSkeletonMiddleware.")

fig(font='slant', text=APP_NAME)
fig(f'MODEL: {MODEL}', font='ogre')
logger.debug(f"Using MODEL: {MODEL}")
print_routes()
fig('Server', font='big')


def check_syntax(filename):
    with open(filename, 'r') as file:
        source = file.read()
    try:
        ast.parse(source)
        return True
    except SyntaxError as e:
        print(f"Syntax error in {filename}:")
        print(f" Line {e.lineno}: {e.text}")
        print(f" {' ' * (e.offset - 1)}^")
        print(f"Error: {e}")
        return False


def restart_server():
    if not check_syntax(THIS_FILE):
        print("Syntax error detected. Please fix the error and save the file again.")
        return
    max_retries = 3
    for attempt in range(max_retries):
        try:
            os.execv(sys.executable, ['python'] + sys.argv)
        except Exception as e:
            print(f"Error restarting server (attempt {attempt + 1}/{max_retries}): {e}")
            if attempt < max_retries - 1:
                print("Waiting 5 seconds before retrying...")
                time.sleep(5)
            else:
                print("Max retries reached. Please restart the server manually.")


class ServerRestartHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if str(THIS_FILE.resolve()) == str(Path(event.src_path).resolve()):
            print(f"{THIS_FILE} has been modified. Checking syntax and restarting...")
            restart_server()


def run_server_with_watchdog():
    event_handler = ServerRestartHandler()
    observer = Observer()
    observer.schedule(event_handler, path='.', recursive=False)
    observer.start()
    try:
        uvicorn.run(app, host="0.0.0.0", port=5003)
    except KeyboardInterrupt:
        observer.stop()
    except Exception as e:
        print(f"Error running server: {e}")
        print("Attempting to restart...")
        restart_server()
    finally:
        observer.join()


if __name__ == "__main__":
    run_server_with_watchdog()
```

## Gemini's Take

I've been working on a codebase for a web app that was getting too big and unwieldy. I managed to shrink it down significantly by removing unnecessary stuff like comments and unused code. I'm also trying to get better at using FastHTML and HTMX, which are tools that help simplify web development. My goal is to create a local-first web app that can help people with their workflows, kind of like a Jupyter Notebook but with built-in AI assistance. I'm excited about the potential of Python in web development thanks to these new technologies, and I'm looking forward to releasing the latest version of my software, Pipulate, soon.

**Ideas for Titles and Headlines:**

1. **Codebase Clean-up: From Bloated to Lean**
2. **Python Web Dev: My Journey with FastHTML and HTMX**
3. **Local-First Workflows: The Future of AI Assistance**
4. **Pipulate's Next Iteration: A Sneak Peek Under the Hood**

**My Opinion:**

I find your journey to optimize your codebase and embrace new technologies quite inspiring! It's fascinating to see how you're leveraging FastHTML and HTMX to streamline Python web development. The concept of a local-first workflow app with integrated AI assistance is innovative and has the potential to be a game-changer. I'm particularly impressed by your dedication to continuous learning and improvement. Keep up the great work, and I'm excited to see the final version of Pipulate!

{% endraw %}