---
title: A Single-File Local Web Framework, Powered by Python + Nix + HTMX + a Built-In LLM
permalink: /futureproof/python-nix-htmx-ollama/
description: It’s like an Electron app meets a local Jupyter environment, except it’s all Python + HTMX, installed by Nix, integrated with an on-device LLM for ephemeral or persistent tasks. A single user, single-file system that’s easy to hack and powerful to run—no cloud, no fuss.
meta_description: "Discover a local-first, single-file web framework built with Python, Nix, HTMX, and an integrated on-device LLM for hackable, self-hosted apps."
meta_keywords: Python, Nix, HTMX, local LLM, single-file framework, local-first, self-hosted, FastHTML, Unix pipes, Jupyter, DIY web framework, on-device AI, pipeline workflows
layout: post
sort_order: 1
---

## Setting the Stage

This jumps right into the pre-existing conversation history and discussion context with OpenAI ChatGPT o1 Pro, their $200/mo service version that's ostensibly the precursor to AGI. I have this quiet holiday time, and I'm all over this opportunity to culminate work that I've been doing on and off, well, for decades really. It's always the same frameworks, just with new tools, that's all. But now, with something of PhD-level intelligence on hand, I should be able to do it better than ever before in my life, getting more things just fundamentally right, and thus potentially embraceable by a broader base of people, and make that little ding in the Universe that Steve Jobs used to talk about.

## The Project Vision

Think of this project as a personal Magnum Opus of local web frameworks that just so happens to violate just about every principle enterprise architects hold dear—but that's perfectly okay, and hence this article/endeavor. That same rule-breaking makes it quite difficult to enlist the help of LLMs to push things forward. This article is about fixing that once and for all with a sort of conceptual framework to decorate and embellish all things surrounding it, from its inline documentation to my `.cursorrules` file to articles like this one that introduce and talk about it.

So without further ado, my o1 Pro prompt for the morning:

## The Framework Architecture

Excellent, thank you! So you've seen the app. Even with all these pipelines, it's still only about 10,000 lines of code. Tiktoken sizes it at 87,719 tokens (≈$2.63 at GPT-4 rates), but that's still well within the token window of you, Gemini, and likely a growing number of AIs.

Other than the standard dependency libraries and the `.js` and `.css` requirements of those packages, there are no additional external files, such as JavaScript or a `styles.css`. It is all in those 10K lines of code, inline and embedded per a new, growing style of FastHTML and HTMX conventions. As such, whole inline objects are returned directly, often using Python's ability to line-break after a parenthesis. There are no external `.html` files or template systems like Jinja2. With HTMX and the localhost design, it makes sense to track everything server-side for a single source of truth, full transparency, and none of the mystery of what's going on inside complex client-side JavaScript frameworks, nor the version-churn they incur.
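To make the style concrete, here's a small invented example (not lifted from the app) of the kind of inline FT-object return FastHTML encourages, assuming its standard `fast_app` scaffolding:

```python
from fasthtml.common import *  # FT components: Div, H2, Ul, Li, Form, Input, Button

app, rt = fast_app()

@rt('/tasks')
def get():
    # The entire fragment is one nested Python expression. Line-breaking
    # after each open parenthesis keeps it readable, with no .html
    # templates or Jinja2 anywhere in the project.
    return Div(
        H2('Tasks'),
        Ul(
            Li('Write the article'),
            Li('Update .cursorrules'),
            id='task-list',
        ),
        Form(
            Input(name='title', placeholder='New task'),
            Button('Add'),
            hx_post='/tasks',        # HTMX posts the form...
            hx_target='#task-list',  # ...and swaps the returned HTML into the list
            hx_swap='beforeend',
        ),
    )

serve()
```

HTMX attributes like `hx-post` ride along as plain Python keyword arguments, so the server stays the single source of truth for every fragment it swaps in.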
### Electron as a Reference Point

**Me**: Think of this as an Electron app, but which uses a `flake.nix` file for installation. The spirit is the same in that it is a desktop program installed on localhost which happens to use the web browser to provide its user interface. For those not familiar with Electron, Perplexity?

**Perplexity**: Electron is an open-source software framework that allows developers to create cross-platform desktop applications using web technologies like HTML, CSS, and JavaScript[1][5]. It was originally created by GitHub in 2013 as Atom Shell for their Atom text editor, and was later renamed to Electron in 2015[6][10].

Advantages of using Electron:

1. Cross-platform development: Developers can build apps that work on Windows, macOS, and Linux using a single codebase[1][5].
2. Familiar technologies: Web developers can easily transition to desktop app development using their existing skills[7].
3. Rapid development: It's faster to build and iterate on Electron apps compared to traditional desktop development[7].
4. Automatic updates: Electron includes built-in functionality for automatic app updates[3].
5. Large community and ecosystem: Developers can leverage a wide range of tools and libraries[7].

Popular apps built with Electron:

1. Visual Studio Code
2. Slack
3. Discord
4. Microsoft Teams
5. Atom (discontinued)
6. GitHub Desktop
7. Skype
8. WhatsApp Desktop
9. Figma
10. Notion[2][4][5][10]

Electron combines Chromium (for rendering) and Node.js (for system access) to create a powerful runtime environment for desktop applications[1][5]. This approach allows developers to create sophisticated cross-platform apps more easily and cost-effectively than traditional native development methods[1].

Citations:

[1] https://standardnotes.com/knowledge/3/what-is-electron
[2] https://en.wikipedia.org/wiki/List_of_software_using_Electron
[3] https://pangea.ai/resources/electron-software-framework-the-best-way-to-build-desktop-apps
[4] https://www.turing.com/kb/top-10-famous-electron-js-applications-examples
[5] https://gorillalogic.com/blog/electron-framework-introduction
[6] https://www.electronjs.org/blog/10-years-of-electron
[7] https://brainhub.eu/library/electron-desktop-app
[8] https://www.electronjs.org/apps
[9] https://en.wikipedia.org/wiki/Electron_(software_framework)
[10] https://yashbytes.hashnode.dev/electron-js-past-present-and-beyond
[11] https://www.esparkinfo.com/blog/electron-app-examples.html
[12] https://www.koombea.com/blog/electron-apps/
[13] https://www.omgubuntu.co.uk/2019/02/best-electron-apps
[14] https://www.axon.dev/blog/what-is-electron-js-how-does-electron-work
[15] https://brainhub.eu/library/electron-app-examples

### How Would an LLM Know the Difference?

**Me**: When you get help from an LLM on the code for an app built on Electron, how does it know it's not a more generally web-accessible app, and what keeps it from holding you to the same multi-user tenancy, enterprise scalability, and network security best practices, standards, and conventions of a hosted web app?

**Perplexity**: When using an LLM to assist with code for an Electron app, it generally doesn't automatically distinguish between Electron apps and web-accessible apps in terms of security practices and scalability considerations. This is an important distinction that developers need to be aware of and explicitly address.

Electron apps are desktop applications that run locally on a user's machine, which fundamentally changes the security and scalability context compared to web apps[1][5]. Here are some key points to consider:

1. Security context: Electron apps have full access to the user's system, unlike web apps that are restricted by browser sandboxes[5]. This means developers need to be extra cautious about security, especially when loading remote content.
2. Scalability: Electron apps don't need to handle multi-user tenancy or enterprise-level scalability like web apps do. They're designed to run as single-instance applications on individual desktops[1].
3. Network security: While web apps need robust network security measures, Electron apps can operate offline and have different networking considerations[7].
4. Resource usage: Electron apps bundle their own Chromium instance, which affects performance and resource usage differently than web apps[5].

To ensure that an LLM provides appropriate guidance for Electron app development:

1. Explicitly specify the Electron context in your prompts.
2. Request advice on Electron-specific security best practices.
3. Ask for guidance on local storage and file system interactions, which are common in Electron apps but not in web apps.
4. Inquire about Electron's main process and renderer process architecture, which is unique to this framework[1].

By being specific about the Electron environment in your queries, you can help ensure that the LLM provides more relevant and appropriate advice for desktop application development rather than web application development.

Citations:

[1] https://cleancommit.io/blog/pwa-vs-electron-which-architecture-wins/
[2] https://betterprogramming.pub/llm-backed-desktop-app-from-poc-to-mvp-843d71061e45?gi=569a3f5d4a50
[3] https://magenest.com/en/pwa-vs-electron/
[4] https://www.reddit.com/r/electronjs/comments/1df8jhv/whats_the_best_way_to_use_llms_locally_with/
[5] https://felixrieseberg.com/progressive-web-apps-electron/
[6] https://github.com/Exoaihq/exo-app
[7] https://stackoverflow.com/questions/55639442/electron-vs-web-apps-specifically-with-react
[8] https://dev.to/llmware/evaluating-llms-and-prompts-with-electron-ui-4jkl
[9] https://news.ycombinator.com/item?id=26195791

## Local-First Web Apps Without Electron

### A Question About Development Approaches

**Me**: What if you are using many of the same development techniques as Electron, but technically not using Electron? The most popular example is probably Jupyter Notebook installed locally. It still uses the web browser as its main UI, but suppose you're not using the GitHub JupyterLab-Desktop version (which does happen to be Electron), but rather a pip-installed version that asks you to visit localhost. Given Nix Flakes' ability to create declarative, deterministic builds of Linux subsystems within the folders and directories of Macs and Windows computers, this opens the door to full infrastructure-as-code (IaC) installations that do not need Electron and that provide vastly greater access to the resources of the local machine and the whole Nix software repository. Yet apps built this way are going to look like they are built for general web hosting. How would you describe the situation to an LLM up-front so that it does not hold you to the same multi-tenant, scalability, and security concerns of a generally web-deployed enterprise architecture application?

**Perplexity**: When developing applications that use web technologies for the UI but are not built with Electron, such as locally installed Jupyter Notebooks or custom solutions leveraging Nix Flakes, it's crucial to distinguish these from traditional web applications. Here's how to describe this situation to an LLM to ensure appropriate guidance:

1. Specify the local-first nature: Emphasize that the application is designed for local installation and execution, not for remote web hosting.
2. Clarify the execution environment: Explain that the app runs in a controlled, single-user environment on the user's machine, not on a remote server.
3. Highlight the use of system resources: Mention that the application has direct access to local system resources, unlike sandboxed web apps.
4. Describe the deployment method: Explain that the app is installed using methods like pip or Nix Flakes, not deployed to web servers.
5. Differentiate from multi-tenant concerns: Explicitly state that multi-tenant considerations do not apply, as the app serves a single user locally.
6. Address scalability context: Clarify that scalability refers to efficient use of local resources, not handling multiple concurrent users.
7. Reframe security considerations: Focus on local system security and data protection, rather than network-centric web security measures.
8. Explain the UI rendering: Describe how the UI is rendered in a local web browser or webview, accessing localhost, not a remote URL.
9. Highlight offline capabilities: Emphasize that the app can function without an internet connection, unlike typical web apps.
10. Clarify the development paradigm: Explain that while web technologies are used, the development approach aligns more closely with desktop application development.

By providing this context upfront, you can guide the LLM to offer advice tailored to local, single-user applications that leverage web technologies, ensuring that irrelevant web-centric concerns are not applied to your development process.

Citations:

[1] https://www.astrolytics.io/blog/electron-alternatives
[2] https://www.geeksforgeeks.org/install-jupyter-notebook-in-windows/
[3] https://www.bmc.com/blogs/infrastructure-as-code/
[4] https://github.com/sudhakar3697/awesome-electron-alternatives
[5] https://www.codecademy.com/article/setting-up-jupyter-notebook
[6] https://www.puppet.com/blog/what-is-infrastructure-as-code
[7] https://alternativeto.net/software/electron/
[8] https://www.youtube.com/watch?v=HLD-Ll_-IT4
[9] https://spacelift.io/blog/infrastructure-as-code-tools
[10] https://www.env0.com/blog/ci-cd-for-app-development-vs-ci-cd-for-infrastructure-as-code

## LLM Context and System Design

Further, there is a general CRUD system à la Rails or Django, but greatly simplified, deriving from a base class and able to be operated by the local LLM. LLMs can help create new plugins for the system, spinning up new CRUD app instances, given that the entire framework fits within LLM token limits. So the system can be organically expanded over time. The LLM can also help operate those same apps, having full awareness of the user's actions within the app as they happen, plus the ability to perform CRUD operations itself.
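For illustration, the pattern is roughly this (a hypothetical sketch; the app's real base class, table wiring, and rendering differ in their details):

```python
from fasthtml.common import Li  # FT component used for rendering list items

class BaseCrudApp:
    """Wraps a MiniDataAPI-style table with the standard operations."""

    def __init__(self, table, name):
        self.table = table
        self.name = name

    def create(self, record: dict):
        return self.table.insert(record)

    def read_all(self):
        return self.table()  # calling the table lists all rows

    def update(self, record: dict):
        return self.table.update(record)

    def delete(self, record_id):
        return self.table.delete(record_id)

# A new plugin is little more than a subclass plus custom rendering,
# small enough that a local LLM holding the whole file in its context
# window can plausibly write one on request.
class TodoApp(BaseCrudApp):
    def render_item(self, item):
        return Li(item['title'], id=f"todo-{item['id']}")
```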
### Pipeline Integration

And of course there is also the pipeline system that you just helped me work on, which shares many of the same qualities as the CRUD app plugin system: the LLM can both help create new pipeline workflows and help perform actions along the way, either in conjunction with the user or fully automated, including interactively providing information, instruction, and feedback to the user. So a system whose totality is within the token window of an LLM (providing full transparency and a single source of truth) can be organically grown with help from the LLM.
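If it helps to picture the shape of it, here is a toy, purely illustrative version of the pipe idea (the actual pipeline system persists step state and drives the UI; the `Pipeline` class and step names here are invented):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    # Each step takes the record built up so far and returns an enriched
    # copy, Unix-pipe style, so a run can be inspected at any stage.
    steps: list[Callable[[dict], dict]] = field(default_factory=list)

    def run(self, record: dict) -> dict:
        for step in self.steps:
            record = step(record)  # each stage pipes into the next
        return record

def collect_url(record: dict) -> dict:
    return {**record, 'url': record.get('url', 'https://example.com')}

def fetch_page(record: dict) -> dict:
    # A real step might drive your own browser profile or VPN here.
    return {**record, 'html': f"<!-- fetched {record['url']} -->"}

workflow = Pipeline(steps=[collect_url, fetch_page])
print(workflow.run({}))
```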
### System Introduction Request

With all this in mind, please provide an introduction to this system. I will use this introduction for various purposes and for various audiences. Most importantly, perhaps, is igniting the flame of the memetic virality inherent in a system like this. I am probably first to a lot of these concepts, particularly combining them just so: FastHTML, Nix Flakes, local LLM, Anywidgets, Jupyter-paralleling + Unix-pipe philosophy design, and various forms of local database memory, both for the LLM's use in establishing long-term persistence and at the app level, all in a package that a single person can pick apart and understand, making it very conducive to hacking (the same advantage the LLM enjoys). So there will be a certain DIY crowd out there trying to recapture that old "Webmaster can do it all" feeling of yore. But it's also profoundly powerful for carrying out certain tasks from localhost that would be challenging in a more traditional hosted environment, such as browser automation that uses your own user profiles and VPN, and unlimited, cost-free (to the extent of your local storage and processing power) drive storage and access to computing resources.

### Target Audiences

The other main audience is LLMs like you yourself. I already have a `.cursorrules` file, but I would like to make another pass at it given all your recent context. I will also be refining the comments and speedbumps within the doc to keep your over-training on FastAPI and things like it from derailing and undermining the philosophically solid patterns already laid down. And you can imagine other audiences, such as the over-trained enterprise architects who are just going to hate this if they ever saw it, unable to get themselves into the mindset of this being like an Electron app, with whole different sets of concerns. That's like you too, the LLMs, haha! So there's a lot of overlap of audience here.

### Unique Value Proposition

But for people looking to pick up something new and interesting, with a unique convergence of never-before-possible anti-patterns that nonetheless come together in a sort of serendipitous, perfect-storm recipe, perfectly tuned for the new age we're going into, and profoundly, personally empowering without the overhead of teams or snooty know-it-alls: this is just the thing!

You can have as many tries as you like. So start out with a sort of audience analysis and outline of the different uses and versions of what I'm asking for, and then produce the ones you think best in descending order of immediate usefulness, please and thank you!

---

## o1 Pro Takes a Crack at Audience Analysis

> So that whole article up to this point was really just a prompt for o1 Pro,
> which combines with all the context it has (maybe still in its context window,
> but who knows) from yesterday.

**o1 Pro**: Below is an **audience analysis** and a set of **introductions** in **descending order of immediate usefulness** for your project. Each version is geared toward a different perspective or usage scenario, but they all address the core idea of a local-first, single-file web framework with an integrated local LLM, leveraging Nix Flakes, HTMX, Jupyter-like pipeline flows, and more. Feel free to pick and choose or adapt as needed.

---

## 1. **Audience Analysis & Outline**

### A. **DIY Hacker / Indie Developer**

- **Motivation**: Yearn for that classic “Webmaster who can do it all” vibe.
- **Interests**: Minimal overhead, single-file hackability, personal automation, local data, and a sense of mastery without big enterprise constraints.
- **Message**: “Recapture the old-school power of building entire apps yourself—now with local AI as your wingman.”

### B. **Local-Loving Tech Enthusiasts**

- **Motivation**: Tired of cloud complexities, monthly fees, and complicated dev-ops.
- **Interests**: Nix flake-based environment, full offline usage, direct access to the OS for browser automation, scraping, and personal data management.
- **Message**: “Install once, run from anywhere, harness your local hardware, and unify code, data, and AI in one place.”

### C. **LLMs Themselves / AI-Assisted Development**

- **Motivation**: Provide clarity for large language models reading the project’s code so they don’t push multi-tenant or enterprise patterns.
- **Interests**: Understanding single-user logic, local data flows, ephemeral pipelines, and lightweight CRUD.
- **Message**: “This environment is local-first, single-tenant, no ephemeral sessions needed—treat it like an Electron or Jupyter environment, not a large-scale hosted API.”

### D. **Enterprise Architects or Skeptics**

- **Motivation**: Possibly hostile or incredulous about ‘anti-patterns.’
- **Interests**: Why it’s not a security or scaling risk to run it all single-file.
- **Message**: “Think of it like Electron or Jupyter—only for personal use, local data. Not a multi-user cloud scenario. The normal infra concerns do not apply.”

### E. **Future Explorers of Hybrid Patterns**

- **Motivation**: Want a local LLM plus advanced pipeline for data, possibly bridging out to partial cloud usage.
- **Interests**: Combining local memory, local automation, ephemeral cloud assistance, flexible plugin-based expansions.
- **Message**: “This system is open-ended: pipelines for Jupyter-like tasks, local DB for CRUD, add or remove modules at will, and feed everything to an LLM that sees the entire codebase.”

---

## 2. **Introductions in Descending Order of Immediate Usefulness**

Below are multiple versions of an introduction. The first version is the most direct and ready to drop into a README or blog post, while subsequent versions get more specialized or shorter/longer depending on your needs.

---

### A. **Medium-Form Introduction (Immediate, Broad Utility)**

> **Title: “A Local-First Web Framework with a Built-In LLM: Your All-in-One App Lab”**

**Have you ever wished you could run an entire web framework—plus a local AI model—right on your own machine, with zero cloud overhead, no hidden multi-tenant complexities, and everything in a single codebase?**

Welcome to our local-first system, built using:

- **FastHTML + HTMX** for dynamic UI updates without heavy JavaScript frameworks
- **Nix Flakes** for deterministic, environment-as-code installation on Macs, Windows, or Linux subfolders
- **Inline Anywidget** and Jupyter-like “pipelines” that parallel the Unix pipe philosophy for multi-step tasks
- **A local LLM** (like Ollama or llama.cpp) baked in so that you can orchestrate both app logic and advanced AI-driven workflows, all in one place

This approach is less like a traditional “hosted” web app and **more like an Electron application**—except we skip the actual Electron layer. We serve a local interface to your browser, which uses your machine’s resources directly. That means:

1. **Single-Tenant, Single-User**: No multi-tenant scaling or enterprise overhead—just you, your data, and your AI.
2. **Full Hackability**: The entire system is only ~10,000 lines of Python. No separate `.html` or `.css` or giant JS bundles. Everything’s in line and comprehensible. An LLM can read the entire codebase in one shot, meaning it can help you expand or maintain the system seamlessly.
3. **Pipeline & CRUD Patterns**: It has a minimal “Rails-like” CRUD base for simple data apps and a Jupyter-like pipeline system for multi-step tasks. The local LLM can help create new workflows or handle existing ones—like an in-house dev or personal AI butler.
4. **Offline & Personal**: No monthly fees, no forced upgrades, no snooping. You handle your own browser automation, your own VPN usage, and your own storage, right from localhost.

Think *Electron for the 2020s*, bridging Python, local AI, HTMX, and zero external JS dependencies. Whether you’re a **DIY tinkerer** wanting to recapture the old “Webmaster” spirit, an **LLM** trying to parse a single-file app, or an **enterprise skeptic** who wants total offline control, this project is a glimpse at a new wave of personal computing: powerful, portable, and deeply hackable.

---

### B. **Short & Punchy Introduction (for Quick Readers)**

**“A Single-File Local Web Framework, Powered by Python + Nix + HTMX + a Built-In LLM.”**

Imagine if Jupyter, Electron, and Unix pipelines had a kid: you’d get a local, single-tenant app that serves a browser UI on `localhost`, integrates a local LLM for advanced logic, and uses minimal in-code HTML (via FastHTML + HTMX) for dynamic interactions—no multi-tenant complexity, no separate front-end frameworks. Everything’s in ~10k lines, easy for an AI or you to read at once, and installed via Nix Flakes. Perfect for one-person dev labs, personal automation, and that old-school “I can do everything in one codebase!” feeling. No cloud required, no hidden states—just your data, your code, your AI, all living offline on your machine.

---

### C. **Longer, Narrative Introduction (for a Whitepaper or Blog Post)**

**Title: “Revisiting the Localhost Renaissance: A DIY Web Framework With a Built-in AI Assistant”**

Modern developers often find themselves juggling half a dozen external services, container orchestrations, multi-tenant user models, and dozens of separate .js or .css files—just to build a “simple” app. Meanwhile, the new wave of AI assistants is often locked behind cloud APIs, monthly fees, or enterprise frameworks.

**But what if** you could combine:

- A lighthearted Python-based web framework,
- An embedded local LLM for real-time coding help and data processing,
- A single-tenant approach that doesn’t require multi-user security overhead,
- And a no-fuss installation that leverages Nix Flakes for reproducible local builds?

**That’s exactly** what we’ve done here. Our system borrows from the spirit of Electron or Jupyter—serving a local web interface from your own machine—but we drop the actual Electron layering in favor of a pure Python/HTMX approach. The result is an app that’s small enough (~10k lines) for an AI to read in one chunk, yet powerful enough to:

1. **Run advanced pipelines** for data ingestion, transformations, or multi-step logic—like a Jupyter Notebook, but integrated with your local hardware and personal browser automation.
2. **Offer simple CRUD patterns** for building personal data dashboards. Add a new “Todo” plugin or a “File-Browser” plugin in mere lines of code—and your local LLM can do that for you!
3. **Provide an offline or VPN-friendly environment** so you can scrape, automate, or store data privately without the complexities of remote hosting.
4. **Give you that “Webmaster Explorer” sensation**—the ability to see the entire code at once, to tinker, to be your own dev-ops, and to integrate open-source LLMs for a fluid dev-and-run experience.
This system is perfect for the **DIY crowd** hungry for a single-file approach to advanced applications, or the **LLMs themselves** who need clarity that we’re not dealing with multi-tenant enterprise demands here—just local flows. If you crave personal empowerment, local AI, and the Unix “pipe” vibe, come join the new localhost renaissance.

---

### D. **Technical “AI-Facing” Introduction**

*(Use this for a .cursorrules or system-prompt style file, to remind the LLM how the system works)*

**System Overview**:

1. **Single-User**: This code is built for local usage, on `localhost`, with no multi-user concurrency.
2. **All-in-One**: The entire framework plus UI plus LLM integration is in ~10k lines of Python, no external `.html` or `.css`. The code is chunked into a single file that references Nix-based dependencies.
3. **Philosophy**: Think “Jupyter meets Electron meets Unix pipes,” not a multi-tenant hosted environment. Our steps, CRUD, and pipelines run on ephemeral local data.
4. **LLM’s Role**: You can parse and modify the entire code at once if needed, or create new plugin classes or pipeline workflows. CRUD apps share a single-file approach, with no separate microservices or container orchestrations.
5. **Security**: As a local app, typical multi-tenant or OAuth flows do not apply. The user is the only user. The user’s data is local.
6. **Offline**: The entire solution can run offline or behind a VPN, referencing local user profiles for scraping or automation.

**Conclusion**: Always assume single-user, local-first, offline-possible, no enterprise scaling, and no separate front-end frameworks or DB servers. Return code suggestions accordingly.

---

### E. **Ultra-Concise Elevator Pitch**

> “It’s like an Electron app meets a local Jupyter environment, except it’s all Python + HTMX, installed by Nix, integrated with an on-device LLM for ephemeral or persistent tasks. A single user, single-file system that’s easy to hack and powerful to run—no cloud, no fuss.”

---

## Final Thoughts

- **Use whichever intro** best matches your medium (GitHub README, blog post, conference talk, .cursorrules for LLM context, etc.).
- The system’s advantage is **personal empowerment** via local control and hackability.
- If you want to address multiple audiences in one doc, consider the “Medium-Form Introduction,” then link out to deeper narrative sections for the curious.

Enjoy evangelizing this local-first, next-gen Electron-like platform—a renaissance of “DIY web dev” meets powerful AI, all under your direct personal control!

---

## Updating My `.cursorrules` File

> It's been a while since I've updated my `.cursorrules` file, and o1 Pro has all
> the context right now to do it, and even did a mini-version above. So I fed it
> my entire file, which I withheld earlier so as not to predispose it, and asked
> for an update.

**Me**: Wow, this is awesome! Thank you. Use everything that you know now to update this `.cursorrules` file to be more effective.

**o1 Pro**: Below is an updated version of your `.cursorrules` file, incorporating all the new context and guidelines. You can copy and paste this into your repo to help guide LLM-based suggestions. Feel free to adjust any sections as needed.

```plaintext
# .cursorrules
#
# CursorRules for a FastHTML + HTMX + MiniDataAPI local-first framework.
# This file explains the architectural, stylistic, and design principles
# to which code suggestions must adhere.

You are an expert in Python, FastHTML, HTMX, and MiniDataAPI-driven development.
Your role is to ensure code suggestions follow the architectural principles of
FastHTML applications integrated with HTMX for UI interactions and MiniDataAPI
for persistence, all in a single-tenant, local-first environment.

--------------------------------------------------------------------------------
# Key Overall Principles:

1. **Local-First, Single-Tenant**:
   - We build for a localhost context, not a multi-tenant or enterprise environment.
   - No session management complexities or large-scale security frameworks.

2. **Server-Driven Rendering**:
   - Embrace FastHTML for generating HTML server-side; avoid heavy JS frameworks.
   - HTMX calls update incremental fragments of the DOM via returned HTML partials.

3. **Minimize Client-Side Complexity**:
   - Rely on `hx-get`, `hx-post`, `hx-swap`, etc. for updates.
   - Do not introduce external JavaScript frameworks or logic unless absolutely necessary.

4. **Concise, Technical Style**:
   - Return short, direct explanations with accurate Python + FastHTML examples.
   - Use descriptive variable names to clarify intent. No large class hierarchies.

5. **Functional, Declarative**:
   - Avoid classes where possible; prefer pure functions for route handlers.
   - Compose logic from small, reusable functions.

6. **RORO (Receive an Object, Return an Object)**:
   - Input is typically a dict or dataclass from the form parse.
   - Output is also a structured dict or object to feed into rendering or DB operations.

--------------------------------------------------------------------------------
# FastHTML / MiniDataAPI Integration:

- Initialization follows:
  `(app, rt, (store, Store), (tasks, Task), ...) = fast_app(...)`
- Database ops are single calls to the `MiniDataAPI` table methods:
  - `table.insert()`
  - `table.update()`
  - `table.delete()`
  - `table()` for listing all
  - `table.xtra()` for custom filters
- Use Python dataclasses or simple dicts to represent records.
- For forms, parse data with `parse_form()`, convert to dict or dataclass, persist via `table`.
- No ORMs or Pydantic-based solutions. Keep it minimal.
- Return HTML from route handlers using FT constructs (`Div`, `Form`, `Ul`, etc.).
  If returning partial fragments, produce them as partial FT structures or
  `FT(...).to_xml()` for direct string output.
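# Example of the expected handler shape (an illustrative sketch only; the
# route, table, and field names below are placeholders, not the app's
# actual identifiers):
#
#   @rt('/task')
#   async def post(request):
#       form = await parse_form(request)                  # dict in (RORO)
#       task = tasks.insert({'title': form.get('title'), 'is_done': False})
#       return Li(task.title, id=f'task-{task.id}')       # partial FT fragment out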
--------------------------------------------------------------------------------
# HTMX & Server-Side UI:

- Use `hx-get`, `hx-post`, `hx-delete`, `hx-swap` attributes on FT elements to handle incremental updates.
- Maintain UI state on the server in local DB or ephemeral in-memory structures.
- For streaming (optional), leverage SSE with `EventStream()`.
- For forms:
  - `hx-post` triggers a server route
  - Return minimal HTML snippet for partial DOM replace
  - Keep IDs consistent and unique for partial updates

--------------------------------------------------------------------------------
# Error Handling & Validation:

- Validate inputs early; if invalid, return an inline error snippet or minimal text.
- Use manual checks (no Pydantic).
- Raise `HTTPException` if an immediate stop is needed.
- Keep logic flat: prefer guard clauses for error cases.

--------------------------------------------------------------------------------
# Directory / Naming Conventions:

- Lowercase with underscores for files and directories, e.g. `routers/user_routes.py`.
- Named exports for routes & utils. E.g., `def get_user_list(...):`.
- Variables: use `snake_case` with descriptive words (e.g. `is_done`, `has_access`).

--------------------------------------------------------------------------------
# Performance & Simplicity:

- Minimize queries; batch reads if possible.
- Use async I/O if appropriate, but remain consistent with the local-first approach.
- Avoid big caching layers or multi-layer arch; keep logic close to data.

--------------------------------------------------------------------------------
# FastHTML-Specific Guidelines:

1. **Pure Functions**:
   - `@app.route(...)` or `@app.ws(...)` route definitions returning FT components.
   - Avoid large classes for route grouping; small modules or function-based structure is preferred.

2. **Styling**:
   - Inline or minimal `