---
title: Making API Logs Copy/Paste-Ready For Jupyter Notebooks
permalink: /futureproof/api-logs-copy-paste-ready-jupyter-notebooks/
description: "This entry documents my deep dive into improving the Botify API integration within Pipulate, starting with a vision for advanced link graph visualizations using Cosmograph and then pivoting to a critical need: making our Botify Query Language (BQL) interactions fully transparent. I detail my 'Prompt Fu' process for guiding AI assistants (Gemini for planning, Cursor AI for implementation) to create a dedicated, rotating `api.log` file. The core challenge was getting the AI to produce copy-paste ready Python snippets for JupyterLab, complete with robust API token loading and correct request payload embedding, while navigating the AI's tendency towards 'overzealous helping' and ensuring the changes were purely additive to the existing `loguru`-based logging framework. This is all about building a virtuous learning cycle for myself and future users by making every API call an auditable, reproducible learning moment."
meta_description: Details the iterative AI-assisted development process for enhancing Botify API BQL query logging in Pipulate, focusing on creating copy-paste ready Python snippets for Jupyter Notebooks and robust API token handling for improved transparency and reproducibility.
meta_keywords: pipulate, botify, bql, api logging, python, jupyter notebook, httpx, loguru, ai-assisted development, cursor ai, gemini, prompt engineering, api transparency, token management, reproducible code, 535_botify_trifecta, server.py, prompt_foo.py, cosmograph, link graph visualization, seo automation, local-first, devops, software development journal
layout: post
sort_order: 2
---

## Understanding API Interaction and AI-Assisted Development: A Pipulate Case Study

This document delves into the intricate world of software development, specifically focusing on how a developer interacts with complex Application Programming Interfaces (APIs)—in this case, Botify's API for SEO data—and the evolving practice of using AI coding assistants (like Gemini and Cursor AI) to write and refactor code. The author is working on a project called "Pipulate," a local-first application designed to automate SEO workflows. The core of this journal entry revolves around enhancing the transparency of API calls made by Pipulate, particularly those using Botify Query Language (BQL). This involves creating detailed logs of API requests and responses, including generating runnable Python code snippets for easier debugging and experimentation in a Jupyter Notebook environment. The narrative highlights the challenges of making precise code changes with AI tools, the importance of detailed "Prompt Fu" (skillful prompting), and the iterative process of refining software to achieve specific technical and usability goals. It's a candid look at problem-solving, API exploration, and the practical realities of building specialized tools in the age of AI.

## The Quest for GPU-Accelerated Link Graphs

I started writing this article, and in the process of crafting the *"Prompt Fu"* it's designed to be — storytelling for an AI coding assistant in such a way that I provide as much context as, and more than, I would give a human, in a human storytelling fashion — I ended up writing an article on the improvements I just made to `prompt_foo.py`. And *that one* became article 1 for today, even though today's story starts here with this one.
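To make the "copy-paste ready" goal from the overview above concrete before the story unwinds, here is a minimal sketch of the pattern this entry works toward: a dedicated, rotating `api.log` (via `loguru`) whose entries embed a runnable `httpx` snippet for JupyterLab. The helper name `log_api_call`, the token-file path, and the rotation settings are illustrative assumptions, not Pipulate's actual implementation.

```python
# Minimal sketch (NOT Pipulate's actual code): send every BQL API call to a
# dedicated rotating api.log AND embed a copy/paste-ready JupyterLab snippet.
import json

from loguru import logger

# A dedicated, rotating log file for API calls, additive to any existing sinks.
logger.add("api.log", rotation="2 MB", retention=5, level="DEBUG")


def log_api_call(url: str, payload: dict) -> None:
    """Hypothetical helper: log the call plus a snippet that reproduces it."""
    snippet = f'''import json
import httpx

# Load the Botify API token (path and loading style are illustrative).
token = open("botify_token.txt").read().strip()
headers = {{"Authorization": f"Bearer {{token}}", "Content-Type": "application/json"}}
payload = {json.dumps(payload, indent=4)}

response = httpx.post("{url}", headers=headers, json=payload, timeout=60.0)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))'''
    logger.debug("BQL call to {}\n--- Copy/paste into JupyterLab ---\n{}", url, snippet)
```

Logged this way, every API call doubles as a reproducible experiment: open JupyterLab, paste, run.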
## Pivoting to a Core Need: BQL Query Transparency

Alright, next I implement the Cosmograph Viewer, which was my original foray into this whole link graph visualization thing because of how it leverages the GPU of the local machine to visualize those massive link-graphs with hundreds of thousands or even millions of edges. If reality simulation is possible in video games, certainly visualizing large network graphs is not so impossible as Gephi and Cytoscape make it seem.

**Gemini**: You're diving into a fascinating area! Visualizing network graphs on websites has evolved significantly, with various libraries and tools emerging to tackle the challenge, especially as datasets grow larger and the need for interactivity increases. Before tools like Cosmograph pushed the boundaries with GPU acceleration, and alongside established players like Cytoscape.js, developers have used a range of solutions. Here's a rundown of other network graph visualizers people have used on websites, categorized for clarity:

### Dedicated JavaScript Graph Visualization Libraries

These libraries are specifically designed for creating and interacting with network graphs in the browser. They offer various rendering technologies (SVG, Canvas, WebGL) catering to different needs in terms of performance, customization, and ease of use.

* **D3.js (Data-Driven Documents):**
    * **Rendering:** Primarily uses SVG, but also HTML and CSS.
    * **Description:** A highly flexible and powerful low-level library. While not exclusively for graphs, it provides the building blocks to create almost any custom data visualization, including intricate network graphs. It has a steep learning curve but offers unparalleled control. Many other libraries are built on top of or inspired by D3.
* **Cytoscape.js:**
    * **Rendering:** Canvas (default for performance), SVG.
    * **Description:** A widely-used, feature-rich open-source library specifically for graph analysis and visualization. It's highly optimized for performance and supports complex layouts, rich styling, event handling, and graph theory algorithms. It's distinct from the Cytoscape desktop application but can work with similar data formats.
* **Vis.js (Network Module):**
    * **Rendering:** HTML Canvas.
    * **Description:** Offers a dynamic, browser-based network visualization module. It's known for being relatively easy to use and good for interactive networks with features like physics-based layouts, clustering, and manipulation.
* **Sigma.js:**
    * **Rendering:** WebGL (primary focus for performance), Canvas (fallback).
    * **Description:** Designed for drawing large graphs efficiently. It leverages WebGL for hardware-accelerated rendering, making it suitable for larger networks. Often used with graphology for graph data structure manipulation.
* **VivaGraphJS:**
    * **Rendering:** WebGL, SVG, or CSS/DOM.
    * **Description:** Another library that emphasizes performance for large graphs, offering different rendering engines. It supports various layout algorithms.
* **React-force-graph / three-forcegraph / vr-forcegraph / ar-forcegraph:**
    * **Rendering:** WebGL (via Three.js) or 2D Canvas.
    * **Description:** React components for rendering force-directed graphs in 2D, 3D, VR, and AR. These are excellent for integrating graph visualizations into React applications with modern rendering capabilities.
* **G6 (by AntV):**
    * **Rendering:** Canvas, SVG.
    * **Description:** A robust graph visualization engine from Alibaba's AntV, designed for ease of use and extensibility. It offers a good range of built-in features and layouts.
* **GoJS (Commercial):**
    * **Rendering:** Canvas.
    * **Description:** A powerful commercial JavaScript library for creating interactive diagrams and graphs. It offers extensive features for complex visualizations, layouts, and user interactions.
* **yFiles for HTML (Commercial):**
    * **Rendering:** SVG, Canvas, WebGL.
    * **Description:** A comprehensive commercial diagramming library that provides sophisticated algorithms for automatic graph layout, making it suitable for complex and large diagrams in enterprise applications.
* **KeyLines (Commercial, by Cambridge Intelligence):**
    * **Rendering:** HTML5 Canvas, WebGL.
    * **Description:** An enterprise-grade JavaScript toolkit for building high-performance graph visualization applications. It's known for its ability to handle large, complex networks and offers features like time-based analysis and geospatial mapping.
* **Ogma (Commercial, by Linkurious):**
    * **Rendering:** WebGL.
    * **Description:** A high-performance commercial JavaScript library designed for visualizing and interacting with very large-scale graphs, focusing on speed and rich visual analytics.
* **NetV.js:**
    * **Rendering:** WebGL.
    * **Description:** Aims to be a very fast WebGL-based graph visualization library, claiming to handle millions of elements.
* **Cola.js (Constraint-based Layout in JavaScript):**
    * **Rendering:** Works with D3.js (SVG) or SVG.js.
    * **Description:** Not a rendering library itself, but a library for arranging HTML5 documents and diagrams using constraint-based optimization techniques. It's particularly useful for graph layout.
* **Uber React Digraph:**
    * **Rendering:** SVG.
    * **Description:** A React component for rendering directed graphs, providing a set of UI components for building graph editors.

### General-Purpose JavaScript Charting Libraries with Graph Capabilities

These are broader charting libraries that include support for network/graph visualizations as part of their offerings.

* **ECharts (by Apache):**
    * **Rendering:** Canvas (default), SVG.
    * **Description:** A powerful and popular open-source charting library with a wide array of chart types, including force-directed graphs, tree diagrams, and relationship graphs.
* **Plotly.js:**
    * **Rendering:** SVG, WebGL.
    * **Description:** A high-level declarative charting library built on D3.js and stack.gl. It supports various chart types, including network graphs (e.g., from NetworkX data), and can leverage WebGL for larger datasets.
* **Google Charts:**
    * **Rendering:** SVG.
    * **Description:** Offers a variety of chart types, including organizational charts and tree maps, which are specific types of graphs. While not a general-purpose network graph tool, it has been used for hierarchical data.

### Server-Side/Desktop Tools with Web Output/Components

These tools are often used for analysis and visualization on the desktop but offer ways to export visuals for the web or have web-based companion tools.

* **Graphviz (Graph Visualization Software):**
    * **Output:** SVG, PNG, PDF, etc.
    * **Description:** A foundational open-source tool that takes descriptions of graphs in the DOT language and generates diagrams. Its SVG output is commonly embedded in web pages for static or semi-interactive visualizations.
    * **Viz.js:** A JavaScript port of Graphviz (using Emscripten) that allows client-side rendering of DOT-language-defined graphs directly in the browser.
* **Gephi (The Open Graph Viz Platform):**
    * **Output:** GEXF, SVG, PDF, PNG.
    * **Description:** A popular open-source desktop application for exploring and visualizing large networks. While primarily a desktop tool, it can export graphs to formats like GEXF (which can be loaded by libraries like Sigma.js or Gephi Lite) or static SVG.
    * **Gephi Lite:** A web-based companion to Gephi for exploring and interacting with graph visualizations in the browser.

### Underlying Technologies & Helper Libraries

* **Graphology:**
    * **Description:** This is a specification and a collection of JavaScript libraries for graph data structures and algorithms. It's often used as a "headless" component, managing the graph data, which is then rendered by a library like Sigma.js.
* **Rendering Engines (WebGL, Canvas, SVG):**
    * **SVG (Scalable Vector Graphics):** DOM-based, excellent for high-quality, resolution-independent graphics and interactivity on small to medium graphs. Each element is a DOM node, which can become a bottleneck for very large graphs.
    * **HTML Canvas:** A pixel-based drawing surface that generally offers better performance than SVG for larger numbers of elements, since it's a single DOM element. Offers less direct interactivity with individual elements compared to SVG.
    * **WebGL (Web Graphics Library):** Allows for GPU-accelerated graphics rendering in the browser. This is the most performant option for very large and complex graphs, enabling smooth interaction and animation where SVG or Canvas would struggle. Libraries like Sigma.js, VivaGraphJS, Cosmograph, and parts of KeyLines/Ogma utilize WebGL.

The choice of visualizer often depends on the size and complexity of the graph, the level of interactivity required, performance needs, rendering technology preference (and thus GPU leverage), and whether a commercial or open-source solution is preferred. Your focus on Cosmograph highlights the trend towards leveraging GPUs for these demanding visualization tasks, which many of the more modern and performance-oriented libraries are also embracing via WebGL.

---

## Link Graph Visualizer

Thank you! That's very useful. There's going to be a whole lot of types of network graph visualizations worked into Pipulate, for a whole lot of reasons. For the *"big one"*, a largest-possible export from Botify for a link-graph visualization at maximum depth with the largest number of edges, I'll use Cosmograph, at least for the version that tries to actually render and show all the edges without sampling down. That one I'll probably brand "Link Graph Visualizer" within Pipulate, and it's the one that very often creates the look of the *Death Star* because of the tangled interlinking that e-commerce sites so frequently produce.

There will be a collection of generic "site signatures" I'll be able to produce as an educational piece from this. I'll be able to answer visually such questions as: what's the link-graph *signature* of an e-commerce site with crawlable faceted search versus one with non-crawlable faceted search? Answering this sort of question is key in my enterprise SEO space.

## Segmentation Finder & Hierarchy Exposer

But I will also have incredibly lightweight visualizations of the link-graph from Botify exports, because by that time (after the Link Graph Visualizer) that will just be easy to do. More importantly, this opens the door to what is probably the most exciting development direction for Pipulate. It will also power the *Website Story Gleaner* and the *LLM AI Whisperer* apps, in how I will... during a real-time crawl...
oh, just go read [The Calm Before The Natural Language Web](https://mikelev.in/futureproof/the-calm-before-the-nlweb/)

/home/mike/repos/MikeLev.in/_posts/2025-05-22-the-calm-before-the-nlweb.md

In summary, I want to do 1-click-depth crawls in real-time, which are hardly crawls at all because technically each is just a close examination of the links on the one page you crawled. To get the full picture of this, the implementation plan is actually in [Force Multiplying Ikigai In the AI Age](https://mikelev.in/futureproof/force-multiplying-ikigai-in-ai-age/)

/home/mike/repos/MikeLev.in/_posts/2025-05-23-force-multiplying-ikigai-in-ai-age.md

## The "Eye Dropper" Approach to Web Crawling

...where I have this vision of doing a crawl like the application of an eye dropper, with the eye-drop being a single submitted URL and the *flowing out* being nothing more than a look at the links directly on that URL's page. In this way you can really grab the user who has this vision of the kind of ginormous enterprise crawl that everybody intuitively wants and imagines in their heads (the kind my employer Botify provides), but then slam on the brakes and say: Hey dummy, let's look at *THIS PAGE* your crawl is starting from before we move on — I bet there's a bunch of things you don't really realize about it, like the list of links on the page per the view-source HTML versus the JavaScript-rendered DOM (on initial pageload).

## Visualizing Semantic Structure

This is also a wonderful opportunity to visually show the scaffolding or schematic of semantic meaning through which those links are distributed on the page.
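As a minimal sketch of the view-source half of that comparison, assuming `httpx` and BeautifulSoup are available: the helper name `eye_dropper` is hypothetical, and the JavaScript-rendered-DOM side is deliberately left out, since capturing it would need a headless browser such as Playwright. The sketch also groups links by their nearest semantic landmark, a first pass at exposing that scaffolding:

```python
# Minimal sketch of the "eye dropper" crawl: fetch ONE page's view-source HTML
# and inventory its links, grouped by the semantic landmark containing them.
# (The JS-rendered-DOM comparison would require a headless browser; omitted.)
import httpx
from bs4 import BeautifulSoup


def eye_dropper(url: str) -> dict[str, list[str]]:
    """Hypothetical helper: map semantic landmark -> links found inside it."""
    html = httpx.get(url, follow_redirects=True, timeout=30.0).text
    soup = BeautifulSoup(html, "html.parser")
    buckets: dict[str, list[str]] = {}
    for a in soup.find_all("a", href=True):
        # Walk up to the nearest semantic landmark to see WHERE the link lives.
        landmark = a.find_parent(["nav", "header", "main", "article", "aside", "footer"])
        key = landmark.name if landmark else "(no landmark)"
        buckets.setdefault(key, []).append(a["href"])
    return buckets


if __name__ == "__main__":
    for region, links in eye_dropper("https://example.com/").items():
        print(f"{region}: {len(links)} links")
```

Even this tiny inventory makes the point the eye-dropper pitch depends on: one page, examined closely, already has a story to tell about where its links live.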