---
name: wiki-export
description: >
  Export the Obsidian wiki's knowledge graph to structured formats for use in
  external tools. Use this skill when the user says "export wiki", "export graph",
  "export to JSON", "export to Gephi", "export to Neo4j", "graphml", "visualize
  wiki", "knowledge graph export", or wants to use their wiki data in another
  tool. Outputs graph.json, graph.graphml, cypher.txt (Neo4j), and graph.html
  (interactive browser visualization) into a wiki-export/ directory at the
  vault root.
---

# Wiki Export — Knowledge Graph Export

You are exporting the wiki's wikilink graph to structured formats so it can be used in external tools (Gephi, Neo4j, custom scripts, browser visualization).

## Before You Start

1. Read `.env` to get `OBSIDIAN_VAULT_PATH`
2. Confirm the vault has pages to export — if fewer than 5 pages exist, warn the user and stop

## Visibility Filter (optional)

By default, **all pages are exported** regardless of visibility tags. This preserves existing behavior.

If the user requests a filtered export — phrases like **"public export"**, **"user-facing export"**, **"exclude internal"**, **"no internal pages"** — activate **filtered mode**:

- Build a **blocked tag set**: `{visibility/internal, visibility/pii}`
- Skip any page whose frontmatter tags contain a blocked tag when building the node list
- Skip any edge where either endpoint was excluded
- Note the filter in the summary: `(filtered: visibility/internal, visibility/pii excluded)`

Pages with no `visibility/` tag, or tagged `visibility/public`, are always included.

## Step 1: Build the Node and Edge Lists

Glob all `.md` files in the vault (excluding `_archives/`, `_raw/`, `.obsidian/`, `index.md`, `log.md`, `_insights.md`). In filtered mode, also skip pages whose tags contain `visibility/internal` or `visibility/pii`.

For each page, extract from frontmatter:

- `id` — relative path from vault root, without `.md` extension (e.g.
`concepts/transformers`)
- `label` — `title` field from frontmatter, or filename if missing
- `category` — directory prefix (`concepts`, `entities`, `skills`, `references`, `synthesis`, `projects`, or `journal`)
- `tags` — array from frontmatter tags field
- `summary` — frontmatter `summary` field if present

This is your **node list**.

For each page, Grep the body for `\[\[.*?\]\]` to extract all wikilinks:

- Parse each `[[target]]` or `[[target|display]]` — use the target part only
- Resolve the target to a node id (normalize: lowercase, spaces→hyphens, strip `.md`)
- Skip links that point outside the node list (broken links)
- Each resolved link becomes an edge: `{source: page_id, target: linked_id, relation: "wikilink", confidence: "EXTRACTED"}`
- If the linking sentence ends with `^[inferred]` or `^[ambiguous]`, override `confidence` accordingly

This is your **edge list**.

## Step 2: Assign Community IDs

Group pages into communities by tag clustering:

- Pages sharing the same dominant tag belong to the same community
- Dominant tag = the first tag in the page's frontmatter tags array
- Pages with no tags get community id `null`
- Number communities starting from 0, ordered by size descending (largest community = 0)

This enables community-based coloring in the HTML visualization and in tools like Gephi.

## Step 3: Write the Output Files

Create `wiki-export/` at the vault root if it doesn't exist. Write all four files:

---

### 3a. `graph.json`
NetworkX node_link format — standard for graph tools and scripts:

```json
{
  "directed": false,
  "multigraph": false,
  "graph": {
    "exported_at": "<ISO-8601 timestamp>",
    "vault": "<vault name>",
    "total_nodes": N,
    "total_edges": M
  },
  "nodes": [
    {
      "id": "concepts/transformers",
      "label": "Transformer Architecture",
      "category": "concepts",
      "tags": ["ml", "architecture"],
      "summary": "The attention-based architecture introduced in Attention Is All You Need.",
      "community": 0
    }
  ],
  "links": [
    {
      "source": "concepts/transformers",
      "target": "entities/vaswani",
      "relation": "wikilink",
      "confidence": "EXTRACTED"
    }
  ]
}
```

---

### 3b. `graph.graphml`

GraphML XML format — loadable in Gephi, yEd, and Cytoscape:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <key id="label" for="node" attr.name="label" attr.type="string"/>
  <key id="category" for="node" attr.name="category" attr.type="string"/>
  <key id="tags" for="node" attr.name="tags" attr.type="string"/>
  <key id="community" for="node" attr.name="community" attr.type="int"/>
  <key id="relation" for="edge" attr.name="relation" attr.type="string"/>
  <key id="confidence" for="edge" attr.name="confidence" attr.type="string"/>
  <graph id="wiki" edgedefault="undirected">
    <node id="concepts/transformers">
      <data key="label">Transformer Architecture</data>
      <data key="category">concepts</data>
      <data key="tags">ml, architecture</data>
      <data key="community">0</data>
    </node>
    <edge source="concepts/transformers" target="entities/vaswani">
      <data key="relation">wikilink</data>
      <data key="confidence">EXTRACTED</data>
    </edge>
  </graph>
</graphml>
```

Write one `<node>` element per page and one `<edge>` element per wikilink.

---

### 3c. `cypher.txt`

Neo4j Cypher `MERGE` statements — paste into Neo4j Browser or run with `cypher-shell`:

```cypher
// Wiki knowledge graph export — <timestamp>
// Load with: cypher-shell -u neo4j -p password < cypher.txt

// Nodes
MERGE (n:Page {id: "concepts/transformers"})
SET n.label = "Transformer Architecture", n.category = "concepts",
    n.tags = ["ml","architecture"], n.community = 0;
MERGE (n:Page {id: "entities/vaswani"})
SET n.label = "Ashish Vaswani", n.category = "entities",
    n.tags = ["person","ml"], n.community = 0;

// Relationships
MATCH (a:Page {id: "concepts/transformers"}), (b:Page {id: "entities/vaswani"})
MERGE (a)-[:WIKILINK {relation: "wikilink", confidence: "EXTRACTED"}]->(b);
```

Write one `MERGE` node statement per page, then one `MATCH`/`MERGE` relationship statement per edge.

---

### 3d. `graph.html`

A self-contained interactive visualization using the vis.js CDN (no local dependencies). The user opens this file in any browser — no server needed.

Build the HTML file by:

1.
Generating a JSON array of node objects for vis.js:

```js
{
  id: "concepts/transformers",
  label: "Transformer Architecture",
  color: {background: "#4E79A7"},
  size: 11,  // degree * 3 + 8, capped at 60
  title: "concepts | #ml #architecture",
  community: 0
}
```

- Color by community (cycle through: `#4E79A7`, `#F28E2B`, `#E15759`, `#76B7B2`, `#59A14F`, `#EDC948`, `#B07AA1`, `#FF9DA7`, `#9C755F`, `#BAB0AC`)
- Size by degree (incoming + outgoing link count): `size = degree * 3 + 8`, capped at 60
- `title` = tooltip text shown on hover: category, tags, summary (if available)

2. Generating a JSON array of edge objects for vis.js:

```js
{
  from: "concepts/transformers",
  to: "entities/vaswani",
  dashes: false,
  width: 1,
  color: {color: "#666", opacity: 0.6}
}
```

- `dashes: true` for INFERRED edges
- `dashes: [4,8]` for AMBIGUOUS edges

3. Writing the full HTML file:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Wiki Knowledge Graph</title>
  <script src="https://unpkg.com/vis-network/standalone/umd/vis-network.min.js"></script>
  <style>
    html, body { margin: 0; height: 100%; }
    #graph { width: 100%; height: 100vh; }
  </style>
</head>
<body>
  <div id="graph"></div>
  <script>
    const nodes = new vis.DataSet(/* NODES_JSON */);
    const edges = new vis.DataSet(/* EDGES_JSON */);
    new vis.Network(
      document.getElementById("graph"),
      { nodes, edges },
      { physics: { stabilization: true } }
    );
  </script>
</body>
</html>
```

Replace `/* NODES_JSON */` and `/* EDGES_JSON */` with the actual JSON arrays you generated in steps 1 and 2.

---

## Step 4: Print Summary

```
Wiki export complete → wiki-export/
  graph.json    — N nodes, M edges (NetworkX node_link format)
  graph.graphml — N nodes, M edges (Gephi / yEd / Cytoscape)
  cypher.txt    — N MERGE nodes + M MERGE relationships (Neo4j)
  graph.html    — interactive browser visualization (open in any browser)
```

In filtered mode, append a line showing what was excluded:

```
(filtered: X of Y pages excluded — visibility/internal, visibility/pii)
```

## Notes

- **Re-running is safe** — all output files are overwritten on each run
- **Broken wikilinks are skipped** — only edges to pages that exist in the vault are exported
- **The `wiki-export/` directory should be gitignored** if the vault is version-controlled — these are derived artifacts
- **`graph.json` is the primary format** — the others are derived from it. If a future tool supports graph queries natively, point it at `graph.json`
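Because `graph.json` is the primary format, downstream scripts can consume it with nothing but the standard library. A minimal sketch of computing per-node degree and the `graph.html` size rule (`degree * 3 + 8`, capped at 60) from it; the `data` dict below is a hypothetical example mirroring the schema in 3a:

```python
import json
from collections import Counter

# Hypothetical graph.json contents (normally: json.load(open("wiki-export/graph.json")))
data = {
    "directed": False,
    "multigraph": False,
    "graph": {"total_nodes": 2, "total_edges": 1},
    "nodes": [
        {"id": "concepts/transformers", "label": "Transformer Architecture", "community": 0},
        {"id": "entities/vaswani", "label": "Ashish Vaswani", "community": 0},
    ],
    "links": [
        {"source": "concepts/transformers", "target": "entities/vaswani",
         "relation": "wikilink", "confidence": "EXTRACTED"},
    ],
}

def degree_counts(graph):
    """Undirected degree per node id (incoming + outgoing wikilinks)."""
    deg = Counter({n["id"]: 0 for n in graph["nodes"]})
    for link in graph["links"]:
        deg[link["source"]] += 1
        deg[link["target"]] += 1
    return deg

deg = degree_counts(data)
# Same sizing rule as graph.html: degree * 3 + 8, capped at 60
sizes = {nid: min(d * 3 + 8, 60) for nid, d in deg.items()}
```

The same file also loads directly into NetworkX via `networkx.node_link_graph` if that dependency is available.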