---
name: skill-finder
description: Analyze Claude Code session history to find repeated workflows that should become skills, plugins, agents, or CLAUDE.md entries. Use when asked to find automation opportunities, analyze usage patterns, discover repeated workflows, or figure out what to turn into skills. Differs from claude-automation-recommender, which analyzes codebases — this skill analyzes actual user behavior across all sessions.
user-invocable: true
tools: Bash, Read, Glob, Grep
---

# Skill Finder — Behavioral Analysis of Claude Code Sessions

Analyze the user's Claude Code session history (`~/.claude/history.jsonl` and `~/.claude/transcripts/*.jsonl`) to find repeated workflows worth automating.

## What This Skill Does

Unlike `claude-automation-recommender` (which looks at codebase files to suggest automations), this skill looks at **what the user actually does** — their prompts, tools used, and projects — to find patterns worth turning into skills, plugins, agents, or CLAUDE.md entries.

## Arguments

`$ARGUMENTS` — optional filters:

- No args: full analysis across all sessions
- A project name: filter to that project only
- "skills only" / "plugins only" / "agents only": focus on one recommendation type

## Workflow

### Phase 1: Extract Session Data

Read all user prompts from `~/.claude/history.jsonl` (the `display` field holds the user message and `project` holds the working directory). Also scan `~/.claude/transcripts/*.jsonl` for tool usage patterns (lines with `"type":"tool_use"` have a `tool_name` field).

Strip any system prompt wrappers (e.g., `` blocks — extract the actual user message after `ulw:` or after the last `---\n\n`).

### Phase 2: Categorize & Cluster

For each user prompt, classify it into workflow categories by keyword matching.
Use categories like:

- build_app_from_scratch (scaffold, create, new app, bootstrap, initialize)
- implement_from_ticket (linear, ticket, implement this)
- scrape_website (scrape, crawl, download contents, extract)
- write_perf_review (perf review, performance review, year end)
- debug_error (error, bug, fix, broken, failing, debug)
- git_commit_pr (commit, push, pull request, merge, branch)
- deploy_upload (upload, s3, deploy, publish)
- org_chart_people (who reports, staff engineers, org chart, direct reports)
- write_tests (test, spec, e2e, playwright, jest)
- cost_token_analysis (cost, token, usage, spend)
- design_ui (design, landing page, mockup, .pen, wireframe)
- refactor_code (refactor, clean up, simplify, reorganize)
- browser_automation (browser, navigate, click, open website)
- learn_claude_features (claude, skill, plugin, agent, mcp, hook)
- documentation (readme, doc, documentation)

Also track:

- Which projects each workflow appears in
- Total count per workflow
- Tool usage frequency across all sessions

### Phase 3: Check Existing Setup

Read what the user already has:

- `~/.claude/skills/` — existing skills
- `~/.claude/CLAUDE.md` — existing global instructions (may not exist)
- `~/.claude/plugins/installed_plugins.json` — installed plugins
- `~/.claude/settings.json` and `~/.claude/settings.local.json` — hooks, permissions

### Phase 4: Generate Recommendations

Apply this decision framework:

| Signal | Recommend |
|--------|-----------|
| Same prompt pasted into 3+ sessions verbatim | **CLAUDE.md** (make it a persistent instruction) |
| Same multi-step workflow done 5+ times manually | **Skill** (slash command) |
| A skill that multiple team members would use | **Plugin** (package for sharing) |
| A specialized persona/role used repeatedly | **Agent** (custom sub-agent) |

### Phase 5: Output Report

Format the output as:

```markdown
## Skill Finder Report

**Sessions analyzed**: X | **Prompts analyzed**: Y | **Projects**: Z

### Workflow Frequency
(table sorted by count, with project breakdown)

### Recommendations

#### CLAUDE.md (persistent instructions)
- What to add and why (things repeated across many sessions)

#### Skills (slash commands to create)
- `/skill-name` — what it does, why (you did this manually N times), example prompt it replaces

#### Agents (custom sub-agents)
- Agent name — what it does, why

#### Plugins (package for team)
- Plugin name — what skills/agents to bundle, why

### Already Automated
(list existing skills/plugins the user already has that cover a pattern)

### Quick Wins
(top 3 highest-impact recommendations, ranked by frequency × effort saved)
```

## Important Notes

- This is a **read-only analysis** — do not create any files; only output recommendations
- Focus on the **top patterns** — don't list every single message; surface the 5–10 most impactful opportunities
- Always compare against existing skills to avoid recommending what's already automated
- Show concrete examples from actual session history to justify each recommendation
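The Phase 1–2 extraction and clustering can be sketched in Python. The file path and field names (`display`, `project`) and the category keywords come from this skill; the function names, the abbreviated keyword map, and the inline sample lines are illustrative only (the real skill would read `~/.claude/history.jsonl` and use the full category list):

```python
import json
from collections import Counter

# Keyword map mirroring the Phase 2 categories (abbreviated for illustration).
CATEGORIES = {
    "debug_error": ["error", "bug", "fix", "broken", "failing", "debug"],
    "git_commit_pr": ["commit", "push", "pull request", "merge", "branch"],
    "write_tests": ["test", "spec", "e2e", "playwright", "jest"],
    "deploy_upload": ["upload", "s3", "deploy", "publish"],
}

def classify(prompt: str) -> list[str]:
    """Return every category whose keywords appear in the prompt."""
    text = prompt.lower()
    return [cat for cat, kws in CATEGORIES.items()
            if any(kw in text for kw in kws)]

def analyze(lines):
    """Count workflow categories and the projects they appear in."""
    counts = Counter()
    projects = {}
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than abort the analysis
        prompt = entry.get("display", "")
        project = entry.get("project", "unknown")
        for cat in classify(prompt):
            counts[cat] += 1
            projects.setdefault(cat, set()).add(project)
    return counts, projects

# Demo on inline sample lines shaped like history.jsonl entries:
sample = [
    json.dumps({"display": "fix the failing test in auth.py", "project": "/repo/api"}),
    json.dumps({"display": "commit and push this branch", "project": "/repo/api"}),
]
counts, projects = analyze(sample)
print(counts.most_common())
```

Note that a prompt can match several categories at once ("fix the failing test" is both `debug_error` and `write_tests`), which is usually what you want when surfacing overlapping workflows.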
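The tool-usage tally from Phase 1 follows the same shape. The `"type":"tool_use"` / `tool_name` fields come from this skill's description of transcript lines; the function name and sample lines are illustrative (the real skill would read every `~/.claude/transcripts/*.jsonl` file):

```python
import json
from collections import Counter

def tool_usage(transcript_lines):
    """Tally tool_name across lines whose type is "tool_use"."""
    counts = Counter()
    for line in transcript_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # transcripts can contain non-JSON debris
        if entry.get("type") == "tool_use" and "tool_name" in entry:
            counts[entry["tool_name"]] += 1
    return counts

# Demo on inline sample lines shaped like transcript entries:
sample_transcript = [
    '{"type":"tool_use","tool_name":"Bash"}',
    '{"type":"text","text":"hi"}',
    '{"type":"tool_use","tool_name":"Bash"}',
    '{"type":"tool_use","tool_name":"Read"}',
]
print(tool_usage(sample_transcript).most_common())
# → [('Bash', 2), ('Read', 1)]
```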