---
name: ffuf-web-fuzzing-v2
description: "FFUF (Fuzz Faster U Fool) workflow skill. Use this skill when the user needs expert guidance for ffuf web fuzzing during penetration testing, including authenticated fuzzing with raw requests, auto-calibration, and result analysis, and when the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off."
version: "0.0.1"
category: fullstack-web
tags: ["ffuf-web-fuzzing-v2", "ffuf-web-fuzzing", "ffuf", "web", "fuzzing"]
complexity: advanced
risk: caution
tools: ["codex-cli", "claude-code", "cursor", "gemini-cli", "opencode"]
source: community
author: "sickn33"
date_added: "2026-04-16"
date_updated: "2026-04-25"
---

# FFUF (Fuzz Faster U Fool) Skill

## Overview

This public intake copy packages `plugins/antigravity-awesome-skills/skills/ffuf-web-fuzzing` from `https://github.com/sickn33/antigravity-awesome-skills` into the native Omni Skills editorial shape without hiding its origin. Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses the `external_source` block in `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.

Imported source sections that did not map cleanly to the public headings are preserved below or in the support files. Notable imported sections: Core Concepts, Common Use Cases, Filtering and Matching, Rate Limiting and Timing, Output Options, Advanced Techniques.

## When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

- You are fuzzing web targets with ffuf during authorized security testing or penetration testing.
- The task involves content discovery, subdomain enumeration, parameter fuzzing, or authenticated request fuzzing.
- You need guidance on wordlists, filtering, calibration, and interpreting ffuf results efficiently.
- Use when the request clearly matches the imported source intent: expert guidance for ffuf web fuzzing during penetration testing, including authenticated fuzzing with raw requests, auto-calibration, and result analysis.
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.

## Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | `metadata.json` | Confirms repository, branch, commit, and imported path through the `external_source` block before touching the copied workflow |
| Provenance review | `ORIGIN.md` | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | `SKILL.md` | Starts with the smallest copied file that materially changes execution |
| Supporting context | `SKILL.md` | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | `## Related Skills` | Helps the operator switch to a stronger native skill when the task drifts |

## Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

1. Install ffuf if it is not already available (see the Imported: Installation notes below).
2. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
3. Read the overview and provenance files before loading any copied upstream support files.
4. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
5. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
6. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
7. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.

### Imported Workflow Notes

#### Imported: Installation

```bash
# Using Go
go install github.com/ffuf/ffuf/v2@latest

# Using Homebrew (macOS)
brew install ffuf

# Binary download
# Download from: https://github.com/ffuf/ffuf/releases/latest
```

#### Imported: Overview

FFUF is a fast web fuzzer written in Go, designed for discovering hidden content, directories, files, and subdomains, and for testing for vulnerabilities during penetration testing. It is significantly faster than traditional tools like dirb or dirbuster.

#### Imported: Core Concepts

### The FUZZ Keyword

The `FUZZ` keyword is a placeholder that gets replaced with entries from your wordlist. You can place it anywhere:

- URLs: `https://target.com/FUZZ`
- Headers: `-H "Host: FUZZ"`
- POST data: `-d "username=admin&password=FUZZ"`
- Multiple locations with custom keywords: `-w wordlist.txt:CUSTOM`, then use `CUSTOM` instead of `FUZZ`

### Multi-wordlist Modes

- **clusterbomb**: Tests all combinations (default) - cartesian product
- **pitchfork**: Iterates through wordlists in parallel (1-to-1 matching)
- **sniper**: Tests one position at a time (for multiple FUZZ positions)

## Examples

### Example 1: Ask for the upstream workflow directly

```text
Use @ffuf-web-fuzzing-v2 to handle . Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
```

**Explanation:** This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
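The clusterbomb and pitchfork modes described under Core Concepts can be sketched in plain Python. The wordlists below are hypothetical stand-ins, not real credential lists; only the pairing logic is the point.

```python
from itertools import product

# Hypothetical wordlists bound to custom keywords, as in:
#   ffuf -w users.txt:USER -w passes.txt:PASS ... -mode clusterbomb
users = ["admin", "guest"]
passwords = ["letmein", "hunter2"]

# clusterbomb (default): cartesian product - every USER with every PASS
clusterbomb = list(product(users, passwords))

# pitchfork: parallel 1-to-1 pairing - stops at the shorter wordlist
pitchfork = list(zip(users, passwords))

print(len(clusterbomb))  # 4 requests
print(len(pitchfork))    # 2 requests
```

The request-count difference matters for planning: clusterbomb grows multiplicatively with each extra wordlist, while pitchfork stays bounded by the shortest list.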
### Example 2: Ask for a provenance-grounded review

```text
Review @ffuf-web-fuzzing-v2 against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
```

**Explanation:** Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

### Example 3: Narrow the copied support files before execution

```text
Use @ffuf-web-fuzzing-v2 for . Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
```

**Explanation:** This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

### Example 4: Build a reviewer packet

```text
Review @ffuf-web-fuzzing-v2 using the copied upstream files plus provenance, then summarize any gaps before merge.
```

**Explanation:** This is useful when the PR is waiting for human review and you want a repeatable audit packet.

## Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

### Imported Operating Notes

#### Imported: Best Practices

### 1. ALWAYS Use Auto-Calibration

Use `-ac` by default for every scan. This is non-negotiable for productive pentesting:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -ac
```

### 2. Use Raw Requests for Authentication

Don't struggle with command-line flags for complex auth. Capture the full request and use `--request`:

```bash
# 1. Capture authenticated request from Burp/DevTools
# 2. Save to req.txt with FUZZ keyword in place
# 3. Run with -ac
ffuf --request req.txt -w wordlist.txt -ac -o results.json
```

### 3. Use Appropriate Wordlists

- **Directory discovery**: SecLists Discovery/Web-Content (raft-large-directories.txt, directory-list-2.3-medium.txt)
- **Subdomains**: SecLists Discovery/DNS (subdomains-top1million-5000.txt)
- **Parameters**: SecLists Discovery/Web-Content (burp-parameter-names.txt)
- **Usernames**: SecLists Usernames
- **Passwords**: SecLists Passwords
- Source: https://github.com/danielmiessler/SecLists

### 4. Rate Limiting for Stealth

Use `-rate` to avoid triggering WAF/IDS or overwhelming the server:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -rate 2 -t 10
```

### 5. Filter Strategically

- Check the default response first to identify common response sizes, status codes, or patterns
- Use `-fs` to filter by size or `-fc` to filter by status code
- Combine filters: `-fc 403,404 -fs 1234`

### 6. Save Results Appropriately

Always save results to a file for later analysis:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -o results.json -of json
```

### 7. Use Interactive Mode

Press ENTER during execution to drop into interactive mode, where you can:

- Adjust filters on the fly
- Save current results
- Restart the scan
- Manage the queue

### 8. Recursion Depth

Be careful with recursion depth to avoid getting stuck in infinite loops or overwhelming the server:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -recursion -recursion-depth 2 -maxtime-job 120
```

## Troubleshooting

### Problem: The operator skipped the imported context and answered too generically

**Symptoms:** The result ignores the upstream workflow in `plugins/antigravity-awesome-skills/skills/ffuf-web-fuzzing`, fails to mention provenance, or does not use any copied source files at all.

**Solution:** Re-open `metadata.json`, `ORIGIN.md`, and the most relevant copied upstream files. Check the `external_source` block first, then restate the provenance before continuing.

### Problem: The imported workflow feels incomplete during review

**Symptoms:** Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task.

**Solution:** Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

### Problem: The task drifted into a different specialization

**Symptoms:** The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

**Solution:** Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
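To see why the recursion warning above matters, the request volume of a recursive scan can be estimated before launch. This back-of-the-envelope sketch uses hypothetical figures (wordlist size, hits per level); only the multiplication logic is the point.

```python
# Rough request-volume estimate for a recursive ffuf run.
wordlist_size = 20_000   # hypothetical wordlist length
extensions = 4           # -e .php,.html,.txt,.pdf adds one extra pass per extension
hits_per_level = 10      # hypothetical directories discovered at each level
depth = 2                # as in -recursion-depth 2

requests_per_job = wordlist_size * (1 + extensions)

total = 0
jobs = 1
for _ in range(depth + 1):      # the initial scan plus `depth` recursive levels
    total += jobs * requests_per_job
    jobs *= hits_per_level      # every hit queues a new job at the next level

print(total)  # 11,100,000 requests under these assumptions
```

Even modest hit rates explode quickly, which is why the upstream notes pair `-recursion-depth` with `-maxtime-job`.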
### Imported Troubleshooting Notes

#### Imported: Troubleshooting

### Too Many False Positives

- Use `-ac` for auto-calibration
- Check the default response and filter by size with `-fs`
- Use regex filtering with `-fr`

### Too Slow

- Increase threads: `-t 100`
- Reduce wordlist size
- Use `-ignore-body` if you don't need response content

### Getting Blocked

- Reduce rate: `-rate 2`
- Add delays: `-p 0.5-1.5`
- Reduce threads: `-t 10`
- Randomize the User-Agent
- Use proxy rotation

### Missing Results

- Check if you're filtering too aggressively
- Use `-mc all` to see all responses
- Disable auto-calibration temporarily
- Use verbose mode `-v` to see what's happening

## Related Skills

Hand off to one of these native specializations when the work is better handled there, after this imported skill establishes context:

- `@00-andruia-consultant`
- `@00-andruia-consultant-v2`
- `@10-andruia-skill-smith`
- `@10-andruia-skill-smith-v2`

## Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| `references` | copied reference notes, guides, or background material from upstream | `references/n/a` |
| `examples` | worked examples or reusable prompts copied from upstream | `examples/n/a` |
| `scripts` | upstream helper scripts that change execution or validation | `scripts/n/a` |
| `agents` | routing or delegation notes that are genuinely part of the imported package | `agents/n/a` |
| `assets` | supporting assets or schemas copied from the source package | `assets/n/a` |

### Imported Reference Notes

#### Imported: Resources

- Official GitHub: https://github.com/ffuf/ffuf
- Wiki: https://github.com/ffuf/ffuf/wiki
- Codingo's Guide: https://codingo.io/tools/ffuf/bounty/2020/09/17/everything-you-need-to-know-about-ffuf.html
- Practice Lab: http://ffuf.me
- SecLists Wordlists: https://github.com/danielmiessler/SecLists

#### Imported: Quick Reference Card

| Task | Command Template |
|------|------------------|
| Directory Discovery | `ffuf -w wordlist.txt -u https://target.com/FUZZ -ac` |
| Subdomain Discovery | `ffuf -w subdomains.txt -u https://FUZZ.target.com -ac` |
| Parameter Fuzzing | `ffuf -w params.txt -u https://target.com/page?FUZZ=value -ac` |
| POST Data Fuzzing | `ffuf -w wordlist.txt -X POST -d "param=FUZZ" -u https://target.com/endpoint` |
| With Extensions | Add `-e .php,.html,.txt` |
| Filter Status | Add `-fc 404,403` |
| Filter Size | Add `-fs 1234` |
| Rate Limit | Add `-rate 2` |
| Save Output | Add `-o results.json` |
| Verbose | Add `-c -v` |
| Recursion | Add `-recursion -recursion-depth 2` |
| Through Proxy | Add `-x http://127.0.0.1:8080` |

#### Imported: Additional Resources

This skill includes supplementary materials in the `resources/` directory:

### Resource Files

- **WORDLISTS.md**: Comprehensive guide to SecLists wordlists, recommended lists for different scenarios, file extensions, and quick reference patterns
- **REQUEST_TEMPLATES.md**: Pre-built req.txt templates for common authentication scenarios (JWT, OAuth, session cookies, API keys, etc.) with usage examples

### Helper Script

- **ffuf_helper.py**: Python script to assist with:
  - Analyzing ffuf JSON results for anomalies and interesting findings
  - Creating req.txt template files from command-line arguments
  - Generating number-based wordlists for IDOR testing

**Helper Script Usage:**

```bash
# Analyze results to find interesting anomalies
python3 ffuf_helper.py analyze results.json

# Create authenticated request template
python3 ffuf_helper.py create-req -o req.txt -m POST -u "https://api.target.com/users" \
  -H "Authorization: Bearer TOKEN" -d '{"action":"FUZZ"}'

# Generate IDOR testing wordlist
python3 ffuf_helper.py wordlist -o ids.txt -t numbers -s 1 -e 10000
```

**When to use resources:**

- Users need wordlist recommendations → Reference WORDLISTS.md
- Users need help with authenticated requests → Reference REQUEST_TEMPLATES.md
- Users want to analyze results → Use `ffuf_helper.py analyze`
- Users need to generate req.txt → Use `ffuf_helper.py create-req`
- Users need number ranges for IDOR → Use `ffuf_helper.py wordlist`

#### Imported: Common Use Cases

### 1. Directory and File Discovery

```bash
# Basic directory fuzzing
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ

# With file extensions
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -e .php,.html,.txt,.pdf

# Colored and verbose output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -c -v

# With recursion (finds nested directories)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -recursion -recursion-depth 2
```

### 2. Subdomain Enumeration

```bash
# Virtual host discovery
ffuf -w /path/to/subdomains.txt -u https://target.com -H "Host: FUZZ.target.com" -fs 4242
# Note: -fs 4242 filters out responses of size 4242 (adjust based on the default response size)
```

### 3. Parameter Fuzzing

```bash
# GET parameter names
ffuf -w /path/to/params.txt -u https://target.com/script.php?FUZZ=test_value -fs 4242

# GET parameter values
ffuf -w /path/to/values.txt -u https://target.com/script.php?id=FUZZ -fc 401

# Multiple parameters
ffuf -w params.txt:PARAM -w values.txt:VAL -u https://target.com/?PARAM=VAL -mode clusterbomb
```

### 4. POST Data Fuzzing

```bash
# Basic POST fuzzing
ffuf -w /path/to/passwords.txt -X POST -d "username=admin&password=FUZZ" -u https://target.com/login.php -fc 401

# JSON POST data
ffuf -w entries.txt -u https://target.com/api -X POST -H "Content-Type: application/json" -d '{"name": "FUZZ", "key": "value"}' -fr "error"

# Fuzzing multiple POST fields
ffuf -w users.txt:USER -w passes.txt:PASS -X POST -d "username=USER&password=PASS" -u https://target.com/login -mode pitchfork
```

### 5. Header Fuzzing

```bash
# Custom headers
ffuf -w /path/to/wordlist.txt -u https://target.com -H "X-Custom-Header: FUZZ"

# Multiple headers
ffuf -w /path/to/wordlist.txt -u https://target.com -H "User-Agent: FUZZ" -H "X-Forwarded-For: 127.0.0.1"
```

#### Imported: Filtering and Matching

### Matchers (Include Results)

- `-mc`: Match status codes (default: 200-299,301,302,307,401,403,405,500)
- `-ml`: Match line count
- `-mr`: Match regex
- `-ms`: Match response size
- `-mt`: Match response time (e.g., `>100` or `<100` milliseconds)
- `-mw`: Match word count

### Filters (Exclude Results)

- `-fc`: Filter status codes (e.g., `-fc 404,403,401`)
- `-fl`: Filter line count
- `-fr`: Filter regex (e.g., `-fr "error"`)
- `-fs`: Filter response size (e.g., `-fs 42,4242`)
- `-ft`: Filter response time
- `-fw`: Filter word count

### Auto-Calibration (USE BY DEFAULT!)

**CRITICAL:** Always use `-ac` unless you have a specific reason not to. This is especially important when having Claude analyze results, as it dramatically reduces noise and false positives.
```bash
# Auto-calibration - ALWAYS USE THIS
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -ac

# Per-host auto-calibration (useful for multiple hosts)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -ach

# Custom auto-calibration string (for specific patterns)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -acc "404NotFound"
```

**Why `-ac` is essential:**

- Automatically detects and filters repetitive false-positive responses
- Removes noise from dynamic websites with random content
- Makes results analysis much easier for both humans and Claude
- Prevents thousands of identical 404/403 responses from cluttering output
- Adapts to the target's specific behavior

**When Claude analyzes your ffuf results, `-ac` is MANDATORY** - without it, Claude will waste time sifting through thousands of false positives instead of finding the interesting anomalies.

#### Imported: Rate Limiting and Timing

### Rate Control

```bash
# Limit to 2 requests per second (stealth mode)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -rate 2

# Add delay between requests (0.1 to 2 seconds, random)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -p 0.1-2.0

# Set the number of concurrent threads (default: 40)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -t 10
```

### Time Limits

```bash
# Maximum total execution time (60 seconds)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -maxtime 60

# Maximum time per job (useful with recursion)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -maxtime-job 60 -recursion
```

#### Imported: Output Options

### Output Formats

```bash
# JSON output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -o results.json

# HTML output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -of html -o results.html

# CSV output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -of csv -o results.csv

# All formats
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -of all -o results

# Silent mode (no progress, only results)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -s

# Pipe to file with tee
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -s | tee results.txt
```

#### Imported: Advanced Techniques

### Using Raw HTTP Requests (Critical for Authenticated Fuzzing)

This is one of the most powerful features of ffuf, especially for authenticated requests with complex headers, cookies, or tokens.

**Workflow:**

1. Capture a full authenticated request (from Burp Suite, browser DevTools, etc.)
2. Save it to a file (e.g., `req.txt`)
3. Replace the value you want to fuzz with the `FUZZ` keyword
4. Use the `--request` flag

```bash
# From a file containing a raw HTTP request
ffuf --request req.txt -w /path/to/wordlist.txt -ac
```

**Example req.txt file:**

```http
POST /api/v1/users/FUZZ HTTP/1.1
Host: target.com
User-Agent: Mozilla/5.0
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Cookie: session=abc123xyz; csrftoken=def456
Content-Type: application/json
Content-Length: 27

{"action":"view","id":"1"}
```

**Use Cases:**

- Fuzzing authenticated endpoints with complex auth headers
- Testing API endpoints with JWT tokens
- Fuzzing with CSRF tokens, session cookies, and custom headers
- Testing endpoints that require specific User-Agents or Accept headers
- POST/PUT/DELETE requests with authentication

**Pro Tips:**

- You can place FUZZ in multiple locations: URL path, headers, body
- Use `-request-proto http` if the target is plain HTTP (the default is https)
- Always use `-ac` to filter out authenticated "not found" or error responses
- Great for IDOR testing: fuzz user IDs, document IDs, etc. in authenticated contexts

```bash
# Common authenticated fuzzing patterns
ffuf --request req.txt -w user_ids.txt -ac -mc 200 -o results.json

# With multiple FUZZ positions using custom keywords
ffuf --request req.txt -w endpoints.txt:ENDPOINT -w ids.txt:ID -mode pitchfork -ac
```

### Proxy Usage

```bash
# HTTP proxy (useful for Burp Suite)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -x http://127.0.0.1:8080

# SOCKS5 proxy
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -x socks5://127.0.0.1:1080

# Replay matched requests through a proxy
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -replay-proxy http://127.0.0.1:8080
```

### Cookie and Authentication

```bash
# Using cookies
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -b "sessionid=abc123; token=xyz789"

# Client certificate authentication
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -cc client.crt -ck client.key
```

### Encoding

```bash
# URL encoding
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -enc 'FUZZ:urlencode'

# Multiple encodings
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -enc 'FUZZ:urlencode b64encode'
```

### Testing for Vulnerabilities

```bash
# SQL injection testing
ffuf -w sqli_payloads.txt -u https://target.com/page.php?id=FUZZ -fs 1234

# XSS testing
ffuf -w xss_payloads.txt -u https://target.com/search?q=FUZZ -mr "
```
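As a minimal stand-in for the `ffuf_helper.py analyze` step described earlier, the sketch below groups entries from ffuf's JSON output by (status, size) and flags the rare pairs. The sample data is fabricated, and the field names (`results`, `status`, `length`, `url`) are assumptions based on common ffuf JSON output; verify them against your ffuf version before relying on this.

```python
import json
from collections import Counter

# Fabricated sample shaped like `ffuf ... -o results.json -of json` output.
sample_json = json.dumps({
    "results": [
        {"url": "https://target.example/admin", "status": 200, "length": 5120},
        {"url": "https://target.example/login", "status": 200, "length": 1024},
        {"url": "https://target.example/aa", "status": 403, "length": 287},
        {"url": "https://target.example/bb", "status": 403, "length": 287},
        {"url": "https://target.example/cc", "status": 403, "length": 287},
    ]
})

results = json.loads(sample_json)["results"]

# Responses sharing a common (status, length) pair are usually noise;
# pairs that occur only once are the anomalies worth manual review.
freq = Counter((r["status"], r["length"]) for r in results)
anomalies = [r for r in results if freq[(r["status"], r["length"])] == 1]

for r in anomalies:
    print(r["status"], r["length"], r["url"])
```

On real output this is only a coarse first pass; combine it with `-ac` at scan time so the result file stays small enough to review by hand.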