# SEO Best Practices - Complete Reference

**Version:** 1.2.0
**Organization:** Agent Skills Contributors
**Date:** March 2026
**License:** MIT

## Abstract

Comprehensive SEO patterns for web applications built with React and Laravel. Contains 31 rules across 8 categories covering Core Web Vitals, technical SEO, on-page optimization, structured data, performance, social sharing, React/SPA SEO, and mobile-first indexing. Supports SEO audit mode with PASS/FAIL checklist output. Each rule includes incorrect and correct code examples with practical HTML, React (Inertia.js), and Laravel implementations.

## How to Audit

When asked to "audit SEO" or "check SEO", run through each rule in this document as a checklist. For each item output **PASS**, **FAIL** (with `file:line` and fix), or **N/A**. End with a summary of pass/fail counts and the top 3 priority fixes.

## References

- [Google Search Central](https://developers.google.com/search)
- [web.dev Core Web Vitals](https://web.dev/articles/vitals)
- [Schema.org](https://schema.org/)
- [Open Graph Protocol](https://ogp.me/)
- [Google Rich Results Test](https://search.google.com/test/rich-results)
- [Google PageSpeed Insights](https://pagespeed.web.dev/)
- [Google Search Console](https://search.google.com/search-console)
- [Mobile-First Indexing](https://developers.google.com/search/docs/crawling-indexing/mobile/mobile-sites-mobile-first-indexing)

## Step 1: Detect Project Type

**Always check the project stack before giving advice.** Different stacks need different SEO approaches. Check `package.json` and project structure:

| Signal | Project Type |
|--------|-------------|
| `@inertiajs/react` in dependencies | Laravel + Inertia + React |
| `resources/views/**/*.blade.php` only (no React) | Laravel Blade (server-rendered) |

**If Laravel Blade:** Apply `tech-`, `onpage-`, `schema-`, `perf-`, `social-`, `mobile-` rules. Meta tags go in Blade layouts. Sitemaps via `spatie/laravel-sitemap`. Skip `spa-` rules — pages are already server-rendered.

**If Laravel + Inertia + React:** Apply all rules. Meta tags via `@inertiaHead` in the Blade layout plus the `<Head>` component from `@inertiajs/react` in React pages. For SSR, create `resources/js/ssr.jsx` using `createServer` from `@inertiajs/react/server`, add `ssr: 'resources/js/ssr.jsx'` to the Vite config, build with `vite build && vite build --ssr`, and run `php artisan inertia:start-ssr` (see the sketch below). Use the `head-key` attribute on meta tags to prevent duplicates between layout and page. Focus on `schema-`, `social-`, and `perf-` rules.
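A minimal sketch of the SSR entry point and the Vite wiring, assuming pages live under `resources/js/Pages` and `laravel-vite-plugin` is in use (file names follow the convention above):

```jsx
// resources/js/ssr.jsx — minimal Inertia SSR entry (sketch)
import { createInertiaApp } from '@inertiajs/react';
import createServer from '@inertiajs/react/server';
import ReactDOMServer from 'react-dom/server';

createServer((page) =>
  createInertiaApp({
    page,
    render: ReactDOMServer.renderToString,
    resolve: (name) => {
      // Eagerly bundle all pages so the SSR build can resolve them by name
      const pages = import.meta.glob('./Pages/**/*.jsx', { eager: true });
      return pages[`./Pages/${name}.jsx`];
    },
    setup: ({ App, props }) => <App {...props} />,
  }),
);

// vite.config.js addition (laravel-vite-plugin):
// laravel({ input: 'resources/js/app.jsx', ssr: 'resources/js/ssr.jsx' })
```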
---

# Sections

This file defines all sections, their ordering, impact levels, and descriptions. The section ID (in parentheses) is the filename prefix used to group rules.

---

## 1. Core Web Vitals (cwv)

**Impact:** CRITICAL
**Description:** Google uses Core Web Vitals (LCP, INP, CLS) as ranking signals. Pages that fail these thresholds rank lower and provide poor user experience. Optimizing CWV is the highest-impact technical SEO work.

## 2. Technical SEO (tech)

**Impact:** CRITICAL
**Description:** Foundational SEO elements that search engines need to discover, crawl, and index pages correctly. Without proper meta tags, canonical URLs, sitemaps, and robots.txt, content cannot rank regardless of quality.

## 3. On-Page SEO (onpage)

**Impact:** HIGH
**Description:** Content structure and HTML semantics that help search engines understand page topics and relevance. Proper headings, semantic markup, internal linking, and image optimization directly affect rankings.

## 4. Structured Data (schema)

**Impact:** HIGH
**Description:** JSON-LD markup using Schema.org vocabulary that enables rich results in search (star ratings, FAQ accordions, breadcrumb trails, product prices). Structured data does not directly boost rankings but significantly improves click-through rates.

## 5. Performance SEO (perf)

**Impact:** HIGH
**Description:** Page speed and loading optimization that affects both Core Web Vitals scores and user experience. Modern image formats, lazy loading, font strategies, and resource hints reduce load times and improve search ranking.

## 6. Social Sharing (social)

**Impact:** HIGH
**Description:** Open Graph and Twitter Card meta tags that control how pages appear when shared on social media. Proper social meta tags increase click-through rates from social platforms and drive organic traffic.

## 7. React/SPA SEO (spa)

**Impact:** HIGH
**Description:** SEO patterns specific to single-page applications built with React. SPAs require special attention to rendering strategy, meta tag management, and routing to be crawlable by search engines.

## 8. Mobile-First (mobile)

**Impact:** MEDIUM
**Description:** Google uses mobile-first indexing, meaning it primarily crawls and indexes the mobile version of pages. Mobile viewport configuration, content parity, and UX requirements directly affect how pages are indexed and ranked.

---
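Before auditing the rules below, it helps to see real field values for the three metrics. A minimal sketch using the `web-vitals` npm package — an assumed dependency, not something this skill requires:

```ts
// resources/js/report-web-vitals.ts
// Logs field values for the three Core Web Vitals ranking signals.
// Thresholds: LCP < 2.5s, INP < 200ms, CLS < 0.1.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  // metric.rating is 'good' | 'needs-improvement' | 'poor'
  console.log(`${metric.name}: ${metric.value.toFixed(2)} (${metric.rating})`);
}

onLCP(report);
onINP(report);
onCLS(report);
```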
## Largest Contentful Paint Optimization

**Impact: CRITICAL (Must be under 2.5s (Google ranking signal))**

LCP measures how long it takes for the largest visible element (usually a hero image or heading) to render. Google uses LCP as a direct ranking signal, and pages exceeding 2.5s risk lower search positions and higher bounce rates.

## Incorrect

```html
<!-- ❌ Bad: lazy-loaded hero behind render-blocking stylesheets -->
<head>
  <link rel="stylesheet" href="/css/framework.css">
  <link rel="stylesheet" href="/css/theme.css">
  <link rel="stylesheet" href="/css/components.css">
</head>
<body>
  <img src="/images/hero.jpg" alt="Welcome banner" loading="lazy">
  <h1>Welcome to our platform</h1>
</body>
```

**Problems:**

- `loading="lazy"` on the hero image delays the LCP element, as the browser defers loading until it enters the viewport
- No `<link rel="preload">` means the browser discovers the image only after parsing the HTML and CSS
- Multiple render-blocking stylesheets delay first render, pushing LCP further out

## Correct

```html
<!-- ✅ Good: preloaded, high-priority hero with deferred non-critical CSS -->
<head>
  <link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high">
  <link rel="stylesheet" href="/css/critical.css">
  <link rel="stylesheet" href="/css/non-critical.css" media="print" onload="this.media='all'">
</head>
<body>
  <img src="/images/hero.webp" alt="Welcome banner" fetchpriority="high" width="1200" height="600">
  <h1>Welcome to our platform</h1>
</body>
```

```tsx
// ✅ React: hero image with fetchPriority="high" and no lazy loading
export default function HeroSection() {
  return (
    <section>
      <img
        src="/images/hero.webp"
        alt="Welcome banner"
        fetchPriority="high"
        width={1200}
        height={600}
      />
      <h1>Welcome to our platform</h1>
    </section>
  );
}
```

**Benefits:**

- `fetchpriority="high"` tells the browser to prioritize the hero image over other resources
- `<link rel="preload">` starts fetching the image before the browser encounters the `<img>` tag
- Non-critical CSS is deferred using the `media="print"` trick, unblocking initial render
- WebP format reduces image payload, further improving load time

Reference: [Optimize Largest Contentful Paint](https://web.dev/articles/optimize-lcp)
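In an Inertia app, the same preload hint can be emitted per page through the `<Head>` component. A sketch, assuming React 19 and a hypothetical `heroUrl` page prop:

```tsx
import { Head } from '@inertiajs/react';

// Hypothetical page: heroUrl is assumed to arrive as an Inertia page prop.
export default function LandingPage({ heroUrl }: { heroUrl: string }) {
  return (
    <>
      <Head>
        {/* Rendered into <head> (server-side under SSR), so the fetch
            starts before the browser reaches the image in the body. */}
        <link rel="preload" as="image" href={heroUrl} fetchPriority="high" />
      </Head>

      {/* React 19 forwards fetchPriority to the fetchpriority attribute */}
      <img src={heroUrl} alt="Product hero" fetchPriority="high" width={1200} height={600} />
    </>
  );
}
```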
---

## Interaction to Next Paint Optimization

**Impact: CRITICAL (Must be under 200ms (replaced FID in March 2024))**

INP measures the latency of every click, tap, and keyboard interaction throughout a page visit, reporting the worst interaction. Since March 2024, INP has replaced First Input Delay as a Core Web Vital ranking signal, making responsive interactions essential for SEO.

## Incorrect

```tsx
// ❌ Bad: synchronous heavy computation blocks the main thread on click
export default function ProductFilter({ products }: { products: Product[] }) {
  const handleFilter = (category: string) => {
    // Long-running synchronous task blocks UI for 500ms+
    const filtered = products.filter((product) => {
      // Expensive computation per item
      const score = calculateRelevanceScore(product, category);
      const normalized = normalizeAcrossDataset(score, products);
      return normalized > 0.5;
    });

    // DOM update only happens after entire computation finishes
    setFilteredProducts(filtered);
    updateURL(category);
    trackAnalytics("filter", category);
  };

  return (
    <button onClick={() => handleFilter("electronics")}>Electronics</button>
  );
}
```

**Problems:**

- The entire filtering, normalization, and DOM update runs synchronously, blocking the main thread
- The browser cannot paint the next frame until the handler completes, causing visible lag
- Analytics and URL updates further extend the blocking time after the critical render

## Correct

```tsx
// ✅ Good: break work into chunks and yield to the main thread
import { useTransition } from "react";

// filterWorker is assumed to be a module-level Worker instance,
// e.g. new Worker(new URL("./filter-worker.ts", import.meta.url))
export default function ProductFilter({ products }: { products: Product[] }) {
  const [isPending, startTransition] = useTransition();

  const handleFilter = async (category: string) => {
    // Immediately update UI to show pending state
    startTransition(() => {
      setCategory(category);
    });

    // Offload heavy computation to a Web Worker
    const filtered = await new Promise<Product[]>((resolve) => {
      filterWorker.onmessage = (e) => resolve(e.data);
      filterWorker.postMessage({ products, category });
    });

    startTransition(() => {
      setFilteredProducts(filtered);
    });

    // Defer non-critical work
    requestIdleCallback(() => {
      updateURL(category);
      trackAnalytics("filter", category);
    });
  };

  return (
    <button onClick={() => handleFilter("electronics")} disabled={isPending}>
      Electronics
    </button>
  );
}
```

```ts
// ✅ Web Worker: filter-worker.ts — runs off the main thread
self.onmessage = (event: MessageEvent) => {
  const { products, category } = event.data;

  const filtered = products.filter((product: Product) => {
    const score = calculateRelevanceScore(product, category);
    const normalized = normalizeAcrossDataset(score, products);
    return normalized > 0.5;
  });

  self.postMessage(filtered);
};
```

```ts
// ✅ Alternative: yield to main thread using scheduler.yield()
async function processInChunks<T>(
  items: T[],
  callback: (item: T) => boolean,
  chunkSize = 100
): Promise<T[]> {
  const results: T[] = [];

  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    results.push(...chunk.filter(callback));

    // Yield to the main thread between chunks
    if ("scheduler" in globalThis) {
      await (globalThis as any).scheduler.yield();
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }

  return results;
}
```

**Benefits:**

- `useTransition` provides immediate visual feedback while deferring the expensive re-render
- Web Workers move heavy computation off the main thread entirely, keeping INP near zero
- `requestIdleCallback` defers analytics and URL updates until the browser is idle
- Chunked processing with yielding prevents any single task from blocking the main thread beyond 50ms

Reference: [Optimize Interaction to Next Paint](https://web.dev/articles/optimize-inp)

---

## Cumulative Layout Shift Prevention

**Impact: CRITICAL (Must be under 0.1 (Google ranking signal))**

CLS measures unexpected visual shifts during a page's lifecycle. A CLS score above 0.1 harms both user experience and search rankings, as Google treats it as a Core Web Vital ranking signal. Most layout shifts come from images without dimensions, late-loading fonts, and dynamically injected content.
## Incorrect

```html
<!-- ❌ Bad: no dimensions, blocking font, unreserved ad slot, in-flow banner -->
<head>
  <style>
    @font-face {
      font-family: "BrandFont";
      src: url("/fonts/brand.woff2") format("woff2");
      font-display: block; /* invisible text, then a shift when the font loads */
    }
  </style>
</head>
<body>
  <img src="/images/logo.png" alt="Company logo">

  <h1>Latest News</h1>

  <div class="ad-banner"><!-- ad injected at runtime, no reserved height --></div>

  <img src="/images/article-hero.jpg" alt="Article hero">

  <p>Article content here...</p>

  <div class="cookie-banner"><!-- inserted into document flow --></div>
</body>
```

**Problems:**

- Images without `width` and `height` attributes cause the browser to reflow content once dimensions are known
- `font-display: block` causes an invisible text flash (FOIT) followed by a layout shift when the font loads
- The ad banner div has no reserved height, pushing content down when the ad loads
- Cookie consent banner inserted into the document flow shifts all content below it
## Correct

```html
<!-- ✅ Good: dimensions, swap font with size-adjust, reserved slot, fixed banner -->
<head>
  <style>
    @font-face {
      font-family: "BrandFont";
      src: url("/fonts/brand.woff2") format("woff2");
      font-display: swap; /* show fallback text immediately */
    }
    @font-face {
      font-family: "BrandFont Fallback";
      src: local("Arial");
      size-adjust: 104%; /* match fallback metrics so the swap doesn't shift */
    }
    .ad-banner { min-height: 250px; } /* space reserved before the ad loads */
    .cookie-banner { position: fixed; bottom: 0; left: 0; right: 0; } /* out of flow */
  </style>
</head>
<body>
  <img src="/images/logo.png" alt="Company logo" width="200" height="60">

  <h1>Latest News</h1>

  <div class="ad-banner"></div>

  <img src="/images/article-hero.jpg" alt="Article hero" width="1200" height="600">

  <p>Article content here...</p>

  <div class="cookie-banner"><!-- fixed position, no content shift --></div>
</body>
```

```tsx
// ✅ React: explicit dimensions and aspect-ratio to prevent layout shift
interface Article {
  title: string;
  heroImage: string;
  heroAlt: string;
  content: string;
}

export default function ArticlePage({ article }: { article: Article }) {
  return (
    <article>
      <h1>{article.title}</h1>
      <img
        src={article.heroImage}
        alt={article.heroAlt}
        width={1200}
        height={600}
        style={{ aspectRatio: "1200 / 600", maxWidth: "100%", height: "auto" }}
      />
      <p>{article.content}</p>
    </article>
  );
}
```
**Benefits:**

- Explicit `width` and `height` attributes let the browser calculate aspect ratio before the image loads
- `aspect-ratio` CSS property ensures responsive images maintain their space during layout
- `font-display: swap` with `size-adjust` eliminates both invisible text and font-swap layout shifts
- Fixed positioning on the cookie banner keeps it out of document flow, preventing content shifts
- Reserved `min-height` on ad slots prevents content from jumping when ads load

Reference: [Optimize Cumulative Layout Shift](https://web.dev/articles/optimize-cls)

---

## Essential HTML Meta Tags

**Impact: CRITICAL (Every page must have unique title and description)**

Title tags and meta descriptions are the most fundamental on-page SEO elements. The title tag is a confirmed ranking factor, and the meta description directly influences click-through rates in search results. Every page must have a unique, properly sized title (50-60 characters) and description (150-160 characters).

## Incorrect

```html
<!-- ❌ Bad: generic title, no description, no viewport, no charset -->
<head>
  <title>Home</title>
</head>
```

```tsx
// ❌ Bad: React component with no meta tags
export default function ProductPage({ product }: { product: Product }) {
  return (
    <div>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </div>
  );
}
```

```blade
{{-- ❌ Bad: Laravel Blade with hardcoded duplicate meta across pages --}}
<head>
    <title>My Website</title>
</head>
```

**Problems:**

- Generic title like "Home" wastes the most valuable on-page ranking signal
- Missing `<meta name="viewport">` breaks mobile rendering and mobile-first indexing
- No meta description means Google auto-generates a snippet, often poorly
- Duplicate titles and descriptions across pages cause keyword cannibalization
- Missing charset can cause character encoding issues in search results

## Correct

```html
<!-- ✅ Good: unique, properly sized title and description on every page -->
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Wireless Noise-Cancelling Headphones | AudioTech</title>
  <meta name="description" content="Studio-quality sound with 40-hour battery life. Free shipping and a 2-year warranty on all AudioTech wireless headphones.">
  <link rel="canonical" href="https://example.com/products/wireless-headphones">
  <meta property="og:title" content="Wireless Noise-Cancelling Headphones">
  <meta property="og:image" content="https://example.com/images/headphones-og.jpg">
  <meta name="twitter:card" content="summary_large_image">
</head>
```

```tsx
// ✅ React SPA: dynamic meta tags with Inertia.js Head component
import { Head } from '@inertiajs/react';

interface Product {
  name: string;
  slug: string;
  description: string;
  metaDescription: string;
  shortDescription: string;
  ogImage: string;
}

export default function ProductPage({ product }: { product: Product }) {
  return (
    <>
      <Head>
        <title>{`${product.name} | AudioTech`}</title>
        <meta head-key="description" name="description" content={product.metaDescription} />
        <link rel="canonical" href={`https://example.com/products/${product.slug}`} />
        <meta property="og:title" content={product.name} />
        <meta property="og:description" content={product.shortDescription} />
        <meta property="og:image" content={product.ogImage} />
        <meta name="twitter:card" content="summary_large_image" />
      </Head>

      <h1>{product.name}</h1>

      <p>{product.description}</p>
    </>
  );
}
```

```blade
{{-- ✅ Laravel Blade: dynamic meta tags via layout (layouts/app.blade.php) --}}
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>@yield('title', 'Default Site Title')</title>
    <meta name="description" content="@yield('meta_description')">
    <meta property="og:title" content="@yield('og_title')">
    <meta property="og:image" content="@yield('og_image')">
    <link rel="canonical" href="@yield('canonical', url()->current())">
</head>

{{-- products/show.blade.php --}}
@extends('layouts.app')

@section('title', Str::limit($product->name . ' | AudioTech', 60))
@section('meta_description', Str::limit($product->meta_description, 160))
@section('og_title', $product->name)
@section('og_image', $product->og_image_url)
@section('canonical', route('products.show', $product->slug))
```

**Benefits:**

- Unique, keyword-rich titles on every page maximize ranking potential for target queries
- Properly sized meta descriptions improve CTR by giving searchers a compelling preview
- Open Graph and Twitter Card tags ensure rich previews when pages are shared on social media
- Dynamic metadata from the CMS or database prevents duplicate meta tags across pages
- Charset and viewport meta tags ensure correct rendering across devices and browsers

Reference: [Google's Title Links Documentation](https://developers.google.com/search/docs/appearance/title-link)

---

## Canonical URL Implementation

**Impact: CRITICAL (Prevents duplicate content penalties)**

Canonical tags tell search engines which version of a URL is the "master" copy. Without them, duplicate content from www/non-www variations, query parameters, and pagination variants dilutes link equity and can trigger ranking penalties.

## Incorrect

```html
<!-- ❌ Bad: no canonical tag — the same content is reachable at multiple URLs -->
<head>
  <title>Running Shoes | ShoeStore</title>
</head>
```

```html
<!-- ❌ Bad: paginated page canonicalizes to page 1, de-indexing pages 2+ -->
<head>
  <title>Running Shoes - Page 3 | ShoeStore</title>
  <link rel="canonical" href="https://example.com/running-shoes">
</head>
```

```html
<!-- ❌ Bad: relative canonical may resolve against the wrong base URL -->
<link rel="canonical" href="/running-shoes">
```

**Problems:**

- Without a canonical tag, Google indexes multiple URL variations and splits ranking signals
- Pointing paginated pages to page 1 tells Google to ignore pages 2+ entirely, de-indexing that content
- Relative canonical URLs may resolve incorrectly depending on the base URL context
- Query parameter variations create potentially unlimited duplicate URLs

## Correct

```html
<!-- ✅ Good: self-referencing canonical on the main URL -->
<head>
  <title>Running Shoes | ShoeStore</title>
  <link rel="canonical" href="https://example.com/running-shoes">
</head>

<!-- ✅ Good: filtered variant consolidates to the main category URL -->
<head>
  <title>Red Running Shoes | ShoeStore</title>
  <link rel="canonical" href="https://example.com/running-shoes">
</head>

<!-- ✅ Good: paginated page keeps a self-referencing canonical -->
<head>
  <title>Running Shoes - Page 3 | ShoeStore</title>
  <link rel="canonical" href="https://example.com/running-shoes?page=3">
</head>
```
```tsx
// ✅ Inertia.js: canonical URL with <Head> component
import { Head } from '@inertiajs/react';

interface CategoryPageProps {
  category: string;
  currentPage: number;
}

export default function CategoryPage({ category, currentPage }: CategoryPageProps) {
  const baseUrl = "https://example.com";

  // Paginated pages get self-referencing canonical
  // Filter/sort params are excluded from canonical
  const canonical = currentPage > 1
    ? `${baseUrl}/${category}?page=${currentPage}`
    : `${baseUrl}/${category}`;

  return (
    <>
      <Head>
        <title>{`${category} | ShoeStore`}</title>
        <link rel="canonical" href={canonical} />
      </Head>

      <h1>{category}</h1>

      {/* Product listing */}
    </>
  );
}
```

```php
{{-- ✅ Laravel: canonical URL middleware + Blade directive --}}

// app/Http/Middleware/SetCanonicalUrl.php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class SetCanonicalUrl
{
    public function handle(Request $request, Closure $next)
    {
        // Strip tracking params, keep meaningful ones like page
        $allowed = ['page'];
        $params = collect($request->query())
            ->only($allowed)
            ->filter()
            ->all();

        $canonical = $params
            ? $request->url() . '?' . http_build_query($params)
            : $request->url();

        // Force HTTPS and non-www
        $canonical = preg_replace('/^http:/', 'https:', $canonical);
        $canonical = preg_replace('/\/\/www\./', '//', $canonical);

        view()->share('canonical', $canonical);

        return $next($request);
    }
}

{{-- layouts/app.blade.php --}}
<link rel="canonical" href="{{ $canonical }}">
```

**Benefits:**

- Self-referencing canonicals on every page prevent ambiguity for search engine crawlers
- Stripping tracking and filter query parameters consolidates link equity to the main URL
- Paginated pages retain their own canonical so their content remains indexed
- Middleware-based approach ensures consistent canonical URLs across the entire site
- Forcing HTTPS and non-www in the canonical prevents protocol and subdomain duplication

Reference: [Google's Canonical Documentation](https://developers.google.com/search/docs/crawling-indexing/canonicalization)
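The middleware's param-stripping rules can also be mirrored wherever canonicals are assembled in TypeScript, e.g. for Inertia pages that build their own `<link rel="canonical">`. A sketch of a hypothetical helper (`buildCanonical` is not part of the rules above):

```ts
// Hypothetical helper mirroring the middleware: keep only meaningful
// query params (e.g. page), force https and strip the www subdomain.
export function buildCanonical(rawUrl: string, allowedParams: string[] = ['page']): string {
  const url = new URL(rawUrl);

  // Drop tracking/filter params, keep the allowed ones
  const kept = new URLSearchParams();
  for (const name of allowedParams) {
    const value = url.searchParams.get(name);
    if (value) kept.set(name, value);
  }

  url.protocol = 'https:';
  url.hostname = url.hostname.replace(/^www\./, '');
  url.search = kept.toString() ? `?${kept.toString()}` : '';
  url.hash = '';

  return url.toString();
}

// buildCanonical('http://www.example.com/shoes?utm_source=x&page=3')
//   -> 'https://example.com/shoes?page=3'
```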
---

## Robots.txt Configuration

**Impact: HIGH (Controls crawl budget and blocks sensitive paths)**

Robots.txt controls which pages search engine crawlers can access. A misconfigured robots.txt can either block important content from being indexed or waste crawl budget on irrelevant pages. It must always reference your XML sitemap to aid discovery.

## Incorrect

```txt
# ❌ Bad: blocks CSS/JS (breaks rendering), no sitemap, too open
User-agent: *
Disallow: /css/
Disallow: /js/
Disallow: /images/

# No sitemap reference
# No blocking of admin, API, or internal paths
```

```txt
# ❌ Bad: blocks everything on staging (but staging is publicly accessible)
User-agent: *
Disallow: /

# This only prevents crawling — it does NOT prevent access.
# If staging is public, Google can still find and index URLs via links.
```

**Problems:**

- Blocking `/css/` and `/js/` prevents Google from rendering the page, leading to poor indexing of JavaScript-heavy sites
- Blocking `/images/` removes images from Google Image Search traffic
- No `Sitemap:` directive makes it harder for crawlers to discover all pages
- Not blocking admin, API, or staging paths wastes crawl budget and risks exposing internal routes
- Using robots.txt alone to "hide" staging does not prevent access — it only prevents crawling

## Correct

```txt
# ✅ Good: robots.txt for production site

# Default rules for all crawlers
User-agent: *

# Block admin and internal paths
Disallow: /admin/
Disallow: /api/
Disallow: /internal/

# Block search result pages (thin/duplicate content)
Disallow: /search
Disallow: /*?s=

# Block user-specific pages
Disallow: /account/
Disallow: /cart
Disallow: /checkout

# Block duplicate filtered/sorted views
Disallow: /*?sort=
Disallow: /*?filter=

# Allow all static assets (CSS, JS, images)
Allow: /css/
Allow: /js/
Allow: /images/
Allow: /fonts/

# Sitemap reference (always absolute URL)
Sitemap: https://example.com/sitemap.xml
```

```txt
# ✅ Good: block AI training crawlers while allowing search engines
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: CCBot
Disallow: /

# Allow search engine crawlers
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

User-agent: *
Disallow: /admin/
Disallow: /api/

Sitemap: https://example.com/sitemap.xml
```

```php
// ✅ Laravel: dynamic robots.txt via route (routes/web.php)
use Illuminate\Support\Facades\App;

Route::get('/robots.txt', function () {
    $content = App::environment('production')
        ? view('seo.robots-production')->render()
        : "User-agent: *\nDisallow: /";

    return response($content, 200)
        ->header('Content-Type', 'text/plain');
});
```

```blade
{{-- ✅ resources/views/seo/robots-production.blade.php --}}
User-agent: *
Disallow: /admin/
Disallow: /api/
Disallow: /account/
Disallow: /cart
Disallow: /checkout
Disallow: /search
Disallow: /*?sort=
Disallow: /*?filter=

Allow: /css/
Allow: /js/
Allow: /images/

Sitemap: {{ url('/sitemap.xml') }}
```

**Benefits:**

- Allowing CSS, JS, and images ensures Google can render pages accurately for indexing
- Blocking admin, API, and internal paths protects crawl budget and keeps sensitive routes out of search results
- Blocking search and filtered pages prevents thin or duplicate content from being indexed
- Sitemap reference helps crawlers discover all important pages efficiently
- Environment-aware generation keeps the blanket `Disallow: /` on staging and the permissive rules on production, so neither set leaks to the other

Reference: [Google's Robots.txt Specification](https://developers.google.com/search/docs/crawling-indexing/robots/intro)

---

## XML Sitemap Best Practices

**Impact: HIGH (Helps search engines discover and index all pages)**

An XML sitemap is a roadmap for search engines, listing every page you want indexed along with metadata about when it was last updated. A well-maintained sitemap improves crawl efficiency, ensures new content is discovered quickly, and prevents important pages from being missed.
## Incorrect

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- ❌ Bad: admin URLs, stale dates, non-canonical variants, noisy hints -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/admin/dashboard</loc>
    <lastmod>2020-01-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://example.com/products/widget</loc>
    <lastmod>2019-06-15</lastmod>
    <changefreq>always</changefreq>
    <priority>0.9</priority>
  </url>
  <url>
    <loc>https://example.com/products/widget?ref=homepage</loc>
    <lastmod>2019-06-15</lastmod>
  </url>
</urlset>
```

**Problems:**

- Including noindex or admin pages in the sitemap sends contradictory signals to crawlers
- Stale `lastmod` dates cause crawlers to skip pages that may have been updated
- Non-canonical URL variations waste crawl budget and dilute link signals
- A single sitemap file with over 50,000 URLs exceeds the sitemap protocol limit
- `priority` and `changefreq` are largely ignored by Google and add noise

## Correct

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- ✅ Good: canonical URLs only, accurate lastmod, image extension -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2026-03-10</lastmod>
  </url>
  <url>
    <loc>https://example.com/products/wireless-headphones</loc>
    <lastmod>2026-03-08</lastmod>
    <image:image>
      <image:loc>https://example.com/images/wireless-headphones.webp</image:loc>
      <image:title>Wireless Noise-Cancelling Headphones</image:title>
    </image:image>
  </url>
  <url>
    <loc>https://example.com/blog/seo-guide-2026</loc>
    <lastmod>2026-02-20</lastmod>
  </url>
</urlset>
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- ✅ Good: sitemap index keeps each file under the 50,000 URL limit -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://example.com/sitemaps/pages.xml</loc>
    <lastmod>2026-03-10</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemaps/products-001.xml</loc>
    <lastmod>2026-03-08</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemaps/products-002.xml</loc>
    <lastmod>2026-03-05</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemaps/blog.xml</loc>
    <lastmod>2026-02-20</lastmod>
  </sitemap>
</sitemapindex>
```

```php
// ✅ Laravel: using spatie/laravel-sitemap
// Install: composer require spatie/laravel-sitemap

use Illuminate\Console\Command;
use Spatie\Sitemap\Sitemap;
use Spatie\Sitemap\SitemapIndex;
use Spatie\Sitemap\Tags\Url;
use App\Models\Product;
use App\Models\Post;

// app/Console/Commands/GenerateSitemap.php
class GenerateSitemap extends Command
{
    protected $signature = 'sitemap:generate';

    public function handle(): void
    {
        $sitemapIndex = SitemapIndex::create();

        // Products sitemap
        $productSitemap = Sitemap::create();
        Product::query()
            ->where('is_published', true)
            ->where('is_indexable', true)
            ->cursor()
            ->each(function (Product $product) use ($productSitemap) {
                $productSitemap->add(
                    Url::create(route('products.show', $product->slug))
                        ->setLastModificationDate($product->updated_at)
                        ->addImage($product->image_url, $product->name)
                );
            });
        $productSitemap->writeToFile(public_path('sitemaps/products.xml'));
        $sitemapIndex->add('/sitemaps/products.xml');

        // Blog sitemap
        $blogSitemap = Sitemap::create();
        Post::query()
            ->where('status', 'published')
            ->cursor()
            ->each(function (Post $post) use ($blogSitemap) {
                $blogSitemap->add(
                    Url::create(route('blog.show', $post->slug))
                        ->setLastModificationDate($post->updated_at)
                );
            });
        $blogSitemap->writeToFile(public_path('sitemaps/blog.xml'));
        $sitemapIndex->add('/sitemaps/blog.xml');

        // Write sitemap index
        $sitemapIndex->writeToFile(public_path('sitemap.xml'));

        $this->info('Sitemap generated successfully.');
    }
}
```

**Benefits:**

- Only canonical, indexable URLs are included, preventing wasted crawl budget
- Accurate `lastmod` dates from the database help crawlers prioritize recently updated content
- Image sitemap extension improves visibility in Google Image Search
- Sitemap index pattern keeps individual files under the 50,000 URL / 50MB limit
- Automated generation via commands or build steps ensures the sitemap stays current

Reference: [Google's Sitemap Documentation](https://developers.google.com/search/docs/crawling-indexing/sitemaps/overview)

---

## SEO-Friendly URL Structure

**Impact: HIGH (Clean URLs improve CTR and crawlability)**

URLs are visible in search results and influence both click-through rates and crawl efficiency. Clean, descriptive URLs help users and search engines understand the page content before visiting it. Changing URLs without proper redirects causes 404 errors and lost link equity.
## Incorrect

```
❌ Bad URL patterns:
https://example.com/index.php?page=product&id=4827&cat=12
https://example.com/Products/Running_Shoes/ITEM-4827.html
https://example.com/shop/cat/12/subcat/45/product/4827/view/detail/ref/homepage
https://EXAMPLE.COM/Our-Amazing-Collection-Of-The-Best-Running-Shoes-For-Marathon-Training-2026
https://example.com/p/4827
```

```php
// ❌ Bad: Laravel routes with IDs and query params as primary URLs
Route::get('/product', function (Request $request) {
    $product = Product::findOrFail($request->query('id'));
    return view('product.show', compact('product'));
});
// Result: /product?id=4827
```

```tsx
// ❌ Bad: Inertia page using only numeric ID in URL
// Laravel route: Route::get('/products/{product}', ...)
// Result: /products/4827 — no keywords in URL
export default function ProductPage({ product }: { product: { id: number; name: string } }) {
  // URL is /products/4827 — no keyword context for search engines
  return <h1>{product.name}</h1>;
}
```

**Problems:**

- Query parameter URLs are harder for search engines to crawl and provide no keyword context
- Uppercase letters create duplicate URL variations (servers may treat `/Products` and `/products` differently)
- Underscores are not treated as word separators by Google (`running_shoes` is one token, not two)
- Excessively long URLs are truncated in search results and harder to share
- Numeric-only slugs provide no content signal to users or crawlers

## Correct

```
✅ Good URL patterns:
https://example.com/running-shoes
https://example.com/running-shoes/nike-air-zoom-pegasus
https://example.com/blog/marathon-training-guide
https://example.com/blog/marathon-training-guide/nutrition-tips
```

```php
// ✅ Laravel: slug-based routing with 301 redirects for old URLs
// routes/web.php
Route::get('/products/{product:slug}', [ProductController::class, 'show'])
    ->name('products.show');

// Redirect old query-param URLs to new slug URLs
Route::get('/product', function (Request $request) {
    $product = Product::findOrFail($request->query('id'));
    return redirect()->route('products.show', $product->slug, 301);
});

// app/Models/Product.php
use Illuminate\Support\Str;

class Product extends Model
{
    public function getRouteKeyName(): string
    {
        return 'slug';
    }

    // Auto-generate slug from name on creation
    protected static function booted(): void
    {
        static::creating(function (Product $product) {
            $product->slug = Str::slug($product->name);
        });
    }
}
```

```php
// ✅ Laravel: middleware to enforce lowercase URLs with 301 redirect
// app/Http/Middleware/LowercaseUrls.php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class LowercaseUrls
{
    public function handle(Request $request, Closure $next)
    {
        $url = $request->getRequestUri();
        $lowercase = strtolower($url);

        if ($url !== $lowercase) {
            return redirect($lowercase, 301);
        }

        return $next($request);
    }
}
```

**Benefits:**

- Hyphenated, lowercase slugs are treated as separate words by Google, improving keyword matching
- Short, descriptive URLs display fully in search results and improve click-through rates
- 301 redirects preserve link equity when URLs change, preventing SEO loss during migrations
- Slug-based routing eliminates duplicate content from query parameter variations
- Middleware enforcement ensures URL consistency across the entire application automatically

Reference: [Google's URL Structure Guidelines](https://developers.google.com/search/docs/crawling-indexing/url-structure)
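The slug conventions above (lowercase, hyphen-separated, no underscores) are easy to enforce at the point where slugs are generated. A minimal TypeScript sketch, roughly equivalent in spirit to Laravel's `Str::slug` (`slugify` here is a hypothetical helper):

```ts
// Hypothetical slugify helper: lowercase, hyphen-separated, ASCII-safe.
export function slugify(input: string): string {
  return input
    .normalize('NFKD')                 // split accented chars into base + mark
    .replace(/[\u0300-\u036f]/g, '')   // strip the combining marks
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')       // non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, '');          // trim leading/trailing hyphens
}

// slugify('Nike Air Zoom Pegasus 41') -> 'nike-air-zoom-pegasus-41'
```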

---

## Heading Hierarchy and Structure

**Impact: HIGH (Crawlers and screen readers rely on heading structure)**

Search engines use headings to understand content hierarchy and topical relevance. A clear heading structure also improves accessibility for screen-reader users navigating by heading landmarks.

## Incorrect

```html
<!-- ❌ Bad: multiple h1s, skipped levels, headings used for styling -->
<h1>Welcome to Our Store</h1>

<h3>Latest Products</h3> <!-- skips h2 -->

<h4>Running Shoes</h4>
<p>High-performance running shoes for every terrain.</p>

<h2>Customer Reviews</h2>

<h1>About Us</h1> <!-- second h1 on the page -->
<p>We have been selling shoes since 2010.</p>

<h3>Free shipping on orders over $50</h3> <!-- heading used for visual styling -->
```

**Problems:**

- Multiple `<h1>` tags dilute the primary topic signal for crawlers
- Skipping from `<h1>` to `<h3>` breaks the logical outline and confuses assistive technology
- Using heading tags for visual styling instead of structure misleads search engines about content importance

## Correct

```html
<!-- ✅ Good: one h1, logical h2/h3 nesting, no headings for styling -->
<h1>Running Shoes for Every Terrain</h1>

<h2>Latest Products</h2>

<h3>Trail Running Shoes</h3>
<p>Grip-focused shoes designed for off-road surfaces.</p>

<h3>Road Running Shoes</h3>
<p>Lightweight cushioned shoes for pavement.</p>

<h2>Customer Reviews</h2>

<h3>Top-Rated This Month</h3>
<p>See what runners are saying about our best sellers.</p>

<p class="promo-banner">Free shipping on orders over $50</p>
```

**Benefits:**

- Single `<h1>` clearly signals the page topic to search engines
- Logical `h1 > h2 > h3` nesting creates a scannable outline for crawlers and screen readers
- Heading levels are never skipped, preserving document structure integrity

Reference: [Google Search Central - Headings](https://developers.google.com/search/docs/appearance/title-link)
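The same outline discipline applies to React pages, where heading levels easily drift once markup is split across components. A sketch of a page that keeps a single `<h1>` and descends one level per nesting step (component and props are hypothetical):

```tsx
// Hypothetical Inertia page: exactly one <h1>, levels never skipped.
interface Review {
  id: number;
  summary: string;
}

export default function CategoryPage({ reviews }: { reviews: Review[] }) {
  return (
    <main>
      <h1>Running Shoes for Every Terrain</h1>

      <section>
        <h2>Customer Reviews</h2>
        {reviews.map((review) => (
          // Each review heading sits one level below its section heading
          <article key={review.id}>
            <h3>{review.summary}</h3>
          </article>
        ))}
      </section>
    </main>
  );
}
```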
---

## Semantic HTML for SEO

**Impact: HIGH (Semantic elements help crawlers understand page structure)**

Semantic HTML gives meaning to your markup so search engines can distinguish navigation from content, sidebars from articles, and headers from footers. This improves indexing accuracy and accessibility compliance.

## Incorrect

```html
<!-- ❌ Bad: div soup with no semantic landmarks -->
<div class="header">
  <div class="nav">...</div>
</div>
<div class="content">
  <div class="title">Understanding Semantic HTML</div>
  <div class="text">Semantic HTML is important for SEO...</div>
</div>
<div class="footer">...</div>
```

**Problems:**

- Crawlers cannot distinguish navigation, content, and supplementary sections
- Screen readers have no landmark regions to jump between
- The document structure is invisible without inspecting class names

## Correct

```html
<!-- ✅ Good: semantic landmarks crawlers and screen readers understand -->
<header>
  <nav>...</nav>
</header>

<main>
  <article>
    <h1>Understanding Semantic HTML</h1>
    <p>Semantic HTML is important for SEO...</p>

    <section>
      <h2>Why It Matters</h2>
      <p>Search engines use element types to weight content relevance.</p>
    </section>

    <section>
      <h2>Key Elements</h2>
      <p>The most impactful elements are main, article, nav, and section.</p>
    </section>
  </article>
</main>

<footer>
  <p>© 2026 My Site</p>
</footer>
```

**Benefits:**

- Crawlers identify the primary content via `<main>` and `<article>`, boosting indexing accuracy
- Landmark elements like `<nav>`, `<header>`, and `<footer>` give screen readers regions to jump between