---
id: ins_handley-ai-cant-violate-expectation
operator: Ann Handley
operator_role: Chief Content Officer, MarketingProfs; author of *Everybody Writes*
source_url: https://annhandley.com/what-ai-would-delete-from-great-writing/
source_type: essay
source_title: "What AI Would Delete From Great Writing"
source_date: 2026-05-03
captured_date: 2026-05-06
domain: [pmm, content, ai-native, marketing]
lifecycle: [content-strategy, voice-development, brand-strategy]
maturity: applied
artifact_class: framework
score: { originality: 5, specificity: 5, evidence: 4, transferability: 5, source: 5 }
tier: A
related: [ins_handley-voice-as-moat-against-ai, ins_handley-self-laughter-as-quality-kpi, ins_handley-write-to-one-subscriber]
raw_ref:
---

# AI prose can't violate expectation because it IS expectation, protect the smallest deliberate rule-break from every polish pass

## Claim

The danger of an AI polish pass on a draft isn't that it makes the draft bad; it's that it makes the draft *generic*. AI is trained on the statistical average of all written prose, so any output it produces tends toward expectation. Voice lives in the moments where the writer deliberately violated expectation: the unexpected verb, the one-line paragraph, the sentence fragment, the self-aware aside. Those are exactly the things a polish pass smooths out. The operating rule: identify the smallest signature quirk before the polish runs, mark it explicitly, and verify it survived afterward. Never let the polish pass have the last word.

## Mechanism

Generic content carries a survival cost in modern markets that didn't exist before. Pre-AI, competent prose was scarce, and competent-but-generic prose still got read because the alternative was no prose. Post-AI, competent-but-generic prose is the default; it floods feeds, search results, and inboxes, and readers learn to skim past it within the first sentence.
The signal that survives that filter is anything that breaks expectation cleanly: a rule violation that lands. Ann Handley's structural diagnosis: AI cannot generate that signal because the model *is* the average of all expectation. A polish pass on a draft sands the violation back toward expectation, which is the literal opposite of the move that makes the prose worth reading.

## Conditions

Holds when:

- The audience has the cognitive bandwidth to detect generic-vs-distinctive prose (most knowledge-worker readers, and increasingly most readers, do).
- The writer has a real voice with identifiable quirks: patterns the polish pass will erase if not protected.
- The team has the discipline to mark the rule-break before the polish runs, not after.

Fails when:

- The content surface explicitly *needs* expectation (regulatory copy, compliance disclosures, technical references where the reader wants no surprises).
- The rule-break is gimmick rather than craft: repeated quirks for their own sake, with no underlying voice, fail the same filter generic prose fails.
- The polish step is a non-negotiable downstream gate (legal review, SEO normalisation) that can't preserve voice.

## Evidence

> "AI prose can't violate expectation because it *is* expectation. It's the average of everything." · Ann Handley, *What AI Would Delete From Great Writing*, 2026-05-03.

The piece's central anecdote: she fed a federal judge's ruling to an AI editor, and the model confessed it would smooth out exactly the lines that gave the writing its bite. The lines it would delete were the lines doing the work.

## Signals

- The drafting workflow names the deliberate rule-break before the polish pass runs: annotated, tagged, or commented.
- The diff review explicitly checks "did the marked element survive the polish."
- A human (the editor or the writer themselves) runs the final voice check on the marked elements, not the entire piece.
- Readers can identify pieces by the writer without seeing the byline.
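The mark-then-verify discipline in the Claim and Signals sections can be sketched as a small pre/post check. Everything here is hypothetical scaffolding, not from the source: the `<<keep>>…<</keep>>` marker syntax and the helper names are assumptions chosen for illustration.

```python
import re

# Hypothetical marker syntax a writer uses to tag deliberate rule-breaks.
MARKER = re.compile(r"<<keep>>(.*?)<</keep>>", re.DOTALL)

def protected_spans(draft: str) -> list[str]:
    """Collect the spans the writer marked before the polish pass runs."""
    return MARKER.findall(draft)

def strip_markers(draft: str) -> str:
    """Remove the markers so the clean draft can be sent to the polish pass."""
    return MARKER.sub(lambda m: m.group(1), draft)

def lost_in_polish(draft: str, polished: str) -> list[str]:
    """Return marked spans the polish pass erased or rewrote.

    A non-empty result means the polish pass had the last word
    and the draft should be rejected or re-merged by a human.
    """
    return [span for span in protected_spans(draft) if span not in polished]
```

In a drafting workflow this runs as the diff-review gate: if `lost_in_polish` returns anything, the polished version fails review until the marked elements are restored.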
## Counter-evidence

- The discipline can become precious: writers protect "voice" that is actually just bad habits. Voice without underlying craft fails just as visibly as generic prose; the rule-break has to land.
- Some content engines that don't preserve voice still scale and convert (some commodity SEO, some pure-utility help docs). The argument here is about content where voice is load-bearing; not all content qualifies.

## Cross-references

- `ins_handley-voice-as-moat-against-ai`: the same author's earlier framing of voice as the defensible moat. The "expectation" formulation is the structural reason *why* voice is a moat.
- `ins_handley-self-laughter-as-quality-kpi`: the diagnostic test for whether voice survived the polish, i.e. does the writer still laugh at the line?
- `ins_judgment-vs-understanding`: Karpathy's parallel framing for technical work; same shape.