---
id: ins_hallucinations-when-makes-things-thinks
operator: Dharmesh Shah
operator_role: 'Founder and CTO at HubSpot. Helping millions grow better.'
source_url: https://www.linkedin.com/feed/update/urn:li:activity:7372151908330844160/
source_type: thread
source_title: 'BREAKING NEWS: Seems like OpenAI may have come up with a way to dramatically reduce the hallucina'
source_date: 2026-04-10
captured_date: 2026-05-02
domain: [strategy, leadership, growth]
lifecycle: [strategy-bets, measurement-experimentation]
maturity: frontier
artifact_class: framework
score: { originality: 3, specificity: 3, evidence: 2, transferability: 4, source: 4 }
tier: B
related: []
raw_ref: raw/linkedin/reactions/linkedin-reactions-2026-04-10.md
---

# Hallucinations are when the AI makes up things that it *thinks* are true -- but just aren't

## Claim

BREAKING NEWS: Seems like OpenAI may have come up with a way to dramatically reduce the hallucinations in AI models.

Hallucinations are when the AI makes up things that it *thinks* are true -- but just aren't.

The solution was brilliantly simple. So simple, I'm surprised we didn't come up with it sooner.

## Mechanism

The implied fix: standard evaluations award points for a correct answer and nothing for admitting uncertainty, so a score-maximizing model does best by always guessing; penalizing wrong answers removes that incentive. That's why standardized tests (like the SAT) have a penalty for wrong answers. They want to remove the benefit of simply guessing. A toy expected-value sketch of this scoring logic appears in the Worked example at the end of this note.

## Conditions

- Holds when: the operating context matches the post's stated frame (team shape, stage, tooling, buyer type).
- Fails when: the practice is lifted into a different stage or buyer context without reworking the underlying mechanism.

## Evidence

> "Hallucinations are when the AI makes up things that it *thinks* are true -- but just aren't." · Dharmesh Shah, LinkedIn, 2026-04-10

## Signals

- The team observes the pattern repeating across multiple cycles before naming it.
- Practitioners stop questioning the discipline once results compound.
- Skipping the step shows up as friction within one or two iterations.

## Counter-evidence

No opposing view in current corpus.

## Cross-references

- (none in current corpus)
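
## Worked example

A minimal sketch of the scoring logic the Mechanism section gestures at, assuming a toy rule: +1 for a correct answer, 0 for abstaining ("I don't know"), and a fixed deduction for a wrong answer. The function names and numbers are illustrative assumptions, not OpenAI's actual evaluation scheme.

```python
# Toy model: why a wrong-answer penalty removes the incentive to guess.
# Assumed scoring rule (illustrative): +1 if correct, 0 if the model
# abstains, -penalty if wrong.

def expected_score(confidence: float, penalty: float) -> float:
    """Expected score for answering, when the model believes its answer
    is correct with probability `confidence`."""
    return confidence * 1.0 - (1.0 - confidence) * penalty

def break_even_confidence(penalty: float) -> float:
    """Confidence at which answering ties with abstaining (score 0):
    c - (1 - c) * p = 0  =>  c = p / (1 + p)."""
    return penalty / (1.0 + penalty)

if __name__ == "__main__":
    for penalty in (0.0, 0.25, 1.0):
        print(f"penalty={penalty:.2f}: answering beats abstaining above "
              f"confidence {break_even_confidence(penalty):.2f}")
        for c in (0.1, 0.5, 0.9):
            ev = expected_score(c, penalty)
            print(f"  confidence={c:.1f}  EV={ev:+.2f}  "
                  f"-> {'answer' if ev > 0 else 'abstain'}")
```

With `penalty=0.0`, every answer has positive expected value, so a score-maximizing model never abstains, even at 10% confidence. With `penalty=0.25` (the old SAT's quarter-point deduction on five-choice questions), the break-even confidence is exactly 0.20, the odds of a blind guess among five options, which is precisely the benefit the penalty is designed to cancel.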