---
date: "2025-09-16T00:00:00Z"
categories:
  - linkedin
  - llms
description: "Coding agents can over-literalize instructions, so sloppy or conflicting guidance in rules files creates predictable but absurd failures."
keywords: ["Codex", "AI coding agents", "instruction following", "AGENTS.md", "prompt design", "failure modes"]
---

GPT-5 (Codex) follows instructions exactly as given. Usually a good thing, but sometimes this is what happens.

**AGENTS.md**: ALWAYS WRITE TESTS before coding.

**Codex**: Let me begin with the tests. (Spends 5 minutes writing tests.)

**Anand**: Stop! This is a proof of concept. We don't need tests!

---

**AGENTS.md**: Write tests before coding. Drop tests for proof-of-concepts.

**Codex**: (Proceeds to delete all existing tests.)

**Anand**: STOP! We need those tests!

---

**AGENTS.md**: For new code, or if tests exist, start by writing tests.

**Anand**: Do this task. ... BUT SKIP TESTS! This is a POC!

**Codex**: The user has explicitly asked to skip tests. But the guidelines require tests for new code. For now, I think I will skip tests.

**Anand**: (sigh!) Reminds me of Chaplin's Feeding Machine.
[LinkedIn](https://www.linkedin.com/posts/sanand0_charlie-chaplin-demonstrates-the-feeding-activity-7370987866577022976-d1MA)