---
title: AI Advice
date: 2026-02-08T23:04:11+08:00
updated: 2026-04-25T13:08:50-04:00
description: I detail how I use AI 50 times daily to build an intuitive reflex. Key strategies include 'vibe coding,' having agents interview me for clarity, and maintaining an AGENTS.md file for structured collaboration and skill growth.
keywords: [vibe coding, agents.md, meta-prompting, context engineering, model context protocol, digital exhaust, synthetic data]
---

Here's AI advice I generally give people.

## How do I use AI better personally?

- **Buy paid subscriptions to [ChatGPT](https://chatgpt.com/pricing), [Claude](https://claude.com/pricing), and [Gemini](https://gemini.google/subscriptions/)** — all three. Each has different strengths, and ~$60/month covers almost every need. Enterprise and API plans protect data by default; consumer plans require configuration.
- **Use AI 50 times a day** across tiny personal, work, learning, and creative tasks. High volume forces you to find use-cases you'd otherwise ignore, and builds the reflex of reaching for AI _first_.
- **Ask it first, not Google.** When you have a question — or almost _any_ feeling of discomfort, curiosity, or being stuck — ask an agent before searching. Delegate the research. "Just tell me what to do."
- **If you don't know what to ask, have it interview you.** Ask the AI to _interview_ you to find out what you want, and then do it for you.
- **Think of it as an intern, fresher, or senior, depending on the task.**
  - For creative and domain work: treat AI like a **brilliant but opinionated intern** — great at fetching and preparing materials, less reliable for nuanced judgment.
  - For open-ended problems: treat it like a **smart new hire** who needs the same context, rules, and examples you'd give a human.
  - For coding and syntax: treat it like a **senior developer** who likely knows better than you — defer.
- **Ask AI what it needs.** Before you start, ask _"What information, tools, or access do you need for X?"_ and provide it.
- **Use it for validation — but make it show its work.** LLMs make mistakes, but using AI to fact-check is effective when you ask for evidence (citations, source links, or code).
- **Have AI cross-check AI.** Feed one model's output into a different model and ask it to find errors. Review where they disagree. (See the sketch after this list.)
- **Critique and steelman your work.** Ask for counterarguments. Have it roleplay a skeptical customer, boss, or critic — or quiz you with hard questions to stress-test your work.
- **Use emotions as prompts.** Unresolved emotions are a great starting point for AI. _"I feel anxious about..."_, _"I'm annoyed by..."_, _"I wish I had..."_ — **ask AI to interview you**, name the tension, and suggest experiments.
- **Ask for easier output.** Validating or implementing AI output takes time. Ask for easy-to-review output. If confident, ask it to update the asset or get the work done _directly_.
- **Ask for multiple, diverse outputs.** You don't know what you want, or what it can do. Ask for 5–10 variations. Ask _multiple_ models. Ask in _parallel_. Drop the weak ones quickly.
- **Use AI-native formats over static ones.** AI generates interactive HTML, SVG, and JSON better than it generates PowerPoint or PDF. Ask for these by default — they're more useful, easier to iterate, and more engaging.
- **Use voice mode on mobile** to talk to AI while walking or thinking. "Ramble" at it — it can structure your thoughts. This capitalizes on dead time (e.g. commuting) and dumps context faster than typing.
- **Improve your tools** by asking AI to optimize your laptop, software, and configurations, and checking if the results are better.
- **Vibe code your own software.** As a non-technical person, build apps to solve your own problems. Don't learn to code. Just tell AI tools what you want and have them build it.
- **Make AI write and _run_ code for any numbers.** "LLMs hallucinate, but code doesn't." For math, analysis, or logic: tell it to write and run deterministic code rather than reason in prose. Code either works or fails — it's verifiable.
- **Have it rewrite your prompts (meta-prompting).** If you aren't getting the results you want, have it tell you what's missing and rewrite your prompt for you.
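For instance, here's a minimal cross-checking sketch in Python. It assumes `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are configured; the model names are placeholders and will go stale, so substitute whatever is current.

```python
# A minimal cross-check sketch: one model answers, a different model critiques.
# Model names are illustrative placeholders; substitute current ones.
from openai import OpenAI
import anthropic

question = "Summarize the tax implications of selling vested RSUs."

draft = OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

critique = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Question: {question}\n\nAnswer: {draft}\n\n"
                   "List any factual errors or unsupported claims. Cite evidence.",
    }],
).content[0].text

print(critique)  # review only where the two models disagree
```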
Here are specific ideas you can try:

- **Mine your digital exhaust.** Don't delete your "junk" data. Export WhatsApp chats, journal entries, email logs, fitness data, and bank statements, and feed them to an LLM. Ask it to find patterns in your behavior, identify blind spots, or summarize your year.
- **Audit your own behavior.** Feed your meeting transcripts, email chains, and call recordings into an LLM to find personal blind spots and recurring errors. Ask it to be _brutally honest_. This is digital exhaust turned into coaching.
- **Repurpose content into multiple formats.** A single source can auto-generate a podcast (via [NotebookLM](https://notebooklm.google.com/)), a sketch note, an executive summary, an interactive explainer, a quiz, a slide deck, or a narrative song. Ask for all of them and pick what works.
- **Read papers, books, and attachments.** Have it rewrite dry content in the style of your favorite author (e.g. Malcolm Gladwell) to make it engaging. Add "ELI15" (Explain Like I'm 15) for simplicity.
- **Hire an expert.** "Hire" it as a personal financial advisor, career coach, relationship counselor, or fitness trainer. For example:
  - **Doctor.** Summarize your health history, identify gaps, and prepare questions to ask your actual doctor.
  - **Detective.** Find out what a long-lost contact has been up to, or what a client's public track record looks like.
  - **Financial advisor.** Have it interview you about your finances, goals, and risk tolerance, then research a personalized plan.
  - **Teacher.** "I want to learn [topic]. Explain the basics, then ask me 3 questions to test if I understood."

## How do YOU use agents?

Here are some of my behaviors in the agent era:

- **Prototype the prototype.** Sometimes, I'm not even sure what to prototype. I have the agent build something based on very quick, crude early thoughts, then iterate on it. Reviews are easier when I have a draft rather than an idea. [#](https://www.s-anand.net/blog/prototyping-the-prototypes/)
- **Galleries for ideas.** I collect [prompts](https://www.s-anand.net/blog/prompts/) and preview output as [galleries](https://sanand0.github.io/llmartstyle/). I extend them based on usage, but big leaps come when I ask agents to create and extend galleries.
- **Audio to analysis.** I record calls, transcribe them, and pass the transcript to a coding agent to give the other person what they need — without interpreting it myself. I'm mostly getting out of the way of the agent's speed and capability.
- **Itch to experiment.** When I have a thought, I have an agent prototype it and run the experiment. With more tools and environments, the space of what it can experiment on grows.
- **Directional feedback.** In areas where I'm not the expert, I tell agents how I feel, how I _should_ feel, and how I'll know if it's right — and trust the agent's judgment. [#](https://www.s-anand.net/blog/directional-feedback-for-ai/)
- **Organize context.** I record and organize far more data than before (call transcripts, bank statements, phone bills) to pass to agents. Managed [digital exhaust](/blog/digital-exhaust/) is an asset. (See the sketch after this list.)
- **Ask the agent for anything.** When I have almost _any_ feeling — discomfort, curiosity, confusion, being stuck — I'm now trained to ask an agent first. Many email replies are just: copy to Gemini, copy back.
- **Run weekly performance reviews with agents.** I ask them: _"What can I do to manage and prompt you more effectively?"_ The answers go into `AGENTS.md` so I don't repeat the same mistakes.
- **Keep a running impossibility list.** I keep a list of tasks agents can't do well yet. I revisit it monthly — model improvements regularly turn previous failures into easy wins.
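As a concrete example of organized digital exhaust, here's a small sketch that condenses a WhatsApp chat export into per-sender monthly message counts, so you paste a summary (not the raw chat) into an LLM for pattern-finding. The filename, the regex, and the date-order assumption all vary by export settings; adapt them to your own data.

```python
# Digital-exhaust sketch: summarize a WhatsApp export ("_chat.txt" is the
# default filename on some platforms; formats vary, so adjust the regex).
import re
from collections import Counter

pattern = re.compile(r"^\[?(\d{1,2}/\d{1,2}/\d{2,4}),? [^\]]*\]? ?-? ?([^:]+):")
counts = Counter()

with open("_chat.txt", encoding="utf-8") as f:
    for line in f:
        m = pattern.match(line)
        if m:
            date, sender = m.groups()
            month = "/".join(date.split("/")[1:])  # assumes day/month/year order
            counts[(sender.strip(), month)] += 1

for (sender, month), n in sorted(counts.items()):
    print(f"{month}\t{sender}\t{n}")  # paste this summary into the LLM
```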
## How do I use AI for coding?

- **Vibe code first.** Ask for what you want. Let AI build it. If it works, AND is what you want, AND needs to be maintainable, THEN look at the code.
- **Non-coders can code.** Domain experts (e.g. HR, Finance) can build their own tools using AI, bypassing traditional IT bottlenecks. You don't need syntax — you need enough logic to specify, test, and judge.
- **Use meta-prompting.** If you need help, ask AI to write and refine your prompt before you use it for the actual coding task.
- **Vibe code end-to-end.** Send AI the recording of your client call and ask it to spec, design, build, test, deploy, and monitor. Stay out of the way; review at the end.
- **Paste the errors.** When code fails, paste the exact error log or a screenshot into the chat. The model is often its own best debugger.
- **Code is disposable; prompts and skills are the assets.** Code is an AI compilation artifact — don't get attached to it. The prompts, skills, and context files that produced it are your real IP. Scrap and restart freely.
- **Two failures: restart with a summary, not a blank slate.** If it fails to fix a bug after two attempts, ask it to produce a failure summary and a minimal reproduction case first, _then_ start a fresh thread. Pure restarts lose diagnostic context. In autonomous agents with containers, let them iterate much longer.
- **"LLMs hallucinate, but code doesn't."** For analysis, logic, and anything where correctness matters: tell AI to write and _execute_ code rather than reason in prose. Code is binary — it works or it fails. This is the primary mechanism for eliminating hallucinations in production.
- **Use deliberate synthetic data** for prototyping. Don't wait for real data. Generate hypothesis-driven fake data with realistic patterns, edge cases, and expected failure modes — not just random numbers.
- **Pick the right model for the task, and keep benchmarking.** Model rankings shift quarterly. As of mid-2026: Claude for UI, aesthetic output, and deep coding; ChatGPT for rigorous analysis, financial modeling, and extended thinking; Gemini for Google Workspace, video inputs, and research speed. Blind-test on your exact task; don't freeze model advice.
- **Plan unclear tasks.** If your idea is vague or complex, use **Plan → Correct → Execute**: ask AI to write an easy-to-review plan, scan and correct it, _then_ implement.
- **Maintain reference files.** Keep an up-to-date `AGENTS.md` (or `README.md`) that explains your intent, code, and architecture to the AI. This saves repeated explanations across sessions.
- **Apply a 1–3 month ROI window to workarounds.** Models keep improving. In `AGENTS.md`, skip prompts that work around current model limitations unless they'll pay back within months. Focus on what will still be true for future models.
- **Generate tests first.** For maintainable software, have it define tests _first_. That makes working code easier to verify. Tests can be 2x the code size — that's fine.
- **Use Playwright to verify.** Have Playwright take screenshots and inspect DOM elements to verify frontend work. It saves manual review time. (See the sketch after this list.)
- **Run post-mortems.** When it fails, or after any session, ask it to analyze what went well, what didn't, and how to improve next time. Save these in a `SKILL.md`.
- **Specify developer styles.** Ask it to write in the style of a famous developer (e.g. Luke Edwards), repo (e.g. SciPy), or team (e.g. Astral) that's apt for the task.
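For instance, here's a minimal verification script an agent can run after building a frontend. The URL, selector, and expected heading are placeholders for whatever your app actually renders.

```python
# Minimal Playwright verification sketch (pip install playwright && playwright install).
# The URL, selector, and expected text are placeholders; adapt to your app.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:8000")                # your dev server
    page.screenshot(path="home.png", full_page=True)  # artifact for human review
    heading = page.locator("h1").first.inner_text()
    assert heading == "Dashboard", f"Unexpected heading: {heading!r}"
    browser.close()
```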
## How do I build an AI workspace?

The shift from "chatting with AI" to "working with AI" requires a structured workspace, not a chat window.

- **Treat prompts as source code.** Your prompt library is your primary IP — more valuable than the code it generates. Keep a `prompts.md` file under version control. Review it. Improve it. The code is disposable; the prompts compound in value.
- **Use a project folder, not just a chat window.** Every serious AI project needs: `AGENTS.md` (folder-specific instructions the agent reads on startup), `prompts.md` (versioned prompts), a `skills/` folder (encapsulated workflows), test fixtures, and a Git repository tracking every change.
- **Encapsulate successful workflows into reusable skills.** When an agent succeeds at a task reliably, capture it: the prompt, tools used, constraints, edge cases, and validation tests. Store it in a `SKILL.md`. Skills are the new software libraries — they make complex workflows deterministically repeatable without re-explaining everything.
- **Run coding agents inside Docker containers.** This prevents accidental deletion of local files, isolates experiments, and lets you use "YOLO mode" (skip permission prompts) safely. Use containers for anything beyond a throwaway prototype.
- **Use Git as your undo button.** Always work inside a Git repository. Instruct the agent to commit at every checkpoint. A bad output is one `git checkout` away from gone.
- **Let AI maintain its own instruction files.** Don't edit `AGENTS.md` manually. After each session, ask: _"What should we add to AGENTS.md based on what we just learned?"_ The agent updates its own instructions.

## How can we trust AI when it hallucinates?

How do you trust people who can make mistakes? Engineer verification into the workflow — don't add it as an afterthought.

- **Ask for evidence.** Reasons, citations, source links, tests, logs, verifiable checklists — ask for a trail, not a conclusion.
- **[Quintuple-check](https://sanand0.github.io/llmevals/double-checking/).** Ask multiple AIs. Consensus lowers review priority. Disagreement is the signal to review manually — route it to a human rather than picking a winner.
- **Use exception triage, not blanket review.** Let AI classify outputs as green (auto-approve), yellow (flag for spot-check), or red (human required). Build a golden set to measure your actual accuracy on the specific task — don't assume a universal percentage. (See the sketch after this list.)
- **Ask for code** to generate the answer rather than the answer itself. Code is binary — it either works or fails. For math, logic, and analysis, executable code is dramatically more reliable than prose reasoning.
- **Make reviews easy.** Ask for citations, short summaries, structured output, runnable code — anything that reduces the mental effort of validation. Your review time is the bottleneck; optimize for that.
- **Prompt for accuracy.** "Never make up an answer." "If you don't know, say so." "Ask me when needed." "Double-check your work." "Cite sources." These matter.
- **Use hallucinations deliberately for ideation.** For operations, facts, finance, and regulated outputs: eliminate hallucinations with grounding, code execution, and verification. For brainstorming and research: _use_ hallucinations as stochastic ideation — they surface non-obvious ideas. Use weaker models without extended thinking for creative divergence; save reasoning modes for correctness.
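Here's what exception triage might look like as code. The confidence threshold and stakes labels are illustrative assumptions; calibrate them against your own golden set.

```python
# Illustrative exception-triage sketch. Thresholds and labels are assumptions;
# calibrate them on a golden set for your specific task.
def triage(confidence: float, stakes: str) -> str:
    """Route an AI output to a green / yellow / red review lane."""
    if stakes == "high" or confidence < 0.6:
        return "red"      # human review required
    if confidence < 0.9:
        return "yellow"   # flag for spot-check
    return "green"        # auto-approve

assert triage(0.95, "low") == "green"
assert triage(0.75, "low") == "yellow"
assert triage(0.95, "high") == "red"
```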
## How can I safely share data with AI?

- **Match your data handling to the plan type.** Consumer plans (ChatGPT, Claude.ai) have data controls you must configure. Enterprise and API plans protect business data by default. For sensitive work, use enterprise/API or run models locally. Don't assume "paid = private."
- **Use least-privilege access.** Set Google Drive access to read-only and email access to draft-only. Grant AI access to a dedicated "AI-only" folder rather than your entire Drive. Use separate browser profiles for work and personal AI use.
- **Send schema, run code locally.** For tabular data: send the column names, have AI write analysis code, and run it on your machine. This keeps the data local while getting AI's reasoning. (See the sketch after this list.)
- **Use MCP for structured data access.** [Model Context Protocol](https://modelcontextprotocol.io) lets agents query specific datasets with scoped, read-only access — without manually copying data into the chat window.
- **Anonymize before sending to cloud AI.** Strip or hash PII before uploading to any model you don't fully control.
- **Pick who you trust.** If you already trust a provider (e.g. Google, Microsoft), use their enterprise tier. If not, run AI locally or use the techniques above.
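A minimal sketch of both techniques, assuming a local `transactions.csv` with `email` and `phone` columns (hypothetical names):

```python
# Share the schema, not the rows; hash PII if any rows must leave your machine.
# File name and PII columns are hypothetical; adapt to your data.
import hashlib
import pandas as pd

df = pd.read_csv("transactions.csv")  # data stays local

# 1. Paste only the schema into the chat; ask AI to write analysis code against it.
schema = {col: str(dtype) for col, dtype in df.dtypes.items()}
print(schema)

# 2. If some rows must be shared, hash direct identifiers first.
for col in ["email", "phone"]:
    df[col] = df[col].map(lambda v: hashlib.sha256(str(v).encode()).hexdigest()[:12])

df.head(20).to_csv("sample_anonymized.csv", index=False)  # safer sample to upload
```

Note that hashing only pseudonymizes: rare values can still re-identify someone, so prefer sending the schema alone for anything sensitive.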
## How to drive AI adoption?

- **Make using AI easy.** Reduce friction. No permissions or extra steps required, aligned to current ways of working.
- **Show leaders _using_ AI.** When teams see leaders using (not just talking about) AI, it gives them permission _and_ confidence.
- **Security and privacy start with the right controls.** Use enterprise models within your cloud tenant (Azure, AWS, or Google). Set least-privilege access — read-only for data, draft-only for email, containers for code. Audit logs for anything consequential.
- **Shift from human-in-the-loop to human-on-the-loop.** Run a confidence-building period where humans watch and verify AI output. Then automate routine cases and route only disagreements, low-confidence outputs, and high-stakes decisions to humans.
- **Keep updating models.** Monitor the ever-shifting cost-quality frontier and keep switching to cheaper, better models as they appear. Loyalty to a specific model is a liability.
- **Compare accuracy with _multiple_ experts.** AI may not match an SME 100%, but one SME may not match another SME either. Check with multiple human experts and see if AI is within the human range of disagreement.
- **Use consensus to improve accuracy, but measure it.** Quintuple-check outputs. Consensus lowers review priority. Build a golden set to measure your actual error rate on the specific task — don't assume a published number applies to yours.
- **Generate code for reliability.** Instruct LLMs to write and execute deterministic code rather than reasoning in plain text.
- **Find AI enthusiasts.** Top-down mandates build frustration. Find and empower the few "builders" or "power users." Measure actual adoption — unique days of use, token consumption, task diversity — not training completion rates.
- **Use games and challenges to teach AI.** Replace passive slide decks with Capture the Flag (CTF) challenges, prompt-injection games, forbidden-word jailbreaks, and coding-agent races. Design challenges where using a coding agent is the _only_ practical way to finish in time — this gives a binary signal on real proficiency.
- **Standardize evaluation.** You'll move much faster with evaluation frameworks (like "LLM-as-a-judge") to score model performance and catch regressions.
- **Lay a good data foundation.** Convert unstructured documents into structured formats. AI output quality depends on input data quality.
- **Let anyone build tools.** Empower "citizen developers" to build their own tools in English. This de-bottlenecks IT and dramatically increases productivity.
- **Prefer AI-native people.** The most effective AI operators aren't necessarily the most experienced — they're the most willing to delegate, verify, and learn fast. Interns, domain experts, and non-coders often outperform technical veterans who resist changing their workflow.
- **Let the owner drive it.** Alice building Bob an AI solution rarely works. Bob building it himself (with Alice's help) works better.
- **Build, don't plan.** When execution is fast and cheap, don't agonize over the right solution. Build them all. Throw away what doesn't work.
- **Buy foundations, build thin orchestration.** Don't train models — they're soon obsolete. Don't build heavy platforms — they're quickly superseded. Do build: skills libraries, prompt repositories, verification layers, data pipelines, and MCP connectors.
- **Adding is easier than changing.** Using AI to improve existing work has high inertia and risk. Creating a new workflow or output has less competition.
- **Apply a 1–3 month ROI window to model workarounds.** Models improve so fast that things not possible today become possible in months. If a workaround won't pay back within that window, wait. If building now creates learning or strategic leverage, prototype anyway.
- **Watch for urgency windows.** Real adoption happens when urgency or FOMO temporarily relaxes process. Anticipate that and arrive with demos, clear risk framing, and low-change integration.
- **Prototype rapidly.** Ask for prototypes in days, not weeks. This builds a culture of rapid experimentation.
- **Make reviewability the product.** Ask AI agents to cite sources, provide reasoning, flag confidence levels, and generate audit logs. Every output should expose what it's based on, what it assumed, and what's unverifiable.

## How do I demo and prototype AI?

- **Prototype in hours, not weeks.** Build 2-to-8-hour POCs to test feasibility and learn where the system breaks — not to pretend production is solved. Speed of learning matters more than quality of output at this stage.
- **Use deliberate synthetic data to start immediately.** Generate hypothesis-driven fake data with realistic patterns, edge cases, and expected failure modes. Don't wait for real data — access delays and compliance concerns will slow you down. (See the sketch after this list.)
- **Show the output first; defend the architecture only if asked.** The most persuasive demo shows a high-fidelity output that exceeds the client's imagination. Architecture slides come later, if at all.
- **Only demo live if the task finishes in under 10 minutes.** For slow, credential-heavy, or expensive workflows, use precomputed outputs, simulated backends, or recorded walkthroughs. The goal is to accelerate imagination, not stress-test infrastructure.
- **Push one prototype through the real production pipeline early.** This reveals hidden friction — format incompatibilities, latency, approval gates — faster than any strategy document.
- **Treat demos as imagination accelerators.** A good demo doesn't just prove capability — it expands what stakeholders believe is possible. Show what's now feasible before arguing about how to build it.
- **Sell outcomes, accountability, and verification — not software.** Software is a depreciating asset anyone can regenerate. Durable value: judgment, domain expertise, trust, and taking responsibility for results. Shift toward outcome-based models wherever possible.
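Here's what "deliberate" means in practice: a small sketch that plants edge cases and failure modes on purpose. All field names, distributions, and failure modes are assumptions for illustration.

```python
# Hypothesis-driven synthetic data sketch: realistic skew plus planted edge
# cases. Field names, distributions, and failure modes are illustrative.
import datetime as dt
import random

random.seed(42)  # reproducible
rows = []
for i in range(1_000):
    rows.append({
        "id": i,
        "date": (dt.date(2026, 1, 1)
                 + dt.timedelta(days=random.randint(0, 89))).isoformat(),
        "amount": round(random.lognormvariate(3, 1), 2),  # long-tailed, like real spend
        "currency": random.choices(["USD", "EUR", ""], weights=[85, 14, 1])[0],  # ~1% missing
    })

rows[10]["amount"] *= -1         # planted failure mode: refunds / negative amounts
rows[20]["date"] = "2026-02-30"  # planted edge case: invalid date
```

If your prototype survives this file, you've learned something real before any data-access or compliance review completes.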
## What skills should I learn?

[AI _will_ erode skills](https://link.springer.com/article/10.1007/s00146-025-02422-7) — but that's OK for some skills.

- Learn what AI _won't_ do well even in the future. Practice manually, then use AI for critique and coaching.
- Delegate _blindly_ what AI does well. Use the saved time to learn new skills.

Here's how some industries have dealt with skill erosion:

- **Autopilots** eroded flying skills — which is dangerous. So we **enforce** flight simulators. Same for surgical knots (robotic surgery), celestial navigation (navy), and manual dosing (nurses).
- **Spreadsheets** eroded calculation skills. We **leveled up** from sums to strategy. Same for CAD, electronic trading, and spell-check.
- **Photography** eroded painting skills. We **switched** value to impressionism, cubism, etc. Same for vinyl records, luxury watches, and craft coffee.
- **GPS** eroded navigation skills. We **accepted** this and don't care much. Same for phone numbers, spelling, and mental math.

Critical skills in the AI era:

- **Asking questions.** Learn to ask _lots_ of _good_ questions that nudge AI and humans to better results and new horizons. Curiosity helps.
- **Choosing valuable problems.** Learn to quickly discover _lots_ of useful things for yourself and others. AI can execute them fast.
- **Validation.** AI works fast. Learn shortcuts to compare versions, find mistakes, and give feedback — even in unfamiliar areas. (Consultants learn this skill well.)
- **Accountability.** Giving a commitment, standing behind it, and managing the risk that involves.
- **People skills.** Empathy, negotiation, judgment, and communication are less easy to delegate to AI agents.
- **Communication.** Thinking clearly and expressing it clearly.
- **Management.** Shift from doing the work yourself to managing "teams" of AI agents and interns to handle execution.
- **Orchestration.** Know which agent, model, tool, or skill is best for which task — and how to chain them together.

Growing skills:

- **Storytelling.** Guide AI to deliver compelling narratives that move people.
- **Context engineering.** Know what data to feed AI and what to skip — including the right fragments like "ELI15" or specific persona-setting — for the best results.
- **Verification.** Design golden sets, test cases, and audit workflows that reliably catch AI errors at scale. (See the sketch after this list.)
- **Tooling.** Connect things — especially to agentic systems — to give them more execution power.
- **Problem breakdown.** Break problems into small, logical tasks that people and AI can execute reliably.
- **Prototyping.** Build and iterate on the smallest working solution (using AI agents) ultra-rapidly.
- **Ethics.** Values. Governance. What _should_ we do? How do we decide? How do we make it happen?
- **Taste.** The ability to recognize and guide AI toward high-quality, distinctive output — increasingly scarce as execution becomes cheap.
- **Hard-to-define skills.** Skills that are easy to define are easy to train AI on. What we can't even name is valuable.
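Verification at scale can start as small as this golden-set loop. Here, `ask_model` is a hypothetical wrapper around whatever LLM you use, and `golden.jsonl` is an assumed file of `{"question": ..., "expected": ...}` pairs.

```python
# Minimal golden-set evaluation sketch. ask_model() is a hypothetical wrapper
# around your LLM call; golden.jsonl holds {"question": ..., "expected": ...}.
import json

def accuracy(ask_model, path="golden.jsonl") -> float:
    hits = total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            case = json.loads(line)
            answer = ask_model(case["question"]).strip().lower()
            hits += answer == case["expected"].strip().lower()
            total += 1
    return hits / total  # track per model, per prompt version, per week
```

Exact string match is the crudest possible judge; swap in an LLM-as-a-judge or a numeric tolerance as your task demands.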
Growing (for a while) skills:

- **Learning fast.** Learn how to learn faster. You'll need to learn many subjects quickly (especially to judge AI output). But AI can learn faster.
- **Style and art.** Guide AI to write, draw, and code in different styles for different audiences. But AI can learn these too.
- **Data organization.** Learn to structure data to make it more analyzable.

Declining skills:

- **Coding syntax.** AI can write it.
- **Factual recall.** AI can look it up or derive it.
- **Routine domain depth.** Unless you are (or can become) a top expert, AI fills in the gaps. That said: domain depth still matters for problem framing, validation, edge cases, and incentive design. Focus on judgment-heavy applications, not rote recall.
- **Following rules.** AI can implement a process better.
- **Junior-level execution.** Routine grunt work, basic summaries, and entry-level analysis are being fully automated by LLMs.
- **Drafting from scratch.** The ability to write a first draft (code or text) is less valuable than the ability to edit and refine an AI-generated baseline.
- **Business intelligence.** AI can build dashboards, data stories, and more — and is replacing static dashboards with agents that answer questions directly.
- **Data wrangling.** AI can handle data engineering, modeling, analysis, and visualization.
- **Tool expertise.** AI can use tools for you.
- **Intermediation.** AI can translate between groups (e.g. business analysts).
- **Originating ideas in isolation.** AI can brainstorm ideas. Focus on evaluating and selecting based on unique context.

## How to develop taste?

See [How to develop taste](/blog/how-to-develop-taste/). (But AI can develop taste too — build galleries, curate your rejects, and ask AI to cluster and critique your preferences.)

## What happens to people when AI takes their jobs?

Here are some paths post-automation. It depends on the industry _and_ the individual:

1. **Exit**: Don't adapt. There's no nearby "new task." You're unemployed. E.g. bowling pinsetters → automatic pinsetters; elevator operators; telephone switchboard operators.
2. **Downgrade**: Serve the machine. Worse job/pay. E.g. textile workers → power-loom tenders; print compositors → machine operators; shoemakers → factory line operatives.
3. **Pivot**: Focus where automation fails (exceptions, trust, coordination). E.g. bank tellers → relationship managers; travel agents → corporate travel desks.
4. **Niche**: Treat inefficiency as a feature (soul, authenticity). Small market, high margins. E.g. weaving → artisan textiles; coffee → baristas.
5. **Up-skill**: Master the machine. Become AI-native. Much better job/pay. E.g. human computers → programmers; draftsmen → CAD designers; accountants → advisors.

The Jevons Paradox applies here too: making cognitive work cheaper increases total demand for cognitive work rather than reducing it.
Short-term displacement is real. Medium-term, technology creates more jobs than it destroys. The shift is from execution to verification, judgment, and accountability — which is why those skills are now scarce and valuable.