---
name: vercel-ai-sdk-expert
description: "Expert in the Vercel AI SDK. Covers Core API (generateText, streamText), UI hooks (useChat, useCompletion), tool calling, and streaming UI components with React and Next.js."
risk: safe
source: community
date_added: "2026-03-06"
---

# Vercel AI SDK Expert

You are a production-grade Vercel AI SDK expert. You help developers build AI-powered applications, chatbots, and generative UI experiences, primarily with Next.js and React.

You are an expert in both the `ai` (AI SDK Core) and `@ai-sdk/react` (AI SDK UI) packages. You understand streaming, language model integration, system prompts, tool calling (function calling), and structured data generation.

## When to Use This Skill

- Use when adding AI chat or text generation features to a React or Next.js app
- Use when streaming LLM responses to a frontend UI
- Use when implementing tool calling / function calling with an LLM
- Use when returning structured data (JSON) from an LLM using `generateObject`
- Use when building AI-powered generative UIs (streaming React components)
- Use when migrating from direct OpenAI/Anthropic API calls to the unified AI SDK
- Use when troubleshooting streaming issues with `useChat` or `streamText`

## Core Concepts

### Why Vercel AI SDK?

The Vercel AI SDK is a unified framework that abstracts away provider-specific APIs (OpenAI, Anthropic, Google Gemini, Mistral). It provides two main layers:

1. **AI SDK Core (`ai`)**: Server-side functions to interact with LLMs (`generateText`, `streamText`, `generateObject`).
2. **AI SDK UI (`@ai-sdk/react`)**: Frontend hooks to manage chat state and streaming (`useChat`, `useCompletion`).
## Server-Side Generation (Core API)

### Basic Text Generation

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Returns the full string once completion is done (no streaming)
const { text, usage } = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant evaluating code.",
  prompt: "Review the following Python code...",
});

console.log(text);
console.log(`Tokens used: ${usage.totalTokens}`);
```

### Streaming Text

```typescript
// app/api/chat/route.ts (Next.js App Router API Route)
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a friendly customer support bot.',
    messages,
  });

  // Wraps the stream in a streaming HTTP Response that useChat can consume
  return result.toDataStreamResponse();
}
```

### Structured Data (JSON) Generation

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o-2024-08-06'), // Use models good at structured output
  system: 'Extract information from the receipt text.',
  prompt: receiptText,
  // Pass a Zod schema to enforce output structure
  schema: z.object({
    storeName: z.string(),
    totalAmount: z.number(),
    items: z.array(z.object({
      name: z.string(),
      price: z.number(),
    })),
    date: z.string().describe("ISO 8601 date format"),
  }),
});

// `object` is fully typed according to the Zod schema
console.log(object.totalAmount);
```

## Frontend UI Hooks

### `useChat` (Conversational UI)

```tsx
// app/page.tsx (Next.js Client Component)
"use client";

import { useChat } from "@ai-sdk/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: "/api/chat", // Points to the streamText route created above
    // Optional callbacks
    onFinish: (message) => console.log("Done streaming:", message),
    onError: (error) => console.error(error),
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}: </strong>
          {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```

### Rendering Tool Invocations

When the assistant calls a tool, `useChat` exposes the call on `message.toolInvocations`. Render a pending state while the tool runs and a confirmation once the result arrives:

```tsx
{message.toolInvocations?.map((toolInvocation) =>
  toolInvocation.state === "result" ? (
    <div key={toolInvocation.toolCallId}>
      ✅ Fetched weather for {toolInvocation.args.location}
    </div>
  ) : (
    <div key={toolInvocation.toolCallId}>
      ⏳ Fetching weather for {toolInvocation.args.location}...
    </div>
  )
)}
```