Vercel AI SDK: The Complete Guide to Building AI-Powered Applications
The Vercel AI SDK has revolutionized how developers integrate artificial intelligence into web applications. As a TypeScript toolkit designed for modern frameworks like React, Next.js, Vue, and Svelte, it abstracts away the complexity of working with different AI providers while providing a unified, developer-friendly API.
What is the Vercel AI SDK?
The Vercel AI SDK is a comprehensive TypeScript toolkit that standardizes AI model integration across multiple providers. Instead of learning different APIs for OpenAI, Anthropic, Google, and other providers, developers can use a single, consistent interface to build AI-powered applications.
Architecture Overview:
- Your Application connects to Vercel AI SDK
- SDK provides a unified interface to multiple providers (see the sketch after this list):
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude 3)
- Google AI (Gemini)
- Hugging Face (Open Source Models)
- Other Custom APIs
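As a minimal sketch of that unified interface, the call below works unchanged across providers; the model IDs match those used later in this guide:

```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

// The same call shape works for any provider; swap the model line to switch.
const { text } = await generateText({
  model: openai('gpt-4'), // or: anthropic('claude-3-sonnet-20240229')
  prompt: 'Summarize the benefits of TypeScript in one sentence.',
});

console.log(text);
```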
Why Choose Vercel AI SDK?
Provider Agnostic
- Switch between AI providers without changing your code
- Support for 20+ model providers including OpenAI, Anthropic, Google, and more
- Consistent API across all providers
Framework Integration
- Built-in hooks for React, Next.js, Vue, and Svelte
- Server-side and client-side AI capabilities
- Streaming support for real-time responses
Developer Experience
- TypeScript-first with excellent type safety
- Comprehensive documentation and examples
- Active community and regular updates
Core Components
The AI SDK consists of two main libraries:
1. AI SDK Core
Provides the foundational APIs for:
- Text generation
- Structured object generation
- Tool calling and function execution
- Building AI agents
2. AI SDK UI
Framework-specific hooks and components for:
- Chat interfaces
- Streaming responses
- Loading states
- Error handling
SDK Components:
AI SDK Core:
- generateText → Single Response
- generateObject → Structured Data
- streamText → Real-time Stream
- tool → Function Calls
AI SDK UI:
- useChat → Chat Interface
- useCompletion → Text Generation (sketched after this list)
- useObject → Structured Object Streaming
- useAssistant → Assistant API
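The chat examples below center on useChat. For comparison, here is a minimal useCompletion sketch; it assumes a streaming /api/completion route on the server, which this guide does not build:

```tsx
'use client';

import { useCompletion } from 'ai/react';

// Minimal sketch: single-prompt text generation with streaming output.
// Assumes a server route at /api/completion (not shown in this guide).
export default function CompletionDemo() {
  const { completion, input, handleInputChange, handleSubmit, isLoading } =
    useCompletion({ api: '/api/completion' });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} placeholder="Enter a prompt..." />
      <button type="submit" disabled={isLoading}>Generate</button>
      <p className="whitespace-pre-wrap">{completion}</p>
    </form>
  );
}
```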
Getting Started: Installation and Setup
Let's start with a practical example by building a Next.js application with AI capabilities.
Initial Setup
```bash
# Create a new Next.js project
npx create-next-app@latest ai-chat-app --typescript --tailwind --eslint
cd ai-chat-app

# Install Vercel AI SDK
npm install ai @ai-sdk/openai @ai-sdk/anthropic

# Install additional dependencies
npm install zod
```

Environment Configuration

Add your provider API keys to a `.env.local` file in the project root:

```bash
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```

Real-Time Example 1: Basic Chat Application
Let's build a complete chat application with streaming responses and multiple AI providers.
Chat Flow:
- User types a message in the chat component
- Component sends a POST request to /api/chat
- API route calls streamText() from the AI SDK
- SDK makes an API request to the AI provider
- Provider returns a streaming response
- SDK processes the stream chunks
- API streams data to the component via SSE
- Component displays real-time updates to the user

Note: the useChat hook handles streaming automatically, and the SDK provides the provider abstraction.
Backend API Route
```typescript
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';
import { NextRequest } from 'next/server';

export async function POST(req: NextRequest) {
  try {
    const { messages, provider = 'openai', model } = await req.json();

    // Select provider and model
    let selectedModel;
    switch (provider) {
      case 'openai':
        selectedModel = openai(model || 'gpt-4');
        break;
      case 'anthropic':
        selectedModel = anthropic(model || 'claude-3-sonnet-20240229');
        break;
      default:
        selectedModel = openai('gpt-4');
    }

    // Generate streaming response
    const result = await streamText({
      model: selectedModel,
      messages,
      system: `You are a helpful AI assistant. Provide accurate, helpful, and engaging responses.
Current date: ${new Date().toLocaleDateString()}`,
      temperature: 0.7,
      maxTokens: 1000,
    });

    return result.toAIStreamResponse();
  } catch (error) {
    console.error('Chat API error:', error);
    return new Response('Internal Server Error', { status: 500 });
  }
}
```

Frontend Chat Component
```tsx
'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';
import { Send, Bot, User, Settings } from 'lucide-react';

export default function ChatInterface() {
  const [provider, setProvider] = useState('openai');
  const [model, setModel] = useState('gpt-4');
  const [showSettings, setShowSettings] = useState(false);

  const { messages, input, handleInputChange, handleSubmit, isLoading, error } = useChat({
    api: '/api/chat',
    body: { provider, model },
  });

  const providers = {
    openai: ['gpt-4', 'gpt-3.5-turbo'],
    anthropic: ['claude-3-sonnet-20240229', 'claude-3-haiku-20240307'],
  };

  return (
    <div className="flex flex-col h-screen max-w-4xl mx-auto bg-white">
      {/* Header */}
      <div className="flex items-center justify-between p-4 border-b bg-gray-50">
        <h1 className="text-xl font-semibold text-gray-800">AI Chat Assistant</h1>
        <button
          onClick={() => setShowSettings(!showSettings)}
          className="p-2 rounded-lg hover:bg-gray-200 transition-colors"
        >
          <Settings className="w-5 h-5" />
        </button>
      </div>

      {/* Settings Panel */}
      {showSettings && (
        <div className="p-4 bg-blue-50 border-b">
          <div className="flex gap-4">
            <div>
              <label className="block text-sm font-medium text-gray-700 mb-1">Provider</label>
              <select
                value={provider}
                onChange={(e) => setProvider(e.target.value)}
                className="px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
              >
                <option value="openai">OpenAI</option>
                <option value="anthropic">Anthropic</option>
              </select>
            </div>
            <div>
              <label className="block text-sm font-medium text-gray-700 mb-1">Model</label>
              <select
                value={model}
                onChange={(e) => setModel(e.target.value)}
                className="px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
              >
                {providers[provider as keyof typeof providers]?.map((m) => (
                  <option key={m} value={m}>
                    {m}
                  </option>
                ))}
              </select>
            </div>
          </div>
        </div>
      )}

      {/* Messages */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.length === 0 && (
          <div className="text-center text-gray-500 mt-8">
            <Bot className="w-12 h-12 mx-auto mb-4 text-gray-400" />
            <p>Start a conversation with your AI assistant!</p>
          </div>
        )}

        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'}`}
          >
            <div
              className={`flex max-w-xs lg:max-w-md xl:max-w-lg ${
                message.role === 'user' ? 'flex-row-reverse' : 'flex-row'
              }`}
            >
              <div className={`flex-shrink-0 ${message.role === 'user' ? 'ml-3' : 'mr-3'}`}>
                {message.role === 'user' ? (
                  <div className="w-8 h-8 bg-blue-500 rounded-full flex items-center justify-center">
                    <User className="w-4 h-4 text-white" />
                  </div>
                ) : (
                  <div className="w-8 h-8 bg-gray-500 rounded-full flex items-center justify-center">
                    <Bot className="w-4 h-4 text-white" />
                  </div>
                )}
              </div>
              <div
                className={`px-4 py-2 rounded-lg ${
                  message.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-100 text-gray-800'
                }`}
              >
                <div className="whitespace-pre-wrap">{message.content}</div>
              </div>
            </div>
          </div>
        ))}

        {isLoading && (
          <div className="flex justify-start">
            <div className="flex mr-3">
              <div className="w-8 h-8 bg-gray-500 rounded-full flex items-center justify-center">
                <Bot className="w-4 h-4 text-white" />
              </div>
            </div>
            <div className="bg-gray-100 px-4 py-2 rounded-lg">
              <div className="flex space-x-1">
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce"></div>
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce" style={{ animationDelay: '0.1s' }}></div>
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce" style={{ animationDelay: '0.2s' }}></div>
              </div>
            </div>
          </div>
        )}
      </div>

      {/* Input Form */}
      <div className="border-t p-4">
        {error && (
          <div className="mb-4 p-3 bg-red-100 border border-red-400 text-red-700 rounded">
            Error: {error.message}
          </div>
        )}

        <form onSubmit={handleSubmit} className="flex space-x-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Type your message..."
            disabled={isLoading}
            className="flex-1 px-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500 disabled:opacity-50"
          />
          <button
            type="submit"
            disabled={isLoading || !input.trim()}
            className="px-4 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-blue-500 disabled:opacity-50 disabled:cursor-not-allowed"
          >
            <Send className="w-4 h-4" />
          </button>
        </form>
      </div>
    </div>
  );
}
```

Real-Time Example 2: AI-Powered Code Generator
Let's build an application that generates code based on user requirements.
Code Generation API
```typescript
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

// Schema describing the structured output we want back from the model
const codeSchema = z.object({
  code: z.string().describe('The generated code'),
  explanation: z.string().describe('Explanation of how the code works'),
  dependencies: z.array(z.string()).describe('Required dependencies'),
  language: z.string().describe('Programming language used'),
});

export async function POST(req: Request) {
  const { prompt, language, framework } = await req.json();

  const result = await generateObject({
    model: openai('gpt-4'),
    system: `Generate high-quality, production-ready code for ${language}${framework ? ` using ${framework}` : ''}`,
    prompt,
    schema: codeSchema,
  });

  return Response.json(result.object);
}
```
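Calling this endpoint from the client is a plain fetch. The /api/generate-code path below is an assumption (the guide does not name the route file); the response shape follows codeSchema:

```tsx
// Hypothetical client helper; the /api/generate-code path is assumed.
async function generateCode(prompt: string) {
  const res = await fetch('/api/generate-code', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, language: 'TypeScript' }),
  });

  // Matches codeSchema: { code, explanation, dependencies, language }
  const { code, explanation, dependencies } = await res.json();
  console.log(explanation, dependencies);
  return code;
}
```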
Real-Time Example 3: Streaming with Tools
Here's an advanced example showing how to use tools with streaming:
Tool Usage Flow:
- User Query → AI Model
- AI Model determines if tools are needed
- If tools needed:
- Tool Selection (Weather, Calculator, Custom)
- Tool Execution (API calls, calculations, external services)
- Tool Results processed by AI
- If no tools needed: Direct Response
- Final Response streamed to user
Tools API Route
```typescript
// app/api/tools/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get weather information for a city',
  parameters: z.object({
    city: z.string().describe('The city to get weather for'),
  }),
  execute: async ({ city }) => {
    // Simulate a weather API call
    return {
      city,
      temperature: Math.round(Math.random() * 30 + 10),
      condition: ['sunny', 'cloudy', 'rainy'][Math.floor(Math.random() * 3)],
    };
  },
});

const calculatorTool = tool({
  description: 'Perform mathematical calculations',
  parameters: z.object({
    expression: z.string().describe('Mathematical expression to evaluate'),
  }),
  execute: async ({ expression }) => {
    try {
      // Simple evaluation (in production, use a proper math parser; eval is unsafe)
      const result = eval(expression);
      return { expression, result };
    } catch (error) {
      return { expression, error: 'Invalid expression' };
    }
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4'),
    messages,
    tools: {
      weather: weatherTool,
      calculator: calculatorTool,
    },
    system: 'You are a helpful assistant with access to weather and calculator tools.',
  });

  return result.toAIStreamResponse();
}
```

Tools Chat Component
```tsx
'use client';

import { useChat } from 'ai/react';

export default function ToolsChat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/tools',
  });

  return (
    <div className="max-w-2xl mx-auto p-6">
      <h1 className="text-2xl font-bold mb-6">AI Assistant with Tools</h1>

      <div className="space-y-4 mb-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`p-4 rounded-lg ${
              message.role === 'user' ? 'bg-blue-100 ml-8' : 'bg-gray-100 mr-8'
            }`}
          >
            <div className="font-semibold mb-2">
              {message.role === 'user' ? 'You' : 'Assistant'}
            </div>
            <div className="whitespace-pre-wrap">{message.content}</div>

            {/* Display tool calls */}
            {message.toolInvocations?.map((toolInvocation) => (
              <div key={toolInvocation.toolCallId} className="mt-2 p-2 bg-yellow-100 rounded">
                <div className="text-sm font-medium">Tool: {toolInvocation.toolName}</div>
                <div className="text-sm">Args: {JSON.stringify(toolInvocation.args)}</div>
                {/* Narrow the union: only completed invocations carry a result */}
                {'result' in toolInvocation && (
                  <div className="text-sm">Result: {JSON.stringify(toolInvocation.result)}</div>
                )}
              </div>
            ))}
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about weather or math calculations..."
          className="flex-1 px-4 py-2 border rounded-lg"
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="px-6 py-2 bg-blue-500 text-white rounded-lg disabled:opacity-50"
        >
          Send
        </button>
      </form>
    </div>
  );
}
```

Advanced Features
1. Custom Providers
```typescript
import { createOpenAI } from '@ai-sdk/openai';

export const customOpenAI = createOpenAI({
  baseURL: 'https://your-custom-endpoint.com/v1',
  apiKey: process.env.CUSTOM_API_KEY,
});
```
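Using the custom provider then looks the same as the built-in ones. The import path and model ID below are placeholders:

```typescript
import { generateText } from 'ai';
import { customOpenAI } from './custom-provider'; // placeholder path

// Any model ID your custom endpoint serves can be passed here.
const { text } = await generateText({
  model: customOpenAI('your-model-id'), // placeholder model ID
  prompt: 'Hello from a custom endpoint!',
});
```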
2. Middleware and Rate Limiting
Rate Limiting Flow:
- Incoming Request → Middleware
- Rate Limit Check:
  - Under Limit: Process Request → AI SDK Processing → Provider API Call → Response → Return to Client
  - Over Limit: Return 429 Rate Limited Response
- Middleware tracks: IP, Request Counter, Time Window
```typescript
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

// Note: in-memory state only works on a single, long-lived server instance.
const rateLimitMap = new Map();

export function middleware(request: NextRequest) {
  const ip = request.ip ?? '127.0.0.1';
  const limit = 10; // requests per minute
  const windowMs = 60 * 1000; // 1 minute

  if (!rateLimitMap.has(ip)) {
    rateLimitMap.set(ip, { count: 0, lastReset: Date.now() });
  }

  const ipData = rateLimitMap.get(ip);

  // Reset the counter once the time window has elapsed
  if (Date.now() - ipData.lastReset > windowMs) {
    ipData.count = 0;
    ipData.lastReset = Date.now();
  }

  if (ipData.count >= limit) {
    return new NextResponse('Too Many Requests', { status: 429 });
  }

  ipData.count += 1;
  return NextResponse.next();
}

export const config = {
  matcher: '/api/:path*',
};
```

3. Error Handling and Fallbacks
Provider Fallback Strategy:
- AI Request → Primary Provider (OpenAI GPT-4)
- If the primary fails → Try Fallback 1 (Anthropic Claude)
- If Fallback 1 fails → Try Fallback 2 (Google Gemini)
- If all providers fail → Return Error Response
- On any success → Return Response
```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

export async function generateWithFallback(prompt: string) {
  const providers = [
    () => generateText({ model: openai('gpt-4'), prompt }),
    () => generateText({ model: anthropic('claude-3-sonnet-20240229'), prompt }),
  ];

  for (const provider of providers) {
    try {
      return await provider();
    } catch (error) {
      console.error('Provider failed:', error);
      continue;
    }
  }

  throw new Error('All providers failed');
}
```
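A call site might look like this; whichever provider succeeds first supplies the result:

```typescript
// generateText results expose the generated text on the `text` property.
const { text } = await generateWithFallback('Explain the CAP theorem in two sentences.');
console.log(text);
```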
Best Practices
1. Security
- Always validate inputs on the server side (see the sketch after this list)
- Implement rate limiting and authentication
- Use environment variables for API keys
- Sanitize user inputs to prevent injection attacks
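As a sketch of the validation and sanitization points, the chat route's body could be checked with zod (already installed earlier); the exact schema shape is an assumption:

```typescript
import { z } from 'zod';

// Hypothetical request schema for the chat route; the shape is an assumption.
const chatRequestSchema = z.object({
  messages: z.array(
    z.object({
      role: z.enum(['user', 'assistant', 'system']),
      content: z.string().max(4000), // cap message length to limit abuse
    })
  ),
  provider: z.enum(['openai', 'anthropic']).default('openai'),
});

export async function POST(req: Request) {
  const parsed = chatRequestSchema.safeParse(await req.json());
  if (!parsed.success) {
    return new Response('Invalid request', { status: 400 });
  }
  // ...hand parsed.data.messages to streamText() as in the chat route above
  return new Response('OK');
}
```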
2. Performance
- Use streaming for better user experience
- Implement caching for repeated requests (sketched after this list)
- Optimize token usage with appropriate max tokens
- Use appropriate models for different tasks
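As a sketch of the caching and model-choice points, a minimal in-memory cache might look like the following; a shared store such as Redis would be the usual production choice:

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Naive in-memory cache keyed by prompt; resets on every server restart.
const cache = new Map<string, string>();

export async function cachedGenerate(prompt: string): Promise<string> {
  const hit = cache.get(prompt);
  if (hit) return hit; // repeated request: skip the provider call entirely

  const { text } = await generateText({
    model: openai('gpt-3.5-turbo'), // cheaper model for simple tasks
    prompt,
    maxTokens: 500, // cap token usage
  });

  cache.set(prompt, text);
  return text;
}
```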
3. User Experience
- Provide loading states and error handling
- Implement graceful degradation
- Add retry mechanisms for failed requests (sketched after this list)
- Show progress indicators for long operations
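One way to implement the retry point is a small wrapper with exponential backoff; the attempt count and delays below are illustrative, not recommendations from this guide:

```typescript
// Hypothetical retry wrapper; works with any of the generate calls above.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // back off: 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```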
4. Cost Optimization
- Monitor token usage and costs
- Use cheaper models for simple tasks (see the router sketch after this list)
- Implement request deduplication
- Cache responses when appropriate
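As a sketch of the cheaper-models point, a simple router can pick a model by task complexity; the length heuristic below is a placeholder for whatever classification fits your application:

```typescript
import { openai } from '@ai-sdk/openai';

// Hypothetical model router: short prompts go to the cheaper model.
export function pickModel(prompt: string) {
  return prompt.length < 200 ? openai('gpt-3.5-turbo') : openai('gpt-4');
}
```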
Conclusion
The Vercel AI SDK represents a significant leap forward in AI application development, offering developers a powerful, flexible, and easy-to-use toolkit for building sophisticated AI-powered applications. Its provider-agnostic approach, excellent TypeScript support, and comprehensive feature set make it an ideal choice for modern web development.
Whether you're building simple chatbots, complex multi-modal applications, or AI-powered tools, the Vercel AI SDK provides the foundation you need to create exceptional user experiences. The examples we've explored demonstrate just a fraction of what's possible with this powerful toolkit.
As AI continues to evolve, the Vercel AI SDK positions developers to take advantage of new models and capabilities without having to rewrite their applications. Start building today and join the future of AI-powered web development!
For more information, visit the official Vercel AI SDK documentation and explore the extensive examples and guides available.