| name | better-chatbot |
| description | This skill provides project-specific coding conventions, architectural principles, repository structure standards, testing patterns, and contribution guidelines for the better-chatbot project (https://github.com/cgoinglove/better-chatbot). Use this skill when contributing to or working with better-chatbot to understand the design philosophy and ensure code follows established patterns. Includes: API architecture deep-dive, three-tier tool system (MCP/Workflow/Default), component design patterns, database repository patterns, architectural principles (progressive enhancement, defensive programming, streaming-first), practical templates for adding features (tools, routes, repositories). Use when: working in better-chatbot repository, contributing features/fixes, understanding architectural decisions, following server action validators, implementing tools/workflows, setting up Playwright tests, adding API routes, designing database queries, building UI components, handling multi-AI provider integration Keywords: better-chatbot, chatbot contribution, better-chatbot standards, chatbot development, AI chatbot patterns, API architecture, three-tier tool system, repository pattern, progressive enhancement, defensive programming, streaming-first, compound component pattern, Next.js chatbot, Vercel AI SDK chatbot, MCP tools, workflow builder, server action validators, tool abstraction, DAG workflows, shared business logic, safe() wrapper, tool lifecycle |
| license | MIT |
better-chatbot Contribution & Standards Skill
Status: Production Ready
Last Updated: 2025-11-04 (v2.1.0 - Added extension points + UX patterns)
Dependencies: None (references better-chatbot project)
Latest Versions: Next.js 15.3.2, Vercel AI SDK 5.0.82, Better Auth 1.3.34, Drizzle ORM 0.41.0
Overview
better-chatbot is an open-source AI chatbot platform for individuals and teams, built with Next.js 15 and Vercel AI SDK v5. It combines multi-model AI support (OpenAI, Anthropic, Google, xAI, Ollama, OpenRouter) with advanced features like MCP (Model Context Protocol) tool integration, visual workflow builder, realtime voice assistant, and team collaboration.
This skill teaches Claude the project-specific conventions and patterns used in better-chatbot to ensure contributions follow established standards and avoid common pitfalls.
Project Architecture
Directory Structure
better-chatbot/
├── src/
│ ├── app/ # Next.js App Router + API routes
│ │ ├── api/[resource]/ # RESTful API organized by domain
│ │ ├── (auth)/ # Auth route group
│ │ ├── (chat)/ # Chat UI route group
│ │ └── store/ # Zustand stores
│ ├── components/ # UI components by domain
│ │ ├── layouts/
│ │ ├── agent/
│ │ ├── chat/
│ │ └── export/
│ ├── lib/ # Core logic and utilities
│ │ ├── action-utils.ts # Server action validators (CRITICAL)
│ │ ├── ai/ # AI integration (models, tools, MCP, speech)
│ │ ├── db/ # Database (Drizzle ORM + repositories)
│ │ ├── validations/ # Zod schemas
│ │ └── [domain]/ # Domain-specific helpers
│ ├── hooks/ # Custom React hooks
│ │ ├── queries/ # Data fetching hooks
│ │ └── use-*.ts
│ └── types/ # TypeScript types by domain
├── tests/ # E2E tests (Playwright)
├── docs/ # Setup guides and tips
├── docker/ # Docker configs
└── drizzle/ # Database migrations
API Architecture & Design Patterns
Route Structure Philosophy
Convention: RESTful resources with Next.js App Router conventions
/api/[resource]/route.ts → GET/POST collection endpoints
/api/[resource]/[id]/route.ts → GET/PUT/DELETE item endpoints
/api/[resource]/actions.ts → Server actions (mutations)
Standard Route Handler Pattern
Location: src/app/api/
Template structure:
export async function POST(request: Request) {
try {
// 1. Parse and validate request body with Zod
const json = await request.json();
const parsed = zodSchema.parse(json);
// 2. Check authentication
const session = await getSession();
if (!session?.user.id) return new Response("Unauthorized", { status: 401 });
// 3. Check authorization (ownership/permissions)
if (resource.userId !== session.user.id) return new Response("Forbidden", { status: 403 });
// 4. Load/compose dependencies (tools, context, etc.)
const tools = await loadMcpTools({ mentions, allowedMcpServers });
// 5. Execute with streaming if applicable
const stream = createUIMessageStream({ execute: async ({ writer }) => { ... } });
// 6. Return response
return createUIMessageStreamResponse({ stream });
} catch (error) {
logger.error(error);
return Response.json({ message: error.message }, { status: 500 });
}
}
Shared Business Logic Pattern
Key Insight: Extract complex orchestration logic into shared utilities
Example: src/app/api/chat/shared.chat.ts
This file demonstrates how to handle:
- Tool loading (loadMcpTools, loadWorkFlowTools, loadAppDefaultTools)
- Filtering and composition (filterMCPToolsByMentions, excludeToolExecution)
- System prompt building (mergeSystemPrompt)
- Manual tool execution handling
Pattern:
// Shared utility function
export const loadMcpTools = (opt?) =>
safe(() => mcpClientsManager.tools())
.map((tools) => {
if (opt?.mentions?.length) {
return filterMCPToolsByMentions(tools, opt.mentions);
}
return filterMCPToolsByAllowedMCPServers(tools, opt?.allowedMcpServers);
})
.orElse({} as Record<string, VercelAIMcpTool>);
// Used in multiple routes
// - /api/chat/route.ts
// - /api/chat/temporary/route.ts
// - /api/workflow/[id]/execute/route.ts
Why: DRY principle, single source of truth, consistent behavior
Defensive Programming with safe()
Library: ts-safe for functional error handling
Philosophy: Never crash the chat - degrade features gracefully
// Returns empty object on failure, chat continues
const MCP_TOOLS = await safe()
.map(errorIf(() => !isToolCallAllowed && "Not allowed"))
.map(() => loadMcpTools({ mentions, allowedMcpServers }))
.orElse({}); // Graceful fallback
Streaming-First Architecture
Pattern: Use Vercel AI SDK streaming utilities
// In route handler
const stream = createUIMessageStream({
execute: async ({ writer }) => {
// Stream intermediate results
writer.write({ type: "text", content: "Processing..." });
// Execute with streaming
const result = await streamText({
model,
messages,
tools,
onChunk: (chunk) => writer.write({ type: "text-delta", delta: chunk })
});
return { output: result };
}
});
return createUIMessageStreamResponse({ stream });
Why: Live feedback, better UX, handles long-running operations
Tool System Deep Dive
Three-Tier Tool Architecture
Design Goal: Balance extensibility (MCP), composability (workflows), and batteries-included (default tools)
Tier 1: MCP Tools (External)
↓ Can be used in
Tier 2: Workflow Tools (User-Created)
↓ Can be used in
Tier 3: Default Tools (Built-In)
Tier 1: MCP Tools (External Integrations)
Location: src/lib/ai/mcp/
Philosophy: Model Context Protocol servers become first-class tools
Manager Pattern:
// mcp-manager.ts - Singleton for all MCP clients
export const mcpClientsManager = globalThis.__mcpClientsManager__;
// API:
mcpClientsManager.init() // Initialize configured servers
mcpClientsManager.getClients() // Get connected clients
mcpClientsManager.tools() // Get all tools as Vercel AI SDK tools
mcpClientsManager.toolCall(serverId, toolName, args) // Execute tool
Why Global Singleton?
- Next.js dev hot-reloading → reconnecting MCP servers on every change is expensive
- Persists across HMR updates
- Production: only one instance needed
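A minimal sketch of this HMR-safe singleton, assuming an illustrative createMCPClientsManager factory (the real manager in src/lib/ai/mcp/ has a richer API):
// Sketch: keep one manager instance alive across Next.js dev hot reloads.
// createMCPClientsManager and the interface below are illustrative stand-ins.
interface MCPClientsManagerLike {
  init(): Promise<void>;
  tools(): Promise<Record<string, unknown>>;
}
declare global {
  var __mcpClientsManager__: MCPClientsManagerLike | undefined;
}
function createMCPClientsManager(): MCPClientsManagerLike {
  return { init: async () => {}, tools: async () => ({}) }; // real version connects to configured MCP servers
}
// Reuse the instance that survived the previous hot reload, else create and cache one.
export const mcpClientsManager: MCPClientsManagerLike =
  globalThis.__mcpClientsManager__ ?? createMCPClientsManager();
if (process.env.NODE_ENV !== "production") {
  globalThis.__mcpClientsManager__ = mcpClientsManager;
}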
Tool Wrapping:
// MCP tools are tagged with metadata for filtering
type VercelAIMcpTool = Tool & {
_mcpServerId: string;
_originToolName: string;
_toolName: string; // Transformed for AI SDK
};
// Branded type for runtime checking
VercelAIMcpToolTag.create(tool)
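A simplified sketch of how such a branded tag can work at runtime (a plain marker property is used here for brevity; the project's actual tag implementation may differ):
// Sketch: brand an object with a marker property, then narrow on it later.
type Tagged<T> = T & { readonly __vercelAiMcpTool: true };
export const McpToolTagSketch = {
  // Brand a tool so downstream code can recognize it as an MCP tool.
  create<T extends object>(tool: T): Tagged<T> {
    return Object.assign(tool, { __vercelAiMcpTool: true as const }) as Tagged<T>;
  },
  // Type guard used before MCP-specific execution paths.
  isMaybe(tool: unknown): tool is Tagged<object> {
    return typeof tool === "object" && tool !== null && "__vercelAiMcpTool" in tool;
  },
};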
Tier 2: Workflow Tools (Visual Composition)
Location: src/lib/ai/workflow/
Philosophy: Visual workflows become callable tools via @workflow_name
Node Types:
enum NodeKind {
Input = "input", // Entry point
LLM = "llm", // AI reasoning
Tool = "tool", // Call MCP/default tools
Http = "http", // HTTP requests
Template = "template",// Text processing
Condition = "condition", // Branching logic
Output = "output", // Exit point
}
Execution with Streaming:
// Workflows stream intermediate results
executor.subscribe((e) => {
if (e.eventType == "NODE_START") {
dataStream.write({
type: "tool-output-available",
toolCallId,
output: { status: "running", node: e.nodeId }
});
}
if (e.eventType == "NODE_END") {
dataStream.write({
type: "tool-output-available",
toolCallId,
output: { status: "complete", result: e.result }
});
}
});
Key Feature: Live progress updates in chat UI
Tier 3: Default Tools (Built-In Capabilities)
Location: src/lib/ai/tools/
Categories:
export const APP_DEFAULT_TOOL_KIT = {
[AppDefaultToolkit.Visualization]: {
CreatePieChart, CreateBarChart, CreateLineChart,
CreateTable, CreateTimeline
},
[AppDefaultToolkit.WebSearch]: {
WebSearch, WebContent
},
[AppDefaultToolkit.Http]: {
Http
},
[AppDefaultToolkit.Code]: {
JavascriptExecution, PythonExecution
},
};
Tool Implementation Pattern:
// Execution returns "Success", rendering happens client-side
export const createTableTool = createTool({
description: "Create an interactive table...",
inputSchema: z.object({
title: z.string(),
columns: z.array(...),
data: z.array(...)
}),
execute: async () => "Success"
});
// Client-side rendering in components/tool-invocation/
export function InteractiveTable({ part }) {
const args = part.input;
return <DataTable columns={args.columns} data={args.data} />;
}
Why Separation?
- Server: Pure data/business logic
- Client: Rich visualization/interaction
- Easier testing, better performance
Tool Lifecycle
1. Request → /api/chat/route.ts
2. Parse mentions (@tool, @workflow, @agent)
3. Load tools based on mentions/permissions:
- loadMcpTools() → filters by mentions or allowedMcpServers
- loadWorkFlowTools() → converts workflows to tools
- loadAppDefaultTools() → filters default toolkits
4. Merge all tools into single Record<string, Tool>
5. Handle toolChoice mode:
- "manual" → LLM proposes, user confirms
- "auto" → full execution
- "none" → no tools loaded
6. Pass tools to streamText()
7. Stream results back
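A condensed sketch of steps 3-6, assuming the shared loader signatures described earlier (declared as stubs here) and the Vercel AI SDK streamText call:
// Sketch of the composition above; loader signatures are assumed stubs.
import { streamText, type Tool, type LanguageModel, type ModelMessage } from "ai";
declare function loadMcpTools(opt?: { mentions?: string[]; allowedMcpServers?: string[] }): Promise<Record<string, Tool>>;
declare function loadWorkFlowTools(opt?: { mentions?: string[] }): Promise<Record<string, Tool>>;
declare function loadAppDefaultTools(opt?: { mentions?: string[] }): Promise<Record<string, Tool>>;

async function composeAndRun(
  model: LanguageModel,
  messages: ModelMessage[],
  opts: { mentions?: string[]; allowedMcpServers?: string[]; toolChoice: "auto" | "manual" | "none" },
) {
  // Steps 3-4: load each tier, then merge into a single Record<string, Tool>.
  const tools =
    opts.toolChoice === "none"
      ? {}
      : {
          ...(await loadAppDefaultTools(opts)),
          ...(await loadWorkFlowTools(opts)),
          ...(await loadMcpTools(opts)),
        };
  // Step 5 (manual mode) would additionally strip execute handlers so the LLM only proposes calls.
  // Steps 6-7: hand the merged tools to streamText and stream results back.
  return streamText({ model, messages, tools });
}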
Convention-Based Extension
Adding a new tool type is simple:
- Add enum value to AppDefaultToolkit
- Implement tool with createTool()
- Add to APP_DEFAULT_TOOL_KIT
- Tool automatically available via @toolname
Component & Design Philosophy
Organization by Feature
Location: src/components/
components/
├── ui/ → shadcn/ui primitives
├── layouts/ → App structure
├── agent/ → Agent-specific
├── workflow/ → Workflow editor
├── tool-invocation/ → Tool result rendering
└── *.tsx → Shared components
Principle: Group by feature, not by type
Compound Component Pattern
Example: message.tsx + message-parts.tsx
Philosophy: Break complex components into composable parts
// message.tsx exports multiple related components
export function PreviewMessage({ message }) { ... }
export function ErrorMessage({ error }) { ... }
// message-parts.tsx handles polymorphic content
export function MessageParts({ parts }) {
return parts.map(part => {
if (isToolUIPart(part)) return <ToolInvocation part={part} />;
if (part.type === 'text') return <Markdown text={part.text} />;
// ... other types
});
}
Client Component Wrapper Pattern
Example: chat-bot.tsx
Structure:
export default function ChatBot({ threadId, initialMessages }) {
// 1. State management (Zustand)
const [model, toolChoice] = appStore(useShallow(state => [...]));
// 2. Vercel AI SDK hook
const { messages, append, status } = useChat({
id: threadId,
initialMessages,
body: { chatModel: model, toolChoice },
});
// 3. Render orchestration
return (
<>
<ChatGreeting />
<MessageList messages={messages} />
<PromptInput onSubmit={append} />
</>
);
}
Why: Top-level orchestrates, delegates rendering to specialized components
Tool Result Rendering Separation
Key Architecture Decision:
- Tool execution lives in lib/ai/tools/
- Tool rendering lives in components/tool-invocation/
Example:
// Server-side (lib/ai/tools/create-table.ts)
execute: async (params) => "Success"
// Client-side (components/tool-invocation/interactive-table.tsx)
export function InteractiveTable({ part }) {
const { columns, data } = part.input;
return <DataTable columns={columns} data={data} />;
}
Benefits:
- Clear separation of concerns
- Easier testing
- Client can be rich/interactive without server complexity
Database & Repository Patterns
Repository Pattern Architecture
Location: src/lib/db/
Structure:
db/
├── repository.ts → Single import point
├── pg/
│ ├── db.pg.ts → Drizzle connection
│ ├── schema.pg.ts → Table definitions
│ └── repositories/ → Feature queries
└── migrations/ → Drizzle migrations
Philosophy: Abstract DB behind repository interfaces
Interface-First Design
Pattern:
// 1. Define interface in src/types/[domain].ts
export type ChatRepository = {
insertThread(thread: Omit<ChatThread, "createdAt">): Promise<ChatThread>;
selectThread(id: string): Promise<ChatThread | null>;
selectThreadDetails(id: string): Promise<ThreadDetails | null>;
};
// 2. Implement in src/lib/db/pg/repositories/[domain]-repository.pg.ts
export const pgChatRepository: ChatRepository = {
selectThreadDetails: async (id: string) => {
const [thread] = await db
.select()
.from(ChatThreadTable)
.leftJoin(UserTable, eq(ChatThreadTable.userId, UserTable.id))
.where(eq(ChatThreadTable.id, id));
if (!thread) return null;
const messages = await pgChatRepository.selectMessagesByThreadId(id);
return {
id: thread.chat_thread.id,
title: thread.chat_thread.title,
userId: thread.chat_thread.userId,
createdAt: thread.chat_thread.createdAt,
userPreferences: thread.user?.preferences,
messages,
};
},
};
// 3. Export from src/lib/db/repository.ts
export const chatRepository = pgChatRepository;
Why:
- Easy to swap implementations (pg → sqlite)
- Testable without database
- Consistent API across codebase
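Because callers depend only on the interface, tests can substitute an in-memory double; a minimal sketch (types simplified from the interface shown above):
// In-memory double for unit tests - no PostgreSQL needed.
type ChatThreadLike = { id: string; title: string; userId: string; createdAt: Date };
const threads = new Map<string, ChatThreadLike>();
export const inMemoryChatRepository = {
  insertThread: async (thread: Omit<ChatThreadLike, "createdAt">): Promise<ChatThreadLike> => {
    const stored = { ...thread, createdAt: new Date() };
    threads.set(stored.id, stored);
    return stored;
  },
  selectThread: async (id: string): Promise<ChatThreadLike | null> => threads.get(id) ?? null,
};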
Query Optimization Strategies
1. Indexes on Foreign Keys:
export const ChatThreadTable = pgTable("chat_thread", {
id: uuid("id").primaryKey(),
userId: uuid("user_id").references(() => UserTable.id),
}, (table) => ({
userIdIdx: index("chat_thread_user_id_idx").on(table.userId),
}));
2. Selective Loading:
// Load minimal data
selectThread(id) → { id, title, userId, createdAt }
// Load full data when needed
selectThreadDetails(id) → { ...thread, messages, userPreferences }
3. SQL Aggregation:
// Get threads with last message timestamp
const threadsWithActivity = await db
.select({
threadId: ChatThreadTable.id,
lastMessageAt: sql<string>`MAX(${ChatMessageTable.createdAt})`,
})
.from(ChatThreadTable)
.leftJoin(ChatMessageTable, eq(ChatThreadTable.id, ChatMessageTable.threadId))
.groupBy(ChatThreadTable.id)
.orderBy(desc(sql`last_message_at`));
Schema Evolution Workflow
# 1. Modify schema in src/lib/db/pg/schema.pg.ts
export const NewTable = pgTable("new_table", { ... });
# 2. Generate migration
pnpm db:generate
# 3. Review generated SQL in drizzle/migrations/
# 4. Apply migration
pnpm db:migrate
# 5. Optional: Visual DB exploration
pnpm db:studio
Architectural Principles
1. Progressive Enhancement
Features build in layers:
Base Layer: Chat + LLM
↓
Tool Layer: Default + MCP
↓
Composition Layer: Workflows (tools as nodes)
↓
Personalization Layer: Agents (workflows + prompts)
Evidence:
- Agents can have instructions.mentions (inject tools/workflows)
- Workflows can call MCP + default tools
- Chat API composes all three tiers
User Journey:
- Start with default tools (no setup)
- Add MCP servers for specialized needs
- Combine into workflows for automation
- Package into agents for personas
2. Convention Over Configuration
New Tool?
- Add to AppDefaultToolkit enum → auto-available
New Workflow Node?
- Add to NodeKind enum → executor handles it
New MCP Server?
- Just configure via UI → manager handles lifecycle
3. Defensive Programming
Use safe() everywhere:
const tools = await safe(() => loadMcpTools())
.orElse({}); // Returns default on failure
Philosophy: Never crash the chat - degrade gracefully
4. Streaming-First
Evidence:
- Chat API uses createUIMessageStream()
- Workflow execution streams intermediate steps
- Tool calls stream progress updates
Why: Live feedback, better UX, handles long operations
5. Type-Driven Development
Pattern:
// Zod defines runtime validation AND TypeScript types
const schema = z.object({ name: z.string() });
type SchemaType = z.infer<typeof schema>;
// Discriminated unions for polymorphic data
type WorkflowNodeData =
| { kind: "input"; ... }
| { kind: "llm"; ... }
| { kind: "tool"; ... };
// Brand types for runtime checking
VercelAIMcpToolTag.isMaybe(tool)
Extension Points Reference
Quick lookup: "I want to add X" → "Modify Y file"
| Want to add... | Extend/Modify... | File Location | Notes |
|---|---|---|---|
| New default tool | AppDefaultToolkit enum + APP_DEFAULT_TOOL_KIT | lib/ai/tools/tool-kit.ts | Add tool implementation in lib/ai/tools/[category]/ + rendering in components/tool-invocation/ |
| New tool category | AppDefaultToolkit enum | lib/ai/tools/index.ts | Creates new toolkit group (e.g., Visualization, WebSearch) |
| New workflow node type | NodeKind enum + executor + validator | lib/ai/workflow/workflow.interface.ts + executor/node-executor.ts + validator/node-validate.ts | Also add UI config in components/workflow/node-config/ |
| New API endpoint | Create route handler | src/app/api/[resource]/route.ts | Follow standard pattern: auth → validation → repository → response |
| New server action | Use validatedActionWithUser | src/app/api/[resource]/actions.ts | Import from lib/action-utils.ts |
| New database table | Add to schema + repository | lib/db/pg/schema.pg.ts + lib/db/pg/repositories/[name]-repository.pg.ts | Then pnpm db:generate and pnpm db:migrate |
| New UI component | Create in domain folder | src/components/[domain]/[name].tsx | Use shadcn/ui primitives from components/ui/ |
| New React hook | Create with use- prefix | src/hooks/use-[name].ts or src/hooks/queries/use-[name].ts | Data fetching hooks go in queries/ subfolder |
| New Zod schema | Add to validations | src/lib/validations/[domain].ts | Use z.infer<typeof schema> for TypeScript types |
| New AI provider | Add to providers registry | lib/ai/providers.ts | Use createOpenAI, createAnthropic, etc. from AI SDK |
| New MCP server | Configure via UI | Settings → MCP Servers | No code changes needed (file or DB storage) |
| New agent template | Create via UI | Agents page | Combine tools/workflows/prompts |
| New permission type | Add to permissions enum | lib/auth/permissions.ts | Use in validatedActionWithAdminPermission |
| New E2E test | Add test file | tests/[feature].spec.ts | Use Playwright, follow existing patterns |
| New system prompt | Add to prompts | lib/ai/prompts.ts | Use mergeSystemPrompt for composition |
Common Development Flows
Adding a Feature End-to-End:
1. Define types (src/types/[domain].ts)
2. Create Zod schema (lib/validations/[domain].ts)
3. Add DB table + repository (lib/db/pg/)
4. Create API route (app/api/[resource]/route.ts)
5. Create UI component (components/[domain]/)
6. Create data hook (hooks/queries/use-[resource].ts)
7. Add E2E test (tests/[feature].spec.ts)
8. Run: pnpm check && pnpm test:e2e
Adding a Tool End-to-End:
1. Implement tool (lib/ai/tools/[category]/[name].ts)
2. Add to toolkit (lib/ai/tools/tool-kit.ts)
3. Create rendering component (components/tool-invocation/[name].tsx)
4. Add to tool invocation switch (components/tool-invocation/index.tsx)
5. Test with @toolname mention in chat
Adding a Workflow Node End-to-End:
1. Add NodeKind enum (lib/ai/workflow/workflow.interface.ts)
2. Define node data type (same file)
3. Add executor (lib/ai/workflow/executor/node-executor.ts)
4. Add validator (lib/ai/workflow/validator/node-validate.ts)
5. Create UI config (components/workflow/node-config/[name]-node.tsx)
6. Test in workflow builder
UX Patterns & @Mention System
The @Mention Philosophy
Core Design Principle: Every feature is instantly accessible via @mentions - no digging through menus.
Why This Matters: Users can compose features on-the-fly without context switching.
Three Types of @Mentions
1. @tool (Default Tools)
Format: @tool("tool_name")
Examples:
@tool("web-search") find recent AI papers
@tool("create-table") show sales data
@tool("python-execution") calculate fibonacci
How It Works:
- Parsed from message on server
- Loads corresponding tools from APP_DEFAULT_TOOL_KIT
- LLM decides when to invoke based on prompt
Use Case: Built-in capabilities (search, visualization, code execution)
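A hedged sketch of how @tool-style mentions could be extracted from message text (the regex and Mention shape are illustrative; the app may pass structured mention objects from the UI rather than parsing raw text):
// Illustrative parser for @tool("...") / @mcp("...") / @workflow("...") mentions.
type Mention = { type: "tool" | "mcp" | "workflow"; name: string };
function parseMentions(text: string): Mention[] {
  const pattern = /@(tool|mcp|workflow)\("([^"]+)"\)/g;
  const mentions: Mention[] = [];
  for (const match of text.matchAll(pattern)) {
    mentions.push({ type: match[1] as Mention["type"], name: match[2] });
  }
  return mentions;
}
// parseMentions('@tool("web-search") find recent AI papers')
// → [{ type: "tool", name: "web-search" }]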
2. @mcp (MCP Server Tools)
Format: @mcp("server_name") or specific tool @mcp("server_name:tool_name")
Examples:
@mcp("github") create an issue in my repo
@mcp("playwright") navigate to google.com
@mcp("slack:send-message") post update to #general
How It Works:
- Mentions filter which MCP servers/tools to load
- Reduces token usage (only relevant tools sent to LLM)
- MCP manager handles connection and execution
Use Case: External integrations (GitHub, Slack, databases, etc.)
3. @workflow (Custom Workflows)
Format: @workflow("workflow_name")
Examples:
@workflow("customer-onboarding") process new signup
@workflow("data-pipeline") transform and analyze CSV
How It Works:
- Workflows are converted to callable tools
- LLM sees workflow as a single tool with description
- Execution streams intermediate node results
Use Case: Multi-step automations, business processes
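A hedged sketch of the workflow-to-tool conversion idea (runWorkflow is an assumed stand-in for the executor in lib/ai/workflow/):
// Sketch: expose a saved workflow to the LLM as a single callable tool.
import { tool as createTool } from "ai";
import { z } from "zod";
declare function runWorkflow(workflowId: string, input: Record<string, unknown>): Promise<unknown>;
export function workflowAsTool(workflow: { id: string; name: string; description: string }) {
  return createTool({
    description: `Workflow "${workflow.name}": ${workflow.description}`, // what the LLM sees
    inputSchema: z.object({ input: z.record(z.unknown()) }),
    execute: async ({ input }) => runWorkflow(workflow.id, input),
  });
}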
4. @agent (Agent Personas)
Format: Select agent from dropdown (not typed in message)
How It Works:
- Agent's instructions.mentions auto-inject tools/workflows
- System prompt prepended to conversation
- Presets can override model/temperature
Use Case: Role-specific contexts (coding assistant, data analyst, etc.)
Tool Choice Modes
Context: User selects mode from dropdown
Auto Mode (Default)
- LLM can invoke tools autonomously
- Multiple tool calls per message
- Best for: Automation, workflows, exploration
Example Flow:
User: @tool("web-search") find AI news, then @tool("create-table") summarize
→ LLM searches → formats results → creates table → returns message
Manual Mode
- LLM proposes tool calls, waits for user approval
- User sees "Tool: web-search" with args, clicks "Execute"
- Best for: Sensitive operations, learning, debugging
Example Flow:
User: @mcp("github") create issue
→ LLM proposes: create_issue(repo="...", title="...", body="...")
→ User reviews and clicks "Execute"
→ Tool runs → result shown
None Mode
- No tools loaded (text-only conversation)
- Reduces latency and token usage
- Best for: Brainstorming, explanations, simple queries
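On the server, these modes roughly gate how tools are loaded and whether their execute handlers are kept; a simplified sketch (excludeToolExecution is the shared helper mentioned earlier, declared here as a stub):
// Sketch of toolChoice handling; exact helper shapes are assumed.
import type { Tool } from "ai";
declare function excludeToolExecution(tools: Record<string, Tool>): Record<string, Tool>;
function applyToolChoice(tools: Record<string, Tool>, toolChoice: "auto" | "manual" | "none") {
  switch (toolChoice) {
    case "none":
      return {}; // text-only: nothing sent to the LLM
    case "manual":
      return excludeToolExecution(tools); // LLM proposes, user confirms before execution
    default:
      return tools; // auto: full autonomous execution
  }
}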
Preset System
What Are Presets?
- Quick configurations for common scenarios
- Stored per-user
- Can override: model, temperature, toolChoice, allowed MCP servers
Example Use Cases:
Preset: "Quick Chat"
- Model: GPT-4o-mini (fast)
- Tools: None
- Use for: Rapid Q&A
Preset: "Research Assistant"
- Model: Claude Sonnet 4.5
- Tools: @tool("web-search"), @mcp("wikipedia")
- Use for: Deep research
Preset: "Code Review"
- Model: GPT-5
- Tools: @mcp("github"), @tool("python-execution")
- Use for: Reviewing PRs with tests
How To Create:
- Configure chat (model, tools, settings)
- Click "Save as Preset"
- Name it
- Select from dropdown in future chats
User Journey Examples
Beginner: First-Time User
1. Start chat (no @mentions) → Default tools available
2. Ask: "Search for news about AI"
3. LLM automatically uses @tool("web-search")
4. User sees: Search results → Formatted answer
5. Learns: Tools work automatically in Auto mode
Intermediate: Using Workflows
1. Create workflow in Workflow Builder:
Input → Web Search → LLM Summary → Output
2. Save as "research-workflow"
3. In chat: "@workflow('research-workflow') AI trends 2025"
4. Sees: Live progress per node
5. Gets: Formatted research report
Advanced: Agent + MCP + Workflows
1. Create agent "DevOps Assistant"
2. Agent instructions include: @mcp("github"), @workflow("deploy-pipeline")
3. Select agent from dropdown
4. Chat: "Deploy latest commit to staging"
5. Agent: Uses GitHub MCP → triggers deploy workflow → monitors → reports
Design Patterns Developers Should Follow
1. Discoverability
- Every tool should have clear description (shown in LLM context)
- Use semantic names (create-table, not tool-42)
2. Composability
- Tools should be single-purpose
- Workflows compose tools
- Agents compose workflows + tools + context
3. Progressive Disclosure
- Beginners: Auto mode, no @mentions (use defaults)
- Intermediate: Explicit @tool/@mcp mentions
- Advanced: Workflows, agents, presets
4. Feedback
- Streaming for long operations
- Progress updates for workflows
- Clear error messages with solutions
Practical Templates
Template: Adding a New Default Tool
// 1. Define in lib/ai/tools/[category]/[tool-name].ts
import { tool as createTool } from "ai";
import { z } from "zod";
export const myNewTool = createTool({
description: "Clear description for LLM to understand when to use this",
inputSchema: z.object({
param: z.string().describe("What this parameter does"),
}),
execute: async (params) => {
// For visualization tools: return "Success"
// For data tools: return actual data
return "Success";
},
});
// 2a. Add the enum value in lib/ai/tools/index.ts
export enum DefaultToolName {
  // ... existing
  MyNewTool = "my_new_tool",
}
// 2b. Register in lib/ai/tools/tool-kit.ts
import { AppDefaultToolkit, DefaultToolName } from "./index";
import { myNewTool } from "./[category]/[tool-name]";
export const APP_DEFAULT_TOOL_KIT = {
  // ... existing toolkits
  [AppDefaultToolkit.MyCategory]: {
    [DefaultToolName.MyNewTool]: myNewTool,
  },
};
// 3. Create rendering in components/tool-invocation/my-tool-invocation.tsx
export function MyToolInvocation({ part }: { part: ToolUIPart }) {
const args = part.input as z.infer<typeof myNewTool.inputSchema>;
return <div>{/* Render based on args */}</div>;
}
// 4. Add to components/tool-invocation/index.tsx switch
if (toolName === DefaultToolName.MyNewTool) {
return <MyToolInvocation part={part} />;
}
Template: Adding a New API Route
// src/app/api/[resource]/route.ts
import { getSession } from "auth/server";
import { [resource]Repository } from "lib/db/repository";
import { z } from "zod";
const querySchema = z.object({
  limit: z.coerce.number().default(10),
});
const createSchema = z.object({
  name: z.string(), // fields for the POST body (adjust per resource)
});
export async function GET(request: Request) {
// 1. Auth check
const session = await getSession();
if (!session?.user.id) {
return new Response("Unauthorized", { status: 401 });
}
// 2. Parse & validate
try {
const url = new URL(request.url);
const params = querySchema.parse(Object.fromEntries(url.searchParams));
// 3. Use repository
const data = await [resource]Repository.selectByUserId(
session.user.id,
params.limit
);
return Response.json(data);
} catch (error) {
if (error instanceof z.ZodError) {
return Response.json(
{ error: "Invalid params", details: error.message },
{ status: 400 }
);
}
console.error("Failed:", error);
return new Response("Internal Server Error", { status: 500 });
}
}
export async function POST(request: Request) {
const session = await getSession();
if (!session?.user.id) {
return new Response("Unauthorized", { status: 401 });
}
try {
const body = await request.json();
const data = createSchema.parse(body);
const item = await [resource]Repository.insert({
...data,
userId: session.user.id,
});
return Response.json(item);
} catch (error) {
if (error instanceof z.ZodError) {
return Response.json({ error: "Invalid input" }, { status: 400 });
}
return Response.json({ error: "Internal error" }, { status: 500 });
}
}
Template: Adding a New Repository
// 1. Define interface in src/types/[domain].ts
export type MyRepository = {
selectById(id: string): Promise<MyType | null>;
selectByUserId(userId: string, limit?: number): Promise<MyType[]>;
insert(data: InsertType): Promise<MyType>;
update(id: string, data: Partial<InsertType>): Promise<MyType>;
delete(id: string): Promise<void>;
};
// 2. Add table to src/lib/db/pg/schema.pg.ts
export const MyTable = pgTable("my_table", {
id: uuid("id").primaryKey().defaultRandom(),
userId: uuid("user_id").references(() => UserTable.id).notNull(),
name: text("name").notNull(),
createdAt: timestamp("created_at").defaultNow().notNull(),
}, (table) => ({
userIdIdx: index("my_table_user_id_idx").on(table.userId),
}));
// 3. Implement in src/lib/db/pg/repositories/my-repository.pg.ts
import { pgDb as db } from "../db.pg";
import { MyTable } from "../schema.pg";
import { eq, desc } from "drizzle-orm";
export const pgMyRepository: MyRepository = {
selectById: async (id) => {
const [result] = await db
.select()
.from(MyTable)
.where(eq(MyTable.id, id));
return result ?? null;
},
selectByUserId: async (userId, limit = 10) => {
return await db
.select()
.from(MyTable)
.where(eq(MyTable.userId, userId))
.orderBy(desc(MyTable.createdAt))
.limit(limit);
},
insert: async (data) => {
const [result] = await db
.insert(MyTable)
.values(data)
.returning();
return result;
},
update: async (id, data) => {
const [result] = await db
.update(MyTable)
.set(data)
.where(eq(MyTable.id, id))
.returning();
return result;
},
delete: async (id) => {
await db.delete(MyTable).where(eq(MyTable.id, id));
},
};
// 4. Export from src/lib/db/repository.ts
export { pgMyRepository as myRepository } from "./pg/repositories/my-repository.pg";
// 5. Generate and run migration
// pnpm db:generate
// pnpm db:migrate
Server Action Validators & Coding Standards
1. Server Action Validators (lib/action-utils.ts)
Centralized pattern for validated, permission-gated server actions:
// Pattern 1: Simple validation
validatedAction(schema, async (data, formData) => { ... })
// Pattern 2: With user context (auto-auth, auto-error handling)
validatedActionWithUser(schema, async (data, formData, user) => { ... })
// Pattern 3: Permission-based (admin, user-manage)
validatedActionWithAdminPermission(schema, async (data, formData, session) => { ... })
Prevents:
- Forgetting auth checks ✓
- Inconsistent validation ✓
- FormData parsing errors ✓
- Non-standard error responses ✓
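Conceptually, these validators fold the session check, FormData parsing, and Zod validation into one wrapper; a simplified sketch of the validatedActionWithUser shape (not the project's exact implementation; getSession and the return shape are assumed):
// Simplified sketch of a user-gated, validated server action wrapper.
import { z } from "zod";
type SessionUser = { id: string };
declare function getSession(): Promise<{ user: SessionUser } | null>;
export function validatedActionWithUserSketch<S extends z.ZodTypeAny, R>(
  schema: S,
  handler: (data: z.infer<S>, formData: FormData, user: SessionUser) => Promise<R>,
) {
  return async (formData: FormData): Promise<R | { error: string }> => {
    const session = await getSession();
    if (!session?.user) return { error: "Unauthorized" }; // auth can never be forgotten
    const parsed = schema.safeParse(Object.fromEntries(formData.entries()));
    if (!parsed.success) return { error: parsed.error.message }; // consistent validation errors
    return handler(parsed.data, formData, session.user);
  };
}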
2. Tool Abstraction System
Unified interface for multiple tool types using branded type tags:
// Branded types for runtime type narrowing
VercelAIMcpToolTag.create(tool) // Brand as MCP tool
VercelAIWorkflowToolTag.isMaybe(tool) // Check if Workflow tool
// Single handler for multiple tool types
if (VercelAIWorkflowToolTag.isMaybe(tool)) {
// Workflow-specific logic
} else if (VercelAIMcpToolTag.isMaybe(tool)) {
// MCP-specific logic
}
Tool Types:
- MCP Tools: Model Context Protocol integrations
- Workflow Tools: Visual DAG-based workflows
- Default Tools: Built-in capabilities (search, code execution, etc.)
3. Workflow Execution Engine
DAG-based workflow system with real-time streaming:
- Streams node execution progress via dataStream.write()
- Tracks: status, input/output, errors, timing
- Token optimization: history stored without detailed results
4. State Management
Zustand stores with shallow comparison for workflows and app config.
Coding Standards
Naming Conventions
| Type | Convention | Example |
|---|---|---|
| Components | PascalCase | ChatBot.tsx, WorkflowBuilder.tsx |
| Component files | kebab-case or PascalCase | chat-bot.tsx, ChatBot.tsx |
| Hooks | camelCase with use- prefix | use-chat-bot.ts, use-workflow.ts |
| Utilities | camelCase | action-utils.ts, shared.chat.ts |
| API routes | Next.js convention | src/app/api/[resource]/route.ts |
| Types | Domain suffix | chat.ts, mcp.ts, workflow.ts |
TypeScript Standards
- Strict TypeScript throughout (no implicit any)
- Zod for validation AND type inference: const schema = z.object({ name: z.string() }); type SchemaType = z.infer<typeof schema>
- Custom type tags for runtime type narrowing (see Tool Abstraction)
- Types organized by domain in src/types/
Code Quality
- Line width: 80 characters
- Indentation: 2 spaces
- Formatter: Biome 1.9.4
- Linter: Biome (no ESLint)
- Validation: Zod everywhere (forms, API, dynamic config)
Error Handling
- Enum error types for specific errors: enum UpdateUserPasswordError { INVALID_CURRENT_PASSWORD = "invalid_current_password", PASSWORD_MISMATCH = "password_mismatch" }
- Cross-field validation with Zod superRefine: .superRefine((data, ctx) => { if (data.password !== data.confirmPassword) { ctx.addIssue({ path: ["confirmPassword"], message: "Passwords must match" }) } })
Development Workflow
Core Commands
# Development
pnpm dev # Start dev server
pnpm build # Production build
pnpm start # Start production server
pnpm lint:fix # Auto-fix linting issues
# Database (Drizzle ORM)
pnpm db:generate # Generate migrations
pnpm db:migrate # Run migrations
pnpm db:push # Push schema changes
pnpm db:studio # Open Drizzle Studio
# Testing
pnpm test # Run Vitest unit tests
pnpm test:e2e # Full Playwright E2E suite
pnpm test:e2e:first-user # First-user signup + admin role tests
pnpm test:e2e:standard # Standard tests (skip first-user)
pnpm test:e2e:ui # Interactive Playwright UI
# Quality Check
pnpm check # Run lint + type-check + tests
Environment Setup
- Copy .env.example to .env (auto-generated on pnpm i)
- Required: PostgreSQL connection, at least one LLM API key
- Optional: OAuth providers (Google, GitHub, Microsoft), Redis, Vercel Blob
Branch Strategy
- Main: Production-ready code
- Feature branches: feat/feature-name or fix/bug-name
- Squash merge: Single commit per PR for clean history
Testing Patterns
Unit Tests (Vitest)
- Collocated with source code (*.test.ts)
- Coverage: Happy path + one failure mode minimum
- Example:
// src/lib/utils.test.ts
import { describe, it, expect } from 'vitest'
import { formatDate } from './utils'
describe('formatDate', () => {
  it('formats ISO date correctly', () => {
    expect(formatDate('2025-01-01')).toBe('January 1, 2025')
  })
  it('handles invalid date', () => {
    expect(formatDate('invalid')).toBe('Invalid Date')
  })
})
E2E Tests (Playwright)
Special orchestration for multi-user and first-user scenarios:
# First-user tests (clean DB → signup → verify admin role)
pnpm test:e2e:first-user
# Standard tests (assumes first user exists)
pnpm test:e2e:standard
# Full suite (first-user → standard)
pnpm test:e2e
Test project dependencies ensure sequenced execution:
- Clean database
- Run first-user signup + role verification
- Run standard multi-user tests
Shared auth states across test runs to avoid re-login.
Seed/cleanup scripts for deterministic testing.
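In Playwright terms this sequencing is typically expressed with project dependencies; a hedged playwright.config.ts sketch (project names, test globs, and the auth-state path are illustrative, not the repo's exact config):
// Sketch: run first-user setup before the standard suite, reusing saved auth state.
import { defineConfig } from "@playwright/test";
export default defineConfig({
  projects: [
    {
      name: "first-user",
      testMatch: /first-user\.setup\.ts/, // clean DB → signup → verify admin role
    },
    {
      name: "standard",
      dependencies: ["first-user"], // only runs after first-user setup succeeds
      use: { storageState: "tests/.auth/user.json" }, // shared auth state avoids re-login
    },
  ],
});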
Contribution Guidelines
Before Starting
Major changes require discussion first:
- New UI components
- New API endpoints
- External service integrations
- Breaking changes
No prior approval needed:
- Bug fixes
- Documentation improvements
- Minor refactoring
Pull Request Standards
Title format (Conventional Commits):
feat: Add realtime voice chat
fix: Resolve MCP tool streaming error
chore: Update dependencies
docs: Add OAuth setup guide
Prefixes: feat:, fix:, chore:, docs:, style:, refactor:, test:, perf:, build:
Visual documentation required:
- Before/after screenshots for UI changes
- Screen recordings for interactive features
- Mobile + desktop views for responsive updates
Description should explain:
- What changed
- Why it changed
- How you tested it
Pre-Submission Checklist
# Must pass before PR:
pnpm check # Lint + type-check + tests
pnpm test:e2e # E2E tests (if applicable)
- Tests added for new features/bug fixes
- Visual documentation included (if UI change)
- Conventional Commit title
- Description explains what, why, testing
Critical Rules
Always Do
✅ Use validatedActionWithUser or validatedActionWithAdminPermission for server actions
✅ Check tool types with branded type tags before execution
✅ Use Zod superRefine for cross-field validation
✅ Add unit tests (happy path + one failure mode)
✅ Run pnpm check before PR submission
✅ Include visual documentation for UI changes
✅ Use Conventional Commit format for PR titles
✅ Run E2E tests when touching critical flows
Never Do
❌ Implement server actions without auth validators
❌ Assume tool type without runtime check
❌ Parse FormData manually (use validators)
❌ Mutate Zustand state directly (use shallow updates)
❌ Skip first-user tests on clean database
❌ Commit without running pnpm check
❌ Submit PR without visual docs (if UI change)
❌ Use non-conventional commit format
Known Issues Prevention
This skill prevents 8 documented issues:
Issue #1: Forgetting Auth Checks in Server Actions
Error: Unauthorized users accessing protected actions
Why It Happens: Manual auth implementation is inconsistent
Prevention: Use validatedActionWithUser or validatedActionWithAdminPermission
// ❌ BAD: Manual auth check
export async function updateProfile(data: ProfileData) {
const session = await getSession()
if (!session) throw new Error("Unauthorized")
// ... rest of logic
}
// ✅ GOOD: Use validator
export const updateProfile = validatedActionWithUser(
profileSchema,
async (data, formData, user) => {
// user is guaranteed to exist, auto-error handling
}
)
Issue #2: Tool Type Mismatches
Error: Runtime type errors when executing tools
Why It Happens: Not checking tool type before execution
Prevention: Use branded type tags for runtime narrowing
// ❌ BAD: Assuming tool type
const result = await executeMcpTool(tool)
// ✅ GOOD: Check tool type
if (VercelAIMcpToolTag.isMaybe(tool)) {
const result = await executeMcpTool(tool)
} else if (VercelAIWorkflowToolTag.isMaybe(tool)) {
const result = await executeWorkflowTool(tool)
}
Issue #3: FormData Parsing Errors
Error: Inconsistent error handling for form submissions
Why It Happens: Manual FormData parsing with ad-hoc validation
Prevention: Validators handle parsing automatically
// ❌ BAD: Manual parsing
const name = formData.get("name") as string
if (!name) throw new Error("Name required")
// ✅ GOOD: Validator with Zod
const schema = z.object({ name: z.string().min(1) })
export const action = validatedAction(schema, async (data) => {
// data.name is validated and typed
})
Issue #4: Cross-Field Validation Issues
Error: Password mismatch validation not working
Why It Happens: Separate validation for related fields
Prevention: Use Zod superRefine
// ❌ BAD: Separate checks
if (data.password !== data.confirmPassword) { /* error */ }
// ✅ GOOD: Zod superRefine
const schema = z.object({
password: z.string(),
confirmPassword: z.string()
}).superRefine((data, ctx) => {
if (data.password !== data.confirmPassword) {
ctx.addIssue({
path: ["confirmPassword"],
message: "Passwords must match"
})
}
})
Issue #5: Workflow State Inconsistency
Error: Zustand state updates not triggering re-renders
Why It Happens: Deep mutation of nested workflow state
Prevention: Use shallow updates
// ❌ BAD: Deep mutation
store.workflow.nodes[0].status = "complete"
// ✅ GOOD: Shallow update
set(state => ({
workflow: {
...state.workflow,
nodes: state.workflow.nodes.map((node, i) =>
i === 0 ? { ...node, status: "complete" } : node
)
}
}))
Issue #6: Missing E2E Test Setup
Error: E2E tests failing on clean database
Why It Happens: Running standard tests before first-user setup
Prevention: Use correct test commands
# ❌ BAD: Running standard tests on clean DB
pnpm test:e2e:standard
# ✅ GOOD: Full suite with first-user setup
pnpm test:e2e
Issue #7: Environment Config Mistakes
Error: Missing required environment variables causing crashes
Why It Happens: Not copying .env.example to .env
Prevention: Auto-generated .env on pnpm i
# Auto-generates .env on install
pnpm i
# Verify all required vars present
# Required: DATABASE_URL, at least one LLM_API_KEY
Issue #8: Incorrect Commit Message Format
Error: CI/CD failures due to non-conventional commit format
Why It Happens: Not following Conventional Commits standard
Prevention: Use prefix + colon format
# ❌ BAD:
git commit -m "added feature"
git commit -m "fix bug"
# ✅ GOOD:
git commit -m "feat: add MCP tool streaming"
git commit -m "fix: resolve auth redirect loop"
Common Patterns
Pattern 1: Server Action with User Context
import { validatedActionWithUser } from "@/lib/action-utils"
import { z } from "zod"
const updateProfileSchema = z.object({
name: z.string().min(1),
email: z.string().email()
})
export const updateProfile = validatedActionWithUser(
updateProfileSchema,
async (data, formData, user) => {
// user is guaranteed authenticated
// data is validated and typed
await db.update(users).set(data).where(eq(users.id, user.id))
return { success: true }
}
)
When to use: Any server action that requires authentication
Pattern 2: Tool Type Checking
import { VercelAIMcpToolTag, VercelAIWorkflowToolTag } from "@/lib/ai/tools"
async function executeTool(tool: unknown) {
if (VercelAIMcpToolTag.isMaybe(tool)) {
return await executeMcpTool(tool)
} else if (VercelAIWorkflowToolTag.isMaybe(tool)) {
return await executeWorkflowTool(tool)
} else {
return await executeDefaultTool(tool)
}
}
When to use: Handling multiple tool types in unified interface
Pattern 3: Workflow State Updates
import { useWorkflowStore } from "@/app/store/workflow"
// In component:
const updateNodeStatus = useWorkflowStore(state => state.updateNodeStatus)
// In store:
updateNodeStatus: (nodeId, status) =>
set(state => ({
workflow: {
...state.workflow,
nodes: state.workflow.nodes.map(node =>
node.id === nodeId ? { ...node, status } : node
)
}
}))
When to use: Updating nested Zustand state without mutation
Using Bundled Resources
References (references/)
- references/AGENTS.md - Full repository guidelines (loaded when detailed structure questions arise)
- references/CONTRIBUTING.md - Complete contribution process (loaded when PR standards questions arise)
When Claude should load these: When user asks about detailed better-chatbot conventions, asks "what are the full guidelines?", or needs comprehensive contribution workflow details.
Dependencies
Required:
- next@15.3.2 - Framework
- ai@5.0.82 - Vercel AI SDK
- better-auth@1.3.34 - Authentication
- drizzle-orm@0.41.0 - Database ORM
- @modelcontextprotocol/sdk@1.20.2 - MCP support
- zod@3.24.2 - Validation
- zustand@5.0.3 - State management
Testing:
- vitest@3.2.4 - Unit tests
- @playwright/test@1.56.1 - E2E tests
Official Documentation
- better-chatbot: https://github.com/cgoinglove/better-chatbot
- Next.js: https://nextjs.org/docs
- Vercel AI SDK: https://sdk.vercel.ai/docs
- Better Auth: https://www.better-auth.com/docs
- Drizzle ORM: https://orm.drizzle.team/docs
- Playwright: https://playwright.dev/docs/intro
- Live Demo: https://betterchatbot.vercel.app
Production Example
This skill is based on better-chatbot production standards:
- Live: https://betterchatbot.vercel.app
- Tests: 48+ E2E tests passing
- Errors: 0 (all 8 known issues prevented)
- Validation: ✅ Multi-user scenarios, workflow execution, MCP tools
Complete Setup Checklist
When contributing to better-chatbot:
- Fork and clone repository
- Run pnpm i (auto-generates .env)
- Configure required env vars (DATABASE_URL, LLM_API_KEY)
- Run pnpm dev and verify it starts
- Create feature branch
- Add unit tests for new features
- Run pnpm check before PR
- Run pnpm test:e2e if touching critical flows
- Include visual docs (screenshots/recordings)
- Use Conventional Commit title
- Squash merge when approved
Questions? Issues?
- Check references/AGENTS.md for detailed guidelines
- Check references/CONTRIBUTING.md for PR process
- Check official docs: https://github.com/cgoinglove/better-chatbot
- Ensure PostgreSQL and LLM API key are configured
- Ensure PostgreSQL and LLM API key are configured
Token Efficiency: ~60% savings | Errors Prevented: 8 | Production Verified: Yes