Claude Code Plugins

Community-maintained marketplace

performance-optimization

@chriscarterux/chris-claude-stack

This skill should be used when optimizing application performance - profiling, caching, lazy loading, code splitting, image optimization, and CDN strategies.

Install Skill

1. Download skill
2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions to verify them before using it.

SKILL.md

name: performance-optimization
description: This skill should be used when optimizing application performance - profiling, caching, lazy loading, code splitting, image optimization, and CDN strategies.

Performance Optimization

Overview

Performance optimization is the systematic process of improving application speed, responsiveness, and resource efficiency. The core principle is to deliver the best user experience by minimizing load times, reducing time-to-interactive, and ensuring smooth interactions across all devices and network conditions.

Key Philosophy: Measure first, optimize second. Never optimize based on assumptions - always profile, measure, and validate improvements with real metrics.

When to Use

Apply performance optimization when:

  • Initial page load exceeds 3 seconds
  • Core Web Vitals fail (LCP > 2.5s, FID > 100ms, CLS > 0.1)
  • Bundle sizes exceed 200KB gzipped for initial load
  • Users report slow interactions or laggy UI
  • Mobile performance significantly lags desktop
  • Server response times exceed 200ms for API calls
  • Database queries take longer than 100ms
  • Images are not optimized or properly sized
  • Render-blocking resources delay page display
  • Memory leaks cause performance degradation over time
  • High bounce rates correlate with slow load times
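
A quick way to sanity-check a couple of these thresholds is the Navigation Timing API; the sketch below (run in the console of a loaded page) is only a rough first pass - field data from real users, covered under Web Vitals Monitoring below, remains the source of truth.

// Rough triage against the load-time threshold above
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  const ttfb = nav.responseStart - nav.requestStart; // server response time for the document
  const loadTime = nav.loadEventEnd;                 // navigation start is time 0

  console.table({ ttfb: `${ttfb.toFixed(0)}ms`, loadTime: `${loadTime.toFixed(0)}ms` });
  if (loadTime > 3000) console.warn('Initial page load exceeds the 3-second threshold');
}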

Performance Measurement

Core Web Vitals

Largest Contentful Paint (LCP): Measures loading performance

  • Target: < 2.5 seconds
  • Good: 0-2.5s | Needs Improvement: 2.5-4.0s | Poor: > 4.0s
  • Measures when the largest content element becomes visible
  • Optimize by: reducing server response time, optimizing images, eliminating render-blocking resources
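
To see which element the browser treats as the LCP candidate (and therefore what to optimize), a raw PerformanceObserver is enough; a minimal sketch:

// Log the current LCP candidate and when it rendered
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1] as PerformanceEntry & { element?: Element };
  console.log('LCP element:', lcp.element, `at ${lcp.startTime.toFixed(0)}ms`);
}).observe({ type: 'largest-contentful-paint', buffered: true });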

First Input Delay (FID): Measures interactivity

  • Target: < 100 milliseconds
  • Good: 0-100ms | Needs Improvement: 100-300ms | Poor: > 300ms
  • Measures time from first user interaction to browser response
  • Optimize by: reducing JavaScript execution time, code splitting, web workers
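
One common way to reduce input delay is to split long-running loops so the browser can handle queued input between chunks; a small sketch (processItem is a placeholder for your own work):

// Process a large array without blocking input handling for its full duration
async function processInChunks<T>(
  items: T[],
  processItem: (item: T) => void,
  chunkSize = 500
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(processItem);
    // Yield to the event loop so pending clicks and keypresses are handled
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}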

Cumulative Layout Shift (CLS): Measures visual stability

  • Target: < 0.1
  • Good: 0-0.1 | Needs Improvement: 0.1-0.25 | Poor: > 0.25
  • Measures unexpected layout shifts during page load
  • Optimize by: setting explicit dimensions, avoiding dynamic content injection
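
Explicit dimensions are usually the cheapest CLS fix; a small sketch reserving space for an image and for content injected after load (names are illustrative):

// Reserve layout space up front so late-loading content cannot shift the page
export function StableCard() {
  return (
    <article>
      {/* width/height let the browser compute the aspect ratio before the image loads */}
      <img src="/cover.jpg" alt="Cover" width={640} height={360} loading="lazy" />

      {/* Fixed-height slot for a banner injected after load */}
      <div id="promo-slot" style={{ minHeight: 90 }} />
    </article>
  );
}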

Performance Budgets

Establish clear budgets for critical metrics:

// performance-budget.config.ts
export const PERFORMANCE_BUDGET = {
  // Size budgets
  maxBundleSize: 200 * 1024, // 200KB gzipped
  maxInitialJS: 150 * 1024,  // 150KB gzipped
  maxInitialCSS: 30 * 1024,  // 30KB gzipped
  maxImageSize: 500 * 1024,  // 500KB per image

  // Timing budgets
  maxLCP: 2500,              // 2.5 seconds
  maxFID: 100,               // 100 milliseconds
  maxTTI: 3800,              // 3.8 seconds (Time to Interactive)
  maxFCP: 1800,              // 1.8 seconds (First Contentful Paint)

  // Resource budgets
  maxRequests: 50,           // Total HTTP requests
  maxFonts: 2,               // Number of font families
  maxThirdParty: 3,          // Third-party scripts

  // API budgets
  maxAPIResponse: 200,       // 200ms for API calls
  maxDBQuery: 100,           // 100ms for database queries
} as const;

// Monitor in CI/CD
interface PerformanceMetrics {
  bundleSize: number; // gzipped bundle size in bytes
  lcp: number;        // Largest Contentful Paint in milliseconds
}

export function validateBudget(metrics: PerformanceMetrics): boolean {
  const violations: string[] = [];

  if (metrics.bundleSize > PERFORMANCE_BUDGET.maxBundleSize) {
    violations.push(`Bundle size ${metrics.bundleSize} exceeds ${PERFORMANCE_BUDGET.maxBundleSize}`);
  }

  if (metrics.lcp > PERFORMANCE_BUDGET.maxLCP) {
    violations.push(`LCP ${metrics.lcp}ms exceeds ${PERFORMANCE_BUDGET.maxLCP}ms`);
  }

  if (violations.length > 0) {
    console.error('Performance Budget Violations:', violations);
    return false;
  }

  return true;
}
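
One way to wire validateBudget into a pipeline is to feed it numbers from a Lighthouse JSON report; a minimal sketch (the report path, import path, and the use of total-byte-weight as a stand-in for bundle size are assumptions, not part of the config above):

// scripts/check-budget.ts
import { readFileSync } from 'fs';
import { validateBudget } from './performance-budget.config';

const report = JSON.parse(readFileSync('./lighthouse-report.json', 'utf8'));

const passed = validateBudget({
  bundleSize: report.audits['total-byte-weight'].numericValue, // rough proxy for shipped bytes
  lcp: report.audits['largest-contentful-paint'].numericValue,
});

if (!passed) process.exit(1); // fail the CI job on a budget violation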

Profiling Tools

Chrome DevTools Performance Panel:

// Enable performance profiling
performance.mark('component-render-start');
// ... component render logic
performance.mark('component-render-end');
performance.measure('component-render', 'component-render-start', 'component-render-end');

// Log measurements
const measures = performance.getEntriesByType('measure');
console.table(measures.map(m => ({
  name: m.name,
  duration: `${m.duration.toFixed(2)}ms`
})));

Lighthouse CI Integration:

# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [pull_request]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/cli
          lhci autorun
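
lhci autorun reads its targets and pass/fail rules from a config file; a minimal sketch (the URL, server command, and thresholds are placeholders to adapt):

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'],
      startServerCommand: 'npm run start',
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
    upload: { target: 'temporary-public-storage' },
  },
};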

Web Vitals Monitoring:

// lib/web-vitals.ts
// Note: web-vitals v3 API; v4 replaces onFID with onINP (Interaction to Next Paint)
import { onCLS, onFID, onLCP, onFCP, onTTFB } from 'web-vitals';

export function reportWebVitals() {
  onCLS((metric) => sendToAnalytics('CLS', metric.value));
  onFID((metric) => sendToAnalytics('FID', metric.value));
  onLCP((metric) => sendToAnalytics('LCP', metric.value));
  onFCP((metric) => sendToAnalytics('FCP', metric.value));
  onTTFB((metric) => sendToAnalytics('TTFB', metric.value));
}

function sendToAnalytics(name: string, value: number) {
  // Send to your analytics service (assumes the gtag.js snippet is already loaded)
  const gtag = (window as { gtag?: (...args: unknown[]) => void }).gtag;
  if (typeof gtag === 'function') {
    gtag('event', name, {
      value: Math.round(name === 'CLS' ? value * 1000 : value),
      event_category: 'Web Vitals',
      non_interaction: true,
    });
  }
}

Frontend Optimization

Code Splitting and Lazy Loading

Route-based Code Splitting:

// BEFORE: Everything loaded upfront (800KB bundle)
import Home from './pages/Home';
import Dashboard from './pages/Dashboard';
import Profile from './pages/Profile';
import Settings from './pages/Settings';

// AFTER: Lazy load routes (200KB initial, rest on-demand)
import { lazy, Suspense } from 'react';

const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Profile = lazy(() => import('./pages/Profile'));
const Settings = lazy(() => import('./pages/Settings'));

function App() {
  return (
    <Suspense fallback={<LoadingSpinner />}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/profile" element={<Profile />} />
        <Route path="/settings" element={<Settings />} />
      </Routes>
    </Suspense>
  );
}

// Result: Initial bundle reduced by 75%, FCP improved from 4.2s to 1.5s

Component-level Code Splitting:

// components/HeavyChart.tsx - Split large dependencies
import { lazy, Suspense, useState } from 'react';

// BEFORE: Chart component (and its chart.js dependency) always in the bundle (~300KB)
import { Chart } from 'react-chartjs-2';

// AFTER: Load only when needed
const Chart = lazy(() => import('react-chartjs-2').then(module => ({
  default: module.Chart
})));

export function AnalyticsDashboard({ data }) {
  const [showChart, setShowChart] = useState(false);

  return (
    <div>
      <button onClick={() => setShowChart(true)}>
        Show Chart
      </button>

      {showChart && (
        <Suspense fallback={<ChartSkeleton />}>
          <Chart type="bar" data={data} />
        </Suspense>
      )}
    </div>
  );
}

Image Optimization

Modern Image Formats and Lazy Loading:

// components/OptimizedImage.tsx
import { useState, useEffect } from 'react';

interface OptimizedImageProps {
  src: string;
  alt: string;
  width: number;
  height: number;
  priority?: boolean;
}

export function OptimizedImage({
  src,
  alt,
  width,
  height,
  priority = false
}: OptimizedImageProps) {
  const [isLoaded, setIsLoaded] = useState(false);

  // Generate srcset for responsive images
  const srcSet = `
    ${src}?w=${width}&q=75&fm=webp 1x,
    ${src}?w=${width * 2}&q=75&fm=webp 2x
  `;

  return (
    <picture>
      {/* Modern formats */}
      <source type="image/avif" srcSet={`${src}?w=${width}&fm=avif`} />
      <source type="image/webp" srcSet={srcSet} />

      {/* Fallback */}
      <img
        src={`${src}?w=${width}&q=75`}
        srcSet={srcSet}
        alt={alt}
        width={width}
        height={height}
        loading={priority ? 'eager' : 'lazy'}
        decoding={priority ? 'sync' : 'async'}
        onLoad={() => setIsLoaded(true)}
        style={{
          objectFit: 'cover',
          backgroundColor: '#f0f0f0',
          transition: 'opacity 0.3s',
          opacity: isLoaded ? 1 : 0
        }}
      />
    </picture>
  );
}

// Result: 60-80% smaller images, faster loading
// JPEG (500KB) -> WebP (150KB) -> AVIF (100KB)

Next.js Image Component:

// BEFORE: Unoptimized images
<img src="/hero.jpg" alt="Hero" width={1200} height={600} />

// AFTER: Automatic optimization
import Image from 'next/image';

<Image
  src="/hero.jpg"
  alt="Hero"
  width={1200}
  height={600}
  priority // For above-the-fold images
  placeholder="blur"
  blurDataURL="data:image/jpeg;base64,/9j/4AAQSkZJRg..." // Low-quality placeholder
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw"
/>

// Automatically:
// - Serves WebP/AVIF when supported
// - Generates multiple sizes
// - Lazy loads by default
// - Prevents CLS with aspect ratio

CSS Optimization

Critical CSS Extraction:

// next.config.js
module.exports = {
  experimental: {
    optimizeCss: true, // Enable CSS optimization
  },
};

// Extract critical CSS with Critters (the library used by optimizeCss)
import Critters from 'critters';

export async function extractCriticalCSS(html: string) {
  const critters = new Critters({
    path: 'public',
    logLevel: 'silent',
    preload: 'swap',
    pruneSource: true,
  });

  return await critters.process(html);
}

// Result: FCP improved from 2.8s to 1.2s

CSS-in-JS Optimization:

// BEFORE: Runtime CSS generation (slow)
import styled from 'styled-components';

const Button = styled.button`
  padding: 12px 24px;
  background: blue;
  color: white;
`;

// AFTER: Zero-runtime CSS with Linaria
import { styled } from '@linaria/react';

const Button = styled.button`
  padding: 12px 24px;
  background: blue;
  color: white;
`;

// Or use CSS Modules for best performance
import styles from './Button.module.css';

<button className={styles.button}>Click</button>

JavaScript Bundle Optimization

Tree Shaking and Dead Code Elimination:

// BEFORE: Importing entire library (300KB)
import _ from 'lodash';

const result = _.debounce(fn, 300);

// AFTER: Import only what you need (5KB)
import debounce from 'lodash/debounce';

const result = debounce(fn, 300);

// Even better: Use native or lightweight alternatives
function debounce<T extends (...args: any[]) => any>(
  fn: T,
  delay: number
): (...args: Parameters<T>) => void {
  let timeoutId: ReturnType<typeof setTimeout>;
  return (...args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), delay);
  };
}

Webpack Bundle Analyzer:

// webpack.config.js
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static',
      openAnalyzer: false,
      reportFilename: 'bundle-report.html',
    }),
  ],
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: 10,
        },
        common: {
          minChunks: 2,
          priority: 5,
          reuseExistingChunk: true,
        },
      },
    },
  },
};

Caching Strategies

Browser Caching

Cache-Control Headers:

// next.config.js - Static assets
module.exports = {
  async headers() {
    return [
      {
        source: '/static/:path*',
        headers: [
          {
            key: 'Cache-Control',
            value: 'public, max-age=31536000, immutable', // 1 year
          },
        ],
      },
      {
        source: '/:path*',
        headers: [
          {
            key: 'Cache-Control',
            value: 'public, max-age=0, must-revalidate', // Always revalidate HTML
          },
        ],
      },
    ];
  },
};

Local Storage Caching:

// lib/cache.ts
interface CacheItem<T> {
  data: T;
  timestamp: number;
  ttl: number;
}

export class BrowserCache {
  private prefix = 'app_cache_';

  set<T>(key: string, data: T, ttl: number = 3600000): void {
    const item: CacheItem<T> = {
      data,
      timestamp: Date.now(),
      ttl,
    };

    try {
      localStorage.setItem(
        `${this.prefix}${key}`,
        JSON.stringify(item)
      );
    } catch (error) {
      console.warn('Cache set failed:', error);
    }
  }

  get<T>(key: string): T | null {
    try {
      const item = localStorage.getItem(`${this.prefix}${key}`);
      if (!item) return null;

      const cached: CacheItem<T> = JSON.parse(item);
      const isExpired = Date.now() - cached.timestamp > cached.ttl;

      if (isExpired) {
        this.delete(key);
        return null;
      }

      return cached.data;
    } catch (error) {
      console.warn('Cache get failed:', error);
      return null;
    }
  }

  delete(key: string): void {
    localStorage.removeItem(`${this.prefix}${key}`);
  }

  clear(): void {
    Object.keys(localStorage)
      .filter(key => key.startsWith(this.prefix))
      .forEach(key => localStorage.removeItem(key));
  }
}

// Usage
const cache = new BrowserCache();

async function fetchUserData(userId: string) {
  const cached = cache.get<User>(`user_${userId}`);
  if (cached) return cached;

  const data = await fetch(`/api/users/${userId}`).then(r => r.json());
  cache.set(`user_${userId}`, data, 300000); // 5 minutes

  return data;
}

// Result: Reduced API calls by 70%, improved response time from 200ms to 5ms

Service Workers and PWA

Service Worker Caching Strategy:

// public/service-worker.ts
const CACHE_NAME = 'app-v1';
const RUNTIME_CACHE = 'runtime-v1';

// Assets to cache on install
const PRECACHE_ASSETS = [
  '/',
  '/offline.html',
  '/styles/main.css',
  '/scripts/main.js',
];

// Install: Cache critical assets
self.addEventListener('install', (event: ExtendableEvent) => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(PRECACHE_ASSETS))
      .then(() => self.skipWaiting())
  );
});

// Activate: Clean old caches
self.addEventListener('activate', (event: ExtendableEvent) => {
  event.waitUntil(
    caches.keys()
      .then(keys => Promise.all(
        keys
          .filter(key => key !== CACHE_NAME && key !== RUNTIME_CACHE)
          .map(key => caches.delete(key))
      ))
      .then(() => self.clients.claim())
  );
});

// Fetch: Network first, fallback to cache
self.addEventListener('fetch', (event: FetchEvent) => {
  const { request } = event;

  // API requests: Network first
  if (request.url.includes('/api/')) {
    event.respondWith(
      fetch(request)
        .then(response => {
          const clone = response.clone();
          caches.open(RUNTIME_CACHE)
            .then(cache => cache.put(request, clone));
          return response;
        })
        .catch(() => caches.match(request))
    );
    return;
  }

  // Static assets: Cache first
  event.respondWith(
    caches.match(request)
      .then(cached => cached || fetch(request))
      .catch(() => caches.match('/offline.html'))
  );
});
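
The worker only takes effect once the page registers it; a minimal registration sketch (assumes the compiled worker is served at /service-worker.js):

// lib/register-sw.ts
export function registerServiceWorker() {
  if ('serviceWorker' in navigator) {
    // Register after load so it does not compete with critical resources
    window.addEventListener('load', () => {
      navigator.serviceWorker
        .register('/service-worker.js')
        .catch((error) => console.warn('Service worker registration failed:', error));
    });
  }
}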

CDN Caching

Edge Caching with Vercel:

// pages/api/data.ts
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  // Set cache headers for CDN
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=3600, stale-while-revalidate=86400'
  );

  const data = await fetchData();
  res.status(200).json(data);
}

// Result: 95% of requests served from edge, latency reduced from 200ms to 20ms

API Response Caching

React Query for Smart Caching:

// lib/api-client.ts
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';

export function useUser(userId: string) {
  return useQuery({
    queryKey: ['user', userId],
    queryFn: () => fetch(`/api/users/${userId}`).then(r => r.json()),
    staleTime: 5 * 60 * 1000, // 5 minutes
    cacheTime: 10 * 60 * 1000, // 10 minutes (renamed gcTime in React Query v5)
    retry: 3,
    refetchOnWindowFocus: false,
  });
}

export function useUpdateUser() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (data: UpdateUserData) =>
      fetch('/api/users', {
        method: 'PUT',
        body: JSON.stringify(data),
      }),
    onSuccess: (_, variables) => {
      // Invalidate and refetch
      queryClient.invalidateQueries({ queryKey: ['user', variables.id] });
    },
  });
}

// Usage
function UserProfile({ userId }: { userId: string }) {
  const { data, isLoading } = useUser(userId);

  if (isLoading) return <Skeleton />;
  return <Profile user={data} />;
}

Backend Optimization

Database Query Optimization

Index Strategy:

-- BEFORE: Full table scan (5000ms for 1M rows)
SELECT * FROM orders
WHERE user_id = 123
AND status = 'pending'
ORDER BY created_at DESC;

-- AFTER: Composite index (15ms)
CREATE INDEX idx_orders_user_status_date
ON orders(user_id, status, created_at DESC);

-- The same query now uses the index; LIMIT bounds the result set
SELECT * FROM orders
WHERE user_id = 123
AND status = 'pending'
ORDER BY created_at DESC
LIMIT 20;

-- Result: 99.7% faster queries

N+1 Query Prevention:

// BEFORE: N+1 queries (1001 queries for 1000 posts)
async function getPosts() {
  const posts = await db.post.findMany();

  for (const post of posts) {
    post.author = await db.user.findUnique({ where: { id: post.authorId } });
  }

  return posts;
}

// AFTER: Single query with join (1 query)
async function getPosts() {
  return await db.post.findMany({
    include: {
      author: true,
    },
  });
}

// Or use DataLoader for batch loading
import DataLoader from 'dataloader';

const userLoader = new DataLoader(async (userIds: readonly string[]) => {
  const users = await db.user.findMany({
    where: { id: { in: [...userIds] } },
  });

  return userIds.map(id => users.find(u => u.id === id));
});

// Result: Query time reduced from 10s to 50ms

API Response Compression

Gzip and Brotli Compression:

// middleware/compression.ts
import compression from 'compression';
import { NextApiRequest, NextApiResponse } from 'next';

export function compressionMiddleware(
  req: NextApiRequest,
  res: NextApiResponse,
  next: () => void
) {
  compression({
    level: 6, // Balance between speed and compression
    threshold: 1024, // Only compress responses > 1KB
    filter: (req, res) => {
      if (req.headers['x-no-compression']) return false;
      return compression.filter(req, res);
    },
  })(req, res, next);
}

// Result: Response size reduced by 70% (500KB -> 150KB)

Connection Pooling

Database Connection Pool:

// lib/db-pool.ts
import { Pool } from 'pg';

const pool = new Pool({
  host: process.env.DB_HOST,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,

  // Connection pool settings
  max: 20,                    // Maximum connections
  min: 5,                     // Minimum idle connections
  idleTimeoutMillis: 30000,   // Close idle connections after 30s
  connectionTimeoutMillis: 2000, // Timeout for acquiring connection
});

// Graceful shutdown
process.on('SIGTERM', async () => {
  await pool.end();
  process.exit(0);
});

export async function query<T>(sql: string, params?: any[]): Promise<T[]> {
  const client = await pool.connect();

  try {
    const result = await client.query(sql, params);
    return result.rows;
  } finally {
    client.release(); // Return connection to pool
  }
}

// Result: Reduced connection overhead from 50ms to 2ms per query

Horizontal Scaling

Load Balancing Strategy:

// lib/load-balancer.ts
export class RoundRobinLoadBalancer {
  private currentIndex = 0;

  constructor(private servers: string[]) {}

  getNextServer(): string {
    const server = this.servers[this.currentIndex];
    this.currentIndex = (this.currentIndex + 1) % this.servers.length;
    return server;
  }
}

// Usage
const loadBalancer = new RoundRobinLoadBalancer([
  'https://api-1.example.com',
  'https://api-2.example.com',
  'https://api-3.example.com',
]);

export async function fetchWithLoadBalancing(endpoint: string) {
  const server = loadBalancer.getNextServer();
  return fetch(`${server}${endpoint}`);
}

Network Optimization

HTTP/2 and HTTP/3

Multiplexing and Preload Hints:

// next.config.js
// These Link headers are preload hints (some hosts also turn them into 103
// Early Hints); HTTP/2 Server Push has been removed from major browsers.
module.exports = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          {
            key: 'Link',
            value: [
              '</styles/critical.css>; rel=preload; as=style',
              '</scripts/main.js>; rel=preload; as=script',
              '</fonts/inter.woff2>; rel=preload; as=font; crossorigin',
            ].join(', '),
          },
        ],
      },
    ];
  },
};

Resource Hints

Preload, Prefetch, Preconnect:

// components/Head.tsx
import Head from 'next/head';

export function OptimizedHead() {
  return (
    <Head>
      {/* Preconnect to critical origins */}
      <link rel="preconnect" href="https://api.example.com" />
      <link rel="preconnect" href="https://cdn.example.com" />
      <link rel="dns-prefetch" href="https://analytics.example.com" />

      {/* Preload critical resources */}
      <link
        rel="preload"
        href="/fonts/inter-var.woff2"
        as="font"
        type="font/woff2"
        crossOrigin="anonymous"
      />

      {/* Prefetch next page resources */}
      <link rel="prefetch" href="/dashboard" />
      <link rel="prefetch" href="/api/user-data" />

      {/* Preload hero image */}
      <link
        rel="preload"
        as="image"
        href="/hero.webp"
        imageSrcSet="/hero-mobile.webp 640w, /hero.webp 1280w"
      />
    </Head>
  );
}

// Result: Reduced latency for critical resources by 200-500ms

Critical CSS

Inline Critical CSS:

// pages/_document.tsx
import Document, { Html, Head, Main, NextScript } from 'next/document';

class MyDocument extends Document {
  render() {
    return (
      <Html>
        <Head>
          {/* Inline critical CSS */}
          <style dangerouslySetInnerHTML={{
            __html: `
              body { margin: 0; font-family: -apple-system, sans-serif; }
              .above-fold { min-height: 100vh; }
              .hero { background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); }
            `
          }} />

          {/* Async-load the full stylesheet. _document is never hydrated, so a
              React onLoad prop would not fire; use a tiny inline loader instead. */}
          <link rel="preload" href="/styles/main.css" as="style" />
          <script
            dangerouslySetInnerHTML={{
              __html:
                "var l=document.createElement('link');l.rel='stylesheet';l.href='/styles/main.css';document.head.appendChild(l);",
            }}
          />
          <noscript>
            <link rel="stylesheet" href="/styles/main.css" />
          </noscript>
        </Head>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

export default MyDocument;

React/Next.js Specific

React.memo and useMemo

Prevent Unnecessary Re-renders:

// BEFORE: Re-renders on every parent update
function ExpensiveList({ items, onSelect }) {
  console.log('Rendering list with', items.length, 'items');

  return (
    <ul>
      {items.map(item => (
        <li key={item.id} onClick={() => onSelect(item)}>
          {item.name}
        </li>
      ))}
    </ul>
  );
}

// AFTER: Only re-renders when items change
import { memo, useMemo, useCallback } from 'react';

const ExpensiveList = memo(function ExpensiveList({ items, onSelect }) {
  console.log('Rendering list with', items.length, 'items');

  const sortedItems = useMemo(
    () => [...items].sort((a, b) => a.name.localeCompare(b.name)),
    [items]
  );

  return (
    <ul>
      {sortedItems.map(item => (
        <ListItem key={item.id} item={item} onSelect={onSelect} />
      ))}
    </ul>
  );
}, (prevProps, nextProps) => {
  return prevProps.items === nextProps.items;
});

const ListItem = memo(function ListItem({ item, onSelect }) {
  return (
    <li onClick={() => onSelect(item)}>
      {item.name}
    </li>
  );
});

// Parent component
function Parent() {
  const [items, setItems] = useState([]);

  const handleSelect = useCallback((item) => {
    console.log('Selected:', item);
  }, []);

  return <ExpensiveList items={items} onSelect={handleSelect} />;
}

// Result: Reduced re-renders by 90%, improved interaction responsiveness

Virtual Scrolling

Handle Large Lists Efficiently:

// components/VirtualList.tsx
import { useVirtualizer } from '@tanstack/react-virtual';
import { useRef, type ReactNode } from 'react';

interface VirtualListProps<T> {
  items: T[];
  renderItem: (item: T, index: number) => ReactNode;
  estimateSize?: number;
}

export function VirtualList<T>({
  items,
  renderItem,
  estimateSize = 50
}: VirtualListProps<T>) {
  const parentRef = useRef<HTMLDivElement>(null);

  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => estimateSize,
    overscan: 5, // Render 5 extra items above/below viewport
  });

  return (
    <div
      ref={parentRef}
      style={{
        height: '600px',
        overflow: 'auto',
      }}
    >
      <div
        style={{
          height: `${virtualizer.getTotalSize()}px`,
          width: '100%',
          position: 'relative',
        }}
      >
        {virtualizer.getVirtualItems().map((virtualItem) => (
          <div
            key={virtualItem.key}
            style={{
              position: 'absolute',
              top: 0,
              left: 0,
              width: '100%',
              height: `${virtualItem.size}px`,
              transform: `translateY(${virtualItem.start}px)`,
            }}
          >
            {renderItem(items[virtualItem.index], virtualItem.index)}
          </div>
        ))}
      </div>
    </div>
  );
}

// Usage: Render 10,000 items smoothly
function App() {
  const items = Array.from({ length: 10000 }, (_, i) => ({
    id: i,
    name: `Item ${i}`,
  }));

  return (
    <VirtualList
      items={items}
      renderItem={(item) => (
        <div style={{ padding: '10px', borderBottom: '1px solid #eee' }}>
          {item.name}
        </div>
      )}
    />
  );
}

// Result: 10,000 items rendered at 60fps instead of freezing browser

Dynamic Imports

Split Heavy Components:

// BEFORE: Chart library bundled in main (300KB)
import { Chart } from 'react-chartjs-2';

function Dashboard() {
  return (
    <div>
      <Chart data={data} />
    </div>
  );
}

// AFTER: Load chart only when needed
import dynamic from 'next/dynamic';

const Chart = dynamic(() => import('react-chartjs-2').then(mod => mod.Chart), {
  loading: () => <ChartSkeleton />,
  ssr: false, // Disable SSR for this component
});

function Dashboard() {
  const [showChart, setShowChart] = useState(false);

  return (
    <div>
      <button onClick={() => setShowChart(true)}>Show Chart</button>
      {showChart && <Chart data={data} />}
    </div>
  );
}

// Result: Initial bundle 300KB smaller, TTI improved by 1.2s

Anti-Patterns

Avoid these common performance mistakes:

  1. Premature Optimization

    // DON'T: Optimize before measuring
    const doubled = useMemo(() => props.value * 2, [props.value]); // memo costs more than the math it saves

    // DO: Measure first, optimize if needed
    const doubled = props.value * 2; // Simple calculation, no memo needed
    
  2. Over-fetching Data

    // DON'T: Request unnecessary fields
    const user = await fetch('/api/users/123').then(r => r.json()); // Returns 50 fields

    // DO: Request only what you need
    const user = await fetch('/api/users/123?fields=name,email').then(r => r.json());
    
  3. Blocking Main Thread

    // DON'T: Heavy computation on main thread
    const result = heavyComputation(largeArray); // Blocks UI
    
    // DO: Use Web Workers
    const worker = new Worker('/worker.js');
    worker.postMessage(largeArray);
    worker.onmessage = (e) => setResult(e.data);
    
  4. Unnecessary State Updates

    // DON'T: Update state in render
    function Component() {
      const [count, setCount] = useState(0);
      setCount(1); // Causes infinite loop
    }
    
    // DO: Update in effects or callbacks
    function Component() {
      const [count, setCount] = useState(0);
      useEffect(() => setCount(1), []);
    }
    
  5. Ignoring Bundle Size

    // DON'T: Import entire library
    import _ from 'lodash'; // 70KB
    
    // DO: Import only needed functions
    import debounce from 'lodash/debounce'; // 5KB
    

Examples

Example 1: Optimizing Initial Load

Scenario: Landing page takes 5.2s to load, LCP is 4.8s

// BEFORE: All resources loaded synchronously
import Hero from './Hero';
import Features from './Features';
import Testimonials from './Testimonials';
import Footer from './Footer';
import './styles/main.css';
import './styles/animations.css';

function LandingPage() {
  return (
    <>
      <Hero />
      <Features />
      <Testimonials />
      <Footer />
    </>
  );
}

// AFTER: Optimized loading strategy
import dynamic from 'next/dynamic';
import { Suspense } from 'react';

// Critical above-the-fold content
import Hero from './Hero';

// Lazy load below-the-fold components
const Features = dynamic(() => import('./Features'));
const Testimonials = dynamic(() => import('./Testimonials'));
const Footer = dynamic(() => import('./Footer'));

function LandingPage() {
  return (
    <>
      {/* Above-the-fold: Load immediately */}
      <Hero />

      {/* Below-the-fold: Lazy load */}
      <Suspense fallback={<div style={{ height: '400px' }} />}>
        <Features />
      </Suspense>

      <Suspense fallback={<div style={{ height: '600px' }} />}>
        <Testimonials />
      </Suspense>

      <Suspense fallback={<div style={{ height: '200px' }} />}>
        <Footer />
      </Suspense>
    </>
  );
}

// Split CSS
// main.css: Critical styles only (inline in <head>)
// animations.css: Load async

// Result:
// - Initial bundle: 450KB -> 120KB (73% reduction)
// - LCP: 4.8s -> 1.9s (60% improvement)
// - TTI: 5.2s -> 2.4s (54% improvement)

Example 2: Optimizing Data Fetching

Scenario: Dashboard makes 50+ API calls, takes 3s to load

// BEFORE: Sequential API calls
async function Dashboard() {
  const user = await fetch('/api/user').then(r => r.json());
  const posts = await fetch('/api/posts').then(r => r.json());
  const comments = await fetch('/api/comments').then(r => r.json());
  const analytics = await fetch('/api/analytics').then(r => r.json());

  // Each request waits for previous: 3000ms total
}

// AFTER: Parallel fetching with React Query
import { useQueries } from '@tanstack/react-query';

function Dashboard() {
  const results = useQueries({
    queries: [
      { queryKey: ['user'], queryFn: () => fetch('/api/user').then(r => r.json()) },
      { queryKey: ['posts'], queryFn: () => fetch('/api/posts').then(r => r.json()) },
      { queryKey: ['comments'], queryFn: () => fetch('/api/comments').then(r => r.json()) },
      { queryKey: ['analytics'], queryFn: () => fetch('/api/analytics').then(r => r.json()) },
    ],
  });

  const [userQuery, postsQuery, commentsQuery, analyticsQuery] = results;

  if (results.some(q => q.isLoading)) {
    return <DashboardSkeleton />;
  }

  return (
    <div>
      <UserHeader user={userQuery.data} />
      <PostsList posts={postsQuery.data} />
      <CommentsPanel comments={commentsQuery.data} />
      <AnalyticsChart data={analyticsQuery.data} />
    </div>
  );
}

// Even better: Server-side aggregation
// pages/api/dashboard.ts
export default async function handler(req, res) {
  const [user, posts, comments, analytics] = await Promise.all([
    db.user.findUnique({ where: { id: req.userId } }),
    db.post.findMany({ where: { userId: req.userId } }),
    db.comment.findMany({ where: { userId: req.userId } }),
    getAnalytics(req.userId),
  ]);

  res.json({ user, posts, comments, analytics });
}

// Result:
// - API calls: 50 -> 1 (98% reduction)
// - Load time: 3000ms -> 400ms (87% improvement)
// - Network traffic: 2MB -> 200KB (90% reduction)

Example 3: Image Gallery Optimization

Scenario: Image gallery with 100 high-res images crashes mobile browsers

// BEFORE: All images loaded at once
function Gallery({ images }: { images: string[] }) {
  return (
    <div className="grid">
      {images.map(url => (
        <img key={url} src={url} alt="" />
      ))}
    </div>
  );
}

// AFTER: Virtual grid with lazy loading
import { useVirtualizer } from '@tanstack/react-virtual';
import { useRef } from 'react';
import Image from 'next/image';

function Gallery({ images }: { images: string[] }) {
  const parentRef = useRef<HTMLDivElement>(null);
  const columnCount = 3;
  const rowCount = Math.ceil(images.length / columnCount);

  const rowVirtualizer = useVirtualizer({
    count: rowCount,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 300, // Row height
    overscan: 2,
  });

  return (
    <div
      ref={parentRef}
      style={{ height: '100vh', overflow: 'auto' }}
    >
      <div style={{ height: `${rowVirtualizer.getTotalSize()}px`, position: 'relative' }}>
        {rowVirtualizer.getVirtualItems().map((virtualRow) => {
          const startIndex = virtualRow.index * columnCount;
          const rowImages = images.slice(startIndex, startIndex + columnCount);

          return (
            <div
              key={virtualRow.key}
              style={{
                position: 'absolute',
                top: 0,
                left: 0,
                width: '100%',
                height: `${virtualRow.size}px`,
                transform: `translateY(${virtualRow.start}px)`,
                display: 'grid',
                gridTemplateColumns: 'repeat(3, 1fr)',
                gap: '10px',
              }}
            >
              {rowImages.map((url, i) => (
                <Image
                  key={startIndex + i}
                  src={url}
                  alt=""
                  width={400}
                  height={300}
                  loading="lazy"
                  placeholder="blur"
                  blurDataURL={generateBlurPlaceholder(url)}
                  sizes="(max-width: 768px) 100vw, 33vw"
                />
              ))}
            </div>
          );
        })}
      </div>
    </div>
  );
}

// Result:
// - Memory usage: 2GB -> 200MB (90% reduction)
// - Images loaded: 100 -> 15 (only visible ones)
// - Scroll FPS: 15fps -> 60fps (smooth scrolling)
// - Mobile: No crashes, works on low-end devices

Example 4: State Management Optimization

Scenario: Complex form with 50+ fields re-renders entire component on every keystroke

// BEFORE: Single state object causes full re-renders
function ComplexForm() {
  const [formData, setFormData] = useState({
    firstName: '',
    lastName: '',
    email: '',
    // ... 47 more fields
  });

  const handleChange = (field: string, value: string) => {
    setFormData(prev => ({ ...prev, [field]: value }));
    // Entire form re-renders on every keystroke
  };

  return (
    <form>
      <Input value={formData.firstName} onChange={e => handleChange('firstName', e.target.value)} />
      <Input value={formData.lastName} onChange={e => handleChange('lastName', e.target.value)} />
      {/* 48 more inputs... */}
    </form>
  );
}

// AFTER: Isolated state with React Hook Form
import { useForm, Controller } from 'react-hook-form';

function ComplexForm() {
  const { control, handleSubmit } = useForm({
    defaultValues: {
      firstName: '',
      lastName: '',
      email: '',
      // ... 47 more fields
    },
  });

  const onSubmit = (data) => {
    console.log(data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <Controller
        name="firstName"
        control={control}
        render={({ field }) => (
          <Input {...field} />
          // Only this input re-renders on change
        )}
      />
      <Controller
        name="lastName"
        control={control}
        render={({ field }) => <Input {...field} />}
      />
      {/* 48 more controlled inputs... */}
    </form>
  );
}

// Result:
// - Re-renders per keystroke: 50 -> 1 (98% reduction)
// - Input lag: 200ms -> 0ms (instant feedback)
// - CPU usage: 80% -> 5% while typing

Example 5: API Response Optimization

Scenario: Product list API returns 500KB of data, most unused

// BEFORE: Over-fetching data
// GET /api/products returns full product objects
[
  {
    id: 1,
    name: "Product A",
    description: "Long description...",
    fullSpecifications: { /* 100 fields */ },
    reviews: [ /* 50 reviews */ ],
    relatedProducts: [ /* 10 products */ ],
    images: [ /* 20 images */ ],
    // ... lots more data
  },
  // ... 99 more products
]

// 500KB response for simple product list

// AFTER: GraphQL with field selection
import { gql, useQuery } from '@apollo/client';

const GET_PRODUCTS = gql`
  query GetProducts($limit: Int!) {
    products(limit: $limit) {
      id
      name
      price
      thumbnail
    }
  }
`;

function ProductList() {
  const { data, loading } = useQuery(GET_PRODUCTS, {
    variables: { limit: 100 },
  });

  if (loading) return <Skeleton />;

  return (
    <div className="grid">
      {data.products.map(product => (
        <ProductCard key={product.id} product={product} />
      ))}
    </div>
  );
}

// Or with REST: Field filtering
// GET /api/products?fields=id,name,price,thumbnail

// Result:
// - Response size: 500KB -> 25KB (95% reduction)
// - Load time: 2000ms -> 150ms (92.5% improvement)
// - Bandwidth saved: 475KB per request
// - Mobile data usage: Significantly reduced

Example 6: Database Query Optimization

Scenario: User dashboard query takes 8 seconds with N+1 problem

// BEFORE: N+1 queries
async function getUserDashboard(userId: string) {
  // 1 query to get user
  const user = await db.user.findUnique({
    where: { id: userId },
  });

  // 1 query to get user's posts
  const posts = await db.post.findMany({
    where: { authorId: userId },
  });

  // N queries to get each post's comments (100 posts = 100 queries)
  for (const post of posts) {
    post.comments = await db.comment.findMany({
      where: { postId: post.id },
    });

    // N queries to get each comment's author (500 comments = 500 queries)
    for (const comment of post.comments) {
      comment.author = await db.user.findUnique({
        where: { id: comment.authorId },
      });
    }
  }

  return { user, posts };
}

// Total: 1 + 1 + 100 + 500 = 602 queries (8000ms)

// AFTER: Optimized with joins and includes
async function getUserDashboard(userId: string) {
  return await db.user.findUnique({
    where: { id: userId },
    include: {
      posts: {
        include: {
          comments: {
            include: {
              author: {
                select: {
                  id: true,
                  name: true,
                  avatar: true,
                },
              },
            },
          },
        },
      },
    },
  });
}

// Total: 1 query (50ms)

// Result:
// - Queries: 602 -> 1 (99.8% reduction)
// - Query time: 8000ms -> 50ms (99.4% improvement)
// - Database load: Drastically reduced
// - Scalability: Can handle 20x more concurrent users

Remember: Performance optimization is an ongoing process. Continuously monitor your metrics, profile your application, and iterate on improvements. The goal is not perfection but providing users with a fast, responsive experience that meets your performance budgets.