A comprehensive deep-dive into every dimension of frontend performance — from network optimization and rendering pipelines to virtualization, infinite scroll, cookies, and real-world monitoring. Techniques I've applied at Intuit, Cisco, and PayPal serving 100M+ users.
Network performance is the foundation of every fast frontend. No matter how optimized your JavaScript is, a slow network will ruin the user experience. At Intuit, our Storefront Data Service (RASS) was sending 500MB payloads — we cut it to 50MB through systematic network optimization. Here's exactly how.
HTTP/1.1 allows only one request in flight per TCP connection at a time (browsers compensate by opening roughly six parallel connections per origin). HTTP/2 introduced multiplexing — multiple concurrent requests over a single connection. HTTP/3 goes further, using the QUIC protocol to eliminate TCP head-of-line blocking entirely.
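You can check which protocol your assets are actually delivered over in the field — not just in devtools — via the Resource Timing API's `nextHopProtocol` field. A minimal sketch (the helper name is mine):

```typescript
// Summarize HTTP versions used for loaded resources.
// PerformanceResourceTiming.nextHopProtocol is 'http/1.1', 'h2', or 'h3'.
function protocolBreakdown(
  entries: Array<{ nextHopProtocol: string }>
): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const e of entries) {
    counts[e.nextHopProtocol] = (counts[e.nextHopProtocol] ?? 0) + 1;
  }
  return counts;
}

// In the browser:
// protocolBreakdown(performance.getEntriesByType('resource') as PerformanceResourceTiming[]);
```

If the breakdown shows assets still coming over `http/1.1`, check that your CDN and origin both have HTTP/2 or HTTP/3 enabled.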
Always compress your text assets — HTML, CSS, JS, JSON. Brotli achieves 15–25% better compression than Gzip on average, especially for repetitive text like JavaScript bundles.
# nginx.conf — Enable Brotli + Gzip fallback
brotli on;
brotli_comp_level 6;
brotli_types text/html text/css application/javascript application/json;
gzip on;
gzip_comp_level 6;
gzip_types text/html text/css application/javascript application/json;
# Always prefer Brotli — 20% smaller than Gzip on JS bundles
Browser resource hints let you guide the browser to fetch critical resources early — before they're discovered in the HTML parsing phase.
<!-- Preconnect: Establish early TCP/TLS connection to CDN -->
<link rel="preconnect" href="https://cdn.bgajwala.in"/>
<!-- Preload: Fetch critical font ASAP, before the CSS that references it is parsed -->
<link rel="preload" href="/fonts/SpaceGrotesk.woff2"
as="font" crossorigin/>
<!-- Prefetch: Fetch next-page JS in browser idle time -->
<link rel="prefetch" href="/chunks/dashboard.js"/>
<!-- DNS-prefetch: Resolve DNS for third-party domains early -->
<link rel="dns-prefetch" href="https://analytics.google.com"/>
A CDN serves assets from the edge node closest to the user. Pair this with aggressive cache headers for immutable assets (hashed filenames) and short TTLs for dynamic content.
# Cache-Control strategy for different asset types
# Immutable JS/CSS bundles (hashed filenames) — cache forever
Cache-Control: public, max-age=31536000, immutable
# e.g. main.a3f9c2b.js
# HTML — always revalidate
Cache-Control: no-cache, must-revalidate
# API responses — short cache with stale-while-revalidate
Cache-Control: public, max-age=60, stale-while-revalidate=300
With hashed filenames you can set max-age=31536000 (1 year) safely — the hash changes whenever the content changes, busting the cache automatically.

The browser rendering pipeline is the sequence of steps that turns your HTML/CSS/JS into pixels on screen. Understanding where it can stall is key to eliminating jank and achieving a consistent 60fps experience.
Layout thrashing happens when JavaScript reads and writes to the DOM in alternating cycles, forcing the browser to recalculate layout multiple times per frame. This is one of the most common causes of jank.
// ❌ BAD — Read/write interleaved = layout thrashing
elements.forEach(el => {
const height = el.offsetHeight; // READ — forces layout
el.style.height = height + 10 + 'px'; // WRITE — invalidates layout
});
// ✅ GOOD — Batch reads, then batch writes
const heights = elements.map(el => el.offsetHeight); // All READs first
elements.forEach((el, i) => {
el.style.height = heights[i] + 10 + 'px'; // All WRITEs after
});
CSS contain tells the browser that an element's subtree is independent — limiting the scope of style recalculations. will-change promotes an element to its own compositor layer, enabling GPU-accelerated animations.
/* CSS Containment — isolate expensive component subtrees */
.widget-card {
contain: layout style paint; /* Browser won't recalc outside this */
}
/* will-change — GPU layer for animated elements */
.animated-sidebar {
will-change: transform; /* Promotes to own layer */
transform: translateZ(0); /* Trigger layer on Safari too */
}
/* ⚠️ Use sparingly — too many layers = memory pressure */
/* Only add will-change just before animation, remove after */
// The React Compiler can handle useMemo/useCallback automatically at build time
// Without it, memoize manually:
// 1. React.memo — skip re-render if props unchanged
const InvoiceRow = React.memo(({ invoice }) => (
<tr><td>{invoice.id}</td><td>{invoice.amount}</td></tr>
));
// 2. useDeferredValue — defer expensive renders
function SearchResults({ query }) {
const deferred = useDeferredValue(query); // non-urgent
return <ExpensiveList filter={deferred} />;
}
// 3. useTransition — mark state updates as non-urgent
const [isPending, startTransition] = useTransition();
startTransition(() => setFilter(value)); // Won't block input
Prefer animating only transform and opacity. These are the only CSS properties that can be animated on the compositor thread without triggering layout or paint — giving you true 60fps animations.

Images and videos are typically the largest assets on any webpage. At PayPal, poorly optimized images were responsible for 60% of our page weight. Here's a systematic approach to media optimization.
| Format | Best For | Compression | Browser Support |
|---|---|---|---|
| WebP | Photos, complex images | 30% smaller than JPEG | 95%+ |
| AVIF | Photos, highest quality | 50% smaller than JPEG | 85%+ |
| SVG | Icons, logos, illustrations | Infinitely scalable | 100% |
| PNG | Screenshots, transparency | Largest file size | 100% |
| JPEG | Photos (legacy fallback) | Standard | 100% |
<!-- Serve correctly sized image for every screen -->
<picture>
<!-- AVIF for modern browsers -->
<source
type="image/avif"
srcset="hero-400.avif 400w, hero-800.avif 800w, hero-1200.avif 1200w"
sizes="(max-width: 768px) 100vw, 50vw"/>
<!-- WebP fallback -->
<source
type="image/webp"
srcset="hero-400.webp 400w, hero-800.webp 800w, hero-1200.webp 1200w"
sizes="(max-width: 768px) 100vw, 50vw"/>
<!-- JPEG ultimate fallback -->
<img src="hero-800.jpg" alt="Hero"
loading="lazy" decoding="async"
width="800" height="450"/> <!-- Always set dimensions! -->
</picture>
// Custom lazy loader with IntersectionObserver
const observer = new IntersectionObserver(
(entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
const img = entry.target as HTMLImageElement;
img.src = img.dataset.src!; // Swap in real src
img.classList.add('loaded');
observer.unobserve(img); // Stop watching
}
});
},
{ rootMargin: '200px' } // Start loading 200px before viewport
);
document.querySelectorAll('img[data-src]')
.forEach(img => observer.observe(img));
Never lazy-load your LCP image — give it loading="eager" and fetchpriority="high". Lazy loading it will destroy your LCP score.

Page load time directly affects revenue. Amazon found that every 100ms of latency costs 1% in sales. Google uses Core Web Vitals as a ranking signal. Here are the strategies that matter most.
// ❌ BAD — Entire app in one bundle
import Dashboard from './Dashboard';
import Reports from './Reports';
import Settings from './Settings';
// ✅ GOOD — Route-based code splitting
const Dashboard = lazy(() => import('./Dashboard'));
const Reports = lazy(() => import('./Reports'));
const Settings = lazy(() => import('./Settings'));
// Wrap in Suspense with skeleton fallback
<Suspense fallback={<DashboardSkeleton />}>
<Dashboard />
</Suspense>
Inline the CSS needed to render above-the-fold content directly in <head>. Load the rest asynchronously. This eliminates render-blocking CSS and dramatically improves FCP.
<!-- Inline critical CSS in <head> -->
<style>
/* Only above-fold styles — nav, hero, fonts */
body { margin: 0; font-family: 'Space Grotesk', sans-serif; }
.nav { position: fixed; top: 0; width: 100%; }
.hero { min-height: 100vh; display: flex; }
</style>
<!-- Load rest of CSS non-blocking -->
<link rel="preload" href="styles.css" as="style"
onload="this.onload=null;this.rel='stylesheet'"/>
<noscript><link rel="stylesheet" href="styles.css"/></noscript>
<!-- async: load in parallel, execute immediately when ready -->
<!-- Use for: analytics, ads, non-critical third parties -->
<script async src="analytics.js"></script>
<!-- defer: load in parallel, execute after HTML parsed -->
<!-- Use for: your own app bundles -->
<script defer src="app.js"></script>
<!-- type="module": deferred by default, supports ESM -->
<script type="module" src="main.js"></script>
JavaScript is the most expensive resource on the web — not just to download, but to parse, compile, and execute. A 500KB JS bundle costs far more than a 500KB image because images don't need to be parsed and executed by the CPU.
// webpack.config.js — Bundle analysis setup
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');
module.exports = {
mode: 'production',
optimization: {
usedExports: true, // Tree shaking — remove unused exports
sideEffects: true, // Respect each package's "sideEffects" field in package.json
splitChunks: {
chunks: 'all', // Split vendor + app bundles
cacheGroups: {
vendor: {
test: /node_modules/,
name: 'vendors',
chunks: 'all'
}
}
}
},
plugins: [new BundleAnalyzerPlugin()] // Visualize bundle
};
The main thread is responsible for UI rendering. Running CPU-heavy logic (data processing, encryption, image manipulation) on the main thread causes jank. Move it to a Web Worker.
// worker.ts — Runs on separate thread
self.addEventListener('message', ({ data }) => {
// Heavy computation — won't block UI
const result = processLargeDataset(data.items);
self.postMessage({ result });
});
// main.ts — Non-blocking call
const worker = new Worker(new URL('./worker.ts', import.meta.url));
worker.postMessage({ items: largeArray });
worker.onmessage = ({ data }) => setResult(data.result);
// Debounce — fire AFTER user stops typing (search input)
function debounce<T extends (...args: any[]) => any>(fn: T, ms: number) {
let timer: ReturnType<typeof setTimeout>;
return (...args: Parameters<T>) => {
clearTimeout(timer);
timer = setTimeout(() => fn(...args), ms);
};
}
const debouncedSearch = debounce(fetchResults, 300);
// Throttle — fire at most once per interval (scroll handler)
function throttle<T extends (...args: any[]) => any>(fn: T, ms: number) {
let last = 0;
return (...args: Parameters<T>) => {
const now = Date.now();
if (now - last >= ms) { last = now; fn(...args); }
};
}
const throttledScroll = throttle(updateScrollProgress, 16); // ~60fps
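Beyond debouncing, long synchronous loops should be broken into chunks that yield back to the event loop, so input handling and painting can happen in between. A minimal sketch (the helper name is mine):

```typescript
// Process a large array in chunks, yielding to the event loop between
// chunks so the browser can handle input and paint during the work.
async function processInChunks<T, R>(
  items: T[],
  fn: (item: T) => R,
  chunkSize = 500
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(fn(item));
    }
    // Prefer scheduler.yield() where available; fall back to setTimeout(0)
    const sched = (globalThis as { scheduler?: { yield?: () => Promise<void> } }).scheduler;
    if (sched?.yield) await sched.yield();
    else await new Promise<void>(resolve => setTimeout(resolve, 0));
  }
  return results;
}
```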
Use scheduler.yield() (available in recent Chromium browsers) or setTimeout(0) to break up long tasks and give the browser a chance to handle user input between chunks.

Rendering 10,000 DOM nodes at once is the fastest way to freeze your UI. Virtualization (also called windowing) renders only the items currently visible in the viewport — keeping DOM nodes at a constant, small number regardless of data size.
import { useState } from 'react';
interface VirtualListProps {
items: any[];
itemHeight: number;
containerHeight: number;
renderItem: (item: any, index: number) => React.ReactNode;
}
function VirtualList({ items, itemHeight, containerHeight, renderItem }: VirtualListProps) {
const [scrollTop, setScrollTop] = useState(0);
const totalHeight = items.length * itemHeight;
// Calculate which items are in viewport + overscan buffer
const startIndex = Math.max(0, Math.floor(scrollTop / itemHeight) - 3);
const endIndex = Math.min(
items.length - 1,
Math.ceil((scrollTop + containerHeight) / itemHeight) + 3
);
const visibleItems = items.slice(startIndex, endIndex + 1);
return (
<div
style={{ height: containerHeight, overflowY: 'auto', position: 'relative' }}
onScroll={e => setScrollTop((e.target as HTMLDivElement).scrollTop)}
>
<div style={{ height: totalHeight }}> {/* Full height spacer */}
<div style={{ transform: `translateY(${startIndex * itemHeight}px)` }}>
{visibleItems.map((item, i) => (
<div key={startIndex + i} style={{ height: itemHeight }}>
{renderItem(item, startIndex + i)}
</div>
))}
</div>
</div>
</div>
);
}
Switching to react-window reduced initial render time from 4.2s → 180ms and cut memory usage by 87%. Always virtualize lists over 100 items.

Two dominant patterns exist for rendering large datasets: pagination (load page by page) and infinite scroll (load more as the user scrolls). Both have real trade-offs — the right choice depends on your use case.
| Criteria | Pagination | Infinite Scroll |
|---|---|---|
| Navigation | Back button works perfectly | Loses scroll position on back |
| Memory usage | Constant — only one page in DOM | Grows with scrolling (unless virtualized) |
| SEO | All pages crawlable | Content below fold may not be indexed |
| User intent | Goal-oriented browsing (e-commerce) | Discovery & social feed content |
| Implementation | Simple | Moderate complexity |
| Accessibility | Keyboard navigation works | Requires extra ARIA + focus management |
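The back-button weakness of infinite scroll is fixable: persist the scroll offset before navigating away and restore it when the feed remounts. A sketch using sessionStorage (the function names and storage key are illustrative):

```typescript
// Save and restore the feed's scroll position across back/forward
// navigation, addressing the "loses scroll position" drawback above.
function saveScroll(key: string): void {
  sessionStorage.setItem(key, String(window.scrollY));
}

function restoreScroll(key: string): void {
  const saved = sessionStorage.getItem(key);
  if (saved !== null) window.scrollTo(0, Number(saved));
}

// Usage: call saveScroll('feed') in the link's click handler,
// restoreScroll('feed') after the feed re-renders on return.
```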
import { useState, useEffect, useRef } from 'react';
// fetchPage(pageNumber) and initialItems come from your data layer
function InfiniteList() {
const [items, setItems] = useState(initialItems);
const [page, setPage] = useState(1);
const [loading, setLoading] = useState(false);
const [hasMore, setHasMore] = useState(true);
const sentinelRef = useRef<HTMLDivElement>(null);
useEffect(() => {
const observer = new IntersectionObserver(
async ([entry]) => {
if (entry.isIntersecting && !loading && hasMore) {
setLoading(true);
const newItems = await fetchPage(page + 1);
if (newItems.length === 0) { setHasMore(false); setLoading(false); return; }
setItems(prev => [...prev, ...newItems]);
setPage(p => p + 1);
setLoading(false);
}
},
{ rootMargin: '400px' } // Pre-fetch 400px before end
);
if (sentinelRef.current) observer.observe(sentinelRef.current);
return () => observer.disconnect();
}, [page, loading, hasMore]);
return (
<div>
{items.map(item => <ItemRow key={item.id} item={item} />)}
<div ref={sentinelRef} /> {/* Invisible trigger */}
{loading && <Spinner />}
{!hasMore && <p>All items loaded</p>}
</div>
);
}
Combine both techniques: use @tanstack/react-virtual to render only visible rows, and load more data as the user approaches the end. This gives you infinite scroll UX with constant memory usage.

Cookies and sessions are foundational to web state management, auth, and performance. Used correctly, they reduce redundant network calls. Used incorrectly, they're a performance and security liability.
| Storage | Capacity | Sent with requests? | Expiry | Access |
|---|---|---|---|---|
| Cookie | 4KB | Yes — every request | Configurable | JS + Server |
| sessionStorage | 5MB | No | Tab close | JS only |
| localStorage | 5–10MB | No | Never (manual) | JS only |
| IndexedDB | 50MB+ | No | Never (manual) | JS only (async) |
// Setting a secure auth cookie (server-side — Node.js/Express)
res.cookie('auth_token', token, {
httpOnly: true, // ✅ JS cannot access — XSS protection
secure: true, // ✅ HTTPS only
sameSite: 'strict', // ✅ CSRF protection
maxAge: 3600000, // 1 hour in ms
path: '/',
});
// Performance cookie — store UI preferences client-side
// These are non-sensitive so JS access is fine
document.cookie = `theme=dark; max-age=31536000; SameSite=Lax`;
Every cookie set on your root domain is sent with every single HTTP request — including images, fonts, and API calls. A bloated cookie jar can add kilobytes of unnecessary overhead to every request.
// ✅ Serve static assets from a cookieless domain
// Instead of: https://bgajwala.in/images/hero.webp
// Use: https://static.bgajwala.in/images/hero.webp
// No cookies set on static.* subdomain = zero cookie overhead on assets
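To make the overhead concrete, a back-of-envelope sketch (the numbers are illustrative):

```typescript
// Estimate total bytes of cookie overhead per page load: the full
// Cookie header is re-sent on every same-origin request.
function cookieOverheadBytes(cookieHeaderBytes: number, requestCount: number): number {
  return cookieHeaderBytes * requestCount;
}

// A 2KB cookie jar across 60 asset/API requests:
// cookieOverheadBytes(2048, 60) === 122880  (~120KB of pure upload overhead)
```

Upload bandwidth is usually far scarcer than download, so this overhead hurts more than its raw size suggests.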
// Session management — JWT vs Server Sessions
const sessionStrategy = {
jwt: {
pros: ['Stateless', 'No DB lookup per request', 'Scales horizontally'],
cons: ['Cannot revoke before expiry', 'Payload size grows'],
},
serverSession: {
pros: ['Instant revocation', 'Small cookie (just session ID)'],
cons: ['Requires session store (Redis)', 'DB hit per request'],
}
};
Never store auth tokens in localStorage or sessionStorage — any XSS payload can read them. Use httpOnly cookies for auth tokens instead.

You can't optimize what you can't measure. Monitoring is not an afterthought — it's how you find real performance issues in production that never appear in local dev. At Intuit, our monitoring stack caught a 3× regression in LCP before it reached 5% of users.
// Measure custom performance marks in your app
// e.g. time from route change to meaningful paint
function measureRouteChange(routeName: string) {
performance.mark(`route-start:${routeName}`);
return () => { // Call when component is fully rendered
performance.mark(`route-end:${routeName}`);
performance.measure(
`route:${routeName}`,
`route-start:${routeName}`,
`route-end:${routeName}`
);
const [entry] = performance.getEntriesByName(`route:${routeName}`);
reportToAnalytics({ metric: 'route_change', route: routeName,
duration: entry.duration });
};
}
// Core Web Vitals — report to your analytics
import { onCLS, onINP, onLCP } from 'web-vitals';
onLCP(({ value }) => sendToAnalytics({ metric: 'LCP', value }));
onINP(({ value }) => sendToAnalytics({ metric: 'INP', value }));
onCLS(({ value }) => sendToAnalytics({ metric: 'CLS', value }));
A performance budget is a set of limits for metrics you care about. Enforce it in CI so regressions are caught before merge — not after deploy.
// lighthouserc.js — Fail CI if performance regresses
module.exports = {
ci: {
assert: {
assertions: {
'categories:performance': ['error', { minScore: 0.9 }],
'first-contentful-paint': ['error', { maxNumericValue: 1500 }],
'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
'total-blocking-time': ['warn', { maxNumericValue: 200 }],
'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
'uses-optimized-images': ['warn', { maxLength: 0 }],
'resource-summary:script:size': ['error', { maxNumericValue: 300000 }],
}
}
}
};