Next.js WebSockets in 2026: The Complete Guide (App Router + Pages Router)
Add WebSockets to Next.js in 2026. Full guide for App Router and Pages Router with working code: custom server, ws, Socket.IO, and managed realtime APIs.
If you have ever tried to wire WebSockets into a Next.js app, you have probably hit the same wall thousands of other developers have: Route Handlers cannot upgrade to WebSockets, Vercel cannot hold a persistent TCP connection, and the top Google results for "next.js websocket" are either four years out of date or aimed at Socket.IO on a custom Express server. In 2026, with Next.js 15 and the App Router as the default, the shape of the problem has not really changed — but the available solutions have matured, and one of them (a managed realtime API) sidesteps the problem entirely.
This guide covers every realistic option for adding WebSockets to a Next.js app in 2026: custom server.js, a separate ws server, Socket.IO, and managed realtime APIs. You get working code for each, clear tradeoffs, and a map of when App Router specifics matter. By the end you will know which option fits your constraints.
TL;DR
- Next.js does not ship native WebSocket support. Route Handlers cannot upgrade a request.
- On Vercel, you cannot host a WebSocket server. Vercel Functions terminate after the request completes.
- Options:
server.js (Node only, lose Vercel), a separate ws server (recommended if you host yourself), Socket.IO (ecosystem tradeoff), or a managed realtime API (simplest, keep Vercel).
- For App Router apps on Vercel, a managed realtime API is almost always the right call.
Why Next.js does not ship native WebSocket support
The reason is not an oversight — it is a direct consequence of Next.js's deployment model. Next.js is optimized for serverless and edge runtimes. Vercel Functions, AWS Lambda, Cloudflare Workers: these are stateless, short-lived request handlers. They spin up, process a request, return a response, and terminate.
A WebSocket is the opposite. It is a persistent TCP connection between one browser and one server process. Once the handshake completes, that socket stays open for minutes or hours, pushing messages in both directions. The connection has state: which channels it is subscribed to, which user it belongs to, what the last message it sent was. A serverless function cannot hold that state because it does not exist between invocations.
This mismatch shows up concretely in the Next.js API. Route Handlers give you a Request and expect a Response back. There is no socket, no upgrade event, no way to call webSocketServer.handleUpgrade() on the underlying connection. The Next.js runtime wraps the raw HTTP request in its own abstraction, and that abstraction is designed to be portable across runtimes that often do not support WebSockets at all.
So the question "how do I add WebSockets to Next.js" is really two separate questions: where does the WebSocket endpoint live, and how does your Next.js app talk to it?
Option 1: Custom server.js
Next.js supports a "custom server" pattern: a plain Node.js entry point that you write yourself, which both serves your Next.js pages and runs any other Node code you want — including a WebSocket server.
// server.js
import { createServer } from "node:http";
import { WebSocketServer } from "ws";
import next from "next";

const app = next({ dev: process.env.NODE_ENV !== "production" });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = createServer((req, res) => handle(req, res));
  const wss = new WebSocketServer({ noServer: true });

  wss.on("connection", (ws) => {
    ws.on("message", (msg) => {
      ws.send(`echo: ${msg.toString()}`);
    });
  });

  server.on("upgrade", (req, socket, head) => {
    if (req.url === "/ws") {
      wss.handleUpgrade(req, socket, head, (ws) => {
        wss.emit("connection", ws, req);
      });
    }
  });

  server.listen(3000, () => console.log("Ready on http://localhost:3000"));
});
Start with node server.js. Your Next.js pages are served as usual; /ws becomes a WebSocket endpoint in the same process.
Tradeoffs. This works, but it disables a lot of what makes Next.js productive. You lose Vercel — Vercel does not run custom servers. You lose automatic edge routing, ISR-on-demand behavior, the streaming tweaks the platform applies, and zero-config scaling. Every deploy has to be a Dockerfile or a long-running Node process on a VM. For a solo app or a homelab it is fine. For a production team that picked Next.js because of Vercel, giving up Vercel to add realtime is a big swap.
Option 2: Separate ws server (recommended self-managed pattern)
If you want to keep Next.js on Vercel (or any serverless platform) and still host your own WebSocket endpoint, the cleanest pattern is a separate process. Run a minimal Node server with ws on a VM, container, or platform-as-a-service like Fly.io, Railway, or Render. Your Next.js app talks to it from the browser for the realtime parts and from Route Handlers for server-triggered events.
// ws-server/index.js
import { WebSocketServer } from "ws";
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const server = createServer();
const wss = new WebSocketServer({ server });

// channel -> Set<WebSocket>
const channels = new Map();

function subscribe(ws, channel) {
  if (!channels.has(channel)) channels.set(channel, new Set());
  channels.get(channel).add(ws);
  ws.channels ??= new Set();
  ws.channels.add(channel);
}

function unsubscribeAll(ws) {
  for (const channel of ws.channels ?? []) {
    channels.get(channel)?.delete(ws);
  }
}

wss.on("connection", (ws) => {
  ws.id = randomUUID();
  ws.on("message", (raw) => {
    let msg;
    try {
      msg = JSON.parse(raw.toString());
    } catch {
      return; // a malformed frame would otherwise throw and crash the process
    }
    if (msg.type === "subscribe") subscribe(ws, msg.channel);
  });
  ws.on("close", () => unsubscribeAll(ws));
});

// HTTP endpoint the Next.js Route Handler calls to publish an event
server.on("request", async (req, res) => {
  if (req.method === "POST" && req.url === "/publish") {
    if (req.headers["x-publish-secret"] !== process.env.PUBLISH_SECRET) {
      res.writeHead(401).end();
      return;
    }
    const body = await new Promise((resolve) => {
      let data = "";
      req.on("data", (chunk) => (data += chunk));
      req.on("end", () => resolve(data));
    });
    const { channel, event, payload } = JSON.parse(body);
    const subscribers = channels.get(channel) ?? new Set();
    for (const ws of subscribers) {
      if (ws.readyState === 1 /* OPEN */) {
        ws.send(JSON.stringify({ channel, event, payload }));
      }
    }
    res.writeHead(204).end();
    return;
  }
  res.writeHead(404).end();
});

server.listen(8080, () => console.log("ws server on :8080"));
From a Next.js Route Handler:
// app/api/orders/route.ts
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const order = await req.json();
  // ... persist the order ...

  await fetch(`${process.env.WS_SERVER_URL}/publish`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-publish-secret": process.env.PUBLISH_SECRET!
    },
    body: JSON.stringify({
      channel: `user-${order.userId}`,
      event: "order.created",
      payload: order
    })
  });

  return NextResponse.json(order);
}
And from a React component:
"use client";
import { useEffect, useState } from "react";
export function OrderFeed({ userId }: { userId: string }) {
const [orders, setOrders] = useState<Order[]>([]);
useEffect(() => {
const ws = new WebSocket(process.env.NEXT_PUBLIC_WS_URL!);
ws.onopen = () => {
ws.send(JSON.stringify({ type: "subscribe", channel: `user-${userId}` }));
};
ws.onmessage = (e) => {
const { event, payload } = JSON.parse(e.data);
if (event === "order.created") setOrders((prev) => [payload, ...prev]);
};
return () => ws.close();
}, [userId]);
return /* ... */;
}
Tradeoffs. You keep Vercel for Next.js and run a single Node process for the socket layer — a clear split. The operational cost is real, though: sticky sessions on the load balancer, reconnect logic on the client, horizontal scaling via Redis pub/sub once you outgrow one node, TLS certificates, and authentication on the upgrade request. It adds up. If you enjoy infrastructure, this pattern is the most flexible self-managed option. If you do not, skip to Option 4.
For deeper reading on scaling this pattern past one node, see WebSocket Load Balancing Explained and How WebSockets Scale.
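The reconnect logic mentioned in the tradeoffs is worth sketching, because every self-managed client needs it. Below is a minimal sketch, not a hardened client: the delay grows exponentially per failed attempt, is capped, and gets random jitter so a fleet of clients does not stampede the server after a restart. The function names and parameters are illustrative, and the subscribe message follows the protocol of the ws server above.

```typescript
// Exponential backoff with "equal jitter": the delay for attempt n is drawn
// uniformly from [cap/2, cap], where cap = min(maxMs, baseMs * 2^n).
export function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return cap / 2 + Math.random() * (cap / 2);
}

// Sketch of a browser client that reconnects and re-subscribes.
export function connectWithRetry(url: string, channel: string, onEvent: (data: string) => void): void {
  let attempt = 0;
  const open = () => {
    const ws = new WebSocket(url); // global WebSocket (browser, or modern Node)
    ws.onopen = () => {
      attempt = 0; // reset backoff once a connection sticks
      ws.send(JSON.stringify({ type: "subscribe", channel }));
    };
    ws.onmessage = (e) => onEvent(String(e.data));
    ws.onclose = () => setTimeout(open, backoffDelay(attempt++));
  };
  open();
}
```

The jitter matters more than the exponent: without it, every client that disconnected at the same moment reconnects at the same moment, and the thundering herd knocks the server over again.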
Option 3: Socket.IO with a custom server
Socket.IO is the same "Option 1" pattern with more batteries. It gives you rooms, acknowledgements, automatic reconnect with exponential backoff, and a long-poll fallback for clients behind restrictive proxies. The tradeoff is the same: a custom server.js, no Vercel.
// server.js
import { createServer } from "node:http";
import { Server } from "socket.io";
import next from "next";

const app = next({ dev: process.env.NODE_ENV !== "production" });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = createServer((req, res) => handle(req, res));
  const io = new Server(server, { path: "/api/socket_io" });

  io.on("connection", (socket) => {
    socket.on("join", (room) => socket.join(room));
    socket.on("message", ({ room, text }) => {
      io.to(room).emit("message", { text, at: Date.now() });
    });
  });

  server.listen(3000);
});
// In a "use client" component
import { io } from "socket.io-client";
const socket = io({ path: "/api/socket_io" });
socket.emit("join", "room-42");
socket.on("message", (m) => console.log(m));
Tradeoffs. Socket.IO is not a pure WebSocket — it is its own protocol layered on top. Clients and servers must both speak it. The ergonomics are nice for simple apps, but the lock-in is meaningful. If you ever want to move the socket layer behind a different server, you carry Socket.IO with you. And you still cannot deploy to Vercel.
Option 4: Managed realtime API (the pragmatic choice)
The fourth option skips the problem entirely. Instead of running a WebSocket server, you use a managed service. Your Next.js app stays on Vercel. Route Handlers publish events over plain HTTP (a fetch call). Client components subscribe to channels with a small SDK. The managed service runs the persistent WebSocket connections, reconnects, sticky sessions, and cross-node delivery.
Three major providers follow this shape: Pusher, Ably, and Apinator. The API surface is similar across all three. The pricing models are not — Pusher and Ably meter per message and per peak connection, which gets expensive fast. Apinator is free to use with no per-message billing, which is why the examples in this section use it. The pattern translates directly to the other two.
// app/api/orders/route.ts
import { ApinatorServer } from "@apinator/server";
import { NextResponse } from "next/server";

const apinator = new ApinatorServer({
  appId: process.env.APINATOR_APP_ID!,
  key: process.env.APINATOR_KEY!,
  secret: process.env.APINATOR_SECRET!
});

export async function POST(req: Request) {
  const order = await req.json();
  // ... persist the order ...
  await apinator.trigger(`user-${order.userId}`, "order.created", order);
  return NextResponse.json(order);
}
// components/OrderFeed.tsx
"use client";

import { useEffect, useState } from "react";
import { RealtimeClient } from "@apinator/client";

const client = new RealtimeClient(process.env.NEXT_PUBLIC_APINATOR_KEY!, {
  host: "wss://rt.apinator.io"
});

// Illustrative shape; replace with your real order type.
type Order = { id: string; userId: string };

export function OrderFeed({ userId }: { userId: string }) {
  const [orders, setOrders] = useState<Order[]>([]);

  useEffect(() => {
    const channel = client.subscribe(`user-${userId}`);
    channel.bind("order.created", (order: Order) => {
      setOrders((prev) => [order, ...prev]);
    });
    return () => client.unsubscribe(`user-${userId}`);
  }, [userId]);

  return /* ... */;
}
Tradeoffs. You are relying on a third party for the socket layer. That is the whole pitch: the tradeoff is operational burden for a service relationship. For most teams that is a good trade — the people who want WebSockets usually do not want to run load balancers with sticky sessions and a Redis cluster. The honest caveat is metered pricing: with Pusher or Ably, your bill scales with traffic, and a viral moment can be expensive. Apinator sidesteps this by being free to use — no per-message, no per-connection, no peak-connection charges. If you want framework-specific guidance, the Realtime for Next.js page has a full walkthrough.
App Router specifics (Next.js 14 / 15)
Most of this guide is framework-agnostic, but App Router has a few quirks worth calling out.
Route Handlers cannot host WebSockets. This is worth repeating because it is the first thing most people try. app/api/ws/route.ts with a GET handler that tries to upgrade the request will not work — NextRequest has no access to the underlying socket. The upgrade has to happen outside the Next.js runtime, in a custom server or a separate process.
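For completeness, here is what a Route Handler at that path can honestly do instead: refuse the upgrade explicitly and point the client at the real endpoint. This is a sketch, and NEXT_PUBLIC_WS_URL is an assumed env var naming your external socket server.

```typescript
// app/api/ws/route.ts -- cannot upgrade, but can say where to connect instead.
export async function GET(): Promise<Response> {
  // The handler only sees the Fetch API Request/Response pair; there is no
  // underlying socket to pass to handleUpgrade(). Responding with
  // 426 Upgrade Required is the honest answer.
  return new Response(
    JSON.stringify({
      error: "WebSocket upgrade is not supported by Route Handlers",
      connectTo: process.env.NEXT_PUBLIC_WS_URL ?? null
    }),
    { status: 426, headers: { "content-type": "application/json" } }
  );
}
```

A client that hits this endpoint by mistake gets a machine-readable redirect target instead of a hung connection attempt.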
Server Components cannot hold a socket. Server Components render on the server per request and stream back HTML. They are not long-lived. You cannot "open a WebSocket in a Server Component" — all WebSocket work lives in "use client" components, and you usually wrap subscription logic in a custom hook (useRealtimeChannel(channelName)) so the component body stays clean.
Streaming and Suspense are separate. Both are server-to-client data transport mechanisms, but they are not WebSockets. A Server Component can return a Promise and Suspense will stream the resolved UI to the client. This is great for a single server-generated payload. It is not a persistent push channel — you cannot send follow-up messages after the initial response.
Edge runtime has tighter limits. If you deploy a Route Handler on Vercel's Edge runtime (for lower latency), a subset of Node APIs are unavailable, including some of what ws and socket.io need. Publishing to a managed realtime API is one fetch call, so it works fine on Edge. Hosting your own socket layer on Edge is not practical.
Middleware runs before Route Handlers. Middleware is a good place to attach a short-lived auth token to a request bound for your WebSocket server, which the socket server can verify in its upgrade handler. More on auth next.
Authentication for WebSockets in Next.js
The fundamental pattern: your user authenticates with Next.js normally (NextAuth, Clerk, Supabase Auth, a plain session cookie — whatever). Your Next.js server mints a short-lived, signed token that proves the user can subscribe to certain channels. The client sends that token to the WebSocket server during subscription, and the WebSocket server verifies it.
// app/api/realtime/auth/route.ts
import { ApinatorServer } from "@apinator/server";
import { NextResponse } from "next/server";
import { auth } from "@/lib/auth";

const apinator = new ApinatorServer({
  appId: process.env.APINATOR_APP_ID!,
  key: process.env.APINATOR_KEY!,
  secret: process.env.APINATOR_SECRET!
});

export async function POST(req: Request) {
  const session = await auth();
  if (!session) return NextResponse.json({ error: "unauthorized" }, { status: 401 });

  const { channel, socketId } = await req.json();

  // Only allow subscribing to your own user's channel
  if (!channel.startsWith(`private-user-${session.user.id}`)) {
    return NextResponse.json({ error: "forbidden" }, { status: 403 });
  }

  const token = apinator.authenticateChannel(socketId, channel);
  return NextResponse.json({ token });
}
// client side
const client = new RealtimeClient(process.env.NEXT_PUBLIC_APINATOR_KEY!, {
  host: "wss://rt.apinator.io",
  authEndpoint: "/api/realtime/auth"
});
client.subscribe(`private-user-${userId}`);
The same pattern applies to a self-hosted ws server: the client sends the token as the first message, the server verifies the signature and the claim, and either allows the subscription or closes the connection. Using HMAC-SHA256 over socketId + channel is the standard.
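That HMAC scheme fits in a dozen lines with node:crypto. A minimal sketch, with illustrative function names: the Next.js side signs, the socket server verifies, and both share the same secret.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Next.js side: mint the token after checking the session.
export function signChannel(socketId: string, channel: string, secret: string): string {
  return createHmac("sha256", secret).update(`${socketId}:${channel}`).digest("hex");
}

// Socket-server side: verify before honoring a subscribe.
export function verifyChannel(socketId: string, channel: string, token: string, secret: string): boolean {
  const expected = Buffer.from(signChannel(socketId, channel, secret), "hex");
  const given = Buffer.from(token, "hex");
  // timingSafeEqual throws on length mismatch, so check length first.
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

Using timingSafeEqual instead of === avoids leaking information about how many leading bytes of a guessed token were correct.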
Two gotchas worth knowing: cookies on the upgrade request are unreliable in browsers (some skip them for cross-origin WebSocket connections), so auth tokens are usually carried in the first WebSocket message or as a query parameter. And if you put auth only on the Next.js endpoint that mints the token but forget to verify the token on the socket server, anyone who guesses the channel name subscribes freely — always verify on both sides.
Scaling WebSockets in Next.js
Once you have any option from above working, the scaling questions arrive quickly. A single Node process can usually hold 10,000–50,000 idle connections. Past that, or as soon as you want redundancy, you need two things:
- Sticky sessions at the load balancer, so a given client always reconnects to the node that already knows its subscriptions.
- Pub/sub fan-out (typically Redis) so a publish() on one node reaches subscribers on every other node.
The patterns are well-documented — How WebSockets Scale walks through Redis pub/sub fan-out and the ref-counted subscription trick, and WebSocket Load Balancing Explained covers why round-robin breaks WebSockets and what to use instead.
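The ref-counted subscription trick mentioned above fits in a short class: each node holds exactly one upstream subscription per channel, no matter how many local sockets want it. The pub/sub transport is injected here so the sketch runs standalone; in production it would be a Redis client, and the interface is an assumption for illustration.

```typescript
// Illustrative upstream interface (a Redis client would satisfy it).
type Upstream = {
  subscribe(channel: string): void;
  unsubscribe(channel: string): void;
};

export class RefCountedSubscriptions {
  private counts = new Map<string, number>();
  constructor(private upstream: Upstream) {}

  add(channel: string): void {
    const n = (this.counts.get(channel) ?? 0) + 1;
    this.counts.set(channel, n);
    if (n === 1) this.upstream.subscribe(channel); // first local subscriber
  }

  remove(channel: string): void {
    const n = this.counts.get(channel);
    if (n === undefined) return; // not subscribed locally
    if (n === 1) {
      this.counts.delete(channel);
      this.upstream.unsubscribe(channel); // last local subscriber left
    } else {
      this.counts.set(channel, n - 1);
    }
  }
}
```

Call add() when a socket subscribes and remove() when it unsubscribes or closes; the upstream only sees the first and last of each.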
With a managed realtime API, all of this is someone else's problem. That is the value proposition in one sentence.
Which option should you pick?
- You are on Vercel and want realtime working today. Managed realtime API. No custom server, no extra process, no sticky-session tuning. A free-to-use platform like Apinator removes the cost concern too.
- You host Next.js yourself on a VM and already operate a Redis cluster. A separate ws server is a clean fit. You get full control and no third-party dependency.
- You want Socket.IO's rooms and fallback transports specifically. Custom server.js with Socket.IO, deployed as a long-running Node process.
- You are prototyping locally and do not care about production yet. Any of the above. The custom server.js is the fastest way to see a message round-trip in a single node server.js.
The common mistake is to treat WebSockets in Next.js as a framework problem. It is really a hosting problem. Once you decide where the persistent socket lives — in your Next.js process, in a separate process you run, or in a managed service you do not run — every other decision follows.
If you want the least friction, pick the managed option. If you want the most control, run ws in a separate process. Either way: Route Handlers cannot upgrade, Vercel Functions cannot hold connections, and pretending otherwise is the source of 90% of the outdated advice on the internet.