The architecture behind distributed WebSocket infrastructure — from a single server to multi-region.
Scaling WebSockets is fundamentally different from scaling stateless HTTP. Persistent connections, per-connection state, and the need to broadcast to thousands of clients create specific architectural requirements. This is how Apinator solves them.
Each WebSocket client maintains a persistent connection to one server node, and its subscriptions live in that node's memory. Standard round-robin load balancing breaks subscriptions because an event handled on one node never reaches clients connected to the others. Sticky sessions (or a pub/sub layer) are required.
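To see why, here is a minimal sketch of the per-node state involved (the names are illustrative, not Apinator's internals): each node tracks subscribers only for its own sockets, so a publish handled elsewhere never reaches them without an extra layer.

```typescript
import { WebSocket } from "ws";

// Subscriptions live in this node's memory only.
const localSubscribers = new Map<string, Set<WebSocket>>();

function subscribe(channel: string, socket: WebSocket): void {
  if (!localSubscribers.has(channel)) localSubscribers.set(channel, new Set());
  localSubscribers.get(channel)!.add(socket);
}

function deliverLocally(channel: string, payload: string): void {
  // Only sockets connected to *this* node are reachable here,
  // which is why a cross-node fan-out layer is needed.
  for (const socket of localSubscribers.get(channel) ?? []) {
    if (socket.readyState === WebSocket.OPEN) socket.send(payload);
  }
}
```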
When you publish an event, Apinator fans it out to all nodes via Redis pub/sub. Each node delivers it to its locally connected subscribers. No direct node-to-node communication is needed.
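A sketch of that fan-out path, using ioredis and the `deliverLocally` helper from the previous sketch; the `events:` channel prefix is an assumption, not Apinator's actual naming.

```typescript
import Redis from "ioredis";

const pub = new Redis(); // connection used for publishing
const sub = new Redis(); // dedicated connection for subscriptions

// Every node receives the event from Redis and hands it only to
// its own locally connected clients (see deliverLocally above).
sub.on("message", (redisChannel, payload) => {
  deliverLocally(redisChannel.replace(/^events:/, ""), payload);
});

// One PUBLISH reaches every node; no node-to-node links required.
async function publishEvent(channel: string, data: unknown): Promise<void> {
  await pub.publish(`events:${channel}`, JSON.stringify(data));
}
```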
Apinator subscribes to a Redis channel only when the first local client joins and unsubscribes when the last one leaves. This keeps Redis subscriptions proportional to actively used channels, minimizing Redis traffic at scale.
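That lazy subscription pattern amounts to a reference count per channel. A sketch under the same assumptions as above (ioredis, `events:` prefix); the counter handling is illustrative.

```typescript
import Redis from "ioredis";

const sub = new Redis();
const localCount = new Map<string, number>();

async function onClientJoin(channel: string): Promise<void> {
  const n = (localCount.get(channel) ?? 0) + 1;
  localCount.set(channel, n);
  if (n === 1) await sub.subscribe(`events:${channel}`); // first local member
}

async function onClientLeave(channel: string): Promise<void> {
  const n = (localCount.get(channel) ?? 1) - 1;
  if (n <= 0) {
    localCount.delete(channel);
    await sub.unsubscribe(`events:${channel}`); // last local member left
  } else {
    localCount.set(channel, n);
  }
}
```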
WebSocket nodes (data plane) have no PostgreSQL dependency. Tenant config is cached in Redis. Adding more nodes is instant — no schema migrations, no shared state.
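In practice a data-plane config read can be as small as the sketch below. The key name and config shape are assumptions for illustration, not Apinator's schema; the point is that the WebSocket node only reads the Redis cache and never touches PostgreSQL.

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Hypothetical shape of a cached tenant config.
interface TenantConfig {
  appId: string;
  maxConnections: number;
}

async function getTenantConfig(tenantId: string): Promise<TenantConfig | null> {
  // Read-only cache lookup; the control plane owns the source of truth.
  const raw = await redis.get(`tenant:${tenantId}:config`);
  return raw ? (JSON.parse(raw) as TenantConfig) : null;
}
```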
Deploy independent data planes per region so users connect to the nearest node. Publish once to any region and the event reaches all regions, each fanning it out through its own Redis pub/sub.
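One possible shape of that cross-region publish, purely as an illustration (the endpoints, and whether Apinator forwards publishes this way, are assumptions): the region that accepts the publish forwards it to each region's own Redis, and local fan-out takes over from there.

```typescript
import Redis from "ioredis";

// Hypothetical per-region Redis endpoints.
const regions = new Map<string, Redis>([
  ["us-east", new Redis("redis://redis.us-east.example.com:6379")],
  ["eu-west", new Redis("redis://redis.eu-west.example.com:6379")],
]);

async function publishGlobally(channel: string, data: unknown): Promise<void> {
  const payload = JSON.stringify(data);
  // Each region's data plane listens only to its own Redis.
  await Promise.all(
    [...regions.values()].map((r) => r.publish(`events:${channel}`, payload))
  );
}
```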
Presence tracking uses Redis hashes (member state) and sorted sets (heartbeat timestamps). Stale members are evicted automatically, even across server restarts.
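A sketch of that presence data model with ioredis: a hash for member state, a sorted set of heartbeat timestamps, and a sweep that evicts members whose last heartbeat is too old. Key names and the 60-second threshold are illustrative. Because both structures live in Redis rather than in node memory, the sweep still works after a server restart.

```typescript
import Redis from "ioredis";

const redis = new Redis();
const STALE_MS = 60_000; // assumed staleness threshold

async function heartbeat(channel: string, memberId: string, state: object) {
  await redis.hset(`presence:${channel}`, memberId, JSON.stringify(state));
  await redis.zadd(`presence:${channel}:heartbeats`, Date.now(), memberId);
}

async function evictStale(channel: string): Promise<string[]> {
  const cutoff = Date.now() - STALE_MS;
  const stale = await redis.zrangebyscore(`presence:${channel}:heartbeats`, 0, cutoff);
  if (stale.length > 0) {
    await redis.hdel(`presence:${channel}`, ...stale);
    await redis.zremrangebyscore(`presence:${channel}:heartbeats`, 0, cutoff);
  }
  return stale; // evicted member ids, e.g. to announce their departure
}
```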
One trigger() call delivers the event to every connected client in every region — via Redis pub/sub fan-out on each data plane.
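A usage sketch: only the trigger() name comes from the description above; the client constructor, import path, and argument shape shown here (a Pusher-style channel/event/payload call) are assumptions.

```typescript
import { Apinator } from "apinator"; // hypothetical import path

const apinator = new Apinator({ appId: "my-app", key: "…" });

// A single call; Redis pub/sub fan-out on each data plane does the rest.
await apinator.trigger("orders", "order.created", { id: 42, total: 99.5 });
```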
Completely free, no credit card required. Deploy in minutes.