Build Real-Time Apps with Server-Sent Events in Node.js: A Comprehensive Guide

Server-sent events (SSE) allow a web server to push data to the browser without the client explicitly requesting it. This enables real-time communication between server and client, which is great for things like chat apps, stock tickers, and monitoring dashboards.

In this comprehensive guide, we’ll explore how to use SSE with Node.js to build fast, efficient, real-time web apps.

What are Server-Sent Events?

Traditionally, a web page sends a request to the server, the server computes a response and sends it back, and the exchange ends there. To learn about new data, the client must keep repeating that request on a timer. This is called “polling”, and it has some downsides:

  • The client has to continuously poll the server to check for updates. This leads to wasteful requests, especially if updates are infrequent.
  • There is latency between an event happening on the server and the client receiving the update. This makes true real-time communication difficult.

With SSE, the flow is reversed. The client establishes a persistent connection to the server, and the server uses this to push new data to the client whenever it wants. The key advantages are:

  • No wasteful polling. The server sends data only when there are updates.
  • Updates are immediate. As soon as data is available, the server can push it.
  • Simple API. SSE has a straightforward JavaScript API for sending and receiving events.

This makes SSE perfect for things like live feeds, notifications, monitoring dashboards, chat apps, and multiplayer games. The client can show constantly updating data in real-time.

How Server-Sent Events Work

Client-side, SSE uses the EventSource API. To open a connection, you create a new EventSource instance, passing it the URL of the server endpoint:

const eventSource = new EventSource('/updates');

This initiates an HTTP request to the server. Crucially, the connection is held open indefinitely.

On the server, the HTTP response looks like this:

HTTP/1.1 200 OK
Content-Type: text/event-stream

data: {"message": "Hello World"}

The key things to note are:

  • The Content-Type is text/event-stream. This tells the client to interpret the response as an SSE stream.
  • The body contains one or more discrete messages, each prefixed with data: and terminated by a blank line.
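Beyond data:, the protocol also defines optional event, id, and retry fields; each field is its own line, and a blank line ends the message. A small helper sketches the framing (formatFrame is a hypothetical name, not part of any library):

```javascript
// Hypothetical helper: builds one SSE frame from a payload plus
// optional event name and id. Multi-line data becomes several
// "data:" lines; a blank line terminates the frame.
function formatFrame({ data, event, id }) {
  let frame = '';
  if (id !== undefined) frame += `id: ${id}\n`;
  if (event !== undefined) frame += `event: ${event}\n`;
  for (const line of String(data).split('\n')) {
    frame += `data: ${line}\n`;
  }
  return frame + '\n';
}

console.log(JSON.stringify(formatFrame({ data: 'Hello World' })));
// "data: Hello World\n\n"
```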

The client will continue listening indefinitely, and can react to messages as they arrive:

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  // React to the new message
};

To push a new message, the server just needs to send a new data event down the connection. This makes streaming data to the client effortless.

When the client is done, it calls eventSource.close() to cleanly close the connection.

Why Use Server-Sent Events?

There are some other options for real-time communication between client and server:

  • AJAX Polling: client periodically pings server for updates
  • WebSockets: maintain socket connection for bidirectional messaging
  • HTTP Long Polling: long-lived HTTP request, server pushes data and closes

So why use SSE over one of these alternatives? Here are some of the advantages:

  • Simple API: SSE has a straightforward async API for sending and receiving messages. Easy to implement.
  • Low overhead: SSE uses plain HTTP so requires less overhead than WebSockets.
  • Automatic reconnect: the EventSource object will automatically reconnect if the connection drops.
  • Cross-domain support: works across origins using standard CORS.
  • Scaling: SSE connections are uni-directional so easier to horizontally scale than WebSockets.
  • Legacy support: supported natively in all modern browsers, and in older ones via a simple polyfill.

The main downside is that SSE is uni-directional. For low-latency bidirectional apps, WebSockets are likely a better fit. But in many cases, SSE provides a lighter-weight option that’s nearly as efficient.

Implementing Server-Sent Events in Node.js

Let’s look at how to implement a basic SSE endpoint in Node.js. We’ll use the Express framework to handle HTTP requests and routing.

First install Express:

npm install express

Then we can create an Express app with a route for our SSE endpoint:

const express = require('express');
const app = express();

app.get('/updates', (req, res) => {
  // SSE implementation here
});

app.listen(3000);

When a client connects to this endpoint, we’ll send events containing a counter that increments every second.

To send an SSE message, we need to format the response correctly: set the text/event-stream headers, then write each message as a data: line followed by a blank line. Plain Express can do this without any extra packages:

app.get('/updates', (req, res) => {
  // Set the SSE headers
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });

  let count = 0;
  const timer = setInterval(() => {
    // Each message is a "data:" line terminated by a blank line
    res.write(`data: ${count++}\n\n`);
  }, 1000);

  // Stop the timer when the client disconnects
  req.on('close', () => clearInterval(timer));
});

The client will receive a stream of increasing counters, one per second!

A message can also carry an optional event name, which the client can subscribe to with addEventListener:

res.write('event: greeting\n');
res.write('data: Hello\n\n');
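One practical wrinkle: proxies and load balancers often close idle HTTP connections. The SSE format allows comment lines starting with a colon, which EventSource silently ignores, so a periodic comment makes a cheap heartbeat. A sketch, where startHeartbeat is a hypothetical helper and the res object is a stand-in for demonstration:

```javascript
// Every intervalMs, write an SSE comment line (": ...") that the
// browser ignores, keeping intermediaries from closing the stream.
function startHeartbeat(res, intervalMs = 15000) {
  const timer = setInterval(() => {
    res.write(': ping\n\n');
  }, intervalMs);
  return () => clearInterval(timer); // call this when the client closes
}

// Demo with a stand-in response object that records writes.
const writes = [];
const fakeRes = { write: (chunk) => writes.push(chunk) };
const stopHeartbeat = startHeartbeat(fakeRes, 10);

setTimeout(() => {
  stopHeartbeat();
  console.log(writes[0]); // ": ping" followed by a blank line
}, 50);
```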

Broadcasting to Multiple Connections

Our current endpoint only handles one SSE connection. To support multiple subscribers, we need to manage all their connections simultaneously.

One option is to store connected res objects in a map keyed by user ID:

const streams = new Map();

function createStream(req, res) {
  const userId = getUserId(req);
  streams.set(userId, res);
  // ...
  res.on('close', () => streams.delete(userId));
}

function broadcast(data) {
  for (const res of streams.values()) {
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  }
}
When we want to broadcast an update, we loop through all open response objects and send the update.

This works, but doesn’t scale past a single process. Every open connection lives in one process’s memory, so a broadcast from one process never reaches clients connected to another.

A better approach is to offload the streams to a separate stream handling service. For example, we could use Redis Streams to publish updates, and have each process consume the stream independently. This allows us to scale horizontally while efficiently sharing streams between processes.

Client-Side Implementation

Now let’s look at how to connect and consume an SSE stream on the client-side. We’ll reuse the /updates endpoint we implemented in Node.js.

First we create a new EventSource instance, passing it the URL of our server endpoint:

const stream = new EventSource('/updates');

This initiates the connection and begins receiving events. Next we can listen for message events:

stream.onmessage = (event) => {
  const data = JSON.parse(event.data);
  // Handle new data
};

We can also register other event listeners:

stream.addEventListener('open', () => {
  console.log('Connection opened');
});

stream.addEventListener('error', (err) => {
  console.error('Error', err);
});
Be sure to handle errors and reconnects. The EventSource object will automatically reconnect if the connection drops, but your code should be robust.

When done, we close the connection:

stream.close();

For advanced use cases, you can open multiple simultaneous connections to consume several streams in parallel. Note that browsers cap concurrent HTTP/1.1 connections per origin (typically six), so heavy multi-stream use works best over HTTP/2.

Example Real-Time App: Live Chat

Let’s demonstrate a real-time web app by building a simple chat room with SSE.

We’ll have an Express + Node.js server that handles user connections and broadcasts messages. The frontend will use vanilla JS to show new messages as they arrive.

Our app architecture will look like:

Browsers -> SSE Connection -> Express Server -> Redis Pub/Sub

The Express server will publish chat messages to a Redis channel. This allows broadcasting to all subscribed clients.

On the frontend, the HTML page is simple:

<!-- index.html -->

<ul id="messages"></ul>

<form id="chat">
  <input type="text" id="messageText" />
  <button type="submit">Send</button>
</form>
It displays a list of messages and a form to send new ones.

The JavaScript establishes the SSE connection:

// client.js

const stream = new EventSource('/chat');

stream.onmessage = (event) => {
  const data = JSON.parse(event.data);
  // Append the new message to the list
  const li = document.createElement('li');
  li.textContent = data.message;
  document.getElementById('messages').appendChild(li);
};

const form = document.getElementById('chat');
form.addEventListener('submit', sendMessage);

// ...

When a new message comes down the stream, we append it to the message list. We also handle sending chat input via forms.

On the server, our Express app handles the /chat endpoint:

// server.js

app.get('/chat', (req, res) => {
  const stream = createSSEStream(req, res);
  // Subscribe this connection to incoming chat messages
});

function createSSEStream(req, res) {
  // Set SSE headers
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });
  return res;
}

This sets up the SSE response headers and returns the open response stream. The server also subscribes to a Redis channel, and anything published there gets written down each open SSE connection.

When the frontend sends a new message, our server handles it:

// Requires app.use(express.json()) so req.body is parsed
app.post('/message', (req, res) => {
  const { message } = req.body;
  // Publish chat message
  redisPublisher.publish('chat', JSON.stringify({ message }));
  res.sendStatus(200);
});
We publish the message to Redis where it will be broadcast to all listening SSE clients.

In just a few dozen lines of code, we’ve built a fast, scalable real-time chat app with Server-Sent Events and Redis!

Production Considerations

In production, there are a few things we should consider:

  • Security: authenticate clients and authorize access to streams.
  • Scaling: distribute load across processes/machines (i.e. with Redis, Kafka).
  • Persistence: save streams to allow rewinding or replay for new subscribers.
  • Reliability: handle client reconnections and dropped server connections gracefully.
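The persistence point pairs naturally with the protocol’s built-in resume mechanism: when EventSource reconnects, it sends the id of the last message it received in a Last-Event-ID header, and the server can replay everything after it. A sketch, where the in-memory log and function names are illustrative (production would persist the log, e.g. in a Redis Stream):

```javascript
// Illustrative in-memory message log with monotonically increasing ids.
const log = [];

function append(data) {
  const id = log.length + 1;
  log.push({ id, data });
  return id;
}

// Build the frames a client missed, based on its Last-Event-ID header.
// Including "id:" in each frame keeps the client's cursor up to date.
function replaySince(lastEventId) {
  const after = Number(lastEventId) || 0;
  return log
    .filter((msg) => msg.id > after)
    .map((msg) => `id: ${msg.id}\ndata: ${msg.data}\n\n`);
}
```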

There are also some great services that provide hosted solutions, like Pusher and Ably. These take care of scaling, persistence, and reliability for you.


Conclusion

Server-sent events provide a simple paradigm for real-time communication between browser and server. With an SSE connection, a server can efficiently stream updates to a client as they happen.

Some key benefits are:

  • Real-time data pushed from server to client
  • Event-based API for reacting to updates
  • Auto reconnects and cross-domain support
  • Excellent browser support with fallbacks

This makes SSE ideal for live feeds, notifications, dashboards, and other apps where data freshness matters. It provides an alternative to polling and WebSockets with lower overhead in many cases.

In this post we saw how to implement SSE both client and server-side. We looked at techniques for managing multiple connections, and built a real-time chat application using Node, Express and Redis.

There is a wide range of possibilities – from multiplayer games to collaborative editing apps. Wherever you need real-time data streaming from server to client, consider the simplicity and power of Server-Sent Events.

Frequently Asked Questions

What are some disadvantages of using Server-Sent Events?

Some potential downsides of SSE include:

  • Only supports uni-directional communication from server to client. For bi-directional apps, WebSockets are better.
  • Holding many long-lived connections open can complicate horizontal scaling, as with any persistent-connection approach.
  • Limited browser support in older browsers like IE11 and below. Requires polyfills.
  • Not ideal for high throughput data – better for intermittent event-based streaming.

How does Server-Sent Events compare to HTTP long polling?

Both SSE and long polling involve keeping an HTTP request open so the server can push data. Key differences:

  • SSE is event-driven: the server can interleave named event types over a single connection, and the client reacts as they arrive. With long polling, the client has to re-request after each update.
  • SSE keeps the connection open indefinitely. Long polling closes after each response.
  • SSE has built-in auto reconnect functionality.
  • SSE has a simpler protocol and API.

What are some examples of Server-Sent Events use cases?

Some common use cases of SSE include:

  • Live activity feeds or news tickers
  • Chat and messaging apps
  • Real-time dashboards and monitoring
  • Multiplayer game updates
  • Stock tickers and market data
  • Notifications and alerts
  • IoT sensor data streaming
  • Progressive image loading

Any application where real-time updates from server to client are needed is a good fit for SSE.

How do I handle reconnections and retries with Server-Sent Events?

The EventSource API will automatically try to reconnect if the connection is closed or loses network. However, you should still properly handle errors and reconnections in your code:

  • Listen for the error event to handle any errors that occur
  • Keep track of reconnect attempts
  • Continue showing stale data until reconnect succeeds
  • Queue up messages/events that occur during disconnect
  • Retry failed updates when reconnect succeeds

Proper retry logic and state management will lead to a smooth experience across flaky network conditions.
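EventSource retries on its own (and the server can tune the delay with a retry: field), but if you implement manual retries, e.g. after a fatal error forces you to recreate the source, exponential backoff keeps reconnect storms in check. A sketch of the delay calculation (the names are illustrative):

```javascript
// Exponential backoff with a cap: 1s, 2s, 4s, ... up to maxDelayMs.
function backoffDelay(attempt, baseMs = 1000, maxDelayMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}

// Usage: wait backoffDelay(n) ms before the n-th reconnect attempt,
// and reset n to 0 once a connection succeeds.
console.log(backoffDelay(2)); // 4000
```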

What are some ways to scale Server-Sent Events to support multiple clients?

Some good ways to scale an SSE architecture:

  • Maintain a map of res objects, one per connected client, and broadcast by looping through them.
  • Offload connection handling to a separate service, for example one written in a language with cheap concurrency like Go.
  • Use a message broker like Redis to publish updates and have stateless processes consume streams.
  • Serve static content and SSE on separate domains for better scaling.
  • Use sticky load balancing so a client always reconnects to the same server.
  • Use a hosted service like Pusher or Ably to manage client connections.

The best approach depends on the infrastructure and expected traffic load.
