Feature Flags in Node.js: Express and Fastify Guide

Why Feature Flags in Node.js?
Node.js powers a huge slice of production backends — REST APIs, GraphQL gateways, background workers, BFF layers, real-time services. All of them share the same release problem: you want to ship code continuously, but you do not want every deploy to be a product change.
Feature flags in Node.js decouple deployment from release. You push code to production behind a flag and decide later who sees the new behavior, when, and under what conditions. If something breaks, you flip the flag off in the dashboard — no redeploy, no rollback PR, no pager at 3am.
This guide covers the practical side: how to wire feature flags into Express and Fastify applications using the Rollgate Node.js SDK, how to target specific users, how to roll out gradually, and the production gotchas that every team hits sooner or later.
Quick Start: Feature Flags in Node.js
Let us get a flag running end-to-end. Install the SDK:
npm install @rollgate/sdk-node
Then wire it up:
import { RollgateClient } from '@rollgate/sdk-node';

const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: true, // real-time updates over SSE
});

await rollgate.init();

if (rollgate.isEnabled('new-checkout', false)) {
  console.log('New checkout flow enabled');
} else {
  console.log('Legacy checkout');
}
That is the whole setup. The SDK pulls rules from Rollgate's API, caches them in memory, and keeps them fresh in the background. With enableStreaming: true the client keeps a Server-Sent Events connection open and applies changes within ~50ms of a flag flip. The second argument to isEnabled is the default value returned if the client is not yet initialized or the flag does not exist.
Evaluation is local, in-process. No network hop per flag check — the rules are already in memory, so you can evaluate thousands of flags per request without adding latency to the hot path.
The DIY Approach (and Its Limitations)
Before reaching for a dedicated platform, most teams start with environment variables:
const flags = {
  newCheckout: process.env.NEW_CHECKOUT === 'true',
  darkMode: process.env.DARK_MODE === 'true',
};

if (flags.newCheckout) {
  // ...
}
This works, for exactly one week. Then you hit the limitations:
- Every flag change requires a redeploy — the whole point of flags is to avoid that
- No gradual rollouts — it is all-or-nothing for every user
- No targeting — you cannot enable a feature for beta testers, enterprise plans, or a specific region
- No kill switch — if the new code breaks, rolling back means another deploy cycle
- No audit trail — you do not know who flipped what, when, or why
The next evolution is usually a config file or a database table. You solve the redeploy problem but inherit a new one: keeping the config in sync across every instance of your Node.js service, and refreshing it without restarts. That is where a purpose-built feature flag platform earns its keep.
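To make that sync problem concrete, here is a hedged sketch of the database-table stage. Everything is hypothetical: `fetchFlagRows` stands in for a real SQL query, and the in-memory `Map` is the per-instance cache every copy of your service has to keep refreshing on its own.

```typescript
// Hypothetical sketch of the "flags in a database table" stage.
type FlagRow = { key: string; enabled: boolean };

// Stand-in for a real table; in production this would be a SQL query
// like `SELECT key, enabled FROM feature_flags`.
const table: FlagRow[] = [
  { key: 'newCheckout', enabled: true },
  { key: 'darkMode', enabled: false },
];

async function fetchFlagRows(): Promise<FlagRow[]> {
  return table;
}

// Each instance keeps its own in-memory copy of the rules.
const cache = new Map<string, boolean>();

async function refreshFlags(): Promise<void> {
  const rows = await fetchFlagRows();
  cache.clear();
  for (const row of rows) cache.set(row.key, row.enabled);
}

function isEnabled(key: string, fallback = false): boolean {
  return cache.get(key) ?? fallback;
}

// Every instance runs its own poller, so instances can disagree for up
// to `intervalMs` after a change. That drift is the new problem.
function startPolling(intervalMs = 30_000): ReturnType<typeof setInterval> {
  return setInterval(() => void refreshFlags(), intervalMs);
}
```

This already looks a lot like a flag SDK's internals, except you now own the refresh loop, the drift window, and the failure modes.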
Using Rollgate with Express
Express is still the workhorse of the Node.js backend ecosystem. Here is a clean pattern: attach the Rollgate client to the request via middleware, then evaluate flags inside route handlers with the user context from the request.
import express from 'express';
import { RollgateClient } from '@rollgate/sdk-node';

const app = express();
const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: true,
});

await rollgate.init();

// Attach the client and evaluation context to the request
app.use((req, res, next) => {
  const userId = req.headers['x-user-id'] as string | undefined;
  req.flags = {
    isEnabled: (key: string, fallback = false) =>
      rollgate.isEnabled(key, fallback, userId ? { userId } : undefined),
  };
  next();
});

app.get('/checkout', (req, res) => {
  if (req.flags.isEnabled('new-checkout')) {
    return res.json({ version: 'v2', flow: 'stripe-elements' });
  }
  return res.json({ version: 'v1', flow: 'legacy-form' });
});

app.listen(3000);
The EvalContext you pass as the third argument lets you evaluate a flag for a specific user without mutating client-level state. Each request gets its own targeting evaluation based on userId and any attributes you forward (plan, region, role, anything your targeting rules reference).
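One TypeScript detail: assigning `req.flags` in the middleware above will not typecheck in a strict project, because Express's `Request` type has no `flags` property. A declaration-merging fragment fixes that. This is a sketch; adjust the `Flags` shape to whatever your middleware actually attaches.

```typescript
// Type-only fragment: declaration merging so `req.flags` typechecks.
interface Flags {
  isEnabled: (key: string, fallback?: boolean) => boolean;
}

declare global {
  namespace Express {
    interface Request {
      flags: Flags;
    }
  }
}

export {}; // keep this file a module so the global augmentation applies
```

Put this in a `.d.ts` file (or any module included by your tsconfig) and the route handlers get full autocomplete on `req.flags`.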
Remember to shut the client down cleanly on SIGTERM so the SSE connection and telemetry buffers drain properly:
process.on('SIGTERM', async () => {
  await rollgate.close();
  process.exit(0);
});
Using Rollgate with Fastify
Fastify is the faster, more opinionated alternative. The pattern is the same — a plugin that decorates the request — but with Fastify's decorator API:
import Fastify from 'fastify';
import { RollgateClient } from '@rollgate/sdk-node';

const fastify = Fastify({ logger: true });
const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: true,
});

await rollgate.init();

fastify.decorateRequest('flags', null);

fastify.addHook('onRequest', async (request) => {
  const userId = request.headers['x-user-id'] as string | undefined;
  request.flags = {
    isEnabled: (key: string, fallback = false) =>
      rollgate.isEnabled(key, fallback, userId ? { userId } : undefined),
  };
});

fastify.get('/api/experiments', async (request) => {
  return {
    pricing: request.flags.isEnabled('new-pricing-ui'),
    search: request.flags.isEnabled('semantic-search'),
  };
});

fastify.addHook('onClose', async () => {
  await rollgate.close();
});

await fastify.listen({ port: 3000 });
One thing to watch: if you are running Fastify with logger: true and want the flag value in every log line, add it to the request context with request.log.child({ flags: [...] }) inside the onRequest hook. Observability on which flags evaluated for which request is the kind of detail that saves you hours in an incident.
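The idea behind `child()` bindings is simple enough to sketch without Fastify. The `Logger` type below is a minimal stand-in for pino's interface (not the real type): a child logger carries its bindings into every subsequent log line.

```typescript
// Minimal stand-in for a pino-style logger with child bindings.
type Logger = {
  info: (msg: string) => string; // returns the line so the sketch is testable
  child: (bindings: Record<string, unknown>) => Logger;
};

function makeLogger(bindings: Record<string, unknown> = {}): Logger {
  return {
    info: (msg) => {
      const line = JSON.stringify({ ...bindings, msg });
      console.log(line);
      return line;
    },
    child: (extra) => makeLogger({ ...bindings, ...extra }),
  };
}

// The equivalent of what you would do inside the onRequest hook:
const evaluated = { 'new-pricing-ui': true, 'semantic-search': false }; // from request.flags.isEnabled()
const requestLog = makeLogger().child({ flags: evaluated });
requestLog.info('handled /api/experiments'); // this line now carries the flag values
```

With real Fastify, replace `makeLogger()` with `request.log` inside the hook; the binding pattern is the same.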
Gradual Rollouts and User Targeting
Once flags are wired, the real value kicks in: turning a feature on for 1% of traffic, watching error rates for an hour, then bumping it to 10% the next day. Rollgate handles this with sticky, deterministic bucketing — the same user always lands in the same bucket, so a user who sees the new feature at 5% keeps seeing it when you move to 50%.
You do not need to change your Node.js code when you change the rollout percentage. The rules live in Rollgate's dashboard; your SDK pulls the new rules and evaluates them locally.
// Your code stays the same whether the flag is at 1% or 100%
const showNewFlow = rollgate.isEnabled('checkout-v2', false, {
userId: user.id,
attributes: {
plan: user.plan,
region: user.region,
signupDate: user.signupDate,
},
});
The attributes you pass feed into targeting rules. A common pattern for B2B SaaS: enable a feature for all plan = "enterprise" users plus 10% of plan = "pro" users, with no rollout for free-tier. That is three rules in the dashboard, zero code changes. See our gradual rollouts guide for rollout strategies that do not break production.
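If you are curious how sticky bucketing stays deterministic, here is a sketch of the standard approach. This is illustrative, not Rollgate's actual algorithm: hash the flag key plus the user ID to a stable bucket from 0 to 99, then compare against the rollout percentage. The same user always hashes to the same bucket, and a user inside the 5% cut stays inside the 50% cut.

```typescript
import { createHash } from 'node:crypto';

// Deterministic bucket in [0, 99] for a (flag, user) pair.
// Keying by flag means a user's bucket differs per flag, so one user
// is not stuck in (or out of) every experiment at once.
function bucket(flagKey: string, userId: string): number {
  const digest = createHash('sha256').update(`${flagKey}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100; // first 4 bytes -> 0..99
}

function inRollout(flagKey: string, userId: string, percent: number): boolean {
  return bucket(flagKey, userId) < percent;
}
```

Because the bucket is a pure function of the inputs, raising the percentage only ever adds users; nobody flaps between variants as the rollout grows.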
If you are using flags for experimentation rather than safe releases, pair them with event tracking. The Node SDK exposes client.track() for conversion events, which plugs into A/B testing with feature flags workflows.
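A sketch of pairing the flag check with a conversion event. The guide says the SDK exposes `client.track()`; the `(eventName, context)` signature used here is an assumption, so check the SDK reference. `FakeClient` stands in for the real client so the sketch is self-contained.

```typescript
// Minimal client interface matching the calls used in this guide.
// The track() signature is an assumption, not the documented API.
interface FlagClient {
  isEnabled(key: string, fallback: boolean, ctx?: { userId: string }): boolean;
  track(event: string, ctx: { userId: string }): void;
}

const events: Array<{ event: string; userId: string }> = [];
const client: FlagClient = {
  isEnabled: () => true, // pretend the user landed in the experiment bucket
  track: (event, ctx) => events.push({ event, userId: ctx.userId }),
};

function completeCheckout(userId: string): string {
  const inNewFlow = client.isEnabled('checkout-v2', false, { userId });
  // ...run whichever checkout flow...
  client.track('checkout_completed', { userId }); // conversion event for the experiment
  return inNewFlow ? 'v2' : 'v1';
}
```

Tracking the conversion for both variants (not just the new one) is what lets the experiment compare them.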
Production Considerations
A flag system sits in the hot path of every request. That changes how you think about it.
Caching and evaluation mode. The SDK evaluates locally by default — rules are cached in memory and refreshed via SSE or polling. There is no network call on each isEnabled(). In a Node.js process with a hot path that evaluates flags thousands of times per second, this matters: network-dependent flag checks would wreck your P99.
Resilience. The SDK ships with a circuit breaker, retry-with-backoff, and a stale cache fallback. If the Rollgate API becomes unreachable, your service keeps serving flag evaluations using the last known rules — it does not hard-fail. You can subscribe to circuit-open and flags-stale events to surface this in your own monitoring:
rollgate.on('circuit-open', () => {
  metrics.increment('rollgate.circuit.open');
});

rollgate.on('flags-stale', () => {
  metrics.increment('rollgate.flags.stale');
});
Kill switches in production. Wrap risky code paths — a new payment provider, a rewritten algorithm, an external API integration — in a flag you can flip instantly. When something breaks, you want the shortest possible path from "we are paging" to "traffic is back on the old code." A flag flip takes under a second; a rollback deploy takes tens of minutes.
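One hedged pattern for wrapping those risky paths: a small helper that takes the flag value plus two code paths, and also falls back if the risky path throws. The names are illustrative; the flag flip in the dashboard is still your primary kill switch, and the catch is just belt-and-braces.

```typescript
// Illustrative kill-switch wrapper. `flagOn` is the evaluated flag value;
// `risky` is the new code path, `fallback` the proven one.
async function withKillSwitch<T>(
  flagOn: boolean,
  risky: () => Promise<T>,
  fallback: () => Promise<T>,
): Promise<T> {
  if (!flagOn) return fallback();
  try {
    return await risky();
  } catch (err) {
    // Log and serve the old path; flipping the flag off stops
    // traffic from reaching risky() at all.
    console.error('risky path failed, serving fallback', err);
    return fallback();
  }
}
```

Usage would look something like `await withKillSwitch(rollgate.isEnabled('new-payment-provider'), chargeWithNewProvider, chargeWithLegacy)`, where the two charge functions are whatever your payment integration already has.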
Process lifecycle. Always call rollgate.close() on shutdown. It closes the SSE connection, flushes pending telemetry, and lets Kubernetes or your PaaS roll pods cleanly. Skipping this leaks file descriptors and loses the last batch of evaluation analytics.
One client per process, not per request. The SDK is thread-safe and reusable. Do not instantiate a new RollgateClient per request — you will hit the API hard, leak connections, and lose the benefit of local caching. One long-lived client at app start, shut down on SIGTERM.
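The usual way to enforce "one client per process" is module-level memoization. In this sketch `createClient` stands in for `new RollgateClient(...)` followed by `init()`, so the pattern is runnable on its own; in a real app you would import the getter everywhere instead of constructing clients.

```typescript
// flags-singleton sketch: every caller shares one lazily created client.
type Client = {
  isEnabled: (key: string, fallback?: boolean) => boolean;
  close: () => Promise<void>;
};

let instance: Client | null = null;
let constructed = 0; // for illustration: proves construction happens once

function createClient(): Client {
  constructed += 1;
  // Stand-in for `new RollgateClient({...})` + `await client.init()`
  return { isEnabled: (_key, fallback = false) => fallback, close: async () => {} };
}

function getClient(): Client {
  // Module-level memoization: first call constructs, later calls reuse
  if (!instance) instance = createClient();
  return instance;
}
```

In Node's module system, importing this module from ten files still gives you one `instance`, because modules are cached after first load.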
Best Practices
- Name flags by feature, not by team. new-checkout ages better than backend-team-q2-project. Future you will thank present you.
- Always pass a sensible default. rollgate.isEnabled('feature', false) — false is usually the safe default (do not ship the new thing if we cannot decide). Explicit is better than surprising.
- Retire flags. Once a rollout hits 100% and has been stable for a week, remove the flag from code. Zombie flags are a maintenance tax; we have written about why this matters.
- Log the evaluated value for high-stakes flags. If isEnabled('new-payment-provider') returned true for a user whose transaction failed, you want that in the log line, not inferred from the timestamp.
- Separate experimentation flags from release flags. A kill switch for production should not expire when an experiment wraps up. Use different naming prefixes so they are easy to tell apart.
Frequently Asked Questions
Does Rollgate work with NestJS, Koa, or Hapi?
Yes — the Node.js SDK is framework-agnostic. The patterns shown for Express and Fastify apply identically: instantiate one RollgateClient at app start, attach flag evaluation to the request context via middleware (Express, Koa), interceptor (NestJS), or decorator (Fastify, Hapi). Shut down on SIGTERM.
How does flag evaluation perform under load?
Evaluation is local and in-memory — no network call per isEnabled() check. You can safely evaluate thousands of flags per request without measurable overhead. The SDK keeps rules fresh via SSE streaming (~50ms after a flag flip) or polling (30s default), depending on your configuration.
What happens if the Rollgate API is unreachable?
The SDK ships with a circuit breaker and a stale cache fallback. If fetching new rules fails, your service keeps evaluating against the last known rules — it does not hard-fail. You can subscribe to circuit-open and flags-stale events to surface this in your own monitoring.
Can I use feature flags in a worker or queue consumer?
Yes. Initialize one RollgateClient at worker start, evaluate inside the job handler with the per-job user context, and close the client on shutdown. The same "one client per process" rule applies — do not instantiate per job.
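A sketch of that worker pattern, kept queue-agnostic. `Job`, the flag name, and the handler are all illustrative; the evaluation function is injected so the same shape works with the Rollgate client or anything else.

```typescript
// Illustrative job shape; your queue library defines the real one.
type Job = { userId: string; payload: unknown };

// The isEnabled function is injected once at worker start, then reused
// for every job; only the per-job user context changes.
function makeHandler(
  isEnabled: (key: string, fallback: boolean, ctx: { userId: string }) => boolean,
) {
  return function handleJob(job: Job): string {
    return isEnabled('new-email-renderer', false, { userId: job.userId })
      ? 'render-v2'
      : 'render-v1';
  };
}
```

At worker start you would call `makeHandler` once with `rollgate.isEnabled.bind(rollgate)` (or an equivalent wrapper) and register the result with your queue consumer, closing the client on shutdown as described above.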
Should I use polling or SSE streaming?
Use enableStreaming: true for real-time apps where flag flips need to propagate in under a second (user-facing features, kill switches). Use polling (refreshInterval: 30000) for simpler deployments or serverless environments where long-lived connections are awkward. Polling is the default.
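A minimal config sketch of the polling mode, assuming the refreshInterval option named here takes milliseconds; check the SDK reference for the exact option names and defaults.

```typescript
import { RollgateClient } from '@rollgate/sdk-node';

// Polling configuration: no long-lived SSE connection, rules re-fetched
// every 30 seconds. Suited to serverless or restrictive network setups.
const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: false,
  refreshInterval: 30_000, // milliseconds (assumed unit)
});
```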
Is there a performance cost to adding the SDK?
The SDK is about 2KB gzipped and has no runtime dependencies beyond eventsource (for SSE). Memory footprint is O(number of flags) — negligible for typical workloads. Startup adds one HTTP request to fetch the initial ruleset; subsequent evaluations are pure memory lookups.
Next Steps
Feature flags in Node.js are a small change with an outsized impact. You stop shipping features and start shipping code, which means faster deploys, safer releases, and a rollback story that takes seconds instead of a pager rotation.
The Rollgate Node.js SDK is open source, 2KB gzipped, and works identically in Express, Fastify, Koa, NestJS, and any other Node.js framework. Local evaluation, SSE streaming, circuit breaker, and kill switches come in the box. You can start free — 500K evaluations per month, no credit card.
Pair this with the frontend: if your stack is React or Next.js, the same flags work through the React SDK and Next.js guide — same dashboard, same targeting rules, same audit trail.
If you are coming from another language or framework, see our guides for Go and Python. The Node.js ecosystem page at nodejs.org covers the runtime fundamentals the SDK relies on.