Building for scale: Structuring your microfrontends
- Microfrontends
- Rspack
- Module Federation
- Turborepo
- React
- TypeScript
- pnpm
- Zustand
When you're the only person working on a frontend, keeping things coupled is fine. You defined the conventions, you know where everything is, and you can evolve the architecture at will. But once you're working with a team, that coupling turns into a ball and chain. You can't change the architecture without aligning (using the term loosely) with everyone else who touches the code.
Picture an e-commerce platform where each major domain belongs to a different team. Here's what coupling buys you at that scale: a change in the cart breaks the nav, a deploy of the checkout service blocks the products team, and nobody ships without coordinating with somebody else.
Microfrontends are one answer. This post walks through building one from scratch, including the issues I faced and how I designed around them.
What we're building
An e-commerce app split into independently deployable microfrontends:
- Shell: the host app (layout, nav, routing)
- Products: product catalog
- Cart: shopping cart
- Checkout: checkout page
Each is a separate React app, built with Rspack, composed at runtime via Module Federation, and orchestrated in a Turborepo monorepo.
Setup
Choosing a Bundler
The first decision is the bundler. Module Federation is the runtime composition mechanism (explained below). It lets independently built apps share code at runtime. Bundler support varies: some ship it natively, others need a plugin.
The three contenders:
- Webpack 5: Module Federation was invented here. Mature plugin API and well-documented, but builds are slow.
- Vite: Fast dev server, but Module Federation support comes via vite-plugin-federation, a community plugin that's less mature. It works, but you'll likely find gaps on less common configurations.
- Rspack: Rust-based, API-compatible with Webpack. Module Federation ships natively in @rspack/core, no extra plugin needed. Build times are significantly faster than Webpack (I saw ~200ms per app), and the configuration is nearly identical.
I went with Rspack for four reasons:
- Native Module Federation support: ships with @rspack/core, zero extra dependencies
- Fast builds: Rust-based, ~200ms per microfrontend
- Webpack-compatible config: existing configs mostly transfer without changes
- Backed by ByteDance: actively developed, not a hobby project
In practice, each rspack config uses rspack.container.ModuleFederationPlugin directly from @rspack/core. No @module-federation/enhanced package, no extra wiring. The same plugin name and config shape you'd write for Webpack:
// rspack.config.ts
import { rspack } from "@rspack/core";
export default {
plugins: [
new rspack.container.ModuleFederationPlugin({
name: "products",
filename: "remoteEntry.js",
exposes: { "./ProductsApp": "./src/App.tsx" },
shared: ["react", "react-dom"],
}),
],
};
Scaffolding the Monorepo
I used Turborepo with pnpm workspaces to manage the monorepo.
pnpm workspaces is pnpm's built-in monorepo support. A single pnpm-workspace.yaml file at the root tells pnpm which directories contain packages. It then hoists shared dependencies to the root node_modules, symlinks workspace packages to each other, and lets you reference local packages with the workspace:* protocol in package.json. I'm using it because it's fast, disk-efficient (content-addressed store), and handles workspace dependency graphs without extra tooling.
# pnpm-workspace.yaml
packages:
- "apps/*"
- "packages/*"
// apps/shell/package.json — depends on a local workspace package
{
"dependencies": {
"@mfe/event-bus": "workspace:*"
}
}
Turborepo is a task runner that sits on top of the workspace. It reads each package's package.json scripts, builds a dependency graph from your task definitions, and runs them in the correct order — in parallel where possible — with content-addressed caching so unchanged work is skipped on subsequent runs. I'm using it because microfrontends naturally produce many independent build/dev/test tasks, and Turbo lets you run them as a single coordinated command (pnpm dev boots all four servers; pnpm build builds everything in dependency order).
Repo structure
microfrontend-example/
├── apps/
│ ├── shell/ # Host, port 3000
│ ├── products/ # Remote, port 3001
│ ├── cart/ # Remote, port 3002
│ └── checkout/ # Remote, port 3003
├── packages/
│ └── shared-lib/ # Shared types, event bus, store
├── turbo.json
├── pnpm-workspace.yaml
└── package.json
Configuring Federation
Turbo pipeline: turbo.json defines task dependencies. The critical one: dev depends on ^build, meaning shared packages build before any app starts its dev server. In Turborepo, the ^ prefix means "run this task in dependencies first" (it's a Turbo convention, not a semver caret).
{
"tasks": {
"dev": {
"dependsOn": ["^build"],
"persistent": true
}
}
}
Rspack configs: each remote exposes a component and declares shared singletons.
A singleton in Module Federation means: across all federated apps loaded into the page, only one instance of this dependency is allowed to run. Without singleton: true, each remote that bundles its own copy of, say, React would load that copy at runtime, and you'd end up with two React instances on the same page. React uses module-level identity internally, so two components each importing their own React are playing by different rules: hooks throw "Invalid hook call", context providers don't propagate across remotes, and instanceof checks silently fail.
When to mark a dependency singleton: true
- It uses module-level state or identity. React (hook dispatcher), React Router (history singleton), Zustand stores, any pub/sub bus, i18n instances. Two copies = two disconnected worlds.
- It has a peer-dependency relationship across remotes. If remote A passes a React element to remote B, both must be using the same React.
- You want to upgrade it in one place. Singleton dependencies are negotiated at runtime: the highest compatible version wins, and everyone uses it.
What not to mark singleton: leaf utilities with no shared state (lodash, date-fns, classnames). Duplicating these across remotes is harmless and avoids version-negotiation overhead.
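Concretely, the two categories look like this in a shared map. A sketch; the lodash entry is an assumed example, not a dependency of this repo:

```typescript
shared: {
  // Module-level identity or state: exactly one copy may run on the page
  react: { singleton: true, eager: false },
  "react-dom": { singleton: true, eager: false },
  // Stateless leaf utility (assumed example): duplicates across remotes are harmless
  lodash: {},
},
```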
// Remote (e.g., Products)
new rspack.container.ModuleFederationPlugin({
name: "products",
filename: "remoteEntry.js",
exposes: {
"./ProductsApp": "./src/App.tsx",
},
shared: {
react: { singleton: true, eager: false },
"react-dom": { singleton: true, eager: false },
},
});
The shell (host) declares remotes:
// Host (Shell)
new rspack.container.ModuleFederationPlugin({
name: "shell",
remotes: {
products: "products@http://localhost:3001/remoteEntry.js",
cart: "cart@http://localhost:3002/remoteEntry.js",
checkout: "checkout@http://localhost:3003/remoteEntry.js",
},
shared: {
react: { singleton: true, eager: false },
"react-dom": { singleton: true, eager: false },
},
});
The async boundary: every app (host and remotes) needs a two-file entry point:
// src/index.tsx — the *real* entry. Does nothing but defer to bootstrap.
import("./bootstrap");
// src/bootstrap.tsx — the actual React mount.
import { createRoot } from "react-dom/client";
import App from "./App";
createRoot(document.getElementById("root")!).render(<App />);
Why this pattern is mandatory:
When the browser loads index.tsx, Module Federation's runtime hasn't yet finished negotiating which copy of each shared dependency (React, React DOM, etc.) the page should use. That negotiation happens asynchronously: the host has to fetch each remote's remoteEntry.js, read its declared shared config, and pick a single winning version per package.
If index.tsx synchronously did import React from "react", that import would resolve before federation finishes negotiating. The local copy would win, and any remote loaded later would either crash or end up with its own duplicate React.
The import("./bootstrap") is a dynamic import. Webpack/Rspack splits everything reachable from bootstrap.tsx into a separate chunk that's only fetched after the dynamic import is evaluated, i.e., after federation has had a tick to negotiate shared modules. That's the async gap federation needs. Every React import and component import lives behind that gap. Every app in this repo uses this two-file entry pattern.
Don't try to fix startup by setting eager: true
Each entry in the shared config has an eager flag. eager: false (the default) means "load this dependency lazily, after federation's async negotiation". eager: true means "bundle it into the entry chunk and load it synchronously".
It's tempting to set eager: true on React in the shell, reasoning: "the host always needs React, so let's just inline it and skip the async dance." When I tried this, the app crashed at startup with TypeError: factory is undefined. The federation runtime expected to negotiate a shared module that had already been resolved synchronously, and its internal lookup table was empty when remotes asked for it.
Rule: keep eager: false everywhere and rely on the import("./bootstrap") async boundary. The only legitimate use for eager: true is when you have no dynamic boundary at all (e.g., a host that ships React inline and doesn't share it with remotes), which defeats the purpose in a federated setup.
pnpm + requiredVersion: false: when you list a package in the shared config without specifying a version, Module Federation auto-reads requiredVersion from your package.json so it can warn at runtime if a remote brings an incompatible version. That's helpful for external npm packages.
It backfires for workspace packages. pnpm uses the workspace:* protocol to pin a dependency to whatever version is currently in the monorepo:
// apps/shell/package.json
{
"dependencies": {
"@mfe/event-bus": "workspace:*" // not valid semver
}
}
workspace:* isn't a valid semver range, so federation's version check logs a runtime warning every time the page loads. The fix is to explicitly tell federation not to check the version for that entry:
shared: {
"@mfe/event-bus": {
singleton: true,
eager: false,
requiredVersion: false, // skip the version check entirely
},
}
Choosing a requiredVersion strategy for workspace packages
Three options, in increasing order of strictness:
1. requiredVersion: false: no version negotiation; the first loaded copy wins. Right for a single repo where you control all consumers.
2. requiredVersion: "1.0.0" (hardcoded): federation checks that the loaded version matches, but you have to hand-bump the string in every consumer's rspack config on every release. Fragile.
3. Replace workspace:* with a real version range (e.g., "^1.0.0") and let federation auto-read it. Deterministic, but it requires publishing the package to a registry so pnpm can resolve a real version.
For a single repo with a single deployment, option 1 is the pragmatic choice. If you split repos later, move to option 3.
Minimum Working Setup
Here's the minimum scaffold to get a host + one remote running. The files below go into the locations shown in the repo structure above.
Root configuration
The root package.json defines workspace-wide scripts that delegate to Turborepo. pnpm dev and pnpm build are the only commands you'll typically run from the repo root.
// package.json (root)
{
"name": "mfe-example",
"private": true,
"scripts": {
"dev": "turbo run dev",
"build": "turbo run build"
},
"devDependencies": {
"turbo": "^2.0.0"
},
"packageManager": "pnpm@9.0.0"
}
pnpm-workspace.yaml tells pnpm which folders contain packages. Anything under apps/ or packages/ is treated as a workspace package.
# pnpm-workspace.yaml
packages:
- "apps/*"
- "packages/*"
turbo.json defines the task graph. The key bit is dev: { dependsOn: ["^build"] }: shared packages compile before any app's dev server boots, so federation has its dependencies ready when the page loads.
// turbo.json
{
"$schema": "https://turbo.build/schema.json",
"tasks": {
"build": { "outputs": ["dist/**"] },
"dev": { "dependsOn": ["^build"], "persistent": true, "cache": false }
}
}
Host (shell) rspack config
The shell declares which remotes it consumes and which dependencies it shares with them. Note publicPath: "auto": federation needs this so each remote's remoteEntry.js can resolve its own asset URLs at runtime regardless of where the shell is hosted.
// apps/shell/rspack.config.ts — the host
import { defineConfig } from "@rspack/cli";
import { rspack } from "@rspack/core";
export default defineConfig({
entry: "./src/index.tsx",
output: { publicPath: "auto", uniqueName: "shell" },
devServer: { port: 3000, historyApiFallback: true },
resolve: { extensions: [".tsx", ".ts", ".js"] },
module: {
rules: [
{
test: /\.tsx?$/,
use: {
loader: "builtin:swc-loader",
options: {
jsc: { parser: { syntax: "typescript", tsx: true }, transform: { react: { runtime: "automatic" } } },
},
},
},
],
},
plugins: [
new rspack.container.ModuleFederationPlugin({
name: "shell",
remotes: {
products: "products@http://localhost:3001/remoteEntry.js",
},
shared: {
react: { singleton: true, eager: false },
"react-dom": { singleton: true, eager: false },
},
}),
new rspack.HtmlRspackPlugin({ template: "./src/index.html" }),
],
});
Remote (products) rspack config
The remote is almost identical to the shell, but it exposes a component instead of consuming any. The module and resolve keys are unchanged from the shell; copy them across.
// apps/products/rspack.config.ts — a remote
import { defineConfig } from "@rspack/cli";
import { rspack } from "@rspack/core";
export default defineConfig({
entry: "./src/index.tsx",
output: { publicPath: "auto", uniqueName: "products" },
devServer: { port: 3001 },
resolve: { extensions: [".tsx", ".ts", ".js"] },
module: {
rules: [
{
test: /\.tsx?$/,
use: {
loader: "builtin:swc-loader",
options: {
jsc: { parser: { syntax: "typescript", tsx: true }, transform: { react: { runtime: "automatic" } } },
},
},
},
],
},
plugins: [
new rspack.container.ModuleFederationPlugin({
name: "products",
filename: "remoteEntry.js",
exposes: { "./ProductsApp": "./src/App.tsx" },
shared: {
react: { singleton: true, eager: false },
"react-dom": { singleton: true, eager: false },
},
}),
new rspack.HtmlRspackPlugin({ template: "./src/index.html" }),
],
});
App entry points
Each app needs an HTML template. HtmlRspackPlugin injects the <script> tag automatically based on the entry config, so no manual tag is needed.
<!-- apps/shell/src/index.html (and identical for each remote) -->
<!DOCTYPE html>
<html lang="en">
<body>
<div id="root"></div>
</body>
</html>
The shell's React entry uses React.lazy() to consume the remote. Wrapping it in <Suspense> is required because the remote chunk is fetched on first render.
// apps/shell/src/App.tsx — host consumes the remote
import { lazy, Suspense } from "react";
const ProductsApp = lazy(() => import("products/ProductsApp"));
export default function App() {
return (
<Suspense fallback={<div>Loading…</div>}>
<ProductsApp />
</Suspense>
);
}
TypeScript declarations for remotes
TypeScript doesn't know about modules that don't exist on disk at compile time, so each remote needs an ambient declaration. Place the file under src/types/ and make sure tsconfig.json has "include": ["src"] so TypeScript picks it up.
// apps/shell/src/types/remotes.d.ts — TypeScript needs to know remote modules exist
declare module "products/ProductsApp" {
const Component: React.ComponentType;
export default Component;
}
Running it
Run pnpm install then pnpm dev and you'll have the shell on :3000 consuming the products remote on :3001.
Reference implementation: the full source for this article (all four apps, domain APIs, the event bus, and deployment headers) lives in the reference implementation repo. Code snippets in this post are simplified for clarity; refer to the repo for the rest of the code (CORS headers, dynamic remote URLs for staging/prod, etc.).
At this point, pnpm dev starts all four servers, the shell loads remotes via React.lazy(), and navigation between Products/Cart/Checkout works. The apps are composed at runtime. But they can't talk to each other yet.
Cross-Domain Communication
The First Approach: Event Bus + Shared Store
The first approach I tried was a shared event bus combined with a global store, both living in a @mfe/shared-lib package.
Event Bus (Pub/Sub): a singleton pub/sub system for fire-and-forget messages. The implementation is small: a Map of event names to listener sets, with emit and on methods, and on returns the unsubscribe function:
// packages/event-bus/src/index.ts
type Listener<T = unknown> = (payload: T) => void;
class EventBus {
private listeners = new Map<string, Set<Listener>>();
emit<T>(event: string, payload: T): void {
this.listeners.get(event)?.forEach((fn) => fn(payload));
}
on<T>(event: string, listener: Listener<T>): () => void {
if (!this.listeners.has(event)) this.listeners.set(event, new Set());
this.listeners.get(event)!.add(listener as Listener);
return () => this.listeners.get(event)?.delete(listener as Listener);
}
}
export const eventBus = new EventBus();
export const Events = {
ADD_TO_CART: "ADD_TO_CART",
CHECKOUT_COMPLETE: "CHECKOUT_COMPLETE",
NAVIGATE: "NAVIGATE",
} as const;
Usage:
// Products emits
eventBus.emit(Events.ADD_TO_CART, { product, quantity: 1 });
// Cart listens
eventBus.on(Events.ADD_TO_CART, (payload) => {
// update the cart in the shared store
});
Shared Store: a lightweight reactive store (Zustand-like API) for shared state:
// Anyone can read
const items = sharedStore.getState().cartItems;
// Anyone can write
sharedStore.setState((prev) => ({
cartItems: [...prev.cartItems, newItem],
}));
// Cart subscribes to the store and triggers a local re-render on every change
sharedStore.subscribe(() => {
setItems([...sharedStore.getState().cartItems]);
});
The idea: Products emits an event when the user clicks "Add to Cart". Cart listens for that event and updates the shared store. The store notifies subscribers, triggering re-renders.
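For reference, a store with this API fits in a few lines. Here's a minimal sketch of a Zustand-like vanilla store; it's illustrative, not the exact shared-lib implementation:

```typescript
// Minimal Zustand-like vanilla store: getState/setState/subscribe, no React.
type StoreListener = () => void;

export function createSharedStore<T extends object>(initial: T) {
  let state = initial;
  const listeners = new Set<StoreListener>();

  return {
    getState: (): T => state,
    setState(partial: Partial<T> | ((prev: T) => Partial<T>)): void {
      const next =
        typeof partial === "function"
          ? (partial as (prev: T) => Partial<T>)(state)
          : partial;
      state = { ...state, ...next }; // shallow merge, like Zustand
      listeners.forEach((fn) => fn()); // notify every subscriber after the write
    },
    subscribe(listener: StoreListener): () => void {
      listeners.add(listener);
      return () => {
        listeners.delete(listener); // returned function unsubscribes
      };
    },
  };
}
```

The property that matters for what follows: state lives in a module-level closure, so reads and writes work whether or not any React component is mounted.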
This seemed clean: events for decoupled communication, store for reactive state. But it had a fundamental flaw.
The Unmounted Listener Problem
I click "Add to Cart" on the Products page. Nothing happens. Navigate to Cart: it's empty. The item was lost.
The Cart component subscribed to ADD_TO_CART events inside a useEffect:
// Cart's App.tsx
useEffect(() => {
const unsub = eventBus.on(Events.ADD_TO_CART, (payload) => {
sharedStore.setState(/* ... */);
});
return () => unsub();
}, []);
But Cart only mounts when the user navigates to /cart. When the user is on the Products page (/), CartApp isn't rendered, useEffect hasn't run, and no listener exists. The event fires into the void.
Events are transient in a lazily-loaded architecture
If no listener is active when an event fires, the event is gone. In a microfrontend setup where remotes only mount
when their route is active, any remote that subscribes inside a useEffect will miss every event fired while it isn't
rendered.
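The failure is mechanical and easy to reproduce with the bus alone. This standalone sketch duplicates the EventBus from earlier so it runs by itself:

```typescript
// Standalone reproduction of the unmounted-listener problem.
// Same EventBus shape as above, duplicated here so the snippet is self-contained.
type Listener<T = unknown> = (payload: T) => void;

class EventBus {
  private listeners = new Map<string, Set<Listener>>();
  emit<T>(event: string, payload: T): void {
    this.listeners.get(event)?.forEach((fn) => fn(payload));
  }
  on<T>(event: string, listener: Listener<T>): () => void {
    if (!this.listeners.has(event)) this.listeners.set(event, new Set());
    this.listeners.get(event)!.add(listener as Listener);
    return () => this.listeners.get(event)?.delete(listener as Listener);
  }
}

const bus = new EventBus();
const received: Array<{ productId: string }> = [];

// 1. User is on /products; Cart is not mounted. This event fires into the void.
bus.emit("ADD_TO_CART", { productId: "abc" });

// 2. User navigates to /cart; only now does Cart's useEffect register a listener.
bus.on<{ productId: string }>("ADD_TO_CART", (p) => received.push(p));

// 3. Later events are delivered; the first one is gone for good.
bus.emit("ADD_TO_CART", { productId: "def" });
```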
Moving Listeners to the Shell
The shell is always mounted. If the event listener lives there instead of in Cart, it's always active:
// Shell's Layout component (always mounted)
useEffect(() => {
const unsub = eventBus.on(Events.ADD_TO_CART, (payload) => {
sharedStore.setState((prev) => ({
cartItems: [...prev.cartItems, payload],
}));
});
return () => unsub();
}, []);
Cart was simplified to just read from the store:
// Cart just subscribes to the store
useEffect(() => {
return sharedStore.subscribe(() => {
setItems([...sharedStore.getState().cartItems]);
});
}, []);
This worked. Click "Add to Cart" on Products → the shell's listener fires → store updates → navigate to Cart → Cart reads the store → items are there.
But I had created a new problem.
Why That Makes Things Worse
The shell was meant to be a thin coordinator (just layout and routing). Now it contained cart business logic: it knew about ADD_TO_CART events, the cartItems array shape, how to merge items, and the store's internal structure. This means:
- If the Cart team changes the cart data model (e.g., renames CartItem.quantity to CartItem.qty), the shell breaks. And so does Checkout, which reads sharedStore.getState().cartItems.
- The shell has to know about every domain. If I add a Wishlist microfrontend, the shell needs a new event listener, new store fields, and new business logic. The thin coordinator becomes a monolith.
- The shared store schema couples everyone. Every team depends on SharedState:
interface SharedState {
cartItems: CartItem[];
user: User | null;
}
Adding a field requires updating the interface in shared-lib. Changing a field is a breaking change for all consumers. This is the opposite of independent deployability.
- Business logic lives in the wrong place. The Cart team should own "how to add an item to the cart", not the Platform team that maintains the shell.
I needed a way to keep the shell dumb while making sure cross-domain events couldn't silently drop.
The Domain API Pattern
Instead of a shared store that every team writes to, each domain owns its own state behind a public API. The API is the contract. The internal state is an implementation detail that only the owning team can change.
I replaced the monolithic @mfe/shared-lib with per-domain packages:
Package layout after the split
packages/
shared-lib/ ← REMOVED
cart-api/ ← NEW: Cart team owns this
user-api/ ← NEW: Auth team owns this
event-bus/ ← NEW: Just the pub/sub mechanism
How It Works
The Cart team owns @mfe/cart-api. It encapsulates state and exposes only public methods:
// packages/cart-api/src/api.ts
// INTERNAL — Cart team can change freely. Extra fields like addedAt
// and variant get added later (see "The Contract Boundary" below).
interface CartItemInternal {
productId: string;
name: string;
price: number;
quantity: number;
}
// PUBLIC — This is the contract
export interface CartItemView {
productId: string;
name: string;
price: number;
quantity: number;
}
// The public surface, as a type: method signatures only
export interface CartApi {
addItem(productId: string, name: string, price: number, quantity?: number): void;
removeItem(productId: string): void;
getItems(): CartItemView[];
getCount(): number;
getTotal(): number;
clear(): void;
subscribe(listener: () => void): () => void;
}
Internally, cart-api uses a Zustand vanilla store (createStore from zustand/vanilla) to manage state. Consumers never see this; they interact through the cartApi object:
// packages/cart-api/src/api.ts (implementation)
import { createStore } from "zustand/vanilla";
const store = createStore<{ items: CartItemInternal[] }>(() => ({ items: [] }));
export const cartApi = {
addItem(productId: string, name: string, price: number, quantity = 1): void {
store.setState((prev) => ({
items: [...prev.items, { productId, name, price, quantity }],
}));
},
removeItem(productId: string): void {
store.setState((prev) => ({
items: prev.items.filter((i) => i.productId !== productId),
}));
},
getItems(): CartItemView[] {
return store.getState().items.map(toView);
},
getCount(): number {
return store.getState().items.length;
},
getTotal(): number {
return store.getState().items.reduce((sum, i) => sum + i.price * i.quantity, 0);
},
clear(): void {
store.setState({ items: [] });
},
subscribe(listener: () => void): () => void {
return store.subscribe(listener);
},
};
This solves the lifecycle problem from the previous section. Domain API packages are Module Federation singletons: they're loaded before any remote and their state lives at the module level, not inside any React component. There's no useEffect to miss, no listener waiting to be registered.
The "add to cart from Products, then navigate to Cart" flow now works without any cross-mount coordination:
- On the Products page, the user clicks "Add to Cart". Products calls cartApi.addItem("abc", "Widget", 9.99). cart-api updates its internal Zustand store immediately. The state lives at module level, so no React component needs to be mounted to receive the write.
- The user navigates to /cart. Cart mounts for the first time and calls cartApi.getItems(). The item is already there.
- Cart calls cartApi.subscribe(...) so it re-renders on any future changes.
The Contract Boundary
The internal/public split in cart-api defines what the Cart team can change without coordinating with anyone else. Internal fields are invisible to consumers; public types are the contract:
// INTERNAL — change freely, no one sees this
interface CartItemInternal {
productId: string;
name: string;
price: number;
quantity: number;
addedAt: number; // Cart team added this — no one else knows
variant?: string; // Cart team added this too
}
// PUBLIC — changing this = major version bump
export interface CartItemView {
productId: string;
name: string;
price: number;
quantity: number;
}
// The toView() function maps internal → public, shielding consumers
function toView(item: CartItemInternal): CartItemView {
return {
productId: item.productId,
name: item.name,
price: item.price,
quantity: item.quantity,
};
}
The semver implications fall out cleanly:
| Change | Semver | Coordination needed? |
|---|---|---|
| Restructure internal storage | Patch | No (API surface unchanged) |
| Add a new method | Minor | No (additive, backwards compatible) |
| Add optional fields to a public type | Minor | No (existing code still works) |
| Rename/remove a method | Major | Yes (consuming teams must update) |
| Change a public type's existing fields | Major | Yes (consuming teams must update) |
How This Changes Each App
Products calls cartApi.addItem() directly. No events, no store knowledge:
// Before (event-based)
eventBus.emit(Events.ADD_TO_CART, { product, quantity: 1 });
// After (domain API)
cartApi.addItem(product.id, product.name, product.price);
Products passes primitives; it doesn't know about CartItem, CartItemView, or how the cart stores data.
Cart reads from cartApi:
const [items, setItems] = useState(cartApi.getItems());
useEffect(() => {
return cartApi.subscribe(() => {
setItems(cartApi.getItems());
});
}, []);
Shell becomes completely dumb. No event listeners, no state management, no business logic. It just renders routes and mounts widgets:
// Before: Shell had event listeners and store updates
useEffect(() => {
const unsub = eventBus.on(Events.ADD_TO_CART, (payload) => {
sharedStore.setState(/* cart logic here */);
});
return () => unsub();
}, []);
// After: Shell is just layout
function Layout() {
return (
<div>
<nav>...</nav>
<Suspense fallback={<div>Loading...</div>}>
<Routes>
<Route path="/*" element={<ProductsApp />} />
<Route path="/cart/*" element={<CartApp />} />
<Route path="/checkout/*" element={<CheckoutApp />} />
</Routes>
</Suspense>
</div>
);
}
Checkout reads totals and clears the cart through the API:
const items = cartApi.getItems();
const total = cartApi.getTotal();
const handleSubmit = () => {
cartApi.clear();
eventBus.emit(Events.CHECKOUT_COMPLETE, { name, email, total });
};
Domains Without UI
Not every domain needs a remote app. The User/Auth domain is purely a state slice:
// packages/user-api: the public surface
export interface UserApi {
getUser(): User | null;
setUser(user: User): void;
logout(): void;
isAuthenticated(): boolean;
subscribe(listener: () => void): () => void;
}
No apps/user, no remoteEntry.js, no port. Any MFE can call userApi.isAuthenticated(). The Auth team owns this package and evolves it independently.
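A dependency-free sketch of what such a package might contain; the User shape and field names are my assumptions, not the repo's actual types:

```typescript
// packages/user-api: hypothetical implementation sketch (User shape assumed).
export interface User {
  id: string;
  name: string;
}

let user: User | null = null; // module-level state: no UI needs to be mounted
const listeners = new Set<() => void>();
const notify = () => listeners.forEach((fn) => fn());

export const userApi = {
  getUser: (): User | null => user,
  setUser(next: User): void {
    user = next;
    notify();
  },
  logout(): void {
    user = null;
    notify();
  },
  isAuthenticated: (): boolean => user !== null,
  subscribe(listener: () => void): () => void {
    listeners.add(listener);
    return () => {
      listeners.delete(listener);
    };
  },
};
```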
Exposed Widgets
When a domain needs to render UI outside its own route (e.g., a cart badge in the header), the remote exposes a widget component alongside the full app:
// apps/cart/rspack.config.ts
exposes: {
"./CartApp": "./src/App.tsx",
"./CartBadge": "./src/widgets/CartBadge.tsx",
}
The shell mounts it without knowing how it works:
const CartBadge = lazy(() => import("cart/CartBadge"));
<Link to="/cart">
Cart{" "}
<Suspense fallback={null}>
<CartBadge />
</Suspense>
</Link>;
CartBadge is a 15-line component that subscribes to cartApi.getCount(). The Cart team owns it, deploys it with the cart remote, and can change its behavior without touching the shell.
Event Bus: Still Useful, but for Broadcasts
With domain APIs handling direct actions, the event bus is reduced to truly cross-cutting broadcasts: announcements that something happened, which multiple domains might react to:
// Checkout emits after successful purchase
eventBus.emit(Events.CHECKOUT_COMPLETE, { name, email, total });
// Any MFE can request navigation
eventBus.emit(Events.NAVIGATE, { path: "/products" });
Event bus vs. domain API
- Know which domain you're talking to? Call its API directly: cartApi.addItem()
- Broadcasting something happened? Use the event bus: eventBus.emit(Events.CHECKOUT_COMPLETE)
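Once the bus carries only a handful of broadcast events, it's also worth typing payloads per event so emitters and listeners agree at compile time. A sketch; the payload shapes are my assumptions, not the repo's actual types:

```typescript
// Sketch: event names mapped to payload types, checked at compile time.
interface EventPayloads {
  ADD_TO_CART: { productId: string; quantity: number };
  CHECKOUT_COMPLETE: { name: string; email: string; total: number };
  NAVIGATE: { path: string };
}

export class TypedEventBus {
  private listeners = new Map<string, Set<(payload: any) => void>>();

  emit<E extends keyof EventPayloads>(event: E, payload: EventPayloads[E]): void {
    this.listeners.get(event)?.forEach((fn) => fn(payload));
  }

  on<E extends keyof EventPayloads>(
    event: E,
    listener: (payload: EventPayloads[E]) => void
  ): () => void {
    if (!this.listeners.has(event)) this.listeners.set(event, new Set());
    this.listeners.get(event)!.add(listener);
    return () => {
      this.listeners.get(event)?.delete(listener);
    };
  }
}

export const typedBus = new TypedEventBus();
// typedBus.emit("NAVIGATE", { path: 123 }) would now be a compile error.
```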
Independent Deployability
The Domain API pattern solves communication. But for true independent deployability, there's one more structural change to make.
The Deployment Problem with a Shared Package
Even with per-domain packages, the initial setup put @mfe/cart-api in Module Federation's shared config, same as React. This works, but has a subtle deployment problem: every app that lists @mfe/cart-api in shared bundles it at build time. If the Cart team changes an internal implementation detail, all consumer apps must rebuild even though the public contract didn't change.
That defeats the point of the contract boundary. The whole reason for separating internal from public types was so internal changes wouldn't ripple outward. If they still trigger rebuilds across every consumer, you're paying the abstraction cost without getting the deployment benefit.
Expose Domain APIs as Remote Modules
The fix: expose domain APIs as remote modules from the owning app, not as shared dependencies.
// apps/cart/rspack.config.ts — cart team exposes their API
exposes: {
"./CartApp": "./src/App.tsx",
"./CartBadge": "./src/widgets/CartBadge.tsx",
"./cartApi": "./src/cartApi.ts", // domain API exposed as a remote module
},
Consumer apps import it as a remote, loaded at runtime from the cart app's deployment:
// apps/products/src/pages/ProductList.tsx — products consumes cart's API
import { cartApi } from "cart/cartApi"; // resolved at runtime, not bundled
When the Cart team deploys:
- Cart team adds getItemCount() to @mfe/cart-api (minor version bump to 1.1.0)
- Cart team rebuilds and deploys apps/cart (which bundles and exposes @mfe/cart-api@1.1.0)
- At runtime, Products and Checkout load cartApi from the cart remote's remoteEntry.js
- They get 1.1.0 immediately. No rebuild needed on their side.
- Existing addItem() calls from Products still work. The new getItemCount() is available.
No other app needs to rebuild or redeploy.
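One wrinkle: as with remote components, TypeScript needs an ambient declaration for cart/cartApi in each consumer. A sketch; the file location and the duplicated CartItemView shape are assumptions that mirror the public contract shown earlier:

```typescript
// apps/products/src/types/remotes.d.ts — hypothetical declaration for the remote API
declare module "cart/cartApi" {
  export interface CartItemView {
    productId: string;
    name: string;
    price: number;
    quantity: number;
  }
  export const cartApi: {
    addItem(productId: string, name: string, price: number, quantity?: number): void;
    removeItem(productId: string): void;
    getItems(): CartItemView[];
    getCount(): number;
    getTotal(): number;
    clear(): void;
    subscribe(listener: () => void): () => void;
  };
}
```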
What stays in shared: framework-level singletons (React, React DOM, React Router, Zustand) and cross-cutting infrastructure (@mfe/event-bus). These are not domain-owned; they're runtime infrastructure that genuinely needs a single instance.
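Concretely, every app's shared block then shrinks to infrastructure only. A sketch of that final map:

```typescript
shared: {
  react: { singleton: true, eager: false },
  "react-dom": { singleton: true, eager: false },
  "react-router-dom": { singleton: true, eager: false },
  zustand: { singleton: true, eager: false },
  // cross-cutting infrastructure; requiredVersion: false because of workspace:*
  "@mfe/event-bus": { singleton: true, eager: false, requiredVersion: false },
},
```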
Wrapping Up
At this point, you have four independently deployable microfrontends that communicate through domain-owned APIs, composed at runtime by a shell that knows nothing about their internals. The Cart team can ship a new feature without coordinating with Products. The Platform team can change the shell layout without breaking domain logic. New domains slot in by exposing their own API, with no shared schema to update.
If I were doing this again, I'd start with the Domain API pattern from day one rather than re-deriving it from a broken event-bus design. Hopefully this post saves you that detour.