
3 Things React Compiler Won't Auto-Memo: From 512ms Down to 6ms

I thought React Compiler meant no more manual memo. Then a tab switch took 512ms. Three compiler blind spots — child component boundaries, prop identity intent, and setInterval animation state — with extra everyday examples.

“React Compiler auto-memos everything — I don’t need manual optimization, right?” That was me six months ago. Then a teammate reported tab switching was laggy. I opened React DevTools Profiler and recorded a single commit at 512ms. Cold sweat.

The project runs React 19 + React Compiler 1.0. In theory, useMemo, useCallback, and React.memo are all automatic. In practice, the profile showed the compiler missing every hot path.

This is my walk through the three fixes that took the app from 512ms to 6ms, and why the compiler couldn’t help in each case. Each section ends with extra examples you’re likely to hit in your own codebase — not just the one I was debugging.

The compiler does more than you think. While writing this post I used babel-plugin-react-compiler to compile actual code and verify which patterns really need manual memoization. Many cases where “adding useMemo seems useful” are already handled by the compiler — hook returns like { ...state, ...actions }, .filter() results, and {...DEFAULT, ...overrides} merges are all auto-memoized. The three boundaries below are where you actually need to step in — don’t sprinkle useMemo everywhere else.

The pain: 512ms tab switch

The app is a multi-tab workspace: project list on the left, tabs on top, a scrollable feed in the main area. Each tab lives under a deep provider tree (~15 Context.Providers — auth, settings, messages, realtime notifications, etc.). Each feed item runs Markdown parsing + syntax highlighting, so render cost per item is non-trivial.

User clicks a different tab. React DevTools Profiler records:

Render: 512.6ms
What caused this update? TabProvider

TabProvider (0.2ms of 512.6ms)
├─ WorkspacePanel (0.3ms of 512.4ms)
│  ├─ TabPanel key="...tab-1" (220ms)  ← tab 1 full re-render
│  ├─ TabPanel key="...tab-2" (150ms)  ← tab 2 full re-render
│  └─ TabPanel key="...tab-3" (140ms)  ← tab 3 full re-render

All three tabs stay mounted (toggled via CSS so we don’t rebuild state), so one click triggers all three subtrees to re-render, and each tab’s feed runs Markdown + syntax highlighting across N items.

React Compiler is enabled. It isn’t helping. Why?

Boundary 1: compiler can’t express “don’t include this prop in identity”

The first thing I caught was that TabProvider’s actions got a new identity on every render.

Original code:

export function TabProvider({ projectId, children }: { projectId?: string; children: ReactNode }) {
  const [state, setState] = useState<TabState>({ tabs: {}, activeTabId: null });

  const addTab = (id: string) => {
    setState((prev) => ({ ...prev, tabs: { ...prev.tabs, [id]: DEFAULT_META } }));
  };

  const createNewTab = () => {
    const tabId = crypto.randomUUID();
    setState((prev) => ({
      tabs: { ...prev.tabs, [tabId]: { ...DEFAULT_META, projectId } }, // ← uses prop
      activeTabId: tabId,
    }));
    return { tabId };
  };

  // ... 6 more actions

  const actions = { addTab, createNewTab /* ... */ };

  return <TabActionsContext.Provider value={actions}>{children}</TabActionsContext.Provider>;
}

The compiler memoizes, but it sees that createNewTab’s closure captures projectId and conservatively adds projectId to the dependency set of actions. When projectId changes (switching projects), actions gets a new identity → <TabContent actions={actions}> re-renders across the whole tree even if wrapped in React.memo.

The issue is one of intent: projectId’s value only matters at the moment createNewTab is called, not when defined. addTab doesn’t use projectId at all, but it gets dragged along because it’s in the same actions object.

This is the compiler’s blind spot: “I want projectId to be read at call time, not included in the identity” is an intent-level piece of information that can’t be expressed in code, so the compiler falls back to the conservative answer.
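To make the failure mode concrete, here is a simplified, framework-free model of the cache check the compiler effectively emits for actions (illustrative only, not literal babel-plugin-react-compiler output; buildActions and the cache array are stand-ins). Because createNewTab closes over projectId, the slot invalidates and actions gets a fresh identity whenever projectId changes:

```typescript
// Simplified model of the compiler's per-component cache slots.
// $ stands in for the array react/compiler-runtime allocates.
type Actions = { createNewTab: () => { projectId: string | undefined } };

const $: unknown[] = [];

function buildActions(projectId: string | undefined): Actions {
  const createNewTab = () => ({ projectId });
  let t0: Actions;
  if ($[0] !== projectId) {
    t0 = { createNewTab }; // projectId changed: rebuild, new identity
    $[0] = projectId;
    $[1] = t0;
  } else {
    t0 = $[1] as Actions; // unchanged: reuse cached identity
  }
  return t0;
}

const a1 = buildActions("proj-a");
const a2 = buildActions("proj-a"); // same dep: identical object
const a3 = buildActions("proj-b"); // "project switch": fresh object, downstream memo breaks
```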

Fix: pin actions once with a useState initializer, and route the prop through a ref so it’s read fresh at call time.

export function TabProvider({ projectId, children }: { projectId?: string; children: ReactNode }) {
  const [state, setState] = useState<TabState>({ tabs: {}, activeTabId: null });

  const projectIdRef = useRef(projectId);
  useLayoutEffect(() => {
    projectIdRef.current = projectId; // sync after each commit, before paint
  });

  const [actions] = useState(() => ({
    addTab: (id: string) => {
      setState((prev) => ({ ...prev, tabs: { ...prev.tabs, [id]: DEFAULT_META } }));
    },
    createNewTab: () => {
      const tabId = crypto.randomUUID();
      setState((prev) => ({
        tabs: { ...prev.tabs, [tabId]: { ...DEFAULT_META, projectId: projectIdRef.current } },
        activeTabId: tabId,
      }));
      return { tabId };
    },
    // ... 6 more actions
  }));

  return <TabActionsContext.Provider value={actions}>{children}</TabActionsContext.Provider>;
}

The useState(() => ({...})) initializer runs once — actions keeps the same reference for the entire lifetime. useLayoutEffect syncs projectIdRef.current after each commit (before the browser paints), so whenever an action runs it reads the current value.

You could also write projectIdRef.current = projectId directly in the render body, but the React docs advise against writing refs during render, and under concurrent rendering the render body may re-run or be thrown away before commit. useLayoutEffect is the safer choice.

Now memo(TabContent) can finally do its job — when switching projects, TabProvider itself re-renders, but actions is stable → TabContent’s props are stable → memo short-circuits → the whole subtree skips re-render.

When this pattern is warranted

Not every prop needs this workaround. The ref-capture pattern fits when:

  • An action needs to read the prop’s current value at call time, but you don’t want the action’s identity to change when the prop does
  • Prop change frequency ≫ call frequency (like projectId changing on every project switch, but createNewTab getting called once per session)
  • Something downstream relies on identity for short-circuiting (React.memo prop compare, useEffect deps)

Conversely, if nobody compares the action’s identity, or the prop barely ever changes, don’t bother. Over-using refs makes the timing relationship between prop and action harder to follow.

The compiler’s dependency inference is conservative — if a closure reads a variable, it’s treated as a dependency. “Keep this variable out of the deps” can only be expressed via runtime indirection (like refs), because it isn’t something code itself can convey.
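The mechanism generalizes beyond React. A minimal framework-free sketch (Ref, createActions, and the project IDs are illustrative stand-ins): the action's identity is created once, yet every call reads the latest value through a mutable box, which is exactly what useRef provides.

```typescript
// Framework-free model of the ref-capture pattern: stable function identity,
// latest-value reads. `Ref` stands in for what useRef gives you in React.
type Ref<T> = { current: T };

function createActions(projectIdRef: Ref<string | undefined>) {
  // Built once: identity never changes, even when the "prop" does.
  return {
    createNewTab: () => ({ projectId: projectIdRef.current }), // read at call time
  };
}

const projectIdRef: Ref<string | undefined> = { current: "proj-a" };
const actions = createActions(projectIdRef);

const first = actions.createNewTab();  // sees "proj-a"
projectIdRef.current = "proj-b";       // simulate the prop changing
const second = actions.createNewTab(); // same function identity, sees "proj-b"
```

React's experimental useEffectEvent API is aimed at making this pattern first-class; until it is stable, the ref + useLayoutEffect version above is the manual equivalent.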

Boundary 2: child components are not auto-wrapped in React.memo

After fixing TabProvider’s actions identity, I re-ran the profile. TabProvider’s own render now kept actions stable — but WorkspacePanel still re-rendered twice per tab switch.

The reason: WorkspacePanel’s own context subscriptions didn’t change, but its parent re-rendered. React’s default is parent render → child re-run, unless the child is React.memo.

React Compiler 1.0 memoizes JSX elements, object literals, and callbacks inside a component. But it does not automatically wrap child components in React.memo. This is an intentional design boundary — auto-wrapping every component could break code relying on reference equality or intentional re-renders.

The official docs say the compiler “effectively memoizes the whole tree” — that refers to JSX and values inside each component. Bail-out at component boundaries still requires React.memo’s shallow prop comparison.

Fix: manually React.memo the hot-path components.

export const WorkspacePanel = memo(function WorkspacePanel() {
  const { activeTabId, tabs } = useTabState();
  // ...
});

WorkspacePanel takes no props. After memo, it only re-renders when its context subscriptions actually change.

When manual React.memo is worth it

Not every component should be memoized — shallow comparison isn’t free, and overuse adds noise. In practice, three cases pay off:

1. Expensive-to-render leaf components

const FeedItem = memo(function FeedItem({ post }: { post: Post }) {
  const rendered = useMarkdownToReact(post.body);   // Markdown → React nodes
  const highlighted = useSyntaxHighlight(rendered); // shiki / prism
  return <article>{highlighted}</article>;
});

In a list of 100 items, any parent change re-renders all 100 — even if content didn’t change, Markdown re-parses. memo makes only the changed post re-render.

2. Containers whose props are usually stable

const SettingsPanel = memo(function SettingsPanel({ userId }: { userId: string }) {
  // refetches only when userId changes
});

3. Gateways at the top of deep provider trees

If you know a component sits above a huge subtree or chain of Providers, and its props / context rarely change, memo-ing it is a cheap short-circuit gate for that entire subtree.

Conversely, components whose props always change (e.g. receiving onClick, style, or children fresh each render) get no benefit — the shallow compare fails every time. Fix prop identity first (see Boundary 1).

Boundary 3: high-frequency setInterval + setState animations

This was the sneakiest one. The same profiling session showed 366 commits — each averaging 4.5ms, but cumulatively saturating the main thread.

“What caused this update?” pointed to a single component: LoadingSpinner — the loading indicator cycling through · ✢ * ✶ ✻ ✽.

Original implementation:

const [iconIndex, setIconIndex] = useState(0);

useEffect(() => {
  const id = setInterval(() => {
    setIconIndex((i) => (i + 1) % ICON_CYCLE.length);
  }, 120);
  return () => clearInterval(id);
}, []);

return <span>{ICON_CYCLE[iconIndex]}</span>;

setState every 120ms = 8 commits per second. Each commit walks the fiber tree, checks memos, schedules effects. Even when every parent bails out, the tree walk itself costs CPU. Over the loading window, 8× per second compounds into sustained background load.

This isn’t a memoization problem — it’s a commit frequency problem. The compiler can’t decide for you which animations should drive the DOM directly versus go through React state. That’s a design choice.

Fix: skip React entirely, write DOM directly.

const iconRef = useRef<HTMLSpanElement | null>(null);

useEffect(() => {
  let i = 0;
  const id = setInterval(() => {
    i = (i + 1) % ICON_CYCLE.length;
    if (iconRef.current) iconRef.current.textContent = ICON_CYCLE[i];
  }, 120);
  return () => clearInterval(id);
}, []);

return <span ref={iconRef}>{ICON_CYCLE[0]}</span>;

ref + textContent is native DOM manipulation. React never sees the update, so it never commits. Background commit frequency during loading dropped from ~8/sec to ~0.2/sec.

Other “high-frequency state blowing up commits” patterns

Same idea applies to:

Mouse follower:

// ❌ mousemove fires 60+ times per second
const [pos, setPos] = useState({ x: 0, y: 0 });
useEffect(() => {
  const onMove = (e: MouseEvent) => setPos({ x: e.clientX, y: e.clientY });
  window.addEventListener('mousemove', onMove);
  return () => window.removeEventListener('mousemove', onMove);
}, []);
return <div style={{ transform: `translate(${pos.x}px, ${pos.y}px)` }} />;

// ✅ write style directly, bypass React
const ref = useRef<HTMLDivElement>(null);
useEffect(() => {
  const onMove = (e: MouseEvent) => {
    if (ref.current) {
      ref.current.style.transform = `translate(${e.clientX}px, ${e.clientY}px)`;
    }
  };
  window.addEventListener('mousemove', onMove);
  return () => window.removeEventListener('mousemove', onMove);
}, []);
return <div ref={ref} />;

Scroll progress indicator:

// ❌ setState on every scroll event
const [progress, setProgress] = useState(0);
useEffect(() => {
  const onScroll = () => setProgress(window.scrollY / document.body.scrollHeight);
  window.addEventListener('scroll', onScroll, { passive: true });
  return () => window.removeEventListener('scroll', onScroll);
}, []);

Scroll can fire hundreds of times per second. Routing that through state = hundreds of commits per second across the whole subtree. Use ref + style.width or a CSS custom property instead.
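The ✅ counterpart keeps React out of the loop entirely. In this sketch (scrollProgress, the --progress property, and the class name are illustrative), a pure helper computes the fraction and the scroll handler writes it to a CSS custom property:

```typescript
// Pure helper: fraction of the page scrolled, clamped to [0, 1].
// Subtracts the viewport height so the value reaches 1 at the bottom.
function scrollProgress(scrollY: number, scrollHeight: number, innerHeight: number): number {
  const max = scrollHeight - innerHeight;
  return max > 0 ? Math.min(1, Math.max(0, scrollY / max)) : 0;
}

// In the component (browser-only part shown as a sketch):
// const ref = useRef<HTMLDivElement>(null);
// useEffect(() => {
//   const onScroll = () => {
//     const p = scrollProgress(window.scrollY, document.body.scrollHeight, window.innerHeight);
//     ref.current?.style.setProperty('--progress', String(p));
//   };
//   window.addEventListener('scroll', onScroll, { passive: true });
//   return () => window.removeEventListener('scroll', onScroll);
// }, []);
// return <div ref={ref} className="progress-bar" />;
// CSS: .progress-bar { width: calc(var(--progress) * 100%); }
```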

Countdown timer (display only):

// ❌ setState every second, forces a tree walk
const [remaining, setRemaining] = useState(60);
useEffect(() => {
  const id = setInterval(() => setRemaining((r) => r - 1), 1000);
  return () => clearInterval(id);
}, []);

If the countdown is purely visual — no other component branches on the current second — ref + textContent is much cheaper. If logic depends on time (auto-submit at zero), then state makes sense.
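A sketch of the ref-driven version (formatRemaining and the 60-second window are illustrative): the formatting lives in a pure helper, the interval writes textContent, and React commits nothing.

```typescript
// Pure helper: seconds → "m:ss" display string.
function formatRemaining(seconds: number): string {
  const m = Math.floor(seconds / 60);
  const s = seconds % 60;
  return `${m}:${String(s).padStart(2, "0")}`;
}

// In the component (sketch):
// const ref = useRef<HTMLSpanElement>(null);
// useEffect(() => {
//   let remaining = 60;
//   const id = setInterval(() => {
//     remaining -= 1;
//     if (ref.current) ref.current.textContent = formatRemaining(remaining);
//     if (remaining <= 0) clearInterval(id); // display-only: nothing else reacts
//   }, 1000);
//   return () => clearInterval(id);
// }, []);
// return <span ref={ref}>{formatRemaining(60)}</span>;
```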

Rule of thumb: does this value affect React’s render logic? If it’s only visual and no component branches on it, bypass React and touch the DOM directly.

Result: 512ms → 6ms

After all three fixes landed, re-profile:

Action                       | Before | After
Tab switch                   | 512ms  | 4ms
Project switch               | 580ms  | 6.6ms
Loading background commits   | 8/sec  | 0.2/sec

Tab switch went from noticeably laggy to essentially instant. Other interactions (submit, open panel) also improved — they share the provider tree with tab switching, so killing the background noise lifted everything.

What the compiler actually does (don’t duplicate useMemo)

While writing this post I did one thing: fed various patterns to babel-plugin-react-compiler and inspected the output to verify which patterns truly need manual memoization. The answer is that many cases where you’d reach for useMemo are already handled.

Spread / filter / merge inside hooks — no need to add

export function useSession() {
  const state = useContext(StateCtx);
  const actions = useContext(ActionsCtx);
  if (!state || !actions) throw new Error('...');
  return { ...state, ...actions }; // compiler memoizes on [state, actions]
}

Here’s what the compiler actually produces:

// after compile
let t0;
if ($[0] !== actions || $[1] !== state) {
  t0 = { ...state, ...actions };
  $[0] = actions;
  $[1] = state;
  $[2] = t0;
} else {
  t0 = $[2]; // reuse cache
}
return t0;

Same treatment applies to items.filter(...) and { ...DEFAULT, ...overrides } — all auto-memoized. The useMemo I’d originally added around useSession was pure redundancy — the compiler output confirmed it already does the same thing.

Inline objects in Provider values — no need to add

// these two compile to identical code
return <Ctx.Provider value={{ socket }}>{children}</Ctx.Provider>;

// vs
const value = useMemo(() => ({ socket }), [socket]);
return <Ctx.Provider value={value}>{children}</Ctx.Provider>;

I compiled both sources: the resulting _c(N) cache-slot allocation (where N is the total number of values the compiler decides to memoize in that component) and the if ($[0] !== socket) dependency check were identical. Adding useMemo yourself only looks different in the source — post-compile, the output is the same.

Takeaway

When you’re tempted to reach for useMemo, order of operations:

  1. Profile first — is there actually a render issue?
  2. If so, identify which of the 3 boundaries above (prop captured into identity, missing memo boundary, high-frequency commits)
  3. If it’s none of the 3, the compiler usually already handles it — don’t add memo on guess

How to verify: 30-second probe

Don’t rely on memory or docs — compile and look at the output. Two ways:

Option 1: zero-install, in the browser

Open the React Compiler Playground, paste source on the left, see the compiled result on the right. Great for quick one-snippet checks.

Option 2: local CLI against your own file

# install babel CLI and TS preset (babel-plugin-react-compiler is already installed)
pnpm add -D @babel/cli @babel/core @babel/preset-typescript

# compile a single file and print the output
npx babel \
  --presets @babel/preset-typescript \
  --plugins babel-plugin-react-compiler \
  --no-babelrc \
  src/components/YourProvider.tsx | less

If you do this often, add it to package.json:

{
  "scripts": {
    "probe": "babel --presets @babel/preset-typescript --plugins babel-plugin-react-compiler --no-babelrc"
  }
}

Then pnpm probe src/components/YourProvider.tsx | less.

Read the output:

  • Top of file has import { c as _c } from "react/compiler-runtime" + const $ = _c(N) → compiler ran successfully
  • if ($[0] !== dep) { t0 = ...; } else { t0 = $[2]; } → this piece is memoized. Don’t add your own useMemo.
  • No cache checks at all → compiler bailed out (possibly detected mutation, ref reads during render, or this isn’t a component/hook)

I found a real example this way. A provider had <AppStateContext.Provider value={{ user, theme, prefs, socket }}> and the compiled output looked like:

// indices $[16]..$[20] depend on this component's cache slot layout — not fixed values
if ($[16] !== prefs || $[17] !== socket || $[18] !== theme || $[19] !== user) {
  t13 = { user, theme, prefs, socket };
  $[16] = prefs; $[17] = socket; $[18] = theme; $[19] = user;
  $[20] = t13;
} else {
  t13 = $[20];
}

Each of the four fields becomes its own dep — any unchanged field reuses the cached value. Better than writing useMemo(() => ({...}), [prefs, socket, theme, user]) yourself, because the compiler’s dep analysis is more reliable than human memory.

Other compiler blind spots worth knowing

Beyond the three above, the community and official docs have surfaced these situations where the compiler also can’t help:

  • Mutating props or objects during render: compiler detects mutation and skips optimizing that code — safety can’t be guaranteed.
  • Reading refs during render: ref.current isn’t tracked by the compiler and can’t participate in memo dependencies.
  • Sharing expensive computation across components: compiler memoization is per-component. Three different components computing the same result from the same input will each run it once. Cache outside with useMemo + a shared map, or lift the computation higher.
  • List virtualization: compiler won’t virtualize a 10,000-item list for you. That’s an architectural choice.
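For the shared-computation case, a module-level cache is the usual escape hatch. A minimal sketch (parseMarkdownCached and its toUpperCase stand-in are hypothetical, not from this codebase):

```typescript
// Module-level cache shared by every component that calls this helper.
// Compiler memoization is per component instance, so without this,
// three components parsing the same source each pay the cost.
const cache = new Map<string, string>();
let computeCount = 0; // only here to demonstrate cache hits

function parseMarkdownCached(source: string): string {
  const hit = cache.get(source);
  if (hit !== undefined) return hit;
  computeCount += 1;
  const result = source.toUpperCase(); // stand-in for the real expensive parse
  cache.set(source, result);
  return result;
}

const r1 = parseMarkdownCached("# hello");
const r2 = parseMarkdownCached("# hello"); // cache hit: no recompute
```

An unbounded Map is fine for a fixed set of inputs; for user-generated content, cap it with an LRU eviction policy so the cache can't grow without limit.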

The actual edges of React Compiler

From this debugging session:

Compiler does auto                                | Compiler does NOT auto
useMemo equivalent inside a component             | Wrap child components in React.memo
useCallback equivalent inside a component         | Express “this prop should not be in identity” intent
Memoize JSX elements                              | Decide which animations belong in DOM vs React state
Stabilize inline object literals / spread / merge | Analyze re-render cost across a provider chain
Memoize hook return values                        | Cache expensive computation shared across components

One sentence: Compiler eliminates ~90% of intra-component memo boilerplate, but component-boundary and architectural optimizations remain your job.

My wrong mental model was “compiler enabled = free performance.” Reality is closer to “compiler enabled = no more boilerplate, but hot spots still need profile-driven manual optimization.”

Practical advice

  • Don’t guess. Profile first. I wasted time assuming the list needed virtualization — spent hours on it before realizing the real culprit was LoadingSpinner’s interval. Ten minutes with Profiler saves a day of guesswork.
  • React DevTools Profiler’s “What caused this update?” is the most direct clue. Trace the trigger, walk up to the root cause.
  • When unsure whether a pattern needs useMemo, compile it and look. With babel-plugin-react-compiler installed, a 30-line Node script running the Babel transform tells you whether the compiler already handles it. Much more reliable than guessing.
  • Manually React.memo the hotspots: leaf components called by high-frequency parents, expensive-to-render items (Markdown, syntax highlighting), and boundaries between deep provider trees.
  • High-frequency animations go through refs + DOM: mousemove, scroll, interval icons, countdown displays — any visual-only state should bypass React.

React Compiler is worth using. The boilerplate it saves is significant, and day-to-day you can stop thinking about memo. But enabling it doesn’t mean you can stop caring. Profile-driven, targeted manual optimization complements the compiler — it doesn’t replace it.
