Stop Writing useMemo and useCallback: A Senior Dev's Migration Guide to React Compiler 1.0

Adeel Imran

Most React developers I talk to are still writing useMemo, useCallback, and React.memo by hand in 2026.

Some do it out of habit. Some do it because their senior dev told them to. Some do it because they genuinely believe their component hot path is a bottleneck and wrapping things in useCallback will fix it.

Almost none of them have actually profiled whether it helped.

Here's the thing: React Compiler 1.0 shipped stable in October 2025. It automatically handles all of that memoization for you at build time, more precisely than you can by hand, without a single useMemo in your codebase. It's been running in production at Meta for years. It's available today for React 17 and up.

This post walks through what the compiler actually does, what to do with your existing memoization, and how to roll it out incrementally in a real production codebase, the way I do it as a React contractor dropped into unfamiliar codebases.


What React Compiler actually does

React re-renders a component when its props or state change. The problem is that JS creates new object and function references on every render, which makes things look like they changed even when they didn't. useMemo, useCallback, and React.memo exist entirely to paper over this reference identity problem.
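A minimal sketch of that reference-identity problem, in plain JavaScript (the `render` function here is an illustration, not React itself):

```javascript
// Two calls with identical inputs produce equal-looking but
// referentially distinct objects and functions.
function render(count) {
  return {
    style: { color: "red" },           // new object every call
    onClick: () => console.log(count), // new function every call
  };
}

const first = render(1);
const second = render(1);

// React.memo and dependency arrays compare with Object.is per slot,
// so these all look "changed" even though nothing meaningful did.
console.log(Object.is(first.style, second.style));     // false
console.log(Object.is(first.onClick, second.onClick)); // false
```

Every hand-written useMemo and useCallback in your codebase exists to force those comparisons back to `true`.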

React Compiler solves it at the root. It's a build-time tool (a Babel or SWC plugin) that statically analyzes your component and hook code and inserts memoization automatically, only where it's actually needed, and only for values that can genuinely change.

It uses a Control Flow Graph (CFG) based intermediate representation to understand your code's data flow deeply. This means it can memoize things useMemo literally cannot, like values derived after an early return.

// You write this
function ProductCard({ product, onAddToCart }) {
  const discountedPrice = product.price * 0.9;
  const label = product.inStock ? "Add to cart" : "Out of stock";

  return (
    <div>
      <h2>{product.name}</h2>
      <p>${discountedPrice.toFixed(2)}</p>
      <button onClick={onAddToCart} disabled={!product.inStock}>
        {label}
      </button>
    </div>
  );
}

// React Compiler outputs something equivalent to this
function ProductCard({ product, onAddToCart }) {
  const $ = _c(4); // compiler-managed cache slot
  let t0, t1;

  if ($[0] !== product || $[1] !== onAddToCart) {
    const discountedPrice = product.price * 0.9;
    const label = product.inStock ? "Add to cart" : "Out of stock";

    t0 = <h2>{product.name}</h2>;
    t1 = <p>${discountedPrice.toFixed(2)}</p>;
    // ...rest of JSX
    $[0] = product;
    $[1] = onAddToCart;
    $[2] = t0;
    $[3] = t1;
  } else {
    t0 = $[2];
    t1 = $[3];
  }
  // ...
}

You never write or see the compiled output. You just write idiomatic React and the compiler handles the rest.
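To make the early-return case concrete, here's a hedged sketch (the component is written as a plain function so it runs standalone; the names are invented):

```javascript
function OrderList({ orders, loading }) {
  if (loading) return "Loading...";

  // A useMemo call here would violate the Rules of Hooks: hooks must
  // run unconditionally, before any early return. The compiler's CFG
  // analysis has no such restriction and can cache this derivation.
  const sorted = [...orders].sort((a, b) => a.total - b.total);
  return sorted.map((o) => o.id).join(", ");
}
```

That's the class of memoization you simply cannot express by hand today.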

Production results from Meta Quest Store: up to 12% faster initial loads, cross-page navigations improved, and certain interactions more than 2.5x faster. Memory usage stayed neutral.


The question every team asks: what do I do with my existing useMemo and useCallback?

The official recommendation is nuanced, and I've seen teams misread it. Let me be direct:

For new code: Don't write useMemo or useCallback unless you have a specific, tested reason. Let the compiler handle it.

For existing code: Leave it in place initially. Don't mass-delete your memoization before testing. Removing existing useMemo/useCallback can change the compiler's output in subtle ways, and if you have effects that rely on reference stability in a dependency array, you can accidentally cause over-firing.

The mental model shift is this: useMemo and useCallback are now escape hatches for precise control, not default tools. Reach for them when you need a specific memoization guarantee (e.g., a value that feeds into a useEffect dependency array), not as a reflexive performance habit.
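Here's why that guarantee matters, in a runnable sketch (`depsChanged` is a made-up stand-in for React's internal check, which compares dependency arrays with Object.is per slot):

```javascript
// Simulates how React decides whether an effect should re-fire.
function depsChanged(prevDeps, nextDeps) {
  return (
    prevDeps.length !== nextDeps.length ||
    prevDeps.some((dep, i) => !Object.is(dep, nextDeps[i]))
  );
}

const stableHandler = () => {};

// A memoized callback keeps its identity across renders: effect stays quiet.
console.log(depsChanged([stableHandler], [stableHandler])); // false

// An un-memoized callback is a new reference every render: effect re-fires.
console.log(depsChanged([() => {}], [() => {}])); // true
```

When you need that `false` on every render, a deliberate useCallback (or leaving an existing one in place) is still the right call.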


How to migrate incrementally (the way I do it as a contractor)

This is the part most blog posts skip. Dropping a compiler into a large codebase and hoping for the best is how you create regressions nobody can diagnose.

Here's the phased approach I use:

Phase 1: Install and audit

npm install --save-dev --save-exact babel-plugin-react-compiler@latest
npm install --save-dev eslint-plugin-react-hooks@latest

Don't enable compilation yet. Start with the linter.

The compiler's ESLint rules (shipped inside eslint-plugin-react-hooks@latest) detect Rules of React violations in your existing code: things like mutations inside render, refs accessed during render, and escape-hatch patterns that hide stale closures. These violations cause the compiler to skip those components rather than risk miscompiling them.

Run the linter across your codebase and categorize the violations by severity. This tells you which files are "compiler-ready" and which need cleanup.
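To categorize at scale, you can run ESLint with `--format json` and tally the messages by rule. A hypothetical helper (the `results` shape mirrors ESLint's JSON formatter output; the mock data below is invented):

```javascript
// Tallies ESLint JSON output by ruleId so you can see which
// Rules of React violations dominate the codebase.
function tallyByRule(results) {
  const counts = {};
  for (const file of results) {
    for (const msg of file.messages) {
      counts[msg.ruleId] = (counts[msg.ruleId] || 0) + 1;
    }
  }
  return counts;
}

// Example with mocked results:
const mock = [
  { filePath: "src/a.tsx", messages: [{ ruleId: "react-hooks/rules-of-hooks" }] },
  { filePath: "src/b.tsx", messages: [
      { ruleId: "react-hooks/rules-of-hooks" },
      { ruleId: "react-hooks/exhaustive-deps" },
  ]},
];
console.log(tallyByRule(mock));
// → { "react-hooks/rules-of-hooks": 2, "react-hooks/exhaustive-deps": 1 }
```

Files with zero messages are your compiler-ready starting set for Phase 2.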

// eslint.config.js
import reactHooks from "eslint-plugin-react-hooks";
import { defineConfig } from "eslint/config";

export default defineConfig([reactHooks.configs.flat.recommended]);

Phase 2: Enable the compiler on a safe subset

React Compiler supports a directory-scoped opt-in. Start with your least complex feature area: UI primitives, or a single low-risk route.

// babel.config.js
module.exports = {
  overrides: [
    {
      // Start with only your UI component library
      test: './src/components/ui/**/*.{js,jsx,ts,tsx}',
      plugins: ['babel-plugin-react-compiler'],
    },
  ],
};

Run your test suite. Check your E2E tests. If something breaks, you'll know it's scoped to this directory, and you can add "use no memo" at the top of individual function bodies to opt them out.
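The opt-out looks like this (component name invented; the directive itself is the real mechanism):

```javascript
function LegacyChart({ points }) {
  "use no memo"; // directive: React Compiler skips this function entirely

  // ...whatever this component did before stays untouched by the compiler
  return points.length;
}
```

Treat each "use no memo" as a TODO with a reason attached, not a permanent state.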

Phase 3: Expand and profile

Once the initial rollout is stable, expand the compiler's source scope to cover more of the codebase. At each expansion, profile with React DevTools and the new React Performance Tracks (added in React 19.2). These give you a Scheduler track and a Components track directly inside Chrome DevTools, showing render timing and priorities.

Expand → Profile → Verify → Repeat.

Phase 4: Clean up legacy memoization (carefully)

Once the compiler covers a stable region, you can start removing redundant React.memo, useCallback, and useMemo wrappers, but only after checking every usage with the linter and confirming there are no effect dependencies relying on that reference stability.

A simple audit command:

grep -rn "useMemo\|useCallback\|React.memo" src/ --include="*.tsx" | wc -l

Track that number over time as a metric. You're not sprinting to zero — you're systematically verifying and removing.


What the compiler cannot fix (and where seniors earn their keep)

React Compiler eliminates unnecessary re-renders caused by unstable references. It does not:

  • Fix expensive work whose inputs genuinely change every render or that is bottlenecked by broader architectural issues
  • Fix waterfalls caused by sequential data fetching in component trees
  • Fix oversized bundles from poor code splitting
  • Fix components that break the Rules of React (the compiler will skip optimizing them until those violations are fixed)
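For example, the data-fetching waterfall in the second bullet is an architectural fix, not a memoization fix. A sketch with stand-in fetchers:

```javascript
// Stand-in fetchers; in a real app these would be network calls.
const fetchUser = async () => ({ id: 1 });
const fetchOrders = async () => [{ id: "a" }];

// Waterfall: the second request waits on the first for no reason.
async function loadSequential() {
  const user = await fetchUser();
  const orders = await fetchOrders();
  return { user, orders };
}

// Fix: issue independent requests in parallel. No compiler does this for you.
async function loadParallel() {
  const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);
  return { user, orders };
}
```

Both return the same data; the parallel version just stops paying two round trips in series.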

The biggest mistake I see teams make after enabling the compiler is assuming performance problems are "solved." The compiler removes one category of problem. The rest, like data-fetching waterfalls, large bundle sizes, and excessive context subscriptions, are still entirely on the table.

Real performance work is architectural, not mechanical. The compiler handles the mechanical part. You still need to design the architecture.


A note on backwards compatibility

One underappreciated fact: React Compiler works with React 17 and up. You don't need to be on React 19 to adopt it.

// babel.config.js for React 17/18
module.exports = {
  plugins: [
    ["babel-plugin-react-compiler", { target: "17" }], // or "18"
  ],
};

You'll also need:

npm install react-compiler-runtime

This removes a common blocker for teams sitting on older React versions.


FAQ

Will this break my existing code? The compiler opts out of any component it can't safely analyze. Your app still works; that component just won't be automatically memoized. The linter tells you which components were skipped and why.

Should I still use React.memo for expensive components? In most cases, no. The compiler handles this. If you have a specific, measured bottleneck where you need to guarantee a component never re-renders unless certain props change, you can still use it as a deliberate escape hatch.

Does this work with Next.js? Yes. As of Next.js 15.3.1 with experimental SWC support, build performance is considerably faster when the compiler is enabled. Expo SDK 54+ enables the compiler by default for new apps.
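Enabling it in Next.js is a single config flag (the flag is experimental; check the docs for your exact Next version):

```javascript
// next.config.js
module.exports = {
  experimental: {
    reactCompiler: true,
  },
};
```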

What about third-party libraries? The compiler only transforms your code. Third-party library code isn't compiled unless you pre-compile it yourself. The compiler docs include a guide for shipping pre-compiled library code.


The shift this represents

React Compiler isn't just a performance tool. It represents a shift in what senior React developers need to think about.

For years, understanding memoization (when to use it, when not to, the reference-identity trap, the dependency-array rules) was a meaningful signal of React seniority. That's table stakes now. The compiler handles it.

What still requires senior judgment:

  • Architectural decisions about server vs. client component boundaries
  • Data fetching strategy and waterfall elimination
  • Code splitting and bundle budget discipline
  • When new primitives like <Activity /> are the right tool versus simpler conditional rendering

The developers who will deliver the most value going forward are those who understand why these tools exist well enough to know when to reach past them.


If you're dealing with a React codebase that needs performance work, a compiler migration on an existing app, or an architecture review before scaling, I'm a senior React contractor available for project-based and retainer engagements.