React.memo: When AI Gets Performance Wrong

I’ve been using AI tools to analyse codebases and suggest quick performance wins, and I keep hitting the same frustrating pattern. I’ll ask Claude or ChatGPT to look over a project, and without fail, I get this response:
- **Issue**: Missing React.memo
- **Components to Memoize**:
  - `/src/atoms/buttons/Button/index.tsx`
Here’s the thing: this recommendation is not just wrong—it’s actively harmful.
After seeing this pattern dozens of times across different projects, I’ve realized AI tools are treating React.memo like a magic performance wand, sprinkling it everywhere without understanding what it actually does.
The Performance Reality Check
Let me break down what actually happens when you memo that innocent button component.
Take this simple button I extracted from a recent project:
```jsx
const Button = ({ children, onClick, variant = 'primary' }) => (
  <button
    className={`btn btn-${variant}`}
    onClick={onClick}
  >
    {children}
  </button>
);
```
When you wrap this in `React.memo`, you’re not optimizing—you’re adding overhead. Here’s what React now has to do on every render:
- Memory allocation: Store the previous props somewhere
- Comparison work: Run a shallow comparison across all props
- Function overhead: Execute the memo wrapper logic
In other words, the AI is making your app slower while believing it’s optimizing it.
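To make that overhead concrete, here’s a simplified sketch of the shallow comparison memo runs on every render. Treat it as an approximation for illustration, not React’s exact internal implementation:

```javascript
// Simplified sketch of the shallow prop comparison React.memo performs on
// every render. An approximation for illustration -- not React's exact
// internal shallowEqual.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  // Every prop is compared by reference on every render, whether or not
  // the component would have been cheap to simply re-render.
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}
```

For the Button above, this comparison runs on every parent render; since `onClick` is typically a fresh arrow function, it returns false anyway and the re-render happens regardless.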
When React.memo Actually Matters
React.memo works best in very specific situations. Here’s a component from a recent data dashboard project that actually benefits:
```jsx
const DataTable = React.memo(({ data, sortConfig, filters }) => {
  // This component processes 10,000+ rows
  const processedData = useMemo(() => {
    return data
      .filter(applyFilters(filters))   // ~15ms
      .sort(applySorting(sortConfig))  // ~25ms
      .map(enrichRowData);             // ~40ms
  }, [data, sortConfig, filters]);

  return (
    <div className="data-table">
      {/* Rendering 500+ DOM nodes */}
      {processedData.map(row => <TableRow key={row.id} {...row} />)}
    </div>
  );
});
```
This component justifies memo because:
- Heavy computation: ~80ms of processing time
- Expensive DOM: 500+ elements to reconcile
- Stable props: Parent rarely changes the data/config objects
The memo comparison (maybe 0.1ms) saves 80ms+ of wasted work. That’s a real win.
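When the default shallow comparison isn’t quite right, `React.memo` also accepts a custom comparison as its second argument. Here’s a hedged sketch matching the DataTable’s props; whether plain reference equality is safe depends entirely on how the parent produces these objects:

```javascript
// Custom comparison that could be passed as React.memo's second argument.
// Returning true means "props are equal, SKIP the re-render" -- note the
// inverted sense compared to shouldComponentUpdate.
function arePropsEqual(prevProps, nextProps) {
  return (
    prevProps.data === nextProps.data &&
    prevProps.sortConfig === nextProps.sortConfig &&
    prevProps.filters === nextProps.filters
  );
}

// Illustrative usage:
// const DataTable = React.memo(DataTableImpl, arePropsEqual);
```

This only pays off because the parent rarely replaces `data`, `sortConfig`, or `filters`; if it rebuilt those objects on every render, the comparison would always fail and the memo would be pure overhead.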
My Decision Framework
Before adding React.memo, ask:
**Is the component computationally expensive?**
If it takes less than 5ms to render, memo probably isn’t worth it.

**Are the props genuinely stable?**
If you’re passing new objects/functions every render, memo won’t help (and will hurt).
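This failure mode is easy to demonstrate without React at all: literals produce a new reference on every call, so a shallow comparison never sees them as equal. In this illustrative sketch, `renderParent` stands in for a parent component’s render body:

```javascript
// Each "render" of the parent creates fresh function and object literals,
// so a shallow prop comparison sees a change every time -- memo's
// comparison cost is paid, and the re-render happens anyway.
function renderParent() {
  return {
    onClick: () => console.log('clicked'),
    style: { color: 'red' },
  };
}

const firstRender = renderParent();
const secondRender = renderParent();

console.log(Object.is(firstRender.onClick, secondRender.onClick)); // false
console.log(Object.is(firstRender.style, secondRender.style));     // false
```

In React, `useCallback` and `useMemo` exist precisely to keep these references stable across renders; without them, memo on the child is wasted work.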
**Does the component render frequently with the same props?**
A slow component that renders once doesn’t need optimization.

**Have you actually measured the impact?**
I use React DevTools Profiler for every optimization. No measurement = no optimization.
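One way to act on those questions in code: React’s `<Profiler>` component accepts an `onRender` callback that reports each commit’s `actualDuration`. Here’s a hedged sketch of an aggregator that flags memo candidates; the 5ms and 10-render thresholds are this article’s rules of thumb, not React constants:

```javascript
// Collects per-component render stats from React's <Profiler> onRender
// callback, then flags components that render both often AND slowly --
// the only ones where React.memo has a chance of paying for itself.
const renderStats = new Map();

function onRender(id, phase, actualDuration) {
  const stats = renderStats.get(id) ?? { renders: 0, totalMs: 0 };
  stats.renders += 1;
  stats.totalMs += actualDuration;
  renderStats.set(id, stats);
}

function memoCandidates(minAvgMs = 5, minRenders = 10) {
  return [...renderStats.entries()]
    .filter(([, s]) => s.renders >= minRenders && s.totalMs / s.renders >= minAvgMs)
    .map(([id]) => id);
}
```

Wire it up with `<Profiler id="DataTable" onRender={onRender}>` around the subtree you’re auditing, interact with the app, then inspect `memoCandidates()`.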
The best components for memo in my experience:
- Data visualization components with complex calculations
- Large lists with expensive row rendering
- Components with heavy DOM manipulation
- Anything that takes >10ms to render consistently
The Bigger Picture: AI’s Fundamental Limitation
This React.memo issue isn’t just about one React feature—it’s a canary in the coal mine for AI-assisted development.
React.memo is one of the most basic performance concepts in React: it has clear documentation and straightforward use cases. If AI consistently gets this wrong, what else is it misunderstanding across your entire application?
After working with AI tools on several production codebases, I’ve noticed patterns:
- **State Management**: AI suggests Redux for simple component state that `useState` could handle
- **Architecture Decisions**: It recommends microservices patterns for monolith-appropriate applications
- **Bundle Optimization**: Code-splitting suggestions everywhere without actual bundle analysis
- **Database Queries**: Complex joins and indexes without understanding real access patterns
- **Caching Strategies**: Redis recommendations for scenarios where in-memory caching would be better
The React.memo example reveals AI’s core limitation: it pattern-matches against “best practices” without understanding the performance characteristics, trade-offs, or context that make those practices actually beneficial.
The Human Oversight Reality
AI can make code work, but “working” and “optimal” are completely different things.
- Working code: Passes tests, doesn’t crash
- Correct code: Solves the right problem efficiently
- Optimal code: Considers long-term maintenance and team understanding
AI excels at the first, struggles with the second, and often completely misses the third.
This doesn’t mean avoiding AI—I use it constantly for rapid prototyping and exploration. But it means maintaining critical oversight:
- **Question the fundamentals**: If AI suggests a pattern, understand why that pattern exists and whether it applies to your context.
- **Measure everything**: Profile before and after. Performance assumptions are usually wrong.
- **Consider the human cost**: Every optimization has a maintenance burden. Will your team understand this code in six months?
- **Start simple**: AI often suggests complex solutions first. Ask: what’s the simplest thing that could work?
The Bottom Line
React.memo is a powerful tool when used correctly. It’s also a perfect microcosm of AI’s current limitations in software development.
If AI consistently misunderstands something as fundamental as when to memoize a React component, it’s worth questioning every other architectural decision it suggests. The code might work, but working code and well-engineered code are not the same thing.
Next time an AI suggests wrapping your button components in memo “for performance,” use it as a reality check. Question the reasoning, measure the impact, and remember that understanding context and trade-offs is still a uniquely human skill.
AI is an incredibly powerful tool that’s transformed how I approach development. But it’s still just that—a tool. The responsibility for thoughtful, contextual decision-making remains firmly in human hands.