I've been doing WordPress performance audits for years. Hundreds of stores. And I keep running into the same problems - not exotic edge cases, just wrong defaults and things running that shouldn't be.
I posted these as a 30-day LinkedIn series. Collecting everything here so it's searchable. The full stack: WooCommerce internals, MySQL config, PHP-FPM tuning, Nginx caching, Redis. Each one is something I've fixed in production.
Note: Most of these tips require access to server config - php.ini, nginx.conf, my.cnf, Redis settings. That means VPS or dedicated hosting. If you're on shared hosting, a lot of this won't be applicable. Worth knowing before you dig in. I host all of my projects on Mikrus <3
WooCommerce internals
Issues baked into WooCommerce defaults. No plugin needed - just knowing they exist.
Cart fragments fire on every page load
Every page on your WooCommerce store fires a hidden AJAX request called wc-ajax=get_refreshed_fragments. It exists to update the cart icon in your header. And it boots the entire WordPress + WooCommerce stack to do it.
That's a full PHP execution on every page load that nobody sees in performance tests. Your users feel it though. If you don't use a mini-cart widget, one line fixes it:
wp_dequeue_script('wc-cart-fragments');
If you do use a mini-cart, at least limit this to shop pages - don't let it fire on your blog posts. I've seen this single change cut server load by 30% on stores with decent traffic.
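The conditional version is a few lines in your theme's functions.php. A minimal sketch, assuming you only want fragments on WooCommerce pages and that the standard conditional tags are available:

```php
// Keep cart fragments on WooCommerce pages only; dequeue everywhere else.
// Priority 20 runs after WooCommerce has enqueued its own scripts.
add_action('wp_enqueue_scripts', function () {
    if (function_exists('is_woocommerce')
        && ! (is_woocommerce() || is_cart() || is_checkout())) {
        wp_dequeue_script('wc-cart-fragments');
    }
}, 20);
```

Blog posts and landing pages stop booting WooCommerce over AJAX; the mini-cart keeps working where shoppers actually use it.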
Redis is evicting your customers' shopping carts
If you're using Redis for object cache, check your maxmemory-policy setting. The default is usually allkeys-lru, which means when Redis runs out of memory it evicts any key - including active WooCommerce cart sessions.
Customer fills a cart, browses for 10 minutes, goes to checkout: empty cart. Because Redis silently deleted their session during a traffic spike.
maxmemory-policy volatile-lru
Now Redis only evicts keys that have a TTL set. Your persistent cache keys survive; sessions with expiry get cycled naturally. Also run:
redis-cli INFO stats | grep evicted_keys
If that number keeps climbing, you need more memory or fewer cached things.
HPOS with sync enabled doubles every order write
WooCommerce's High Performance Order Storage moves orders from the slow wp_posts/wp_postmeta tables to dedicated tables. Real improvement. But by default it runs in compatibility mode with sync enabled - meaning every order write goes to both the old tables and the new ones. You're not saving database load, you're doubling it.
After verifying HPOS works and your plugins are compatible, disable sync: WooCommerce → Settings → Advanced → Features. You'll cut order-related database writes in half overnight.
Product lookup tables go stale silently
WooCommerce maintains a denormalized table called wp_wc_product_meta_lookup - it caches product prices, stock status, ratings. The problem: if you import products via CSV, use a bulk editor, or sync from an ERP, the lookup table doesn't update. WooCommerce only refreshes it through its own hooks.
So your prices are correct in wp_postmeta but the lookup table shows old data. Product sorting is wrong, filters show stale stock, on-sale products don't appear in sale queries.
Fix: WooCommerce → Status → Tools → Regenerate product lookup tables. I run into this at least once a month with stores that do any external data sync.
WooCommerce fires database writes you don't know about
WooCommerce periodically updates woocommerce_tracker_last_send, _transient_wc_count_comments, and various usage tracking options. Each one is a write to wp_options on an autoloaded row - which invalidates the entire alloptions cache (see the alloptions tip).
Even if you opted out of WooCommerce usage tracking, some of these still fire. Small writes, but on a high-traffic store they add up to constant cache invalidation. SAVEQUERIES will find them all.
Don't want to hunt these down manually?
WP Multitool scans your store for slow queries, bloated autoload, orphaned transients, and heavy callbacks. All local, nothing leaves your server.
Database & MySQL
In my experience most WordPress performance issues start in the database. Wrong defaults, bloated tables, missing indexes. I find these on pretty much every store I audit.
Your autoloaded options are probably 5x too big
Run this query right now:
SELECT SUM(LENGTH(option_value)) FROM wp_options WHERE autoload='yes';
If the result is over 1MB, every single page load is dragging that data into memory. WordPress loads all autoloaded options on every request, no exceptions. The usual culprits: abandoned plugin settings that never got cleaned up, WooCommerce transients marked autoload=yes by accident, old theme options from three redesigns ago. I've seen stores with 8MB of autoloaded data. The site owner was blaming their hosting.
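To see which options are responsible, sort the autoloaded rows by size - the top handful usually accounts for most of the bloat:

```sql
-- Top 20 autoloaded options by size, biggest first
SELECT option_name, LENGTH(option_value) AS bytes
FROM wp_options
WHERE autoload = 'yes'
ORDER BY bytes DESC
LIMIT 20;
```

Anything from a plugin you deleted two years ago is a safe candidate for autoload='no' or outright removal.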
MySQL's query cache is hurting you
If you're still running MySQL's query_cache, turn it off. MySQL deprecated it in 5.7 and removed it in 8.0 for a reason. Every time something writes to the database - and WooCommerce writes on every order, every cart update, every stock change - the query cache takes a global mutex lock. All queries wait in line while the cache invalidates.
One order = hundreds of cached queries trashed simultaneously. The "cache" is now your bottleneck.
query_cache_type = 0
query_cache_size = 0
If you're on MariaDB and think you're safe: same problem, same mutex lock.
Your database server is reading from disk on every product page
MySQL's default innodb_buffer_pool_size is 128MB. That's barely enough for a fresh WordPress install with no content. If your database is 500MB+ (and with WooCommerce it probably is - wp_postmeta alone gets massive), MySQL is reading from disk on every query. That's the difference between 2ms and 200ms per query.
The fix: set innodb_buffer_pool_size to 70–80% of available RAM on a dedicated DB server. If your app and DB share a box, aim for 40%. Check your hit ratio:
SHOW STATUS LIKE 'Innodb_buffer_pool_read%';
If Innodb_buffer_pool_reads is more than 1% of Innodb_buffer_pool_read_requests, you're hitting disk too often. This is probably the single biggest performance win available - and most hosts leave it at the default.
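As a rough illustration of the shared-box rule above - a 4GB server running PHP, MySQL, and Redis together - the my.cnf line would be something like:

```ini
# ~40% of 4GB when MySQL shares the box with PHP-FPM and Redis
innodb_buffer_pool_size = 1600M
```

On a dedicated DB server with the same RAM you'd go to 2800M–3200M instead.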
Payment gateways are silently bloating your database
Check how many rows Stripe left in your wp_options table:
SELECT COUNT(*) FROM wp_options WHERE option_name LIKE '_transient_wc_stripe%';
Stripe, PayPal, Square - they all cache API responses as individual transient rows. Thousands of them. They're autoload=no, so they don't load on every page. But they still bloat the table's physical size on disk. A bigger table means slower queries for everything else, including the autoloaded options that do load on every page. Clean them out, then figure out why your cleanup cron isn't doing its job.
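Clearing them out is two DELETE patterns - the transient rows and their paired timeout rows. Back up wp_options first; this sketch assumes the default wp_ table prefix:

```sql
-- Remove Stripe API-response transients and their timeout rows
DELETE FROM wp_options
WHERE option_name LIKE '_transient_wc_stripe%'
   OR option_name LIKE '_transient_timeout_wc_stripe%';
```

Swap the pattern for the other gateways, then watch whether the rows come back - if they do, the cleanup cron is the real problem.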
WooCommerce sessions table grows forever
Run this:
SELECT COUNT(*) FROM wp_woocommerce_sessions;
WooCommerce stores full cart and customer data as serialized blobs for every visitor. The built-in cleanup cron deletes only 1,000 expired sessions every 48 hours. If you get 10,000 visitors a day, you're accumulating dead sessions way faster than they're being cleaned. I've seen stores with 500K+ rows in this table.
It gets worse: the session data is serialized PHP, so some rows are huge. The table fragments, queries slow down, and eventually checkout starts timing out. Clean it manually, then consider lowering the session expiry or running the cleanup more often.
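The manual cleanup is one query - session_expiry is a Unix timestamp, so anything older than now is dead weight:

```sql
-- Drop every expired session in one pass
DELETE FROM wp_woocommerce_sessions
WHERE session_expiry < UNIX_TIMESTAMP();
```

Follow it with OPTIMIZE TABLE wp_woocommerce_sessions; to reclaim the fragmented space on disk.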
Action Scheduler is silently growing to millions of rows
WooCommerce's Action Scheduler keeps completed actions for 30 days by default. Sounds reasonable until you check the actual table size:
SELECT COUNT(*) FROM wp_actionscheduler_actions;
On a busy store processing orders, sending emails, syncing inventory - this table can hit millions of rows. Other queries reference it, indexes get bloated, and the admin page for "Scheduled Actions" becomes unusable. One filter fixes it:
add_filter('action_scheduler_retention_period', function() {
return DAY_IN_SECONDS;
});
Keep completed actions for 1 day instead of 30. You don't need a month of "email sent successfully" records.
WordPress doesn't index wp_postmeta.meta_value
Every WooCommerce query that filters by price, SKU, stock status, or any custom field runs a full table scan on wp_postmeta - because WordPress doesn't add an index on meta_value. On a store with 500K+ rows in that table, a single product search can take 2 seconds.
One query fixes it:
ALTER TABLE wp_postmeta ADD INDEX idx_meta_value(meta_value(191));
The 191-character prefix keeps the index under InnoDB's older 767-byte key limit for utf8mb4 (191 × 4 bytes); newer row formats allow longer prefixes, but 191 is safe everywhere. After this, the same product search takes 20ms. I don't know why WordPress core still doesn't ship this index - it's been a known issue for years.
tmp_table_size is killing your WooCommerce reports
If your WooCommerce Analytics pages are painfully slow, it's probably not PHP - it's MySQL creating temporary tables on disk. WooCommerce Analytics runs GROUP BY queries across orders. When the result set doesn't fit in memory, MySQL writes it to disk as a MyISAM temp table. Default tmp_table_size is 16MB. For a store with 10K+ orders, that's not enough.
SHOW STATUS LIKE 'Created_tmp_disk_tables';
If that number keeps climbing, set:
tmp_table_size = 64M
max_heap_table_size = 64M
Both need to match - MySQL uses the lower of the two. After this, analytics queries stay in memory instead of grinding your disk.
PHP & PHP-FPM
PHP-FPM defaults are set for minimal resource usage, not for WooCommerce. Most of the server performance you're leaving on the table is here.
PHP-FPM slow-log is free profiling that nobody uses
There's a free profiling tool built into your server that most devops teams never enable. In your PHP-FPM pool config, add:
slowlog = /var/log/php-fpm-slow.log
request_slowlog_timeout = 3s
Now every request that takes longer than 3 seconds gets a full stack trace dumped to that log. You'll see exactly which function, which plugin, which WooCommerce hook is blocking. No external tools, no paid services, no code changes. It's been there the whole time. This is the first thing I check on any slow WooCommerce site.
OPcache interned strings buffer is too small
PHP's OPcache has a setting called opcache.interned_strings_buffer. Default is 8MB. WordPress + WooCommerce + plugins needs 32–64MB. When it's too small, PHP can't share string data between FPM workers. So every worker duplicates its string storage in memory. If you have 12 workers, that's 12× the memory usage for strings. Your server looks like it needs more RAM. It doesn't. It needs this one php.ini change:
opcache.interned_strings_buffer=64
Check current usage from an FPM context - the CLI has its own separate OPcache, so php -r on the command line won't reflect your web workers. Drop a one-line script behind PHP-FPM:
<?php print_r(opcache_get_status()['interned_strings_usage']);
If used_memory is close to buffer_size, you're wasting RAM on duplicated strings.
PHP-FPM pm.max_children - the math nobody does
Most people either leave pm.max_children at the default (5) or set it to something ambitious like 50. Both are wrong. Here's the actual formula:
(Total RAM - OS overhead - MySQL - Redis) / average PHP worker memory
Check your worker size:
ps --no-headers -o rss -C php-fpm | awk '{sum+=$1} END {print sum/NR/1024"MB"}'
Typical WooCommerce worker uses 40–80MB. On a 2GB VPS after OS, MySQL, and Redis, you've got maybe 800MB for PHP - that's 10–20 workers. Set it to 50 and you get OOM kills during checkout. Set it to 5 and you've got 45 customers waiting in line. Do the math. It takes 2 minutes.
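Plugging in hypothetical numbers - a 2GB VPS with roughly 1248MB reserved for OS, MySQL, and Redis, and a measured 60MB average worker - the whole calculation is one shell expression:

```shell
# Hypothetical inputs: measure WORKER_MB with the ps one-liner above
TOTAL_MB=2048; RESERVED_MB=1248; WORKER_MB=60
echo "pm.max_children = $(( (TOTAL_MB - RESERVED_MB) / WORKER_MB ))"
```

Re-run it after any plugin change that moves your average worker size; the right answer drifts over time.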
Shutdown hooks are holding your PHP workers hostage
Your page loads fast. TTFB looks great. But your server feels overloaded with way less traffic than it should handle. Check what's running in PHP shutdown hooks. Plugins that fire analytics pings, API calls, or webhook notifications in register_shutdown_function() keep the FPM worker busy after the response is sent.
The user sees a fast page. But the worker isn't free for the next request yet. If you have 10 workers and each one is held for 500ms by shutdown hooks, your effective capacity is halved. The PHP-FPM slow-log catches these too.
Never use pm = dynamic for WooCommerce
PHP-FPM has three process management modes: static, dynamic, and ondemand. For WooCommerce, never use dynamic. It tries to scale workers up when needed, but the scaling has latency - forking new PHP processes takes time. During a checkout surge, customers wait while FPM spins up workers.
- High-traffic stores: use pm = static. Workers are always warm, no fork overhead, predictable memory.
- Spiky traffic: use pm = ondemand. Workers scale to zero between bursts, fork on demand. At least it's honest about the cold start.
Dynamic pretends to be smart about both scenarios and fails at both. Pick one.
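For the static case, the pool config is short. The values here are illustrative - max_children comes from the RAM math above, and pm.max_requests recycles workers periodically to contain slow memory leaks:

```ini
pm = static
pm.max_children = 13     ; from (RAM - overhead) / avg worker size
pm.max_requests = 500    ; recycle each worker after 500 requests
```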
Nginx & Caching
Most "caching doesn't work" complaints I've seen come from nginx being misconfigured, not broken. The config is mostly right, the strategy is off.
Your fastcgi_cache isn't bypassing WooCommerce carts properly
If you're running nginx with fastcgi_cache, check your cache bypass rules. WooCommerce sets woocommerce_items_in_cart and woocommerce_cart_hash cookies the moment someone adds anything to cart. Your bypass rule needs to check for these specifically - not just the logged-in cookie.
Otherwise you're serving cached cart pages with stale quantities. Customer adds 3 items, sees 1. Checks out, gets the wrong total.
if ($cookie_woocommerce_items_in_cart) { set $skip_cache 1; }
I've debugged this exact issue on at least a dozen stores. The caching "works" - it just serves wrong data to shoppers.
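A fuller bypass block might look like this - a sketch assuming default WooCommerce page slugs and a $skip_cache variable wired into your cache directives:

```nginx
set $skip_cache 0;

# Any cart activity or login: serve fresh pages
if ($cookie_woocommerce_items_in_cart)     { set $skip_cache 1; }
if ($cookie_woocommerce_cart_hash)         { set $skip_cache 1; }
if ($http_cookie ~* "wordpress_logged_in") { set $skip_cache 1; }

# Never cache cart, checkout, or account pages
if ($request_uri ~* "/cart/|/checkout/|/my-account/") { set $skip_cache 1; }

# Inside the PHP location block:
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
```

Adjust the URI patterns if you've renamed those pages - the cookies are set by WooCommerce itself and don't change.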
nginx upstream keepalive - and why HTTP/2 won't save you without it
Most nginx configs open a new connection to PHP-FPM for every single request. Even with unix sockets, that's a connect/accept/close cycle every time - 5–15ms of overhead per request. Add this to your upstream block:
upstream php-fpm {
server unix:/var/run/php-fpm.sock;
keepalive 16;
}
And in the location block that runs fastcgi_pass, tell nginx to actually reuse those connections (the proxy_http_version / Connection "" trick only applies to proxy_pass backends, not FastCGI):
fastcgi_keep_conn on;
Now connections persist between requests. PHP-FPM workers skip the accept/close overhead. On a busy store doing 50+ requests/second, this adds up to real savings.
This matters even more if you've enabled HTTP/2. Your visitors get multiplexed connections and header compression - but between nginx and PHP-FPM it's still FastCGI, opening a fresh connection for every request unless keepalive is on. The frontend optimization is pointless if the backend bottleneck remains.
Cache PURGE is nuking your entire store on every sale
If you use nginx fastcgi_cache with nginx-helper for purging, check the purge settings. The default behavior purges everything when any post is updated.
A customer buys a product. Stock changes. WooCommerce fires a post update. nginx-helper purges the entire cache. Every page on your site goes cold. The next 50 visitors all hit PHP-FPM directly.
That's not caching. That's a ticking bomb triggered by every sale. Set it to purge only the modified URL + homepage. A stock update on one product shouldn't nuke your cache for 500 other product pages. I've seen stores where the cache hit ratio was under 20% because of this. They thought nginx caching "didn't work."
stale-while-revalidate prevents cache stampedes
When your nginx cache expires on a popular product page, without stale-while-revalidate, the first visitor after expiry waits for a fresh PHP response. And the second. And the third. All hitting PHP-FPM at once. That's a cache stampede.
Two lines fix it:
fastcgi_cache_use_stale updating;
fastcgi_cache_background_update on;
Now when cache expires, the first visitor gets the stale (but fast) response. Nginx fetches the fresh version in the background. No stampede, no cold cache spike, no checkout timeouts during a sale.
open_file_cache eliminates thousands of syscalls
A typical WooCommerce product page loads 15–30 CSS and JS files. Every one triggers an open() system call in nginx. Multiply by requests per second and that's thousands of unnecessary file lookups. Add to your nginx config:
open_file_cache max=10000 inactive=60s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
Now nginx caches file descriptors and metadata in memory. Second request for the same file skips the filesystem entirely. On stores with lots of product variation images, this is even more significant.
Redis & Object Cache
update_option() invalidates your entire autoload cache
WordPress caches all autoloaded options in one object cache key called alloptions. Smart - until one plugin calls update_option() on any autoloaded option. That single call invalidates the entire alloptions cache. Next request, WordPress fetches all autoloaded options from MySQL again. If your autoloaded data is 2MB, that's a 2MB query on the next page load.
The worst offenders: analytics plugins that store hit counters in wp_options, rate limiters that update timestamps, anything that writes to an autoloaded option on every request. Find them:
define('SAVEQUERIES', true);
Then grep for UPDATE.*wp_options in your query log. If something's updating options on every request, that's your problem.
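If grepping raw logs is awkward, a throwaway mu-plugin can surface the writes directly. A sketch that assumes SAVEQUERIES is already on - with it enabled, $wpdb->queries holds [sql, time, caller] triples:

```php
// mu-plugin sketch: log every wp_options write captured by SAVEQUERIES
add_action('shutdown', function () {
    global $wpdb;
    if (empty($wpdb->queries)) {
        return; // SAVEQUERIES not enabled, nothing to inspect
    }
    foreach ($wpdb->queries as $q) {
        if (preg_match('/^\s*(UPDATE|INSERT)\b.*wp_options/is', $q[0])) {
            error_log(sprintf("options write (%.4fs): %s\n  via %s",
                $q[1], $q[0], $q[2]));
        }
    }
});
```

Tail your PHP error log while browsing the site; the caller string points straight at the offending plugin.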
Your Redis needs more memory than you think
Quick math for WooCommerce Redis sizing:
- Each object cache entry: 1–10KB average
- wp_alloptions: 500KB–2MB alone
- WooCommerce sessions: 2–5KB each
- 500 concurrent shoppers: 2.5MB just for sessions
- Product cache for 5K products: ~50MB
Most people allocate 64MB for Redis and wonder why things randomly slow down. Redis hits the limit, starts evicting, and you get cache stampedes on the evicted keys. Set Redis maxmemory to at least 2× your typical used_memory. Check with:
redis-cli INFO memory | grep used_memory_human
If it's within 80% of maxmemory, you're living dangerously.
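Putting the sizing together with the eviction tip from earlier, an illustrative redis.conf for a mid-size store might read:

```conf
# Illustrative sizing: ~2x typical used_memory for this store
maxmemory 256mb
maxmemory-policy volatile-lru
```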
WordPress & Server
wp-cron does a DNS lookup of your own domain on every page load
By default, WordPress checks for due scheduled tasks on every page load and, whenever one is due, fires a loopback HTTP request to itself. That's spawn_cron() doing DNS resolution of your own domain plus a full SSL handshake - piggybacked on a visitor's request. On shared hosting, that can mean 1–3 seconds of invisible overhead. The fix takes two minutes:
// wp-config.php
define('DISABLE_WP_CRON', true);
# system cron
*/5 * * * * wget -q -O - https://yoursite.com/wp-cron.php >/dev/null 2>&1
Now cron runs every 5 minutes from a proper scheduler instead of hijacking your visitors' page loads.
One bad pre_get_posts filter destroys your entire admin
If your WordPress admin is slow - especially the posts/products list and search - check your pre_get_posts filters. One plugin adding a meta_query without checking is_admin() or is_main_query() means every admin list table, every AJAX search, every product lookup gets an unindexed JOIN against wp_postmeta. On a store with 50K products, that's a table scan on every keystroke in the admin search.
define('SAVEQUERIES', true);
Check the queries on any slow admin page. You'll find the culprit. It's almost always a plugin trying to be clever with custom sorting or filtering.
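For comparison, a well-behaved filter bails out early. A sketch using a hypothetical _featured meta key - the two guard clauses are the part that matters:

```php
add_action('pre_get_posts', function ($query) {
    // Guard clauses: never touch admin list tables or secondary queries
    if (is_admin() || ! $query->is_main_query()) {
        return;
    }
    $query->set('meta_query', [[
        'key'   => '_featured', // hypothetical meta key
        'value' => 'yes',
    ]]);
});
```

Without those two checks, the same meta_query silently attaches to every admin list table and AJAX search on the site.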
The loopback problem nobody talks about
WordPress uses loopback requests for more than just cron. The Site Health check does it. The block editor does it. Plugin update checks do it. WooCommerce background processing does it. Each one is your server making an HTTP request to itself - through the full network stack. DNS resolution, TCP connect, SSL handshake, full WordPress boot on the receiving end.
If your server's DNS is slow, or you're behind a CDN with strict firewall rules, or your hosting blocks loopback connections, these silently fail or add seconds of overhead. Check Site Health for "loopback request" warnings - they're not just informational. They tell you something is fundamentally broken in how your server talks to itself.
Hidden writes killing your object cache hit ratio
Bring these two together: SAVEQUERIES catches known culprits like the pre_get_posts filter problem and the alloptions invalidation. But there's a deeper pattern worth knowing.
Every plugin that writes to wp_options on a page load has the potential to trash your cache hit ratio. One update_option() in a rarely-audited code path, running on every request, compounds across traffic. Run SAVEQUERIES, look for UPDATE and INSERT on wp_options, and trace each one back to its plugin. The offenders are always surprising.
Want me to do this for your store?
I do hands-on WooCommerce performance audits - database, PHP-FPM, Nginx, Redis. Actual config changes, not a report. My nginx config templates are also on GitHub if you'd rather do it yourself:
The one thing that actually matters
I've been doing WordPress performance for years. Hundreds of sites. And the pattern is always the same.
Most performance problems aren't about server specs. They're about things running that shouldn't be running. Queries nobody asked for. Data nobody needs. Processes firing on every page load because someone forgot a conditional check.
The fix is almost never "get a bigger server." It's "figure out what's happening that shouldn't be."
That's why I built WP Multitool - 13 modules that find exactly this stuff: slow queries, bloated autoload, orphaned transients, heavy callbacks. All local, nothing leaves your server.
If any of these tips helped, the plugin will find more issues specific to your store.