March 10, 2026

Building a news platform that doesn't crash on breaking news

What happens when traffic spikes 10x in 30 minutes, and what we've learned from building 5+ media platforms that handle it.

A normal Tuesday for one of our media clients: 50,000 pageviews. A breaking news day: 500,000. That 10x spike happens without warning, and if the server goes down during the biggest story of the week, the editors are not going to be understanding about it.

We’ve built and maintained platforms for La Republica, Semanario Universidad, Ojo al Clima, El Observador, Voz de Guanacaste, and Otras Miradas. Each one has taught us something about handling traffic that arrives fast and leaves fast. This is what we know.

The problem with default WordPress

Most news sites run on WordPress. That’s fine. WordPress powers over 40% of the web and it’s good at content management. The problem is that a stock WordPress install with a handful of plugins and a shared hosting plan falls over at about 5,000 concurrent users. Maybe less, depending on the theme.

Why? Every pageview hits the database. WordPress generates each page dynamically: query the database for the post content, query it again for the sidebar, again for the menu, again for the related articles. On a complex news theme, a single pageview can trigger 50+ database queries. Multiply that by 5,000 concurrent visitors and your MySQL server is toast.
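The arithmetic above is worth making explicit. A rough back-of-envelope sketch, assuming each concurrent visitor loads a new page roughly every 10 seconds (that interval is our assumption, not a measurement):

```python
def db_queries_per_second(concurrent_users: int, queries_per_pageview: int,
                          seconds_per_pageview: float = 10.0) -> float:
    """Back-of-envelope MySQL load: if each visitor requests a page every
    seconds_per_pageview seconds (an assumed browsing rate), how many
    queries per second does the database see?"""
    return concurrent_users * queries_per_pageview / seconds_per_pageview

# 5,000 concurrent visitors on a theme firing 50 queries per pageview:
load = db_queries_per_second(5000, 50)  # 25,000 queries/sec, uncached
```

At 25,000 queries per second, a single MySQL instance on shared hosting has no chance, which is why the layers below exist.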

The solution isn’t to leave WordPress. It’s to make sure WordPress barely touches the database on a pageview.

Layer 1: Page caching

The first line of defense. A caching plugin (we use a combination of server-level caching and plugin caching depending on the project) generates a static HTML version of each page. When a visitor requests an article, the server returns the static file without touching WordPress or MySQL at all.
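The read path is simple: check for a fresh static file first, and only fall back to WordPress on a miss. A minimal sketch of that logic in Python, with a stand-in `render_page` in place of the real WordPress rendering (the cache directory and function names are illustrative, not a real plugin's API):

```python
import os
import tempfile
import time

CACHE_DIR = tempfile.mkdtemp()  # stand-in for the real cache directory

def render_page(path: str) -> str:
    """Stand-in for WordPress rendering: the expensive, DB-heavy step."""
    return f"<html><body>Rendered {path}</body></html>"

def cache_path(path: str) -> str:
    safe = path.strip("/").replace("/", "_") or "index"
    return os.path.join(CACHE_DIR, safe + ".html")

def serve(path: str, ttl: int = 300) -> str:
    """Return cached HTML if it exists and is fresh; otherwise render
    the page once, store it, and return it."""
    target = cache_path(path)
    if os.path.exists(target) and time.time() - os.path.getmtime(target) < ttl:
        with open(target) as f:
            return f.read()      # cache hit: no WordPress, no MySQL
    html = render_page(path)     # cache miss: pay the rendering cost once
    with open(target, "w") as f:
        f.write(html)
    return html
```

Every request after the first within the TTL window is a file read, which is why this single layer buys an order of magnitude in capacity.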

This alone gets you from 5,000 to 50,000 concurrent users on decent hardware. For most sites, this is the only optimization you need.

The catch: cache invalidation. When an editor publishes a new article or updates a headline, the cached version needs to regenerate. If your cache TTL is too long, readers see stale content. If it’s too short, you’re rebuilding pages constantly under load. We set short TTLs on the homepage and category pages (1-5 minutes), longer TTLs on individual articles (30-60 minutes), and trigger purges on publish events.
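That TTL policy is easy to express as a small function. A sketch of the rules above, plus a publish hook that purges the changed article and the homepage (the function names and exact TTL values are illustrative):

```python
def cache_ttl(path: str) -> int:
    """TTL in seconds by page type: short for fast-moving listing pages,
    long for individual articles, per the rule of thumb above."""
    if path == "/" or path.startswith("/category/"):
        return 60      # homepage and category pages: 1 minute
    return 1800        # individual articles: 30 minutes

def on_publish(purge, changed_paths):
    """On a publish or update event, purge the changed pages plus the
    homepage, which almost always lists the new story."""
    for p in changed_paths:
        purge(p)
    purge("/")
```

The publish-event purge is what lets the article TTLs be long: readers never see a stale headline on a page that was explicitly purged, and the long TTL protects the origin for everything else.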

Layer 2: CDN

CloudFront (AWS) or Cloudflare sits in front of the server. Static assets (images, CSS, JS) are served from edge locations around the world. The origin server never sees those requests.

For news sites specifically, we also cache full HTML pages at the CDN level. This means even the page caching layer doesn’t get hit for most requests. The CDN absorbs the traffic spike. The origin server barely notices.
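Full-page caching at the CDN is driven by the `Cache-Control` headers the origin sends: `s-maxage` governs how long the CDN keeps the page, and logged-in users (editors previewing drafts) must bypass the cache entirely. A sketch of that header policy, with illustrative values matching the TTLs above:

```python
def cdn_headers(path: str, logged_in: bool) -> dict:
    """Cache-Control for full-page HTML at the CDN edge.
    s-maxage applies to shared caches; stale-while-revalidate lets the
    edge serve a slightly stale copy while refetching in the background."""
    if logged_in:
        # Editors must always see live content, never a cached page.
        return {"Cache-Control": "private, no-store"}
    if path == "/" or path.startswith("/category/"):
        return {"Cache-Control": "public, s-maxage=60, stale-while-revalidate=30"}
    return {"Cache-Control": "public, s-maxage=1800, stale-while-revalidate=120"}
```

With headers like these, the CDN answers the vast majority of anonymous pageviews without forwarding anything to the origin.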

Cost: minimal. CloudFront for a mid-sized news site runs $20-50/month in normal traffic, spiking to maybe $100-150 on a big news day. Cheap insurance.

Layer 3: Image optimization

Images are the heaviest part of any news page. A single unoptimized hero image can be 2-3MB. On a page with 10 article thumbnails, a sidebar ad, and a header logo, you’re looking at 10-15MB of images per pageview.

We compress all images on upload (WebP format where supported, JPEG fallback). We serve responsive image sizes: the mobile version gets a 400px wide image, not the 1200px desktop version. We lazy-load everything below the fold.
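Serving the right size comes down to an HTML `srcset`, which lets the browser pick the smallest adequate image for its viewport. A sketch of generating those tags, assuming resized WebP copies exist at hypothetical `-{width}w.webp` URLs (the URL scheme and breakpoints are illustrative):

```python
WIDTHS = [400, 800, 1200]  # assumed breakpoints generated on upload

def srcset(base_url: str) -> str:
    """Build a srcset string listing each resized copy with its width."""
    return ", ".join(f"{base_url}-{w}w.webp {w}w" for w in WIDTHS)

def img_tag(base_url: str, alt: str) -> str:
    """Responsive, lazy-loaded image tag: mobile browsers pick the 400px
    copy, desktops the 1200px one; loading="lazy" defers below-the-fold."""
    return (f'<img src="{base_url}-{WIDTHS[-1]}w.webp" '
            f'srcset="{srcset(base_url)}" '
            f'sizes="(max-width: 600px) 400px, 1200px" '
            f'alt="{alt}" loading="lazy">')
```

The native `loading="lazy"` attribute covers the below-the-fold deferral without any JavaScript.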

This doesn’t directly prevent crashes, but it reduces bandwidth per request, which means your CDN and server can handle more concurrent users with the same resources.

Layer 4: Database optimization

Even with page caching, some requests still hit the database: the admin panel, search queries, and AJAX calls for infinite scroll or live updates. These need a database that can handle the load.

Basics: proper indexing on the posts table (post_date, post_status, post_type). Cleaning up post revisions and transient options that WordPress accumulates over years. Using object caching (Redis or Memcached) to cache frequent database queries in memory.
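Object caching follows the cache-aside pattern: check the cache first, run the query only on a miss, and store the result with a TTL. A sketch in the shape of Redis's GET/SETEX commands, using an in-memory dict as a stand-in so it runs without a Redis server (the class and key names are illustrative):

```python
import time

class ObjectCache:
    """Dict-backed stand-in with a Redis-like GET/SETEX interface."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]          # still within its TTL
        return None

    def setex(self, key, ttl, value):
        self._store[key] = (value, time.time() + ttl)

def cached_query(cache, key, ttl, run_query):
    """Cache-aside: return the cached result if present, otherwise pay
    the MySQL round trip once and cache the result for ttl seconds."""
    value = cache.get(key)
    if value is None:
        value = run_query()          # the expensive database hit
        cache.setex(key, ttl, value)
    return value
```

In production the `ObjectCache` stand-in would be a real Redis client, but the control flow is the same: repeated requests for the same query within the TTL never touch MySQL.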

For one client, adding Redis reduced the average database query time from 120ms to 8ms. That’s the difference between a search page that loads in 2 seconds and one that loads in 200ms.

Layer 5: Infrastructure that scales

All the caching in the world doesn’t help if your server is a $5/month VPS. Our media clients run on AWS. The architecture varies, but the general pattern is:

An application server (EC2 or ECS) behind a load balancer. A managed database (RDS MySQL). Redis for object caching. CloudFront for CDN. S3 for media storage.

For clients with unpredictable traffic (which is all news clients), we configure auto-scaling. If CPU or memory crosses a threshold, a new server instance spins up in minutes. When the spike passes, it scales back down.
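The scaling decision itself is simple threshold logic. A sketch of the policy, with hypothetical CPU thresholds and instance bounds (the real values depend on the client and are tuned per site):

```python
def desired_instances(current: int, cpu_pct: float,
                      scale_up_at: float = 70.0, scale_down_at: float = 30.0,
                      min_n: int = 2, max_n: int = 10) -> int:
    """Threshold-based scaling: add an instance when CPU exceeds
    scale_up_at, remove one when it drops below scale_down_at, and
    always stay within [min_n, max_n]."""
    if cpu_pct > scale_up_at:
        return min(current + 1, max_n)
    if cpu_pct < scale_down_at:
        return max(current - 1, min_n)
    return current
```

Keeping `min_n` at two instances means a single machine failure during a quiet period never takes the site down; the gap between the two thresholds prevents the group from flapping up and down around a single boundary.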

This costs more than shared hosting. For a news site that makes money from pageviews, the cost of downtime during a traffic spike is far higher.

What we’ve learned the hard way

Test with load before launch, not after. We run load tests (k6 or Apache Bench) simulating 10x normal traffic before any media site goes live. The problems you find in a load test are cheap to fix. The problems you find during a breaking news spike are expensive.

Monitor cache hit rates. If your CDN cache hit rate drops below 90%, something is misconfigured. We’ve caught misconfigurations where a plugin was adding a unique query parameter to every URL, defeating the cache entirely.
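The check itself is a one-liner worth automating. A sketch of a hit-rate monitor against the 90% rule of thumb above (the function names and alert text are illustrative):

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """Fraction of requests answered from cache."""
    total = hits + misses
    return hits / total if total else 0.0

def check_cdn(hits: int, misses: int, threshold: float = 0.90) -> str:
    """Alert when the CDN hit rate drops below the threshold; a sudden
    drop usually means something (like a unique query parameter) is
    busting the cache."""
    rate = cache_hit_rate(hits, misses)
    if rate < threshold:
        return f"ALERT: CDN hit rate {rate:.1%} below {threshold:.0%}"
    return f"OK: CDN hit rate {rate:.1%}"
```

Wired to the CDN's request metrics on a short interval, this catches a cache-busting misconfiguration in minutes rather than on the next breaking news day.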

Have a runbook for traffic spikes. When the breaking news hits, the team should know: check CDN status, check server load, check database connections, purge cache if content is stale. We document this for every media client.

If you’re running a news site that sweats during traffic spikes, or building a new one that needs to be ready from day one, this is the kind of problem we’ve solved repeatedly. Happy to talk specifics.
