
April 4, 2026 · Tim Fraser, Cloud Operations Lead

How Much Does AWS Downtime Actually Cost Your Store?

When your e-commerce site goes down for an hour, the cost isn't just the sales you missed during that hour. It's the abandoned carts that never come back, the Google ranking hit that takes weeks to recover from, and the customer trust that's hard to quantify but easy to lose.

Most store operators have a rough sense that downtime is expensive. Fewer have done the actual maths. Here's how to calculate yours, what typically causes AWS-hosted e-commerce outages, and what prevents them.

Calculating the direct cost

Start with a simple formula:

Hourly revenue = Annual online revenue / 8,760 hours

If your store does $2 million per year in online sales, that's roughly $228 per hour on average. But average is misleading — your revenue isn't evenly distributed. Peak hours (weekday evenings, promotional events) might generate 3-5x the average. Downtime during a flash sale or Black Friday is dramatically more expensive than downtime at 3am on a Tuesday.

A more useful calculation: take your last 90 days of order data, group by hour-of-day and day-of-week, and find your peak revenue hours. That's the number to plan around, because outages don't conveniently happen during quiet periods.
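As a sketch of that analysis, assuming your order export is a CSV with `created_at` and `total` columns (both column names are assumptions; adjust to whatever your platform actually exports):

```python
import pandas as pd

# Last 90 days of orders. Column names are assumptions; adjust to
# whatever your platform's order export actually uses.
orders = pd.read_csv("orders_last_90_days.csv", parse_dates=["created_at"])

# Bucket revenue by day-of-week and hour-of-day.
orders["dow"] = orders["created_at"].dt.day_name()
orders["hour"] = orders["created_at"].dt.hour
buckets = orders.groupby(["dow", "hour"])["total"].sum()

# Each (day-of-week, hour) slot occurs ~12.9 times in 90 days, so
# divide to get an average revenue-per-hour figure for that slot.
per_hour = (buckets / (90 / 7)).sort_values(ascending=False)

print("Top 5 revenue hours:")
print(per_hour.head(5).round(2))
print(f"Flat average: {orders['total'].sum() / (90 * 24):,.2f}/hour")
```

Dividing the top slot by the flat average gives your peak multiplier. Multiply your downtime cost estimates by it when you're planning for worst-case timing.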

The costs you don't see immediately

- Abandoned carts that don't return. If a customer gets a 503 error at checkout, they don't bookmark the page and try again later. If your average cart value is $85 and you lose 50 carts during a one-hour outage, that's $4,250 in potential revenue, most of which never comes back.
- Search engine ranking impact. Google's crawlers visit your site regularly. A multi-hour outage, or repeated short outages, can cause your product pages to drop in search results. Recovering that ranking can take days to weeks, and you lose organic traffic the whole time.
- Customer trust. A first-time visitor who gets an error page will search for the same product and buy from whoever loads first. Repeat customers are more forgiving, but only to a point.
- Operational cost. Every outage triggers incident response. A one-hour outage easily generates 10-20 hours of total staff time when you count the investigation, the fix, the testing, and the post-mortem. A rough model that combines these costs follows this list.
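To put one number on all of this, here's a rough model in Python. Every figure in the example call is an assumption to swap for your own data, and it deliberately leaves out the SEO and trust damage, which resist clean numbers:

```python
def estimate_outage_cost(
    peak_hourly_revenue: float,   # from the peak-hour analysis above
    outage_hours: float,
    carts_lost: int,              # carts abandoned due to errors
    avg_cart_value: float,
    cart_return_rate: float,      # fraction of lost carts that come back later
    staff_hours: float,           # investigation + fix + testing + post-mortem
    staff_hourly_cost: float,
) -> float:
    """Rough, conservative model: direct revenue + unrecovered carts + staff time."""
    direct = peak_hourly_revenue * outage_hours
    carts = carts_lost * avg_cart_value * (1 - cart_return_rate)
    ops = staff_hours * staff_hourly_cost
    return direct + carts + ops

# Example figures (all assumptions) for a one-hour peak-time outage:
# 900 + 3,612.50 + 1,125 = $5,637.50
print(f"${estimate_outage_cost(900, 1, 50, 85, 0.15, 15, 75):,.2f}")
```

Even with conservative inputs, a single peak-hour outage lands well into four figures for a $2M/year store.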

What actually causes e-commerce outages on AWS

Based on patterns across the stores we've seen, the most common root causes are:

- Database bottlenecks. The database runs out of connections during a traffic spike, queries start queueing, and the application layer times out. This cascades quickly: auto-scaling launches new instances that add even more database connections, making it worse.
- Auto-scaling that's too slow. It takes 2-5 minutes for CloudWatch to detect a spike, trigger scaling, launch instances, and register them with the load balancer. If traffic doubles in under a minute, those minutes mean errors.
- Certificate and domain expiry. Completely preventable outages that still happen because nobody was watching the dates. An expired certificate means browsers refuse to load the site at all.
- Deployment failures. A code push that introduces a checkout bug. Without automated rollback, the site is broken until someone notices, diagnoses, and deploys a fix, which easily takes 30-60 minutes.
- Third-party dependency failures. Your payment processor, inventory system, or shipping rate API goes down. If your application doesn't handle it gracefully, the checkout breaks even though your own infrastructure is healthy. A sketch of handling this gracefully follows the list.
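On that last point, the fix is to fail soft rather than hard. A minimal sketch of the idea in Python, using a hypothetical shipping-rate API (the URL, timeouts, and flat fallback rate are all assumptions):

```python
import requests

FALLBACK_FLAT_RATE = 9.95  # assumed flat rate shown when the API is down

def get_shipping_rate(cart: dict) -> tuple[float, bool]:
    """Return (rate, is_estimate). Never let a rates outage block checkout."""
    try:
        resp = requests.post(
            "https://rates.example.com/v1/quote",  # hypothetical endpoint
            json={"items": cart["items"], "zip": cart["zip"]},
            timeout=(3.05, 5),  # (connect, read): fail fast, don't hang checkout
        )
        resp.raise_for_status()
        return resp.json()["rate"], False
    except requests.RequestException:
        # API down, slow, or erroring: show a flat estimate and keep selling.
        return FALLBACK_FLAT_RATE, True
```

The `is_estimate` flag lets the frontend label the rate as approximate. Losing a dollar or two on shipping beats losing the whole order.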

Prevention through weekly monitoring

Most of these causes are detectable before they become outages. Database connection counts creeping upward, auto-scaling groups hitting their maximum, certificates approaching expiry, security groups with unexpected changes — these are all things that show up in a routine infrastructure review.
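For a sense of what a manual version of that review looks like, here's a minimal boto3 sketch covering two of those checks: certificate expiry and auto-scaling headroom. The 30-day threshold is an assumption, both calls return only the first page of results, and a real script would paginate and add the connection-count and security-group checks:

```python
from datetime import datetime, timedelta, timezone
import boto3

EXPIRY_WARNING = timedelta(days=30)  # assumed warning threshold

# Certificates approaching expiry (ACM-managed certs in this region).
acm = boto3.client("acm")
for summary in acm.list_certificates()["CertificateSummaryList"]:
    cert = acm.describe_certificate(
        CertificateArn=summary["CertificateArn"]
    )["Certificate"]
    not_after = cert.get("NotAfter")  # absent for certs not yet issued
    if not_after and not_after - datetime.now(timezone.utc) < EXPIRY_WARNING:
        print(f"EXPIRING SOON: {cert['DomainName']} on {not_after:%Y-%m-%d}")

# Auto-scaling groups already running at their configured maximum.
asg = boto3.client("autoscaling")
for group in asg.describe_auto_scaling_groups()["AutoScalingGroups"]:
    if group["DesiredCapacity"] >= group["MaxSize"]:
        print(f"AT MAX CAPACITY: {group['AutoScalingGroupName']} "
              f"({group['DesiredCapacity']}/{group['MaxSize']})")
```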

The challenge is doing that review consistently. It's the kind of task that gets skipped when the team is busy, which is exactly when it matters most.

plainfra runs weekly health checks on your AWS infrastructure and delivers a report with specific findings. It flags the things that cause outages — connection limits, scaling configuration, expiring certificates, under-provisioned resources — before they become incidents. Ask:

> "Check for anything in our infrastructure that could cause downtime — database limits, auto-scaling config, certificate expiry, and security group changes."

You get a plain-English summary of what's healthy and what needs attention, with specific numbers and resource IDs. It takes a minute to read and can prevent an outage that costs thousands.

Try plainfra free → 50K tokens, 7 days, no charge. Or see the interactive demo →.