
April 4, 2026 · Tim Fraser, Cloud Operations Lead

How to Find and Eliminate AWS Waste Automatically

Every AWS account has waste. Not because anyone is careless, but because cloud infrastructure is dynamic and humans forget things. An engineer spins up an instance on Tuesday and means to shut it down on Friday. A project gets cancelled but its resources don't. A database gets upgraded during a traffic spike and never downsized after.

The result is that most AWS accounts are carrying 20-35% in avoidable spend. The challenge isn't knowing that waste exists — it's finding it consistently and acting on it before the bill arrives.

The four types of AWS waste

Understanding what waste looks like makes it easier to spot.

Idle resources. These are resources that exist and are running, but aren't doing useful work. EC2 instances at 1-2% CPU. Load balancers with zero active connections. RDS databases that haven't served a query in weeks. They consume compute hours and cost money, but contribute nothing. The tricky part is distinguishing "idle" from "low-traffic but essential" — a monitoring server at 3% CPU might still be critical.

Oversized resources. Everything is bigger than it needs to be. This happens because provisioning is a one-way ratchet: when something runs slow, you scale up. When it's running fine, nobody scales down. A c6i.2xlarge at 15% average CPU could be a c6i.large at 60% CPU — same performance, 75% less cost. Multiply this across every instance, database, and cache in the account.

Orphaned storage. EBS volumes left behind when instances are terminated. Snapshots taken "just in case" that are now six months old and forgotten. S3 buckets full of logs that nobody will ever read again but that nobody has set a lifecycle policy on. AMIs registered for instances that no longer exist, with their backing snapshots still costing money. Storage waste grows monotonically — it only goes up unless someone actively cleans it.

Unnecessary data transfer. Services communicating across availability zones when they could be in the same AZ. Traffic routing through NAT Gateways when VPC endpoints would be cheaper. Large S3 objects being downloaded repeatedly without CloudFront caching. Data transfer is invisible in the architecture diagram but very visible on the bill.
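The idle and oversized cases above can be sketched as a simple classification rule. This is a minimal illustration, not how any particular tool works: the CPU thresholds and the monthly prices are assumptions chosen to match the c6i.2xlarge example (each halving of instance size roughly doubles utilisation, so 15% CPU supports two size steps down and a ~75% saving).

```python
# Illustrative on-demand monthly prices (USD). Real prices vary by
# region and pricing model; treat these as placeholders.
MONTHLY_PRICE = {"c6i.2xlarge": 248, "c6i.large": 62, "r6g.xlarge": 147}

def rightsize_halvings(avg_cpu: float, target_cpu: float = 60.0) -> int:
    """How many times we can halve the instance size while keeping
    projected CPU at or below the target utilisation."""
    halvings = 0
    while avg_cpu * 2 <= target_cpu:
        avg_cpu *= 2
        halvings += 1
    return halvings

def classify(instance_type: str, avg_cpu: float) -> dict:
    """Classify an instance as idle, oversized, or right-sized, with a
    rough monthly savings estimate. Thresholds are assumptions."""
    price = MONTHLY_PRICE[instance_type]
    if avg_cpu < 2.0:
        # Near-zero CPU: termination candidate. Verify it isn't a
        # low-traffic-but-essential service before acting.
        return {"status": "idle", "est_savings": price}
    halvings = rightsize_halvings(avg_cpu)
    if avg_cpu < 20.0 and halvings > 0:
        # Downsizing by `halvings` size classes divides the price by
        # 2**halvings; the saving is the difference.
        smaller_price = price // (2 ** halvings)
        return {"status": "oversized", "est_savings": price - smaller_price}
    return {"status": "right-sized", "est_savings": 0}

print(classify("c6i.2xlarge", 15.0))  # oversized: two halvings, ~75% saved
print(classify("r6g.xlarge", 1.2))    # idle: full price recoverable
```

In practice the average-CPU input would come from a metrics source such as CloudWatch rather than being passed in by hand, and memory and network utilisation would need to be checked before downsizing.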

Why manual cost reviews fail

Most teams know they should review their AWS costs regularly. They plan to do it monthly. Here's what actually happens.

Month 1: Someone does a thorough review, finds $800 in savings, cleans things up. Success.

Month 2: The person who did the review is busy with a production incident. They'll do it next week. Then the week after.

Month 3: A new engineer joins and spins up resources that don't follow the tagging convention. Nobody catches it because the review didn't happen.

Month 4: The quarterly bill comes in 25% higher than expected. Everyone is surprised.

Manual reviews fail for three reasons. First, they require dedicated time from someone with the right AWS knowledge — and that person always has competing priorities. Second, they require knowing what to check, which changes as the account evolves. Third, they don't scale. An account with 50 resources can be reviewed in an hour. An account with 500 resources across three regions can't.

The fundamental problem is that cost review is episodic while cost accumulation is continuous. Waste shows up on Tuesday, but the review happens on the last Friday of the month — if it happens at all.

What automated monitoring looks like

Effective cost monitoring runs continuously, checks everything, and tells you only what's changed. It should answer three questions every week: what new waste has appeared since last week, which existing resources are idle, oversized, or orphaned, and how much fixing each one would save per month.

The output should be a prioritised list, not a dashboard you have to remember to check. A dashboard only works if someone remembers to look at it. A report that arrives in your inbox doesn't need you to remember anything.
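A prioritised list like the one described above is straightforward to produce once findings exist: rank by estimated monthly savings, biggest first, and append a total. The resource names and dollar figures below are invented sample data for illustration.

```python
# Sample findings as a scanner might produce them (invented data).
findings = [
    {"resource": "vol-0abc (unattached gp3 volume)", "monthly_cost": 18},
    {"resource": "r6g.xlarge db-replica (idle)", "monthly_cost": 550},
    {"resource": "NAT Gateway cross-AZ traffic", "monthly_cost": 95},
]

def weekly_report(findings: list[dict]) -> str:
    """Rank findings by estimated monthly savings and format a summary."""
    ranked = sorted(findings, key=lambda f: f["monthly_cost"], reverse=True)
    lines = [
        f"{i}. {f['resource']}: ~${f['monthly_cost']}/month"
        for i, f in enumerate(ranked, start=1)
    ]
    total = sum(f["monthly_cost"] for f in findings)
    lines.append(f"Total potential savings: ~${total}/month")
    return "\n".join(lines)

print(weekly_report(findings))
```

The point of the ranking is triage: the $550 item gets fixed today, the $18 item waits, and the total gives the reader a single number to judge whether the account needs attention this week.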

How plainfra automates waste detection

plainfra connects to your AWS account with read-only access and does two things.

On-demand scanning. Ask a question like "where are we wasting money?" or "show me idle resources" and plainfra makes the relevant API calls in real-time. It checks instance utilisation, identifies unattached volumes, finds oversized databases, reviews data transfer patterns, and flags resources without tags. You get a prioritised list with estimated monthly savings for each item. One question, 30 seconds, no console clicking required.

Weekly health reports. Every week, plainfra runs a comprehensive scan of your connected accounts and delivers a summary. New waste gets flagged immediately — not at the end of the month. The report covers idle resources, cost anomalies, missing lifecycle policies, security findings, and configuration drift. If nothing has changed, it's a quick skim. If something has appeared that shouldn't be there, you know about it within a week.

This solves the three problems with manual reviews. It doesn't require someone to remember to do it. It checks everything, every time. And it scales to any number of resources and accounts.

The savings from catching one forgotten r6g.xlarge instance ($550/month) or a batch of orphaned gp3 volumes ($200/month) typically exceed the cost of the tool in the first week. But the real value is the drift prevention — the waste that never accumulates because it's caught early.

Stop reviewing your AWS costs manually. Let automation catch what humans forget.

Try plainfra free → 50K tokens, 7 days, no charge. Or see the interactive demo →.