April 4, 2026 · Tim Fraser, Cloud Operations Lead
How to Actually Reduce Your AWS Bill
Your AWS bill is higher than it should be. That's not an assumption — it's a near-universal truth. AWS's own data suggests most organisations overspend by 20-35%, and independent analyses put the figure even higher. The good news is that reducing your bill doesn't require a migration or a multi-month project. It requires knowing where to look and what to do.
Here are ten strategies, ordered roughly by impact and ease of implementation.
1. Right-size your instances
This is the single highest-ROI cost activity. Most EC2 instances and RDS databases are provisioned 2-4x larger than needed. Check CloudWatch CPU and memory utilisation over the past 30 days (CPU is reported by default; memory requires the CloudWatch agent). If peak usage stays below 40%, you can safely drop one instance size. For a team running ten m6i.xlarge instances that should be m6i.large, that's roughly $700/month saved at us-east-1 on-demand rates — dropping one size halves the ~$1,400/month those ten instances cost.
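As a sketch, the downsize check and the savings arithmetic look like this. The 40% threshold and the $0.192/hr m6i.xlarge rate are the assumptions from above; in practice the peak figures would come from CloudWatch (e.g. `GetMetricStatistics` via boto3):

```python
def is_downsize_candidate(peak_cpu_pct, peak_mem_pct, threshold=40.0):
    """True if both peak CPU and peak memory stayed under the threshold
    over the observation window (e.g. 30 days of CloudWatch data)."""
    return peak_cpu_pct < threshold and peak_mem_pct < threshold

def monthly_savings(instance_count, hourly_rate, hours=730):
    """Savings from dropping one size, which roughly halves the hourly rate."""
    return instance_count * (hourly_rate / 2) * hours

# Ten m6i.xlarge at ~$0.192/hr (us-east-1 on-demand) downsized to m6i.large:
print(round(monthly_savings(10, 0.192)))  # → 701
```

The halving assumption holds because adjacent sizes in an EC2 family double both resources and price.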
2. Buy Savings Plans for steady-state workloads
Once you've right-sized and your baseline has been stable for three months, commit to a 1-year Compute Savings Plan for your predictable workloads. A 1-year no-upfront plan saves around 30% compared to on-demand. Only commit for what you know you'll use — leave variable workloads on-demand.
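One way to size the commitment, sketched below: commit to slightly under the lowest hourly compute spend observed over the window, so variable load stays on-demand. The 90% safety margin is an illustrative choice, and the 30% discount is the rough no-upfront figure from above, not an AWS-published rate:

```python
def recommended_commitment(hourly_spend_history, safety=0.9):
    """Commit below the minimum hourly compute spend seen in the window,
    leaving headroom so you never pay for unused commitment."""
    return min(hourly_spend_history) * safety

def annual_savings(commitment_per_hour, discount=0.30, hours=8760):
    """Rough annual saving versus on-demand for the committed portion."""
    return commitment_per_hour * hours * discount

# Hourly compute spend never dropped below $10 over the window:
print(recommended_commitment([10.0, 12.5, 11.2]))  # → 9.0
```

Cost Explorer's Savings Plans recommendations do a more sophisticated version of this, but the principle is the same: commit to the floor, not the average.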
3. Use Spot Instances for fault-tolerant work
Batch processing, CI/CD runners, dev environments, data pipelines — anything that can tolerate interruption should run on Spot. Savings of 60-90% compared to on-demand are typical. Use Spot Fleet or EC2 Auto Scaling with mixed instance policies to maintain availability.
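In boto3 terms, a mixed-instance group is one parameter block passed to `create_auto_scaling_group`. A sketch of the shape (the launch template name and instance types are placeholders; diversifying across several types is what keeps the group available when one Spot pool is reclaimed):

```python
# MixedInstancesPolicy parameter shape for EC2 Auto Scaling (boto3).
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "ci-runners",  # placeholder name
            "Version": "$Latest",
        },
        # Several interchangeable types so Spot capacity can shift between pools:
        "Overrides": [
            {"InstanceType": t} for t in ("m6i.large", "m5.large", "m5a.large")
        ],
    },
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 1,                 # one on-demand instance as a floor
        "OnDemandPercentageAboveBaseCapacity": 0,  # everything above it on Spot
        "SpotAllocationStrategy": "price-capacity-optimized",
    },
}
```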
4. Tier your storage
S3 Standard costs $0.023/GB/month. S3 Infrequent Access is $0.0125/GB. Glacier Instant Retrieval is $0.004/GB. If you have 10TB of data and 80% of it hasn't been accessed in 90 days, moving it to the right tier saves roughly $150/month. Set up S3 Lifecycle policies to automate this — objects transition to cheaper tiers based on age, no manual intervention required.
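The lifecycle configuration is a small document attached to the bucket (via `put_bucket_lifecycle_configuration` in boto3, or the console). The rule below uses the 90-day threshold from above plus an illustrative 180-day second step, and the savings arithmetic uses the post's per-GB rates:

```python
# S3 lifecycle configuration: age-based transitions for the whole bucket.
lifecycle = {
    "Rules": [{
        "ID": "tier-cold-data",
        "Status": "Enabled",
        "Filter": {},  # empty filter = applies to every object
        "Transitions": [
            {"Days": 90,  "StorageClass": "STANDARD_IA"},
            {"Days": 180, "StorageClass": "GLACIER_IR"},
        ],
    }]
}

def monthly_tiering_savings(gb, from_rate, to_rate):
    return gb * (from_rate - to_rate)

# 8 TB moved from Standard ($0.023/GB) to Glacier Instant Retrieval ($0.004/GB):
print(round(monthly_tiering_savings(8000, 0.023, 0.004)))  # → 152
```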
5. Optimise data transfer
Data transfer is the most overlooked cost on AWS. Cross-AZ traffic costs $0.01/GB in each direction. Internet egress is $0.09/GB. Three things to check: Are tightly coupled services in the same AZ? Are you using VPC endpoints for S3 and DynamoDB instead of routing through a NAT Gateway? Is CloudFront serving your static assets (its egress rates are lower than direct S3 egress)?
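A back-of-envelope helper using the rates above shows why chatty cross-AZ services add up (the traffic volumes in the example are made up):

```python
CROSS_AZ_PER_GB = 0.01 * 2  # $0.01/GB billed in each direction
EGRESS_PER_GB = 0.09        # internet egress

def monthly_transfer_cost(cross_az_gb, egress_gb):
    """Rough monthly cost for cross-AZ chatter plus internet egress."""
    return cross_az_gb * CROSS_AZ_PER_GB + egress_gb * EGRESS_PER_GB

# 5 TB/month between two chatty services in different AZs, plus 1 TB egress:
print(round(monthly_transfer_cost(5000, 1000)))  # → 190
```

Moving those two services into the same AZ eliminates the first term entirely.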
6. Clean up unused resources
Unattached EBS volumes, unused Elastic IPs ($3.65/month each), idle load balancers ($16/month minimum), forgotten snapshots, orphaned ENIs. Individually small, but a typical account accumulates $200-500/month in zombie resources. The problem is that nobody remembers creating them, so nobody remembers to delete them.
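The filtering step is simple once you have the API responses: an unattached EBS volume has `State == "available"`, and an unassociated Elastic IP has no `AssociationId`. A sketch using the result shapes of EC2's `describe_volumes` and `describe_addresses` (in practice the input lists would come from boto3 calls):

```python
def find_zombies(volumes, addresses):
    """Pick out unattached EBS volumes and unassociated Elastic IPs
    from describe_volumes / describe_addresses result lists."""
    unattached = [v["VolumeId"] for v in volumes if v["State"] == "available"]
    idle_ips = [a["AllocationId"] for a in addresses if "AssociationId" not in a]
    return unattached, idle_ips

# Minimal stand-ins for the real API responses:
volumes = [{"VolumeId": "vol-0abc", "State": "available"},
           {"VolumeId": "vol-0def", "State": "in-use"}]
addresses = [{"AllocationId": "eipalloc-123"}]  # no AssociationId → unused
print(find_zombies(volumes, addresses))  # → (['vol-0abc'], ['eipalloc-123'])
```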
7. Schedule non-production environments
Your dev and staging environments don't need to run 24/7. A Lambda function that stops instances at 7pm and starts them at 7am, Monday to Friday, cuts compute costs for those environments by about 65% — they run 60 of 168 hours a week. For RDS, use the stop/start feature (instances auto-restart after 7 days, so you'll need automation to re-stop them).
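The scheduling decision itself is tiny; the Lambda handler just compares the current time against the work window and calls EC2's `stop_instances` or `start_instances` accordingly. A sketch of the decision logic (the 07:00-19:00 weekday window matches the example above; timezone handling is left out for brevity):

```python
from datetime import datetime

def desired_state(now):
    """'running' 07:00-19:00 Monday-Friday, 'stopped' otherwise."""
    if now.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return "stopped"
    return "running" if 7 <= now.hour < 19 else "stopped"

# Inside the handler you would then call (sketch, not executed here):
#   ec2.stop_instances(InstanceIds=ids)  or  ec2.start_instances(InstanceIds=ids)
print(desired_state(datetime(2026, 4, 6, 12)))  # Monday noon → running
```

Trigger it hourly from an EventBridge schedule and the same function handles both the evening stop and the morning start.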
8. Use Reserved Instances for RDS
RDS reserved instances save 30-40% on 1-year terms. Unlike EC2, where Savings Plans are more flexible, RDS reservations are still the primary discount mechanism. If you've had the same production database instance type for three months, buy the reservation.
9. Implement tag-based cost allocation
You can't reduce what you can't attribute. Apply consistent tags — at minimum Environment, Team, and Project — to every resource. Enable cost allocation tags in Billing, then use Cost Explorer to break down spend by team or project. This immediately reveals which teams are overspending and creates accountability.
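A small audit helper makes the tagging policy enforceable. Tags come back from the AWS APIs as lists of `{"Key": ..., "Value": ...}` pairs; the sketch below checks each resource against the minimum set named above:

```python
REQUIRED_TAGS = {"Environment", "Team", "Project"}

def missing_tags(resource_tags):
    """Return required tag keys absent from an API-style tag list."""
    present = {t["Key"] for t in resource_tags}
    return sorted(REQUIRED_TAGS - present)

print(missing_tags([{"Key": "Environment", "Value": "prod"}]))  # → ['Project', 'Team']
```

Run it over the output of Resource Groups Tagging API's `get_resources` and you have a weekly list of untagged resources to chase down.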
10. Run quarterly cost reviews
All of the above degrades over time. New resources get created without tags. Someone spins up a large instance "temporarily." Lifecycle policies don't get applied to new buckets. A quarterly review — even just 30 minutes looking at Cost Explorer trends and checking for unused resources — prevents drift from erasing your savings.
Why most teams don't do this
Every strategy above is straightforward in isolation. The problem is consistency. Right-sizing requires checking CloudWatch metrics across dozens of instances. Cleanup requires knowing what's attached to what. Quarterly reviews require someone to own it, and that person usually has higher-priority work.
This is exactly what plainfra automates. Connect your AWS account (read-only access), and ask questions like "what are our biggest cost saving opportunities?" or "show me unused resources." plainfra makes the API calls, cross-references the data, and gives you a prioritised list with dollar estimates.
More importantly, plainfra's weekly health reports run all of these checks automatically. Every Monday, you get a summary of what's changed: new idle resources, instances that could be downsized, storage without lifecycle policies, unexpected cost increases. Cost drift gets caught in the first week, not the next quarter.
One question replaces hours of console clicking. Weekly reports replace the quarterly review you keep postponing.
Try plainfra free → 50K tokens, 7 days, no charge. Or see the interactive demo →.