Pulse x reMKTR

Last Week in AWS: Nov 10 Roundup Before re:Invent

Written by Jacob Heinz | Nov 10, 2025 8:47:24 PM

AWS re:Invent 2025 is three weeks away, and your feed will explode. Last year, 60,000 people showed up in Vegas for it. This year, expect more noise, bigger claims, and Slack stuffed with “did you see this?” links.

Here’s the move right now: don’t try to catch everything. Catch what matters for your team and your goals today. You’re not paid to read launches; you’re paid to stack advantages. That means focusing on updates that cut cost, reduce toil, and lower stack risk.

In the last week alone, AWS dropped dozens of blog posts, podcasts, and “What’s New” entries. Instead of scrolling till your eyeballs melt, use this roundup for signal over noise. You’ll spot the real action, especially around EC2, developer tools, and re:Invent prep, without losing your Monday.

And if you’re not subscribed to the AWS News Blog or Podcasts yet, consider this your nudge. Your future self, and your on-call rotation, will seriously thank you.

Quick promise here: this isn’t a dump; it’s a decision aid for you. In minutes, you’ll know what to adopt, what to trial, and what to ignore until after Vegas.

TL;DR

  • Focus on launches that lower cost, reduce toil, or unlock speed.
  • For AWS EC2 news, benchmark before migrating—price-performance claims vary.
  • Subscribe to AWS blog posts via RSS and follow the AWS developer podcast.
  • Track AWS issue news from the official Health Dashboard, not Twitter.
  • Use last week’s updates to plan your re:Invent session list now.

Signal From Last Week

The 80/20 of AWS updates

AWS ships dozens of updates in a normal week. Great for ideas, terrible for calendars. Your filter is simple: prioritize launches that save money, lower risk, or speed delivery. Everything else? Park it for the weekend without guilt.

A practical flow: scan “What’s New” for compute, storage, and networking first. Those parts hit your bill and blast radius the most. Then check items that affect your developer inner loop: SDKs, CLIs, IaC, and local tools. Finally, skim AI/ML for features that lower inference cost or simplify guardrails.

Apply this filter with a few concrete examples:

  • Compute: new EC2 families or price-performance claims are worth a look—but benchmark. Also flag Spot capacity or placement tweaks if you run stateless fleets; Spot can be huge with safe fallbacks.
  • Storage: anything around EBS gp3, S3 Intelligent-Tiering, or lifecycle changes. These often turn directly into lower $/GB or less operational fuss.
  • Networking: changes that reduce data processing or egress cost, simplify connectivity, or improve resiliency are high-signal.
  • Dev tooling: CDK, CloudFormation, SAM, SDKs, or CLI improvements that shave minutes off dev or CI are sneaky compounding wins.

To turn “news” into action, give each item a job:

  • Adopt = implement this quarter; it has clear ROI or real risk reduction.
  • Trial = run a time-boxed spike with success criteria and a rollback plan.
  • Watch = note it, tag it, and revisit post–re:Invent when the dust settles.

If your calendar is ruthless, block 30 minutes weekly just for this triage. Speed matters, but so does a consistent, boring process that actually sticks.

Where to look

  • Start with AWS What’s New: it’s the canonical source for launches and expansions.
  • Cross-check with the AWS News Blog for deeper dives and examples.
  • If you want color commentary, read the independent “Last Week in AWS” newsletter.

Ignore duplicative posts that are regional expansions without functionality changes. Good to know, not urgent. Defer niche service deep-dives unless they match your roadmap.

Pro tip: create a private doc tagged “Adopt/Trial/Watch.” If an update lands in “Adopt,” it gets a ticket. “Trial” gets a spike. “Watch” gets a note and a reminder after re:Invent.

For stronger signal, layer a few filters into your reader (a small sketch follows the list):

  • Tag by service (EC2, S3, EKS, Lambda) and by topic (cost, resilience, performance).
  • Tag by region if you have strict data residency or latency requirements.
  • Create a “cost” smart folder so posts on Savings Plans, Spot, gp3, Intelligent-Tiering, or VPC endpoints bubble up.
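
Here’s one way to wire those filters, as a minimal Python sketch. It assumes the third-party feedparser package (pip install feedparser) and that the What’s New RSS URL below is still current; swap in whatever feed URL your reader exposes.

```python
# Filter the AWS "What's New" feed down to the services and cost topics you own.
import feedparser

FEED_URL = "https://aws.amazon.com/about-aws/whats-new/recent/feed/"  # assumed URL

SERVICES = ("EC2", "S3", "EKS", "Lambda")
COST_TERMS = ("savings plans", "spot", "gp3", "intelligent-tiering", "vpc endpoint")

def triage(feed_url: str) -> None:
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        title = entry.title
        tags = [s for s in SERVICES if s in title]
        if any(term in title.lower() for term in COST_TERMS):
            tags.append("cost")
        if tags:  # only surface posts that match your filters
            print(f"[{', '.join(tags)}] {title}\n  {entry.link}")

if __name__ == "__main__":
    triage(FEED_URL)
```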

Links to keep open:

  • AWS What’s New for the canonical launch feed.
  • The AWS News Blog for deeper dives and examples.
  • The AWS Health Dashboard for live service status.

EC2 News You Can Use

Read the headline, then the fine print

When EC2 headlines promise better price-performance, don’t jump to migration. Build a quick benchmark plan before you move anything. Check your workload profile: CPU-bound, memory-bound, IO-bound, or network-bound? A new family may boost one area while hurting another. Check processor type, memory bandwidth, enhanced networking, and EBS throughput limits.

For credible updates, track EC2 posts on the AWS News Blog and service page. Use those as your “source of truth” before building any business case.

Add a lightweight benchmark pack so you compare apples to apples (a cost-lens example follows the list):

  • Metrics: p95/p99 latency, throughput, CPU utilization, memory pressure, EBS IO latency, packets per second (PPS), and tail latencies tied to your SLOs.
  • Workload slices: run the same replay or synthetic load on old and new families. Keep the same AZ, VPC, AMIs, and kernels so the comparison is fair.
  • Duration: run long enough to see steady-state and burst behavior. Short tests can hide throttling or IO headroom issues.
  • Cost lens: measure $/request or $/unit of work, not just raw speed. Cheaper per-hour isn’t always cheaper per-output.
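
To make the cost lens concrete, here’s a toy calculation comparing $/million requests across two families. Every number below is a hypothetical placeholder; plug in your measured throughput and the On-Demand prices for your region.

```python
# Compare $/million requests, not $/hour. All numbers are hypothetical
# placeholders; substitute measured throughput and your region's prices.
def cost_per_million_requests(hourly_price_usd: float, sustained_rps: float) -> float:
    requests_per_hour = sustained_rps * 3600
    return hourly_price_usd / requests_per_hour * 1_000_000

# Hypothetical: the newer family costs more per hour but does more work.
old = cost_per_million_requests(hourly_price_usd=0.17, sustained_rps=900)
new = cost_per_million_requests(hourly_price_usd=0.20, sustained_rps=1400)
print(f"old: ${old:.3f}/M req  new: ${new:.3f}/M req")
# old: $0.052/M req  new: $0.040/M req -> pricier per hour, cheaper per output
```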

If you’re exploring Arm-based options like Graviton, confirm runtime compatibility first. Many teams have a smooth path with managed runtimes, containers, and languages like Java, Go, Python, and Node.js. For compiled binaries or native dependencies, scan early with the Porting Advisor for Graviton. Pin container base images per architecture to avoid deploy surprises.

Resilience first, savings second

Werner Vogels says, “Everything fails, all the time.” He’s right, sadly. Don’t let your EC2 change be the reason your weekend fails, too. Before chasing savings, confirm autoscaling is sane, health checks work, and you cover multiple AZs. Then layer savings moves: rightsizing, family modernization, and Savings Plans or RIs after stability.

Smart sequence:

1) Benchmark on a small slice.

2) Validate scaling behavior under load.

3) Roll out via canary or blue/green.

4) Lock pricing only after you feel confident.

This is how you avoid “we migrated for savings and paid in incidents.”

Make this sequence concrete:

  • Architecture: run at least two AZs behind an ALB. Confirm health checks fail fast and recover fast. Test terminating nodes mid-traffic.
  • Autoscaling: verify scale-out and scale-in with load tests. Watch warm-up, connection draining, and target tracking accuracy in real time.
  • Storage: if you’re on EBS gp2, evaluate gp3 for lower $/GB and provisioned IOPS. Separate IO needs from raw storage capacity (see the sketch after this list).
  • Networking: check ENA driver versions in your AMIs. New families often assume enhanced networking for the promised throughput.
  • Pricing: run pilots On-Demand first. Once stable, use Savings Plans for steady loads and Spot for bursts. Keep On-Demand headroom for failover.
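
For the gp2-to-gp3 bullet, here’s a minimal boto3 sketch of the migration step. The volume ID is a placeholder, and the IOPS/throughput values are gp3 baselines; size them from your benchmarks and test on a non-critical volume first.

```python
# Migrate an EBS volume from gp2 to gp3 in place; runs in the background.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    VolumeType="gp3",
    Iops=3000,        # gp3 baseline; raise only if your benchmarks need it
    Throughput=125,   # MiB/s, gp3 baseline
)
print(resp["VolumeModification"]["ModificationState"])

# Poll to confirm the modification completed before trusting new IO limits.
mods = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(mods["VolumesModifications"][0]["ModificationState"])
```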

A few quick wins many teams realize on EC2:

  • Build images for x86_64 and arm64, then let your orchestrator schedule. This unlocks modern families without a risky big bang cutover.
  • Keep app configs external with AppConfig or similar. Tweak timeouts and retries during a migration without redeploying the app.
  • If you use containers, set resource requests and limits accurately. Rightsizing at pod level compounds with instance rightsizing below it.

Podcasts, Feeds, and Subscriptions

The shows worth queueing

If you prefer audio over long posts, keep these on rotation:

  • The Official AWS Podcast for a broad sweep of changes and practical tips.
  • The shows on the AWS Podcasts hub aimed at developers, data engineers, and builders.

These give quick context to decide if a launch is worth your time. Ideal for commutes or a walk between meetings, honestly.

Audio hack: skim the episode notes first. If topics touch your stack, queue it. If it’s outside your lane this quarter, skip guilt-free.

Subscribe like a pro

Get updates to come to you, not the other way around:

  • Add the AWS What’s New and AWS News Blog RSS feeds to your reader.
  • Pipe the feeds into a shared Slack channel to cut context switching.

Set filters for “EC2,” “Lambda,” “EKS,” and “S3” in your reader. If your org relies on a region, tag mentions to catch expansions that matter.

Workflow tip: drop links into a shared Slack channel with three emojis. Adopt, trial, watch. Team votes, you triage. Alignment without another meeting.

Bonus: if your team lives in calendars, schedule a 25-minute Monday review. Keep it focused: three decisions—adopt, test, or park till later.
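
If you want to automate the drop-links-and-vote step, here’s a minimal sketch using a Slack incoming webhook. The webhook URL is a placeholder; create one for your channel and keep it in a secret store, not in code.

```python
# Post a launch link to Slack with the three triage emojis as a voting prompt.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def post_for_triage(title: str, link: str) -> None:
    text = (
        f"*{title}*\n{link}\n"
        "React to vote: :rocket: adopt | :test_tube: trial | :eyes: watch"
    )
    resp = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

post_for_triage(
    "Example: new EC2 family announced",
    "https://aws.amazon.com/about-aws/whats-new/",
)
```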

Add two small guardrails so this stays useful:

  • Define a DRI for each area: compute, storage, networking, and platform. The DRI proposes adopt/trial/watch each week for quick calls.
  • Close the loop: when a trial ends, post results and a one-paragraph summary. Link any runbook updates so nothing gets lost.

re:Invent game plan

Book early, move fast

re:Invent sessions fill up fast, like absurdly fast. Use last week’s updates to pick deep dives, chalk talks, or workshops. If an EC2 feature could drop compute cost, prioritize roadmap sessions for that family. If you’re heavy on data, filter for cost controls, governance, and performance tuning.

Now, layer logistics:

  • First pass: pick sessions tied to your top three Q1 outcomes. Cost down, risk down, or speed up.
  • Mix formats: grab one workshop, one chalk talk, and one deep dive. That combo yields real takeaways you can ship.
  • Leave air: block time for the expo, expert booths, and hallway track. The serendipity ROI at re:Invent is very real.

Build an agenda with outcomes

Don’t collect sessions like Pokémon cards. Build around outcomes you actually need. “Cut storage costs 20%,” “reduce cold starts 50%,” “onboard devs in under one day.” Map sessions, labs, and booths to those outcomes. You’ll leave with decisions, not just swag.

If marketing analytics is your remit, add AMC and retail media measurement sessions. Consider streamlining that stack with AMC Cloud to automate queries and reporting.

If you’re not traveling, no problem at all. Bookmark the keynotes and session replays once posted. Set calendar holds now, so you don’t catch highlights out of context.

To make the trip pay for itself, prepare two artifacts now:

  • A one-pager per outcome with current pain, target metric, and unknowns to validate.
  • A question bank for service teams: migration steps, pricing, regional coverage, quotas, and common pitfalls.

Prep questions now

Show up with questions ready. Ask service teams about migration guidance, pricing gotchas, and region coverage. If last week’s launch raised a “huh?” moment, bring it to a chalk talk. You’ll get answers and sanity-check assumptions before committing roadmap time.

Examples worth asking:

  • For any “up to X%” claim: what workloads and configs were used in tests? What regressions should we expect in edge cases?
  • For storage tiers: what are retrieval charges, minimum object sizes, or monitoring costs? What surprises teams most often?
  • For networking: how do we measure end-to-end latency impact simply? Which quotas should we pre-raise now?

Stay ahead of issues

Use primary sources, not rumors

When something seems off, go to the AWS Health Dashboard first. It’s the canonical status for service events, region advisories, and impact scopes. Bookmark it, and if you’re on-call, pin it so it’s right there.

Do not chase screenshots on social while your incident channel pages. Latency graphs beat hot takes every single time.

Your operational playbook

  • Confirm: check CloudWatch metrics and logs to verify impact. Don’t assume correlation equals causation (a quick sketch follows this list).
  • Communicate: post a crisp internal update with knowns, unknowns, and the next checkpoint.
  • Mitigate: use feature flags, fail over to multi-AZ or multi-region, and scale buffers.
  • Track: subscribe to the Health event for your account to get updates automatically.
  • Review: once stable, run a blameless retro and document durable fixes.
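
For the “Confirm” step, a minimal boto3 sketch that pulls ALB 5XX counts from CloudWatch before you declare impact. The load balancer dimension value is a placeholder; copy yours from the console or describe-load-balancers.

```python
# Pull the last 30 minutes of target 5XX counts for an ALB, minute by minute.
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

resp = cw.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    StartTime=now - timedelta(minutes=30),
    EndTime=now,
    Period=60,
    Statistics=["Sum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].isoformat(), int(point["Sum"]))
# Flat zeros? The screenshot on social media probably isn't your incident.
```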

Tie this back to weekly updates. If an AWS post introduced a resilience feature or limit bump, add it to your backlog. Today’s “nice-to-have” becomes tomorrow’s “thank goodness we did that.” That’s how you get fewer 2 a.m. incidents.

To make this muscle memory, do two things now:

  • Wire alerts: use CloudWatch alarms on golden signals and set service dashboards. Map alarms to runbooks so action is obvious.
  • Automate health intake: subscribe to Health events and route them to Slack or email. Label by region and service for faster triage (sketch below).
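
A minimal sketch of that health intake, assuming an existing SNS topic that fans out to Slack or email. The topic ARN is a placeholder, and the topic’s access policy must allow events.amazonaws.com to publish.

```python
# Create an EventBridge rule that forwards all AWS Health events to SNS.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="aws-health-to-sns",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
)
events.put_targets(
    Rule="aws-health-to-sns",
    Targets=[{
        "Id": "health-sns",
        "Arn": "arn:aws:sns:us-east-1:123456789012:aws-health-alerts",  # placeholder
    }],
)
```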

Fast recap

  • Prioritize updates that cut cost, remove toil, or add speed.
  • For AWS EC2 news, benchmark first; then modernize and commit pricing.
  • Subscribe to AWS blog posts and the official podcasts to stay sane.
  • Turn last week’s launches into re:Invent session picks and questions.
  • Track incidents via the AWS Health Dashboard—not rumor mills.

FAQ

Where to find updates

There isn’t one official weekly digest, but What’s New is the primary source. Pair it with the AWS News Blog for deeper explanations and examples.

Handy tip: add the What’s New RSS to your reader with filters. Flag posts with service names you own, and turn the firehose into a focused drip.

Subscribe to AWS blog

Use the AWS News Blog RSS feed in your reader of choice. If your team uses Slack, pipe the RSS into a channel to reduce context switching.

Pro move: label each post—adopt, trial, or watch—and add a one-liner. Note why it matters for your stack, even if it’s a small reason.

Best way to track EC2

Track the EC2 category on the AWS News Blog and the EC2 service page. For extra context and critique, add the independent Last Week in AWS newsletter.

When a family catches your eye, create a tiny benchmark ticket. Add start and stop dates, metrics, and a go/no-go decision rule.
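
One way to encode the go/no-go rule so it’s written down before the test starts. The thresholds here are illustrative; set yours from your SLOs and cost targets, not from the results.

```python
# A pre-committed decision rule: small p99 regression allowed, real savings required.
def go_no_go(p99_ms: float, cost_per_m_req: float,
             baseline_p99_ms: float, baseline_cost: float) -> str:
    latency_ok = p99_ms <= baseline_p99_ms * 1.05    # allow <=5% p99 regression
    cost_ok = cost_per_m_req <= baseline_cost * 0.90  # require >=10% savings
    return "GO" if (latency_ok and cost_ok) else "NO-GO"

print(go_no_go(p99_ms=42.0, cost_per_m_req=0.040,
               baseline_p99_ms=41.0, baseline_cost=0.052))  # GO
```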

AWS podcast to follow

Start with the Official AWS Podcast for a broad sweep of changes and tips. The AWS Podcasts hub lists shows tailored to developers, data engineers, and builders.

Speed-listen news sections at 1.25x, then slow down for deep dives. Focus on anything tied to your Q1 outcomes for best return.

Check AWS issue news

Use the AWS Health Dashboard for real-time status and impact information. It’s scoped to your accounts and regions, so it’s far more relevant than generic status chatter.

Bookmark it, then check your CloudWatch dashboards first. Validate real impact to your workloads before changing anything.

Prep for re:Invent

Filter last week’s launches to three outcomes you care about most. Cost down, risk down, or speed up. Then pick sessions that help ship those outcomes in Q1.

Add a post-event cadence: schedule a 45-minute readout the week after. Each attendee shares one adopt, one trial, and one watch with owners and dates.

Weekly AWS workflow

  • Scan AWS What’s New for compute, storage, and networking first.
  • Skim the AWS News Blog for deeper context on two to three items.
  • Check EC2 category for instance updates; earmark benchmarks with simple tickets.
  • Pipe AWS blog RSS into Slack; tag items adopt, trial, or watch.
  • Review the Official AWS Podcast notes for anything you might have missed.
  • Create tickets for “Adopt” items; set reminders for “Watch” after re:Invent.

Add a little structure so this sticks:

  • Time-box: 10 minutes scanning, 10 minutes discussion, 10 minutes ticketing.
  • Definition of done: each Adopt gets an owner, metric, and due date. Each Trial gets a hypothesis and a rollback plan.
  • Close the loop: post results in the same Slack thread. You build lightweight docs without extra meetings.

You made it. The punchline is simple: winners don’t read more AWS updates; they apply them. Your job isn’t to memorize launches this week; it’s to turn two or three into cheaper compute, fewer pages, and faster shipping.

If you’re heading to Vegas, start booking and build your questions list now. If you’re not, line up the keynotes and replays in your calendar. Either way, use last week’s news to shape next quarter’s roadmap.

Want to see how teams turn plans into outcomes? Browse our Case Studies.

“In 2025, the smartest AWS teams didn’t track more launches—they turned a handful into fewer incidents, lower bills, and faster releases.”
