Pulse x reMKTR

Project Rainier: Inside AWS’s Indiana Supercluster For Anthropic

Written by Jacob Heinz | Nov 3, 2025 8:24:10 PM

AWS just flipped on one of the world’s biggest AI clusters. And it’s not where you think. Project Rainier lives in Indiana, not Silicon Valley. It’s packing nearly half a million Trainium2 chips. That’s 5x the compute Anthropic used for earlier Claude models. Translation: faster training, smarter outputs, cheaper inference when you scale.

Here’s the kicker. AWS built and shipped Rainier in under a year. For a cluster this size, that’s Formula 1 in a world of minivans. Think custom interconnects, tuned networking, and a campus made for brutal, hot workloads. It runs 24/7, rain or shine, without flinching.

Wondering if compute is the new oil? Rainier kinda answers that, and hands you a map. Whether you train frontier models or trim latency in prod, this changes your playbook. It also opens fresh lanes for you to grab.

Zoom out for a sec. This isn’t just another cloud press note. It signals a shift from “Can we get GPUs?” to “How fast can we iterate without burning cash?” When the fabric gets faster and cheaper, slides don’t win. Reps win. Project Rainier makes those reps real.

TL;DR

  • Project Rainier is an AWS supercluster in Indiana with nearly 500k Trainium2 chips.
  • Anthropic is training and deploying Claude on Rainier; compute is 5x prior levels.
  • The campus was built for power, cooling, and low-latency networking.
  • Expect lower cost-per-token and faster iteration for training and inference.
  • By the end of 2025, Claude is expected to run across 1M+ Trainium2 chips for training and serving.

Inside Project Rainier

Purpose-built AI supercluster

Project Rainier is AWS’s fully live AI compute cluster on Trainium2. That’s their second-gen accelerator made for machine learning. The headline stat is loud: nearly half a million chips on day one. Anthropic is already using it to build, tune, and serve Claude at frontier scale. If you’ve tracked "project rainier amazon" updates, this is where the slideware turns into concrete. It’s real capacity you can plan around.

In plain English, this is a model factory. Trainium-family chips are built for the heavy math behind transformers. The surrounding stack—compilers, libraries, placement, and monitoring—turns raw silicon into steady throughput. That means fewer timeouts mid-epoch and more stable runs at bigger batch sizes. It helps with wider context windows or mixing modalities too.

A supercluster isn’t just lots of servers in a room. It’s a tuned system where compute, memory bandwidth, storage IOPS, and network topology match up. No single piece should choke the rest. When you ask for thousands of accelerators for one job, the scheduler maps them onto the right topology. That keeps collective ops snappy and keeps your step time down. Done wrong, you stare at retries and deadlocks all night.

Hardware and network

At this scale, the magic isn’t just chips. It’s the glue. Rainier leans on custom interconnects and tuned networking to shove huge gradients and checkpoints across thousands of nodes. It does that without melting your latency budget. That hits your step time, optimizer stability, and throughput per dollar. For multimodal training, topology can mean “it converged” versus “we ran out of patience.”

Think about a global batch across thousands of accelerators. You need collective comms that know the topology and handle stragglers. You want mixed-precision paths tuned for speed and stability. You also need fast storage to stream terabytes per minute cleanly. Checkpointing can’t stall the full job, or you lose days to drift. These parts aren’t glamorous, but they decide whether a run finishes in six days or drags into week three.
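
Here’s a minimal sketch of one of those unglamorous parts: writing checkpoints on a background thread so the training loop keeps stepping instead of stalling on storage. It’s plain PyTorch, not any AWS-specific API, and `save_checkpoint_async` is an illustrative name, not a real library call.

```python
import copy
import threading
import torch

def save_checkpoint_async(model, optimizer, step, path):
    """Snapshot training state, then write it to storage off the critical path.

    Minimal sketch: a real large-scale job would shard the checkpoint across
    ranks and stream it to object storage instead of one torch.save call.
    """
    # Copy tensors to CPU while the step is paused for a moment.
    snapshot = {
        "step": step,
        "model": {k: v.detach().to("cpu", copy=True) for k, v in model.state_dict().items()},
        "optimizer": copy.deepcopy(optimizer.state_dict()),
    }

    def _write():
        torch.save(snapshot, path)  # slow I/O happens in the background

    writer = threading.Thread(target=_write, daemon=True)
    writer.start()
    return writer  # join() before overwriting the same path or exiting
```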

Software matters as much as metal. Compiler tricks like kernel fusion, fewer memory moves, and smarter tensor placement can unlock big gains. Double-digit gains sometimes. The more auto-tune the platform brings, the less you rewrite loops. Then you can spend time testing ideas, not plumbing.
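
A tiny illustration of what “fewer memory moves” means. It uses stock PyTorch’s torch.compile as a stand-in; on Trainium the Neuron compiler plays the analogous role, and the function below is just a made-up example.

```python
import torch

def bias_gelu_dropout(x, bias, p=0.1):
    # Run eagerly, these elementwise ops each read and write full tensors.
    y = torch.nn.functional.gelu(x + bias)
    return torch.nn.functional.dropout(y, p=p, training=True)

# A graph compiler can fuse the chain into one kernel, cutting trips to memory.
fused = torch.compile(bias_gelu_dropout)

x = torch.randn(4096, 4096)
bias = torch.randn(4096)
out = fused(x, bias)  # same math, fewer memory round trips after compilation
```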

Why scale matters

The scaling story hasn’t changed. More compute, more capability. Research keeps showing steady gains from bigger models trained longer on more data. As Kaplan et al. put it, the math is predictable. Add compute, and test loss drops on a power-law curve.

“Test loss scales as a power-law with model size, dataset size, and compute.” — Kaplan et al., 2020
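
If you want to feel that power law numerically, here’s a back-of-envelope sketch. The exponent and constant are the approximate compute fit reported in Kaplan et al. (2020); treat them as illustrative, not a forecast for any particular model.

```python
# Approximate compute fit from Kaplan et al. (2020): L(C) ~ (C_c / C) ** alpha,
# with C measured in PF-days. Constants are approximate and illustrative.
ALPHA_C = 0.050
C_C = 3.1e8

def predicted_loss(pf_days: float) -> float:
    return (C_C / pf_days) ** ALPHA_C

for c in (1e3, 1e4, 1e5):
    print(f"{c:.0e} PF-days -> predicted loss ~ {predicted_loss(c):.2f}")

# 10x more compute (1e4 -> 1e5 PF-days) buys roughly an 11% drop in loss:
print(f"{1 - predicted_loss(1e5) / predicted_loss(1e4):.1%}")
```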

Scale also acts like insurance for research. With more compute, you can run real ablations and compare schedules for real. You can test longer contexts and dig into safety at depth instead of guessing. You get to answer “What if?” with data, not vibes. Breakthroughs stack from many small, fast tests in a row.

And don’t forget inference. With more serving capacity, teams can ship bigger context windows and richer agents. You can add more tool calls without throttling users or blowing up latency. The same scale that trains frontier models makes day-two operations feel chill.

Why Indiana

Rainier location

Location matters because power, water, grid links, and land set cost and reliability. Project Rainier sits in Indiana, where AWS opened an $11B data center campus built for dense AI. The Midwest math is simple: proximity to power, cheaper land, and room to grow without chaos.

Indiana isn’t a postcard flex. It’s a bet on fundamentals. You can’t run a supercluster without steady electricity and tight utility partners. You also need land for future phases. Multiple fiber routes help too, so traffic isn’t stuck in one chokepoint. Fewer constraints mean fewer nasty surprises.

For builders, that translates to practical wins. You get more predictable capacity during peak demand, like model releases. Your jobs are more likely to start on time. Pricing is less likely to spike every time the market heats up.

The gigawatts question

Hyperscale AI doesn’t sip electrons. It gulps them. AWS hasn’t tied a specific gigawatt number to Rainier yet. Big AI campuses usually plan for hundreds of megawatts, sometimes more. They leave room to scale past a gigawatt over years. What matters for you is sustained capacity at peak. You also want a reliable grid and power pricing that doesn’t wreck your curve.

If you run cost models, you already know power dominates. Efficiency pays off fast. Better PUE targets and liquid cooling cut energy per token. Smarter workload placement helps too. Even small gains turn into real dollars when you serve billions of tokens daily.
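
Here’s a toy version of that cost model. Every number is made up purely to show the shape of the math: power draw, PUE, energy price, and throughput multiply straight into cost per token.

```python
# Toy cost-per-token model. Every number below is illustrative, not a quote
# for any real chip, site, or contract.
chip_power_w     = 500       # accelerator draw at load (illustrative)
pue              = 1.15      # facility overhead multiplier (illustrative)
energy_price_kwh = 0.06      # $/kWh (illustrative)
tokens_per_sec   = 2_000     # serving throughput per accelerator (illustrative)

kwh_per_hour  = chip_power_w * pue / 1000
energy_cost_h = kwh_per_hour * energy_price_kwh
tokens_per_h  = tokens_per_sec * 3600

energy_cost_per_million_tokens = energy_cost_h / tokens_per_h * 1_000_000
print(f"~${energy_cost_per_million_tokens:.4f} energy cost per 1M tokens")

# Shave 10% off PUE or boost throughput 10% and the savings scale with
# every token you serve - which is the whole point at billions per day.
```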

Cooling and network egress

Cooling isn’t just HVAC. It’s a design constraint. Expect a mix of advanced air and liquid cooling, with heat reuse in some spots. Tight PUE targets keep cost-per-token competitive. On the network side, Indiana gives room for serious fiber on campus. That means low intra-campus latency and strong inter-region throughput. You get tighter tail latencies and saner egress patterns. Training stays predictable when everyone else is queueing.

“Compute without power is a press release; compute with power is a product.” — A data center architect explaining AI site selection

Water stewardship and sustainability targets matter more as AI scales. Regions with predictable climate and responsible water sources lower risk. Access to renewables helps reduce environmental and regulatory headaches. If you’ve got ESG goals, you can point to efficient sites and cleaner mixes. That makes procurement and compliance smoother.

Anthropic and AWS

Faster iteration

Anthropic is actively training and deploying Claude on Rainier today. The big shift is iteration speed. Run more experiments faster, and you ship safer, smarter models. That 5x compute jump versus prior runs changes your weekly cadence. More runs per week and broader sweeps fit inside real schedules.

In practice, time between model variants gets shorter, sometimes much shorter. Feedback loops from evals to code get tighter. Triage is faster when a run misbehaves, which it will sometimes. Instead of waiting days to confirm a hunch, you can validate in hours. The loop compresses, and team learning compounds.

Training and serving scale

By the end of 2025, Claude should be supported on 1M+ Trainium2 chips. That covers both training and inference. It’s not just a flex. It’s reliability. Serving at that scale lets you push bigger contexts and heavier tool use. You can run richer multimodal pipelines while keeping p95 latency calm. You can also split traffic across regions and AZs without sweating spikes.

Under the hood, think tensor parallelism during training and pipeline parallelism across stages. For inference, smart caching is key. KV caches, retrieval results, and tool outputs should live close to the action. With a wide footprint, you place caches near users and route smartly. Users don’t feel the complexity. They just feel speed.
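
To ground the caching point, here’s a minimal sketch of a prefix-keyed cache of the kind you might put in front of a serving fleet. It’s a conceptual stand-in, not a real engine’s KV cache, and the class and method names are made up.

```python
import hashlib
from collections import OrderedDict

class PrefixCache:
    """Tiny LRU cache keyed by a hash of the prompt prefix.

    Conceptual only: a production KV cache lives inside the serving engine and
    stores attention keys/values, not final strings. The win is the same -
    repeated prefixes (system prompts, tool schemas) skip recomputation.
    """
    def __init__(self, max_entries=10_000):
        self._store = OrderedDict()
        self._max = max_entries

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def get(self, prefix: str):
        k = self._key(prefix)
        if k in self._store:
            self._store.move_to_end(k)   # mark as recently used
            return self._store[k]
        return None

    def put(self, prefix: str, value) -> None:
        k = self._key(prefix)
        self._store[k] = value
        self._store.move_to_end(k)
        if len(self._store) > self._max:
            self._store.popitem(last=False)  # evict least recently used
```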

Cost per token and safety

Hardware choice hits cost-per-token straight on. Trainium2 targets ML economics with better throughput per watt. Performance-per-dollar looks strong for the right schedules. GPU pricing can sting on long runs. For a safety-first shop like Anthropic, cheaper compute means more red teaming and eval coverage. You can test stress cases without rationing.

“Compute, data, and algorithms trade off—extra compute lets you explore safer model configurations without giving up capability.” — Interpreting scaling literature (Kaplan et al.; Hoffmann et al.)

Safety isn’t a last step. With abundant compute, you front-load interpretability checks and alternative fine-tunes. You run long-context stress tests to catch odd edge cases. The result is a calmer launch day with fewer fire drills.

Builder playbook

For startups

  • Start with inference economics. Target Trainium-family endpoints via managed services when you can.
  • Pre-train less, fine-tune more. Use strong base models like Claude with great data.
  • Build for spiky demand. Design autoscaling for bursts. Rainier-class capacity compresses queues.

Also, measure everything. Track effective tokens per dollar and cache hit rates closely. Watch user-visible latency, especially p95 and p99. Small choices swing costs more than you think. Prompt templates, chunking, and retry logic all add up. When analytics tells the truth, you can move fast without bill shock.
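
A minimal sketch of what “measure everything” can look like, assuming you already log per-request tokens, spend, cache hits, and latency. The field names are hypothetical; map them to whatever your stack actually emits.

```python
import random

def summarize(requests):
    """requests: dicts with tokens, cost_usd, cache_hit, latency_ms (hypothetical schema)."""
    tokens = sum(r["tokens"] for r in requests)
    cost = sum(r["cost_usd"] for r in requests)
    latencies = sorted(r["latency_ms"] for r in requests)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    return {
        "tokens_per_dollar": tokens / cost if cost else float("inf"),
        "cache_hit_rate": sum(r["cache_hit"] for r in requests) / len(requests),
        "p95_ms": round(p95),
        "p99_ms": round(p99),
    }

# Fake a day of request logs just to exercise the function.
logs = [{"tokens": random.randint(200, 4000),
         "cost_usd": random.uniform(0.001, 0.02),
         "cache_hit": random.random() < 0.4,
         "latency_ms": random.uniform(150, 2500)}
        for _ in range(10_000)]
print(summarize(logs))
```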

If you’re running lean, start with managed endpoints to ship fast. As you grow, drop to lower-level primitives for control. Keep abstractions thin so you can switch without rewrites.

For enterprises

  • Migrate high-variance workloads first. Conversational apps, multimodal search, and batch summarization move the needle.
  • Treat “project rainier data center” as a reliability anchor. Plan region diversity and DR with latency-aware routing.
  • Negotiate throughput SLAs that match your release calendar. Capacity is strategy.

Add governance early. Lock approved model versions and prompt policies before pilots spread. Set logging requirements so audits don’t hurt. Align procurement with engineering on scale targets and launch windows. Build a showback model so business units see costs tied to outcomes. They’ll optimize once they own the number.

For researchers

  • Push scaling sweeps across data curation and multimodal fusion. Instruction tuning benefits too.
  • Bank on longer contexts. Tune evals for 100k+ tokens, tool loops, and agentic flows.
  • Measure energy, not just FLOPs. Your best models will have to hit both capability and sustainability targets.

Make your work reproducible on budget. Track seeds, configs, and datasets carefully. With more compute, it’s tempting to brute-force everything. Resist that urge. A clean sweep teaches more than one lucky run. It’s easier to defend in papers and reviews too.

“Compute is oxygen for AI. You can’t innovate if you can’t breathe.” — A common refrain in modern ML labs

Machine room details

  • Storage: Plan high-throughput ingest and checkpointing. Sharded datasets with prefetch keep accelerators fed.
  • Scheduling: Gang scheduling and topology-aware placement cut stragglers and retry storms.
  • Observability: Tie training metrics to infra metrics. If step time drifts, find the cause fast.
  • Failure modes: Build idempotent steps and smart checkpoints. Assume a node fails mid-run; see the sketch below.

These are the unsexy parts that save your launch when stuff breaks.
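
Here’s one way the failure-modes bullet plays out in code: derive everything from the last checkpoint’s step counter so a restarted job picks up exactly where it died, and write checkpoints atomically so a crash can’t leave half a file. Paths and helper names are illustrative.

```python
import glob
import os
import torch

CKPT_DIR = "/mnt/checkpoints/run-042"   # illustrative path

def latest_checkpoint():
    paths = glob.glob(os.path.join(CKPT_DIR, "step-*.pt"))
    return max(paths, key=lambda p: int(p.split("step-")[-1].split(".")[0])) if paths else None

def train(model, optimizer, batch_for_step, total_steps, save_every=500):
    start = 0
    ckpt = latest_checkpoint()
    if ckpt:
        state = torch.load(ckpt, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start = state["step"] + 1              # resume, never redo finished steps

    for step in range(start, total_steps):
        batch = batch_for_step(step)           # step index -> same shard every time (idempotent)
        loss = model(batch).mean()             # assumes model returns per-example losses
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if step % save_every == 0:
            tmp = os.path.join(CKPT_DIR, f"step-{step}.pt.tmp")
            torch.save({"step": step,
                        "model": model.state_dict(),
                        "optimizer": optimizer.state_dict()}, tmp)
            os.replace(tmp, tmp[:-4])          # atomic rename: no half-written checkpoints
```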

Rainier reality check

  • Rainier is live in Indiana with nearly 500k Trainium2 chips.
  • Anthropic’s Claude is training and serving there at 5x prior compute.
  • The Indiana campus focuses on power, cooling, and fiber at scale.
  • Expect lower cost-per-token and faster iteration across training and inference.
  • Plan for longer contexts, heavier tooling, and bursty traffic patterns.

Project Rainier FAQs

Where is Rainier

Project Rainier’s campus is in Indiana. It’s a purpose-built site for dense AI workloads. Power, cooling, and the network fabric are tuned for large training and low-latency serving. If you search “project rainier location” or “project rainier indiana,” that’s the anchor.

How big is the cluster

AWS says Rainier runs nearly half a million Trainium2 chips. That’s over five times the compute Anthropic used for earlier Claude training. The plan points to more than one million Trainium2 chips by end of 2025. That spans training and serving.

What about gigawatts

AWS hasn’t shared a gigawatt figure for Rainier yet. In practice, big AI campuses provision hundreds of megawatts with room to expand. Indiana signals a long-term plan for power and cooling without coastal limits.

Trainium2 vs GPUs

Trainium2 is built for ML training and inference on AWS. It targets performance-per-dollar and performance-per-watt. For many jobs, the economics can beat general-purpose GPUs, especially once managed services and availability are factored in.

Access Rainier capacity

You won’t book “Rainier” by name. You’ll use Trainium2 through AWS services and instances as they roll out. Expect support through managed endpoints, training services, and EC2 families for Trainium2.

Only for Anthropic

Anthropic is the flagship partner, yes. But more Trainium2 helps the broader AWS ecosystem. You should see gains in instance availability, managed model performance, and pricing over time.

GPU prices and availability

Indirectly, yeah. More Trainium2 gives real alternatives. That can ease GPU queues and steady prices. Even if you keep some GPU jobs, the market gets less brittle with options.

Data sovereignty and compliance

Region selection still matters a lot. Use region-aware routing and encryption at rest and in transit. Define data residency rules by design. Rainier capacity fits within AWS’s regional model, so sovereignty plans still work.

Porting to Trainium2

Plan for some engineering lift. Use supported frameworks and standard ops when you can. Profile early. The lift drops if you avoid exotic custom ops. Start small, validate numerics, then scale up. Once tuned, the economics repeat every run.
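
One way to do the “start small, validate numerics” step: run the same block on your reference setup and the new target, and compare within a tolerance before scaling up. This sketch compares an fp32 reference to a bf16 run in plain PyTorch as a stand-in for a real Trainium comparison; the tolerances and names are illustrative, not from the Neuron SDK.

```python
import torch

def check_numerics(module, example_input, rtol=2e-2, atol=2e-2):
    """Compare an fp32 reference forward pass to a bf16 one.

    Stand-in for a real port check: on Trainium you would compare your
    reference output against the compiled target's output the same way.
    Tolerances are illustrative; set them from your own accuracy budget.
    """
    module.eval()
    with torch.no_grad():
        ref = module(example_input.float())
        lowp = module.to(torch.bfloat16)(example_input.to(torch.bfloat16)).float()
    max_abs = (ref - lowp).abs().max().item()
    ok = torch.allclose(ref, lowp, rtol=rtol, atol=atol)
    print(f"max abs diff = {max_abs:.4e}, within tolerance: {ok}")
    return ok

check_numerics(torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.GELU()),
               torch.randn(8, 512))
```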

Sustainability and power mix

AI demand is rising fast, and so are efficiency expectations. Watch published efficiency metrics like PUE. Favor regions with more low-carbon power. Track energy per token next to cost per token. Gains compound at scale.

Trainium2 builder checklist

  • Choose your path: managed endpoints for speed; EC2 Trainium2 for control.
  • Right-size your model: start strong, then fine-tune with domain data.
  • Optimize the pipeline: mixed precision and gradient accumulation (sketch after this checklist). Use sharding where supported.
  • Push longer context: design prompts, memory, and evals for 100k+ tokens.
  • Instrument everything: track cost-per-token, p95 latency, and energy per run.
  • Test failover: multi-AZ routing and burst plans for big launches.
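
For the “optimize the pipeline” item, here’s the shape of a mixed-precision, gradient-accumulation loop in plain PyTorch. The Trainium-specific plumbing (Neuron/XLA device setup) is deliberately left out, so treat it as a sketch of the pattern rather than a drop-in.

```python
import torch

def train_step(model, optimizer, micro_batches, accum_steps=8):
    """Accumulate gradients over several micro-batches, then take one optimizer
    step, with forward/backward math under bf16 autocast. Plain PyTorch sketch;
    swap device_type and backend for your actual target."""
    optimizer.zero_grad()
    for x, y in micro_batches[:accum_steps]:
        with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
            loss = torch.nn.functional.cross_entropy(model(x), y)
        (loss / accum_steps).backward()  # scale so the sum matches one large batch
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
```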

More to add as you scale:

  • Caching strategy: KV and retrieval caches can slash costs by double digits. Warm them early.
  • Prompt hygiene: shorter, sharper prompts save tokens and latency. Bake linting into CI.
  • Eval harness: automate adversarial tests and tool loops. Add long-context regressions.
  • Cost guardrails: set budget alarms for token spend and concurrency. Fail gracefully when hot.
  • Rollouts: canary new model versions to 1–5% with shadow evals before full send.
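
For the rollout bullet, a minimal routing sketch: a small slice of live traffic goes to the candidate model while everything else stays on stable and gets shadow-evaluated offline. The model IDs, the 2% split, and the two helper callables are placeholders.

```python
import random

CANARY_FRACTION = 0.02                                   # 1-5% per the checklist
STABLE, CANARY = "model-stable", "model-candidate"       # placeholder version IDs

def route(request, call_model, log_shadow_eval):
    """call_model(version, request) and log_shadow_eval(...) are stand-ins for
    your serving client and eval pipeline."""
    if random.random() < CANARY_FRACTION:
        return call_model(CANARY, request)               # live canary traffic
    response = call_model(STABLE, request)               # everyone else stays on stable
    log_shadow_eval(CANARY, request, baseline=response)  # queue offline comparison, no user impact
    return response
```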

Here’s the bottom line. Project Rainier isn’t just an AWS flex. It resets AI cost and capability curves. When you move from “waiting on capacity” to “we can run another sweep tonight,” you build different. Anthropic’s bet is clear. More compute, more rigorous testing, better models. Your move is to design for this world. Opt into Trainium2 where it fits, re-benchmark cost-per-token, and refactor evals for longer contexts and heavier tool use. The winners won’t just be the biggest. They’ll be the fastest to learn.

“Compute is compounding faster than your roadmap. Adjust accordingly.”

Looking to turn this compute edge into real growth? See outcomes in our Case Studies. Building on Amazon Marketing Cloud? Unify activation and analytics with AMC Cloud.

References