Anthropic and SpaceX this week entered a new partnership, giving Anthropic access to the compute capacity of the space company’s data center, Colossus 1.
As Anthropic shakes hands with Elon Musk’s space company, it will increase its compute capacity and, no doubt, hopes to shake off recent complaints from users who said they were hitting usage limits far sooner than expected.
Based in Memphis, Tennessee, with over 220,000 NVIDIA GPUs (including H100, H200, and next-generation GB200 accelerators), Colossus 1 is, as SpaceX calls it, “one of the world’s largest and fastest-deployed AI supercomputers,” delivering “scale for AI training, fine-tuning, inference, and high-performance computing workloads.”
Expect 3 limit changes, effective immediately
What will the AI company do with its new compute foothold?
As stated in its announcement blog post, Anthropic will get access to more than 300 megawatts of compute capacity through Colossus 1, which it plans to use to “directly improve capacity for Claude Pro and Claude Max subscribers.”
Namely, that means three things.
First, Anthropic is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans. It is also removing the peak-hour limit reduction for Pro and Max users.
And API rate limits for Claude Opus models are getting a boost. For Tier 1 users, for example, maximum input tokens per minute jump from 30,000 to 500,000, while maximum output tokens per minute increase from 8,000 to 80,000.
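Even with the higher ceilings, per-minute token budgets are still finite, so clients typically pace their requests and back off when a rate-limit response comes back. The sketch below is purely illustrative: the Tier 1 numbers come from the article, but the helper functions are hypothetical and not part of any Anthropic SDK.

```python
# Hypothetical client-side pacing against per-minute token limits.
# Tier 1 ceilings below are the post-increase figures cited in the article;
# the helpers are illustrative, not part of Anthropic's API or SDK.

INPUT_TPM_LIMIT = 500_000   # new Tier 1 input-tokens-per-minute ceiling
OUTPUT_TPM_LIMIT = 80_000   # new Tier 1 output-tokens-per-minute ceiling

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff delay (seconds) after a rate-limited response."""
    return min(cap, base * (2 ** attempt))

def within_budget(input_tokens_used: int, output_tokens_used: int) -> bool:
    """True if another request still fits inside this minute's token budget."""
    return (input_tokens_used < INPUT_TPM_LIMIT
            and output_tokens_used < OUTPUT_TPM_LIMIT)
```

The practical upshot of the roughly 10x to 16x increase is that `within_budget` fails far less often for the same workload, which is what lets developers stop rationing prompts.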
Elmer Morales, founder of koderAI, tells The New Stack what the limit changes might mean for developer workflows: “The shift changes workflows from cautious prompt budgeting to deeper reasoning, bigger tasks, and more complete engineering output.”
Andy Pernsteiner, Field CTO at VAST Data, expresses a similar sentiment, telling The New Stack that the Anthropic deal will likely enable developers “to use Claude Code to build richer applications and more advanced agents,” hopefully freeing them from the need to “meticulously maintain context and reduce MCP use” — bottlenecks that, he says, have become unfortunate parts of his daily workflow.
Anthropic knows something’s got to give
Anthropic’s deal with SpaceX follows a slew of complaints from Claude Code users who said they were hitting usage limits faster than expected. One Redditor, for example, claimed a single prompt cost them 10% of their limit, up from the expected 0.5–1%.
In its blog post announcing the partnership, Anthropic also points out that it “train[s] and run[s] Claude on a range of AI hardware,” naming AWS Trainium, Google TPUs, and NVIDIA in the line-up and noting that it “continue[s] to explore opportunities to bring additional capacity online.”
In the last several months, Anthropic has been on a compute shopping spree, shaking hands with many members of the Big Tech roundtable to secure new capacity and investment.
In April, Anthropic and Amazon announced an agreement under which Anthropic secured up to 5 gigawatts (GW) of Amazon’s Trainium and Graviton cores, along with up to $25 billion in investment from the e-commerce company.
Earlier that same month, Anthropic also made a deal with Google and Broadcom to expand its compute infrastructure starting in 2027 with “multiple gigawatts of next-generation capacity” — that’s after it already expanded its use of Google Cloud technologies (including up to one million TPUs) in October 2025.
And don’t forget the November 2025 partnership Anthropic formed with Microsoft and NVIDIA, in which the AI company agreed to buy $30 billion in Azure compute capacity.
What does it all mean for software developers?
The seemingly never-ending handshaking among Anthropic, Amazon, Microsoft, and the rest of AI’s biggest players makes it hard to place bets on the industry’s future. But both Morales and Pernsteiner take Anthropic’s recent spending spree as a sign that the AI company is preparing itself for continued growth and long-term relevance.
Even after Anthropic’s scuffle with the Pentagon earlier this year, Pernsteiner says developers should take its new SpaceX deal as a signal that the AI company is one they can continue betting on:
“It is a renewed commitment to ensuring their platform and ecosystem of tools, applications, and frameworks will have enough juice to run at scale.”