Anthropic, the startup already at 40% of OpenAI's revenue: explosive growth, enterprise strategy, cloud partnerships, financials, and why companies choose Claude over ChatGPT
- Graziano Stefanelli
- Jul 12
- 5 min read

In six months, Anthropic has quadrupled its revenue: the race to catch up with OpenAI heats up.
In the first six months of 2025, Anthropic pulled off an unprecedented leap: its annualized revenue jumped from about $1 billion to $4 billion, reaching roughly 40% of the revenue reported by OpenAI over the same period. This ascent, reported by SaaStr and confirmed by sources such as the Financial Times, rests not on record user numbers—Claude has fewer than 5% of ChatGPT’s weekly users—but on an extraordinary ability to monetize its enterprise product. The “business-first” strategy, built on partnerships with giants like AWS, Slack, and Zoom, has enabled Anthropic to generate far more value from each request than its consumer-focused competitors.
Claude’s strength: fewer users but higher revenues thanks to enterprise licenses, APIs, and cloud
Unlike ChatGPT, which dominates the public scene with over 500 million users, Claude has carved out a golden niche in the enterprise segment: most of its revenue comes from APIs integrated into software that large companies use every day, and from its native availability on AWS Bedrock. According to SaaStr, average revenue per Claude request is up to ten times higher than ChatGPT’s, precisely because each Claude query is part of a paid service aimed at corporate clients and strategic partners. In this way, the startup generates revenue out of proportion to its modest presence in the consumer market, reinforcing its standing as the second commercial power in generative AI.
The Amazon alliance, cloud credits, and a business model that rewrites the rules
The agreement between Anthropic and Amazon—over $4 billion in direct investments and cloud credits—gave the startup the computing power needed to scale volumes without excessive reliance on external financing. Thanks to integration with AWS Bedrock, more and more companies are adopting Claude as the go-to solution for security, compliance, and controlled deployment. Today, more than two-thirds of Anthropic’s revenue comes from this cloud partnership, followed by direct licenses with major Fortune 500 groups. The result is a growth trajectory unrivaled among SaaS startups in the past decade, so much so that analysts now see Anthropic as a true “contender” to OpenAI’s commercial dominance.
Record growth but also new challenges: GPU costs, enterprise retention, and long-term sustainability
Behind the success, however, lie risks and open questions. Each generational leap of the Claude models—such as Claude 4—drives up computing costs, and Amazon’s cloud credits will not last forever. The real unknown is enterprise customer retention: many companies are currently piloting Claude, and the real test will be converting those pilots into multi-year contracts, especially as competition from Llama-4 (Meta), Qwen-2 (Alibaba), and new OpenAI models intensifies. In addition, increasing regulatory pressure in Europe and the US could slow the rollout of new features or impose ever-higher red-teaming and compliance costs.
Financial comparison between OpenAI / ChatGPT and Anthropic / Claude in the first half of 2025: actual revenues, growth trajectories, revenue mix, and margin pressures
As of June 2025, OpenAI reports a $10 billion run-rate (an internal figure shared with venture partners and confirmed by the Financial Times), up about 82% from the $5.5 billion declared at the end of 2024. Anthropic’s curve is even steeper: a $4 billion run-rate versus just under $1 billion last December, a jump of over 300% in six months. Put in relative terms, Claude now generates about $0.40 in revenue for every $1 earned by ChatGPT, up from a ratio that stood below 20% at the start of 2024.
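As a quick sanity check on these figures, the short sketch below reproduces the arithmetic behind the growth rates and the revenue ratio; the December values are rounded to match the approximations quoted in the text.

```python
# Back-of-the-envelope check of the run-rate figures cited above
# (values in billions of USD, rounded as in the article).
openai_now, openai_prev = 10.0, 5.5        # June 2025 vs. end of 2024
anthropic_now, anthropic_prev = 4.0, 1.0   # June 2025 vs. "just under $1 billion" in December 2024

openai_growth = (openai_now / openai_prev - 1) * 100           # ~82%
anthropic_growth = (anthropic_now / anthropic_prev - 1) * 100  # ~300%
revenue_ratio = anthropic_now / openai_now                     # 0.40

print(f"OpenAI half-year growth:    {openai_growth:.0f}%")
print(f"Anthropic half-year growth: {anthropic_growth:.0f}%")
print(f"Claude revenue per $1 of ChatGPT revenue: ${revenue_ratio:.2f}")
```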
In terms of revenue mix, OpenAI maintains strong exposure to the consumer segment (ChatGPT Plus and Team subscriptions), which accounts for just under 35% of revenue, while the remaining roughly 65% comes from enterprise APIs, licensing, and partnerships (primarily Microsoft). Anthropic’s mix is the mirror image: over two-thirds of revenue comes from AWS Bedrock and direct enterprise licenses, while the consumer side of Claude.ai remains below 10% and serves mainly to stress-test the UI and agent features.
Gross profitability hinges on computing costs: OpenAI, thanks to capex investments shared with Microsoft, has pushed its cloud unit cost down to about $0.55 per $1 of revenue, for a gross margin near 45%. Anthropic still benefits from AWS credits, which inflate its theoretical gross margin (≈ 55%), but analysts note that once the Trainium and H100 credits are exhausted, actual compute costs will rise, bringing the contribution margin back to around 40%.
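The margin figures above follow directly from compute cost per dollar of revenue, as the snippet below makes explicit. The two Anthropic cost values are not disclosed in the sources; they are simply back-derived here from the ≈55% and ≈40% margins quoted in the paragraph.

```python
# Gross margin as a function of compute cost per $1 of revenue.
def gross_margin(compute_cost_per_dollar: float) -> float:
    """Share of each revenue dollar left after cloud/compute costs."""
    return 1.0 - compute_cost_per_dollar

print(f"OpenAI (~$0.55 cost per $1):                     {gross_margin(0.55):.0%}")  # ~45%
print(f"Anthropic with AWS credits (~$0.45, derived):    {gross_margin(0.45):.0%}")  # ~55%
print(f"Anthropic once credits expire (~$0.60, assumed): {gross_margin(0.60):.0%}")  # ~40%
```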
As for cash burn and runway, OpenAI is believed to have closed the half-year with slightly positive operating cash flow (thanks to consumer subscriptions, which are less GPU-intensive) and a capex pipeline of about $2.3 billion co-financed by Microsoft. Anthropic, while burning more cash in absolute terms ($400–450 million over the half-year, 80% of it on compute and research staff), has over $7 billion in committed funding from Amazon, Google, and others—enough, according to internal estimates, to cover its needs through the end of 2026.
Finally, there is the quality of revenue: OpenAI already has a base of three-year, auto-renewing contracts with four major cloud integrators, while Anthropic is only now signing its first multi-year deals in banking, insurance, and pharma. The key battle will be renewal rates in 2026: if Claude keeps churn below 5%, as Bedrock pilots suggest today, Anthropic could sustain growth above 70% year over year; otherwise, the scale gap with OpenAI may reopen. In short, the half-year shows two increasingly distinct monetization models—consumer plus API for OpenAI, enterprise-native for Anthropic—that converge on the same result: turning queries and tokens into recurring revenue that already surpasses that of many legacy SaaS platforms.
___
Why companies choose Anthropic over OpenAI: contractual guarantees, data-centric governance, and deployment flexibility as decisive factors
Behind the growing preference of many Fortune 500 groups for Claude models is, first and foremost, a service contract tailored to the needs of legal and compliance departments: Anthropic offers Service Level Agreements specifying detailed latency thresholds, uptime guarantees, and—crucially—independent audit procedures on training datasets. OpenAI, while offering APIs with superior performance on some metrics, tends to keep its underlying data sources opaque; Claude, by contrast, can be configured so all prompts and responses remain encrypted end-to-end in logs stored on the client’s dedicated cluster—a key requirement for banks, insurers, and healthcare operators who must demonstrate traceability to regulators.
A second element is deployment flexibility via Bedrock. Multinationals already spending billions a year on AWS can run Claude on the same cloud accounts where sensitive data reside, leveraging isolated VPCs and existing IAM controls. In practice, Claude operates as an internal microservice, avoiding extra network hops and reducing data exposure. OpenAI, on the other hand, requires every call to go through its centralized endpoints. For CISOs, this architectural difference translates into a lower risk of data exfiltration or cross-contamination.
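To make the architectural difference concrete, here is a minimal Python sketch (using boto3) of how a company might call Claude through Bedrock from inside its own AWS account; the region, model ID, and the commented-out VPC endpoint are illustrative placeholders, not values taken from any specific deployment.

```python
# Minimal sketch: invoking Claude through Amazon Bedrock from the company's
# own AWS account, so traffic stays within the existing VPC/IAM perimeter.
import json

import boto3

bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",  # assumed region
    # endpoint_url="https://vpce-xxxx.bedrock-runtime.us-east-1.vpce.amazonaws.com",
    # ^ optional: route calls through a private VPC endpoint (AWS PrivateLink)
)

body = {
    "anthropic_version": "bedrock-2023-05-31",  # version string required by Claude on Bedrock
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize this quarter's compliance report."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)

# The response body is a JSON stream; the generated text sits under "content".
print(json.loads(response["body"].read())["content"][0]["text"])
```

Because the call is authorized by the account's own IAM roles and can be confined to a private endpoint, prompts and outputs never have to leave the company's network boundary, which is the point made above.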
The third factor concerns the licensing and governance model. Anthropic allows customers to retain exclusive rights over fine-tuning and outputs, including clauses that prevent the startup from reusing customer data to train future models without explicit consent. OpenAI, although it offers opt-out options, maintains a more generic policy that has previously raised concerns among legal departments wary of “contaminating” their proprietary datasets. In sectors where data is a strategic asset—pharma, automotive, defense—this distinction plays a decisive role in procurement decisions.
Finally, there is the issue of cost and economic predictability. Anthropic applies a block-token pricing system that remains stable for the duration of a three-year contract, with progressive discounts at higher volumes; spending thus becomes easy to budget. OpenAI, by contrast, has changed its price list several times in the past twelve months and charges extra for advanced features such as tools or dedicated fine-tuning, generating uncertainty for CFOs and IT managers. Taken together, these elements (granular SLAs, deployment inside the client's own cloud, data-centric licensing, and predictable pricing) make Claude the preferred choice for companies that see AI not as a marketing gadget but as critical infrastructure to integrate at the core of their production workflows.
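A rough sketch of why this pricing model is easy to budget is shown below; the base rate and discount tiers are hypothetical illustrations, not Anthropic's actual price list.

```python
# Hypothetical block-token pricing: a per-million-token rate locked for the
# contract term, with tiered volume discounts. All numbers are invented
# purely for illustration.

def blended_rate(monthly_tokens_m: float) -> float:
    """Hypothetical price in $ per million tokens after volume discounts."""
    base = 8.00                     # assumed contract-locked base rate
    if monthly_tokens_m > 5_000:    # more than 5B tokens per month
        return base * 0.70
    if monthly_tokens_m > 1_000:    # more than 1B tokens per month
        return base * 0.85
    return base

def annual_spend(monthly_tokens_m: float) -> float:
    """Projected yearly spend; flat over the contract because the rate is fixed."""
    return blended_rate(monthly_tokens_m) * monthly_tokens_m * 12

for volume in (500, 2_000, 8_000):  # millions of tokens per month
    print(f"{volume:>6,} M tokens/month -> ${annual_spend(volume):,.0f}/year")
```

With a fixed rate, the only variable a CFO has to forecast is token volume, which is exactly the predictability argument made in the paragraph above.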
________
FOLLOW US FOR MORE.
DATA STUDIOS