DigitalOcean (DOCN) Q4 2025: AI Customer ARR Surges 150%, Fueling 2026 Growth Acceleration

DigitalOcean’s pivot to powering AI-native workloads is transforming its customer mix and growth engine, with AI customer ARR up 150% and top customers driving record expansion. The company’s focus on agentic inference cloud and vertically integrated infrastructure is unlocking durable, profitable growth, as management signals confidence in reaching 30% revenue growth in 2027 on committed capacity alone. Investors should watch for further AI-driven wallet share gains and execution on capacity ramp as DigitalOcean aims to outpace hyperscalers and GPU rental peers in the inference economy.

Summary

  • AI-Native Shift Reshapes Growth: DigitalOcean’s strategy now centers on high-growth AI and cloud-native customers, not just developers.
  • Vertically Integrated Inference Cloud: The company’s agentic inference stack differentiates beyond GPU rental, driving sticky, high-margin workloads.
  • Capacity Ramp Sets Up Acceleration: Management targets 25% exit growth in 2026 and 30% in 2027, backed by strong demand and committed infrastructure.

Performance Analysis

DigitalOcean delivered a sharp inflection in both growth and profitability, with Q4 revenue accelerating year over year and full-year results improving markedly versus prior periods. AI customer ARR reached $120 million, up 150% year over year, and now represents 12% of total ARR, a clear signal that AI-native workloads are becoming a core growth lever. Notably, million-dollar customer ARR grew 123% to $133 million with zero churn in that cohort, highlighting the stickiness and scaling potential of large AI and cloud-native clients.
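A quick back-of-the-envelope check of the reported figures: the derived totals below are arithmetic implications of the disclosed percentages, not numbers the company reported.

```python
# Sanity-check the ARR figures cited above (inputs from the text;
# derived values are implied, not disclosed).
ai_arr = 120.0    # $M, AI customer ARR
ai_share = 0.12   # AI ARR as a share of total ARR
ai_growth = 1.50  # 150% year-over-year growth

implied_total_arr = ai_arr / ai_share            # implied total ARR, $M
implied_prior_ai_arr = ai_arr / (1 + ai_growth)  # implied AI ARR a year ago, $M

print(f"Implied total ARR: ${implied_total_arr:.0f}M")          # ~$1,000M
print(f"Implied prior-year AI ARR: ${implied_prior_ai_arr:.0f}M")  # ~$48M
```

In other words, the 12% AI mix implies roughly $1 billion of total ARR, and the 150% growth rate implies AI customer ARR of roughly $48 million a year earlier.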

Margins remain robust despite heavy investment, with adjusted EBITDA margins in the low 40% range and adjusted free cash flow margins near 19%. The company’s capital discipline is underscored by declining stock-based compensation as a percentage of revenue and an active buyback program, even as it ramps data center and GPU capacity. RPO (remaining performance obligations) surged nearly 500% year-over-year, providing forward visibility as demand outstrips supply.

  • AI Revenue Mix Shift: 70% of AI customer ARR now comes from inference services and core cloud, not just GPU rentals, supporting margin durability.
  • Record Incremental ARR: $51 million organic incremental ARR in Q4, the highest in company history, driven by cloud and AI-native customers.
  • Capacity-Driven Growth: Committed 31 megawatts of new data center capacity will underpin the next phase of revenue acceleration, with staged ramping through 2026.

The business model now emphasizes high-value, production AI workloads, with management projecting further growth as capacity utilization increases and AI-native companies scale on the platform.

Executive Commentary

"Our top digital native customers, or DNEs, which include cloud and AI native companies, are now our fastest growing cohort, and in fact, growing significantly faster than the market on DO. In a nutshell, scaling our top customers was once a constraint. Today, it's our growth engine."

Paddy Srinivasan, Chief Executive Officer

"We are a rapidly growing and profitable company that is incredibly well positioned to take advantage of the hyperscale sized inference market opportunity. Revenue growth has reaccelerated. We've reversed declines from our top customers, turning them into a key driver of our growth."

Matt Steinfort, Chief Financial Officer

Strategic Positioning

1. AI-Native and Cloud-Native Customer Focus

DigitalOcean has shifted its strategic lens from serving mainly entry-level developers to targeting high-growth AI and cloud-native enterprises, termed DNEs (digital native enterprises). This cohort now drives 62% of total ARR, growing at 30% year-over-year. The company’s deliberate efforts to eliminate reasons for top customers to leave have resulted in zero churn among million-dollar ARR clients and increasing net dollar retention as customers scale.

2. Agentic Inference Cloud as Differentiator

The company’s vertically integrated agentic inference cloud provides not just GPU infrastructure, but a full stack including compute, storage, databases, networking, and managed AI services. This enables AI-native clients to deploy, orchestrate, and scale agentic workloads with predictable economics and high performance. The platform’s support for both open and closed source models, and its ability to route inference requests intelligently, is a key competitive advantage as the unit economics of open source models improve.

3. Disciplined Capacity Expansion and Capital Allocation

DigitalOcean is ramping 31 megawatts of new data center capacity across three new facilities in 2026, with a staged revenue ramp that aligns investment with demand. The company’s use of equipment leasing, rather than heavy upfront CapEx, allows for rapid scaling without sacrificing cash flow discipline. Management is clear that future capacity commitments will be driven by demonstrated customer demand and return on invested capital, not speculative buildout.

4. Product Innovation and Ecosystem Expansion

Continuous product development—including the agent development kit, GPU observability, and managed NFS—is enhancing platform stickiness and enabling customers to move from AI experimentation to production. The viral adoption of OpenClaw, an open-source agent framework, underscores DigitalOcean’s resonance with developers and AI-native teams, fueling ecosystem growth and reinforcing its role as a natural home for agentic software.

5. Leadership and Organizational Depth

The addition of Vinay Kumar, former Oracle Cloud Infrastructure founding member, as Chief Product and Technology Officer, brings hyperscale experience and accelerates innovation in both inference and core cloud infrastructure. This leadership upgrade is central to DigitalOcean’s ambition to serve enterprise-grade, global-scale workloads.

Key Considerations

This quarter marks a structural shift for DigitalOcean, as it pivots to become a foundational platform for AI-native businesses, moving beyond its legacy as a developer-centric cloud. The interplay of demand, supply, and product depth will define its next growth phase.

  • AI-Native Revenue Mix: The rapid growth and sticky nature of AI-native workloads are increasing DigitalOcean’s share of wallet, with inference services and core cloud products driving margin expansion.
  • Capacity Utilization Efficiency: DigitalOcean’s ARR per megawatt remains materially higher than bare metal GPU peers, supporting returns even as AI mix grows.
  • Disciplined Investment: Management’s commitment to matching capacity investments with proven demand and diverse customer base mitigates the risk of overextension.
  • Open Source Model Adoption: The shift toward open source AI models, which are up to 90% more cost-effective, positions DigitalOcean to capture emerging inference workloads with superior unit economics.
  • Margin Dynamics: Near-term margin compression is expected as new capacity comes online, but long-term adjusted EBITDA and free cash flow margins are projected to rebound as utilization ramps.

Risks

Key risks include execution on rapid capacity expansion, where delays or underutilization could pressure margins and returns. The competitive landscape remains intense, with hyperscalers, neoclouds, and inference wrappers all targeting the inference market. AI-native workload volatility and evolving customer needs may require ongoing product and infrastructure adaptation, while component cost inflation and supply chain constraints could impact planned ramps. Management’s approach to capital allocation and customer diversification will be critical to sustaining profitable growth.

Forward Outlook

For Q1 2026, DigitalOcean guided to:

  • Revenue of $249–$250 million, representing 18–19% year-over-year growth
  • Adjusted EBITDA margin of 36–37%
  • Non-GAAP diluted net income per share of $0.22–$0.27
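The guided range and growth rate together imply a prior-year comparison base; the figures below are derived from the guidance in the text, not separately disclosed.

```python
# Implied Q1 2025 revenue from the Q1 2026 guidance range (derived, not disclosed).
rev_low, rev_high = 249.0, 250.0  # $M, guided Q1 2026 revenue
g_low, g_high = 0.18, 0.19        # guided year-over-year growth range

# Lowest base: low end of revenue against high end of growth, and vice versa.
prior_low = rev_low / (1 + g_high)
prior_high = rev_high / (1 + g_low)

print(f"Implied Q1 2025 revenue: ${prior_low:.0f}M to ${prior_high:.0f}M")
```

That puts the implied year-ago quarter at roughly $209–212 million, a useful cross-check when the comparable prints.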

For full-year 2026, management raised guidance to:

  • Revenue growth of 19–23% (21% midpoint), with an exit growth rate of 25%+ in Q4
  • Adjusted EBITDA margin of 36–38%
  • Unlevered adjusted free cash flow margin of 18–20%

Management cited robust AI-native demand, a committed 31 megawatts of new capacity, and a growing pipeline of large-scale workloads as the foundation for these targets. They highlighted:

  • “More demand than supply” dynamic, with AI-native workloads ramping rapidly
  • Visibility from RPO up nearly 500% year-over-year, supporting confidence in growth trajectory

Takeaways

  • AI Inference Platform Emergence: DigitalOcean is now positioned as a vertically integrated inference cloud provider, serving production AI workloads at global scale, not just developer experimentation.
  • Growth Engine Transformation: Top customer cohorts, especially AI and cloud-native enterprises, are the new drivers of record ARR growth, with zero churn and expanding wallet share.
  • Capacity Ramp and Execution Watch: Investors should monitor execution on new data center ramp, AI-native workload stickiness, and the pace of product innovation as DigitalOcean seeks to deliver 30% growth on existing commitments.

Conclusion

DigitalOcean’s Q4 2025 results mark a decisive inflection, with AI-native demand, product integration, and operational discipline driving both acceleration and profitability. The company’s strategy to serve the inference economy with a full-stack, agentic cloud platform is translating into durable growth and margin resilience. Execution on capacity ramp and continued AI-native wallet share gains will determine whether DigitalOcean can sustain its lead as the inference cloud of choice.

Industry Read-Through

DigitalOcean’s pivot to AI-native, production inference workloads signals a broader industry shift from experimentation to scaled, enterprise-grade AI deployment. The company’s success in capturing high-growth, sticky AI-native clients—while maintaining margin discipline—highlights the growing importance of vertically integrated cloud platforms that combine compute, storage, orchestration, and AI services. Open source model adoption and the focus on inference, rather than just training, are trends likely to reshape competitive dynamics across the cloud and AI infrastructure space. Hyperscalers, neoclouds, and GPU rental providers will need to address the demand for integrated, production-ready AI environments to remain competitive as inference workloads become the new battleground for share of wallet and margin expansion.