
Date

Tuesday, May 5, 2026 at 8 a.m. ET

Call participants

  • Chief Executive Officer — Padmanabhan Srinivasan
  • Chief Financial Officer — Matt Steinfort

Takeaways

  • Revenue -- $258 million, reflecting 22% year-over-year growth and over 400 basis points sequential acceleration from Q4 2025’s 18% exit growth rate.
  • $1 million-plus customer ARR -- Reached $183 million, increasing 179% year over year.
  • AI customer ARR -- Achieved $170 million, growing 221% year over year, with inference and core cloud representing over 80% of total AI ARR, up from 70% in Q4 2025.
  • Organic incremental ARR -- Delivered a record $62 million, the highest in company history.
  • RPO (remaining performance obligations) -- Totaled $243 million, up 1,700% year over year.
  • Adjusted EBITDA -- $105 million, up 21% year over year, with a margin of 41%.
  • GAAP operating income -- $37 million for the quarter, with a 14% margin.
  • Adjusted operating income -- $64 million, producing a 25% margin.
  • Adjusted free cash flow -- Trailing 12-month figure was $171 million (18% of revenue); after lease principal payments, $154 million (16% of revenue).
  • Equity raise -- Secured $888 million in Q1, used to repay $500 million Term Loan A, with remaining funds targeted to retire $312 million in convertible notes maturing in 2026, and to fund data center expansion.
  • Incremental capacity -- Committed 60 megawatts across four new locations, bringing total to 135 megawatts; buildout will start late in 2026 and begin ramping revenue in 2027.
  • 2026 guidance (full year) -- Revenue projected at $1.13 billion to $1.145 billion (25%-27% growth), adjusted EBITDA margin of 37%-39%, and adjusted free cash flow margin of 9%-12%, including up to $100 million one-time startup costs for new capacity.
  • 2027 preliminary guidance -- Revenue expected to exceed $1.7 billion (≥50% growth), with approximately 40% adjusted EBITDA margin and high teens adjusted free cash flow margin.
  • Product launches -- Released the DigitalOcean AI native cloud platform, featuring 15 new products and five integrated stack layers purpose-built for inference and Agentic workloads.
  • Customer acquisitions -- Added notable AI-native clients including Cursor, Ideogram, and Higgsfield AI, all running production AI workloads on the new platform.
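The growth figures in the takeaways above can be cross-checked with some back-of-envelope arithmetic. The sketch below uses only the numbers reported on the call; the "implied" values it derives (prior-year Q1 revenue, 2027 growth off the 2026 midpoint) are illustrative estimates, not company-reported figures.

```python
# Back-of-envelope check of the reported growth rates.
# Only the inputs are company-reported; derived values are estimates.

q1_2026_revenue = 258e6            # Q1 2026 revenue, up 22% year over year
q1_2025_implied = q1_2026_revenue / 1.22

fy2026_low, fy2026_high = 1.13e9, 1.145e9   # full year 2026 guidance range
fy2026_mid = (fy2026_low + fy2026_high) / 2

fy2027_floor = 1.7e9               # 2027 preliminary guidance: revenue > $1.7B
implied_2027_growth = fy2027_floor / fy2026_mid - 1

print(f"Implied Q1 2025 revenue: ${q1_2025_implied / 1e6:.0f}M")
print(f"2026 revenue midpoint:   ${fy2026_mid / 1e9:.4f}B")
print(f"Implied 2027 growth at the $1.7B floor: {implied_2027_growth:.1%}")
```

At exactly $1.7 billion the implied 2027 growth lands just under 50%, which is consistent with the guidance only because $1.7 billion is stated as a floor ("expected to exceed").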

Summary

DigitalOcean (DOCN +40.52%) reported record revenue and accelerated growth, emphasizing its strategic pivot toward AI-native cloud infrastructure and inference services. The company executed a major product expansion, launching a comprehensive AI native cloud platform and securing several fast-scaling AI-native customers that rely on the breadth and depth of its integrated stack. Management signaled further acceleration by raising both short- and medium-term growth forecasts, underpinned by a substantial equity raise, aggressive capacity investments, and robust customer demand centered on inference and Agentic-era workloads. Debt maturities have been pushed out to 2030, with the majority of incremental capital directed toward infrastructure scaling, positioning the company for continued rapid revenue expansion and profitability through at least 2027.

  • Management highlighted, "this revised 2026 growth is entirely driven by our previously committed capacity, without any top line benefit in 2026 from the new 60 megawatts."
  • Padmanabhan Srinivasan stated that customers are now shifting away from Bare Metal, with the non-Bare Metal share of AI customer ARR increasing even as Bare Metal ARR declines in absolute dollars.
  • DigitalOcean’s balance sheet flexibility improves through the repayment of all near-term debt and careful alignment of future equipment financing with revenue generation.
  • The company confirmed, "we expect to exit 2026 at approximately 3x net leverage with no material debt maturities until 2030," reinforcing ample liquidity for ongoing expansion.
  • Management attributed the short-term outperformance largely to strong retention and expansion within top cloud and AI-native cohorts, rather than one-off gains or new capacity ramp.
  • Pricing for AI infrastructure services, particularly GPU hours, remains robust with no evidence of price compression, and the company retains the flexibility to respond rapidly to market pricing trends due to the absence of long-term locked contracts.
  • A benchmarking study recognized DigitalOcean with the "#1 output speed for leading open source models like DeepSeek version 3.2 and Qwen version 3.5, the 397 billion parameter model, across all cloud providers," reinforcing product differentiation claims.

Industry glossary

  • ARR (annual recurring revenue): The annualized value of recurring revenue streams from active subscriptions or contracts.
  • Inference/inferencing: The process of running AI models in production to generate predictions or outputs (as opposed to training models).
  • Agentic: Refers to AI systems that execute autonomous tasks, evolving from advisory ("thinking") roles to action-oriented ("doing") capabilities.
  • RPO (remaining performance obligations): Contracted revenue not yet recognized, often used as an indicator of future revenue visibility.
  • Bare Metal: Physical servers provided without virtualization or abstraction layers, as contrasted with managed or virtualized services.
  • Megawatt (MW) capacity: In data centers, the total power capacity reserved for powering servers, often used as a proxy for maximum computing infrastructure scale.
  • Core cloud: Refers to primary compute, networking, and storage services excluding specialized hardware or AI-dedicated infrastructure.
  • Bring your own model (BYOM): Enables customers to deploy custom AI models, rather than only using prebuilt or hosted models provided by the platform.

Full Conference Call Transcript

Padmanabhan Srinivasan: Thank you, Raju. Good morning, everyone, and thank you for joining us today. We had an outstanding Q1 2026, and I'll start with four headlines. First, our momentum is accelerating. Q1 revenue was $258 million, up 22% year-over-year, with million-dollar-plus customers growing 179% year-over-year to $183 million in ARR. AI customer ARR grew 221% to $170 million, and we beat every financial target we shared in our last call. Number two, we launched the DigitalOcean AI native cloud last week, the most significant product launch in our history, with more than 15 new product launches across five fully integrated layers built into a modern, open, unified stack, purpose built for the inferencing and Agentic Era.

Third, we are investing to meet our growing customer demand and to seize the material opportunity in front of us. We raised $888 million in equity during Q1 to strengthen our balance sheet and quickly utilized that flexibility to secure 60 megawatts of incremental capacity that is slated to ramp throughout 2027, bringing our total committed capacity to 135 megawatts. And finally, we are again raising our near- and medium-term guidance on the strength of customer demand and the incrementally committed capacity. For 2026, we are increasing our full year revenue growth projection from 21% to approximately 26% year-over-year and expect to exit Q4 approaching 30%.

And this revised 2026 growth is entirely driven by our previously committed capacity, without any top line benefit in 2026 from the new 60 megawatts. With the projected ramp of the incremental 60 megawatts in 2027, we are now projecting revenue growth of 50% or more in 2027, meaningfully higher than the 30% growth we communicated just last quarter. I'll now spend a few minutes drilling down on each of these four headlines. The momentum we are generating is clear evidence of both our differentiated position and our strong execution across the board. It starts with the accelerating top line growth.

Q1 revenue was $258 million, up 22% year-over-year and up over 400 basis points from Q4 2025's already strong 18% exit growth rate. We are delivering this growth by continuing to delight our top cloud and AI native customers. Our AI customer ARR reached $170 million, growing 221% year-over-year. Our $1 million-plus customer ARR reached $183 million, growing 179% year-over-year. These are not just customers experimenting on our platform. These are cloud and AI native companies scaling their businesses on DigitalOcean. Our rate of acceleration is also increasing. We delivered a record $62 million in incremental organic ARR, the highest in the company's history. Customers see our differentiated value and are leaning into our platform.

RPO reached $243 million, up an extraordinary 1,700% year-over-year. And we are doing all of this with strong profitability. We delivered 41% adjusted EBITDA margin and 18% trailing 12-month adjusted free cash flow margins. Drilling into our growth, our largest customers continue to be our fastest growing, and their growth continues to accelerate. ARR from our $100,000-plus customers grew 73%, while our $500,000-plus customer ARR grew 132%. ARR from our $1 million-plus customers reached $183 million, growing at 179% year-over-year versus 123% last quarter. Our AI customers are the other key driver of accelerating growth. AI customer ARR reached $170 million, growing 221% year-over-year.

And most critically, inference and core cloud pull-through increased to more than 80% of total AI customer ARR, up from 70% in Q4. That number tells you something important. We are not a GPU rental business. We are a full stack cloud platform that AI native companies depend on to build, run and scale their production AI software. Last week, at our Deploy conference in San Francisco, we launched the DigitalOcean AI native cloud. And let me explain why this is a very significant step. Four forces are fundamentally reshaping AI right now. Inferencing has overtaken training as the dominant AI computing workload. Open source AI is now in production at over half of AI native companies.

Reasoning models are driving the majority of token consumption. And Agentic systems are rapidly moving from experimentation to production. Together, these forces represent AI's evolution from "thinking," in which AI plays an advisory role, to both thinking and doing, in which AI delivers outcomes by executing autonomous tasks. The thinking part is powered by AI models in inferencing mode, and the doing part is delivered by a variety of modern cloud computing modules, all working together to take intelligent, autonomous real-world action. DigitalOcean's AI native cloud is purpose built for AI natives building exactly these types of workloads. It starts at the bottom with foundational layers.

We operate a global scale infrastructure with 20 data centers purpose built for AI workloads, running a full stack core computing platform with a complete set of computing primitives that Agentic workloads demand: Kubernetes, CPU and GPU Droplets, an advanced networking stack including virtual private cloud, object, block and file storage, and high-performance NFS. This is part of the doing layer, the foundation that the vast majority of GPU-centric clouds simply don't have. Last week, we launched a new inference engine, which we co-invented with our customers to address their most critical inferencing needs, and it delivers a lot more than just serving tokens.

It provides serverless and dedicated endpoints for serving AI models, batch processing for asynchronous token generation, an intelligent policy-driven inference router that automatically selects the best model for cost and performance, and a catalog of over 70 open source and closed source frontier models with day-zero access, multimodal capabilities and guardrails. For customers who want to run their own models, we support BYOM, or Bring Your Own Model. This is the "thinking" layer, and it is far more than just serving tokens. It is about serving tokens efficiently with best-in-class performance, tightly integrated with other parts of the cloud.

Augmenting this new inference engine is our data and learning layer, for which we announced an enterprise version of our managed MySQL and PostgreSQL databases for advanced workloads. We also announced new vector database support for building Agentic workloads. We also launched a brand-new managed agents platform to give AI natives everything they need to build, execute and operate autonomous agents at scale, with open harnesses, sandboxes, state management, agent observability, a toolbox for external integration and [ Plano ]-based orchestration on an open platform, without getting boxed into a single LLM or platform provider.

This is the DigitalOcean AI native cloud: five fully integrated layers from silicon to agents with zero lock-in, because we offer open source options at every single layer. This is absolutely essential as our target customers are AI native companies who are creating and monetizing software. AI infrastructure is a material cost of revenue line item for these AI natives, especially when they scale. Maintaining flexibility across models and platforms and leveraging the most efficient model capabilities for every specific task is an existential requirement for them.

AI natives are increasingly adopting open source at every level, from multiple open source models to open agent [ harnesses ], open source vector databases and so on, to avoid lock-in and deliver compelling unit economics for their customers as they go into hyper growth mode themselves. Building a truly open, fully integrated platform is hard, and that difficulty is precisely what makes our platform durable. The market is validating what we have long believed: that infrastructure without intelligence, without orchestration and a full cloud platform is insufficient for what AI native workloads actually demand.

Agentic applications require intelligence, CPU-based execution, stateful memory, managed high-performance storage and databases, and orchestration, all working together natively, not assembled after the fact. Our integrated stack is built for exactly this architecture, and that's what enables us to deliver differentiated performance with compelling unit economics that matter to our AI native customers. Leading independent benchmarking company Artificial Analysis recently reported that DigitalOcean delivers the #1 output speed for leading open source models like DeepSeek version 3.2 and Qwen version 3.5, the 397 billion parameter model, across all cloud providers. Our 230 output tokens per second on DeepSeek V3.2 is 3.9x faster than one of the leading hyperscalers. This wasn't just a hardware story.

It required co-designing every layer of the stack, from NVIDIA's Blackwell Ultra GPUs to custom vLLM optimizations, including speculative decoding and kernel fusion, which is exactly the kind of deep engineering that differentiates a modern AI native platform from GPU farms and inference wrapper providers. The clearest validation of our strategy is the caliber of customers choosing to build and scale on us. We recently onboarded Cursor, one of the fastest-growing AI applications ever built, for production inference, model fine-tuning and core cloud services. Ideogram, a leading text-to-image foundation model company, migrated production inference from a hyperscaler to our AI infrastructure, running their own model weights at scale.

And Higgsfield AI, serving over 20 million creators with cinematic video generation, runs its full multi-model workflow on our integrated stack. Three different AI native companies in hyper-growth mode, running their production AI on our AI native cloud. And our pipeline continues to grow in both volume and strategic scale. Let me spend a couple of minutes on our competitive positioning with our new platform announcement. At a high level, unlike the hyperscalers, we are more open, purpose built for modern software without the legacy complexity of enterprise workloads designed for the previous era. Compared to the GPU Neoclouds, which are optimized for large training clusters, we are a full stack inferencing and Agentic platform.

And finally, while the inference wrapper providers offer tokens, we offer the breadth AI-native builders need to build complete modern software without forcing them to stitch a platform together themselves. What makes our position genuinely durable are three compounding layers. Number one, our AI middleware: the [ Plano ] data plane and inference router, built on technology from our recent Cataneo acquisition completed last quarter, sits between the agents and the underlying infrastructure, intelligently steering workloads across models, regions and accelerator types based on cost, latency and availability trade-offs in real time. Second, our managed agents platform extends computing primitives up the stack with secure runtimes, execution sandboxes, background workers, observability, orchestration and much more.

All purpose-built for Agentic applications to be built and scaled on this platform. And the third is data gravity: through managed databases, vector stores, caching and object storage, production data lives inside our DigitalOcean AI native platform. Models and GPUs are not sticky; data is. For AI natives, the decision of where to build is rarely about a single feature. It is about platform breadth, quality of abstractions, openness of the platform and the absence of friction. Delivering that requires deliberate, integrated engineering across every layer from silicon to agents. It needs an AI native cloud, which is what DigitalOcean has been building toward with millions of R&D hours over the last dozen-plus years.

The market opportunity is generational, and we are poised to earn more than our fair share. Global inference traffic will grow 10x by 2030, and Agentic workloads consume 15x more tokens than human users, a multiplier that compounds as AI matures. And we're already seeing it in our numbers. Our AI customer ARR is growing 221%, and over 80% of that is coming from inference services and core cloud, not Bare Metal. These are companies running full stack production AI on DigitalOcean, and they're accelerating. We are investing to meet this growing customer demand and to seize the opportunity in the massive inferencing and Agentic markets.

In Q1, we raised $888 million in equity, proceeds that enable us to expand our data center and GPU capacity to meet our growing customer demand while strengthening our balance sheet. Matt will provide more details on the equity raise and our capital strategy later in our comments. But let me give you a brief highlight of our expansion plans, starting with our existing committed capacity. We remain on track to deliver our previously communicated 31 megawatts as planned in 2026, with our Richmond facility beginning to ramp revenue in March. On top of this, we have now secured approximately 60 megawatts of incremental data center capacity across four locations, capacity that will ramp revenue throughout 2027.

This brings our total committed data center capacity to approximately 135 megawatts. And given growing customer demand, we continue to actively pursue additional capacity beyond this new 60 megawatts, capacity that would be targeted to come online in 2027 and 2028. The opportunity in front of us is enormous, genuinely once in a generation. Every data point we see, from our growing customer pipeline, to the demand signals we are seeing and hearing from our largest customers, to the reactions and interest in our AI native cloud, reinforces that conviction.

As we scale our business to meet this opportunity, we will continue to make the right long-term business decisions to seize this moment while building a durable and profitable growth engine. With momentum continuing to grow, we are further raising our near- and medium-term outlook for the full year 2026. We now expect revenue growth of approximately 25% to 27% year-over-year with an exit growth rate approaching 30%, a full year ahead of the guidance we provided just last quarter. This accelerated 2026 growth is based solely on the performance of our previously committed capacity and doesn't include any projected revenue uplift from the newly committed 60 megawatts.

We expect to deliver this 2026 growth with high 30s adjusted EBITDA margins and 9% to 12% adjusted free cash flow margins, which does include some start-up costs for the new 60 megawatts. Looking further out, we now expect 2027 revenue growth of 50% or more, up from our 30% guidance last quarter, with approximately 40% adjusted EBITDA margins and high teens adjusted free cash flow margins. This combination of rapid revenue growth and truly durable profitability puts us in rarefied company. DigitalOcean is one of just a handful of names across a broad set of software and AI infrastructure players delivering both attractive GAAP operating margins and material revenue growth.

As I shared on our last call, growth and discipline are not trade-offs for us. They're both operating principles. And our execution of these principles is clear in our results. With that, I will turn it over to Matt to walk through our Q1 results and our updated guidance in more detail. Matt, over to you.

Matt Steinfort: Thanks, Paddy. Good morning, everyone, and thanks for joining us. As Paddy just shared, we had a very good quarter. In my comments, I will review the financial results in detail, walk through our recent balance sheet and capital allocation actions and then provide an update to our near-term and medium-term outlooks. Starting with Q1, our results were very strong, and we exceeded the guidance we last provided on all key metrics. Q1 revenue was $258 million, up 22% year-over-year, above the top end of our recent guide. The vast majority of this Q1 revenue beat came from strong retention in our top [ DNE ] cohorts and from expansion in our top cloud and AI native customers.

The Richmond data center, which began ramping revenue in March, contributed less than $500,000 of revenue and less than 20 basis points of year-over-year growth in Q1. Our top customers continue to drive our growth. Our $1 million customer ARR reached $183 million, growing 179% year-over-year. AI customer ARR reached $170 million, growing 221% year-over-year. And we continue to deliver both durable and profitable growth. First quarter adjusted EBITDA was $105 million, up 21% year-over-year with an adjusted EBITDA margin of 41%. GAAP operating income was $37 million, with an operating income margin of 14%. Adjusted operating income was $64 million, with an adjusted operating income margin of 25%.

Trailing 12-month adjusted free cash flow was $171 million or 18% of revenue. Trailing 12-month adjusted free cash flow less lease principal payments was $154 million or 16% of revenue after including $17 million in financed equipment principal payments over the last 12 months. Next, I'll spend a few minutes on the recent equity raise and what it means for our financial profile and for our capacity plans. In Q1, we raised $888 million in equity, and we have already put the proceeds to work across two important priorities. The first priority was strengthening the balance sheet. We repaid our full $500 million Term Loan A, saving roughly $50 million per year in cash interest and mandatory prepayments.

We intend to use a portion of the remaining cash to retire the outstanding $312 million 2026 convertible notes when they mature. Collectively, these actions result in a flexible balance sheet with no material maturities until 2030. The second priority was expanding capacity to meet demand. As Paddy shared, we have secured approximately 60 megawatts across four new locations, an 80% increase in our committed capacity. This capacity is projected to begin ramping revenue over the course of 2027. While there won't be any 2026 revenue impact, the build-out of some of this capacity is likely to start in late 2026, which will impact 2026 cash flow and margins.

We expect the CapEx per megawatt in this new capacity to be higher than for the equipment ordered last year for the 31 megawatts. The increase is driven both by the rising component costs the entire market is seeing and by the higher-cost, higher-token-capacity equipment that we plan to install. We expect the incremental ARR per megawatt to be higher as well. And importantly, we expect to generate the same or higher return on investment in these new data centers. We are likely to continue to align the timing of our investments with revenue by financing a material portion of the equipment for these facilities.

With all of this, we expect to exit 2026 at approximately 3x net leverage with no material debt maturities until 2030. Looking forward, we are again raising our near-term and medium-term outlook. The strong Q1 retention and growth in our top cloud and AI native cohorts has continued in Q2. For the second quarter of 2026, we expect revenue of $272 million to $274 million, representing 24% to 25% year-over-year growth. We expect second quarter adjusted EBITDA margins in the range of 37% to 38%, which is $102 million at the midpoint, up 14% year-over-year. We expect non-GAAP diluted net income per share of $0.20 to $0.23.

This is based on approximately 121 million to 122 million weighted average fully diluted shares outstanding. Note that our shares outstanding projection includes a benefit from the projected anti-dilutive impact of the capped call that we purchased along with the issuance of our 2030 notes. For the full year 2026, we are again meaningfully raising our outlook. We now expect full year 2026 revenue of $1.13 billion to $1.145 billion, representing 25% to 27% year-over-year growth, with an exit growth rate approaching 30% in Q4. Again, this does not include any projected revenue from the newly committed 60 megawatts. We expect strong full year adjusted EBITDA margins of 37% to 39%, which is $432 million at the midpoint.

Projected adjusted free cash flow margin will be in the range of 9% to 12%, which includes a roughly $100 million cash flow impact in 2026 from projected nonrecurring start-up costs for some of our newly committed capacity. Without these costs, adjusted free cash flow margin would be roughly 18% to 21% for the year, above prior guidance. We expect adjusted free cash flow margin less equipment finance principal payments to be slightly positive for 2026, including the impact of the $100 million in costs for 2027 capacity. We expect full year non-GAAP diluted net income per share of $1.10 to $1.20 on 118 million to 119 million weighted average fully diluted shares outstanding.

This is an increase to our prior guidance despite the equity raise, as the interest savings from retiring our Term Loan A more than offset the impact of the higher share count. We are also increasing our medium- to long-term outlook. The 30% 2027 revenue growth outlook we provided last call was based solely on the 75 megawatts of capacity that we had active or under contract at that time. With approximately 60 megawatts of additional committed capacity projected to begin generating revenue over the course of 2027, we now expect 2027 revenue to exceed $1.7 billion, representing full year growth of 50% or more year-over-year.

We will deliver this growth while working to make smart investments, generate attractive returns and maintain a strong and flexible balance sheet. Our margin outlook for 2027 is healthy. We project approximately 40% adjusted EBITDA margins and high teens adjusted free cash flow margins. While we are excited by our progress and the increased growth outlook, we're not stopping there. We continue to actively look for opportunities to further accelerate durable and profitable growth. With that, I'd like to turn it back over to Paddy.

Padmanabhan Srinivasan: Thank you, Matt. Before we move to Q&A, let me recap what we shared today. First, our momentum has never been stronger. Our $1 million-plus customer ARR reached $183 million, growing 179% year-over-year. Our AI customer ARR reached $170 million, growing 221%, and over 80% of that is coming from inference services and core cloud, not Bare Metal. We are an AI-native inference cloud, not a GPU landlord. Second, we launched the DigitalOcean AI native cloud. We unveiled our full platform last week at the Deploy conference. We acquired Cataneo to accelerate our open source AI stack. We landed multiple marquee AI-native customers, including Cursor. Our differentiation is clear. The pipeline is deep and the wins are real.

We are the AI native cloud. Third, we are investing to meet our customer demand: $888 million raised, 60 megawatts of incremental capacity committed. We are building for 2027 and beyond with disciplined capital allocation and a strengthened balance sheet. Finally, we again raised our near- and medium-term outlook: projected exit 2026 revenue growth approaching 30%, accelerating to 50% or more revenue growth in 2027, attractive margins and a flexible balance sheet. We continue to build a durable and profitable growth engine. The inference and Agentic economy is real. The demand is real. And DigitalOcean, with its AI native cloud, is purpose-built for this opportunity. With that, let's open it up for questions.

Operator: [Operator Instructions] Your first question comes from the line of Kingsley Crane of Canaccord Genuity.

William Kingsley Crane: Needless to say, congrats on the momentum. You've earned it, and you continue to earn it. It's great to see. One of the ideas over the past couple of weeks is that the mix of CPU and GPU should be closer to 1:1 with Agentic workloads compared to pure LLM calls. And you talk about that new era of thinking and doing in your deck, which was really well prepared. Just curious, how relevant is that CPU renaissance for your business given your large core cloud and CPU footprint? Just trying to think about the quantitative benefit that could create.

Padmanabhan Srinivasan: Yes. Thank you, Kingsley. Appreciate your question. Yes, I think it is unmistakable that we are moving more and more towards an Agentic era where more software is going to be rearchitected and there will be a heavy dose of autonomous agents performing tasks that were previously handled by humans. So in that era, the doing part, as I mentioned, will also require intelligence, but it is going to require a tremendous amount of computing that, until about 12 months ago, or more precisely until open [ plot ] really showed us the blueprint, we were really, as an industry, still contemplating how compute intensive it was going to be.

When I say compute intensive, it is not just CPUs, right? It is high bandwidth memory. It is advanced databases like the ones that we just announced last week. It is safe agent execution. It is orchestration between these agents. There is a tremendous amount of modern computing primitives required to orchestrate all of this. So I don't know whether the ratios that have cropped up, with, say, CPUs to GPUs going from 1:12, as we were previously thinking, to 1:1, I don't know exactly what that ratio will end up being.

But what I can tell you is that we are going to need a hell of a lot more compute to do all of these things as more software gets rearchitected over the next handful of years to be more Agentic, which requires both inferencing for the thinking part and a lot of computing for the doing part. So we are preparing for that with the new capacity that we have just taken on. All of our new data centers are deploying our full stack AI native cloud. So it is not just inferencing services; it is the full stack AI native cloud that is getting deployed in these data centers.

And we are getting ready for a compute-heavy future, and we are starting to see that in a very pronounced way from some of our advanced AI native customers as they themselves move into an Agentic Era.

William Kingsley Crane: That's really helpful. And then for either Paddy or Matt, we've been thinking about low to mid-teens revenue per megawatt for AI. You mentioned that the incremental capacity you're bringing on could be higher. And then, in addition to that, to what extent can software capabilities like the inference engine and [ French ] router, open source model adoption, and agent frameworks push that revenue per megawatt higher? I think we're all doing that megawatt math, but just curious to what extent that figure can become untethered from the peers there.

Padmanabhan Srinivasan: That's a great question. We definitely expect that we can increase that $13 million of ARR per megawatt over time. I mean, you're already seeing that non-Bare Metal is over 80% of our AI customer ARR, and that should increase the ARR by itself. We're also expecting, as you just pointed out, that there's going to be a lot of core cloud and a lot of compute that gets pulled through with that. Right now, it's still a modest amount of core cloud pull-through, and we think there's upside there.

And then to your point, all of the capabilities that we announced at Deploy, the serverless inferencing and a lot of these other capabilities, detach the pricing and the value creation from dollars per GPU hour and enable us to capture both higher revenue and higher margins with stickier services. So we're very optimistic about our ability to drive the ARR per megawatt up over time. And certainly, that's part of our investment thesis as we've taken on this incremental capacity.

Operator: Your next question comes from the line of Gabriela Borges of Goldman Sachs.

Gabriela Borges: Paddy, you started off this conversation talking about how the beat in the quarter was not driven by new capacity coming online, but rather previously committed capacity. So I'd love your thoughts on that. Talk to us a little bit about how we should think about the beat-and-raise [ cadence ]. You're already giving us visibility into 2027 based on capacity coming online. But in any given quarter, what levers do you have to beat and raise? And maybe if you could comment on the pricing dynamics and the levers you can pull on pricing within that.

Padmanabhan Srinivasan: That's a great question. I think when we guided to 2026 and outlined the pace at which capacity was going to come online this year, there were a number of assumptions we had to make that gave us the ability to have very strong confidence in the guidance we were providing. One was the timing of the facilities coming online. The second was our ability to sell into that capacity as it came on. And the third was the pricing at which we're selling into that capacity.

And if you think about all of those dimensions, again, when we provided that guidance, which was late last year or early this year, we had to make sure that we had enough cushion. And what we're finding is we're doing pretty well on all three of those dimensions. The Richmond data center came online. We had said second quarter; it came online in March. It didn't contribute much in the first quarter, but it's online and ready to go ahead of what we had said. And we're able to sell into it, I'd say, on a very appropriate and aggressive timeline, which is really good.

And then, as you're seeing in the market, the pricing per GPU hour, and even for services, right now is not seeing any kind of price compression. In fact, we're seeing increases in the prices for [ H100s ] and [ H200s ] and some of the legacy gear. So I'd say we have sufficient ability to continue to beat and raise. We just outlined the incremental 60 megawatts for next year. And we're taking a very similar approach, which is we'll be cautious about our expectations around timing of delivery, we'll be cautious about expectations of how long it takes to sell into it, and we'll be cautious about the pricing that we get, and then we'll work to exceed that.

Gabriela Borges: Matt, maybe I'll pick up on some of those comments on being cautious. So I think we can all agree that we're pretty early in what is going to be an incredible product cycle. At some point, the product cycle will peak. So I guess the question is for both of you: what are the demand signals that you're watching to figure out whether 2027, growing north of 50%, is the peak growth rate? Does it accelerate from there? Does it normalize and come down? What are some of the metrics that we could potentially be tracking from the outside? And what do you track internally?

Padmanabhan Srinivasan: Yes, I can start at a high level, and then I'll let Matt comment on your specific 2027 question. So we all agree, Gabriela, that this is such a tectonic shift in how software is built and delivered. And one thing that I also want to highlight here is that inferencing and agentic workloads will scale very differently compared to training. Training is a one-time, almost episodic turn-on: the entire cluster comes online and just stays static from a workload perspective.

Inferencing and agentic workloads, by contrast, have more cloud-like characteristics in terms of how the workload ramps, although the gradient of the ramp has been significantly steeper than we have ever seen with traditional cloud software. So a lot of our confidence is coming from observing our big marquee AI-native customers and seeing their workload growth and, hence, the inferencing demand that they translate onto us and our platform. So in terms of the product cycle peaking, I think we are still a few revisions away, certainly with our products and also as an industry, from that peak cycle.

[ Openly ], I have to remind everyone, is barely 100 days old. And since then, there have been a few other personal productivity agents, like the Hermes agent and a few others, that have come out, and the whole industry is now figuring out what agent harnesses should look like. It is still very, very early days for the agentic architecture. So I expect the product cycle refresh to continue for quite a bit into the next several quarters before we can say, okay, we now have a blueprint for how these modern autonomous systems are going to be built and operated at scale. So I think we still have a lot of innovation ahead of us.

And what gives us a lot of confidence is having this front-row seat working with these marquee AI-native customers; it gives us a tremendous opportunity to learn about their application patterns. And this luxury is available to us because we are not just a Bare Metal provider.

These customers want us to be in the room where they are solving these problems, and that's how we were able to build a lot of the things that we showed last week in terms of innovation, like the intelligent routing and many of the caching techniques that made us #1 in DeepSeek and Qwen token throughput and time to first token. It gives us a front-row seat and a co-invention opportunity to do this alongside our customers. So I definitely feel like the product cycle is not going to peak anytime soon.

Matt Steinfort: And I think the best metric to watch, which we're watching, is ARR per megawatt. If you think of token efficiency as one of the primary differentiators, your ability to provide value to your customers comes down to how much revenue you can get for those tokens, how efficiently you can provide them, and how sticky the services you're providing are. That should all translate into higher ARR per megawatt, which is why we've introduced that metric. We track it internally, it's all about optimization for us, and that's what we would point the market to watch as well.

Operator: Your next question comes from the line of Mark Zhang of Citi.

Mark Zhang: So very nice to see the growth of the non-Bare Metal ARR this quarter. Just want to dig into some of the dynamics and inputs there. I wanted to get a sense of the contributions from new lands versus conversions of existing Bare Metal customers. And then how should we think about the pace of the mix shift going forward? And can you give us a sense of the ASP uplift when you convert from Bare Metal?

Padmanabhan Srinivasan: Thank you, Mark. Your line was a little choppy, but I think I got the essence of your question. So in terms of the mix of the customers, it's a healthy mix of AI-native customers that are new to our platform, that are not just consuming core AI services but, by the nature of their inferencing workloads, also use storage systems and database systems and, increasingly, core computing primitives. But we also have some of our existing digital-native enterprise customers starting to ramp up their AI innovation and AI workloads. So it goes both ways, and we are super happy to see that.

And in terms of the Bare Metal consumption, pretty much most of the customers that come to us now are coming because they see this rich set of inferencing entry points. Last week, we announced serverless inferencing, dedicated inferencing, batch inferencing, and things like that. Increasingly, customers are realizing, especially the AI natives, that they were forced to deal with all this complexity over the last couple of years not because they wanted to, but because they had to, because there were very few vendors who were able to provide this kind of kernel optimization and performance enhancement using software and hardware co-design.

But now that these kinds of capabilities are available out of the box from our AI-native cloud, we are seeing a lot more appetite from our customers to come in at a higher altitude in our platform, and we are not having to sell Bare Metal at all. In fact, we don't even have that as part of our standard pitch.

Matt Steinfort: And from a timing standpoint, this is one of the benefits of our consumption-based model, where we're not locking in Bare Metal prices for 4 and 5 years. If you noticed in the materials we provided, Bare Metal not only decreased as a percentage, but it actually decreased in absolute dollars of AI customer ARR. That's because, as these customers come up for contract renewal, we have the opportunity to resize and reconfigure that capacity. If we want to make that available to serverless inferencing, where we know we'll earn a higher return than Bare Metal, that's what we do.

And so we have the ability to steer that percentage down by not consuming our scarce capacity for Bare Metal services. So not only are new customers not asking for it, but for the customers that are on it right now, we can rotate them onto the new services or we can repurpose the capacity for higher-margin services, and we control that.

Mark Zhang: No, that's terrific. And then maybe a follow-on. It's terrific to see the new five layers of the platform that you guys presented last week. How should we think about changes to the go-to-market from here? Obviously, there's a lot to sell; there are many more products for customers to consume. How are you thinking about the go-to-market partnerships and how you efficiently land new customers on this new model?

Padmanabhan Srinivasan: So our go-to-market over the last several quarters has been aimed at getting marquee AI-native logos. And that's how we have landed some of the customers that I was so proud to announce today. We just have to scale up what we are already doing. So just as a reminder, we have a very small but mighty team of AI-native-focused sellers that are quite capable of selling our AI-native cloud stack. On top of that, we also have a very focused startup ecosystem team that finds high-quality AI-native companies in Silicon Valley and nurtures them through their growth phases.

We also have the tremendous luxury of having perhaps the best product-led growth machine, which keeps growing in strength. So we get a tremendous amount of traffic and volume through our product-led growth flywheel, which includes a heavy dose of AI-native customers that absolutely love the simplicity and the absence of friction in our platform, which enables them to just come and try our platform without any human intervention. So we have multiple front doors as a way to solicit customer entry into our platform.

So we'll be fortifying some of those things, and we have a very strong partnership team that enables us to build relationships with the various frontier model and open source model companies and the rest of the ecosystem.

Operator: Your next question comes from the line of Jason Ader of William Blair.

Jason Ader: Paddy, you guys are exploiting a gap in the market right now, especially versus the Neoclouds, but the Neoclouds are all messaging a shift to a full-stack approach and a focus on inferencing. So I guess my question is, how sustainable is your differentiation relative to the Neoclouds, and what drives that?

Padmanabhan Srinivasan: Yes. Great. Thank you, Jason. I think the market opportunity is just huge and tremendous, right? We feel that the Neoclouds adding software capabilities is a great validation of our strategy, and we've been saying that for a long time. But we are in fundamentally different businesses than the Neoclouds. They're training-first, and that's a great model. And they have a small number of highly concentrated customers with take-or-pay agreements, and that type of contract needs a tremendous amount of infrastructure and discipline and execution to pull off. So it is a significant heavy lift to deliver on these massive hyperscaler offtake contracts.

So I like our chances of continuing to innovate on the software stack. As I said, it takes a lot of hard work to build a well-integrated stack like the one that we announced last week. It is not just a stack that lives on a PowerPoint slide. You can log into cloud.digitalocean.com and see how these layers work together. We are also incredibly proud of the fact that we have made the stack completely open, with open source options at every single layer. That is a pretty big deal that I want everyone to appreciate, because our target customers are AI-native customers, and they feel very uncomfortable boxing themselves into a single LLM provider.

That is just not how their businesses will scale. And for them, having open source work as well as closed source as part of the native stack is very, very important. So driving this kind of integrated, open-source-enabled stack is really hard. And I like our focus. I like our discipline in doing this. And the market opportunity is going to be so big that I feel very, very convinced that if we focus on learning and understanding our customers better than anyone else and translate that to product innovation, everything else is going to take care of itself. I keep telling my teams: be extraordinarily customer-obsessed and competitor-aware, not the other way around.

We should obsess over our customers first so that we can build the best product for them while being aware of competition, and not the other way around. So I feel we have a lot of room to run with this strategy.

Jason Ader: Okay. Great. And then one for Matt. Matt, for 2027, you talked about adjusted free cash flow margin in the mid- to high teens, I believe. Could you give us a sense of what it would be, including lease payments?

Matt Steinfort: That's a great question, Jason. It's hard to answer, though, because it will depend entirely on the lease terms that we have. So whether we lease over 4 years or 5 years or a longer period, and it will also depend on the mix of what we lease versus what we pay for upfront. That's why we're not guiding to that at this point. What I can tell you is that we continue to make very disciplined investments, we've created a lot of balance sheet flexibility for ourselves with the equity raise. We've got a lot of options at our disposal. And we're very excited by the return on investment that we're underwriting for these new facilities.

So we'll continue to operate with discipline, but we can't provide specificity on the -- what the lease payments are going to look like in 2027 because we don't know yet.

Operator: Your next question comes from the line of Wamsi Mohan of Bank of America.

Wamsi Mohan: Paddy, when you look across your customer cohorts, how much penetration are you seeing of AI-driven workloads in the $1 million-plus and the $500,000-plus customer cohorts? And because of AI, do you expect to have an even higher chunk of customers graduating from the $500,000 to the $1 million-plus cohort over the next few years? And I have a follow-up for Matt.

Padmanabhan Srinivasan: Yes, Wamsi, you're absolutely right. I think the short answer is yes to both. We have a good mix of AI as well as cloud-native customers in the $500,000 and $1 million cohorts. And yes, it is a very important motion that we drive internally to look at every $100,000 customer and drive our teams to find out what is blocking that customer from being a $500,000 customer. And similarly, we look at every $500,000 customer and find out how we can make them a $1 million customer, and so forth. So with the increased adoption of AI in these customer cohorts, we fully expect those numbers to keep going up and to the right, for sure.

Operator: Your next question comes from the line of Tom Blakey of Cantor.

Thomas Blakey: Congratulations on the great results here. Maybe a couple of questions on my side. Paddy, we've talked previously about 3 to 4x demand relative to your 75 megawatts of capacity. It was really impressive to see you announce Cursor here, a great win. Congratulations. Just wondering if you could update us on the framework of what you're seeing there in terms of your customer selectivity, and maybe even turning some customers away in this type of market? And then secondly, for Matt and maybe the team, on CapEx per megawatt, I think investors would love a little bit more color in terms of how much higher this can go for the 60 megawatts.

And would it be difficult to just upgrade the prior capacity, from a software perspective, to the AI-native cloud capacity and maybe pull some of that in? That would be helpful.

Padmanabhan Srinivasan: Yes. On the last thing, it is hard to have a non-AI data center deployed with AI hardware because of the limitations; especially since all of the new ones that we're deploying are direct liquid-cooled, and the hardware specs are just different, Thomas. So that's that. And going back to your first question around the pipeline coverage and how we allocate capacity, that is a new muscle that everyone in the industry is learning, right? Our pipeline, as I mentioned several times, is 3 to 4x, if not more, the actual capacity that we have.

Which is a great problem to have, but it is a problem that we are very, very thoughtful about resolving, because we have to make some bets, just like our customers are making bets on us. We have to make bets on how we want to allocate the capacity. Because, as I said on the last call, if we decide to just sell the capacity to the first or the biggest or the loudest customer, we'll be all done. We can go home, and the capacity will all be taken.

But we have an intention to run this like a cloud, right, where we want as many customers as possible so that we can learn, we can build a better product, and we can build a bigger competitive moat. Platforms that only have a few concentrated customers simply don't have the luxury to learn and innovate as fast as we can. So it's a balancing act that we are trying to figure out, but so far, so good with the types of customers we're bringing on board.

Matt Steinfort: In terms of the cost of the CapEx, it's certainly going to be higher than what we experienced for the 31 megawatts where equipment was ordered in 2025. You're seeing, broadly across the industry, that component costs are going up. But more importantly for us, we're putting in gear that has higher token capacity and capabilities. And we expect to get the same or higher ROI on the investments that we're making. So we'll invest a bit more. We see a phenomenal opportunity in front of us. We've got a very differentiated position. We're going to get more capacity out of the investments we make, and we're going to earn similar or better returns on those investments.

Operator: Your next question comes from the line of Josh Baer of Morgan Stanley.

Josh Baer: Congrats on a wonderful quarter. I was hoping you could double-click a little bit on GPU and other pricing trends that you're seeing in the spot market. And wondering if you can quantify the portion of your business that's on demand and exposed to spot versus what portion is contracted and has fixed pricing? And any way that you can characterize the benefit in the quarter or the impact of the 2026 guide from spot market pricing?

Padmanabhan Srinivasan: It's interesting, Josh, that you point to the spot pricing. So we have a small portion right now of on-demand, because most of our capacity is locked up with a customer. But the core of your question is how much exposure we have to the ability to raise GPU prices along with the market. Because we don't have 4- or 5-year contracts with our customers, if we're locked into a customer, it may only be for 3 months or 6 months or a year. And as I said earlier on the call, as those contracts come up, we can rotate.

One, we can just raise the price on that customer to whatever the current market prevailing price is. Two, we can rotate that capacity completely out: if it's a GPU-per-hour price, we can say we're not going to sell that capacity in that model any longer, and if you're interested, you've got to take our on-demand pricing or serverless inferencing. So we have the ability to adjust to the market, I'd say, probably more readily than maybe some of the other folks in the industry. So we feel very, very good about our ability to adapt on pricing.

And as I said to Gabriela's question, that ability, and our ability to execute on it, is part of the reason why we're able to raise the guidance for this year without getting any benefit from the incremental capacity that we just announced. So that's a great question.

Operator: Your next question comes from the line of Radi Sultan of UBS.

Radi Sultan: If you think about adding more capacity, and as the existing AI customer cohorts scale, how should we be thinking about the gross margin profile of this incremental capacity you're looking to add, once it's fully utilized? And you mentioned, Matt, the increased component costs. But what are the key puts and takes we should be keeping in mind, just on the margin side of things?

Padmanabhan Srinivasan: I think you'll note in our materials that we highlighted non-GAAP operating margin. And the reason that we did that is because, again, if you think of where the industry is going and how different this business is from the business that we had several years ago, gross margin is one input, but operating margin is a better, more holistic view of what's going on in terms of the overall profitability, because the revenue growth is so rapid, and while it's certainly at a lower gross margin, it comes with tremendous operating expense leverage. And so the operating margins are very strong and very compelling, and we expect those to continue to be very attractive.

Will we see a small decrease in operating margin as we invest to accelerate our growth, given some of the same timing-related issues with bringing on new capacity? We certainly will. But if you look at the rate of revenue growth, if you look at the strong operating margins, if you look at the fact that we've been very, very disciplined with cash flow, and that we're earning very good returns, I think you'd agree that we're positioned very, very well for very durable and profitable growth.

Operator: Your next question comes from the line of Patrick Walravens of Citizens.

Patrick Walravens: Amazing results, you guys, congratulations. So Paddy, when I was at your Deploy conference, the speaker got interrupted by applause like five or six times. But two of those times were when you talked about the inference router and also when you guys talked about support for the latest DeepSeek model. So can you just talk a little bit about why your customers are so enthusiastic about that?

Padmanabhan Srinivasan: Yes. Thank you, Patrick. And first of all, thank you for coming to Deploy last week. So you bring up a really, really important point. And for those of you who have not seen the keynote video recording from last week, I encourage you to please do that. The two points that Patrick just mentioned are really important, because AI natives are doing something which is incredibly interesting. Number one, they are all running multiple models, right? Because, as I mentioned, this is a cost-of-revenue line item for them, and it would be crippling if they were just beholden to one closed source model. Last week, there were two different models that were announced.

One is DeepSeek version 4, and the other one was the latest version from OpenAI. And the difference in price was 10x. In terms of the output tokens, it was literally $3 versus $30. So AI natives are doing three things: one, they are all going multi-model. Two, they're running a lot of open source. And three, many of these AI natives are also running their own version of a model, which is distilled from an open source model or something like that. So the intelligent router becomes extraordinarily important, so that the router can find the right model for the task you're assigning.

So we showed a demo, which was super compelling, where it showed better performance at lower [ TCO ] per token by routing the incoming prompt to the right model. And the second thing Patrick mentioned is that there was a lot of applause for our DeepSeek support, which is fairly obvious, because AI natives are embracing open source up and down the stack in a very pronounced manner. So that's why it is really important to understand that our target market is very different. These are AI natives that are building and monetizing software, and for them, multiple models, open source, and having control over the destiny of their intelligence is an existential thing.

Patrick Walravens: Great. And Matt, if I could ask you a follow-up. Cursor is an amazing win, congratulations. We've all seen the news about SpaceX having an option to buy it. So just how did that fit into your guidance? How did you think about that?

Matt Steinfort: Cursor is a fantastic customer. And as you said, it's a great indication of the quality of the platform. And we're really excited, based on the fact that this is not a Bare Metal contract; they're using our inference services. They've made commitments around NFS and some of the core cloud capabilities, so we're very encouraged by that, and we have a fantastic relationship with them. We haven't predicated any of our long-term guidance on any single customer.

We have, as Paddy said, strong demand for the capacity that we have available, and we're very confident they'll be a good part of that, but we're not basing any of our forecasts on a specific customer.

Operator: And your last question comes from the line of Raimo Lenschow of Barclays.

Raimo Lenschow: Two quick questions. Going back to Gabriela's point in terms of how big the market is: at the moment, it looks like most of the work is getting done on training models, and inference is only starting. Paddy, from your perspective, which inning are we in on inference? Because it seems very, very early still to get an idea of how long this can go on for. And then, Matt, for you, the one thing that comes up in the market is a lot of capacity from new data centers, et cetera.

You're not building 100,000-GPU data centers; yours are much smaller. But what's the constraint on finding sites to go beyond the capacity you announced today?

Padmanabhan Srinivasan: Thank you, Raimo. So to answer your question succinctly, since baseball season is just starting: from an inferencing point of view, I would say we are probably in the top of the second inning. And on agentic, we are just at the national anthem. It's just getting started. So I think there's a lot of room for a lot of innovation. And the one thing that I'm super proud of with all the announcements we made last week is the 15 new product launches. Not just features, 15 new product launches. The velocity and the intensity from our engineering team is going to make a difference in terms of our ability to establish a leadership position.

And then, Raimo, what was the second part of your question?

Raimo Lenschow: [indiscernible] how hard is it, yes?

Matt Steinfort: Sorry. We've been able to secure the data center capacity that we've been targeting. We're still in active conversations on additional capacity beyond that, both for '27 and '28. And we've not had an issue getting the capacity that we've been trying to track down.

Operator: That concludes our Q&A session. And this also concludes today's conference call. Thank you for your participation. You may now disconnect.