Date
Thursday, May 7, 2026 at 5 p.m. ET
Call participants
- Chief Executive Officer — Michael Intrator
- Chief Financial Officer — Nitin Agrawal
Takeaways
- Revenue -- $2.1 billion, representing increases of 32% sequentially and 112% year over year.
- Customer Bookings -- Over $40 billion of new commitments signed in the quarter, driving contracted revenue backlog to $99.4 billion, up nearly 50% sequentially and close to 4x year over year.
- Adjusted EBITDA -- $1.2 billion, reflecting 91% year-over-year growth and an adjusted EBITDA margin of 56%.
- Adjusted Operating Income -- $21 million and an adjusted operating margin of 1%; management stated this is "expected to be its low point" as margin expansion is anticipated in upcoming quarters.
- Net Loss -- $740 million, compared to $315 million in the same period a year ago.
- CapEx -- $6.8 billion in the quarter with construction in progress remaining roughly unchanged sequentially.
- Active Power -- Surpassed 1 gigawatt deployed; contracted power now exceeds 3.5 gigawatts, with more than 400 megawatts added this quarter and the substantial majority expected online by year-end 2027.
- Cash & Liquidity -- $3.3 billion in cash, cash equivalents, restricted cash, and marketable securities as of March 31.
- New Investment-Grade Debt Facility -- Closed an $8.5 billion Delayed Draw Term Loan 4.0 Facility, the first investment-grade, HPC infrastructure-backed loan, rated A- equivalent, at an implied cost of below 6%.
- Total 2026 and 2027 Revenue Backlog Recognition -- 36% of backlog expected to be recognized in the next 24 months; 75% in the next 4 years.
- Customer Diversification -- 10 customers now committed to spend at least $1 billion each, including deals with Anthropic and Meta ($21 billion agreement announced in April).
- Enterprise & New Vertical Penetration -- Financial services backlog is approaching $10 billion, led by Jane Street and new customers such as Hudson River Trading.
- Pricing & Demand -- Average pricing for A100, H100, H200, and L40 GPUs increased sequentially, with near-term capacity largely sold out across the fleet.
- 2026 and 2027 Run-Rate Revenue Guidance -- Management raised the floor for 2026 exit annualized run-rate revenue to $18 billion and reaffirmed more than $30 billion for 2027, with over 75% already contracted, excluding renewals.
- Component Availability for 2026 Revenue -- Management stated, "the overwhelming majority" of required components and infrastructure for 2026 are already secured.
- Margin Dynamics -- Gross and operating margin currently pressured by timing lag between infrastructure ramp and revenue recognition, with contribution margins expected to normalize in the mid-20s% after three months of deployment.
- Q2 and Full Year Guidance -- Q2 revenue expected at $2.45 billion to $2.6 billion and adjusted operating income at $30 million to $90 million; full year 2026 revenue reaffirmed at $12 billion to $13 billion and adjusted operating income at $900 million to $1.1 billion.
- CapEx Guidance -- Q2 CapEx expected between $7 billion and $9 billion; full year CapEx forecast increased to $31 billion to $35 billion due to higher component pricing.
- Interest Expense Guidance -- Q2 interest expense is anticipated to range from $650 million to $730 million, reflecting recent growth in debt balances.
- No Material Debt Maturities Until 2029 -- Other than self-amortizing contract-backed debt and OEM vendor financing.
Risks
- Operating expenses rose to $2.2 billion, driven by infrastructure scale-up and higher sales, marketing, and G&A spend, with net loss widening to $740 million.
- Interest expense rose to $536 million from $264 million year over year due to increased debt associated with scaling infrastructure.
- CapEx guidance raised on the low end to reflect "increases in component pricing," as acknowledged by management.
- Gross and operating margins pressured in the near term by negative contribution margins during initial deployment phases, as explained by management.
Summary
CoreWeave (CRWV) reported record customer bookings, bringing contracted revenue backlog to nearly $100 billion, with 36% expected to be recognized within the next 24 months and 75% within four years. The quarter marked milestone partnerships with Anthropic and Meta, further diversifying CoreWeave's customer base and securing substantial, multi-year revenue visibility across both hyperscalers and emerging enterprise verticals. Management executed a first-of-its-kind, investment-grade, HPC-backed $8.5 billion debt facility, lowering the company's weighted average cost of debt and securing all required capital for current deployment plans. Despite a wider net loss and increased CapEx, guidance for 2026 and 2027 revenue and margins was reaffirmed or raised, underpinned by strong demand, effective supply chain execution, and high customer commitment ratios.
- The company exceeded 1 gigawatt of active power and reached more than 3.5 gigawatts of contracted power, positioning it to achieve more than 8 gigawatts of active power by 2030.
- CoreWeave introduced product innovations including Trust Center, Flex Reservation, and Spot pricing, with the latter two offerings immediately oversubscribed upon launch.
- Management emphasized that gross margin fluctuations are driven by timing of new deployments rather than underlying economics, with normalization expected in upcoming quarters as capacity matures.
- Management stated that the overwhelming majority of components and power needed to deliver 2026 revenue are already secured, limiting execution risk for the current guidance period.
- Customer commitments from noninvestment-grade AI-native companies and foundation labs now represent less than 30% of the total backlog, reflecting a shift toward investment-grade counterparties.
Industry glossary
- Delayed Draw Term Loan (DDTL): A credit facility permitting the borrower to receive funds in increments over a pre-set period, typically used to match capital deployment with project milestones.
- HPC (High Performance Computing): The use of supercomputers and parallel processing techniques for solving complex computational problems at scale.
- Active Power: The amount of data center electrical capacity currently deployed and available for revenue-generating workloads.
- Reserved Instance Customers: Clients who precommit to utilize cloud infrastructure for a defined term at contracted rates, providing the cloud provider upfront revenue visibility.
- Contribution Margin: Revenue minus variable costs associated with servicing customer demand, a measure of contract-level profitability.
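The Delayed Draw Term Loan entry above can be illustrated with a minimal sketch of the draw mechanics: capital is drawn in tranches as project milestones are hit rather than funded upfront. The class name and dollar figures are hypothetical, used only to show the mechanic, not CoreWeave's actual facility logic.

```python
# Hypothetical sketch of delayed-draw mechanics: tranches are drawn against
# a fixed commitment over time, never exceeding the undrawn capacity.

class DelayedDrawTermLoan:
    def __init__(self, commitment: float):
        self.commitment = commitment  # total committed capital, $B
        self.drawn = 0.0              # cumulative amount drawn so far

    def draw(self, amount: float) -> float:
        """Draw one tranche; returns the remaining undrawn capacity."""
        if amount > self.commitment - self.drawn:
            raise ValueError("draw exceeds undrawn commitment")
        self.drawn += amount
        return self.commitment - self.drawn

# An $8.5B commitment (mirroring DDTL 4.0's headline size), drawn as
# data centers are operationalized.
ddtl = DelayedDrawTermLoan(commitment=8.5)
ddtl.draw(2.0)                 # first tranche
remaining = ddtl.draw(1.5)     # second tranche
print(remaining)               # 5.0 ($B still undrawn)
```

The point of the structure, as the transcript later notes, is that interest accrues only on capital actually deployed.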
Full Conference Call Transcript
Michael Intrator: Good afternoon, everyone, and thank you for joining us. Q1 was a transformational quarter for CoreWeave. We delivered our strongest quarter for customer bookings, signing more than $40 billion of new commitments and growing contracted revenue backlog to nearly $100 billion. We generated approximately $2.1 billion of revenue, up 32% quarter-over-quarter and 112% year-over-year, and surpassed 1 gigawatt of active power, a milestone only a handful of cloud companies have ever achieved as we convert contracted capacity into revenue-generating cloud services. We concluded Q1 stronger than ever. AI diffusion is accelerating and our addressable market, customer base and platform are all expanding rapidly. CoreWeave remains at the forefront of this generational shift.
The four themes I would like to highlight today are: one, the demand environment is intensifying, driven by the hyper growth of our existing customers and the rapid maturation of new enterprise verticals. Two, we have broadened the capabilities of our platform to serve every customer use case from training to inference to agentic workloads, positioning CoreWeave to benefit from sustained margin-accretive growth. Three, we have reached hyperscale with more than 3.5 gigawatts of contracted power, up more than 400 megawatts this quarter alone, with the substantial majority expected to be online by the end of 2027.
And four, our financing engine has taken a significant leap forward, unlocking new sources of capital across markets at a lower weighted average cost, enabling CoreWeave to secure more than $20 billion of debt and equity year-to-date. Beginning with the demand environment. CoreWeave's addressable market is expanding and customers are choosing our platform for the long term. AI workloads are moving from training to inference, agents and enterprise production across industries, which is increasingly compute-intensive. As a result, our core customers, historically hyperscalers and foundation labs are deepening their commitment to us while an entirely new wave of enterprises are arriving and demanding access to CoreWeave's platform at scale.
This trend drove record revenue backlog additions in Q1 as we signed our initial Vera Rubin deals while continuing to monetize our Blackwell, Hopper and Ampere capacity. These new customer commitments mostly contribute towards our 2027 targets. Importantly, we expect them to be highly contribution margin positive and consistent with the return profiles we have historically underwritten for new deployments. In Q1, we added Anthropic as a customer to support the development and deployment of the Claude family of AI models. We also signed multiple new orders with Meta, including the $21 billion agreement announced in early April.
The world's four preeminent AI model developers now rely on CoreWeave Cloud, as do 9 of the 10 AI leaders outside of China. At the same time, new verticals are emerging that have already reached $1 billion-plus scale. Within financial services, technology-driven firms are scaling their core machine learning workloads with CoreWeave. These are not AI labs, but rather enterprise customers who see the tangible financial impact and attractive returns that come from adopting CoreWeave Cloud. This vertical is already approaching $10 billion in our revenue backlog, driven by expanded commitments from existing partners like Jane Street, who added $6 billion of capacity in Q1, and new customers like Hudson River Trading.
Physical AI and spatial computing has also surpassed $1 billion in revenue backlog contributions. Companies pushing the frontier of world models, robotics, autonomous driving and scientific discovery are choosing CoreWeave because of our unique combination of performance, specialized infrastructure and developer tools that accelerate training and deployments. Recent new customers include World Labs, Physics X, and Sunday Robotics. Taken together with our focused execution, these dynamics are driving CoreWeave's customer diversification. Today, we have 10 customers committed to spending at least $1 billion with CoreWeave. Serving this breadth of customers and workloads requires a mix of new and prior generations of NVIDIA GPUs as inference demand skyrockets and agents enter the workforce.
As a result, demand is accelerating across the board. Average pricing for the A100s, H100s, H200s and L40s all increased quarter-over-quarter, and we remain largely sold out for near-term capacity across our fleet. This validates what we have long believed. Demand for inference-ready compute across generations of GPUs is compounding. We expect this will be durable and accretive to our long-term margins and earnings power. The scale and quality of demand deserve a moment of context. Inference is the monetization of AI, and its acceleration is driving real-world productivity gains that are justifying increased investment and broader enterprise adoption. As a result, we added more backlog in a single quarter than most AI cloud platforms have in their history.
Moving on to CoreWeave's cloud platform. We are deliberately strengthening our integrated stack to deliver the most capable AI cloud. CoreWeave empowers researchers, developers and platform engineers to rapidly iterate across training, inference and agentic workloads, accelerating the journey from experimentation to production at scale. As we do, we are introducing new capabilities to ensure enterprise customers can build on CoreWeave. For example, in Q1, we introduced CoreWeave's Trust Center to allow enterprises to productize AI quickly and efficiently without compromising their security or compliance standards. Customers rely on CoreWeave for our fully integrated set of AI cloud capabilities, not just GPUs.
To unlock the full potential of accelerated computing, they need CPUs, storage, networking, software solutions and developer tools working together across every layer. Today, more than 90% of our reserved instance customers use at least two of our products, while more than 75% use three or more. Our storage business continues to multiply quickly. We are also seeing similar trajectories in our software, CPU and networking businesses, each of which we expect to exceed $100 million of ARR by the end of the year. It is inspiring to see how customers are leveraging these products to innovate.
For example, Perplexity will power its next generation of inference workloads on CoreWeave's platform, while also leveraging Weights & Biases to help train, fine tune and manage their models. Meanwhile, Advaita Bio is using CoreWeave to accelerate pathways and single cell analysis on large-scale biological data sets, compressing workflows that previously took weeks into minutes. As we scale, we are offering customers greater flexibility in how they consume compute. In Q1, we introduced Flex Reservation and Spot pricing, revolutionizing how customers manage peak demand and unpredictable bursts, allowing them to budget more effectively while taking advantage of larger preemption windows CoreWeave provides. Both offerings were immediately oversubscribed.
We are also making cross-cloud AI easier for customers, while ensuring CoreWeave becomes the critical cloud partner. Building upon the momentum of CoreWeave's AI object storage and our Zero Egress offerings, we recently announced CoreWeave Interconnect in collaboration with Google Cloud, in addition to SUNK Anywhere and LOTA Cross-Cloud. These aim to remove the friction of managing a multi-cloud footprint, making it simpler and faster for organizations to run workloads anywhere and are already proving to be highly effective at capturing increased wallet share. And to meet customers where they are, we are beginning to offer CoreWeave Omni, enabling us to deploy and operate our full cloud stack in customers' own data centers with their GPUs.
Early interest is strong across potential cloud, enterprise and sovereign customers, and we look forward to sharing updates as it comes to market. Taken together, these capabilities reflect a simple but powerful shift. Customers come to CoreWeave to build and deploy with us. Turning to execution. We convert scarce AI infrastructure into revenue-generating AI cloud capacity quickly, reliably and profitably. CoreWeave surpassed 1 gigawatt of active power this quarter. We remain firmly on track to reach or exceed our target of more than 1.7 gigawatts by the end of 2026. We are one of only a handful of cloud platforms in history to reach this scale, and we are the only one that is purpose-built for AI.
Each new build-out is complex with five phases: power, cooling, networking, servers, and the software orchestration layer. We have navigated this complexity across close to 50 data centers, consistently delivering best-in-class dock-to-live times for customers. CoreWeave views these build-outs as investments in the communities that host them, and we work to earn our place in the regions where we operate. Our facilities serve as anchors for regional economic activity that compounds over time: grid upgrades, workforce development, sustained local investment. Each project comes with its own set of unique opportunities and challenges. We approach every market we enter with that in mind, helping our team and partners broaden their expertise and execution capabilities for the next project.
We added more than 400 megawatts of contracted power in Q1, bringing our total contracted power to more than 3.5 gigawatts. We expect the substantial majority of this contracted power to come online by the end of 2027. We added this capacity entirely via long-term leases as data center partners and the financing markets recognized a unique combination of execution, technology and contracted demand that defines our business. Looking ahead, we plan to continue to expand our contracted power footprint through leases while also accelerating our development of self-build sites, which will provide us with greater operational control and long-term financial upside. We expect our first self-build site to come online later this year.
Our lease and self-build strategies are further complemented by our strategic relationship with NVIDIA, where we continue to evaluate opportunities to accelerate the expansion of our footprint together. Our multifaceted approach uniquely positions CoreWeave to grow in a highly competitive market. Finally, touching on our financing approach. In Q1, we reached a transformational milestone that will drive CoreWeave's weighted average cost of capital lower, closing our $8.5 billion Delayed Draw Term Loan 4.0 Facility. While Nitin will speak to our broader success in the capital markets, I wanted to highlight a few elements of this specific facility.
This is the first ever investment-grade Delayed Draw Term Loan backed by HPC infrastructure, achieving an A- equivalent rating from Moody's, Fitch, and DBRS. The facility was nonrecourse to the parent. The transaction, which was well oversubscribed, was priced at a level implying a cost of less than 6%. With this financing, we have taken an important step towards accomplishing our stated goal of driving our cost of debt to investment grade. This is not an incremental improvement. This is a structural shift in how we expect to finance investment-grade customer contracts going forward in what is among the deepest parts of the capital markets.
As a reminder, we have already reduced our weighted average cost of debt by approximately 600 basis points from 2023 to 2025. As of today, we have further compressed our weighted average cost of debt by approximately 80 basis points year-to-date, while securing more than $20 billion of debt and equity capital, de-risking our execution plan and positioning CoreWeave for continued hyper growth.
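The weighted-average arithmetic behind these basis-point figures can be sketched as follows. The balances and rates below are hypothetical, chosen only to illustrate how a large low-cost facility pulls the blended rate down; they are not CoreWeave's actual book.

```python
# Rough sketch of a weighted average cost of debt calculation across
# multiple facilities. All balances and rates are hypothetical.

def weighted_avg_cost_of_debt(facilities: list[tuple[float, float]]) -> float:
    """facilities: (outstanding balance in $B, annual rate) pairs."""
    total_balance = sum(balance for balance, _ in facilities)
    return sum(balance * rate for balance, rate in facilities) / total_balance

# e.g. legacy higher-cost debt alongside a sub-6% facility like DDTL 4.0
book = [(5.0, 0.11), (8.5, 0.059)]
print(f"{weighted_avg_cost_of_debt(book):.1%}")
```

Adding a larger, cheaper facility lowers the blend even while total debt grows, which is the dynamic management is describing.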
Before I turn it to Nitin, I wanted to reiterate that we have been building this company against a clear set of defined objectives: one, continue to deliver the most technically advanced cloud platform for AI workloads, empowering our customers to innovate, build and deploy; two, diversify and grow our customer base; three, deliver best-in-class execution at hyperscale and finally, position our capital structure to scale with the opportunity. Our backlog is now approaching $100 billion, all tied to contracts that are either online today or expected to begin to come online through 2026 and 2027.
We have built a diversified customer base that includes each of the world's leading model platforms and extends to large enterprises across industries that represent tens of billions of market opportunity and growth. We have moved beyond just GPUs to deliver an integrated AI cloud platform for our customers, who are rapidly adopting our CPU, storage, networking and software solutions. Active power now exceeds 1 gigawatt and our contracted power is more than 3.5 gigawatts, leaving us strongly positioned to meet our goal of reaching more than 8 gigawatts of active power by 2030.
And we are continuing to innovate in the capital markets, executing the first investment-grade-rated financing ever secured by HPC infrastructure and defining a new asset class along the way. Each of these makes the business more durable, each of them compounds, and each of them is a building block for our next stage of growth. The constraint in AI is no longer whether enterprises and AI labs want to deploy. It is how quickly high-performance, reliable AI cloud capacity can be delivered. That is what CoreWeave does best. With that, I'll turn it over to Nitin.
Nitin Agrawal: Thanks, Mike, and good afternoon, everyone. Q1 marks another historic quarter for CoreWeave. Record customer commitments, bringing backlog to nearly $100 billion, more than $2 billion of quarterly revenue and more than 1 gigawatt of active power while unlocking deeper, more efficient sources of financing. We are delivering precisely in line with the road map we laid out on our last earnings call, diversifying and growing with customers, signing customer contracts with attractive and consistent margins, bringing new capacity online rapidly and expanding our purpose-built AI cloud platform. Demand for CoreWeave Cloud is accelerating and we remain largely sold out of our 2026 capacity with prices increasing across the board from Ampere to Hopper to Blackwell.
We are seeing this extend into 2027 as well as we've begun allocating capacity we expect to come online next year. Turning to Q1 results. Revenue was $2.1 billion in Q1, up 112% year-over-year and 32% sequentially, driven by continued strong execution in deploying our capacity. Demand for CoreWeave Cloud continues to intensify. Revenue backlog for the quarter ended at $99.4 billion, up nearly 50% sequentially and close to 4x year-over-year. This revenue backlog is near-term weighted with 36% expected to be recognized in the next 24 months and 75% in the next 4 years.
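The recognition schedule stated here translates into rough dollar amounts, which a simple sketch makes concrete (the backlog figure and percentages come from the call; the helper function is purely illustrative):

```python
# Dollar amounts implied by the stated backlog recognition schedule.

def recognized(backlog_billions: float, fraction: float) -> float:
    """Backlog dollars (in $B) expected to be recognized at a given fraction."""
    return backlog_billions * fraction

BACKLOG = 99.4  # $B, quarter-end contracted revenue backlog

print(f"Within 24 months: ${recognized(BACKLOG, 0.36):.1f}B")  # ~ $35.8B
print(f"Within 4 years:   ${recognized(BACKLOG, 0.75):.1f}B")  # ~ $74.6B
```

In other words, roughly three-quarters of the near-$100 billion backlog carries revenue visibility inside a four-year window.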
With enterprise adoption intensifying and our customer base diversifying, commitments from noninvestment-grade AI-native companies and foundation labs now represent less than 30% of our overall backlog. Customers continue to commit their foundational AI workloads to CoreWeave, resulting in the weighted average contract length for new capacity remaining at approximately 5 years. Operating expenses in the first quarter were $2.2 billion, including stock-based compensation expense of $153 million. The increase in our operating expenses was a direct result of continuing to scale our active power capacity, converting backlog into revenue. This drove the corresponding increases in our cost of revenue and technology and infrastructure spend.
In addition, the increase in sales and marketing was driven by increased investment in our go-to-market organization as we further diversify our customer base and expand into new products and markets. G&A increased driven by personnel costs to support our growth while moderating relative to revenue growth on a quarter-on-quarter basis. Adjusted EBITDA for Q1 was $1.2 billion compared to $606 million in Q1 of 2025, growing 91% year-over-year. Our adjusted EBITDA margin was 56%. Adjusted operating income for Q1 was $21 million compared to $163 million in Q1 of 2025, just above the midpoint of our guidance.
Our Q1 adjusted operating margin was 1%, which we continue to expect to be its low point as cloud capacity further ramps in coming quarters. Net loss for Q1 was $740 million compared to a net loss of $315 million in Q1 of 2025. Interest expense for Q1 was $536 million compared to $264 million in Q1 of 2025, driven by increased debt to support the continued scaling of our infrastructure and delivery of our contracted customer commitments. We recorded an income tax provision despite a net loss due to valuation allowance on net deferred tax assets. Absent significant discrete items or a change in circumstances, our tax rate should remain broadly consistent over 2026.
Adjusted net loss for Q1 was $589 million compared to $150 million in Q1 of 2025. Turning to capital expenditures. CapEx in Q1 totaled $6.8 billion as we continue to execute on schedule. Construction in progress (CIP) remained roughly unchanged sequentially. As a reminder, construction in progress represents infrastructure not yet in service and not yet being depreciated. When these assets come into service, they drive incremental revenue and corresponding depreciation. Our financing structure is designed to match this deployment model. A large majority of our term debt is structured as delayed draw facilities, meaning capital is only drawn as the data centers are operationalized.
While the global supply chain remains complex, we continue to navigate these challenges with operational discipline, leveraging our partner relationships to strategically source required inputs. Turning to our balance sheet and strong liquidity position. As of March 31, we had more than $3.3 billion in cash, cash equivalents, restricted cash and marketable securities. Since the start of the year, we have made significant progress in strengthening our balance sheet and expanding the depth and breadth of our access to capital. Our success is grounded in the principle that the capital we raise is tied to customer demand. That mindset has informed the business since its inception and has proven critical in our ability to efficiently scale our access to capital.
In Q1, we raised $2 billion of equity in connection with the expansion of our relationship with NVIDIA. We also secured approximately $8.5 billion of investment-grade debt via our fourth Delayed Draw Term Loan, which was a seminal transaction for CoreWeave. As Mike mentioned, DDTL 4.0 marked the first-ever investment-grade-rated HPC infrastructure-backed debt facility, receiving an A- equivalent rating from three independent rating agencies. Not only did we price the facility at an implied rate of less than 6%, a meaningful decrease from our previous facilities, we also introduced an ABS style draw feature unlocking an additional $1 billion of drawable capital upon stabilization of the underlying contract.
We can use this incremental capital to help fund future investments for the delivery of subsequent capacity at a highly attractive price. Further, DDTL 4.0 was structured as nonrecourse allowing us to create a facility that offered enhanced capacity and improved pricing without impacting CoreWeave Inc. and its lenders. Overall, we expect this approach to become the new norm for CoreWeave when financing the build-out of capacity for investment-grade customers. This represents a substantial step forward in our ability to access capital at immense size and at rates competitive with the hyperscalers. As we entered Q2, we have built upon this momentum, securing more than $10 billion of additional debt and equity across several different transactions.
Each of these was significantly oversubscribed, highlighting the overwhelming investor demand to participate in CoreWeave's hypergrowth. In fact, both of our recent convertible and high-yield offerings were upsized meaningfully due to investor interest. Meanwhile, the $1 billion strategic investment we received from Jane Street highlights the value and differentiation our customers see in our platform. In conjunction with these raises, S&P also moved our corporate rating outlook from Stable to Positive. Yesterday, we priced our fifth DDTL Facility, the first to be syndicated in the public loan markets to finance contracts with OpenAI and Cohere. Again, investor appetite proved to be significant, allowing us to price the facility 50 bps inside our initial marketing range.
Following this transaction, we have secured all financing required to deliver the entirety of our existing commitments with OpenAI. This transaction further underscores investors' support for CoreWeave, when financing investment grade and AI lab customers alike. The combination of these transactions brings us to more than $20 billion of debt and equity capital secured year-to-date. Our broadening access to capital at lower blended cost will continue to be an important lever for CoreWeave as we convert backlog to revenue and operating cash flow and proactively manage our capital stack. Accordingly, we have no debt maturities until 2029 other than self-amortizing contract-backed debt and OEM vendor financing. Turning to guidance.
The strength of our Q1 results, disciplined execution and continued momentum we are seeing across our customer base gives us confidence in reaffirming our full year guidance of $12 billion to $13 billion of revenue and $900 million to $1.1 billion of adjusted operating income. For Q2, we expect revenue in the range of $2.45 billion to $2.6 billion. We expect Q2 adjusted operating income of $30 million to $90 million as margins start to expand from their Q1 lows, consistent with our previously discussed expectation. This margin dynamic is timing based, not economic. To provide some further detail here.
Upon receipt of a powered shell, we incur lease and power costs while depreciating server and other data center equipment during the fit-out process, which, on average, takes us about 1 to 2 months. During that period, we recognize costs but no revenue, causing these new deployments to run at negative contribution margins. By month 3, however, we are typically generating revenue with contribution margins normalizing in the mid-20s. Since the beginning of 2025, we have almost tripled active power to more than 1 gigawatt. We are rapidly approaching escape velocity and continue to expect adjusted operating margin to expand sequentially for the remainder of the year, returning to low double digits by Q4.
Our Q2 interest expense is expected to be in the range of $650 million to $730 million, reflecting the growth in our debt balance to finance our accelerating deployments. We expect CapEx to be $7 billion to $9 billion as we continue to bring significant capacity online in service of our contracted revenue backlog. For the full year, we now expect CapEx of $31 billion to $35 billion. The increase on the low end from our previous guidance is related to increases in component pricing. The long-term nature of our contracted revenue backlog continues to provide us with clear visibility into 2026 and beyond, and we remain confident in the revenue and margin targets we have put forward.
We now expect to end 2026 with $18 billion to $19 billion of annualized run rate revenue, increasing the low end of our expectations by $1 billion. We continue to expect to grow annualized run rate revenue to more than $30 billion as we exit 2027 more than 75% of which is already contracted, excluding any benefit from not-yet-exercised customer renewals. We have already secured sufficient power capacity to deliver on our 2027 target and expect to continue to add new capacity and customer commitments to further build on our incredibly strong foundation.
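Conventions for "annualized run-rate revenue" vary; one common approach, assumed here for illustration, is to annualize the exit quarter. The quarterly figure below is hypothetical, chosen only to show how the $18 billion floor could be reached.

```python
# Illustrative sketch of an exit annualized run-rate calculation.
# This is an assumed convention, not the company's stated methodology.

def annualized_run_rate(period_revenue: float, periods_per_year: int) -> float:
    """Extrapolate one period's revenue to a full year."""
    return period_revenue * periods_per_year

# A hypothetical Q4 exit revenue of $4.5B would imply an $18B run rate,
# the low end of the raised 2026 guidance range.
print(annualized_run_rate(4.5, 4))  # 18.0
```

The same arithmetic applied to the 2027 target implies exit-quarter revenue above $7.5 billion under this convention.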
With each quarter of execution, each megawatt delivered, each contract signed and each new customer added, we are methodically executing against our long-term plan and building further conviction in our ability to meet or exceed our long-term revenue and margin targets. Q1 was a quarter of measurable progress. Revenue and backlog grew materially as we continue to add new blue-chip customers while growing with our existing partners. Our deployments continue to be profitable at the contract level. While Q1 represented the trough of our margin trajectory, we remain on track for sequential margin expansion through the balance of the year with revenue and margin growth expected to inflect as we cross from Q2 to Q3.
As important, we have made significant progress in developing our capital structure, more than $20 billion of debt and equity secured, all via meaningfully oversubscribed transactions. Each of these is independently significant. Together, they compound because the ability to finance the scale of the opportunity ahead at declining cost is one of the most important levers we have. We have a contracted revenue backlog that provides multiyear visibility and a capital structure that is deeper and more cost efficient than at any point in our history. We look forward to updating you on our progress through the balance of the year. Thank you. With that, we'll open up for questions.
Operator: [Operator Instructions] Your first question comes from the line of Keith Weiss from Morgan Stanley.
Keith Weiss: Congratulations on a really great quarter across all of the key strategies you're pushing out into this marketplace; the velocity of the business is just outstanding. A couple of clarifying questions, because there are a lot of numbers and a lot of new information being thrown at us. Maybe starting on the CapEx side of the equation and the higher component pricing, which I think is something that's weighing on the stock a little bit after hours. Can you explain to us how the higher component pricing works through the contract and how it ultimately affects your profitability?
And then maybe we could talk about the NVIDIA relationship and the expansion of that relationship. What fundamentally changed in that relationship this quarter? And how should we think about that 5 gigawatts within the 8-gigawatt active power target for 2030? I'll leave the rest of my questions for the callback.
Michael Intrator: Thank you, Keith. That was a mouthful of questions there. Let me try to deconstruct them and work through them one at a time. The first question was regarding the CapEx budget and the higher component pricing. This has been a thematic reality for the cloud and for artificial intelligence infrastructure across the space. Look, we've built this company in an environment that has always been challenged on the supply chain side, so we're really good at it. We think about it a lot. We've built the company from an efficiency perspective, and we're used to operating in a challenging environment around components and other inputs into our product.
Having said that, in the last 6 to 9 months, there has been an acute shortage of certain components, whose prices have moved up. And like I said, you've heard about that across the space. For us, the way to think about it is that we are a success-based company, which means we build our contracts to incorporate the cost of all of the components necessary to deliver infrastructure.
And so, by and large, we are insulated from the price inflation on some of the components, because we include it in the pricing that we ultimately bring to clients in order to target the margins Nitin spoke to; mid-20s is how we think about it on a unit basis. So it's an issue, it's a problem, but we have an incredible capacity to navigate the supply chain. We have great partners, and we include the pricing required to deliver the infrastructure while ensuring that we're able to secure the economics we're targeting. The next question you asked was about NVIDIA.
And it's been a very exciting period for us with NVIDIA. They did a series of different things, but the most important was the qualification of our software solution as a reference architecture for them. That was an incredible validation of the quality of the software solution that we deliver to market. We've talked about this before: we are dedicated to delivering the best solution to artificial intelligence infrastructure consumers in the world. We believe we do that, and we believe NVIDIA supports that position, given their qualification of our software as a reference architecture, which is fantastic.
With regards to the 5 gigawatts worth of infrastructure, I kind of want to start that from the position of in the last 12 months, CoreWeave has secured 2 gigawatts worth of infrastructure. Within the last quarter, we have secured 400 megawatts worth of infrastructure. We are capable of securing infrastructure at gargantuan scale, independent of any support from NVIDIA or anyone else. What the 5 gigawatts does for us is it gives us the ability on an opportunistic basis to accelerate our ability to go out and secure additional infrastructure at a truly amazing scale as our clients are trying to secure our solution for delivery of computing infrastructure.
And I think those really are the highlights around our relationship in the last quarter with NVIDIA, beyond just the standard us integrating with them on an engineering-first basis, which has just been fantastic.
Nitin Agrawal: And Keith, to round out your answer on the CapEx piece: CapEx shows up for us before revenue and cash flow, so you see it showing up first. The P&L impact of that is already incorporated in the guidance we've issued today. And with that, I know this is your last quarterly earnings call covering us. We would like to thank you for your partnership and the support you've shown us from our IPO to this day. Thank you so much, Keith. You will be missed.
Keith Weiss: Thank you guys for an inspiring story. What you guys have built here has really been incredible and has been awesome to watch firsthand, how you guys have scaled this out so quickly.
Michael Intrator: Great team.
Operator: Your next question comes from the line of Brent Thill from Jefferies.
Brent Thill: I'll throw the next one for Keith. Congrats. Nitin, $81 million in EBITDA in the front half, but you're guiding to $919 million in the second half on the bottom line. What's giving you conviction in the bottom line build in the back half of the year?
Nitin Agrawal: Absolutely. From a Q2 perspective, it is coming in exactly where we expected it to be. Remember, everything in our business is defined by active power and capacity ramp schedules, which are not necessarily linear. What is important in this context is that our revenue growth and margin trajectory are coming in exactly as we articulated in our Q4 earnings. And I think your question was probably around EBIT, not EBITDA, so I just want to clarify that for people. For EBITDA, we generated $1.2 billion this quarter. When you think about it from a Q1 perspective, Q1 was the trough of our margin story.
We remain on track for sequential margin expansion through the balance of the year, as we had described, with revenue and margin growth expected to inflect as we cross from Q2 to Q3. We are reaffirming our full year guidance, including full year revenue and full year adjusted operating income, and that we will exit the year with a low double-digit adjusted operating income margin. We are reaffirming our active power, and we are raising the floor of our 2026 exit ARR guidance. All of these are indicators of the conviction we have in our execution plan for the remainder of the year.
What you will see from us is execution against this plan, and you should expect to see our adjusted operating income accelerate faster than revenue growth in the second half of the year.
Michael Intrator: Brent, just a couple of other things. You asked what gives us confidence. A couple of things I wanted to say, as you're watching a business achieving escape velocity, is that we have gone ahead and worked through our supply chains to ensure that we are able to hit those numbers. We are in approximately 50 data centers. No single data center provider delivers more than 17% of our active infrastructure. We have multiple OEMs and ODMs. We are really built now to lean into a resilient supply chain that crosses all of the components we are required to deliver in order to drive that revenue.
And we are reaffirming it because of the level of confidence that we have that because of that resiliency, we will hit those numbers.
Operator: Your next question comes from the line of Mark Murphy from JPMorgan.
Mark Murphy: I'll add my congrats to Keith, and congrats on the strong Q1 performance. To the extent that you have business that you booked previously that is not yet fully up and running, and now the cost of components has risen (I would think the cost of energy has risen as well, although maybe you have some of that locked in), is the margin profile of those previously signed contracts slightly different than originally contemplated? Or do you have some recourse to flow through increased pricing, or maybe to restructure some of those contracts? And then I have a quick follow-up.
Michael Intrator: Yes, it's a good question, thanks. I really do want to say that this was an extraordinary quarter for the company, and we are incredibly excited. Look, in terms of the components, you see us making a slight adjustment to CapEx to capture the small set of components that might go through some price inflation. But when we price our deals, we price them with purchase orders in hand for the infrastructure required to deliver on them. We understand what the power cost is going to be today, next year, and out through the term of the contract, because that is contractually delivered to us.
And so we've done a really good job of understanding what the component and electricity costs are going to be, and we have it structured so that they are effectively passed through when we enter into the contract, once again ensuring that we're able to hit our targeted margins on a unit basis.
Mark Murphy: Okay. Understood. And Nitin, given the success you've had with the bookings in Q1, the great data center build-out execution, the diversification across enterprises, the successful capital raise, and the noticeable revenue upside in Q1: what holds you back from passing that revenue upside through into the full year revenue guidance? Because, in some sense, technically it will reduce our rest-of-year revenue forecast a little. Is there some other effect? Sometimes there's weather, or infrastructure shortages, or maybe labor shortages in there that may be creating a mild headwind.
Nitin Agrawal: Yes. From our perspective, we mentioned this on our last earnings call as well: we pretty much remain sold out of our 2026 capacity, and that continues to be true at this point in time. Where you would see this inflect for us is along two vectors. We've raised the floor of our exit ARR guidance, so we are going to exit the year in a much stronger position than we expected a few months ago on last quarter's call.
At the same time, it is also inflecting in the 2027 ARR guidance we provided: of the $30-plus billion, greater than 75% is already booked at this point in time, excluding any potential renewals. That is where you see it reflected. From a 2026 perspective, we pretty much remain sold out of our capacity.
Operator: Your next question comes from the line of Tal Liani from Bank of America.
Tal Liani: For some people here, this is their last quarter covering the stock, and for some, it's their first. One generation goes, another comes.
Michael Intrator: It's great to have you. You're welcome.
Tal Liani: I want to ask about the operations, two things. Number one is your backlog of revenues: what determines the recognition of revenues from it? Is it the completion of data center build-out and live traffic going into it? Does that mean we're going to have step-ups in recognition of revenues, some really good quarters when you finish building out certain capacity? That's the first question. And the second question is about gross margin. Gross margin has been going down throughout; I look at the last 5 quarters, and we started at 78% and are now at 68%.
And I'm trying to understand the operating structure and how it changes over the next 2 years, meaning: what are the drivers for gross margin improvement? I also see technology and infrastructure expenses going up faster than revenue. So what are the drivers for that to improve? What takes you from the current margin structure to a better margin structure down the road as revenues go up?
Nitin Agrawal: Yes. Before we get into the revenue question you asked, let me first answer your second question, which is around how we think about the margin dynamic. The margin dynamic in our business is predominantly timing based, not economic. We take receipt of powered shell and start incurring lease expenses as well as power expenses at that time, and we start depreciating servers and other data center equipment during the fit-out process. That process takes us about 1 to 2 months. During that period, we are recognizing cost but no revenue. That is what causes new deployments to run a negative contribution margin during the deployment phase.
We've tripled our active power capacity over the last year or so, and what you see in our business is a reflection of that rapid ramp, because we deploy capacity ahead of revenue generation by a few weeks. By month 3 of these deployments, we are typically generating revenue, and contribution margins stabilize, normalizing to ramped-contract levels in the mid-20s. Operating margin inflection also means gross margin will inflect in time, as we move from Q2 into Q3. That is what will happen through the remainder of the year, and you'll see sequential expansion. To your question on revenue generation:
Once we have deployed and tested our GPUs and handed them over to the end customer, we start recognizing revenue. So as data halls and data centers come online and we deploy capacity in them, we deliver those capacity components to our end customers on a contractual basis, and then we recognize revenue on a straight-line basis through the life of the customer contract.
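The straight-line mechanics Nitin describes can be illustrated with a minimal sketch. The contract value, term, and dates below are hypothetical, chosen only to make the arithmetic easy to follow; this is not CoreWeave data or an accounting implementation.

```python
from datetime import date

def straight_line_revenue(total_contract_value, start, end, period_start, period_end):
    """Recognize a fixed contract value evenly over its term (straight-line).

    Returns the revenue attributable to [period_start, period_end), clamped
    to the contract window. All amounts and dates are hypothetical.
    """
    term_days = (end - start).days
    overlap_start = max(start, period_start)
    overlap_end = min(end, period_end)
    overlap_days = max(0, (overlap_end - overlap_start).days)
    return total_contract_value * overlap_days / term_days

# A hypothetical $730M contract running calendar 2026-2027 (730 days)
# recognizes $1M per day, so a 91-day Q2 yields $91M.
q2_revenue = straight_line_revenue(
    730_000_000, date(2026, 1, 1), date(2028, 1, 1),
    date(2026, 4, 1), date(2026, 7, 1),
)  # → 91_000_000.0
```

The key property is that recognition begins only once the contract window opens, which is why a quarter heavy with new handovers shows a step-up in revenue rather than a gradual drift.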
Michael Intrator: Tal, just one more quick point. When you're thinking about the gross margin, remember, we're installing massive amounts of infrastructure relative to our installed capacity. If you think of it as: we're running 50 megawatts and we add 300 megawatts in a quarter, the impact on gross margin is going to be enormous. On the other hand, when you're running 2,000 megawatts and you add 50 megawatts, it's not going to have as material an impact on your gross margin. And when you watch a company like CoreWeave go through a scaling exercise like the one we have been on, this journey is unique.
And so you're going to have impacts that are short term in the scaling exercise. But as soon as you become large enough that the capacity you add on a relative basis is more normalized, the margins will immediately reinflate. And when I say we are achieving escape velocity, that is exactly what I'm talking about: a company whose installed base is getting large enough to absorb the next incremental unit of compute, the next data hall, the next data center as they are brought online.
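Michael's 50-versus-2,000-megawatt argument can be sketched as a toy model. The margin levels below (a positive margin on ramped capacity, a negative contribution margin on capacity still in fit-out, per the earlier discussion) are illustrative assumptions, not company figures.

```python
# Toy model: the same absolute capacity add dilutes blended gross margin
# far less on a large installed base. Megawatt figures and margin levels
# are illustrative assumptions, not CoreWeave data.

def blended_margin(installed_mw, added_mw, ramped=0.70, ramping=-0.20):
    """Revenue-weight ramped capacity (positive margin) against newly
    added capacity still in fit-out (negative contribution margin),
    assuming revenue scales with megawatts."""
    total = installed_mw + added_mw
    return (installed_mw * ramped + added_mw * ramping) / total

small_base = blended_margin(50, 300)    # big add on a small base: heavy dilution
large_base = blended_margin(2000, 50)   # small add on a large base: barely moves
```

Under these assumptions, the 50 MW base adding 300 MW is dragged to a negative blended margin, while the 2,000 MW base adding 50 MW stays within a couple of points of its ramped margin, which is the reinflation effect described above.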
Operator: Your next question comes from the line of Amit Daryanani from Evercore.
Amit Daryanani: I have two as well. Maybe, Mike, to start: you've always talked about inferencing as the monetization of AI; that's how you've framed it. I'd love to understand how fast you think inferencing is growing right now as a share of consumed power in your installed base. And as that keeps going, what does that really imply for the utilization and contracting economics of the H100s and A100s over the next 12 to 18 months?
Michael Intrator: Yes. I mean this is something that we spend a good bit of time thinking about over here. I'm going to answer the question in a couple of different ways, right? First of all, when we build infrastructure, we build what we call AI infrastructure. And so it can be used fungibly back and forth across both training and inference. And it is the buyers who have purchased this infrastructure over time that move their workloads back and forth to optimize the use of the compute that we deliver to them. And so we don't know necessarily exactly what compute is being used for inference or what compute is being used for training because it changes all the time.
However, we can look at the power draw and extrapolate how much of the compute we believe is being used for inference versus how much is being used for training. And when we do that, we think materially in excess of 50% of our compute is now being used for inference. That is a wonderful thing for a company like CoreWeave, because it gives us tremendous confidence that the consumers of our compute are driving revenue at their entities: they are able to monetize their investment as they sell their compute and their models on to their clients.
With regard to the demand for Hopper and Ampere, I have been pretty steadfast about what I believe will be the useful life of this compute, whether it is the Amperes or the Hoppers or the Blackwells, or ultimately, as we move through additional iterations of architecture. What I have said, and what I continue to believe, is that the use cases within the AI labs and within the companies that are productizing artificial intelligence are broad, and they require different types of infrastructure to run some of their training loads and different types of infrastructure for some of their inference loads.
And so when we talk about contracts, when we do multiple contracts with a counterparty, they come in and they buy the most leading-edge infrastructure. And then they use that infrastructure to train and then they take that infrastructure and move it down to the inference load, which is probably less compute-intensive, and they bring in new infrastructure of a new generation with which they use to train the next generation of their models. And so we have seen incredible amounts of demand for our H100s, for our H200s, for our A100s, and that is all driven by the fact that there are many different use cases that require different power and scale and pricing of compute.
We are sold out on our H100s. We are sold out on our A100s. We are seeing price appreciation as more inference comes in and makes demands on those compute workloads in order to deliver to clients. And we think that is an incredibly bullish signal for the space at large.
Amit Daryanani: Perfect. And I have a really quick one on the component side. One of the other issues I think folks are starting to have is component availability to build out these data centers. As you think about the $12 billion to $13 billion in revenues for calendar '26, how much of component procurement is locked in already, be that GPUs, memory, or something else, versus how much you still have to work through and get locked in?
Michael Intrator: Yes. The overwhelming majority of it is locked in already. For 2026, as we said, we are virtually sold out. But likewise, we have placed the purchase orders, we have secured the infrastructure, and we have secured the power and everything else necessary for us to execute on delivery of our road map. We are very, very comfortable with the guidance we've given because we have already secured that infrastructure and those components.
Operator: Your next question comes from the line of Nehal Chokshi from Northland Capital Markets.
Nehal Chokshi: First, so you brought in an incremental 400 megawatts of contracted power that's up from 200 megawatts in 4Q '25, but below the 700 megawatts for quarter end 3Q '25. Just as we think about on a longer-term basis, I know you've already provided the calendar '26 and calendar '27 ARR, but even beyond that, what's kind of like the right way to think about how much incremental power you can bring in each quarter?
Michael Intrator: So I want to be very clear that the 400 megawatts that we added is incremental and distinct from the 200 that we added prior to that. And so we enter into contracts for power with data center providers. Our pipeline for additional capacity is extremely large. We're reviewing lots of different sites where we could build infrastructure, we're reviewing lots of different deals that would allow us to continue to ramp. Another thing that I want to highlight here is, in addition to the transactions that we're doing with third-party data center providers, we are also doing a series of self-build data centers. And that's going to give us additional operational control over our pipeline of data centers.
And that's very important to us as we move out through time. So like I said, we are building our pipeline of infrastructure in coordination with the signals that we are getting from our clients so that we are able to lease the infrastructure or schedule the self-builds to be able to deliver to it. There's no exact number that I can give you and say, hey, we can get you x number of megawatts per quarter for the next 3 years, but we have a very, very robust pipeline of opportunities that we're looking at. And like I said, 400 megawatts this quarter, last quarter was 200 megawatts, but it's been 2 gigawatts in the last 12 months.
Nehal Chokshi: Okay. Great. So if I may summarize, basically, you expect to be able to match supply to your demand and demand is off the charts.
Michael Intrator: That is correct.
Nehal Chokshi: Perfect. All right. The second question is that with about $100 billion of revenue backlog, I would say that should translate to about 2 gigawatts that you've now contracted out. And you've contracted in 3.4 gigawatts as of the end of 1Q '26. So roughly, I would say that means that you have about 1.4 gigawatts to go out and allocate. And I use the word allocate purposely, because I think you guys are trying to be fair about who gets what capacity. So the question is a, is the word allocate rather than sell accurate; and b, is the amount to allocate or sell, if you prefer that word, increasing or decreasing relative to a quarter ago.
Michael Intrator: I think the word allocate is probably accurate. It's unusual to be in a business where the demand for your product is so high that you get to really be thoughtful about which clients you want to bring on to your infrastructure, where do you want to support parts of the infrastructure that are being built, whether it's in life sciences and biology or in foundation labs or inference products, all of those things. And that is a privileged position, and we are trying to build an ecosystem of clients that we allocate power to that are going to be the leaders in the space as it continues to grow.
You made a comment before that, hey, are you able to coordinate your securing of data center capacity with demand and demand is off the charts. Like I don't want to be flip about this, right? Like the truth of the matter is the limiting factor isn't just power, it's labor, it's memory, it's storage, it's our ability to bring up infrastructure. And so we're really orchestrating the coordination of all of those things so that we're able to deliver infrastructure to our clients so they can depend upon the quality and scale of the infrastructure that they require to be able to drive their business.
Operator: That concludes the Q&A session. I will now turn the call back to Mike for closing remarks. Mike, go ahead.
Michael Intrator: Thank you. So as we conclude, I want to thank the CoreWeave team and our customers and partners. None of these accomplishments would have been possible without you. I'm incredibly proud of the growth and the execution that CoreWeave has delivered across every part of our company. Our focus, discipline and commitment to innovation and operational excellence will keep us at the forefront of this revolution. It was truly an amazing quarter, and we look forward to speaking to you guys as we continue to move through the year to update you on our progress. Thank you.
Operator: This concludes today's call. Thank you for attending. You may now disconnect.