DATE

Thursday, May 7, 2026 at 5 p.m. ET

CALL PARTICIPANTS

  • Co-Founder and Co-Chief Executive Officer — Daniel Roberts
  • Chief Revenue Officer — Kent Draper
  • Chief Financial Officer — Anthony Lewis

TAKEAWAYS

  • Secured Power -- 5 gigawatts secured, including new sites in Europe and APAC, forming one of the largest AI infrastructure portfolios globally.
  • Annual Recurring Revenue (ARR) Under Contract -- $3.1 billion contracted, with a target of $3.7 billion in ARR across 150,000 GPUs exiting calendar 2026.
  • NVIDIA Partnership and Investment -- Announced a $3.4 billion five-year AI cloud contract with NVIDIA (NASDAQ:NVDA); NVIDIA's $2.1 billion investment vests progressively as GPUs are deployed at IREN Limited campuses, vesting in full at 600,000 GPUs.
  • Operational Capacity -- All current operational capacity is fully contracted, and new compute is contracted quickly due to ongoing supply-demand imbalance.
  • Cash Position -- $2.6 billion cash as of April 30, with ongoing GPU, data center, and corporate financing initiatives to support phased buildouts.
  • Revenue -- $144.8 million for the quarter, down from $184.7 million in the prior quarter; mining revenue fell to $111.2 million (from $167.4 million), while AI cloud services revenue grew to $33.6 million (from $17.3 million). A quick arithmetic check follows this list.
  • Net Loss -- $247.8 million net loss, mainly driven by $140.4 million of noncash impairments from decommissioning mining hardware and $23.7 million of unrealized losses on capped calls tied to convertible notes.
  • Adjusted EBITDA -- $59.5 million, compared to $75.3 million in the prior quarter, primarily reflecting lower mining revenue, partially offset by reduced cost of revenues.
  • AI Cloud Expansion (2026 Plan) -- Targeting 480 megawatts of AI cloud capacity delivery across multiple sites, including a flagship 300 megawatt liquid-cooled deployment at Childress for Microsoft (NASDAQ:MSFT) (Horizon One scheduled for Q3 handoff) and 180 megawatts of air-cooled conversions.
  • 2027 Capacity Ramp -- Plans to scale to 1.21 gigawatts, including 730 megawatts under construction across British Columbia and Texas; Sweetwater One energized a high-voltage substation and began construction for an initial 200 megawatt IT load.
  • Geographic Expansion -- Acquired Nostrum Group to establish a European platform with 490 megawatts of secured Spanish power; APAC pipeline is anchored by large-scale Australian opportunities.
  • Mirantis Acquisition -- Acquired Mirantis, adding 650 engineers and operators with AI cloud management expertise, enhancing delivery and enterprise customer service capabilities.
  • Financing Approach -- Approximately 95% of Microsoft GPU-related CapEx funded via prepayments and GPU financing; early-stage projects use balance sheet sources, later moving to asset-level financing and capital recycling.
  • Air-Cooled AI Cloud Deployments -- Deploying latest-generation air-cooled Blackwell GPUs, which have slightly higher operating margins than liquid-cooled, with retrofits benefiting from lower relative CapEx and rapid deployment timelines.
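
As a quick check that the headline figures above hang together, here is a minimal arithmetic sketch in Python. It uses only the numbers reported in this list; the residual loss figure is our own derived label for items not separately called out, not a disclosed line item.

```python
# Reconcile the reported quarter against its stated components.
# All figures in $ millions, as reported in the takeaways above.

total_revenue = 144.8
mining_revenue = 111.2
ai_cloud_revenue = 33.6
# Mining plus AI cloud accounts for essentially all revenue.
assert abs((mining_revenue + ai_cloud_revenue) - total_revenue) < 0.1

net_loss = 247.8
impairments = 140.4        # noncash, from decommissioning mining hardware
capped_call_losses = 23.7  # unrealized, tied to convertible notes
residual = net_loss - impairments - capped_call_losses
print(f"Loss net of the two called-out items: ${residual:.1f}M")  # ~$83.7M

# The $3.4 billion five-year NVIDIA contract averages ~$680M per year,
# consistent with the "approximately $700 million" of ARR cited below.
print(f"NVIDIA contract, straight-lined: ${3_400 / 5:.0f}M per year")
```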

RISKS

  • Anthony Lewis stated, "we expect to incur additional noncash impairments associated with decommissioning mining hardware" as the Bitcoin mining transition continues.
  • The quarter's net loss was materially affected by impairments tied to the ongoing shift from Bitcoin mining to AI infrastructure, indicating further near-term financial impact as legacy assets are retired.

SUMMARY

IREN Limited (IREN +7.90%) reported aggressive global AI cloud expansion, underpinned by $3.1 billion of contracted ARR and a $3.4 billion five-year AI cloud contract with NVIDIA (NASDAQ:NVDA), part of a broader strategic partnership that includes a $2.1 billion NVIDIA investment conditional on GPU deployment milestones. The company ended April with $2.6 billion in cash and plans to deliver 480 megawatts of AI cloud capacity within the year, scaling to 1.21 gigawatts in 2027, while establishing a European footprint through the Nostrum acquisition, advancing an APAC pipeline anchored by Australia, and deepening delivery capability through Mirantis. Quarterly results reflect revenue contraction and an elevated net loss attributable to the transition from Bitcoin mining to AI cloud, with management reaffirming that all operational compute is rapidly contracted amid acute, industry-wide supply constraints.

  • The $3.4 billion five-year contract with NVIDIA covers managed services utilizing Blackwell GPUs deployed across 60 megawatts of air-cooled capacity at Childress, constituting about $700 million in ARR.
  • All uncontracted capacity for 2026 and 2027 is reportedly subject to "very significant demand," especially for air-cooled deployments, which management sees as "the most constrained portion of the market." (A back-of-envelope reading of the ARR gap follows this list.)
  • The Nostrum Group acquisition brings 490 megawatts of secured power in Spain and a European development pipeline, led by Gabriel Nabrita, a nearly two-decade veteran of the European energy sector.
  • Mirantis enhances IREN Limited's managed services with AI infrastructure orchestration expertise and enterprise software, and is a founding ISV partner of the NVIDIA AI Cloud Ready initiative.
  • Management confirms prepayment and GPU financing as central to near- and medium-term capital strategy, leveraging customer contracts to enable high project financeability and scale.
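
To give a rough sense of what the remaining 2026 ramp implies per unit of hardware, the back-of-envelope below divides the ARR gap by the roughly 50,000 uncontracted air-cooled GPUs cited on the call. Treat it as an illustrative simplification under our own assumptions (the entire gap carried by those GPUs, at full utilization), not company guidance.

```python
# Back-of-envelope: what closing the ARR gap implies per GPU.
# Assumes the whole $3.1B -> $3.7B gap maps to the ~50,000
# uncontracted air-cooled GPUs; illustrative only.

contracted_arr = 3.1e9   # dollars under contract today
target_arr = 3.7e9       # dollars targeted exiting calendar 2026
uncontracted_gpus = 50_000

gap = target_arr - contracted_arr        # ~$600M
arr_per_gpu = gap / uncontracted_gpus    # ~$12,000 per GPU-year
hourly = arr_per_gpu / (365 * 24)        # ~$1.37 per GPU-hour

print(f"ARR gap: ${gap / 1e9:.1f}B")
print(f"Implied ARR per uncontracted GPU: ${arr_per_gpu:,.0f}/year")
print(f"Implied rate at full utilization: ${hourly:.2f}/GPU-hour")
```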

INDUSTRY GLOSSARY

  • ARR (Annual Recurring Revenue): Contracted, recurring revenue expected from customers over a 12‑month period, commonly used to value SaaS and cloud computing businesses.
  • GPU (Graphics Processing Unit): Specialized processor for accelerated computing workloads, critical in AI and high-performance data center applications.
  • Megawatt/Gigawatt Capacity: The power available for data center or compute infrastructure, measured in megawatts (MW, millions of watts) or gigawatts (GW, billions of watts), directly indicating scale for AI workloads (see the worked example following this glossary).
  • Capped Calls (Capped Call Transactions): Derivative contracts linked to convertible notes, used to hedge dilution; they may generate unrealized gains or losses depending on share price movements.
  • Air‑cooled / Liquid‑cooled Deployments: Data center infrastructure cooled by air vs. advanced liquid mechanisms, affecting hardware density, efficiency, and build cost.
  • ISV (Independent Software Vendor): Third-party software provider, often referenced as a critical partner in AI and cloud platform ecosystems.
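
To make the megawatt and gigawatt figures concrete, dividing IREN Limited's stated 2026 targets (480 megawatts of AI cloud capacity across 150,000 GPUs) yields an implied all-in facility power figure per GPU, including cooling, networking, and overhead. This is a derived ratio for intuition only, not a chip specification; actual per-site densities will vary.

```python
# Implied facility power per GPU from the stated 2026 targets.
# A derived ratio (includes cooling and overhead), not a chip spec.

mw_target_2026 = 480        # megawatts of AI cloud capacity targeted
gpu_target_2026 = 150_000   # GPUs targeted across that capacity

kw_per_gpu = mw_target_2026 * 1_000 / gpu_target_2026  # MW -> kW
print(f"Implied facility power per GPU: {kw_per_gpu:.1f} kW")  # ~3.2 kW

# The 5 GW secured-power portfolio, expressed in megawatts:
print(f"Secured power: {5 * 1_000:,} MW")
```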

Full Conference Call Transcript

Daniel Roberts: Thanks, Mike. And thank you everyone for joining us today. Eight years ago, when Will and I founded this business, we spent a lot of time thinking about what the digital future actually meant for the physical world. We talked about films like The Matrix and Ready Player One, not as science fiction, but as a signal—worlds where digital adoption was total, instantaneous, and infinite. The insight we kept coming back to was this: Digital adoption curves can go from zero to one overnight. But the real world does not scale that way. Power infrastructure, land, data centers—these take years to permit, finance, and build. The bigger the demand, the harder delivery becomes.

That gap between exponential digital growth and the physical world's ability to service it—that structural disconnect—is exactly what we set out to solve. That scarcity is now defining where AI infrastructure gets built and who can build it. Eight years later, that thesis is playing out exactly. This quarter, we demonstrated what disciplined execution against it looks like at a global scale. In AI infrastructure, secured power is only valuable if it can be converted into customer-ready compute. That conversion is hard. It requires site control, grid connection work, permitting, design, procurement, construction, GPU installation, networking, commissioning, financing, and customer delivery, all coming together on tight timelines. IREN Limited’s strength is bringing those pieces together.

We have experienced site teams and standardized, repeatable construction processes that allow us to build across multiple sites in parallel. As we scale, each phase builds on the prior phase. The template becomes more repeatable, the procurement and construction process becomes more efficient, and the site teams carry that experience forward. That is where IREN Limited has built its moat, and why real assets and real capabilities are harder to replicate than they might appear. That execution capability is showing up in the numbers—more capacity, more revenue, stronger funding certainty—and in the partnerships we are announcing today. This was a significant quarter and a significant week. Let me run through the highlights.

On capacity, we increased secured power to 5 gigawatts, added new sites in Europe and APAC, energized Sweetwater One on schedule, and have Horizon One GPU commissioning now underway for Microsoft. On customers, all of our operational capacity is fully contracted. We are not chasing demand; we are racing to build supply fast enough to meet it. In this market, the moment compute comes online, it goes to work. That is the nature of the structural imbalance between AI infrastructure supply and demand, and it is why time to compute is the most important metric we track.

We increased ARR under contract to $3.1 billion, remain on track to hit $3.7 billion exiting calendar 2026, and this week signed a $3.4 billion five-year AI cloud contract with NVIDIA—the first step in a broader strategic partnership we will come to in a moment. On capital, we had $2.6 billion of cash at April 30, and we continue to progress GPU, data center, and corporate-level financing initiatives to support the next phase of build. But the headline today is the NVIDIA partnership, and it deserves a little more than a bullet point. Let me explain what this partnership actually means.

We are working with NVIDIA to support deployment with up to 5 gigawatts of NVIDIA DSX-aligned AI infrastructure across our global data center platform, alongside DGX environments and the DSX AI Factory reference architecture. The $2.1 billion NVIDIA investment is structured to reflect that. Their rights to invest only vest as NVIDIA GPU infrastructure is deployed across IREN Limited campuses, and only fully vest upon deployment of 600,000 GPUs. NVIDIA's capital is directly tied to execution. That is not a passive financial investment; NVIDIA is a partner who wins as we deliver. The $3.4 billion AI cloud contract announced today supporting NVIDIA's own internal workloads is the first step in that partnership.

Eight years ago, Will and I set out to build the infrastructure the digital world would need. Today, the world's leading AI infrastructure company has chosen IREN Limited as the partner to help build it. This next slide shows exactly how we build against this. So here is our plan. In 2026, we are targeting 480 megawatts of AI cloud capacity, 150,000 GPUs, and $3.7 billion of ARR by year end. That is the near-term plan and the clearest bridge from capacity to revenue. In 2027, we are scaling to 1.21 gigawatts, with an additional 730 megawatts currently under construction across British Columbia and Texas, including Childress and the initial phase at Sweetwater One.

The construction flywheel we are running in 2026 carries directly into this next phase. Beyond 2027, we are building against a 5 gigawatt global power portfolio—North America, our new European platform in Spain, and an APAC pipeline anchored by large-scale Australian opportunities. The sequence of delivery matters because it dictates time to compute, and time to compute is what drives revenue. Each phase supports the next. That is how the platform compounds. One more thing before we move on. This week, we welcome Mirantis into the IREN Limited family—650 engineers, operators, and customer support professionals who have spent more than a decade running cloud infrastructure for over 1,500 enterprise customers globally. To Alex and the whole Mirantis team, welcome.

I will come back to what this means for our delivery capability later. But let me start with 2026, where construction and customer demand are coming together most visibly. The 2026 expansion is focused on delivering 480 megawatts of AI cloud capacity across Childress, Prince George, and Mackenzie. This is where the roadmap translates into near-term deployments, customer handoffs, and ARR conversion. We will start with the largest and most complex 2026 work stream: a 300 megawatt Horizon One to Four liquid-cooled deployment at Childress, where NVIDIA [inaudible] installations are now underway. Horizon One is scheduled for Microsoft handoff in Q3, and Horizons Two to Four remain on track for delivery by the end of this year.

This is a major execution milestone. It demonstrates our ability to design, build, fit out, and commission large-scale, next-generation liquid-cooled infrastructure for a hyperscale customer on an accelerated schedule. We have around 3,000 workers on-site right now. That level of activity reflects both the urgency of AI infrastructure demand and also the depth of our execution capability. Importantly, the model is repeatable. Horizon One establishes the build template. Each subsequent phase benefits from the same design, supply chain, construction sequencing, and site team. That is how we drive faster deployment with each phase. Alongside the liquid-cooled build, we are also converting existing air-cooled capacity into AI cloud deployments across British Columbia and Childress.

In British Columbia and Childress, we are progressing 180 megawatts of air-cooled AI cloud capacity, leveraging existing infrastructure. At Prince George, all air-cooled GPUs have now been delivered and are either operating or undergoing commissioning across the 50 megawatt site. At Mackenzie, 80 megawatts of data center capacity has been prepared for GPU installations commencing in 2026. And finally, at Childress, data center retrofits are underway across an initial 50 megawatts, ahead of GPU deliveries in the second half of this year. This is a capital-efficient part of the roadmap. We are taking existing sites and converting them toward higher-value AI cloud workloads. And it works because we already have the operational teams, infrastructure, and site control in place.

Air-cooled capacity can come online faster than liquid-cooled. In a market where time to compute is everything, that speed is a commercial advantage, and we are using it. We are already seeing this dynamic play out commercially, with capacity continuing to be contracted ahead of commissioning as customers prioritize speed to market. We now have $3.1 billion of ARR under contract, including approximately $700 million of ARR associated with the $3.4 billion five-year contract for Blackwell GPUs to be deployed against 60 megawatts of air-cooled capacity at Childress. Against the full 2026 expansion, we are targeting $3.7 billion of ARR by year end across 150,000 GPUs.

The remaining uncontracted capacity represents approximately 50,000 air-cooled GPUs scheduled for delivery in phases through 2026. With the 2026 plan on track, let me turn to what comes next—the 2027 expansion, where the platform scales to 1.21 gigawatts. The 2027 plan is about demonstrating that what we are building in 2026 is not a one-off. It is a repeatable, scalable model that should accelerate over time. Here is what that looks like in practice. In British Columbia, Canal Flats is another example of converting existing infrastructure into AI cloud capacity.

We plan to retrofit all 30 megawatts of existing air-cooled capacity to support AI workloads—capital-efficient, fast to execute, and consistent with the same model we are running at Prince George and Mackenzie. In parallel, Childress continues to be the largest single contributor to the 2027 setup, with both new liquid-cooled capacity and additional air-cooled retrofits adding a total of 400 megawatts of gross capacity. At Childress, the 2027 plan includes 100 megawatts of additional liquid-cooled IT load for Horizons Five and Six, as well as retrofitting an additional 250 megawatts of existing air-cooled capacity. Of that 250 megawatts, approximately 60 megawatts will be deployed to support the NVIDIA AI cloud contract.

The combination of new liquid-cooled data centers and air-cooled retrofits gives us real flexibility. We can support next-generation high-density deployments while continuing to use existing infrastructure where it is the right technical and economic fit. That flexibility is part of what makes Childress such a productive campus. In parallel, Sweetwater becomes the next major Texas campus in 2027. At Sweetwater One, the high-voltage substation has been energized on schedule, and construction is now underway for the initial 200 megawatt IT load phase of liquid-cooled data centers. Energizing the substation is an important milestone. It moves Sweetwater from development into execution and establishes the electrical foundation for the broader site build.

Sweetwater One is being designed for next-generation chip architectures, including the NVIDIA Rubin. Like Childress, we are deliberately sequencing the build so that the first phase creates the backbone for faster subsequent phases. The first 200 megawatts is not just the first 200 megawatts; it is the foundation for a much larger site. The commercial pipeline for our 2027 capacity is anchored on the same principle that is driving everything we are building. Our vertical integration is a genuine advantage for customers because we control more of the critical path than anyone else in this market: power, land, data center construction. The pieces that cause delays for others are the pieces we own and control.

Customers want certainty that capacity will be available when promised. The phased 2027 buildout plan is a concrete basis for those conversations, and we are having them. We are in the process of negotiating large-scale AI cloud deployments across our 2027 capacity today. Demand is not the constraint; it is highly unlikely to be the constraint. The priority is delivering capacity on schedule and converting our time-to-compute advantage into durable, long-term customer relationships. We do expect the customer mix to evolve over time—hyperscalers, AI natives, enterprises, and on-demand use cases—but we do not need to force that outcome. The platform will attract the right customers as it continues to scale.

Beyond 2027, the same execution model extends into a much larger 5 gigawatt global platform. We now have 5 gigawatts of secured power—to put that in context, that is not a pipeline number or an aspiration. That is secured power, representing one of the largest portfolios assembled for AI infrastructure anywhere in the world. The question now is how we build against it. The answer is a phased global platform across North America, Europe, and APAC, with additional development opportunities beyond. Let me walk you through each region. We will start with North America, which remains the largest component of the long-term platform.

In North America, the next major phase is driven by Sweetwater and Kiowa—flagship gigawatt-scale campuses in Texas and Oklahoma—with data center capacity expected to commence ramping across 2027 and 2028. We also have multiple development projects advancing through the interconnection processes, including batch candidates in Texas, which represent some of the most strategically valuable interconnection opportunities in the country. The North American pipeline has a natural progression of scale. Childress demonstrates the operating model today, Sweetwater expands it across an even larger campus, and Kiowa provides the path to another hyperscale-tier opportunity as power ramps from 2028. Every campus builds on the last—the compounding effect of having secured the right land and power positions early.

At the same time, we are expanding the platform into Europe through Spain. Today, we announced the acquisition of Nostrum Group and with it our entry into Europe. The transaction adds 490 megawatts of secured power in Spain, a gigawatt-scale development pipeline, and a team of more than 50 people across development, engineering, construction, and operations. But what it really adds is a platform—and the right people to build it. I want to acknowledge Gabriel Nabrita and the Nostrum team. Gabriel spent nearly two decades in European energy—at EDP Renewables managing gigawatts of operating assets across multiple European markets, and most recently as CEO of EDP Solar.

He understands European power infrastructure as well as anyone, and we are excited to have him lead IREN Limited’s European platform. Spain is the right place to start—supportive AI policy, abundant renewables, lower build costs, and strong connectivity into broader European demand. Europe is a market where power availability and grid timelines are increasingly shaping where customers can actually deploy, and Spain gives us a credible, scalable answer to that question. This is not just a power acquisition; it is the establishment of IREN Limited’s European platform. From Europe, we move to the other side of the world—an opportunity that matches the scale of everything we have just described. Australia is obviously not a new idea for us.

We have been progressing large-scale Australian projects towards secured grid access for some time, and we think the opportunity here is as significant as anywhere in our portfolio. And this is why. Asia Pacific is home to roughly 4.8 billion people—around 60% of the world's population. That includes some of the fastest growing AI demand markets on Earth: Indonesia, Singapore, Japan, Korea. The infrastructure requirement to service that demand is enormous, and it is largely unmet. Australia is uniquely positioned to serve it—abundant renewables, a trusted jurisdiction, strong rule of law, and, as the submarine connectivity map shows, direct fiber links into major demand centers across the region. It is the natural anchor point for AI infrastructure servicing APAC.

We are already seeing hyperscalers and frontier labs make significant commitments to Australian operations, and we intend to be a major part of that story. Beyond Australia, we continue to progress global development opportunities that extend IREN Limited’s runway further still. The platform we are building is designed to create scale into demand wherever it develops. The pipeline gives us the flexibility to do exactly that. That is the global platform—secured power across North America and Europe, with a development pipeline extending into APAC and beyond. And securing power and building data centers is only part of the equation. The other part is what happens when the compute goes live—how it is deployed, managed, and supported for customers at scale.

That is where I would like to spend a moment on why we welcome Mirantis into the IREN Limited family. I want to acknowledge the 650 people who joined IREN Limited this week—engineers, operators, customer support professionals—a team that has spent more than a decade building and running cloud infrastructure for over 1,500 enterprise customers globally. That track record speaks for itself. And what they bring is specific. Their Kontena/MCP and Kubernetes platform capabilities and their work on AI infrastructure management allow them to orchestrate AI infrastructure across bare metal, virtual machines, and Kubernetes environments—exactly the complexity our customers are dealing with as deployments scale.

They are also a founding ISV partner of the NVIDIA AI Cloud Ready initiative, which means they are already deeply embedded in the same ecosystem we are building into. As we scale, delivery is not just about bringing GPUs online; it is about what happens after—provisioning, monitoring, supporting customers through increasingly complex environments. Mirantis strengthens all of that. We are already seeing it, and they will play a central role in supporting our NVIDIA AI cloud contract. To Alex and the whole Mirantis team, a big welcome. We are super excited to have you.

So what you have heard today is a company that has secured power at scale, is contracting revenue at scale, and is now building delivery capability at global scale. Anthony will now walk you through how we are funding it.

Anthony Lewis: The capital strategy is designed to support the phased buildout of capacity Dan discussed, while maintaining flexibility and capital discipline. As of April 30, we had $2.6 billion in cash and cash equivalents. We expect this, together with operating cash flows, GPU financing, and additional financing initiatives, to support our near-term CapEx program, which includes delivery of the Microsoft contract and deployment of air-cooled capacity across Mackenzie and Childress. For GPU CapEx, we are leveraging secured debt and customer prepayments. As we have noted previously, approximately 95% of Microsoft GPU-related CapEx is expected to be funded through prepayments and GPU financing, and we have workstreams underway for additional GPU financing to support upcoming deployments.

On the data center side, we expect our financing approach to evolve as projects move from development to construction and contracting, and ultimately to stabilized operations. Early-stage development can be supported by balance sheet capacity and corporate-level sources. As projects reach construction and customer contracting milestones, asset-level financing can be introduced. And as assets stabilize, refinancing and capital recycling can help support future builds. We will continue to maintain a disciplined balance of debt and equity as the platform continues to scale. I will now turn to the financial results, which continue to reflect the transition underway from Bitcoin mining to AI Cloud. Revenue was $144.8 million for the March quarter, compared to $184.7 million in the prior quarter.

Within that, mining revenue was $111.2 million, down from $167.4 million, driven by a lower average Bitcoin price and the ongoing decommissioning of mining hardware ahead of GPU installations. This was partially offset by continued growth in AI cloud services revenue, which increased to $33.6 million compared to $17.3 million in the prior quarter. Cost of revenues decreased by $25.9 million, primarily due to lower electricity costs from reduced Bitcoin mining capacity. Net loss for the quarter was $247.8 million, impacted by noncash impairments of $140.4 million primarily related to the decommissioning of mining hardware, as well as $23.7 million of unrealized losses related to capped calls associated with our convertible notes.

As we continue to transition our remaining Bitcoin mining operations towards AI Cloud, we expect to incur additional noncash impairments associated with decommissioning mining hardware. These outcomes reflect the strategic reallocation of infrastructure toward AI Cloud growth, which we believe is the high-value, long-term opportunity. Adjusted EBITDA was $59.5 million, compared to $75.3 million in the prior quarter, primarily on account of the revenue and cost of revenue items noted above. As noted, the quarter reflects the ongoing transition from Bitcoin mining to growing AI Cloud. As Dan noted earlier, we continue to target $3.7 billion in ARR by the end of 2026.

We expect that ramp to be back-end weighted, with Microsoft revenue and revenue from the additional 50,000 GPUs procured during the quarter expected to begin ramping in Q3 2026. I will now turn back to Dan for closing remarks.

Daniel Roberts: Thanks, Anthony. Eight years ago, Will and I asked a simple question: What does the world need to build the right digital future? The answer was power, land, data centers, and compute—the ability to bring them all together at scale, faster than anyone else. Today, that thesis is playing out. We are just getting started. We will now open the call for questions.

Operator: As a reminder, to ask a question, please press star 11 on your telephone keypad and wait for your name to be announced. To withdraw your question, please press star 11 again. Please stand by as we compile the Q&A roster. Just a moment for the first question, please. First, we have Michael Ng from Goldman Sachs. Please go ahead.

Michael Ng: Good afternoon. Thank you for the questions and congratulations on all the progress. I just had two questions if I could. First, on the five-year NVIDIA AI cloud contract, I was wondering if you could talk a little bit about how many GPUs are being supported by the 60 megawatts and the cost per GPU. And then second, for Sweetwater and Oklahoma, I think you mentioned the data center capacity is coming in 2027 and 2028. At what point do those sites become marketable—or maybe they already are—and what milestones do you typically need to hit to increase the likelihood of a tenant being willing to take that out? Thank you very much.

Kent Draper: On your first question, we have not disclosed the GPU count, but as we mentioned on the call, approximately 60 megawatts of air‑cooled Blackwells are being deployed. We think the contract value we are getting and the relationship we continue to build with NVIDIA is very beneficial coming out of that contract. Importantly, this is a managed services deployment, and it shows our ability to service different segments of the market as we move forward. With respect to your second question, as Dan mentioned earlier, we are still seeing extremely strong levels of demand within the industry, certainly outstripping supply. Capacity becomes increasingly scarce further out than people were expecting.

If we rewind even a number of months, people thought there was a relatively decent amount of capacity available in 2027. We are already seeing that capacity available in 2027 is extremely scarce, and that is continuing to push into 2028 as well. So there is certainly the ability to market those sites for 2027–2028 online dates. As Dan mentioned earlier, we are working through the type of customers that we bring into the mix and making sure that we are structuring the contracts in the right way to enable a flywheel at our end, but the demand signals are very strong.

Daniel Roberts: Maybe just to add to that and directly answer your question—there is nothing stopping us contracting that capacity today. It just gets easier the closer you get. So the focus is on time to compute. The demand we know is there, and it makes the conversations and the negotiations that we are having live for a lot of that capacity much easier when you have a defined construction and delivery plan rather than trying to make things up on the fly in parallel with a full-form agreement.

Michael Ng: Thank you very much. I appreciate the thoughts.

Daniel Roberts: Thank you.

Operator: Next, we have Paul Golding from Macquarie. Please go ahead.

Paul Alexander Golding: Thanks so much, and congrats on all the progress and the new relationships coming in-house. I wanted to ask about air‑cooled GPUs in general. It sounds like with the 60 megawatt deployment at Childress for NVIDIA, that will be an air‑cooled deployment along with the rest of the uncontracted capacity that you are deploying across British Columbia and Texas. Air‑cooled is going to represent a meaningful part of the strategy. How do you see efficiencies as well as hardware performance looking so far based on the deployment you have planned, and how should we think about that from a financial perspective as we model the air‑cooled opportunity?

Kent Draper: In terms of efficiency and performance, what we are deploying across the air‑cooled portfolio is the latest generation of air‑cooled GPUs—Blackwells. They perform extremely well. There is very high demand across all Blackwell GPU types. We certainly continue to see customers finding a very good degree of performance versus cost efficiency from those units over time. And to the second part of your question on margin and retrofitting, from an operational margin perspective, air‑cooled is slightly more efficient than liquid‑cooled deployments. The real benefit, as Dan mentioned earlier, is that it is very capital efficient because we are taking existing air‑cooled data centers that require relatively little CapEx to retrofit compared to brand‑new build liquid‑cooled facilities.

So that is the major difference. At an operating margin level, yes, air‑cooled is probably slightly higher, but immaterial.

Paul Alexander Golding: Thanks. And if I could just sneak one more in around Europe and the Nostrum acquisition. As we think about the roadmap there, are you looking to use a similar form factor to what you used either at Horizon or with air‑cooled facilities, or is there a bespoke form factor you plan to leverage from that platform as you do the European rollout?

Kent Draper: One of the things that attracted us to the Nostrum opportunity—and we have been looking at Europe for a while—is that they have significant land holdings and access to a large amount of secured power. That gives us a large degree of flexibility as we build out that platform over time as to the form factor that we use. Typically, in Europe, you do tend to see slightly more condensed buildouts, but we have the ability there to utilize our typical modular design that we use across North America, which may bring construction advantages with it. That was one of the key elements we saw in terms of the platform they have and the projects they have developed.

Paul Alexander Golding: Thanks so much, Kent. Thank you.

Operator: Next, we have Brett Knoblauch from Cantor Fitzgerald. Please go ahead.

Brett Anthony Knoblauch: Perfect. Thanks, guys. Congrats on the Mirantis acquisition and the NVIDIA partnership and deal. I wanted to touch on Mirantis because I thought that was important to the long-term story. Can you elaborate on how that fits into your go-to-market motion and how it might accelerate your go-to-market motion when it comes to landing these enterprise deals, which also seems aligned with what the NVIDIA partnership wants you to do as well?

Kent Draper: It brings a number of elements that we think are significantly attractive to our business—the ability to deploy quickly, the ability to service enterprise customers that may require a high level of software over and above bare metal. They also, as a large company with substantial internal engineering resources, bring very good capability on the software development side. That can flow through to the business not only in terms of the software stack but also the operations of these large clusters more generally. Further to that, having serviced customers for decades, they have an extremely well-built-out customer support function.

All of those elements are things that attracted us to the Mirantis team and are able to add to the existing skill set and service support that we have already built up internally.

Brett Anthony Knoblauch: Awesome. And then, just double-clicking on the capacity ramp for 2027, am I right in thinking that of the 730 megawatts, 450 will come from the remaining Childress capacity, and the 280 would be coming from elsewhere?

Kent Draper: That is correct.

Operator: Next we have Nick Giles from B. Riley Securities. Please go ahead.

Nicholas Giles: Thank you, operator. Hi, everyone. Congrats on all the developments here. I know the IREN Limited team has a lot of experience in developing infrastructure in Australia, but maybe less under the IREN Limited platform. Could you walk us through some of the key differences, specifically in power procurement and maybe commercial strategy?

Daniel Roberts: Sure. In some ways, Australia is very similar to other markets. The operation of the electricity market in Australia, managed by AEMO, is very similar to what we see in Texas with ERCOT. There are markets in Australia which resemble Texas in other ways—lots of land, good transmission line capacity, good connectivity, and abundant renewables, which are not located close to other demand centers—similar to what we see in West Texas. So there are a lot of parallels. The reality is Texas is just an easier place to do business, and we have been able to accelerate faster there.

It has not stopped us continuing to incubate projects in Australia; we are getting far closer to those projects becoming a reality. The demand environment and the ability to service APAC, and the demand constraints that we are seeing and hearing in our conversations with hyperscalers, means that Australia looks like a fantastic frontier for us. We will look to accelerate that in parallel with North America and Europe.

Operator: Thank you. Just a moment for our next question, please. We have Michael Donovan from Compass Point. Please go ahead.

Michael Donovan: Hi, guys. Thanks for taking my question, and congrats on the progress. How should we think about regional customer mix as the platform expands? Are certain markets globally better suited for enterprise and sovereign AI customers versus hyperscalers? And does that change the expected contract structure or margin profile?

Daniel Roberts: It is going to evolve, and there are a lot of unknowns around this. If you break it down, a hyperscale contract can mean two things. It can mean hyperscalers using capacity for their own purposes—training and servicing workloads such as their own AI models—or it can mean they are acting as intermediaries to aggregate capacity for end customers that we are talking to directly. In the latter case, whether you are dealing with a hyperscaler or going directly to the end customer, the end demand is the same. Then you have different types of workloads—so inference and training. Inference is a little more latency sensitive; training can afford a bit more latency.

We have had conversations around training models in Australia versus the USA. Australia is a long geographic distance, but it is actually not that far over fiber, particularly when you are talking about training models. Given where inference sits today as well, we are all using ChatGPT or Claude—the response times are still adjusting to the level of demand and the supply to service it. Our objective is to build out an expansive ecosystem of end customers. The partnership with NVIDIA is designed around that. The Mirantis integration into our business is designed to help facilitate that over time in addition to all the near-term operational capabilities that it brings.

The goal is very much to build out that diversified customer base over time across all of those markets.

Michael Donovan: Appreciate that. And if I may, can you help bridge the 490 megawatts in Spain from secured power to first compute? What has to happen before construction begins?

Kent Draper: That is secured power, and the sites across the portfolio there are secured as well. From here, it is a matter of working through final design and permitting, which is already well advanced at a number of those sites, and then ultimately construction of those facilities. One of the elements we found very attractive was the near-term security of power—power that is available on a timeline we think ties in very well to general European demand. We are already seeing a number of direct requests from existing and new customers for European capacity.

Michael Donovan: Great. Thank you.

Operator: Next, we have John Todaro from Needham & Company. Please go ahead.

Analyst: Hi. This is Austin Ortiz on the line for John Todaro. How do you intend to finance the buildout for the recently announced NVIDIA deal? It seems to be around 5 gigawatts. Any color on that would be helpful. Thank you.

Anthony Lewis: I can take that. The CapEx involved for the retrofitting of the air‑cooled data centers in Childress is pretty modest in the scheme of things. In terms of the CapEx for GPUs, we have a range of financing sources available to us. That includes initiatives at the corporate level, and we can also look to finance GPU acquisitions in various ways in the capital markets and through debt capital as well. We will be looking at all those initiatives.

Daniel Roberts: In terms of the 5 gigawatts more broadly, that is obviously a lot of capital today, but the reality is you do not need all that capital day one. There is an S‑curve of construction that takes time. It takes years to deliver this—this is the whole point around time to compute. It is not just a case of getting power and land; it is assembling multi‑thousand‑person construction teams and actually delivering it. Funding for that is progressive over time. As we continue to deliver, we continue to drive revenue, and we can reinvest that revenue in CapEx. That unlocks more financing sources over time.

As part of the partnership with NVIDIA we have announced, they have the ability to invest in IREN Limited as we commission GPUs. Equally, there are other support mechanisms being discussed to the extent that we need them. Capital markets are open and have been very supportive of our plan, and we anticipate that continuing. If that changes, there is a whole world of good capital out there in terms of other options, whether you are creative around private markets or otherwise. When you look at GPU financing, which is the lion’s share of that CapEx, the Microsoft contract is a great template.

We financed 95% of that CapEx at an average interest rate of about 3% through prepayments and GPU financing. The capital is out there as long as you sign good contracts and show that you can execute and operate this capacity.

Analyst: Thank you.

Operator: Just a moment, please. Next, we have Joseph Murphy from Canaccord Genuity. Please go ahead.

Analyst: Thanks, guys. Good morning, good afternoon. Congratulations on the great progress. Just to gauge demand out there—you threw out $3.1 billion contracted ARR going to $3.7 billion exiting the year. Your confidence in that uncontracted capacity and signing contracts—how is the demand out there for that extra roughly $0.6 billion of ARR and the kinds of clients you may be looking to bring on board there? And then I have a quick follow-up.

Daniel Roberts: We are trying to reiterate this as much as we can—there are no idle GPUs. The prospect of GPUs sitting unused, given how structurally constrained this market is, is low over the near to medium term. All of our operational capacity is fully contracted. We are contracting substantial portions of capacity before it even arrives. We are in discussions with a variety of customers, from hyperscale clients down to AI-native labs, for all of that 2026 and 2027 capacity. When a signature is put on paper, we will disclose it naturally. Our conviction is around the demand-supply imbalance, and you cannot tap into that until you bring the capacity online.

A customer contract does not deliver revenue—having compute online delivers revenue, and that has been the focus.

Kent Draper: I would add that particularly for our air‑cooled capacity, where we are adding substantial amounts across 2026 and into 2027, there is very significant demand on those timelines. That is the most constrained portion of the market, and that is directly leading to the dynamic Dan discussed—there just are not idle GPUs in this market. Everything on shorter-term timelines is extremely attractive to counterparties.

Analyst: That is great color. Then on your strategy and philosophy around customers and diversification, given you are in the catbird seat relative to fulfilling demand from multiple parties, how are you looking at broadening and diversifying that customer mix over time?

Daniel Roberts: It is something we are looking closely at. There is no set formula as to the proportional splits between different types of customers. There are benefits in having hyperscale clients in terms of financeability and contractual certainty, but there are also consequences in terms of price because you are not servicing the end customer in many of those instances. The ability to service the end customer has been something we have focused on since day one. Our early deployments have been focused on non‑hyperscale customers and getting as close to AI natives and enterprise as we can. The Mirantis acquisition certainly helps that. I am not going to say we are going 100% hyperscale or 100% AI-native end market.

The reality is that blend will emerge organically over time. This is part of the close working relationship we have with NVIDIA. They see the whole ecosystem—the introductions, the referrals, putting us in touch with anyone that needs capacity. It is happening organically and quickly. A combination of hyperscale and other is absolutely the plan.

Analyst: Got it. Congrats. Very exciting times. Thanks, Dan.

Operator: Thank you. Last question comes from Ben Sommers from BTIG.

Benjamin Eric Sommers: Good afternoon. Thank you for taking my question. I was curious about older-generation GPUs. You have talked in the past about the useful life of older generations like H100s extending out further than maybe people originally thought. What are you seeing on the demand profile there, and what types of workloads are going onto those older-generation GPUs?

Kent Draper: The comments we made about no idle GPUs apply to all GPUs, not just the latest generation. Older generations—A100s, H100s, even H200s—are effectively fully utilized across the industry. The demand picture continues to be strong. In some instances, you are actually seeing pricing for older-generation units climbing significantly, and there are observable pricing points in the market where you can see that happening. The type of demand may shift over time—you may have older generations being used more for inference—but those older generations are equally suitable for certain types of training. We see strong demand across the board, both on inference and training, and that continues to drive demand and elongate life cycles for those older generations of equipment.

Benjamin Eric Sommers: Great. Thank you. And then on future conversations you are having for potential contracts down the line, is there any talk of prepayment structures?

Kent Draper: Yes, prepayments certainly play a role in a number of those conversations. We are still seeing prepayments on the table in a large number of instances. It obviously factors in as part of the overall equation—term length, prepayment, creditworthiness, and price all need to fit together—but prepayments are very much on the table in the current environment.

Benjamin Eric Sommers: Great. Thank you for taking my questions.

Operator: Thank you. I see no further questions at this time. I will now pass to Dan for closing remarks.

Daniel Roberts: Thanks, operator. Thanks, everyone, for joining us today. We remain focused on delivering the 2026 plan, advancing the 2027 buildout, and positioning our global platform for the opportunity beyond that. We look forward to updating you as we deliver. Thanks, everyone.