
DATE
Thursday, June 5, 2025 at 5 p.m. ET
CALL PARTICIPANTS
President and Chief Executive Officer — Hock Tan
Chief Financial Officer — Kirsten Spears
Head of Investor Relations — Ji Yoo
TAKEAWAYS
Total Revenue: $15 billion for Q2 FY2025, up 20% year over year, as the prior-year quarter was the first full period with VMware, making the 20% year-over-year growth organic relative to a VMware-included base.
Adjusted EBITDA: $10 billion for Q2 FY2025, a 35% increase year over year, representing 67% of revenue and above the Q2 FY2025 guidance of 66%.
Semiconductor Revenue: $8.4 billion for Q2 FY2025, up 17% year over year, with growth accelerating from Q1 FY2025's 11% rate.
AI Semiconductor Revenue: Over $4.4 billion in AI semiconductor revenue for Q2 FY2025, up 46% year over year and marking nine consecutive quarters of growth; AI networking represented 40% of AI revenue in Q2 FY2025 and grew over 70% year over year.
Non-AI Semiconductor Revenue: $4 billion in Q2 FY2025, down 5% year over year; broadband, enterprise networking, and server storage were sequentially higher, but industrial and wireless declined.
Infrastructure Software Revenue: $6.6 billion infrastructure software revenue for Q2 FY2025, up 25% year over year and above the $6.5 billion outlook for Q2 FY2025, reflecting successful enterprise conversion from perpetual vSphere to the VCF subscription model.
Gross Margin: 79.4% of revenue for Q2 FY2025, exceeding prior guidance, with Semiconductor Solutions gross margin at approximately 69% (up 140 basis points year over year) and Infrastructure Software gross margin at 93% (up from 88% a year ago).
Operating Income: Q2 FY2025 operating income was $9.8 billion, up 37% year over year, with a 65% operating margin for Q2 FY2025.
Operating Expenses: $2.1 billion consolidated operating expenses for Q2 FY2025, including $1.5 billion for R&D in Q2 FY2025, and Semiconductor Solutions operating expenses increased 12% year over year to $971 million on AI investment.
Free Cash Flow: $6.4 billion for Q2 FY2025, representing 43% of revenue; impacted by increased interest on VMware acquisition debt and higher cash taxes.
Capital Return: $2.8 billion paid as cash dividends ($0.59 per share) in Q2 FY2025, and $4.2 billion spent on share repurchases (approximately 25 million shares).
Balance Sheet: Ended Q2 FY2025 with $9.5 billion cash and $69.4 billion gross principal debt; repaid $1.6 billion after quarter end, reducing gross principal debt to $67.8 billion subsequently.
Q3 Guidance — Consolidated Revenue: Forecasting $15.8 billion consolidated revenue for Q3 FY2025, up 21% year over year.
Q3 Guidance — AI Semiconductor Revenue: $5.1 billion expected AI semiconductor revenue for Q3 FY2025, representing 60% year-over-year growth and the tenth consecutive quarter of growth.
Q3 Guidance — Segment Revenue: Semiconductor revenue forecast at approximately $9.1 billion (up 25% year on year) for Q3 FY2025; Infrastructure Software revenue expected at approximately $6.7 billion (up 16% year over year).
Q3 Guidance — Margins: Consolidated gross margin expected to decline by 130 basis points sequentially in Q3 FY2025, primarily due to a higher mix of XPUs in AI revenue.
Customer Adoption Milestone: Over 87% of the 10,000 largest customers have adopted VCF as of Q2 FY2025, with software ARR growth reported as double digits in core infrastructure.
Inventory: $2 billion for Q2 FY2025, up 6% sequentially, with 69 days of inventory on hand.
Days Sales Outstanding: 34 days in the second quarter, improved from 40 days a year ago.
Product Innovation: Announced Tomahawk 6 switch, delivering 102.4 terabits per second capacity and enabling scale for clusters exceeding 100,000 AI accelerators in two switching tiers.
AI Revenue Growth Outlook: Management stated, "we do anticipate now our fiscal 2025 growth rate of AI semiconductor revenue to sustain into fiscal 2026."
Non-GAAP Tax Rate: Q3 and full-year 2025 expected at 14%.
SUMMARY
Management provided multi-year roadmap clarity for AI revenue, signaling that current high growth rates could continue into FY2026, based on strong customer visibility and demand for both training and inference workloads. New product cycles, including Tomahawk 6, are supported by what management described as "tremendous demand." The company affirmed a stable capital allocation approach, prioritizing dividends, debt repayment, and opportunistic share repurchase, while maintaining significant free cash flow generation.
Despite a sequential uptick in AI networking content, management expects networking's share of AI revenue to decrease to below 30% in FY2026 as custom accelerators ramp up.
Management noted, "Networking is hard. That doesn't mean XPU is any soft. It's very much along the trajectory we expect it to be," addressing questions on product mix dynamics within AI semiconductors.
On customer conversion for VMware, Hock Tan said, "We probably have at least another year plus, maybe a year and a half to go" in transitioning major accounts to the VCF subscription model.
AI semiconductor demand is increasingly driven by customer efforts to monetize platform investments through inference workloads, with current visibility supporting sustained elevated demand levels.
Kirsten Spears clarified, "XPU margins are slightly lower than the rest of the business other than Wireless," which informs guidance for near-term gross margin shifts.
Management stated that near-term growth forecasts do not include potential future contributions from new "prospects" beyond active customers; updates will be provided only when revenue conversion is certain.
Hock Tan provided no update on the 2027 AI revenue opportunity, emphasizing that forecasts rest solely on factors and customer activity currently visible to Broadcom Inc.
On regulatory risk, Hock Tan said, "Nobody can give anybody comfort in this environment," in response to questions about prospective impacts of changing export controls on AI product shipments.
INDUSTRY GLOSSARY
XPU: A custom accelerator chip, including but not limited to CPUs, GPUs, and AI-focused architectures, purpose-built for a specific hyperscale customer or application.
VCF: VMware Cloud Foundation, a software stack enabling private cloud deployment, including virtualization, storage, and networking for enterprise workloads.
Tomahawk Switch: Broadcom Inc.'s high-performance Ethernet switching product, with Tomahawk 6 as the latest generation capable of 102.4 terabits per second throughput for AI data center clusters.
Co-packaged Optics: Integration of optical interconnect technology within switch silicon to lower power consumption and increase bandwidth for data center networks, especially as cluster sizes scale.
ARR (Annual Recurring Revenue): The value of subscription-based revenues regularized on an annual basis, indicating the stability and runway of software-related sales.
Full Conference Call Transcript
Hock Tan: Thank you, Ji. And thank you, everyone, for joining us today. In our fiscal Q2 2025, total revenue was a record $15 billion, up 20% year on year. This 20% year on year growth was all organic, as Q2 last year was the first full quarter with VMware. Now revenue was driven by continued strength in AI semiconductors and the momentum we have achieved in VMware. Now reflecting excellent operating leverage, Q2 consolidated adjusted EBITDA was $10 billion, up 35% year on year. Now let me provide more color. Q2 semiconductor revenue was $8.4 billion, with growth accelerating to 17% year on year, up from 11% in Q1.
And of course, driving this growth was AI semiconductor revenue of over $4.4 billion, which was up 46% year on year and continues the trajectory of nine consecutive quarters of strong growth. Within this, custom AI accelerators grew double digits year on year, while AI networking grew over 70% year on year. AI networking, which is based on Ethernet, was robust and represented 40% of our AI revenue. As a standards-based open protocol, Ethernet enables one single fabric for both scale-out and scale-up and remains the preferred choice by our hyperscale customers. Our networking portfolio of Tomahawk switches, Jericho routers, and NICs is what's driving our success within AI clusters in hyperscale.
And the momentum continues with our breakthrough Tomahawk 6 switch, just announced this week. This next-generation switch delivers 102.4 terabits per second of capacity. Tomahawk 6 enables clusters of more than 100,000 AI accelerators to be deployed in just two tiers instead of three. This flattening of the AI cluster is huge because it enables much better performance in training next-generation frontier models through lower latency, higher bandwidth, and lower power. Turning to XPUs, or custom accelerators, we continue to make excellent progress on the multiyear journey of enabling our three customers and four prospects to deploy custom AI accelerators.
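As a sanity check on the two-tier claim, here is a minimal arithmetic sketch. It assumes 512-port switches (the radix Hock Tan cites later in the call) and a standard non-blocking leaf-spine layout; the topology details are an illustration, not a disclosed design.

```python
# Endpoints reachable by a two-tier (leaf-spine) fabric built from
# switches of a given port count (radix). Assumes each leaf splits its
# ports evenly between accelerator-facing downlinks and spine-facing
# uplinks, a common non-blocking design.

def two_tier_endpoints(radix: int) -> int:
    downlinks_per_leaf = radix // 2   # ports facing accelerators
    num_leaves = radix                # each spine can fan out to this many leaves
    return downlinks_per_leaf * num_leaves  # equals radix**2 / 2

print(two_tier_endpoints(512))  # 131072, comfortably above 100,000
```

With a 512-radix switch, two tiers reach about 131,000 accelerators, which is consistent with the "more than 100,000 in two tiers" claim on the call.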
As we had articulated over six months ago, we eventually expect at least three customers to each deploy clusters of 1 million AI accelerators in 2027, largely for training their frontier models. And we continue to forecast that a significant percentage of these deployments will be custom XPUs. These partners are still unwavering in their plan to invest despite the uncertain economic environment. In fact, what we've seen recently is that they are doubling down on inference in order to monetize their platforms. And reflecting this, we may actually see an acceleration of XPU demand into the back half of 2026 to meet urgent demand for inference on top of the demand we have indicated from training.
And accordingly, we do anticipate now our fiscal 2025 growth rate of AI semiconductor revenue to sustain into fiscal 2026. Turning to our Q3 outlook, as we continue our current trajectory of growth, we forecast AI semiconductor revenue to be $5.1 billion, up 60% year on year, which would be the tenth consecutive quarter of growth. Now turning to non-AI semiconductors in Q2, revenue of $4 billion was down 5% year on year. Non-AI semiconductor revenue is close to the bottom and has been relatively slow to recover. But there are bright spots. In Q2, broadband, enterprise networking, and server storage revenues were up sequentially. However, industrial was down, and as expected, wireless was also down due to seasonality.
We expect enterprise networking and broadband in Q3 to continue to grow sequentially, but server storage, wireless, and industrial are expected to be largely flat. And overall, we forecast non-AI semiconductor revenue to stay around $4 billion. Now let me talk about our infrastructure software segment. Q2 infrastructure software revenue of $6.6 billion was up 25% year on year, above our outlook of $6.5 billion. As we have said before, this growth reflects our success in converting our enterprise customers from perpetual vSphere to the full VCF software stack subscription.
Customers are increasingly turning to VCF to create a modernized private cloud on-prem, which will enable them to repatriate workloads from public clouds while being able to run modern container-based applications and AI applications. Of our 10,000 largest customers, over 87% have now adopted VCF. The momentum from strong VCF sales over the past eighteen months since the acquisition of VMware has created annual recurring revenue, or otherwise known as ARR, growth of double digits in core infrastructure software. In Q3, we expect infrastructure software revenue to be approximately $6.7 billion, up 16% year on year. So in total, we are guiding Q3 consolidated revenue to be approximately $15.8 billion, up 21% year on year.
We expect Q3 adjusted EBITDA to be at least 66%. With that, let me turn the call over to Kirsten.
Kirsten Spears: Thank you, Hock. Let me now provide additional detail on our Q2 financial performance. Consolidated revenue was a record $15 billion for the quarter, up 20% from a year ago. Gross margin was 79.4% of revenue in the quarter, better than we originally guided on product mix. Consolidated operating expenses were $2.1 billion, of which $1.5 billion was related to R&D. Q2 operating income of $9.8 billion was up 37% from a year ago, with operating margin at 65% of revenue. Adjusted EBITDA was $10 billion or 67% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation. Now a review of the P&L for our two segments.
Starting with semiconductors, revenue for our Semiconductor Solutions segment was $8.4 billion, with growth accelerating to 17% year on year, driven by AI. Semiconductor revenue represented 56% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was approximately 69%, up 140 basis points year on year, driven by product mix. Operating expenses increased 12% year on year to $971 million on increased investment in R&D for leading-edge AI semiconductors. Semiconductor operating margin of 57% was up 200 basis points year on year. Now moving on to Infrastructure Software. Revenue for Infrastructure Software of $6.6 billion was up 25% year on year and represented 44% of total revenue.
Gross margin for infrastructure software was 93% in the quarter, compared to 88% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in Infrastructure Software operating margin of approximately 76%. This compares to an operating margin of 60% a year ago. This year-on-year improvement reflects our disciplined integration of VMware. Moving on to cash flow, free cash flow in the quarter was $6.4 billion and represented 43% of revenue. Free cash flow as a percentage of revenue continues to be impacted by increased interest expense from debt related to the VMware acquisition and increased cash taxes. We spent $144 million on capital expenditures.
Days sales outstanding were 34 days in the second quarter, compared to 40 days a year ago. We ended the second quarter with inventory of $2 billion, up 6% sequentially in anticipation of revenue growth in future quarters. Our days of inventory on hand were 69 days in Q2, as we continue to remain disciplined on how we manage inventory across the ecosystem. We ended the second quarter with $9.5 billion of cash and $69.4 billion of gross principal debt. Subsequent to quarter end, we repaid $1.6 billion of debt, resulting in gross principal debt of $67.8 billion. The weighted average coupon rate and years to maturity of our $59.8 billion in fixed-rate debt is 3.8% and seven years, respectively.
The weighted average interest rate and years to maturity of our $8 billion in floating-rate debt is 5.3% and 2.6 years, respectively. Turning to capital allocation, in Q2, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. In Q2, we repurchased $4.2 billion or approximately 25 million shares of common stock. In Q3, we expect the non-GAAP diluted share count to be 4.97 billion shares, excluding the potential impact of any share repurchases. Now moving on to guidance, our guidance for Q3 is for consolidated revenue of $15.8 billion, up 21% year on year. We forecast semiconductor revenue of approximately $9.1 billion, up 25% year on year.
Within this, we expect Q3 AI Semiconductor revenue of $5.1 billion, up 60% year on year. We expect infrastructure software revenue of approximately $6.7 billion, up 16% year on year. For modeling purposes, we expect Q3 consolidated gross margin to be down 130 basis points sequentially, primarily reflecting a higher mix of XPUs within AI revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of infrastructure software and semiconductors. We expect Q3 adjusted EBITDA to be at least 66%. We expect the non-GAAP tax rate for Q3 and fiscal year 2025 to remain at 14%. And with this, that concludes my prepared remarks. Operator, please open up the call for questions.
Operator: To withdraw your question, please press 11 again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. And our first question will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
Ross Seymore: Hi, guys. Thanks for letting me ask a question. Hock, I wanted to jump onto the AI side, specifically some of the commentary you had about next year. Can you just give a little bit more color on the inference commentary you gave? And is it more the XPU side, the connectivity side, or both that's giving you the confidence to talk about the growth rate that you have this year being matched next fiscal year?
Hock Tan: Thank you, Ross. Good question. I think we're indicating that what we are seeing, and where we increasingly have quite a bit of visibility, is increased deployment of XPUs next year, much more than we originally thought. And hand in hand with that, of course, comes more and more networking. So it's a combination of both.
Ross Seymore: In the inference side of things?
Hock Tan: Yeah. We're seeing much more inference now. Thank you.
Operator: Thank you. One moment for our next question. And that will come from the line of Harlan Sur with JPMorgan. Your line is open.
Harlan Sur: Good afternoon. Thanks for taking my question, and great job on the quarterly execution. Hock, it's good to see the positive inflection quarter over quarter in the year-over-year growth rates in your AI business. As the team has mentioned, the quarters can be a bit lumpy, so if I smooth out the first half and second half, it's about 60% year over year, which is right in line with your three-year SAM growth CAGR. Right? Given your prepared remarks, and knowing that your lead times remain at thirty-five weeks or better, do you see the Broadcom Inc. team sustaining the 60% year-over-year growth rate exiting this year?
And I assume that potentially implies you see your AI business sustaining the 60% year-over-year growth rate into fiscal 2026, again based on your prepared commentary, which again is in line with your SAM growth CAGR. Is that a fair way to think about the trajectory this year and next year?
Hock Tan: Yeah, Harlan, that's a very insightful set of analysis, and that's exactly what we're trying to do here. Six months ago, we gave you guys a point of view on 2027. As we come into the second half of 2025, with improved visibility and the updates we are seeing in the way our hyperscale partners are deploying data centers and AI clusters, we are providing some level of guidance on what we are seeing and how the trajectory of '26 might look. I'm not giving you any update on '27; we are just reaffirming the view we established on '27 months ago.
But what we're doing now is giving you more visibility into where we're seeing '26 head.
Harlan Sur: But is the framework that you laid out for us in the second half of last year, which implies a 60% growth CAGR in your SAM opportunity, the right way to think about the profile of growth in your business this year and next year?
Hock Tan: Yes.
Harlan Sur: Okay. Thank you, Hock.
Operator: Thank you. One moment for our next question. And that will come from the line of Ben Reitzis with Melius Research. Your line is open.
Ben Reitzis: Hey, how are you doing? Thanks, guys. Hey, Hock. AI networking was really strong in the quarter, and it seemed like it must have beaten expectations. I was wondering if you could talk about networking in particular: what caused that, and how much of that is in your acceleration into next year? And when do you think you see Tomahawk kicking in as part of that acceleration? Thanks.
Hock Tan: Well, AI networking, as you probably know, goes pretty much hand in hand with deployment of AI accelerated clusters. It doesn't deploy on a timetable that's very different from the way the accelerators get deployed, whether they are XPUs or GPUs. They deploy a lot in scale-out, where Ethernet, of course, is the protocol of choice, but it's also increasingly moving into the space of what we all call scale-up within those data centers, where you have a much higher consumption, or density, of switches than you have in the scale-out scenario, more than we originally thought.
In fact, the increased switch density in scale-up is five to 10 times more than in scale-out. That's the part that pleasantly surprised us, and it's why this past quarter, Q2, the AI networking portion continued at about 40%, unchanged from what we reported a quarter ago for Q1. And at that time, I had said I expected it to drop.
Ben Reitzis: And your thoughts on Tomahawk driving acceleration for next year and when it kicks in?
Hock Tan: Oh, Tomahawk 6. Yeah, there's extremely strong interest now. We're not shipping big orders, or any orders other than basic proofs of concept out to customers, but there is tremendous demand for these new 102.4 terabit per second Tomahawk switches.
Ben Reitzis: Thanks, Hock.
Operator: Thank you. One moment for our next question. And that will come from the line of Blayne Curtis with Jefferies. Your line is open.
Blayne Curtis: Hey, thanks, and nice results. I just wanted to ask, maybe following up on the scale-up opportunity. Today, I guess, your main customer is not really using an NVLink-switch-style scale-up. I'm just curious about your visibility, or the timing, in terms of when you might be shipping a switched Ethernet scale-up network to your customers?
Hock Tan: You're talking scale-up? Scale-up.
Blayne Curtis: Scale-up.
Hock Tan: Yeah. Well, scale-up is very rapidly converting to Ethernet now, very much so. For our fairly narrow band of hyperscale customers, scale-up is very much Ethernet.
Operator: Thank you. One moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein. Your line is open.
Stacy Rasgon: Hi, guys. Thanks for taking my questions. Hock, I still wanted to follow up on that AI 2026 question. I want to put some numbers on it, just to make sure I've got it right. So if you did 60% year over year in Q4, that puts you at, like, I don't know, $5.8 billion, something like $19 or $20 billion for the year. And then are you saying you're going to grow 60% in 2026, which would put you at $30 billion in AI revenues for 2026? I just want to make sure: is that the math that you're trying to communicate to us directly?
Hock Tan: I think you're doing the math. I'm giving you the trend. But I did answer that question, which I think Harlan asked earlier. The rate we are seeing so far in fiscal 2025 will presumably continue; we don't see any reason why it doesn't, given the visibility we have. What we're seeing today, based on the visibility we have on '26, is the ability to ramp up this AI revenue on the same trajectory. Yes.
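Stacy Rasgon's back-of-the-envelope math can be laid out explicitly. Note that the Q1 figure (about $4.1 billion, from the prior quarter's reporting) and the Q4 estimate are inputs to the analyst's extrapolation, not company guidance.

```python
# The analyst's extrapolation, not company guidance.
# Quarterly AI revenue in $B: Q1 (~4.1, prior-quarter report), Q2 (4.4),
# Q3 guide (5.1), and the analyst's ~60% YoY estimate for Q4 (~5.8).
quarters = [4.1, 4.4, 5.1, 5.8]
fy2025 = sum(quarters)           # ~19.4, i.e. "$19 or $20 billion"
fy2026 = fy2025 * 1.60           # sustaining ~60% growth into FY2026
print(round(fy2025, 1), round(fy2026, 1))  # 19.4 31.0
```

That reproduces the roughly $30 billion FY2026 figure the analyst asked Hock Tan to confirm.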
Stacy Rasgon: So is the SAM going up as well? Because now you have inference on top of training. So is the SAM still 60 to 90, or is the SAM higher now as you see it?
Hock Tan: I'm not playing the SAM game here. I'm just giving a trajectory toward where we drew the line on 2027 before. So I have no response on whether the SAM is going up or not. Stop talking about SAM now. Thanks.
Stacy Rasgon: Oh, okay. Thank you.
Operator: One moment for our next question. And that will come from the line of Vivek Arya with Bank of America. Your line is open.
Vivek Arya: Thanks for taking my question. I had a near-term and then a longer-term question on the XPU business. So, Hock, for the near term: if your networking upsided in Q2 and overall AI was in line, it means XPU was perhaps not as strong. I realize it's lumpy, but is there anything more to read into that, any product transition or anything else? Just a clarification there. And then longer term, you have outlined a number of additional customers that you're working with. What milestones should we look forward to, and what milestones are you watching, to give you the confidence that you can start adding that addressable opportunity into your 2027 or 2028 numbers? Like, how do we get the confidence that these projects are going to turn into revenue in some reasonable time frame from now? Thank you.
Hock Tan: Okay. On the first part you're asking, it's like trying to count how many angels are on the head of a pin. I mean, whether it's XPU or networking: networking is hard. That doesn't mean XPU is any soft. It's very much along the trajectory we expect it to be. There's no lumpiness, there's no softening; it's pretty much the trajectory we expected so far, into next quarter as well, and probably beyond. So in our view, we have fairly clear visibility on the short-term trajectory. In terms of going on to 2027, no.
We are not updating any numbers here. Six months ago, we drew a sense of the size of the SAM based on, you know, million-XPU clusters for three customers, and that's still very valid. But we have not provided any further updates, nor are we intending to at this point. When we get better visibility, a clearer sense of where we are, and that probably won't happen until 2026, we'll be happy to give an update to the audience.
But right now, in today's prepared remarks and in answering a couple of questions, we are intending, as we have done here, to give you guys more visibility into what we're seeing of the growth trajectory in 2026.
Operator: Thank you. One moment for our next question. And that will come from the line of CJ Muse with Evercore ISI. Your line is open.
CJ Muse: Yes, good afternoon. Thank you for taking the question. I was hoping to follow up on Ross' question regarding the inference opportunity. Can you discuss the workloads you're seeing that are optimal for custom silicon? And over time, what percentage of your XPU business could be inference versus training? Thank you.
Hock Tan: I think there's no differentiation between training and inference in using merchant accelerators versus custom accelerators. The whole premise behind going toward custom accelerators continues, and it's not a matter of cost alone. It is that as custom accelerators get used and developed on a roadmap with any particular hyperscaler, there's a learning curve: a learning curve on how they can optimize, as the algorithms on their large language models get written and tied to silicon. And that ability is a huge value-add in creating algorithms that can drive their LLMs to higher and higher performance.
Much more than a segregated approach between hardware and software, you literally combine hardware and software end to end as they take that journey. And it's a journey; they don't learn it in one year. They do it over a few cycles and get better and better at it. Therein lies the fundamental value in creating your own hardware versus using third-party merchant silicon: you are able to optimize your software to the hardware and eventually achieve far higher performance than you otherwise could. And we see that happening.
Operator: Thank you. One moment for our next question. And that will come from the line of Karl Ackerman with BNP Paribas. Your line is open.
Karl Ackerman: Yes, thank you. Hock, you spoke about the much higher content opportunity in scale-up networking. I was hoping you could discuss how important demand adoption for co-packaged optics is in achieving this five to 10x higher content for scale-up networks. Or should we anticipate much of the scale-up opportunity will be driven by Tomahawk switches and Thor NICs? Thank you.
Hock Tan: I'm trying to decipher this question of yours, so let me try to answer it in the way I think you want me to clarify. First and foremost, most of the scale-up that's going in today involves a lot of XPU-to-XPU or GPU-to-GPU interconnects, and it's done on copper interconnects. Because the size of these scale-up clusters is still not that huge yet, you can get away with using copper interconnects, and that's mostly what they're doing today.
At some point soon, I believe, when you start trying to go beyond maybe 72 GPU-to-GPU interconnects, you may have to push toward a different medium, from copper to optical. And when you do that, then things like co-packaging silicon with optics might become relevant. But truly, what we are really talking about is that at some stage, as the clusters get larger, which means scale-up becomes much bigger, you need to interconnect many more GPUs or XPUs to each other in scale-up.
Beyond just 72, or 100, or even 128, as you go more and more, you want to use optical interconnects simply because of distance, and that's when optical will start replacing copper. When that happens, the question is what's the best way to deliver on optical. One way is co-packaged optics, but it's not the only way. You could simply continue to use pluggable, low-cost optics, in which case you can use the full bandwidth, the radix, of a switch; our switch is at 512 connections. You can now connect all these XPUs or GPUs, 512 of them, for scale-up, and that would be huge. But that's when you go to optical.
That's going to happen, in my view, within a year or two, and we'll be right at the forefront of it. It may be co-packaged optics, which we very much have in development, or it could just be, as a first step, pluggable optics. Whatever it is, I think the bigger question is when you go from copper connecting GPU to GPU to optical connecting them. And the step up in content in that move will be huge. It's not necessarily co-packaged optics, though that is definitely one path we are pursuing.
Karl Ackerman: Very clear. Thank you.
Operator: And one moment for our next question. And that will come from the line of Joshua Buchalter with TD Cowen. Your line is open.
Joshua Buchalter: Hey, guys. Thank you for taking my question. I realize this is nitpicky, but I wanted to ask about gross margins in the guide. Your revenue guidance implies a roughly $800 million sequential increase, with gross profit up, I think, $400 million to $450 million, which is pretty well below corporate-average fall-through. I appreciate that semis are dilutive, and custom is probably dilutive within semis, but is there anything else going on with margins that we should be aware of? And how should we think about the margin profile longer term as that business continues to scale and diversify? Thank you.
Kirsten Spears: Yes. We've historically said that XPU margins are slightly lower than the rest of the business other than Wireless. So there's really nothing else going on other than that; it's exactly what I said. The majority of the 130 basis point decline quarter over quarter is being driven by more XPUs.
Hock Tan: You know, there are more moving parts here than your simple analysis proposes. And I think your simple analysis is totally wrong in that regard.
Joshua Buchalter: Thank you.
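The fall-through arithmetic behind Joshua Buchalter's question can be reproduced from figures given on the call (Q2 revenue of $15.0 billion at a 79.4% gross margin; a Q3 revenue guide of $15.8 billion with gross margin guided down about 130 basis points sequentially). The implied Q3 gross profit here is a derived estimate, not a reported number.

```python
# Incremental gross-margin ("fall-through") check. Figures in $B.
q2_rev, q3_rev = 15.0, 15.8
q2_gm = 0.794                     # reported Q2 gross margin
q3_gm = q2_gm - 0.013             # guided ~130 bps sequential decline
incr_rev = q3_rev - q2_rev                 # sequential revenue increase, ~0.8
incr_gp = q3_rev * q3_gm - q2_rev * q2_gm  # implied incremental gross profit
print(round(incr_gp, 2), round(incr_gp / incr_rev, 2))
# roughly 0.43 and 0.54: ~$430M of incremental gross profit, a ~54%
# fall-through versus the ~79% corporate gross margin
```

The result lands inside the analyst's $400 million to $450 million range, showing his question follows directly from the guided margin decline rather than from any undisclosed factor.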
Operator: And one moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
Timothy Arcuri: Thanks a lot. I also wanted to ask about scale-up, Hock. There are a lot of competing ecosystems: there's UALink, which, of course, you left, and now there's the big GPU company opening up NVLink. They're both trying to build ecosystems, and there's an argument that you're an ecosystem of one. What would you say to that debate? Does opening up NVLink change the landscape? And how do you view your AI networking growth next year: do you think it's going to be primarily driven by scale-up, or will it still be pretty scale-out heavy? Thanks.
Hock Tan: You know, people do like to create platforms and new protocols and systems. The fact of the matter is, scale-up can be done easily, and the solution is currently available: open-standards, open-source Ethernet. You don't need to create new systems for the sake of doing something that could easily be done in networking with Ethernet. And so, yes, I hear about a lot of these interesting new protocols and standards that people are trying to create. And most of them, by the way, are proprietary, much as they like to call it otherwise. The one that is really open source and open standards is Ethernet.
And we believe Ethernet will prevail, as it has for the last twenty years in traditional networking. There's no reason to create a new standard for something that could easily be done in transferring bits and bytes of data.
Timothy Arcuri: Got it, Hock. Thank you.
Operator: And one moment for our next question. And that will come from the line of Christopher Rolland with Susquehanna. Your line is open.
Christopher Rolland: Thanks for the question. Yeah, my question is for you, Hock. It's kind of a bigger one here. This acceleration that we're seeing in AI demand, do you think it's because of a marked improvement in ASICs, or XPUs closing the gap on the software side at your customers? Or do you think it's the tokenomics around inference, test-time compute, driving that, for example? What do you think is actually driving the upside here? And do you think it leads to a market share shift toward XPU from GPU faster than we were expecting? Thanks.
Hock Tan: Yeah. Interesting question. But no, none of the foregoing that you outlined. It's simple. The reason inference has come on very, very hot lately is, remember, we're only selling to a few customers, hyperscalers with platforms and LLMs. That's it. There are not that many. We told you how many we have, and we haven't increased any. But what is happening is that all these hyperscalers and those with LLMs need to justify all the spending they're doing. Doing training makes your frontier model smarter. That's no question; it's almost like research and science. You make your frontier models smarter by creating very clever algorithms that consume a lot of compute for training.
Then you want to monetize, and you monetize through inference. And that's what's driving it, as I indicated in my prepared remarks: the drive to justify a return on investment, where a lot of the investment is training. And the return on investment comes from creating use cases, a lot of AI use cases and AI consumption out there, through the availability of a lot of inference. And that's what we are now starting to see among a small group of customers.
Christopher Rolland: Excellent. Thank you.
Operator: And one moment for our next question. And that will come from the line of Vijay Rakesh with Mizuho. Your line is open.
Vijay Rakesh: Yeah. Thanks. Hey, Hock. Just going back to the AI server revenue side. I know you said fiscal 2025 is tracking to that up-60%-ish growth. If you look at fiscal 2026, you have new customers ramping, a Meta, and probably, you know, the four of the six hyperscalers that you're talking to. Would you expect that growth to accelerate into fiscal 2026, beyond the 60% you had talked about?
Hock Tan: You know, in my prepared remarks, I clarified that the rate of growth we are seeing in 2025 will sustain into 2026, based on improved visibility and the fact that we're seeing inference coming in on top of the demand for training as the clusters get built up. And that still stands. I don't think we'll get very far by trying to parse through my words or data here. We see that going from 2025 into 2026 as the best forecast we have at this point.
Vijay Rakesh: Got it. And on NVLink Fusion versus scale-up, do you expect that market to go the route of top of rack, where you've seen some move to the Ethernet side in scale-out? Do you expect scale-up to go the same route? Thanks.
Hock Tan: Well, Broadcom Inc. does not participate in NVLink. So I'm really not qualified to answer that question, I think.
Vijay Rakesh: Got it. Thank you.
Operator: Thank you. One moment for our next question. And that will come from the line of Aaron Rakers with Wells Fargo. Your line is open.
Aaron Rakers: Yes. Thanks for taking the question. I think all my questions on scale-up have been asked. But I guess, Hock, given the execution you guys have delivered on the VMware integration, and looking at the balance sheet and the debt structure, I'm curious if you could give us your thoughts on how the company thinks about capital return versus M&A, and the strategy going forward? Thank you.
Hock Tan: Okay. That's an interesting question. And I agree, not untimely, I would say. Because, yeah, we have done a lot of the integration of VMware now, and you can see that in the level of free cash flow we're generating from operations. And as we said, our use of capital has always been very measured and upfront: a return through dividends, which is half the free cash flow of the preceding year. And frankly, as Kirsten mentioned three months ago, and six months ago too in the last two earnings calls, the first choice for the other part of the free cash flow is typically to bring down our debt.
That is, to a level we feel is closer to no more than a two-to-one ratio of debt to EBITDA. And that doesn't mean we won't opportunistically go out there and buy back our shares, as we did last quarter. As Kirsten indicated, we did $4.2 billion of stock buybacks. Now, part of that is used when employee RSUs vest: we basically buy back part of those shares to pay the taxes on the vested RSUs.
But the other part of it we use opportunistically. Last quarter, when we saw an opportune situation and thought it was a good time to buy some shares back, we did. But having said all that, our use of cash outside the dividends would be, at this stage, directed toward reducing our debt. And I know you're going to ask, what about M&A? Well, the kind of M&A we would do would, in our view, be significant, substantial enough that we would need debt in any case.
And it's a good use of our free cash flow to bring down debt and, in a way, expand, if not preserve, our borrowing capacity if we have to do another M&A deal.
Operator: Thank you. One moment for our next question. And that will come from the line of Srini Pajjuri with Raymond James. Your line is open.
Srini Pajjuri: Thank you. Hock, couple of clarifications. First, on your 2026 expectation, are you assuming any meaningful contribution from the four prospects that you talked about?
Hock Tan: No comment. We don't talk about prospects. We only talk about customers.
Srini Pajjuri: Okay. Fair enough. And then my other clarification: I think you talked about networking being about 40% of the mix within AI. Is that the right kind of mix to expect going forward? Or is that going to materially change as we see XPUs ramping going forward?
Hock Tan: No. I've always said, and I expect this to be the case going forward in 2026 as we grow, that networking as a ratio to XPU should be closer to the range of less than 30%, not the 40%.
Operator: Thank you. One moment for our next question. And that will come from the line of Joseph Moore with Morgan Stanley. Your line is open.
Joseph Moore: Great. Thank you. You've said you're not going to be impacted by export controls on AI. I know there have been a number of changes in the industry since the last time you made that call. Is that still the case? And can you give people comfort that there's no impact from that down the road?
Hock Tan: Nobody can give anybody comfort in this environment, Joe. You know that. Rules are changing quite dramatically as bilateral trade agreements continue to be negotiated in a very, very dynamic environment. So I'll be honest, I don't know. I know as little as you do; probably you know more than I do, maybe. The truth is I know very little about this whole thing, about whether there will be any export controls or how they would take place. We're all guessing. So I'd rather not answer that, because I don't know whether there will be.
Operator: Thank you. And we do have time for one final question. And that will come from the line of William Stein with Truist Securities. Your line is open.
William Stein: Great. Thank you for squeezing me in. I wanted to ask about VMware. Can you comment on how far along you are in the process of converting customers to the subscription model? Is that close to complete? Or are there still a number of quarters over which we should expect that conversion to continue?
Hock Tan: That's a good question. Let me start off by saying a good way to measure it is this: most of our VMware contracts are about three years. That was what VMware did before we acquired them, and that's pretty much what we continue to do. Three years is very traditional. So based on that, we're, like, two-thirds of the way, certainly more than halfway, through the renewals. We probably have at least another year, maybe a year and a half, to go.
Operator: Thank you. And with that, I'd like to turn the call over to Ji Yoo for closing remarks.
Ji Yoo: Thank you, operator. Broadcom Inc. currently plans to report earnings for the third quarter of fiscal year 2025 after the close of market on Thursday, September 4, 2025. A public webcast of Broadcom Inc.'s earnings conference call will follow at 2 PM Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.