
Date

Tuesday, May 5, 2026 at 4:30 p.m. ET

Call participants

  • Chairperson and Chief Executive Officer — Jayshree Ullal
  • Chief Financial Officer — Chantelle Breithaupt
  • Chief Architect — Andy Bechtolsheim
  • Co-President — Todd Nightingale
  • Co-President and Chief Technology Officer — Kenneth Duda
  • Vice President, Corporate Communications — Rudolph Araujo

Takeaways

  • Revenue -- $2.71 billion, up 35.1% year over year and above guidance of $2.6 billion.
  • AI sales target -- Raised to $3.5 billion for the full year, more than doubling AI revenue on an annual basis.
  • Gross margin -- 62.4%, within the guidance range of 62%-63%, down sequentially from 63.4% last quarter due to lower enterprise mix.
  • Operating income -- $1.29 billion, representing 47.8% of revenue.
  • Net income -- $1.11 billion, 40.9% of revenue.
  • Diluted earnings per share -- $0.87, up 31.8% year over year, based on 1.27 billion diluted shares.
  • International revenue -- $418.9 million, representing 15.5% of total revenue, down from 21.2% last quarter due to high Americas-based sales to global customers.
  • Operating expenses -- $396.8 million, or 14.6% of revenue; R&D spending at $271.5 million (10% of revenue); sales and marketing at $103.5 million (3.8%); G&A at $21.8 million (0.8%), all down slightly from last quarter.
  • Cash, cash equivalents, and marketable securities -- $12.35 billion at quarter end.
  • Operating cash flow -- $1.69 billion, the highest in company history, driven by strong earnings and increased deferred revenue.
  • Inventory -- $2.38 billion, up from $2.25 billion in prior quarter, as a calculated investment to meet demand.
  • Purchase commitments -- $8.9 billion, up from $6.8 billion sequentially, mostly for chips for new products and AI deployments.
  • Deferred revenue -- $6.2 billion, up from $5.37 billion last quarter, with product deferred revenue up $643 million.
  • Inventory turns -- Improved to 1.7 from 1.5 quarter over quarter.
  • DSOs -- 64 days, improving from 70 days last quarter.
  • Capital expenditures -- $54.5 million in the quarter, with $40 million related to Santa Clara expansion.
  • Share repurchase -- No repurchases during the quarter; $817.9 million remains under current authorization.
  • 2026 outlook -- Revenue growth guide raised to 27.7% ($11.5 billion); gross margin reiterated at 62%-64%; operating margin expected at approximately 46%; tax rate expected at 21.5%.
  • Q2 2026 guidance -- Revenue of ~$2.8 billion; gross margin between 62%-63%; operating margin 46%-47%; diluted EPS ~$0.88; effective tax rate ~21.5%.
  • Customer concentration -- Microsoft and Meta continue as 10%+ customers, with potential for one or two new 10% customers, contingent on shipment volumes.
  • AI fabric mix -- Scale out remains primary, but scale across is expected to contribute at least one-third of AI revenue this year.
  • Net Promoter Score -- Improved from 87 to 89, translating to a 94% customer approval rating.
  • Supply constraints -- "Demand is outstripping our supply," with industry-wide shortages in wafers, chips, CPUs, optics, and memory driving elevated procurement costs and longer lead times.
  • Purchase commitments duration -- Multiyear, reflecting ongoing supply chain constraints requiring forward component purchasing.
  • XPO (Extended Pluggable Optics) launch -- Supports up to 12.8 terabits per module and 204.8 terabits per rack unit with integrated liquid cooling, now endorsed by over 100 consortium vendors.
  • Campus revenue target -- Maintained at $1.25 billion for 2026.
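
As a quick sanity check (not part of the release), the headline ratios in the takeaways above can be recomputed from the reported dollar figures. The sketch below is a minimal Python illustration using the rounded values as published, so small rounding drift versus the stated percentages is expected.

```python
# Recompute the headline Q1 ratios from the reported (rounded) dollar figures.
revenue = 2.71e9          # Q1 revenue, USD
op_income = 1.29e9        # operating income
net_income = 1.11e9       # net income
diluted_shares = 1.27e9   # diluted share count
intl_revenue = 418.9e6    # international revenue

op_margin = op_income / revenue        # ~47.6% vs. the stated 47.8%
net_margin = net_income / revenue      # ~41.0% vs. the stated 40.9%
eps = net_income / diluted_shares      # ~$0.87, matching the stated EPS
intl_mix = intl_revenue / revenue      # ~15.5%, matching the stated mix

print(f"operating margin {op_margin:.1%}, net margin {net_margin:.1%}, "
      f"EPS ${eps:.2f}, international mix {intl_mix:.1%}")
```

The recomputed operating margin lands near 47.6% versus the stated 47.8%, a gap consistent with rounding in the published inputs.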

Risks

  • Supply chain shortages are forecast to persist for "one or two" years, impacting availability of wafers, silicon chips, CPUs, optics, and memory. Management indicates this will constrain shipments and necessitate higher costs that "hurt our gross margins."
  • Gross margin pressure is expected to continue, as management states it is "a strong factor of costs going up and us still eating a lot of the costs and giving our customers the benefit" rather than fully passing through price increases.
  • Deferred revenue volatility may increase, with management stating there is "an increase in the volatility of our product deferred revenue balances" due to longer and more complex customer acceptance cycles and new product introductions.

Summary

Arista Networks (ANET) delivered 35.1% year-over-year revenue growth in Q1 2026 to $2.71 billion, achieved record operating cash flow, and raised its full-year revenue and AI targets in response to robust demand across enterprise, cloud, and AI segments. Management signaled persistent and broad-based supply shortages across critical components, driving up procurement costs and prompting multiyear supply agreements and purchase commitments. Deferred revenue expanded, reflecting both strong order intake and extended customer qualification and acceptance cycles linked to major new product launches and AI deployments.

  • Management stated, "scale across will definitely contribute at least a third of our AI number," while reinforcing that scale up remains negligible until the ESUN specification takes effect in 2027 and beyond.
  • International revenue concentration declined, as major global customers were served out of the Americas this quarter, suggesting shifts in regional booking patterns.
  • XPO, the next-generation pluggable optics form factor, is now backed by more than 100 vendors, with management highlighting, "XPO has a ten-year run, especially at 1.6T and 3.2T where you need liquid cooling and you need that kind of capacity."
  • CEO Jayshree Ullal made clear that, "Our demand is actually the best I have ever seen in my Arista tenure," but capacity is "constrained for the next couple of years" due to lead times and supply limitations.
  • Deferred revenue recognition is taking longer, with acceptance cycles extending "more like six to even eight quarters" for new AI products, which may influence future revenue linearity.
  • The current quarter saw no share repurchases, but $817.9 million remains authorized, with future buybacks dependent on market conditions.
  • AI customer wins and product diversity were highlighted as drivers of both growth and competitive differentiation in cloud, Neo Cloud, and enterprise verticals.

Industry glossary

  • XPO (Extended Pluggable Optics): A high-density, liquid-cooled pluggable optical module specification delivering up to 12.8 Tbps per module, aimed at supporting 1.6T and future 3.2T networking links.
  • EOS (Extensible Operating System): Arista Networks' network operating system providing unified management, automation, and telemetry across switching, routing, and data center environments.
  • AVD (Arista Validated Design): A framework for automating and validating network deployments with repeatable design models and provisioning tools.
  • Scale up / Scale out / Scale across: "Scale up" refers to increasing power in a single system or rack; "scale out" involves adding nodes horizontally in leaf-spine architectures; "scale across" distributes AI workloads and data flows across multiple sites or geographies for optimal bandwidth and power efficiency.
  • Neo Cloud: An industry term in Arista's context for new-generation smaller cloud providers emphasizing AI workloads and requiring tailored network architectures.
  • DMF/DMS (DANZ Monitoring Fabric/Service): Advanced Arista monitoring fabric providing network observability and telemetry for data center and campus environments.

Full Conference Call Transcript

Rudolph Araujo: Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks, Inc. Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks, Inc. issued a press release announcing its fiscal first quarter results for the period ending 03/31/2026. If you want a copy of this release, you can find it on our website.

During the course of this conference call, Arista Networks, Inc. management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2026 fiscal year, longer-term business model and financial outlooks for 2026 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, inventory management, lead times, and product innovation, which are subject to the risks and uncertainties that we discussed in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.

These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q1 results and our guidance for Q2 2026 is based on non-GAAP results and excludes stock-based compensation expense, intangible asset amortization, gains and losses on strategic investments, and the income tax effect of these non-GAAP exclusions, including the recognition of excess tax benefits associated with stock-based awards. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.

Jayshree Ullal: Thank you, Rudy. Welcome everyone to our first quarter 2026 earnings call. Arista Networks, Inc. has experienced significant velocity in all our sectors in Q1 and we are now commanding the number one market share in high-speed switching in the greater than 10 gigabit Ethernet category. With that, we have overtaken many incumbent vendors according to major market analysts for 2025. Our cloud and AI networking strategy for diverse AI accelerators continues to gain traction. Unlike typical workloads, AI workflow patterns can be long-lived elephant flows, or short-lived and simply not predictable. This implies careful attention to performance where a flow can cause burstiness for a long duration of milliseconds. The intensity of a flow can determine the line-rate throughput.

The shifting traffic patterns to map the flow synchronized to all-to-all, or all-reduce, or bursts of collective communication are all important for AI training and inference applications. I would like to take a moment to review our three AI fabric use cases. In scale up mode, we have familiar technologies such as NVLink and PCIe that have enabled vertical scaling of single compute nodes or racks. The advent of ESUN, the Ethernet for Scale-Up Networking specification, allows for increasing or decreasing computing power in a flexible manner with Ethernet to automatically adapt to workload demands.

Scale up will be a new entry for Arista Networks, Inc. in 2027 and beyond, where we will be working closely with our customers to build AI racks with very fast interconnects for co-packaged copper (CPC) or co-packaged optics (CPO), as well as supporting collectives and memory acceleration. Scale out, or horizontal scaling, involves adding more machines to a leaf-spine fabric, moving workloads across multiple servers or nodes, or even connecting other elements like storage or CPUs. As you scale out with massive datasets, bottlenecks can be resolved with collective and protocol acceleration at L2 and L3, plus cluster load balancing, all at wire rate. The system must deliver consistent performance without degradation as more nodes participate.

Arista Networks, Inc. is a shining example here with greater than 100 cumulative customers to date in 800 gigabit Ethernet deployments, and we expect the addition of 1.6 terabit in 2027 at production scale. Scale across spans cloud and AI data centers, as the AI accelerators in one location may need to be distributed to achieve the appropriate bandwidth capacity at optimal power. As workloads become more complex and more distributed, the bisectional bandwidth must scale smoothly to avoid bottlenecks and preserve performance. This demands sophisticated traffic engineering, deep routing, encryption properties, and integrated optics based on Arista EOS stack and using Arista's flagship 7800R3 or R4 series.

The 7800 has established itself in this category as the premier scale-across choice. You can see that Arista Networks, Inc.'s accelerated networking strategy and these three types of AI fabrics are critical to deployment of diverse accelerators and frontier models. Traditional static network topologies with hotspot jitter that slows down job completion time or increases time to first token for inference are not the way to go. Arista's EtherLink portfolio addresses both the synchronous flows for massive training and the low latency for concurrent swarms of real-time inference in this era of trillions of tokens, terabits of performance, and terawatts of power.

In 2024, you may recall, we discussed four Ethernet-based AI training deployments and, of course, since then, we have expanded to countless others. The fourth customer from that group has officially moved from InfiniBand to Ethernet at production scale over the last two years. The high-speed Ethernet AI design with flexible air- or liquid-cooled infrastructure overcomes the physical constraints of power and space for AI workloads. It results in a low-latency distributed AI supercomputer fabric across global regions. What is clear to me and us is that our networking prowess with data control and management and multi-planar orchestration is not only central to our AI switching performance, but also important for high-speed optics transmission.

At the recent Optical Fiber Conference, Arista Networks, Inc. unveiled its extended pluggable optics, XPO, form factor, designed specifically for optics innovations at high speed. Now endorsed by greater than 100 vendors, salient features include record-breaking throughput delivering 12.8 terabits per pluggable module; unprecedented rack density achieving 204.8 terabits per OCP rack unit; integrated cold plate capable of cooling up to 400 watts of power per module; and universality and flexibility across a range of pluggable optics, copper, as well as linear, half-retimed, or retimed interfaces. A special kudos to Andy Bechtolsheim, Arista's chief architect, for driving from OSFP ten years ago to this next-generation XPO, bringing structural improvements in power, footprint, and cost reductions.

Our enterprise business experienced strong results in Q1 2026 both in data center and campus. Our Big Switch Networks and VeloCloud acquisitions are also integrating well into our branch and campus strategy, bringing more distributed enterprise use cases and a new channel motion with managed service providers, MSPs. To share some recent wins, let us hear now from Todd Nightingale and Kenneth Duda, our co-presidents, to delineate our Arista 2.0 centers of data strategy. Over to you.

Kenneth Duda: Thanks, Jayshree. Arista Networks, Inc. is diversifying this business with new customer acquisitions covering a broad set of use cases, all unified by Arista's EOS stack and its ability to modernize enterprise infrastructure operating models. Our first highlighted win is a Neo Cloud AI network. The customer was constrained by an incumbent white box architecture that simply could not keep pace with the massive scale-out requirements of AI. Arista Networks, Inc. was selected as a commercially proven and reliable scale-out architecture, with unmatched stability of EOS and the ability to connect AMD MI series XPUs. Arista’s AI leaf and spine EtherLink products were deployed at 800G to provide the incredible performance modern AI networks require.

The AI fabric was tuned using Arista's cluster load balancing scale-out to thousands of XPUs, minimizing hotspots and congestion. On the software side, the customer leveraged AVD, Arista's Validated Design framework, to automate network provisioning which both reduces the total cost of ownership and provides an easy path to reliable network deployment at scale, where without AVD automation a small mistake can cost precious days of debugging time. This was a strategic Neo Cloud win with large potential for upside growth, in an area where we are seeing enormous opportunity and velocity in both Neo Cloud and Sovereign Cloud customers.

Our next win is in the service provider sector, with a leading regional fiber-to-the-home provider serving hundreds of thousands of subscribers. As subscriber bandwidth demands have surged, this customer realized their legacy routing architecture was too rigid, too brittle, and too costly to scale. They needed a solution which would modernize their next-generation backbone and internet peering edge. Arista Networks, Inc. won this upgrade by proving an automation-first approach with a modern operating model driving operational savings and increased subscriber reliability. On the hardware side, we deployed popular 7280 routing platforms using EOS's FLX capabilities, which unlock deep buffering, a rich control-plane software stack, and full internet route scale.

On the software side, Arista's AVD framework again automates router provisioning to reduce the time it takes to turn up services while also reducing errors. Here, we saw great results from our technology partnership with Palo Alto Networks, ensuring the routing edge integrated securely and seamlessly with our overarching security architecture. And here, Arista's core value proposition of lower operating cost and greater reliability drove a competitive win. Now I will hand it off to Todd.

Todd Nightingale: Thanks, Ken. Our third win is in the insurance services sector. Following a year of strategic collaboration, the customer wanted to modernize their infrastructure with a streamlined, automated foundation capable of delivering granular, real-time insights to secure and monitor critical applications. Here, observability was truly the key. Arista Networks, Inc. secured this comprehensive win after executing a flawless proof of concept, proving our architecture significantly exceeded operational standards. To achieve deep network observability, the customer deployed our R3 series filter-and-delivery nodes on our monitoring fabric, DANZ Monitoring Fabric (DMF/DMS). Additionally, they deployed campus switches to radically simplify out-of-band management.

Leveraging the rich telemetry capabilities of EOS, the customer unlocked advanced features like VXLAN header stripping and transitioned to a fully automated declarative operational model. Our final win is within the manufacturing sector where we are seeing amazing momentum. Here we have a customer operating more than 100 factory sites globally, servicing continuous 24x7 production. Shifting traffic patterns, manual provisioning, and importantly, a lack of visibility and forensics into microbursts and drops were keeping them from achieving their goals.

Arista Networks, Inc. won an extensive bake-off against two established vendors, both of whom proposed campus designs that could not match what Arista delivered: a universal leaf-spine campus based on open standards, running a single EOS binary across campus, data center, and WAN. The Cognitive Campus solution leveraged a 100G campus spine, high-powered PoE leaves, and Arista Wi-Fi 7. CloudVision drove provisioning, configuration, and lifecycle end-to-end with consistent tooling across the network infrastructure. Here, it really was Arista's modern operating model that drove differentiation in the engagement: hitless production upgrades, latency analyzer for microburst visibility, and true packet drop forensics. The teams were able to significantly reduce production-impacting maintenance windows and expose events that had previously caused line interruption.

In all four of these examples, Arista's support team stood out to customers for its best-in-class service, well known for troubleshooting issues with customers long after Arista gear is no longer suspected to be at fault. Arista's modern operating model also played a key role, especially the AVD tooling that Ken mentioned for architecture, validation, and deployment. We are excited about the momentum across the entire enterprise business and especially the diversification that it brings to Arista Networks, Inc. Thanks, Jayshree.

Jayshree Ullal: Thank you, Todd. Thank you, Ken. It was so fantastic to hear of happy customer outcomes. We had another fitting example of that at our Innovate 2026 event here in the facility held in March. The energy and enthusiasm of our greater than 250 customers who attended was truly infectious and inspiring. I want to especially give a shout out to Ashwin Koli and Divya Wagner's team who have already improved our outstanding Net Promoter Score from 87 to 89, translating to a 94% customer approval rating. It also exemplifies our record of among the lowest security vulnerabilities in the tech industry, which enhances our ability to better cope with the many risks that AI is creating.

As I look ahead at the year, our Arista 2.0 momentum continues to march on and resonate. Our demand is actually the best I have ever seen in my Arista tenure. The supply, however, is a slightly different and opposite tale. We are experiencing industry-wide shortages across the board, be it wafers, silicon chips, CPUs, optics, and, of course, memory that I referred to last quarter, coupled with elevated cost to procure these. Clearly, our demand is outstripping our supply this year. While we hope the supply chain will ease in the next year or two, the Arista operations team has been diligently engaging with our vendors in strengthening supply agreements and engaging in multiyear purchase commitments.

We anticipate gross margin pressure due to mix and tradeoffs we are making to pay more to assure supply continuity to our customers. Nevertheless, it gives us confidence to increase our forecast growth slightly to 27.7%, aiming now for $11.5 billion for 2026. We also increased our AI target now to $3.5 billion this year, thereby more than doubling our AI sales annually. And with that good news, over to you, Chantelle, for the financial details.

Chantelle Breithaupt: Thank you, Jayshree. I continue to be impressed by the company's ability to deliver such a breadth and depth of networking innovation. It is a core tenet that underpins our strong financial return to shareholders. To detail our most recent financial outcomes: revenues in Q1 were $2.71 billion, up 35.1% year-over-year and above our guidance of $2.6 billion. Growth was seen across the customer sectors, led by our AI and specialty provider customers within the quarter. International revenues for the quarter came in at $418.9 million, or 15.5% of total revenue, down from 21.2% last quarter. This quarter-over-quarter decrease was primarily influenced by Americas-based sales to our large global customers.

The overall gross margin in Q1 was 62.4%, within the guidance range of 62% to 63%, and down from 63.4% in the prior quarter. This quarter-over-quarter decrease is due to the lower mix of sales to our enterprise customers in the quarter. Operating expenses for the quarter were $396.8 million, or 14.6% of revenue, down slightly from last quarter at $397.1 million. R&D spending came in strong at $271.5 million, or 10% of revenue, despite a slight sequential decrease due to the timing of new product introduction costs. Arista Networks, Inc. continues to demonstrate its commitment and focus on networking innovation.

Sales and marketing expense was $103.5 million, or 3.8% of revenue, down from 4% last quarter, representative of the highly efficient Arista go-to-market methodology. Our G&A cost came in at $21.8 million, or 0.8% of revenue, down from $26.3 million last quarter, reflecting our strong base cost productivity within a pure-play networking business model. Our operating income for the quarter was $1.29 billion, or 47.8% of revenue. Let me pause here to thank the greater Arista team for all of their efforts and resulting excellent execution in a dynamic environment. Other income and expense for the quarter was a favorable $110.8 million and our effective tax rate was 21.1%.

Overall, this resulted in net income for the quarter of $1.11 billion, or 40.9% of revenue. Our diluted share count was 1.27 billion shares, resulting in diluted earnings per share for the quarter of $0.87, up 31.8% from the prior year. Now turning to the balance sheet: cash, cash equivalents, and marketable securities ended the quarter at approximately $12.35 billion. In the quarter, we did not repurchase our common stock. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remains available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors.

Now turning to operating cash performance for the quarter: we generated approximately $1.69 billion of cash from operations in the period, strongest in the history of Arista Networks, Inc. This was driven by a robust earnings performance coupled with an increase in deferred revenue due to the linearity of shipments within the quarter. DSOs came in at 64 days, down from 70 days in Q4. Our inventory turns improved slightly, landing at 1.7 versus 1.5 in the prior quarter. We ended the quarter with $2.38 billion in inventory, up from $2.25 billion last quarter. This marginal increase is a calculated investment in the mix of raw materials to fulfill our growing demand.

Our purchase commitments at the end of the quarter were $8.9 billion, up from $6.8 billion at the end of Q4. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters, as a reflection of the combination of demand for our new products, component variability, and the lead times from our key suppliers. This could also result in quarters of elevated inventory balances ahead of the deployment. Our total deferred revenue balance was $6.2 billion, up from $5.37 billion in the prior quarter. The majority of the deferred revenue balance is product-related.

Our product deferred revenue increased approximately $643 million versus last quarter. We remain in a period of ramping our new products, winning new and expanding use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis independent of underlying business drivers. Accounts payable days were 54 days, down from 60 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $54.5 million. We continue the construction work to build expanded facilities in Santa Clara.

In Q1, we incurred approximately $40 million in CapEx related to this program, estimated to reach $180 million in 2026. These Q1 results have provided a strong start to our fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 27.7% revenue growth, delivering approximately $11.5 billion. We maintain our 2026 campus revenue goal of $1.25 billion, and raise our AI fabrics goal from $3.25 billion to $3.5 billion. I would like to take this opportunity to remind the audience that the timing and outcome of customer projects with acceptance terms can create quarterly and sequential dynamics that do not follow prior year trends.

For gross margin, we reiterate the range for the fiscal year of 62% to 64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon. Given this challenging supply backdrop, I am proud of our sourcing team's execution, which strongly contributes to the gross margin outlook holding in our guidance range. We feel confident that we can source the necessary supply to meet our customers' needs. Our operating margin outlook remains at approximately 46% for the fiscal year with a tax rate expected at 21.5%.

On the cash front, we will continue to work to optimize our working capital investments with some expected variability in inventory and cash flow from operations due to the timing of component receipts on purchase commitments. More specifically now, our guidance for the second quarter is as follows, now with the added quarterly metric of diluted earnings per share: revenues of approximately $2.8 billion; gross margin between 62% and 63%; operating margin between 46% and 47%; and diluted earnings per share of approximately $0.88 with approximately 1.27 billion diluted shares. Our effective tax rate is expected to be approximately 21.5%. In closing, we are optimistic about the fiscal year ahead.

The industry has many times demonstrated the pattern of landing on Ethernet, the winning technology, and that is where Arista Networks, Inc. shines best. We appreciate our customers' choice of working with us to achieve their business outcomes. Now, Rudy, back to you for Q&A.

Rudolph Araujo: Thank you, Chantelle. We will now move to the Q&A portion of the Arista Networks, Inc. earnings call. To allow for greater participation, I would like to request that everyone please limit themselves to one question. Your line will be placed on mute after your question. Thank you for your understanding. Regina, please take it away.

Operator: We will now begin the Q&A portion of the Arista Networks, Inc. earnings call. If you would like to ask a question, please press star then one on your telephone keypad. If you would like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Simon Leopold with Raymond James. Please go ahead.

Simon Leopold: Great. Thank you very much for taking the question. I wanted to explore your commentary around the scale-across opportunity in particular, and I guess what I am trying to get a better sense of is how much revenue, if any, did that contribute last year and how material is that to the $3.5 billion forecast you are giving this year? And how should that trend longer term? Thank you.

Jayshree Ullal: Sure, Simon. I think last year on scale across we were just beginning, so I think they were small numbers. The majority of the numbers were really scale out. That is our heritage, and that is where we excel. If I were to anticipate how it would be this year, again, scale up is virtually zero and nonexistent because it really only comes to play after the ESUN spec, so consider that more a 2027–2028 number. I think the number will be shared between scale across and scale out. I do not know if I can say it is 50/50 or 70/30 or 60/40, but scale across will definitely contribute at least a third of our AI number.

Operator: Our next question will come from the line of George Notter with Wolfe Research. Please go ahead.

George Notter: Hi, guys. Thanks very much. Maybe just continuing the discussion on scale up. You know, we are starting to see rack design wins. One of your competitors in the ODM space, I think, has got a couple of designs that they have announced at least. And I know you are kind of pointing towards ESUN as being a key catalyst in generating business there. But can you talk a little bit about where you are in terms of designs with customers, progress? Anything you can tell us there would be great. And I, in fact, think a few quarters ago, you said you had five to seven scale-up rack designs that you were at least working on.

I am just wondering if you can update that. Thanks a lot.

Jayshree Ullal: Yeah, that is correct, George. I think there is no doubt in our minds that we will have a number of racks and a number of scale-up use cases in 2027. Maybe some of them will be in early trials, but the majority of them are looking at really starting with 1.6T and 1.6T will really happen in 2027. There may be a few, a handful of them, some experimental stuff at 800G. But we continue to see at least five to seven rack opportunities. Some of them are multiple racks to the same customer. We are actively designing with them.

There is a huge amount of liquid cooling, designs with very dense cabling options, acceleration of collectives and memory, features we have to work on for low latency. So I definitely feel we are in an active engineering phase with Ken and Hugh's teams this year. But unlike the ODMs, I think we are held to a higher bar, and we have to just make sure that this is production worthy and adherent to the ESUN specification. So I would say today scale up is mostly limited to NVLink from NVIDIA and maybe some PCI switching. But the majority of the Ethernet scale up will only really happen in 2027 and 2028.

Operator: Our next question will come from the line of Antoine Chkaiban with New Street Research. Please go ahead.

Antoine Chkaiban: Hi. Thank you very much for taking my question. So with demand outstripping supply, I am wondering how much your current supply allows you to grow this year and next. Is the updated growth guide of 28% a good reflection of how much supply you have secured for this year? What could that number look like next year, based on how much supply you think you can get as of today?

Jayshree Ullal: Antoine, I think the supply chain problem—and, Todd, maybe you can add to this—is not a one- or two-quarter phenomenon. We now think it is a one- or two-year phenomenon. At first, we thought it was memory. Now it is all the wafer fabrication facilities. Every chip is challenged, and you can see how Chantelle has leaned in with the purchase commitments for multiple years. So while we will continue to improve, it is a reflection of not just demand, but how much we can ship this year. And as we continue to ship this year, we can give you better visibility on next year.

But I can just tell you, we see multiyear demand and we are going to do everything, including hurt our gross margins, to supply to that demand this year and next year. Because we believe that we certainly do not want to keep GPUs idle and AI infrastructure underutilized because Arista Networks, Inc. did not supply the network. So can the number get better this year? I think this reflects our best attempt at a good number. We started out at 20%, we were at 25%, now we are at 27.7%. Could we improve toward the tail end of the year? We will see. The amount of decommits we are seeing does not feel good.

So we think a lot of this will continue into next year and keep us constrained for the next couple of years.

Operator: Our next question will come from the line of Aaron Rakers with Wells Fargo. Please go ahead.

Aaron Rakers: Jayshree, last quarter you had alluded to engagements with other hyperscale cloud titan customers. I think you also pointed to maybe having one or two new 10% customers this year. I am curious where we stand today. Any updated thoughts on adding one or two new customers at 10% plus? And maybe qualitatively, just talk about your engagements you are having beyond your two big cloud titans across the hyperscale vertical. Thank you.

Jayshree Ullal: Yeah, absolutely. First of all, the two big ones—we never take them for granted—Microsoft and Meta, they are all-time favorites. They have been 10% and greater customers for over a decade, and the partnership could never be stronger. And it continues to get better both in cloud and in AI. In terms of the new entrants, we still expect at least one, maybe two. And maybe I should caveat this by saying, certainly in demand we see one or two. We shall see, Todd, how we do on shipments to see if we can achieve the greater than 10%. The two of them have very interesting characteristics.

They exhibit what I would call the three use cases I just alluded to—scale up, scale out, and scale across—where we really have a notion of creating a fabric. So far we have been working with them a lot on the front end; now we get to complement that on the back end, definitely for scale out and scale across and maybe even a little bit of scale up in some of these use cases. The other thing we are seeing with a lot of these use cases is the lack of power in sites, and the ability and demand to distribute and get a more multi-tenant scale across is very high in these two use cases.

A third common thread we are seeing across them, much as we all talk about ODM and white box, is they deeply appreciate EOS—the features, the reliability, the observability, and just the fact that we have a robust, highly scalable Layer 2/Layer 3 stack—which gives us a lot of superior advantages. So I believe the diversity of these cloud titans is largely due to the fact that we have great hardware and software combined. Ken, do you want to say a few words on that?

Kenneth Duda: It has just been an incredible journey to live through this and see the level of infrastructure build-out we are getting and how well positioned our hardware and software roadmaps are to address these ever-evolving, world-class use cases. It is just a blast to get to work on this stuff.

Jayshree Ullal: That is always fun when your job is a blast. So, Ben, I still see one, maybe two 10% customers. Todd, hopefully, we can ship it. Oh, sorry, Aaron.

Operator: Our next question will come from the line of Ben Reitzes with Melius Research. Please go ahead.

Ben Reitzes: Oh, there you go, Jayshree. Here I am. I wanted to ask around the product constraints. Are you able to say what the number was in the quarter and what it is taking away in terms of the $2.8 billion guide? Is it safe to say things would have been $100 million or $200 million higher for both? And then if you do not mind, if you can touch on why the gross margin should go back up to 63%. What is it that you guys are doing that gives us confidence that it can actually expand a tad from here?

Chantelle Breithaupt: I think that— Hey, Ben. I do not think the commentary about the demand outstripping the supply is a Q1/Q2 issue. I think we are talking about looking ahead Q3, Q4, into next year. So I do not think there is something outside of what we have guided or what we have delivered in the first half. In the sense of the margin, the margin is a mix of things. The team members are executing in full force. The supply chain is doing everything they can on ensuring that we have the best supply at the best price, and so we have incorporated that. The only chance for margin expansion would be due to mix.

So I think that is the opportunity as we look to see what we can deliver in the second half, Ben. I think that would be the opportunity.

Kenneth Duda: The teams are also doing everything they can to make sure we control our costs, especially in the manufacturing side, and that includes bringing on secondary providers, qualifying new components, etc., to make our supply chain more resilient and more cost-effective in the long run.

Jayshree Ullal: And one thing to clarify also on gross margins is we view this as a partnership with our customers. While we did consider and have raised prices a little bit, unlike our competitors, we have not done two price increases. We have not done major price increases. And the price increases really come into play once our backlog starts to reduce. So you will not see the impact of that. Our gross margins reflect costs going up while we still eat a lot of those costs and give our customers the benefit and promise of the pricing we said we would give them.

Operator: Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.

Michael Ng: Hey, good afternoon. Thanks for the question. I was just wondering if you could talk about whether or not Arista Networks, Inc. is seeing networking attach opportunities for customers that are using TPU or TPU-like architectures. And then, anything you could comment about as it relates to growing Neo Cloud traction? Is that something that you think may be a little bit underappreciated by the analyst community? Thank you very much.

Jayshree Ullal: Yeah, Michael. You are absolutely right. I will take your second question first. It is easy to talk about the titans because the numbers are so ginormous. But the Neo Clouds are a very important sector because they do not always have the staff to do everything they want to do. They really lean on Arista Networks, Inc.'s design expertise, EOS expertise, network design configurations we can provide them, and a family of 22 products we have in AI. So yes, I would agree with you. It is underappreciated, and the Neo Cloud is very strong this quarter, if I recall, Chantelle, for us in the specialty and cloud providers. The other question you had was the TPU.

In general, we are seeing diverse accelerators. Last time, I spoke about the AMD accelerators. This time, I will definitely give a nod to the TPUs, because particularly in scale-across use cases, we are seeing multi-tenants connecting to different AI accelerators, including GPUs as well. So I think the diversity of accelerators is creating tremendous multi-accelerator opportunity and multi-protocol features that we can provide for them in our network.

Operator: Our next question will come from the line of Analyst with TD Cowen. Please go ahead.

Analyst: Great. Thanks. Congrats on the results, and thanks for letting me join in on the fun here. Jayshree, I wanted to get your thoughts on—we have been talking a lot about agentic AI and the demands that it is placing on maybe some of the more general-purpose infrastructure that has been in the background over the last couple years. You have talked in the past about a two-to-one pressure on front-end networking created by back-end. First, is that still the correct way to think about it?

And second, as agentic workflows become more common, is there any additional demand, from your perspective, for having a single-image EOS platform on the front and the back end, or are the front and back end still pretty siloed?

Jayshree Ullal: Well, first of all, welcome to your first call. It will be fun. So agentic AI is kind of a buzzword. Let me break it down: the biggest killer application we see in agentic AI right now is still training. And indeed, it is going to move to more distributed inference, and we would also like to see agentic AI move into a lot of enterprise use cases—all of which we are seeing, by the way. But I would say large, medium, small: the largest killer agentic AI application is training; the medium is inference; and the small is enterprise.

In terms of back end versus front end, we are now seeing way more back-end activity, particularly with our large AI titans and cloud titans, because there is just so much scale they need to prepare for the billions of parameters and tokens. So much so that I think the front end, they might come back and refresh, but they are almost ignoring right now in favor of the back end. Having said that, by virtue of the back-end deployments, I do not know if we see a two-to-one to the front end anymore, but we at least see a one-to-one. And the one-to-one can be wide area, CPU, and storage. Those are probably the three common use cases.

Not all the customers are uplifting everything and doing all three, although we have had cases where some of them did an upgrade at the front end before they went into the back end. But usually, they will have to come back to that because the minute you put that kind of performance pressure and scale on the back end, you almost have to do something in the front end. At the moment, I would say it is more one-to-one. And at the moment, I would also say the scale across in the back end has become a bigger use case than we imagined this time last year.

Kenneth Duda: The other thing I have to mention here is just how good it feels to have the same set of products and the same common operating system, management suite, and operating model across the front end and back end. This lowers cost to the customer, simplifies their design process, and we are one of the few vendors who can do that.

Jayshree Ullal: I think only—we think only. Yes. Absolutely. Good point, Ken.

Operator: Our next question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.

Meta Marshall: Maybe just a question on XPO monetization or just how it helps you continue to gain share with customers or just mindshare with customers by being so front-footed with the technology. Thanks.

Chantelle Breithaupt: Yeah. Thank you, Meta. I think, as you know, we are not a classic optics vendor.

Jayshree Ullal: But almost always, whenever we are selling our switches, it has to connect to something, and usually it is some form of copper or optics. These innovations with OSFP—I remember this super well, when everybody was saying, “Oh no, we can just use QSFP”—proved to be a contribution not only for Arista Networks, Inc., but really for the industry at large. And that is still how we see it with XPO as well. While the industry has been talking a lot about co-packaged optics, these are still science experiments, and they are very proprietary with individual vendors doing their own thing.

We may embrace open CPO a few years from now, but we think XPO has a ten-year run, especially at 1.6T and 3.2T where you need liquid cooling and you need that kind of capacity. So all those scale-up racks we are talking about would not be possible without XPO or CTC or any one of those technologies. We see this as—just as the last decade was greatly influenced by OSFP, the next decade will be greatly influenced by XPO. And remember, 99% of the optical market today that we connect to is all pluggable optics. So this is a very crucial invention and innovation, not just for Arista Networks, Inc., but the industry at large.

Kenneth Duda: I think this is a great example of how Arista Networks, Inc. enables an ecosystem and then we profit as that ecosystem grows. What XPO unlocks is a standard, interoperable way to get to four times the faceplate bandwidth with liquid cooling, which is absolutely critical for these AI use cases. Without that, you have this huge bottleneck at the front panel. The amount of extra rack space required to get through OSFP is immense. So we are really enabling the future growth of our industry this way, which we benefit from and others benefit from as well.

Jayshree Ullal: It is stunning to me. I remember when I first talked to Andy and Vijay, they said, “Oh, we think we will get about 20 signatures,” and then it was 40. And now it is north of 100. So it tells me the whole consortium is coming together for things like Ethernet, IP, and standardization of optics.

Operator: Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.

Tal Liani: Hi, guys. Can you hear me?

Jayshree Ullal: Yes, Tal. We can hear you.

Chantelle Breithaupt: Yep.

Meta Marshall: Hello.

Tal Liani: I promised myself to be nice today, so I have a good question for you. Deferred revenue. Deferred revenue has doubled in the last year, and it went up—if I combine short term and long term—by $826 million. It went up significantly in the last four quarters. What needs to happen—what are the conditions—for deferred revenues to be recognized over the next few quarters? Is it about data centers going live and traffic going into data centers? What are the sources for the deferred revenue increase? Thanks.

Jayshree Ullal: Tal, I really do like you, so I am going to be nice to you not because I have to, but because I like to. If you remember ten years ago, we had a similar phenomenon where, in the cloud, the whole leaf-spine design was brand new. Nobody really knew how to build it or monetize it, and they were building some of the world's largest networks for Azure, etc. We had new products; they had new designs. They had done traditionally the access-aggregation-core and were now moving to the fat-tree topology. We had some fairly lengthy qualification cycles. I would say there is a customer aspect to it and a product aspect to it.

The customer aspect is they need to have the space, they need to have the facilities, they need to have their—in this case—GPUs; back then it used to be CPUs. They have to have their rack and stack, and in many cases, by the way, we are running into examples where literally they need to manually install the cables, and that takes several months. Thousands of people have to do that. So there is certainly a customer acceptance piece of it which starts with being ready. There is also a new product aspect. Many of these new products in the Arista EtherLink family, particularly for the AI, are brand new—brand new chips, brand new software.

Familiarity with it, particularly in the back end for scale out and scale across, is new to them. So there is a level of testing and making sure it works for the rest of their ecosystem, including the front end, that is super important, and Arista Networks, Inc. bears a huge responsibility for that as well. So all this to tell you, the length of time to qualify this, which used to be two to four quarters, has extended more like six to even eight quarters. It has gotten much longer. Chantelle, do you want to add something?

Chantelle Breithaupt: The only other thing I would add, thank you, Jayshree, is that we do recognize some of it every quarter. So it is not like it is one balance. This is aging and growing, Tal. We recognize things every quarter—things come in and things are recognized to the P&L. So I just want to make sure you understand that it is not just piling. Some things go in and some things come out.

Jayshree Ullal: Does that make sense, Tal? Tal, you are on mute?

Analyst: They mute him after this question.

Kenneth Duda: Alright.

Operator: Our next question will come from the line of Amit Daryanani with Evercore. Please go ahead.

Amit Daryanani: I guess, Jayshree, you folks have kind of positioned XPO as the next OSFP, and I would love to understand that as XPO ramps from the OFC demos to potentially deployments in 2027, how do you see a change in the optics architecture within AI clusters? And then maybe specific to Arista Networks, Inc., does that change the growth profile or your content per AI rack or cluster as we go forward? Thank you.

Jayshree Ullal: Thank you, Amit. I think you should look at XPO as a partner to OSFP. At 400G and 800G you will be fine with OSFP. As we go to higher speeds in 2027–2028 or beyond, OSFP will run out of steam, and XPO will be the new connector of choice. So the migration to higher speeds equals the migration to XPO, particularly for scale out and scale across. Within a rack and scale up, there are still a number of choices. I think within short distances of two to three meters, you are still going to see a lot of co-packaged copper, and I think XPO in terms of density will be another alternative.

But I do not rule out open CPO as well over there. They are really looking to maximize their density in a minimum amount of space. So I think XPO will be particularly prevalent in scale out and scale across, and will be one of the choices in scale up.

Operator: Our next question comes from the line of John Jeffrey Hopson on for Ryan Koontz with Needham. Please go ahead.

John Jeffrey Hopson: Hi. I appreciate the question. On the scale across, it seems like that would be a really good fit for all of Arista's capabilities. And I know you mentioned it would maybe be around a third of revenue this year. But is this something where scale across could even be larger than scale out over the next couple of years? Thanks.

Jayshree Ullal: Hi, Jeff. I think the answer to that would lie in how well we do with both and what form factors are used for both. The majority of the scale across today uses a very premier, valuable, heavy-duty routing platform—the 7800. So if we do lots of that, it could get well beyond the 30%. But some of them may do it with fixed boxes too, or fixed switches, and choose to add a lot of cable, in which case it would not go well above that. So we do not know what we do not know.

But I would agree with you that scale across is by far the most significant and differentiated opportunity that really highlights Arista Networks, Inc.'s prowess in both platforms and software.

Operator: Our next question comes from the line of Samik Chatterjee with JPMorgan. Please go ahead.

Samik Chatterjee: Hi. Thanks for taking my question. Slightly related to the last question here. You said most of the cloud revenue near term is going to scale out and scale across as we wait for scale up to ramp. How are you thinking about your market share when it comes to scale out versus scale across? In the early days of scale across, what are you seeing in terms of market share? And are you seeing customer decisions in scale across being led by the incumbent in scale out? Or is it a different decision altogether in terms of how they are designating vendors for scale across? Thank you.

Jayshree Ullal: Good question, Samik. You are making me think. I would say if it is a greenfield deployment, then they tend to think of it together. They are not only building the sites, but they are thinking of the interconnect across them, and therefore market share is generally strong in both. In some cases where Arista Networks, Inc. has not been a historical participant within the data center, we now have an opportunity to offer the scale across multi-tenant in a non-greenfield situation—let us say in a brownfield—where now they have got disparate data centers or AI clusters that we now have to bring in.

So once again, I think Arista Networks, Inc. is a really fitting example to be in scale across for both of those use cases, but with the additional opportunity in a brand new data center to be in all use cases, if that makes sense. So it is giving us a chance to participate with different types of accelerators and different types of models because people are not getting the power and they are having to distribute the data centers. As a result of distribution, you need more engineering, routing, multitenancy. I would say scale across is the common denominator in all our use cases, and scale up and scale out may be nice options in brand-new greenfields.

Operator: Our next question comes from the line of Karl Ackerman with BNP Paribas. Please go ahead.

Karl Ackerman: Yes, thank you. Jayshree, you are doing more network design today than ever. Does that change your ability to monetize your services and capture more of the value that you are adding to these applications? As you address that, given the large mix of services revenue within deferred, could services revenue accelerate faster and represent perhaps 25% or 30% of sales going forward? Thank you.

Jayshree Ullal: I do not think so, Karl. I think we are a product company, and the majority of our revenue generation and interest in Arista Networks, Inc. as a company for all the designs we are doing comes from our product heritage. It is not like we charge for services. In fact, we work closely with our partners also. We will recommend network designs. We will support services. And certainly, for things like worldwide support, we are the gold standard. But I do not expect services as a function of our revenue to go up. I continue to see us as a product-led company.

Operator: Our next question comes from the line of Matt Niknam with Truist. Please go ahead.

Matt Niknam: I wanted to go back to gross margin. We were sort of in that 62-ish range, and margins dipped about 170 basis points year-on-year. I want to dig into whether it was primarily mix-related or, if you can, quantify how significant the memory and cost-related impacts were—if there is any color you can provide. Thanks.

Chantelle Breithaupt: It is a great question. I would say the majority—if you look at prior quarter or prior year—the majority of the difference is mix of the customers. Just to clarify, our larger customers have a lower gross margin accretion, and so that mix is the primary driver. The secondary, although not as significant, would be things depending on the quarter—depending on how deferred is moving—tariffs, the memory costs, or the silicon costs depending on the quarter. So secondary driver, but the primary driver is mix of the customer segments.

Operator: Our next question comes from the line of Analyst with UBS. Please go ahead.

Analyst: Thanks. Hi, this is Andrew for David. From a high level, with almost $2.4 billion of inventory and almost two years in COGS of purchase commitments, how should we think about the supply constraints, and where are that inventory and those purchase commitments not sufficient to meet demand? Where are the holes in your inventory?

Kenneth Duda: I would not say we have holes in our inventory, but we have surging demand, especially on the newest platforms, which of course is driving our need for the most modern silicon from our providers, and it is driving a need for an expanded amount of memory—even more than we were expecting before the year began. That is driving us to be a buyer in the market. Luckily, we have got pretty good spending power; we are a very reliable partner in these scenarios, and so we partner closely with these vendors.

But there is no doubt that the newest platforms we are delivering, especially in the AI space, are driving needs of ours in the high end of our portfolio.

Jayshree Ullal: Yeah, and just to add to that, the real hole is lead times. We are experiencing such significant wafer fab shortages that we are not getting the chips in time. So more than a hole, I would just say our purchase commitments are multiyear because we are having to deal with forecasts that are out multiple years so that we get them in time, because the lead time of these chips is so long. I think that is the biggest issue—lead time.

Kenneth Duda: We are experiencing 52-week lead times pretty reliably, with reservation needs beyond that, and our customers certainly do not want to wait that long.

Operator: Our next question comes from the line of James Fish with Piper Sandler. Please go ahead.

James Fish: Hey, guys. Chantelle, maybe for you. The guide raise was primarily all on AI. Are you guys prioritizing these shipments, or what has given the hesitancy around the non-AI, non-campus at this point and leaving that roughly flat still? And, Jayshree, as we think about the mix here on gross margin, what are you seeing in terms of Blue Box adoption now? And are you seeing any net pull-in of demand just given you have a lot of smart customers here and they are very much aware of supply chain constraints? Thanks, guys.

Chantelle Breithaupt: Thank you, James. I do not think we are saying, because we are raising the revenue and attribute that to AI, that we are not excited about all the other customer segments. I think you heard both Jayshree and me talk about being very happy with how the year started and what we are seeing across all three customer segments. We are very happy with what we are seeing in enterprise, which I would not say is quite AI yet, so let us count that as the non-AI bucket that you referred to. Wait and see—we are reporting Q1. We will see how the year goes, but we are very confident across all three that we are seeing strong demand.

So I would leave it at: let us see where we get to in our future quarter guides.

Jayshree Ullal: I would agree with that. Just to remind everybody, we have raised now from about $10.5 billion last September to $11.5 billion. And yes, a high degree of that is AI, but we have aggressive commitments on the campus to go to a $1.25 billion year and continue to service and grow our data center and cloud just as well. So all three are growing, but certainly AI is taking the news headline. Regarding Blue Box adoption, one of the customer use cases you actually heard about was a move from white box to Blue Box.

The goal right now in their desire to move to Blue Boxes is: it works, number one; it scales, number two; it actually does the job for us with AMD accelerators, number three. They were very pleased with the diagnostics capability, the platform SDK—where we literally rewrite every piece of software and know all the Broadcom chip transistors very well—and the EOS features. Down the road, they may use some open NOSes as well. That would be a really good example of a Blue Box that has EOS today and may go down to other NOSes. We continue to see that, particularly in the Neo Clouds.

We have always seen a bit of that in the cloud and AI titans because they know how to work with open NOSes. So we have had that hybrid strategy always, but we are certainly seeing more of that in the Neo Clouds now.

Rudolph Araujo: Regina, we have time for one last question.

Operator: Our final question will come from the line of Ben Bollin with Cleveland Research. Please go ahead.

Ben Bollin: Good afternoon, everyone. Thank you for taking the question. Jayshree, you referenced inference a little bit earlier. You said it is kind of a smaller use case right now. I am interested to hear your thoughts on where you think enterprise is in terms of their ability to consume inference and create agents, and then how that develops over time and where you think the front-end networks and edge networks are today in their ability to support those use cases. Basically, do we get the sustained investment period because what you are seeing now bleeds and becomes much more significant in enterprise, and how long-lasting that might be?

Jayshree Ullal: I tend to agree with your thesis that while today we are in a training fever, we are moving toward a more distributed, generative AI paradigm with instances—which means you do not always need the GPU—where you are going to have high-end CPUs and a smaller set of parameters and tokens to manage, and you are going to have specific agentic AI use cases and applications. We are seeing very early trials and stages—nothing super big yet. They are not in the hundreds of thousands of GPUs like you see with the AI titans. But we are frequently seeing our customers in certain high-tech sectors want to deploy clusters that are a thousand, a few thousand—definitely not 10,000—in the low thousands.

They tend to be, as you said, not training but more inference-based, more agentic AI edge-inference based as well. I think we will see more of that. This is the calm before the storm, if you will. As AI gets more distributed, I think it does not need GPUs alone. It is going to need more high-performance compute. Many of them seem to feel to us like high-performance compute (HPC) use cases that are getting revived for AI. So I agree with your thesis, Ben. I think it is going to take a couple of years to fully happen.

Rudolph Araujo: This concludes the Arista Networks, Inc. first quarter 2026 earnings call. We have a presentation posted that provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today and for your interest in Arista Networks, Inc.

Operator: Thank you for joining, ladies and gentlemen. This concludes today’s call. You may now disconnect.