DATE
Thursday, May 7, 2026 at 4:30 p.m. ET
CALL PARTICIPANTS
- Chief Executive Officer & Co-Founder — F. Thomson Leighton
- Chief Financial Officer — Edward J. McGowan
TAKEAWAYS
- Total Revenue -- $1.07 billion, increasing 6% as reported and 4% in constant currency.
- Cloud Infrastructure Services (CIS) Revenue -- $95 million, up 40% as reported and 39% in constant currency.
- Security Revenue -- $590 million, rising 11% as reported and 9% in constant currency.
- Delivery and Other Cloud Applications Revenue -- $389 million, declining 7% as reported and 8% in constant currency, attributed to the Edgeio transaction wrap-around impact; the rate of decline is expected to moderate for the rest of the year.
- International Revenue -- $530 million, representing 49% of total revenue and growing 9% as reported and 5% in constant currency; foreign exchange contributed a $19 million year-over-year benefit.
- Non-GAAP Net Income -- $239 million, or $1.61 per diluted share, down 5% as reported and in constant currency, reflecting higher depreciation, expanded colocation, and increased headcount linked to CIS investments.
- Non-GAAP Operating Margin -- 26%, in line with company expectations.
- Capital Expenditures (CapEx) -- Accounted for 19% of revenue in the first quarter, coming in below guidance due to timing and favorable component pricing; CapEx delayed from Q1 will shift to Q2.
- Share Repurchases -- About 2 million shares repurchased during the quarter; $975 million remains authorized for buybacks.
- Cash, Cash Equivalents, and Marketable Securities -- $1.7 billion as of March 31.
- Landmark Customer Commitment -- Signed a seven-year, $1.8 billion CIS contract with a frontier model company, noted as the largest deal in company history; also referenced a prior $200 million four-year CIS deal from February with a major U.S. tech company.
- Cloud Growth Outlook -- Management now anticipates total company annual top-line revenue growth to reach double digits in 2027, driven by CIS momentum and large transactions.
- Q2 2026 Guidance -- Projected revenue of $1.075 billion to $1.1 billion, up 3%-5% as reported and in constant currency over Q2 2025; expected non-GAAP EPS of $1.45 to $1.65, non-GAAP operating margin of 25%-26%, and EBITDA margin of 38%-39%.
- Q2 CapEx Outlook -- Anticipated CapEx of $433 million to $453 million (40%-41% of revenue) as delayed Q1 spending and NVIDIA GPU deliveries occur.
- Full-Year 2026 Guidance: Revenue -- Expected in the $4.445 billion to $4.55 billion range, representing growth of 6%-8% as reported and 5%-8% in constant currency; foreign exchange projected to benefit full-year revenue by $20 million.
- Full-Year 2026 Guidance: CIS -- Raised to at least 50% year-over-year growth in constant currency, with continued momentum cited for CIS tied to AI scaling and recent transactions.
- Full-Year 2026 Guidance: Security -- Management continues to project high single-digit security revenue growth in constant currency.
- Full-Year 2026 Guidance: Delivery & Other Cloud Apps -- Mid-single-digit year-over-year decline expected in constant currency.
- Full-Year 2026 Guidance: Non-GAAP EPS -- Anticipated in the $6.40 to $7.15 range, based on an 18.5% non-GAAP tax rate and 147 million diluted shares.
- Full-Year 2026 Guidance: CapEx and Operating Margin -- CapEx expected to be 40%-42% of revenue, with $700 million for the $1.8 billion customer; non-GAAP operating margin projected at approximately 26% using current FX rates.
- GPU Capacity Expansion -- The current pipeline significantly exceeds existing and projected GPU inventory; management may place further GPU orders in 2H 2026 not reflected in current CapEx guidance.
- AI Security Tailwinds -- Security growth in Q1 was driven by high demand for web application firewall, API security, and Guardicore segmentation, with increasing customer urgency due to AI-driven attack threats including greater frequency and scale of zero-day attacks.
- WAF Global Deployment -- Web application firewall runs in 4,300 locations across 700 cities, designed to intercept large-scale, distributed attacks.
- Inference Cloud Coverage -- Akamai Inference Cloud spans all 4,300 locations; serverless functions run platform-wide, managed containers are active in over 100 cities, and full IaaS capabilities are deployed in several dozen cities (with a subset hosting new RTX 6000 GPUs).
- Customer Examples -- Q1 wins cited include contract expansions with major global video game and consumer electronics companies, telecom and retail leaders, and multiple enterprises increasing use of security and segmentation solutions.
- Recognition -- Achieved a 99% recommendation rating as Customers' Choice in Gartner Peer Insights for microsegmentation, and was the only provider named Customers' Choice for API protection in the same report series.
- Capital Resources -- Management affirmed that current operations and reserves, including a $1 billion untapped credit line, are sufficient to fund expansion needs related to CIS investments and pipeline growth.
SUMMARY
Akamai Technologies (AKAM) signed the largest contract in company history—a $1.8 billion, seven-year commitment for cloud infrastructure services with a leading AI-focused entity—further supporting its transition toward large-scale distributed compute for AI workloads. Management stated that revenue from this contract will begin ramping in the fourth quarter with approximately $20 million to $25 million in Q4, and that $700 million in related capital expenditures are planned for this year. Cloud infrastructure services revenue grew 40%, outpacing company-wide revenue growth and causing management to raise its CIS full-year guidance to at least 50% year over year in constant currency. The security business delivered 11% revenue growth, attributed largely to increased customer urgency for solutions such as web application firewall and API security amid rising AI-powered attacks. Guidance reflects expectations of double-digit annual revenue growth starting in 2027, with CapEx and operating expenses increasing ahead of revenue to support expansion. The company’s pipeline for GPU capacity significantly exceeds available hardware, which may necessitate additional mid-year capital commitments; future guidance will be updated accordingly if new orders are placed.
- Management clarified the $1.8 billion contract is a "straight committed deal over seven years" with revenues recognized ratably as capacity is brought online, and all associated CapEx is already factored into fiscal 2026 guidance.
- Foreign exchange provided a $19 million revenue benefit year over year in Q1 and is projected to add $20 million for the full year.
- CEO F. Thomson Leighton said, "I do not know of a comparable time where there is this much concern about what is going to happen with security, and also this much appreciation for what Akamai provides with our security platform."
- Non-GAAP net income and EPS both declined 5%, driven by increased investment in infrastructure and headcount tied to cloud growth initiatives.
- The Inference Cloud architecture now supports serverless functions at all 4,300 locations, managed containers in over 100 cities, and full IaaS resources in multiple regions equipped for both CPU and GPU workloads.
- Management reiterated that Akamai supports both long-term committed capacity and flexible, on-demand GPU/CPU service models, with customer preference currently driving stronger adoption of dedicated capacity agreements.
- The company maintains a fully funded expansion plan using existing liquidity and ongoing cash flow, with backup credit line access but no current need for outside capital to support forecast growth.
INDUSTRY GLOSSARY
- Cloud Infrastructure Services (CIS): Akamai’s portfolio of offerings for distributed compute, storage, and networking services tailored for AI inference and digital workloads at global scale.
- Inference Cloud: Akamai’s integrated AI compute platform deployed across its global network, providing distributed access to CPU and GPU resources for AI model inference workloads.
- Web Application Firewall (WAF): Security solution for protecting web applications by filtering and monitoring HTTP traffic to prevent vulnerabilities and attacks.
- Guardicore Segmentation: Akamai’s microsegmentation product for isolating and protecting critical enterprise resources within hybrid cloud and data center environments.
- Zero-Day Attack: Cyberattack targeting a software vulnerability that is unknown to the vendor and not yet patched, requiring immediate and adaptable security defenses.
- RTX 6000 GPUs: High-performance NVIDIA graphics processing units deployed by Akamai to accelerate AI and compute workloads in its distributed cloud.
- RPO (Remaining Performance Obligation): Total value of contracted but not yet recognized future revenue as committed in customer agreements.
Full Conference Call Transcript
F. Thomson Leighton: Thanks, Mark. I am pleased to report that Akamai is off to a strong start to the year. In just a few months, we have achieved major milestones for our cloud computing strategy, marking a definitive turning point in the growth and evolution of our business. Akamai has long been known for operating the world's largest distributed platform for delivery and security solutions at global scale, with a reputation for reliability, quality, and trust. Now we are leveraging our global footprint and years of experience supporting the world's largest enterprises to become an industry infrastructure provider for the AI-driven economy.
At GTC in March, we unveiled the industry's first global-scale implementation of NVIDIA's AI grid, and we announced the rollout of thousands of NVIDIA RTX 6000 GPUs. By integrating NVIDIA AI infrastructure into Akamai's massive distributed platform and by leveraging intelligent workload orchestration across our network, we intend to move the market for AI beyond isolated AI factories toward a unified distributed grid for AI inference.
By pushing AI inference to the edge, and combining it with our massive deployment of CPUs for delivery, security, and functions-as-a-service, we are enabling customers to run complex models within milliseconds of their end users, with the responsiveness of local compute and the scale of the global web, optimizing performance while reducing latency and cost. Those who attended GTC heard NVIDIA cite Akamai as a vital player in the industry's ecosystem for AI infrastructure, and we have seen very positive market reaction to our rapidly expanding capabilities from a wide spectrum of enterprises.
Today, we are very excited to announce another major milestone for our cloud computing strategy and the evolution of Akamai: the signing of a landmark seven-year $1.8 billion commitment for our cloud infrastructure services by a leading frontier model company. This is the largest customer deal in Akamai history, and it comes on the heels of the $200 million CIS deal we announced in February with a major U.S. tech company also at the forefront of the AI revolution. These leaders in AI have chosen Akamai because their AI workloads need the scale, performance, and reliability that our cloud platform provides. Many other enterprises have chosen Akamai for similar reasons.
For example, since the start of the year, a leading cloud and digital infrastructure provider in Asia chose our GPUs to support their low-latency live streaming media service. An AI company in the U.S. chose our GPU platform to power their voice-first solution to optimize business operations. An AI-powered video intelligence platform in India chose our GPU platform to scale video analytics and computer vision workloads for retailers. A consumer AI platform in the U.S. chose Akamai Cloud to run and scale live personalized agents. An AI commerce company in India chose our distributed inference platform to power their ad personalization engine.
And two premier global retail brands chose our distributed data capabilities to improve the performance and resilience of their online retail applications. But all this is just the beginning. We have a large and rapidly expanding pipeline of prospects that are looking to Akamai for cloud solutions, including some with very large needs. To satisfy this strong and growing demand for our cloud infrastructure services, we expect to continue to build out both our physical infrastructure and our cloud sales and support teams. And as Ed will talk about in a few minutes, we now anticipate significant acceleration of our overall revenue growth heading into 2027 and beyond.
Turning to security, I am pleased to report that Q1 was also strong for our security portfolio, where revenue grew 11% year-over-year as reported and 9% in constant currency. Our security growth was led by strong demand for our market-leading web application firewall, API security, and Guardicore segmentation solutions. Our WAF, in particular, is seeing growing interest from customers eager to deploy the latest defenses for vulnerabilities that could be exposed by the ever-strengthening frontier models and AI-powered attacks. Frontier models are changing vulnerability management, and we are proud to be one of the industry's must-have security providers, partnering with the frontier model companies to help ensure the safe, rapid deployment of AI-enhanced defenses.
With our early access to their vulnerability detection programs, we are applying our expertise to help keep major enterprises and critical infrastructure safe. Of course, and this is important to understand, attackers will also be using more advanced AI technology to develop even more potent ways to cause harm. This means that major enterprises will need Akamai security solutions even more than before. For example, there are many legacy systems and billions of deployed devices that cannot be patched. They will become a lot more vulnerable with the advances in AI, and they will need our security solutions to keep them safe.
For the devices and systems that can be patched, the patching process still takes time, often days or weeks, and they will need our protection until that is done. We have seen this happen before when zero-day attacks emerged, and with the advances in AI, we can expect zero-day attacks to occur much more frequently. There is also an increasing challenge with scale. Because AI is enabling attackers to take over more devices and create enormous bot armies, we are now seeing attacks with unprecedented volumes. Just in the last few weeks, we neutralized a series of app-layer attacks with millions of malicious requests per second from millions of widely distributed IPs.
Akamai can defend against such attacks because of our widely distributed platform. Our WAF runs in 4,300 locations across 700 cities to intercept the attack traffic right where it enters the Internet and well before it can coalesce onto the target. Having a great WAF with the needed defenses for the latest attacks is obviously important. But that alone is not enough in the coming age of AI. The WAFs need to be deployed across a vast distributed platform, and this need provides a unique advantage for Akamai when compared to the competition. In summary, we believe that Akamai's security portfolio will be needed more than ever before as attackers take advantage of the advances in AI.
That is because of our massive platform scale to absorb attacks, our unparalleled access to real-time attack data, our tight integration with the early warning ecosystem to provide up-to-the-minute defenses for the latest zero-day attacks, our large and very experienced human security operations team that is equipped with the latest AI tools to enhance visibility and minimize response times, and our innovative, rapidly evolving, and AI-enabled product suite to help prevent penetrations and to limit the damage when penetrations do occur.
Customers who selected Akamai in Q1 for that kind of protection for their APIs included one of the largest telecom groups in Africa, a major investment management company in South America, one of the premier investment banks in the Middle East, and one of the world's leading fintech companies in the U.S. Customers who added or expanded their use of our Guardicore segmentation solution in Q1 included the leading telecom carrier and media company in South Korea, one of the largest banking groups in Europe, and a leading healthcare company in the U.S. Many of the large renewals we signed in Q1 also included expansions of our security services.
For example, after we protected one of America's leading retailers from unwanted bots during the holiday shopping season, they increased the use of our services in a contract worth $24 million. We signed an expansion contract worth $80 million over two years with one of the world's largest video game companies. We signed an expansion contract worth more than $20 million with a global consumer electronics company in Korea. And one of the largest global professional services companies in the world expanded their use of our ZTNA solution to secure large-scale remote access as they move critical applications to a zero trust model. Our security solutions continue to receive top recognitions from the major analyst firms for their effectiveness.
For example, last quarter, Akamai achieved a 99% recommendation rating as Customers' Choice in Gartner's Peer Insights report on microsegmentation. And last month, Akamai was the only provider to be named Customers' Choice in Gartner's Peer Insights report on API protection. In closing, we are thrilled by the way our growth strategy has taken hold and is generating transformative opportunities for our business. We believe that Akamai is uniquely positioned to enable and benefit from the development of the AI-driven economy. By bringing powerful compute directly to the data and the users at the edge, Akamai is enabling and securing the next generation of agentic AI.
With each quarter, the massive opportunity we see ahead becomes more evident, and we are making bold investments to capitalize on that opportunity and enable Akamai to do for cloud and AI what we have done for security and CDN, to generate significant future growth for our business. Now I will turn the call over to Ed for more on our results and our outlook for Q2 and the year. Ed?
Edward J. McGowan: Thank you, Tom. Before I get started, and to build on Tom's remarks, I want to personally underscore my excitement regarding the $1.8 billion new customer win announced today. This is a powerful validation of the Akamai value proposition in the age of AI and a clear indicator of the scale at which we can operate. To fully capitalize on this momentum and support the accelerated growth we anticipate, we will be investing slightly ahead of revenue. You will see this reflected in the updated capital expenditure and operating margin outlook I will discuss during the guidance portion of my remarks.
We view these investments in our CIS portfolio as critical to ensure we have the foundation to meet the significant demand we see on the horizon. Also, driven by today's announced $1.8 billion win, the $200 million four-year CIS deal we announced last quarter, and our rapidly accelerating pipeline, we now expect total company annual top-line revenue growth to reach double digits in 2027. We look forward to sharing more details in the coming quarters. Clearly, this is an incredibly exciting time for Akamai. With that, let us dive into the Q1 results. We delivered strong first quarter results with total revenue of $1.074 billion, which was up 6% year-over-year as reported and 4% in constant currency.
Cloud infrastructure services (CIS) revenue got off to a robust start to the year with revenue of $95 million, up 40% year-over-year as reported and 39% in constant currency. As Tom noted, we are seeing CIS wins across a wide spectrum of industries, geographies, and use cases. Even more encouraging, the pipeline for AI-specific use cases is building rapidly. We also maintained very strong momentum in security with revenue of $590 million, up 11% year-over-year as reported and 9% in constant currency. The strength in the first quarter continued to be driven by our fast-growing API security and Guardicore segmentation solutions along with strong growth from our largest product, web application firewall. Moving to delivery and other cloud applications.
Revenue was $389 million, down 7% year-over-year as reported and down 8% in constant currency. These results were in line with expectations, driven by the wrap-around impact of the Edgeio transaction in 2025. We expect this effect and the rate of decline to moderate throughout the remainder of the year. International revenue was $530 million, up 9% year-over-year, or up 5% in constant currency, representing 49% of total revenue in Q1. Foreign exchange fluctuations had a positive impact on revenue of $2 million on a sequential basis and a positive $19 million on a year-over-year basis. Moving to profitability.
In Q1, we generated non-GAAP net income of $239 million, or $1.61 of earnings per diluted share, down 5% year-over-year as reported and in constant currency. These results include our expanded colocation investments, higher depreciation, and increased headcount costs, all tied to our strategic investment in cloud infrastructure services during the first quarter. Our non-GAAP operating margin for Q1 was 26%, in line with our expectations. We expect operating margin to remain in this range for the remainder of this year as we ramp up our investment to capture the exciting growth opportunities ahead of us. Our Q1 CapEx was $[inaudible], or 19% of revenue. First quarter CapEx was slightly below our guidance, primarily driven by timing and favorable pricing.
Specifically, some expenditures shifted from Q1 into Q2, and we benefited from some lower-than-expected component costs. Moving to cash and our capital allocation strategy. During the first quarter, we spent approximately $[inaudible] to buy back approximately 2 million shares. We ended the first quarter with approximately $975 million remaining on our current repurchase authorization. Our intention with capital allocation remains the same: to continue buying back shares to offset dilution from employee equity programs over time and to be opportunistic in both M&A and share repurchases. As of March 31, we had approximately $1.7 billion of cash, cash equivalents, and marketable securities. Now, before I provide Q2 and full-year 2026 guidance, I want to touch on a few housekeeping items.
First, for Q2, CapEx is expected to jump significantly as we start to take delivery of the NVIDIA GPUs we discussed on our last quarterly earnings call, and we catch up on some of the CapEx that pushed from Q1 into Q2. Second, we expect to see an increase in operating expenses in the second quarter due primarily to continued investments in go-to-market and the impact of our annual employee merit cycle that went into effect in April. We anticipate revenue from the $1.8 billion customer win to start to ramp in Q4, and we expect to generate approximately $20 million to $25 million of revenue in the fourth quarter.
Finally, regarding CapEx for this win, we expect to spend a total of approximately $800 million to $825 million over the next twelve months to support this customer. We expect to deploy roughly $700 million of that total in 2026 with the remaining balance falling into 2027. Moving now to guidance. For the second quarter, we are projecting revenue in the range of $1.075 billion to $1.1 billion, up 3% to 5% as reported and in constant currency over Q2 2025. If current spot rates hold, foreign exchange fluctuations are expected to have no material impact on Q2 revenue compared to Q1 levels, and a positive $2 million impact year-over-year.
At these revenue levels, we expect cash gross margins of approximately 70% to 71%. Gross margin is impacted by the significant increase in colocation as we accelerate the growth in our CIS business. Q2 non-GAAP operating expenses are projected to be $346 million to $357 million. We anticipate Q2 EBITDA margin of approximately 38% to 39%. We expect non-GAAP depreciation expense of $140 million to $144 million. We expect non-GAAP operating margin of approximately 25% to 26%. And with the overall revenue and spend configuration I just outlined, we expect Q2 non-GAAP EPS in the range of $1.45 to $1.65.
This EPS guidance assumes taxes of $47 million to $54 million based on an estimated quarterly non-GAAP tax rate of approximately 18.5%. It also reflects a fully diluted share count of approximately 146 million shares. Moving to CapEx. For the reasons I highlighted earlier, we expect to spend approximately $433 million to $453 million in the second quarter. This represents approximately 40% to 41% of total revenue. Looking ahead to the full year 2026, we expect revenue of $4.445 billion to $4.55 billion, which is up 6% to 8% as reported and up 5% to 8% in constant currency. For cloud infrastructure services, we are raising our outlook to at least 50% year-over-year growth in constant currency.
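The Q2 guidance figures above hang together arithmetically. The back-of-the-envelope sketch below recomputes the implied tax dollars from the guided EPS range, share count, and tax rate, and the CapEx intensity from the guided dollar ranges. The simple EPS-to-tax gross-up is an assumption of this sketch (it ignores below-the-line items such as interest income), so the results are approximate, not the company's actual model.

```python
# Rough sanity check of Q2 FY2026 guidance figures from the call.
# Assumption: non-GAAP EPS ~= after-tax net income / diluted shares,
# ignoring below-the-line items, so results are approximate.

def implied_tax_millions(eps, diluted_shares_m, tax_rate):
    """Back out the tax dollars implied by an EPS figure."""
    net_income = eps * diluted_shares_m      # after-tax, in $M
    pretax = net_income / (1 - tax_rate)     # gross up to pre-tax income
    return pretax * tax_rate                 # tax dollars, in $M

SHARES_Q2 = 146   # million diluted shares (guided)
TAX_RATE = 0.185  # non-GAAP effective tax rate (guided)

tax_low = implied_tax_millions(1.45, SHARES_Q2, TAX_RATE)   # roughly $48M
tax_high = implied_tax_millions(1.65, SHARES_Q2, TAX_RATE)  # roughly $55M

# CapEx intensity: $433M-$453M against $1,075M-$1,100M guided revenue
capex_pct_low = 433 / 1_075    # roughly 40.3%
capex_pct_high = 453 / 1_100   # roughly 41.2%
```

The implied tax range lands close to the guided $47 million to $54 million, and the CapEx percentages recover the stated 40% to 41% of revenue.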
We expect momentum in CIS to continue to build throughout 2026, driven mainly by the scaling of our AI opportunities and the impact of the two very large transactions we announced in Q4 and today. Also, we continue to expect security revenue growth in the high single digits on a constant currency basis in 2026. And for delivery and other cloud apps, we continue to expect a decline in the mid-single digits year-over-year on a constant currency basis. At current spot rates, our guidance assumes foreign exchange will have a positive $20 million impact on revenue in 2026 on a year-over-year basis. Moving to operating margin.
For 2026, we are estimating a non-GAAP operating margin of approximately 26% as measured in today's FX rates. Turning to CapEx. At this time, we anticipate our full-year capital will be approximately 40% to 42% of total revenue, including the $700 million impact from the $1.8 billion contract we mentioned earlier. Before I move on, I want to provide some additional color on our CapEx outlook. As Tom noted, the demand we are seeing for CIS, including our GPU deployments, is exceptional. Our current pipeline for GPUs significantly exceeds our existing and projected inventory, meaning we may place additional GPU orders in the second half of the year to meet this demand.
This is not factored into our current annual CapEx guide. We will update CapEx guidance on a subsequent earnings call if we place another GPU order before year-end. Moving to EPS. For full-year 2026, we expect non-GAAP earnings per diluted share in the range of $6.40 to $7.15. This EPS guidance includes the impact from the very large win. This non-GAAP earnings guidance is based on a non-GAAP effective tax rate of approximately 18.5% and a fully diluted share count of approximately 147 million shares. With that, I will wrap things up, and Tom and I are happy to take your questions. Operator?
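As a quick arithmetic check, the full-year guidance can be converted into implied dollar amounts. This is purely illustrative math on the numbers stated in the call (EPS times share count ignores rounding in the guided figures, so treat the outputs as approximations).

```python
# Convert full-year 2026 guidance ranges into implied dollar figures.
# Purely illustrative arithmetic on the numbers stated in the call.

SHARES_FY = 147                 # million diluted shares (guided)
eps_low, eps_high = 6.40, 7.15  # guided non-GAAP EPS range

# Implied non-GAAP net income, in $M
ni_low = eps_low * SHARES_FY    # roughly $941M
ni_high = eps_high * SHARES_FY  # roughly $1,051M

# Implied CapEx dollars from 40%-42% of $4,445M-$4,550M guided revenue
capex_low = 0.40 * 4_445        # roughly $1,778M
capex_high = 0.42 * 4_550       # roughly $1,911M
```

On these assumptions, full-year CapEx of roughly $1.8 billion to $1.9 billion is consistent with the $700 million earmarked for the $1.8 billion contract making up a large share of the build-out.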
Operator: We will now begin the question and answer session. To ask a question, you may press star and then one on your touch-tone telephone. If you are using a speakerphone, please pick up your handset before pressing the keys. If at any time your question has been addressed and you would like to withdraw your question, please press star and then two. At this time, we will pause momentarily to assemble our roster. We have the first question from the line of Roger Boyd from UBS. Please go ahead.
Roger Boyd: Congrats on the landmark deal there. Maybe if you can, Tom, just broad strokes about the competitive set to win that deal. Are you going toe to toe with hyperscalers or neoclouds? And anything you can provide on the use cases—Inference, is it agentic workloads? And when you think about your compute-enabled PoPs, how is this customer leveraging Akamai's network as a whole? Thanks.
F. Thomson Leighton: I cannot give any more details about this specific deal. But in general, yes, we do compete with the hyperscalers and the neoclouds with our cloud infrastructure services. That is the primary competition. They select Akamai because of our proven ability to manage and scale complex distributed systems, our ability to get the necessary data center space in locations around the globe, to interconnect that with the world's largest and best performing delivery network and leading security solutions. We offer the best in terms of latency and scalability. We probably deal with more data center companies than anybody, with being in 4,300 locations across 700 cities and 130 countries. So, yes, we have significant competition.
Every deal is competitive, but we also have unique capabilities, which is I think why our pipeline is so strong and why we are winning some very large deals.
Roger Boyd: Excellent. And then on security, I wonder if you could unpack what you are seeing from a demand perspective there. Nice result in the first quarter. What are you seeing around conversion rates, sales cycles? Are you seeing more urgency from organizations that are thinking about ways to reduce the blast radius and defend against an AI-fueled attack landscape? Thanks.
F. Thomson Leighton: I do not think I have ever seen the CSOs more agitated and feeling more of a sense of urgency than they are now. Over the last several weeks, couple of months, I have had the chance to meet with CSOs at many of the world's biggest companies—in many cases also the CEOs and senior executives—and they are very concerned about what happens when the attackers get access to advanced AI with the latest AI frontier models, which it seems that they will. This is going to uncover a lot more vulnerabilities.
We are going to see the equivalent of a lot more zero days, and they are literally scrambling now, in many cases, to make sure all their applications, their agents, their APIs are protected by Akamai. You can imagine, most of the world's major banks rely on us for security, and they are looking at a pretty big wave of new attacks coming their way. I do not know of a comparable time where there is this much concern about what is going to happen with security, and also this much appreciation for what Akamai provides with our security platform.
Operator: Thank you. We have the next question from the line of Patrick Edwin Colville from Scotiabank. Please go ahead.
Patrick Edwin Colville: Thank you so much for taking my question. This one is for Tom. When I think about Akamai, the value prop for the last thirty plus years has been the distributed architecture—700 cities, 130 countries. When I think about this mega deal, is that a highly distributed use case, or should we think about it as being served from a few, like sub-10 type data centers?
F. Thomson Leighton: I am not at liberty to talk about the recent deal. However, I think when you are thinking about Akamai's value proposition, you hit a very key point with our really unparalleled distributed architecture. I did reference a bunch of use cases in the prepared remarks, and they very much rely on our distributed platform, where you want to get the agents and the applications, the business logic, close to users, close to the data, so you get low latency and scalability. Particularly anything to do with video processing or video generation takes a lot of scale, and Akamai is unique there. So, absolutely, what we are able to offer is very compelling.
Patrick Edwin Colville: Thanks for that. My follow-up is for Ed. You gave us a CapEx guide, then you made a subtle point that you might have to increase CapEx further. Help us understand the nuances of why there might be an increase in CapEx midyear and what that might mean?
Edward J. McGowan: Thanks for the question, Patrick. What I mentioned was we have a very, very strong pipeline for our GPU platform, and we are just starting to get the bulk of those chips up and running now, and we have a very large pipeline. It exceeds what we have in inventory. Obviously, we want to prosecute that pipeline—start winning those deals, converting that into contracts, etc. The reason I hedged a little bit is, one, we need to fulfill that pipeline, and, two, there is some time that it takes to get the chips. Even if we were to place an order, it may slip into next year.
I want to give it another quarter, and if, in fact, we are in a position to place an order and receive that by year-end, we will certainly do that and let you know. I see that as a very bullish comment. I just did not want to surprise you with another, you know, whatever it is—couple hundred million or whatever the order may be—without at least giving you some color behind that.
Operator: Thank you. We have the next question on the line of John DiFucci from Guggenheim Securities. Please go ahead.
John DiFucci: Thanks for taking my question. My first question is for Ed, and I have a quick follow-up for Tom. Ed, thanks for all the detail on CapEx. But when I think about the CapEx for this mega deal—and I think Patrick was going here—this is over a long time, right, seven years? For example, right now we are seeing higher memory costs than we might have expected a year ago. When you locked in this deal, did you also lock in the supply, or are you exposed if something like that were to happen again, say, two years from now—higher prices?
Edward J. McGowan: Great question. I was fortunate enough to work very closely with the team on both sides of this transaction. We have been able to get the supply chain ready. We anticipate receiving all the goods that we need to deliver this service over the seven years within the next twelve months, with the majority of it this year, as I laid out in the CapEx cadence. There is always the potential for some slippage and delays, but we have mechanisms in our contracts to deal with scenarios where, for example, prices were to go up six months from now. We have taken that into consideration.
From a revenue perspective, the way to think about this deal is it is a set amount of capacity that we are deploying, and there is no usage variability to it. It is a straight committed deal over seven years. As soon as we ramp all the capacity up, we will start taking the revenue for a full year. I expect a little bit this year and then next year we will get a partial year as we receive the remainder of what is to be deployed, and then from there it will go on for the remaining six-plus years.
John DiFucci: So even though it is consumption-based infrastructure, it will kind of look like a subscription. Is that accurate?
Edward J. McGowan: Exactly. That is exactly the way to think about it.
John DiFucci: Awesome. Thank you. And Tom, a component of your delivery business is video streaming. In March, we saw OpenAI confirm they shut down their AI video generation system, Sora. Do you expect that to have any effect on your delivery or compute business forecast?
F. Thomson Leighton: No. We partner with OpenAI on security vulnerabilities—helping define them and protecting our customers from the associated attacks—but OpenAI is not and has not been a customer of Akamai. So no impact on us at all.
Operator: Thank you. We have the next question on the line of Jackson Edmund Ader from KeyBanc Capital Markets. Please go ahead.
Aidan Daniels: Hi, this is Aidan Daniels on for Jackson Edmund Ader. Thanks for taking our question. With this big deal, as you allocate capacity going forward, how can we think about the impact on any amount of on-demand GPU capacity you are able to offer going forward? How are you balancing what you have committed from this deal with maintaining flexibility for newer incremental demand going forward?
F. Thomson Leighton: We support both on-demand, per-token or per-VM-hour access to our platform, and also we support large tranche deals. It is not really a matter at this point of trading off. As we need more GPUs, as Ed said, we would purchase more.
Aidan Daniels: Thanks. And then just one quick follow-up. I know you cannot really talk too much about the deal, but how can we think about the proportion of whether it is CPU or more of the GPU inference cloud going forward? Is there a framework we can think about?
F. Thomson Leighton: We cannot comment on this deal. However, in general, with inference and AI, you need both, really. Part of the value we provide is that we can help provide the computational resource that is most appropriate for the workload you have, which might be CPU, might be GPU, because you want to be as efficient as possible and as close as possible to the user so you get the best performance. It is a mix, and every application is different in the mix of CPU versus GPU that it needs.
Operator: Thank you. We have the next question from the line of Fatima Boolani from Citi. Please go ahead.
Fatima Boolani: Good afternoon. Thank you for taking my questions. A higher-level strategic question. You have opted to take more of a dedicated capacity approach in terms of satisfying demand and supply constraints out there. I wanted to dig deeper into why, simply because the spot rates and the market rates for what otherwise could be almost entirely a rental or GPU-as-a-service business are significantly more attractive. So I wanted to get the vision and the decision-making calculus around steering the platform more towards larger customers, longer commits, and more dedicated capacity. And then I have a follow-up.
F. Thomson Leighton: We do both. The larger, bigger deals with long-term commits are more attractive in many ways. You have the commit, and in the big deals, yes, the pricing would be lower. But we also support the on-demand model where you can buy it by the token or the hour, and you get a little bit higher pricing. There can be more expense associated with that—getting the customer on, having a rep engaged in the account. Both are attractive, and we support both. It is not a matter of us doing one or the other.
Edward J. McGowan: The one thing I would add is the customers are really driving that. If I look at our pipeline, a lot of our customers want to have dedicated capacity, say a dedicated number of GPUs, because there is a scarcity in the marketplace. Rather than going on a consumption basis, they can get slightly better pricing and lock in that capacity for themselves. It is really a market-driven thing more than anything.
Fatima Boolani: I appreciate that. And, Ed, since I have you—you telegraphed that should the pipeline continue to grow and positively morph in the way you are seeing, you will be open to continuing to increase CapEx and bring megawatts online. What about sources of funds to finance these investments? Is that something you feel you can intrinsically do from running the business, or should we expect other sources of capital to be tapped?
Edward J. McGowan: So far, no issues financing these buildouts from our own capital. We are obviously a company that is very profitable and produces a lot of cash. In the years when we are investing big, cash flow will be a bit lower. But these things have phenomenal free cash flow after you do the initial deployment, which is one of the attractive things. In cash and equivalents, we have $1.7 billion on the books today. We also have a line of credit of $1 billion if we need to tap it. And we have excellent credit and would have no problem raising money in the capital markets if we need to.
Right now, we have not announced anything there. If we continue to get large deals and need capital, we will certainly look to do that. But so far, we have been able to use our own funds.
Operator: Thank you. We have the next question on the line of Arti Vula from JPMorgan, for Mark Murphy. Please go ahead.
Arti Vula: Great to see the momentum you are having with the large CIS deals with companies on the AI technology frontier. You had a large deal last quarter, another one this quarter that dwarfed the one before it. At a high level, can you help us understand from your perspective—has this been brewing for a while in the pipeline, or have these come together faster than expected? What has changed that has brought a lot of this business to your doorstep seemingly pretty quickly? And as a quick follow-up, as you are dedicating financial and operational resources to CIS and these large deals, does it change how you are thinking about other business segments?
F. Thomson Leighton: This has been the strategy all along, and we are very pleased to be executing against it. The goal has been to deploy a distributed inference platform and distributed compute platform that would be desired by enterprises across the spectrum, including many large customers. Akamai's customer base features many of the world's largest enterprises. As we have talked about before, they spend 10x or more on compute than they do on our traditional services—delivery and security. This is exactly what we said we were going to do, and now we are delivering those results. The platform is to a point where we can do that, and I think you will see more of this going forward.
Operator: We have the next question from the line of Sanjit Singh from Morgan Stanley. Please go ahead.
Sanjit Singh: Congrats on the biggest deal in company history. On that point—this might be a trivial question—but in terms of this $1.8 billion contract, is that more of a public cloud opportunity, because I know part of the public cloud business also has a GPU component, or was this specifically for Akamai Inference Cloud? I have a follow-up.
F. Thomson Leighton: We really cannot talk more about this particular deal. But there are a lot of companies where we have signed contracts that we did talk about across the spectrum. Those deals for our inference cloud and our cloud capabilities are for our GPUs and our CPUs. Our ability is to bring the right hardware for the particular application and have it located where you get the best benefit for that application.
Sanjit Singh: That is fair enough. My question goes to the delivery business. There is a lot of debate about a potential new lever for growth in CDN and delivery in a world where you have millions, potentially billions, of agents running around, calling tools, executing tasks, doing web searches. Has the team internally revisited its thesis around the secular growth prospects in delivery, or is it still a business that you are mostly looking to harvest for profitability to fund the opportunities in security and compute?
F. Thomson Leighton: Great question. When you look at the proliferation of agents and what is coming, the biggest driver for growth is going to be the compute platform—the cloud platform that supports that—and we are well set up to do that. Next, you have a big security issue because AI and the agents are a whole new surface of vulnerability. Not only do you need your web application firewall and API security, you need special security for AI. We get a real tailwind there.
Also, the agents—you have to interpret what an agent is, who is behind it, what they want to do when you are delivering or protecting an application or a site or another agent, and the response you give is tailored to what the customer wants you to do when an agent of a certain flavor comes and interacts with you. We developed a lot of capabilities there, which generally fall within our security capabilities. In terms of delivery, there will be some traffic that used to be human-generated, now agent-generated. That does not make a huge swing in the amount of bits you are delivering.
That starts to change if you have agents dealing with video—generating video—like you go to a commerce site and the user wants to see what they look like in a sweater they are thinking of buying, and you generate a video showing them wearing the sweater. That generates a lot of traffic. We are at the very early days of seeing things like that. They are being experimented with now. That could generate more traffic for delivery. But the biggest impact for us is in the cloud business, and next in the security business.
Delivery is really important, very synergistic with our whole platform approach, and it generates a lot of cash for us, and we are plowing a lot of that cash into the growth of the cloud business.
Operator: Understood. Thank you. We have the next question from the line of Michael Joseph Cikos from Needham. Please go ahead.
Michael Joseph Cikos: Thanks for taking the questions, and congratulations on the strong quarter and the customer win. I want to make sure I understand the mechanics of this deal. You signed a seven-year $1.8 billion commitment. Can we expect the full $1.8 billion to show up in RPO, or does that include anything as far as potential renewals? Is that all take-or-pay? Anything to make sure we are understanding the mechanics.
Edward J. McGowan: I touched on this a little bit earlier, and there was a follow-up question around dedicated capacity versus pay-by-the-hour. This is more of the dedicated capacity. As soon as we get the capacity set up, we will take the revenue ratably over the contract. As I said, we will get some revenue this year and a partial year next year as we are still building up and getting the capacity live. In terms of RPO, we will see most of that show up next quarter, and then by the time we get everything delivered, it will be all in RPO eventually.
There are some odd mechanics in the first twelve months around how we are receiving the goods, plus a pricing mechanism to handle prices going up or down, so there is a little bit of nuance there. But once we get this fully up and running, you will see it in RPO—there will be some amount next quarter, and then it will build from there.
Michael Joseph Cikos: Thanks for that. And then for Tom, it is great to hear that your largest security product, WAF, is seeing stronger growth, which I would not have expected. Can you touch on that one more time as far as what is driving it? Is it really this heightened environment, or is there something else?
F. Thomson Leighton: There are real advances in AI, and it is getting much better at finding vulnerabilities and helping the attacker take over devices and penetrate enterprises. You need our defenses now more than ever before. There are billions of devices out there that you cannot patch, and now the adversary can find ways into those devices and take them over. As a result, we are seeing attacks much bigger than we have seen before—literally application-layer attacks from millions of distributed IPs with millions of attacks on a target per second. You cannot defend against that with just a WAF in a data center or anything close.
You need the vast platform that we have to be able to intercept all that traffic and deal with it because you have to separate the bad stuff from the good stuff, and there is a huge amount of bad stuff now. Our platform is needed more than ever before for our security services, and our customers know that. There is a heightened sense of urgency now because they know the attacks are getting more capable due to AI and larger in size because attackers can take over all these devices and launch attacks from many more locations. That is why our web application firewall is in a lot more demand.
AI helps on the defense, but it does not solve that problem. Net-net, this is a very challenging time for CSOs, and that is why they are turning to us.
Operator: Thank you. We have the next question from the line of Frank Garrett Louthan from Raymond James. Please go ahead.
Frank Garrett Louthan: Just a follow-up on the question about the $1.8 billion—how that is being booked. Is all of that going to come in as revenue? Will any of that be counted as paid-for upfront CapEx or something like that? And then also, how many locations do you have Inference Cloud built out to currently, and what is the plan? Thank you.
Edward J. McGowan: Yes, it is all revenue. There is no offset to CapEx or anything like that. It is all revenue.
F. Thomson Leighton: And to the second part of the question, the Inference Cloud covers all of our 4,300 locations. We have functions-as-a-service running in a serverless way in all 4,300 locations. We have our managed container service running in well over 100 cities and conceivably can run in all 700 cities, but active in well over 100 today. We have full IaaS capabilities in several dozen cities, and a couple dozen of those are equipped with the new RTX 6000 GPUs. The goal is to have all this orchestrated so that when an application or an agent needs to be run, it is run on the most computationally efficient resource.
If you can do it on an edge server with the existing CPU—fabulous, fast, very low cost. If you can do it on a container in the same city on a CPU—great. If you need a group of GPUs in one of those couple dozen locations—okay. You want it to be on the most efficient resource, to be close to the user, and to already be ready to go—you do not want to have to spin it up in response to a request. That is what our orchestration layer is designed to make possible. This fits with NVIDIA’s AI grid vision—think of AI like you would an electrical grid—and that is what Akamai is building.
Operator: Thank you. We have the next question from the line of William Power from Baird. Please go ahead.
William Power: Thank you, and congrats on the massive deal. Two questions. First, a clarification, perhaps, Ed. When you talk about needing additional GPUs, do you need more GPUs to satisfy the new deal, or is that more related to the building pipeline? And while timing is uncertain as to when the GPUs might be available, is there a framework for what we are talking about in terms of overall cost?
Edward J. McGowan: As noted in the prepared remarks, all the CapEx that we need to satisfy the $1.8 billion deal is in the guidance. That is separate from the comment I made around the additional GPU purchase, which was really tied to how we are doing with the pipeline and how quickly we can execute on that. There is always the question of delivery timing. We will give you more information as it develops. We are seeing a very strong pipeline with some very large opportunities—some customers that want to start with a couple hundred GPUs, some that want to start with a thousand or more. It is growing every day.
The last incremental GPU order we discussed publicly was around $250 million in CapEx. I do not have anything new to size today, but hopefully I will be telling you it is a really big number because we have significant demand.
William Power: Okay. And then any way to frame how you are thinking about gross margin and operating margin impacts? I know 2027 sounds like it is still a partial year. As you look into 2028, how does this impact the overall financial model relative to where you are today?
Edward J. McGowan: At a high level, especially for larger deals that are dedicated-capacity oriented, the biggest cost driver is depreciation over the period of time. The costs that go into cash gross margin are much less—colocation, some bandwidth/networking, and some people costs. These scale pretty well. Over time, you would expect cash gross margin to improve, and EBITDA margin to expand a bit. From an operating margin perspective, it depends on the mix. We are willing to do some very large deals at operating margins that might be below our 30% company operating margin, while GPU-as-a-service rented by the hour is much higher than the company average. You get a lot of scale across OpEx as we take on larger customers.
We are going to focus over the next year or two on capitalizing on this growth, so we will not be in a margin-expansion mode right now. At some point, that will happen naturally and free cash flow margins will improve. We are excited about the growth opportunity and will continue to invest to go get it.
Operator: Thank you. We have the next question from the line of James Fish from Piper Sandler. Please go ahead.
James Fish: Given what you have discussed around power in the past—with the large sites having, I think, five to ten megawatts and smaller sites a fraction of that—it puts you above 300 megawatts. But you do not yet have revenue that aligns with that. How much of that power is for noncompute services, and is that why you need to bring on, from what I can tell, roughly another 40 megawatts just for this deal alone? It does not seem to match, and you should be able to support this if that power is all allocated to compute. Can you walk us through your megawatts and the plan by 2027?
Edward J. McGowan: Let me start by saying your math is not right in terms of what would be required to deliver this particular deal—it is significantly lower than that. In terms of our capacity, if you think about what uses the megawatts, the CDN and security business is a small fraction—think kilowatts in some cases, maybe a megawatt or two in some of the big CDN deployments. There is not a ton of massive power required to run the CDN business. For the compute business, it is a lot greater, especially when you get customers who want, say, a few thousand GPUs in a particular location or in 20 to 30 locations and they have a lot of CPU.
You see a lot more need for power there. Our typical deployments for some of our larger core compute locations—we talked about having 40 core compute locations—are in the five-to-ten-megawatt range, expandable to 20 to 30. There is plenty of opportunity for us to get additional colo. We expect to light up a lot more going forward. If the concern is access to enough power or colo, that is not a concern of ours right now at all. We have great relationships and are a very attractive client for data center providers. We have excellent credit. We are not a do-it-yourselfer like a hyperscaler. We have much better credit than some of the neoclouds.
We do take significant chunks of colo and in some cases help our colo partners build out. The power dynamics for each product are different—GPUs take more power than CPUs—and the type of equipment matters, with some new hardware being incredibly power efficient. We factor power into any deal that we do and ensure we are not taking on anything that is not profitable.
James Fish: Got it. Makes sense. And on the security side, normally you give us API and zero trust versus core. How did that trend? And then how did compute in the quarter trend?
Edward J. McGowan: With security, we did not break out API and Guardicore this quarter. We did say it was the majority of what is driving growth. The growth rates are similar to what we had last quarter, noting that last quarter had a fair bit of license revenue—so apples to apples, growth is roughly the same when you back out license impacts. On compute, the way to think about enterprise compute is CIS, which we break out, and what we used to call application services is included inside delivery and app services. CIS grew 40% year-over-year, and we expect that to accelerate.
Operator: Thank you. We have the next question from the line of Jonathan Frank Ho from William Blair and Company. Please go ahead.
Jonathan Frank Ho: Good afternoon, and let me echo my congratulations as well. Given the types of mega customers you are bringing onto your platform, is there more opportunity to upsell to them once they are on your platform? Are there potential additional services, or could they come back if they continue to expand their growth as well?
F. Thomson Leighton: As you know, the demand for AI is rapidly increasing. We are really early on there, and I would expect there is plenty of room to grow the existing base and, of course, add other customers of that scale.
Operator: Thank you. We have time for one last question from the line of Jeffrey Van Rhee from Craig-Hallum. Please go ahead.
Jeffrey Van Rhee: Thanks for taking the question. Two quick ones. First, Tom, there is a lot of blowback nationally against AI data centers and the power consumption correlated to them. As you are stepping into deals of this magnitude, how do you think about staying out of the crosshairs of some of the community-wide pushback on the broader AI compute environment?
F. Thomson Leighton: I do not think we have a profile in the popular press anything like the giant hyperscalers. I do not think that is really an issue for us. We are not worried about that yet—maybe that will be a good problem to have once we are much larger than we are today.
Jeffrey Van Rhee: And second, on the security side, given the comments about AI becoming a tailwind there, would you think this year is likely a floor in terms of growth rate—namely, should we be thinking reacceleration as we get into 2027 and beyond?
Edward J. McGowan: We gave you guidance for the year. We are pleased with what we saw in the first quarter. We like what we see, especially around API security—still early days with low penetration—and Guardicore is growing very consistently. We will see how it goes and update you as we go.
Operator: This concludes our question and answer session. The conference has now concluded. Thank you for attending today's presentation. You may now disconnect.



