Image source: The Motley Fool.
Date
Monday, March 2, 2026 at 4:30 p.m. ET
Call participants
- Chief Executive Officer — Jayesh Chandan
- Chief Financial Officer — Bruce Gregory Bower
Takeaways
- Full-year revenue -- $101.4 million, up 35.7%, marking the company’s first year above $100 million and within prior guidance of $100 million-$110 million.
- IFRS operating loss -- Narrowed to $13.7 million from $66.9 million, a 79.6% reduction.
- IFRS net loss -- Improved to $11.3 million from $64.8 million, an 82.6% reduction.
- IFRS basic EPS -- Improved to $(0.51) from $(6.13), a 91.7% improvement.
- Adjusted EBITDA -- Approximately $19.1 million, with adjusted net income at about $19.9 million for the year.
- Adjusted basic EPS -- $0.89 for the year, with adjusted diluted EPS at $0.81.
- Q4 revenue -- Approximately $35.6 million, above consensus of $34.75 million, according to management.
- Q4 adjusted EPS -- Implied at roughly $0.37, versus consensus of $0.30.
- Full-year cash balance -- Ended at $104.8 million; as of Feb. 26, 2026, unrestricted cash increased to $108 million and total cash to $116 million, despite $3 million spent on 2026 buybacks.
- Year-end debt -- Total debt at $13.8 million, down 35.6% from $21.4 million, with released collateral of over $5.3 million.
- Share buybacks -- More than $11 million spent cumulatively to date.
- 2026 revenue guidance -- Maintained at $137 million-$200 million, dependent on project delivery schedules.
- Cash flow objective -- Targeting positive cash flow for 2026 as an explicit operating goal.
- Customer collections -- Over $22 million collected in the first two months of 2026 for 2025-delivered solutions; further collections of “millions plus or minus a few million” expected imminently.
- AI data center pipeline -- Currently evaluating over 600 megawatts of capacity, with project scope expanded substantially beyond the previously disclosed 12.5 megawatts.
- Workforce expansion plan -- Anticipated increase to 1,200-1,500 full-time employees and 700-800 contractors by mid-June 2026, scaling total personnel to 2,000-2,500.
- GPU deliveries -- Initial set of GPUs for the FRAIR contract scheduled for arrival within days; regular deliveries arranged with OEM partners covering multiple customer contracts.
- Gross margin trend -- CFO Bower said, “So I think the better way to think about it is 2024, we had an abnormally high service mix in the revenue mix, so the majority was service. And then it was a higher percentage of hardware in 2025—it was sort of 40%. So that is why gross margins were a little bit lower than you would expect.” GPU-as-a-service business targeted at 70%+ gross margin with “like a 25% operating margin” at scale.
- Regional projects -- Generating signed deals and active build-outs across Malaysia, Thailand, Indonesia, Singapore, Taiwan, India, and the Middle East (with a Saudi Arabia MOU in effect).
- AstroCos integration -- The acquired AstroCos real-time infrastructure engine is being integrated to strengthen the company’s ability to sell outcomes, not just technology, with measurable uptime, response times, and threat detection, and higher operational efficiency for the customer.
- Roadmap milestones -- “core quantum cryptography” product targeted for April 2026 release; full deployment of post-quantum SD-WAN by April, with proof of concepts underway for customers.
- Gorilla Technology Capital -- CEO Chandan said, “it is a game-changing catalyst for our next phase. It is designed to expand our ability to execute larger infrastructure programs by structuring capital efficiently, aligning long-duration funding with long-duration assets.”
- Customer pipeline -- CFO Bower referenced “a $7 billion revenue opportunity in the pipeline.”
Summary
Gorilla Technology Group (GRRR 4.67%) reported record annual revenue and significant improvements in operating and net losses, ending the fiscal year ended Dec. 31, 2025, with a stronger cash position and reduced debt. Management emphasized a focus on achieving positive cash flow in 2026 and reaffirmed a wide revenue guidance range, citing the timing of data center project deliveries as a key variable. The company highlighted a substantial expansion in its AI data center pipeline, disciplined project selection, and robust early 2026 customer collections, alongside plans for rapid workforce growth to support its contract backlog.
- CEO Chandan stated, “Our demand is in hundreds of megawatts. And Asia—not just Southeast Asia or East Asia or even South Asia—APAC as a whole does not have the capacity right now,” underscoring the scale of market opportunity and the company’s intent to build new data center capacity.
- CFO Bower said, “we are not prepared to issue gross margin or EBITDA guidance, but stay tuned in the coming months,” indicating ongoing uncertainty in forecasting profitability metrics.
- Management described a shift in AI market dynamics from training to inference and from centralized to distributed/edge compute, which is driving infrastructure buildout demand for the company’s solutions.
- No material impact was reported on Middle East operations due to recent geopolitical events; management maintains a “very, very, very disciplined risk posture.”
- Large contract opportunities, such as the Southeast Asia deal, are maturing into late-stage commercial structuring, and a $1.4 billion agreement has catalyzed broader sovereign-grade AI infrastructure interest in the region.
Industry glossary
- FRAIR: Contract or partnership (implied within context) involving phased GPU/data center project deployments in Southeast Asia; specifics on underlying acronym not explicit in transcript.
- GPU-as-a-service: Cloud or hosted infrastructure service offering high-performance graphics processing units (GPUs) on demand for AI workloads, typically under multi-year fixed contracts with usage-based or term-based pricing.
- SD-WAN: Software-defined wide area networking—technology enabling dynamic, optimized, and secure interconnection across enterprise networks, extended here with post-quantum cryptography for futureproof security.
- AstroCos: Acquired real-time infrastructure intelligence platform integrated into Gorilla’s smart infrastructure stack, providing telemetry, prediction, and operational oversight for mission-critical assets and facilities.
Full Conference Call Transcript
Jayesh Chandan: Thank you very much, Christian. Hello, everyone, and thanks for joining. I will keep it crisp. If you want drama, the market has already provided enough today. So I will stick to the facts. Now let me start with the headline. We reported a record full year revenue of $101,400,000, up 35.7% year over year. This is the first time we have crossed $100,000,000 annualized revenue. We guided the market to $100,000,000 to $110,000,000 and we delivered inside that range. That matters because credibility matters, and we intend to keep it that way. Now the more important part is how we got here. We executed a real turnaround. Our IFRS operating loss narrowed to about $13,700,000 from $66,900,000 last year.
That was a remarkable improvement of $53,200,000, or a 79.6% reduction in the IFRS operating loss. Now our IFRS net loss narrowed to $11,300,000 from $64,800,000 last year, an 82.6% improvement. And IFRS basic EPS improved to about negative 0.51 from negative 6.13, which is a 91.7% improvement. So, yes, it was a proper swing. It was not just a cosmetic one. We did all of this while keeping the underlying profitability at scale. Adjusted EBITDA came in around $19,100,000 and adjusted net income was about $19,900,000, with adjusted basic EPS being 0.89, and adjusted diluted EPS at 0.808. What I can tell you is that it is strong, and it is very disciplined.
Now I know what comes next because investors always ask it: how did we do versus expectations? For the fourth quarter, the market consensus was roughly around $34,750,000 of revenue and adjusted EPS of 0.30. Based on our full year results, our fourth quarter revenue was approximately $35,600,000, which is well above consensus. And based on the implied fourth quarter adjusted earnings, our adjusted EPS was roughly around 0.37, which is about a 22% beat versus the 0.30. For the full year, the market consensus was approximately $100,600,000 of revenue, with 0.80 for adjusted EPS. We delivered roughly around $101,400,000 of revenue and delivered 0.89 adjusted EPS, which is about a 6% beat versus consensus.
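For readers who want to check the beat math, the percentages can be reproduced directly from the rounded figures cited on the call. This is a quick illustrative sketch; small differences from the "about 22%" management cited reflect rounding in the underlying EPS figures.

```python
def percent_beat(actual, consensus):
    """Percentage by which a reported figure exceeds consensus."""
    return (actual / consensus - 1) * 100

# Q4 figures as stated on the call (rounded)
q4_revenue_beat = percent_beat(35.6, 34.75)  # roughly 2.4% above consensus revenue
q4_eps_beat = percent_beat(0.37, 0.30)       # roughly 23%, vs. the ~22% cited
```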
So the message from my side is simple. We delivered record revenue. We delivered a major IFRS turnaround. We delivered underlying profitability that exceeded expectations. Now let's just talk about the broader market because it has been volatile. The market conversation has shifted from “did you beat the quarter” to “will AI spending hold up?” And I am sure all of you have seen this in the last few days and weeks. That is a fair debate. But personally, it misses the bigger picture. AI is no longer a discretionary software trend. It is rapidly becoming a national capability and a core operating layer for enterprises and governments.
Now the next phase of AI demand cannot be defined by one buyer or one deal. It will be defined by many buyers across various sectors building permanent capacity. Governments, regulated enterprises, telecom operators, logistics networks, financial services platforms. The list is long, and the spend is becoming structural. The compute is also evolving at a rapid pace. This is what the market is really missing. Now AI compute is actually shifting from a training-led cycle to an inference-led cycle. This is important because this does not reduce the market. It broadens demand.
Inference pushes AI into everyday workflows and mission-critical operations, which increases the need for distributed compute across regional data centers and edge environments where latency, data residency, and resiliency requirements matter. Now this is where it becomes a major driver. As most of you know, we were one of the leading edge companies when we went public, and we continue to invest heavily. Edge compute expands what AI can do because it moves inference to the decision point, closer to the sensor, closer to customer interaction, closer to regulated data. It is a force multiplier for adoption in public safety, transportation, logistics, financial services, telecom networks, industrial operations, and whole sectors like smart cities.
Now let us talk about the scale of the infrastructure market in our region. We are not relying on slogans. We are tracking the data very, very closely. We have an internal team. We have a research team, which is doing that, and we use external data at the same time. Now we see Asia Pacific data center investment growing from roughly $30,000,000,000 in 2026, up roughly to about $90,000,000,000 by 2030–2031. We see installed capacity broadly doubling from about 29,000 megawatts today to about 63,000 megawatts by the end of the decade.
Now Southeast Asia also follows the same trajectory, going from the low teens of billions towards roughly $30,000,000,000 by 2030, as more capacity is being built in the market rather than exported offshore. India is another example. It is scaling very rapidly. From a little over 1 gigawatt of installed IT load today, they are moving towards about 1.8 gigawatts in 2027 and to multiple gigawatts by 2030. We are seeing the same trend in The Middle East. We are seeing the sovereign buildout dynamic, with market growth from low single-digit billions to high single-digit billions by the early 2030s, as governments and national champions scale local compute and secure infrastructure.
This is the structural build cycle we are positioning Gorilla Technology Group Inc. for. So what are we doing in 2026? We are advancing our AI infrastructure and data center build strategy across Malaysia, Thailand, Indonesia, Singapore, and other regions, including Taiwan and so on. We are expanding our evaluation work in India. We are progressing our strategy in The Middle East, which includes Saudi Arabia, where an MOU has already been signed, and actively exploring data center development opportunities in that region. We are also exploring opportunities to buy and/or build our own data center assets. Ownership changes the model.
It gives us more control over delivery and stronger long-term positioning, and the potential to build recurring infrastructure-led revenue streams rather than relying on project cycles. Now in parallel, we are also strengthening our product edge for this next phase of adoption. Our core quantum cryptography product is targeted to be ready in April 2026. And our lawful interception product suites remain in continued research and development as we expand sovereign-grade capability across security and intelligence as well as compliance-led deployments. Now for 2027, we are also putting a team together which will be investing very heavily into 6G local introduction as well.
Now we have currently got about 300 full-time employees today and a little over 200 contractors working on all the projects we have signed. But based on just the projects we have recently signed, we anticipate going to about 1,200 to 1,500 full-time employees by mid-June next year. And that would be an additional roughly around 700 to 800 contractors. So we will have roughly between 2,000 to 2,500 people for the company at any given point of time. Now investors want proof. They want execution, not a narrative. So I will speak directly about the things that matter: delivery, and collections—more importantly, cash conversion.
Our top customers are progressing very strongly and our customer satisfaction is reflected in our payment behavior. In the first two months of 2026, we have collected more than $22,000,000 from our largest customers for solutions delivered and invoiced in 2025. We also expect meaningful collections in the coming weeks. Now we finished the year 2025 with total cash of $104,800,000. But what was very important is that we did all this by reducing the total debt load to about $13,800,000, which is 35.6% lower from the $21,400,000 in the prior year.
Now through the refinancing of certain lending agreements and the repayment of others, we also reduced our debt, releasing more than $5,300,000 of deposits previously held as collateral against some of these loan obligations. Now this kind of balance sheet gives us very meaningful flexibility to execute existing programs, fund working capital through delivery cycles, and scale our infrastructure strategy with discipline. Now we have also spent at the same time more than $11,000,000 on buybacks to date, because we believe the market continues to undervalue Gorilla Technology Group Inc. relative to our performance and our strategy. Personally, I think you could call this confidence. I call it arithmetic. Why? Because that leads me to my next point.
We are aiming to be cash flow positive in 2026. That is not just a slogan for me. It is an operating objective that comes with very disciplined delivery, disciplined overhead control, and very disciplined cash collection. And finally, a lot of people have asked me this question over and over again: Gorilla Technology Capital. Personally, it is a game-changing catalyst for our next phase. It is designed to expand our ability to execute larger infrastructure programs by structuring capital efficiently, aligning long-duration funding with long-duration assets, as well as enabling our customers to move faster with clear financing partners. Some people said, hey, maybe they are buying a bank. No. We are not buying a bank.
And you have to understand what Gorilla Technology Capital does. It strengthens our ability to scale data center builds, accelerate GPU infrastructure deployment, and more importantly, participate materially in larger mandates with institutional-grade structures and governance. So I summarize 2025 in one line: we delivered a historic revenue milestone. We executed a major profitability turnaround. We strengthened the balance sheet and positioned Gorilla for the next phase of AI infrastructure, which is sovereign and regional—more importantly, distributed and becoming increasingly edge-enabled.
In 2026, we shift from proving we can build the work to scaling what we can deliver, converting execution into cash, expanding our data center footprint across India, Malaysia, Thailand, Singapore, Indonesia, The Middle East, and, more importantly, using Gorilla Technology Capital to unlock materially larger programs without compromising. All this while accelerating our product roadmap, which means we are investing heavily into R&D. Thank you for your time. I will hand over to Bruce, who knows the numbers well enough to recite them without blinking. Bruce, please take it away.
Bruce Gregory Bower: Thank you, Jay. I think you covered the main points in terms of the financials. I wanted to hit on a few things. So first of all, we mentioned that the cash balance at the end of the year was $104,800,000. I would just like to emphasize that, due to the collections so far this year, the cash balance actually increased. So as of February 26, it was $108,000,000 of unrestricted cash and $116,000,000 of total cash. That is in spite of spending $3,000,000 this calendar year—so in the last two months—on share buybacks. So we have been able to increase cash and also buy back shares this year. So it is a strong start to the year.
The other thing I would point out is when we talked about reducing the debt load and freeing up cash deposits, some people ask why we did not pay off all of the debt. Well, the debt that we have remaining, the $13,800,000, is at an average interest rate of 3%. So, to be blunt, it makes sense to keep it as capital instead of repaying it and borrowing at higher rates. The last thing I would talk about is we issued guidance last year of $137,000,000 to $200,000,000 as the revenue guidance range for this year. We are maintaining that.
At this point, we are not prepared to issue gross margin or EBITDA guidance, but stay tuned in the coming months. We announced the range—why is there such a wide range from $137,000,000 to $200,000,000? It depends on the delivery schedule of certain data center projects we are pursuing with FRAIR and also with others. I think we will have a very good update coming in the next month to month and a half about the timing of those projects, about delivery schedule from NVIDIA, and then also with the customers. And that should help to firm up the guidance and give you a better idea.
With that, I would just like to reinforce what we mentioned in the press release and what Jay said: we believe that the balance sheet has improved to the point where we are able to fund our growth initiatives and also to buy back shares when we feel they are undervalued, and that we can take on a lot of the growth projects that we have talked about—not just the increase in revenue this year, where the middle of the guidance range would represent almost a 70% increase, but also the contracts that we have in the pipeline.
So a $7,000,000,000 revenue opportunity in the pipeline, we believe that we can fund substantially through access to debt facilities, mostly through project finance, then through the cash that we have on the balance sheet at the moment. I will now turn the call back to Jayesh Chandan to open the Q&A.
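Bower's "almost a 70% increase" at the midpoint can be checked directly from the reported 2025 revenue and the maintained guidance range. A quick sketch:

```python
low, high = 137.0, 200.0    # maintained 2026 revenue guidance range, $M
fy2025_revenue = 101.4      # reported 2025 revenue, $M

midpoint = (low + high) / 2                          # 168.5
growth_pct = (midpoint / fy2025_revenue - 1) * 100   # ~66%, i.e. "almost a 70% increase"
```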
Jayesh Chandan: Thanks, Bruce. I would love to open up the questions to all standing by. Thank you.
Operator: Thank you. We will now begin the question and answer session. You will hear a tone acknowledging your request. If you are using a speakerphone, please pick up the handset before pressing any keys. Your first question comes from Brian David Kinstlinger with Alliance Global Partners. Please go ahead.
Brian David Kinstlinger: Yeah, close enough. Great. Thanks so much, guys. You have certainly come a very long way over the last two years. Congratulations on that. Has anything changed in terms of your best guess on timing for the first three phases of the FRAIR partnership? I think the plan was project financing to help you start in April for phase one, September for phase two, and December for phase three? And then the second part of that question, outside of financing these projects, are there any gating factors to starting these projects? And if so, what needs to happen in those time frames?
Jayesh Chandan: Brian, good to hear from you. Thank you for your kind comments. We are on track with where we are today. Obviously, considering the market forces today, we have had some slight delays in terms of the delivery. But that said, let me walk you through what has happened. Some of the programs have moved in terms of timing, and we talked about the FRAIR contract, for example. That is on schedule. We are currently in the final stages of getting our first set of GPUs coming through over the next few days. We will be deploying it as we speak. We have also accelerated the timings on some of the data center discussions.
When we spoke last, I think we were looking at about 12.5 megawatts of data centers, if you recollect. And we were looking at several high-density AI racks. What we did was, rather than commissioning them all on a single day, we are bringing them up in planned phases. So power, cooling, and network zones are all commissioned in stages, and revenue ramps as they are energized. And as the racks go live, we will drop in the clusters through our partner ecosystem, which also drives what I call the GPU-as-a-service usage line.
Now what is very exciting for us, and I can tell you today, is we have now realized that we would need to be deploying a lot of capital in the data center space ourselves because we have been inundated with a ton of requirements. So we are now currently looking at more than 600 megawatts of capacity, rather than the 12.5 megawatts alone. And that allows us to control our destiny over a period of time, which means we are looking at several hundreds of millions of dollars per year once all these racks and the GPUs are all in motion.
So from our perspective, Brian, the path to that particular point is a very controlled ramp-up, not a single bang. Now you talked about, are there any delays? There are no significant delays so far. Thailand MOE, for example, had been delayed because of the political transition—some sort of department reorganization, as you know. The new prime minister has been elected, so we are just waiting for the post-election leadership and sign-offs to settle as we speak. But otherwise, we are not facing any delays. We are going ahead with all of the approvals, all of the permitting, all of the site readiness, all of the customer prerequisites as we go into delivery.
So when the customer gates reopen now, I think we will start our billing on time. I hope that answers your question, Brian.
Brian David Kinstlinger: It does. Thank you. My second question is you have got this large pipeline of other data center opportunities you have discussed. Not to say that your business development has been slow—it has been very fast—but do you think those customers are waiting to see how execution is on the first FRAIR contract? Is that going to, in the near term, hold back agreements? Or do you think those will be able to move forward without delivery on those three projects?
Jayesh Chandan: Oh, absolutely not. Like I have mentioned, our pipeline is exploding. And we have not been slow in our sales, Brian. I can tell you the only thing we have been doing has been restricting. We have been inundated—and I am not exaggerating. Inundated is the right word for that. So first of all, the deals are mature. At the start of the year, in January this year, we were looking at POCs, MOUs, and so on. Very promising, but since then, we have moved into late-stage commercial structuring or into full-force contracting, which naturally increases the scale and the certainty of the pipeline.
If you recollect what I said toward December, we were making sure that we have certainty of the pipeline. Now, as I have mentioned, the $1,400,000,000 Southeast Asia contract—that was only a catalyst. Once the government and telcos basically saw what we are able to deliver, and we started signing up with the first 12.5 megawatts, suddenly out of nowhere it triggered some sort of sovereign-grade AI infrastructure requirement and a huge surge in interest for us. So I do not want to give you names, but what has happened is the demand behind that is significantly larger than FRAIR itself. So that is one of the primary reasons why our pipeline is now expanding at this pace.
Third most important part is things have changed from ambition—governments are no longer looking at it as an ambition. It has turned into urgency for us. Now I mentioned this last time as well. Not only are we looking at GPU capacity as strategic infrastructure, the shift has actually moved into edge compute. Distributed environments are taking shape right now. And like I said, the market has missed that already. People think, oh, is the spending going to continue? It is going to accelerate. It is not going to continue at the rate it is going. It is going to go exponential.
We are sitting with every single major customer on the planet, and I can tell you these platforms are just going to explode in terms of compute requirements and demand. Then finally, our execution has not just been on one data center, Brian. We have been doing data centers for a very long time. We built data centers on behalf of governments, for example, in Taiwan, in Thailand, in Egypt, and so on and so forth. So things like when we deploy large-scale lawful interception programs are more complex than putting up a data center. The governments and the organizations look at it and say, look, what has Gorilla delivered? They see that and the confidence grows.
So we are not relaxing or resting on our past laurels, but we are putting everything into motion. So like I said in my previous response, we are now targeting over 600 megawatts of power. So the opportunity here is very comfortably substantial, Brian. And it is only growing.
Brian David Kinstlinger: Great. My last question. You highlighted your recruitment needs. In my career, your type of business is always a great leading indicator. How would you characterize the recruiting market in the geographies you are hiring? Then outside of the execution staff, are there significant needs on the AI/HPC senior executive level that give you added strategy and expertise at the high level?
Jayesh Chandan: That is a really good question. So as you know, we are hiring at a rapid pace. What you do not see is, on our website, the names of the top people we have hired already. In Thailand, we are actually going strong with hires of about 80-plus people. In Taiwan, we have deployed a significant data center team and an R&D team for our cybersecurity products. We have done that through what is called a hub-and-spoke model. This is very important because our R&D platform and engineering need to accelerate both our products and our services capability.
So on the services side, as you know, Satish came in the middle of last year, and he has been driving all of the client impact and deepening our technical capability. On the R&D side, we have been hiring SD-WAN, post-quantum cryptography, lawful interception capability, video analytics. We have been growing that product. And all I could promise is by April, we would have a fully launched, world-class, fully ready post-quantum SD-WAN. And they are already working on proof of concepts with customers as well. But this is what matters, Brian: localization.
Every single region we are working in, whether it is India, the Middle East, North Africa, Southeast Asia, or East Asia, is asking how we are building stronger on-the-ground capacity. So what do we do? We build local teams. My team in Thailand, for example, because we are looking at some very large data centers there, will be about a thousand people by the end of this year. We will be between 200 and 300 people in India. And our Taiwan team will be north of 200 people. Now we are hiring senior executives as well at the same time.
As you have seen, Thomas has come in and joined as the CTO of infrastructure. Jackie has come in from the hardware side and become the GM for Asia. We are also hiring next-level capability under them as well. At the same time, we are also making sure the finance and compliance are tightened. So we are hiring to improve cash discipline, collections, control, audit, and so on and so forth. So think about it this way. The hub-and-spoke model is going to be centered across each of these regions. And as we expand and grow, we will be expanding our teams rapidly over the next few months. And the teams are all ready and running at a rapid pace.
Brian David Kinstlinger: Great. Thanks for all your answers.
Jayesh Chandan: Thanks, Brian.
Operator: Your next question comes from the line of Bharath Nagaraj with Cantor Fitzgerald. Please go ahead.
Bharath Nagaraj: Hi, thank you. Thanks for the presentation. Just a few questions from me. Just to start off with on the gross margin. Just wondering on the mix, which resulted in a, let us say, a slightly different gross margin than what I was expecting, but just wanted to understand what the mix of revenues is. And then the second question is around, given that you are going to deploy the latest compute for data centers in Southeast Asia, what kind of level of revenue are you modeling per megawatt there? What sort of use cases are you thinking about for that?
Jayesh Chandan: Bruce, do you want to take the first part of the question? I will take the second part. Sure.
Bruce Gregory Bower: Sure. So I think the better way to think about it is 2024, we had an abnormally high service mix in the revenue mix, so the majority was service. And then it was a higher percentage of hardware in 2025—it was sort of 40%. So that is why gross margins were a little bit lower than you would expect. The other thing is that we announced last year that we had signed two major law enforcement customers in Asia, and in at least one of those cases, the margin that we had predicted going into the project was a little bit lower than we normally accept.
That is because it was a key win for us as a client and as a solution to demonstrate our capabilities. So altogether, that is why the margin drifted a little bit lower. I would say that going forward—building on what Jay mentioned about the pipeline—we have the ability to be very choosy about the projects that we do. So because we have so much demand, if the margin terms, if the credit terms or the credit profile of the customer is not right, or if the payment terms are not there, then we just say, I am sorry. You either come in line or we will move on to the next project.
The other thing is that the data center, the GPU-as-a-service, has an extremely high gross margin. So it is 70% plus. Seventy percent is kind of the minimum cutoff. There is obviously a depreciation hit because an SPV would hold the equipment and then that would be consolidated onto our financial statements, and we will take the depreciation charge. I would say at scale that would be like a 25% operating margin. But that is at scale—I am not providing yet the forecast for margins for this year. We are going to wait until we get the exact details firmed up.
But that is how I would see 2025—it is kind of a dip in terms of gross margins, and I would see them improving over time and, you know, a much stronger margin profile for all the new business coming in.
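Bower's target economics—70%+ gross margin, with roughly a 25% operating margin at scale once the SPV-held equipment is consolidated and depreciated—can be illustrated with a stylized per-$100-of-revenue P&L. Every line item below is an assumption chosen only to match those stated targets; none are disclosed figures.

```python
# Stylized GPU-as-a-service P&L per $100 of revenue (all inputs are assumptions).
revenue = 100.0
direct_costs = 30.0      # power, connectivity, managed-ops labor (assumed)
gross_profit = revenue - direct_costs

depreciation = 25.0      # SPV-held GPUs, consolidated and depreciated (assumed)
other_opex = 20.0        # sales, G&A (assumed)
operating_profit = gross_profit - depreciation - other_opex

gross_margin = gross_profit / revenue          # 0.70 -> the 70%+ target
operating_margin = operating_profit / revenue  # 0.25 -> the ~25% target at scale
```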
Jayesh Chandan: And just to add to that as well, Bharat, more importantly, we are investing very, very heavily into building the business for sustainable long-term growth and gross margins. Right? Now that brings me to the second part of your question. Now in terms of pricing today in Asia, it is structured either in what we call capacity per server per month or in terms of usage per kilowatt hour depending on the customer and the program. Typically for sovereign enterprise deployments, we are targeting contracted multiyear take-or-pay kind of a style where the pricing and sustainable margins and cash conversion are predefined. So we know exactly what we are getting ourselves into. Now we avoid quoting a single rate.
I mean, personally, I do not want a single rate because it varies by GPU class, as you know, term length, utilization profile, power, cooling specs, location, land values, service level stack, and so on and so forth. But the proof point for us comes only when we sign these programs. The unit economics are very disciplined, and our collections and our milestone payments protect our cash. So there is no single kind of an Asia price. And we are not just looking at Southeast Asia, by the way.
There is no “Middle East or Asia price.” But that said, I can tell you that, typically, CSP-class GPU rack capacity can run from high four figures to low five figures per GPU per month when bundled with power, space, connectivity, and managed services. But also remember, these are long-dated, fixed-milestone agreements. So we often layer in what we call a service-level fee, compliance components, and so on and so forth. Now each of these can change. For example, in the U.S., spot rents for top-tier GPUs can typically be 2x to 3x what you see on structured regional capacity in Asia.
But what we are doing is not putting out a standard rate, because our compute requirements are more stringent here and our contracted deals are much longer, so we are able to create highly competitive pricing, even compared with the United States. So think about it this way: a compliance and service premium adds about 20% to 40%, where we include governance, telemetry, managed ops, and so on and so forth. But the energy cost differentials mean that the Asia deals are often much more profitable.
So, if I may say this, comparing Asia and the U.S. is like thinking a hotel in Vegas might be cheaper, but penthouses in Bangkok are much more expensive than some of them even in Manhattan. So yeah. On the question, Bharat.
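The pricing structure Jayesh outlines can be illustrated with a small sketch. The base rate and premium figure below are hypothetical examples, not quoted rates: the call gives only ranges ("high four figures to low five figures per GPU per month" bundled, a 20%-40% compliance/service premium, and U.S. spot at 2x-3x structured Asia capacity).

```python
# Illustrative only: assumed numbers within the ranges cited on the call.

def monthly_rate(base_per_gpu_month, premium=0.30):
    """All-in monthly rate per GPU after a compliance/service premium."""
    # The call cites a 20%-40% premium for governance, telemetry, managed ops.
    assert 0.20 <= premium <= 0.40, "premium outside the range cited on the call"
    return base_per_gpu_month * (1.0 + premium)

base = 9_000  # assumed base bundled rate, USD per GPU per month (hypothetical)
all_in = monthly_rate(base, premium=0.25)
print(f"all-in: ${all_in:,.0f}/GPU/month")

# U.S. spot for top-tier GPUs is described as 2x-3x structured Asia rates:
us_spot_range = (2 * base, 3 * base)
print(f"implied U.S. spot range: ${us_spot_range[0]:,} - ${us_spot_range[1]:,}")
```

With an assumed $9,000 base and a 25% premium, the all-in rate lands at $11,250 per GPU per month, in the "low five figures" band management describes.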
Bharath Nagaraj: Yeah. Absolutely. Thank you. May I just sneak in a couple more quick ones? On the AstroCos investment that you made—in terms of your strategy, are you planning to have an explicit pricing and margin contribution from the new contracts that you sign for it? Or is it currently being bundled to strengthen your competitive advantage and increase long-term customer lifetime value?
Jayesh Chandan: That is a great question. Let me give you an update on why we invested and why we are integrating—I think that is your question—and what we are going to do with it. First of all, what is AstroCos? AstroCos is a real-time infrastructure intelligence engine that does monitoring, prediction, and optimization of critical systems.
Now it is already a deployed system in very serious environments, including some high state-level smart city platforms—for example, the new Indian Parliament Complex, which we had talked about previously—and major initiatives in The Middle East as well. That matters because AstroCos is not a demo. It is a fully deployed solution. Now, the second part of your question: what are we doing with it? We are integrating AstroCos into three parts of our stack. First, and most important, smart city and master infrastructure operations. It gives us a telemetry and prediction layer that makes national infrastructure measurable and, at the same time, optimizable in real time. Now what does that mean?
It strengthens our ability to sell outcomes, not just technology, with real uptime and response times, threat detection, and, finally, high operational efficiency for the customer. The second is video intelligence and security. Now AstroCos typically provides real-time monitoring and positioning around critical infrastructure, security, and operational workflows. It complements our video intelligence stack, and it allows us to improve how we operationalize data across our SOC and NOC environments. And finally—this is very, very important; this is a big one—GPU-heavy data centers and environments are not like a standard data center. You cannot run very heavy GPU environments without continuous telemetry, predictive optimization, integrated security, and operational automation.
That is the requirement AstroCos actually plugs into. And then there is the springboard aspect. For me, it is a springboard in India, and it is very immediate, because it brings deep presence in the region, shortens our sales cycle, and improves our delivery readiness. In the UAE, we are already working on building our Middle East footprint. In the USA, it is a standard partnership-driven market, so we are progressing market-level entry work in that region as well. So think about it this way: we are a significant minority investor.
We have an option to materially increase our ownership, which also gives us a lot of flexibility to integrate and build on the traction at a very large commercial scale.
Bharath Nagaraj: Yeah. Absolutely. Thank you. That is very helpful indeed. Thank you very much.
Jayesh Chandan: Thank you, Bharat.
Operator: Your next question comes from the line of Michael James Latimore with Northland Capital Markets. Please go ahead.
Michael James Latimore: Hi. Yes. Thanks very much. Yes, congrats on a great year. Excellent results there. I guess just a couple of things. You talked about maybe some more collections coming in here this quarter. Can you frame that a little bit more? Are we talking a few million dollars, or over ten? Or maybe you cannot say, but I am just kind of curious—
Jayesh Chandan: Bruce, do you want to take that?
Bruce Gregory Bower: I would say it is millions, plus or minus a few million—$2 million or $3 million on either side.
Michael James Latimore: Okay. And that relates to the 2025 effort?
Bruce Gregory Bower: It is solutions that were delivered and invoiced in 2025. Yes.
Michael James Latimore: Okay. Great. And then just to keep it simple for me, the large Southeast Asian deal. So it sounds like pretty much no change there in terms of total value or value by each of the first three data centers. Is that right?
Jayesh Chandan: That is correct. But that has become a catalyst, like I have mentioned previously.
Michael James Latimore: Okay. Great. And then, Jay, you talked a little bit about maybe seeing your first group of GPUs in the next few days. I guess just a little bit more on that. Does that specifically relate to the Southeast Asia deal? And also, did you say that you expect to get some of these GPUs every week and that it builds over time? Or maybe just a little more clarity on that pattern.
Jayesh Chandan: Sure. So I think we are creating a flywheel effect, if I may, Mike. What we are doing is making sure that we have deliveries coming in every week. So the latest agreements we have with our OEMs are that, starting next week, we have a few deliveries coming in. But again, I have mentioned this previously as well. We have actually won others as well, so we are delivering against those contracts, too. So you will see a regular flow there. That is why we have hired a very solid procurement team as well, which will make sure that these deliveries are on time. So for us, these data centers are driving GPU demand.
For us, that GPU demand unlocks much deeper national engagements. So do not look at the FRAIR contract as a one-off. This is actually, like I said, a catalyst to some very large contracts we have already signed. We have mostly agreed, by the way, with all of the OEMs—the local OEMs in the region. We have signed all of the MOUs that are required. We have signed all the LOIs and the pricing agreements. The BOMs have been done. The SOWs have been completed. And as you know, we are now just working on the delivery schedules and the mechanisms over the next few weeks.
Michael James Latimore: Got it. So these GPUs will go to more than the Southeast Asia customer? Is that what it sounds like?
Jayesh Chandan: Yes. Give us a bit more time on that, and I will give you a very concrete schedule as well.
Michael James Latimore: Okay. Great. And then I guess in terms of the Southeast Asia deal, the first data center—you are still thinking it gets up and running in the second quarter?
Jayesh Chandan: We are trying to push it for the first quarter, depending on the delivery schedules. But I am 100% confident—101% confident—that it will be live in the second quarter. We just completed the agreement on the BOM. We have sent the BOMs to our OEM partners. Obviously, as you can imagine, it is not just the GPUs coming in. You have got a whole bunch of networking equipment which has to come along with that. And as we scale up with the customer and the demand accelerates, we will have to then build on top of it. Now one of the things, Mike, I think your question leads to another important aspect.
We have been struggling to get all of the compute demand from our end customers satisfied in the region. As you can imagine, the U.S. is investing hundreds of billions of dollars. We do not see that kind of investment within this region. Yes, we have seen KKR acquire STT for $10 billion recently. But to deploy data centers at scale, we need a lot more compute. So we have decided internally that we are going to build our own using modular technology. So we are currently targeting about 600-plus megawatts. And, hopefully—fingers crossed—we should be able to complete all of those signings by the end of this year as well.
We would be going into full-scale production in the latter part of this year as well. So we are super excited, and we think we are actually creating a new market which does not exist currently.
Michael James Latimore: Maybe the strategy to buy and build some of your data centers changes this question. But I think in your business update call in January, you mentioned that you are trying to lease out any available capacity you can in co-locations across the regions. I guess, any update on any new leases that you have executed on?
Jayesh Chandan: Yes. Yes. Yes. We have already signed many deals in the region. It is absolutely fascinating. The problem is, like I said, the capacity does not exist. Whether it is 9 megawatts, 4.5 megawatts, 9.9 megawatts, 21 or 25—that is kind of the available capacity today. So you are absolutely right. What are we going to do? We are simply going to turn around and build new capacity and deliver infrastructure ourselves to the end customer. Mike, maybe I have not made this clear previously. Our demand is in hundreds of megawatts. And Asia—not just Southeast Asia or East Asia or even South Asia—APAC as a whole does not have the capacity right now.
India, for example, has only 1 gigawatt of fully utilized capacity. And as you have seen recently, they had the AI summit, and India is absolutely going bonkers in terms of deploying at scale. But there are other structural issues. We need power. We need water, and so on and so forth. So we are working—and just FYI, we are working very closely with the Indian government—to make sure that we get our infrastructure ready across various requirements, various architectures, and edge deployments in the country as well. So long story short, keep your eyes and your feelers out. We are definitely headed in the right direction over the next few days.
Bruce Gregory Bower: One thing I would add to that—thank you, Mike—is that when we are looking at reserving or lining up capacity, or building it ourselves, this is different. We are not in the business of building at scale and hoping the customers come. The business here is purpose-built, AI-focused data centers or GPU-as-a-service for those clients. So what that means is, first of all, we are not going to invest capital until we see clear customer demand. The second thing is that we demand customer prepayments, so that, you know, money talks and hope does not.
And then, in most cases, the customer prepayments are an integral part of our financing strategy, so that, in between project finance and customer prepayments, we can secure 90% plus of project CapEx cost. So what we found is that when customers show commitments upfront, it obviously makes us more comfortable to move ahead. Also, it makes it more likely that the economics work in our favor.
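The financing arithmetic Bruce describes—project finance plus customer prepayments covering "90% plus" of project CapEx—can be sketched with assumed figures. The $100 million build and the prepayment/debt split below are hypothetical, chosen only to show how the coverage target determines the residual equity check.

```python
# Illustrative sketch, hypothetical numbers (not disclosed project figures):
# prepayments + project finance are targeted to cover 90%+ of CapEx,
# leaving only a small equity slice to be funded by the company.

def equity_needed(capex, prepayments, project_finance):
    """Return (residual equity required, coverage ratio) for a project."""
    covered = prepayments + project_finance
    coverage = covered / capex
    return max(capex - covered, 0.0), coverage

# Hypothetical $100M data-center build (figures in $M):
equity, coverage = equity_needed(capex=100.0, prepayments=30.0,
                                 project_finance=62.0)
print(f"coverage {coverage:.0%}, equity needed ${equity:.0f}M")
# coverage 92%, equity needed $8M -> meets the "90% plus" target
```

The design point here is risk sequencing: because prepayments arrive before capital is committed, the equity exposure is bounded to the uncovered slice rather than the full build cost.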
Operator: Your next question comes from the line of John Marc Roy with Water Tower Research. Please go ahead.
John Marc Roy: Thank you. Obviously, some things changed over the weekend. I was wondering if you could give us any kind of update on operations or outlook for The Middle East given the Iran-U.S. situation?
Jayesh Chandan: Mr. Roy, thank you for the question. First of all, to everybody who is listening and everybody out there, I am genuinely sorry to see what is happening. My heart really goes out to all the families caught up in this, and to everyone who has lost their loved ones, and I am feeling very, very sorry. I have got friends across both sides of the park. Now from a business perspective, John, we are monitoring the situation very closely. And as you know, we have a very, very, very disciplined risk posture. At this point, we are not seeing any material impact on any of our operations. Egypt is progressing at full flow. Our delivery continues against plan.
And across the region, we are continuing with appropriate caution, strong compliance, and very clear operational controls. Now what we are watching for are very practical factors that matter: logistics routes being one, supplier lead times, local security conditions, FX exposure, collection cycles, and any regulatory changes that could affect movement of goods or personnel. If anything changes, the impact would likely show up in timing rather than demand. In that case, we will respond very quickly, protect our quality of delivery, and update the market when there is something definitive to report. But the trends, John, are in our favor, and they favor us very strongly, and they are accelerating, not slowing.
John Marc Roy: Yes. It does. Actually, speaking of trends, and you obviously were talking about AI in India. Can you give us—maybe take a step back and look at the macro AI environment—and what do you see happening out there in general?
Jayesh Chandan: Actually a good question. I think a lot of people keep asking, and I have been speaking about this at various events as well. I would divide this into what I call three different trends, John. The first one, in order: AI is currently becoming mainstream, regulated infrastructure. If you look at governments, telecom operators, and regulated enterprises, they are all treating AI compute as strategic capacity tied to their sovereignty, data residency, compliance, and critical services. Now that shifts demand from optional pilots to what I call targeted programs with long duration and intent. So think about Asia. They are rapidly drawing up their plans now and thinking, we do not want to fall behind.
And so now they are coming up with large budgets but, more importantly, they have long-duration intent, as I have mentioned. Now the second side to that is the center of gravity—and this is very, very important. Again, I do not know why I am stressing this, but I will stress this again. The market is getting this wrong completely. People are talking about, oh, is the market going to sustain the investment into AI? Companies are investing hundreds of billions of dollars—the U.S. and China. The center of gravity is moving from training to inference, and from inference to distributed inference. Training is very lumpy. Inference is very persistent.
For most people on this call, I am happy for you to take this message away: training is very lumpy, while inference is persistent. That means as inference moves into everyday workflows, your compute demand spreads across regional hubs and closer to the data source, which drives more build-out of regional data centers. It is not going to slow down. It is only going to go up exponentially. And that brings me to my third trend, which is edge. Edge is expanding the addressable market dramatically. Edge brings AI to the decision point, where latency, privacy, and resiliency all matter. So what happens now?
It accelerates adoption across public safety, as I mentioned previously, transportation, telecom networks, logistics, industrial operations, and so on and so forth. Edge does not replace data centers. It multiplies them—once again, it multiplies them—by creating more endpoints that need regional capacity and orchestration. Think about it this way. In the future, you are going to find a lot more of what I call distributed inference points, which will create a huge requirement for regional capacity. And that is why you can see the likes of OpenAI, or Meta, or Google, or anybody else in the market—they are moving toward a distributed environment. And those trends favor us very, very strongly, and they are only accelerating, John.
They are not slowing down at all.
John Marc Roy: Excellent. Thank you so much for the color.
Jayesh Chandan: Sure.
Operator: Your next question comes from the line of Barrett Boone with RedChip. Please go ahead.
Barrett Boone: Jayesh, Bruce, congratulations on the transformative 2025. I just had one question regarding QuantumSafe Networks and your SD-WAN product. Can you share some concrete milestones that investors can look for?
Jayesh Chandan: Sure. As I have mentioned previously, we have actually created a very strong product, and we have already tested it very effectively in the last few months. Raja’s team is very confident that they will be able to launch it by April 2026. Now, just for context—when we deploy AI infrastructure, we are not just dropping GPUs in a room. We are talking about secure connectivity, telemetry, orchestration, and compliance layers. These are very, very important. People need to understand: we are not selling hardware or renting hardware. We are providing a service. I mean, SD-WAN plus quantum-safe encryption allows us to control the network from the edge to the core very securely.
That increases the solution value for us and improves the margin mix. That is number one. Second, our quantum solutions—and people say, oh, they are just going after it because it has the word quantum in it. No, we are not. People think that we are idiots, but these solutions make edge AI viable. Why? Because edge compute only works at scale if connectivity is intelligent and, more importantly, secure. So what does our post-quantum SD-WAN do? It gives us traffic optimization. It allows segmentation and performance control. And post-quantum crypto future-proofs the transport layer; once you build the transport layer, the cryptography helps future-proof it.
And together with the distributed AI architecture I just described, it makes these architectures deployable in both national and enterprise environments. Now, what does that make us? I think that is probably where you were headed with your question. It does not position us as a compute-for-rent kind of provider. It positions us as a trusted operator. That means we can design sovereign-grade, quantum-safe, policy-compliant AI networks, and, more importantly, we can help these GPUs generate additional revenue, secure the networks that protect them, and make sure that none of it falls apart when things get more complicated.
When the world gets more complicated, like it is today, we make sure that our SD-WAN and our quantum stack do not fall apart.
Barrett Boone: Thank you. That is very helpful. And congratulations again.
Jayesh Chandan: Thank you, Barrett.
Operator: This concludes the question and answer session. I would like to turn the conference back over to management for any closing remarks.
Jayesh Chandan: Thank you very much, Christa. That was really helpful. Some very insightful questions. Some caught me off guard as well, which is interesting. But to all our investors, our analysts, and every person who is supporting Gorilla: first of all, thank you. You have trusted me and the entire Gorilla team long enough to let results replace speculation. There are people out there who say our contracts are garbage, our numbers are garbage. That is okay. It is speculation. We are building the AI infrastructure that governments and critical industries will rely on, and we intend to execute with discipline. Everybody who knows me knows me as someone who will execute with discipline.
So I will thank every single one of you and I will stop here and hand over before my tea gets cold. It is 5:25 AM, and that would be a genuine crisis for me. Thank you, everybody. Have a lovely day ahead.
Operator: Ladies and gentlemen, this does conclude today's conference call. Thank you for your participation, and you may now disconnect.