Logo of jester cap with thought bubble.

Image source: The Motley Fool.

Date

Feb. 26, 2026 at 5 p.m. ET

Call participants

  • Chairman and Chief Executive Officer — Jack Abuhoff
  • Interim Chief Financial Officer — Marissa Espineli
  • General Counsel — Amy Agress
  • Senior Vice President, Finance and Corporate Development — Aneesh Pendharkar

Need a quote from a Motley Fool analyst? Email [email protected]

Takeaways

  • Revenue -- $72.4 million for the quarter, a 22% increase year over year.
  • Full-year revenue -- $251.7 million, representing a 48% year-over-year growth rate.
  • Adjusted gross margin -- 42% for the quarter, above the externally communicated 40% target.
  • Adjusted EBITDA -- $15.7 million, or 22% of revenue; exceeded analyst consensus by $1.2 million.
  • Cash -- $82.2 million at quarter end, up by approximately $8.4 million sequentially and $35.3 million year over year.
  • Debt utilization -- No drawdown on the $30 million Wells Fargo credit facility.
  • Innovation and investments -- Growth-driven investments in COGS and SG&A, specifically in capacity, engineers, data scientists, and customer-facing leadership.
  • Customer mix -- Management expects spend from the largest customer to increase, with aggregate growth for the remaining customer base anticipated to occur at a faster rate and to include the MAG-seven, domestic AI innovation labs, sovereign AI initiatives, and leading enterprises.
  • Customer diversification -- Revenue growth is expected to become less concentrated, driven by an expanding and increasingly diverse set of large customers.
  • Revenue guidance -- Forecast of at least 35% year-over-year growth for 2026 based on visible, active programs and recently awarded wins; management notes potential significant upside due to the pace of LLM and AI-driven initiatives.
  • Workflow transition -- In the first quarter, approximately $20 million in post-training workflow revenue run-rate for the largest customer was deprecated and replaced with new post-training and scaled pre-training programs, resulting in a positive net revenue run-rate impact.
  • Adjusted gross margin guidance -- Management expects early 2026 adjusted gross margins in the 35%-40% range with normalization toward the 40% target as new programs ramp and workflow innovations scale.
  • Technological advancements -- Introduced and expanded proprietary systems for agent evaluation, agent optimization pipelines, adversarial simulation, and large-scale data engineering for physical AI, including applications to egocentric and affordance datasets.
  • Benchmark performance -- Developed an AI model for drone and small-object detection achieving a 6.45% improvement over prior state-of-the-art benchmarks, emphasizing commercial and dual-use applications.
  • Interest from hyperscalers and cybersecurity -- Managed services and adversarial training initiatives have attracted new engagements and interest among hyperscalers, cybersecurity companies, and relevant government experts.

Summary

Management revealed new innovation initiatives across generative AI, agentic AI, and physical AI, highlighting data-driven methods as core to product evolution. Proprietary platforms for agent evaluation and adversarial simulation are facilitating new customer traction, especially among hyperscalers and security-focused clients. With continuous reinvestment in people and technology, leadership at Innodata (INOD 4.84%) projects both margin improvement and recurring revenue expansion linked to hybrid software-human offerings, while underlining confidence in early-stage engagement conversion and broadening enterprise relevance.

  • Company management stated, "we believe we are entering a golden age of innovation at Innodata Inc. as a result of investments we have made and intend to make in the future."
  • Leadership emphasized that future gross margin expansion is expected, driven by automation, synthetic systems, and evaluation platforms that structurally increase our operating leverage.
  • Management clarified growth guidance is intentionally conservative, with upside possible as LLM initiatives spin up quickly.
  • In discussion of customer diversification, management shared that new wins and accelerated demand are enabling Innodata to migrate from a vendor to a foundational layer within AI ecosystems.

Industry glossary

  • LLM: Large Language Model; an AI model trained on large datasets to understand and generate natural language text.
  • MAG-seven: Management’s reference to the seven largest U.S. technology companies, typically Microsoft, Apple, Google (Alphabet), Amazon, Meta, Nvidia, and Tesla.
  • Egocentric data: Data captured from the first-person perspective of a robot or sensor-equipped device, reflecting direct environmental experience.
  • Affordance data: Structured data teaching AI systems about possible actions or interactions with physical objects in context.
  • Adversarial simulation: Systematically generated, complex data used to test AI robustness against sophisticated attacks or real-world threats.

Full Conference Call Transcript

Operator: Good afternoon, ladies and gentlemen, and welcome to the Innodata Inc. Fourth Quarter and Fiscal Year 2025 Results Conference Call. At this time, all lines are in listen-only mode. Following the presentation, we will conduct a question-and-answer session. If at any time during this call you require immediate assistance, please press 0 for the operator. This call is being recorded on Thursday, 02/26/2026. I will now turn the conference over to Amy Agress, General Counsel. Please go ahead.

Amy Agress: Thank you, operator. Good afternoon, everyone. Thank you for joining us today. Our speakers today are Jack Abuhoff, Chairman and CEO of Innodata Inc., and Marissa Espineli, Interim CFO. Also on the call today is Aneesh Pendharkar, Senior Vice President, Finance and Corporate Development. Rahul Singhal, President and Chief Revenue Officer, is unable to be here today but looks forward to joining us on our next call. We will hear from Jack first, who will provide perspective about the business, and then Marissa will provide a review of our results for the fourth quarter and fiscal year 2025. We will then take questions from analysts.

Before we get started, I would like to remind everyone that during this call, we will be making forward-looking statements which are predictions, projections, and other statements about future events. These statements are based on current expectations, assumptions, and estimates and are subject to risks and uncertainties. Actual results could differ materially from those contemplated by these forward-looking statements. Factors that could cause these results to differ materially are set forth in today's earnings press release, in the Risk Factors section of our Forms 10-K, Forms 10-Q, and other reports and filings with the Securities and Exchange Commission. We undertake no obligation to update forward-looking information. In addition, during this call, we may discuss certain non-GAAP financial measures.

In our earnings release filed with the SEC today, as well as in our other SEC filings, which are posted on our website, you will find additional disclosures regarding these non-GAAP financial measures, including reconciliations of these measures with comparable GAAP measures. Thank you. I will now turn the call over to Jack.

Jack Abuhoff: Thank you, Amy, and good afternoon, everyone. Q4 was another strong quarter for Innodata Inc. We generated $72,400,000 in revenue, reflecting 22% year-over-year growth. This brought our full-year revenue to $251,700,000, representing 48% year-over-year growth for 2025. Our Q4 consolidated adjusted gross margin was 42%, exceeding our externally communicated target of 40%. Our adjusted EBITDA totaled $15,700,000, or 22% of revenue, also exceeding analyst consensus by $1,200,000. In fact, our results exceeded analyst consensus across the range of key metrics, including revenue, adjusted EBITDA, net income, and EPS. We ended the year with $82,200,000 in cash, up sequentially by approximately $8,400,000. We achieved these results while making meaningful growth-oriented investments in both COGS and SG&A.

In COGS, we carried capacity ahead of revenue ramp, which consistently proved to be the right move. And in SG&A, we invested in engineers, data scientists, and customer-facing account leadership, which investments also proved prudent. Building innovation that has expanded our opportunities. We believe our business momentum to be at an all-time high. We are seeing robust demand across the entire AI life cycle, spanning development, evaluation, and ongoing model optimization. And we believe we are gaining traction with a broad and diversified number of large customers. As a result of market demand and growing traction, we anticipate another year of potentially extraordinary growth in 2026. We currently estimate our 2026 year-over-year growth to potentially be approximately 35% or more.

This estimate reflects active programs, recently awarded wins, late-stage evaluations, and opportunities where we have clear line of sight. Because we are early in the year and because LLM initiatives spin up quickly, we believe there may potentially be significant upside to this range. However, we prefer to guide conservatively and adjust upward as visibility increases. At the same time, given the scale and complexity of the programs we support, timing variability and customer R&D schedules, budget approvals, or shifts in research priorities could influence the pace at which revenue materializes.

Embedded in our outlook is the expectation that spend from our largest customer will increase somewhat in the year, and that the remaining customer base in the aggregate will grow at a faster rate. We expect this other customer growth to come from a mix of the MAG-seven, domestic AI innovation labs, sovereign AI initiatives, and leading enterprises. We believe this will meaningfully contribute to customer diversification. Our customers are moving fast, driving shorter development cycles and responding faster to research breakthroughs. In 2025, we succeeded in this environment in no small part because we followed the research, anticipated customer needs, and pivoted where required.

To illustrate, in the first quarter of this year for our largest customer, we deprecated a meaningful number of post-training workflows that represented in the aggregate approximately $20,000,000 of annualized revenue run-rate but replaced them with a combination of new post-training workflows and scaled pre-training programs, an area of recent focus and investment. From a revenue run-rate perspective, the net effects turned out positive. Indeed, we believe continuous innovation is critical to achieving our ambitious plan for 2026 and beyond. The truly exciting news is we believe we are entering a golden age of innovation at Innodata Inc. as a result of investments we have made and intend to make in the future.

I am now going to share some of our recent innovation initiatives. For competitive reasons, we will be appropriately circumspect, but what we share will give you a meaningful window into how we are thinking, where we are investing, successes we are having, and how we intend to capitalize on the opportunity ahead. I will briefly walk through our recent innovation in three areas: generative AI model training, agentic AI, and physical AI. Before I do, I want to underscore a unifying theme. Every innovation I am about to discuss is fundamentally a data innovation.

Whether the goal is more capable LLMs, more reliable autonomous agents, or more intelligent physical AI systems, data quality, data composition, data validation, and data engineering at scale are at the heart of the matter. These are our core competencies. We will start with generative AI training. Historically, customers told us the kind of training data they wanted. Increasingly, however, they are asking us to diagnose model performance, design the right training datasets, and demonstrate that those datasets will materially improve outcomes. Here is how that works. We begin by identifying performance gaps using our evaluation frameworks. We then engineer targeted datasets and validate their impact by fine-tuning either the customer's model or a structurally similar proxy model.

Only after we measure and demonstrate performance impact do we scale. This shifts the discussion from “how much is the data” to “how effective is the data.” We believe this shift is being driven by two forces: the accelerating pace of AI research and the cost and time incurred to train ever larger models. And conversations about data efficacy play directly to our strengths. We are also advancing methods for creating datasets that improve long-context reasoning—an AI model's ability to observe and reason over very large amounts of information at once. This remains one of the industry's most important technical challenges.

Solving it requires not just architectural improvements, but advances in the creation at scale of very specific types of structured training data. Creating training data that improves long-context reasoning is a nontrivial problem, but we have made and are continuing to make meaningful progress on it. A second area of innovation is around evaluating systems of autonomous agents and improving them through targeted dataset creation. We believe that autonomous agents may represent the most significant business innovation opportunity since the advent of electricity. But companies quickly discover that many AI agents that performed impressively in controlled laboratory settings degrade in real-world production. The real world is chaotic.

It is shaped by edge cases, conflicting constraints, unpredictable user behavior, and adversarial conditions. Addressing this is fundamentally a data challenge. Agents must be continuously trained and rigorously stress-tested with datasets that are realistic, diverse, and complex. For this, we have developed a set of three highly complementary hybrid solutions. The first is an agent evaluation and observability platform. Data scientists can use our platform during development to visualize and annotate agent trace data, to build LLM-as-a-judge evaluators, to create business-aligned evaluation rubrics, to generate golden datasets for regression testing, and to generate test data at scale.

Then, once the agent is deployed, our platform can be used to continuously monitor its performance, perform root cause analysis of performance issues, and obtain mitigation datasets. We are pleased to share that we anticipate soon kicking off a managed services engagement with a hyperscaler in which we will use our platform to create test data at scale, perform automated evaluations, and identify critical model vulnerabilities in order to improve performance of its customer-facing intelligent virtual assistant. The second innovation is a managed agent optimization pipeline designed to systematically train for, and therefore neutralize, the chaos of real-world deployment at scale. The pipeline generates realistic test scenarios, automates evaluation, rigorously measures constraint satisfaction, and produces reinforcement learning datasets.

Using this system, we have demonstrated improvements of up to 25 points in constraint satisfaction—importantly, 31 points. We currently have multiple AI innovation labs and enterprise customers actively exploring the system. The third solution we have designed to support enterprise agent AI is an adversarial simulation system that generates high-quality, semantically diverse, and scalable adversarial attacks to stress-test agents. The system generates a full spectrum of attack types: direct jailbreaks, indirect prompt injection via RAG pipelines, multi-turn social engineering, steganographic payloads, and compound attacks that combine injection techniques with domain-specific knowledge. Once vulnerabilities are identified, it generates highly targeted mitigation datasets to strengthen guardrails.

We believe our system generates real adversarial attacks at scale and in a meaningful way that exceeds existing alternatives. Many tools on the market produce simplistic or templated hostile content that lacks the nuance and sophistication of real-world threat actors, fails to scale across diverse scenarios, or relies on generic tactics that models quickly learn to anticipate and overfit to. By contrast, our framework is designed to simulate adaptive, multi-step, and strategically coherent attack patterns, including highly sophisticated model extraction, cybersecurity, cybercrime, and cybernet threats scenarios, that better reflect how advanced adversaries operate and allow our partners to stay ahead of emerging threats.

The result is adversarial training data that is both scalable and durable, forcing models to generalize rather than memorize and enabling more robust real-world resilience. Our work is garnering interest from CISOs and security leaders at some of the world's premier AI and cybersecurity companies, as well as relevant experts in government, and has led to early-stage engagements with several of them. At a time when the cyber industry is experiencing significant disruption, these capabilities bolster our position in the emerging field of AI trust and safety, an area where we are meaningfully deepening work with several hyperscalers.

We believe Innodata Inc. is well positioned to emerge as a leader in prompt-layer security, protecting AI systems at the point of interaction rather than relying solely on traditional perimeter or endpoint defenses. Taken together, we believe these solutions position us not just as a data supplier, but as a lifecycle partner for agent reliability. We believe 2026 will also mark the acceleration in physical AI—intelligent systems that perceive and interact with the physical world. While robotics provides a mechanical framework, physical AI provides intelligence. The primary bottleneck in this domain is dataset quality and scale. Manual annotation and static QA sampling simply do not scale to billion-sample corpora and continuously evolving environments.

We have developed a large-scale data engineering system that incorporates structural validation, distribution monitoring, temporal consistency checks, and model-in-the-loop instrumentation. This enables us to identify and correct defects in datasets before they propagate into performance failures. We are already using components of this system in the high-visibility engagements we recently announced with Palantir. We recently secured a significant engagement to create foundational datasets for next-generation robotic datasets, including egocentric data. Egocentric data captures the world from the robot's point of view—what it sees and experiences in motion. We are also working with a leading robotic lab to create affordance datasets at scale.

Affordance data teaches the system what actions are possible in a given setting, not just identifying objects but understanding how they can be used. Egocentric data and affordance data, taken together, form the cognitive scaffolding that allows machines to act intelligently in dynamic environments. This work also positions us to support the development of so-called world models—internal simulations that allow AI systems to anticipate outcomes, reason about cause and effect, and plan several steps ahead. World models require richly structured datasets that capture interactions over time and the consequences of actions, precisely the type of data we are now engineering. Finally, we recently developed an AI model for drone and other small-object detection that exceeds prior state-of-the-art benchmarks by 6.45%.

In a field where progress is often measured in fractions of a percentage point, a 6.45% improvement is a material advance. The model improves detection fidelity under real-world conditions where small size, speed, cluttered backgrounds, and environmental noise make reliable perception extraordinarily difficult. We believe this advancement has compelling dual-use implications that we are now actively exploring with potential customers. I would like to underscore one of the important points I just made. For decades, Innodata Inc. has specialized in creating high-quality, complex datasets. Today, these capabilities are central to unlocking the next generation of AI systems. Advanced LLM reasoning, agent reliability in chaotic environments, and robotics perception in the physical world all depend on engineered data ecosystems.

And this is precisely where we operate. Our innovations in LLM training, agentic AI, and physical AI are not separate initiatives. Rather, they are extensions of a single strategic advantage—our ability to engineer data that measurably improves model performance in real-world conditions. We believe our innovation pipeline will be margin enhancing as well as revenue enhancing. We expect early 2026 adjusted gross margins to be in the 35% to 40% range as we ramp up new programs, with normalization toward our target 40% or better adjusted gross margins as new programs ramp up and as innovation-driven workflows scale. Automation, synthetic systems, and evaluation platforms all structurally increase our operating leverage.

I will now turn the call over to Marissa, who will go through the numbers.

Marissa Espineli: Thank you, Jack, and good afternoon, everyone. Revenue for Q4 2025 reached $72,400,000, up 22% year over year. Sequentially, revenue increased 15.7% from Q3 $362,600,000. Adjusted gross profit for Q4 2025 was $30,100,000, an increase of 6% year over year and 9% sequentially, with an adjusted gross margin of 42%. Adjusted EBITDA was $15,700,000, or 22% of revenue, and net income for the quarter was $8,800,000. To reiterate, this is net of significantly expanded data science and engineering efforts that are yielding the types of innovation Jack just spoke about. We ended the quarter with $82,200,000 in cash, up from $73,900,000 at the end of the prior quarter and $46,900,000 at the year-end 2024.

And we did not draw down on our $30,000,000 Wells Fargo credit facility. As Jack mentioned, based on our current momentum, we presently forecast 35% or more year-over-year revenue growth in 2026. Thank you everyone for joining us today. Operator, please open the line for questions.

Jack Abuhoff: Thank you.

Operator: Ladies and gentlemen, we will now begin the question-and-answer session. If you wish to decline from the polling process, please press the star key followed by the number two. If you are using a speakerphone, please lift the handset before pressing any keys. One moment please while we assemble the queue. Your first question comes from George Sutton of Craig-Hallum. Please go ahead.

George Sutton: Thank you, Jack. I feel like I just sat through an advanced AI data science class, so thanks for that. I wanted to step back a little bit because I think people have the assumption that some of what is working for you is somewhat temporary. And I think you have done an interesting job of kind of walking us through in past quarters from post-training as a start, then pre-training, and now there are dramatic other use cases, including things like robotics and autonomous agents. Can you just talk about the breadth of the things you are seeing and where you see us in this continuum of data science opportunity for you?

Having lived through the last couple of years where you started the years with an expectation and you then ended up meaningfully exceeding those initial expectations, is anything set up differently going into 2026 relative to what you see in your sights relative to what you are committing to today?

Jack Abuhoff: Sure. Thank you, George. Thank you for the question. So, as we look out near term, 2026, we see ourselves as being incredibly well set up by the innovations that we invested in 2025. And we see that innovation output as a flywheel. We are getting better. We are getting stronger. We are creating solutions that are solving problems that are the actual impediments that enterprises have when they are looking to integrate AI into their operations. When you look across the spectrum of current capabilities in AI, and future capabilities in things like agentic systems, physical AI, robotics, all of this boils down to challenges in terms of data engineering.

Of course, there are going to be continuous improvements in architectures. There will be bigger models. There will be narrower models for domain-specific challenges. But at the heart of it, in terms of making systems reliable, making them safe at an enterprise level, it is going to be about innovations such as the ones we are announcing today in datasets that are used for evaluation, datasets that are used for training and improving safety and reliability of models. So, we think that we are at the very beginning and that our relevance is by no means diminishing, but only increasing. It is increasing not just at the level of foundation model builders, but it is clearly extending through the enterprise.

We are super excited about where we are right now and about the uptake that the innovations that we are creating are having and are going to be having over the next several years. No. Not at all. We are following exactly that same methodology. We are really limiting our—or we are taking a conservative approach to forecasting growth based on opportunities where we have a very clear line of sight. But where we cannot predict a close rate, where we cannot feel pretty confident in something happening, we are just not baking that into our guidance. Our aspiration is to surprise and to beat expectations.

When I look at this year, I think it will likely be another year of doing exactly that. We are seeing enormous opportunity with a much larger set of customers. We think that is going to result in growth. I think it is likely that we will be increasing guidance as we move through the year. And I think it is going to be a year where we accomplish very meaningful customer diversification. On top of that, as we already discussed, I think it is going to be a year where we are starting to see increasingly hybrid human/technologically driven solutions that spells or presents the promise, I believe, for increased recurring revenue.

I think it promises greater margins over time, greater stickiness, a whole lot of things that will, over time, be, I believe, consistently improving revenue quality as well on top of everything else. In terms of the work we do with foundation model builders, we are seeing tons of traction not just in our largest customer, but in others as well. We are very much aligned with what they are looking to accomplish in things like long-context reasoning improvements. We have innovations that are contributing to that. So we are tremendously excited about where we are right now.

George Sutton: Alright. Good stuff. Thanks, Jack.

Jack Abuhoff: Thank you.

Operator: Your next question comes from Hamed Khorsand of BWS Financial. Please go ahead.

Hamed Khorsand: Just first question is, you were talking earlier about scaling your operations as revenue ramps. Do you have enough employees now? Do you see the need to add more employees? What is your timeline as far as expecting gross margin to move up from here? Thank you. And then is there a timing as far as this pipeline of deals that you are talking about with others than your largest customer?

Jack Abuhoff: Sure. Thanks, Hamed. So I think it really depends on what we are seeing. I think if we begin to project and, internally, growth rates that are very significant, we are going to be making investments in order to ensure that we capture those growth rates. I do think that as a result of digesting some of those people investments that we are making in COGS, as a result of the innovations that we are discussing—different things like that—I do think that we are going to be seeing movement back toward our target gross margins over time. So there are pipelines, but the deals that I am referring to are largely deals that we are closing or have closed.

So we are not depending on—we are not speculating about what will be happening. These are things that are actively underway.

Operator: Your next question comes from Allen Klee of Maxim Group. Please go ahead.

Allen Klee: Yes, hi. For 2025, I think you were just below margins around 23%. And I know it is important for you to reinvest back into the business for the health of the company. My question is, is there any reason to think that you would target a higher or lower adjusted EBITDA margin than what you did in 2025? One of the bullet points you had on the innovation was the structural foundation for margin expansion through automation, synthetic data generation, and evaluation platforms. Can you explain a little what you mean—of which margin expansion are you referring to? Thank you. And the last question I had was just for first quarter 2026.

Is there anything you would want to point out in terms of that might stand out just in terms of, I do not know, revenues or expense spend? Maybe one last quick one. When you were talking about your large base customer, I do not know if I fully understand. You mentioned something about $20,000,000—that maybe it is going to be replaced with more than that—or could you just explain why not yet?

Jack Abuhoff: So we are very much focused on seizing opportunity right now. We believe that we can do that and stay profitable. But we also believe that it is more important to seize opportunity and to do some of the things that we are describing and prove out those innovations than it is to track adjusted gross margin percentages and try to maintain a certain percentage. So we are going to be actively reinvesting in the business. The more opportunities we see, to some extent, the more we will be reinvesting.

We do believe, though, that maintaining profitability is something that we can do while we drive very aggressive growth and while we become progressively more critical to a larger and widening set of customers. Yeah. So we are referring to, over time, gross margin expansion. So a lot of the innovations that we are working on now and that we are bringing into the market are hybridizations of software and human teams. And I think that over time, we are going to be seeing the gross margins associated with those capabilities to be perhaps well in excess of the gross margins that we target today.

Well, I am not going to say it is an ex-quarterly necessarily, but I think, very soon, we are going to be seeing quarters that, from a revenue perspective, are beating what our revenue was for an entire year three years ago. So that is pretty good news right there. As we move through the year, I think you are going to be seeing more proof points and more evidence and more engagement that we have with some very interesting companies around the innovations that we are describing.

I think that we will start to demonstrate that we are somewhat migrating from a vendor to a foundational layer within AI ecosystems, becoming someone that is able to unlock the promise of AI within enterprise engagements. A company that is able to help enterprises embrace complex agents that plan, call tools, execute complex workflows, and create a lot of value. So I think we will be seeing that. I think we will see evidence of that in first quarter. I think we will continue to see evidence of that through the year. Yeah. I think the point that we were making there is how important innovation is to our company today and how it is becoming increasingly important.

There are things that we complete and we are starting new things. And by following the path of innovation—by, you know, what did Wayne Gretzky used to say, by skating to where the puck is going—we are able to deprecate things that the companies no longer require, but be there for them for the things that are the emerging requirements. Again, we are seeing the emerging requirements to be more interesting from a business perspective and a revenue quality perspective and a differentiation perspective than the things that came before. So the investments are proving out. They are enabling us to scale and increase the breadth of engagements.

They are enabling us to win new engagements and new customers, some of which we think are going to be very substantial. They are going to really flower this year. That is going to address the diversification issue. So, when we look at 2026, we see a huge growth year. We believe that we are going to be increasing, likely, our guidance from what we are starting the year at. We think that the solutions and how we are embedded in workflows is going to be progressively more interesting and margin and revenue enhancing. And it promises to be a tremendous year on all of those fronts.

Allen Klee: That is great. Congratulations. Thank you.

Operator: There are no further questions at this time. I will now turn the call back over to Jack Abuhoff. Please continue.

Jack Abuhoff: Thank you, operator. So yes, to wrap up, 2025 was a great year, and 2026 holds the promise of being even better. In 2025, we delivered strong top-line growth. We exceeded expectations across major financial metrics. We expanded margins. We strengthened our balance sheet. We invested successfully ahead of demand. And those investments proved wildly successful and set us up well for 2026. I believe that 2026 is likely to be an incredible year. We have guided to 35% growth based on visibility today. But I believe there may be very considerable upside to that. We will update you through the course of the year much like we have done the last couple of years.

I also want to underscore our belief that this year we will diversify our revenue stream significantly. And we believe expertly engineered data ecosystems are going to be every bit as important as bigger models and new architectures will be in terms of advancing language models, media models, autonomous agents, robots, world models, and other kinds of AI that have not even been conceived of yet. So we are very excited about what lies ahead. We are very confident in our positioning. We are very committed to building one of the most important and, we think, most capable AI enablement companies in the industry. It is going to be an exciting year.

So thank you all for being on the journey with us. We look forward to next time.

Operator: Ladies and gentlemen, that concludes today's conference call. Thank you for your participation. You may now disconnect.