Charlie Munger, Warren Buffett's longtime partner, possessed a remarkable knack for understanding the world. Among his many pieces of wisdom, the concept of "inversion" always stood out as particularly potent. Inversion encourages us to solve problems by thinking backward: Instead of figuring out how to achieve a desired outcome, we consider what actions would guarantee failure. This simple yet profound mental model often reveals hidden pitfalls or, conversely, extraordinary opportunities.
I apply this principle rigorously in my investment research. Sometimes it's a purely academic exercise, a way to play devil's advocate. Other times, it uncovers critical hidden dangers or previously unseen possibilities.
Applying this same logic to the field of AI development led me to a profound insight that I'm increasingly convinced is correct: We are not scaling up to artificial general intelligence (AGI), but rather scaling down.
The consumer versions of ChatGPT, Claude, and other large language models are, in essence, distillations of far greater capabilities that already exist. This isn't just a speculative thought; three compelling lines of evidence support this case, radically altering the risk-to-reward curve for AI investors.
The unseen hand: The military's undeniable track record
The military has a long and well-documented history of developing groundbreaking technologies behind the scenes, only to introduce them to the public years or even decades later via commercial companies. Consider the internet, GPS, or even basic computing -- all had their origins in military or government research before becoming widespread consumer technologies.
Why would AI development proceed any differently? If history is our guide, it likely followed the same pattern: classified breakthroughs first, public release years later.
If this historical pattern holds, the publicly available versions of ChatGPT and its peers are likely scaled-down versions of AGI that already exist in classified settings. Given how incredibly capable these public models are, and assuming a typical five-to-10-year military-to-civilian technology transfer lag, AGI could already be operational in government or military facilities.
The logic is straightforward: If what we're seeing publicly represents technology that's five to 10 years behind classified capabilities, then the classified world has already achieved AGI in some form.
And here's where it gets truly explosive. Once AGI is achieved, most experts believe the leap to artificial superintelligence (ASI) happens rapidly. Why? AGI can improve its own code, design better algorithms, and create more efficient hardware architectures. This recursive self-improvement creates an intelligence explosion -- each generation of AI making the next one smarter, faster, more capable.
Yes, I realize claiming ASI might already exist sounds audacious. But if classified programs achieved AGI years ago, they've had time for this intelligence explosion to occur. The same scaling laws that guide $1 trillion in infrastructure investments tell us that getting to AGI is the hard part. Once you're there, the path to ASI becomes a cascade of compounding improvements that could unfold in months, not decades.
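The compounding dynamic behind the "intelligence explosion" argument can be made concrete with a toy model. Every number below is an illustrative assumption, not a measurement -- the point is only to show how fixed per-generation gains compound:

```python
# Toy model of recursive self-improvement: assume each AI generation
# improves overall capability by a fixed fraction, and that gain
# compounds generation over generation. All figures are hypothetical.

def generations_to_reach(start: float, target: float, gain_per_gen: float) -> int:
    """Count generations until capability crosses `target`, assuming
    each generation multiplies capability by (1 + gain_per_gen)."""
    capability = start
    generations = 0
    while capability < target:
        capability *= 1.0 + gain_per_gen
        generations += 1
    return generations

# With an assumed 10% capability gain per generation, going from a
# baseline of 1.0 to 100x the starting capability takes 49 generations.
print(generations_to_reach(1.0, 100.0, 0.10))  # -> 49
```

If each "generation" here took weeks rather than years, the jump from AGI-level to far-beyond-AGI capability would indeed play out in months -- which is the shape of the argument, whatever one thinks of the assumed inputs.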
Tech leaders are already talking past AGI
The most telling evidence comes from the mouths of tech leaders themselves. The narrative from Silicon Valley's pioneers isn't about if AGI will arrive, but when -- and increasingly, what comes after it.
When asked what he's excited about in 2025, OpenAI CEO Sam Altman immediately responded: "AGI." He added, "I think we are going to get there faster than people expect" and "We actually know what to do... it'll take a while, it'll be hard, but that's tremendously exciting."
In an AMA on Reddit late last year, Altman claimed AGI is achievable with current hardware, calling it "basically an engineering problem" at this point. These aren't the words of someone still chasing fundamental scientific breakthroughs; they sound like a project manager discussing a complex but solvable challenge.
But here's where it gets really interesting: Meta Platforms (META) has already moved past talking about AGI entirely. In June 2025, Mark Zuckerberg created "Meta Superintelligence Labs," writing in an internal memo: "As the pace of AI progress accelerates, developing superintelligence is coming into sight. I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way."
While the public debates whether AGI is five or 10 years away, Meta has restructured its entire AI organization around superintelligence -- the level beyond AGI. Zuckerberg is so committed that he's personally recruiting talent and has rearranged Meta's offices so the team sits near him.
The company has embarked on an aggressive recruitment drive, with reports of exceptionally lucrative offers. While Altman publicly claimed Meta was offering "$100 million signing bonuses" to poach talent, Meta CTO Andrew Bosworth and some of the researchers involved have since publicly disputed that figure as an exaggeration, clarifying that the compensation packages, while substantial, are structured differently and reserved for a very select few.
Regardless of the exact figures, this isn't just R&D; it's a full-scale, high-stakes race for a future that many still consider science fiction, with Meta's moves signaling an urgent pursuit of superintelligence.
Wall Street knows: Pricing in the unthinkable
The financial markets are not merely reacting to hype; they are pricing in something extraordinary. Palantir Technologies, a company known for its deep ties to government and its data analytics and intelligence platforms, trades at a staggering 245 times forward earnings. Nvidia, the semiconductor powerhouse underpinning much of the AI revolution, trades at 52 times earnings, with Wall Street analysts continually raising their consensus price targets.
These aren't the valuations of companies in a nascent, unproven field; they reflect an expectation of unprecedented growth and disruptive power.
Perhaps the biggest tell, however, is what's happening in quantum computing. Quantum computing stocks -- at least the pure-play companies like IonQ (IONQ), Rigetti Computing, and D-Wave Quantum -- have gone parabolic this year, despite their platforms still being in early stages of commercialization.
IonQ alone has surged 474% in the past 12 months. Why is this a tell? Most experts believe that advanced forms of AI will act as a significant accelerator in the quantum computing space, meaning meaningful breakthroughs in this technology aren't decades away, but perhaps just years.
Want more proof? IonQ trades at over 230 times trailing sales. This is either pure speculative mania or a deep insight into the imminence of groundbreaking advancements. The market seems to be betting on a near-term convergence of these two transformative technologies.
$1 trillion and counting: The AI superbuild
A staggering $1 trillion in capital is already committed to the AI superbuild through 2030. This commitment isn't coming from wild speculation; it's coming from some of the most sophisticated capital allocators in the world. McKinsey estimates that scaling AI data centers will require $6.7 trillion globally by 2030. The recently announced Stargate Initiative alone represents a reported $500 billion in private sector investment.
My guess is that the risk is limited precisely because advanced AI systems have already been developed in classified settings. Now, we're in a phase of preparing humanity for a profound discontinuity. The infrastructure being built today isn't for developing AGI -- it's for deploying scaled versions of what already exists. Yes, the old system is being phased out, and a new, AI-centric society is rapidly coming into view.
The nuclear energy imperative
If the AI revolution really is further along than publicly acknowledged, then the "scaling down" thesis has been significantly de-risked. So, how should investors ride this wave?
Focus on the core infrastructure and applications. Data center REITs like Equinix are crucial as the physical backbone of this new era. Core AI infrastructure players, such as Nvidia for chips and Meta Platforms for foundational models and research, are central. Application developers like SoundHound AI, which are leveraging these advanced models, also present opportunities.
Crucially, the immense and continuous power demands of AI data centers are creating a massive surge in demand for reliable, carbon-free energy. This is where nuclear energy emerges as a critical, long-term play. Unlike intermittent renewables, nuclear power provides baseload, 24/7 clean electricity, which is precisely what always-on AI operations require.
Tech giants like Microsoft, Alphabet, and Amazon are already signing multiyear power purchase agreements and investing directly in nuclear power solutions, including both established plants and new small modular reactors (SMRs).
Key nuclear energy plays include established operators like Constellation Energy, the largest nuclear power plant operator in the U.S., which is already signing deals directly with hyperscalers. Uranium miners like Cameco are well-positioned as nuclear demand grows. More speculative plays include SMR developers like NuScale Power and Oklo, which could deploy smaller reactors closer to data centers.
Another key emerging trend to understand is that this intelligence explosion will soon spill over into other sectors such as healthcare (through AI-fueled drug discovery), energy (beyond just power generation), and transportation (autonomous vehicles).
What's the easiest way to play this trend? The Vanguard Information Technology Index Fund (VGT) offers broad exposure with an ultra-low expense ratio. With automatic rebalancing, you don't have to pick individual stocks or themes. You can play the entire AI value chain as it evolves in real time.
The inversion that changes everything
Safety remains a significant concern, especially with people like Geoffrey Hinton expressing public reservations. However, the idea increasingly taking hold on Wall Street, and one that has long been held in Silicon Valley, is that AI's true progress is actually underhyped to the public.
The inversion principle suggests we're not just waiting for AGI to arrive; it might already be here, and what we're witnessing publicly is just the tip of the iceberg, carefully scaled down for introduction to the world. If this thesis is even partially correct, we're not preparing for a technological revolution -- we're already in the middle of one. Are you prepared for what comes next?