Apple's (AAPL -1.22%) chip strategy has been evolving this year. The company has begun branching its A-family of custom processors into various models tailored for specific purposes. The iPhone maker is now taking its chip strategy to the next level, and oddly, this is one area where Apple is uncharacteristically choosing not to brag about its achievement.

Let it be
When Apple unveiled the iPhone 5, marketing chief Phil Schiller said that the new A6 processor would deliver up to two times faster CPU and graphics performance, a relatively vague comparison. Schiller didn't give any tangible figures to back up that claim.

Source: Apple.

Schiller also said nothing about how many CPU or GPU cores the A6 has, a stark contrast to the third-generation iPad event, where he was happy to point out that the A5X powering that device has a dual-core CPU and quad-core GPU. Since Apple didn't disclose much about the A6 chip, the real question is how it's achieving those performance gains.

As it turns out, the A6 is actually an incredibly important achievement for Apple under the hood of its iDevices.

Come together
Apple licenses both specific processor cores and an instruction-set architecture from ARM Holdings (ARMH). The company has generally used relatively standard cores in its chips; the A5 and A5X both use two Cortex A9 cores. ARM's next-generation core is the Cortex A15, which its licensees are rushing to get to market this year.

An early report from AnandTech speculated that Apple might have used Cortex A15 cores in the A6 to achieve the performance gains, which would have beaten rivals like Texas Instruments (TXN -2.44%) and Samsung to market. Upon further digging, AnandTech discovered that Apple is likely using a custom core of its own design rather than ARM's standard ones, utilizing its instruction-set license in much the same way that Qualcomm (QCOM -2.36%) does with its Snapdragon processors.

This is one of the key design choices that allow the A6 in the iPhone 5 to achieve higher performance without sacrificing power consumption and battery life. Another is the fact that Apple has made the chip smaller -- 22% smaller, by its claims -- pointing to the use of Samsung's 32-nanometer manufacturing node.

The long and winding road
The Linley Group also details the long journey this chip took to get to market, further underscoring how odd it was that Apple didn't spend more time touting the A6. Apple originally (and secretly) licensed the instruction-set architecture that would pave the way for its custom ARM-compatible CPUs back in April 2008, shortly after it purchased PA Semi for $278 million.

One of the teams acquired with PA Semi helped create the A4 chip, while another began designing the custom core that would become the A6. By 2010, the early blueprints were complete, and Apple then picked up Intrinsity for $120 million to work on the physical-design phase.

The report estimates that Apple has spent nearly $500 million over the past four years, including acquisitions, licensing costs, and other support expenses, to create the A6, whose performance should be on par with Qualcomm's newest Krait-architecture Snapdragons.

You say you want a revolution
With all of the resources and effort Apple has spent building its own processors, coupled with its love of vertical integration, the odds of Intel (INTC -2.40%) chips ever powering an iDevice look increasingly remote. Back in May, Intel CEO Paul Otellini said the company wanted to make chips so powerful and efficient that Apple "can't ignore us." Chipzilla's Medfield Atom chips are already built on a 32-nanometer process and are about to move to 22 nanometers.

Still, changing architectures is a massive undertaking. Apple did it when it switched Macs to Intel in 2006, but I don't think the Mac maker is interested in pursuing such a shift in iOS after all the progress it has made.

Happiness is a warm gun
To be sure, the A6 is the first of many custom chip designs to come in future iDevices. Apple will undoubtedly begin transitioning new products to these designs and continue building on what it has learned in processor design. Sorry, Intel.