Siri and other assistants will likely manage much more of our lives in the near future.

Smartphone users love their apps. According to Nielsen, U.S. consumers use an average of 26.7 apps and spend 37.5 hours in them each month.

But a recent report from research company Gartner predicts that by 2020, nearly half of our mobile interactions will be done through virtual personal assistants, like Apple's (AAPL -1.08%) Siri, Microsoft's (MSFT -0.40%) Cortana, and Alphabet's (GOOG -0.75%) (GOOGL -0.90%) Google Now.

Here's what Gartner had to say:

By 2020, smart agents will facilitate 40 percent of mobile interactions, and the postapp era will begin to dominate. Smart agent technologies, in the form of virtual personal assistants (VPAs) and other agents, will monitor user content and behavior in conjunction with cloud-hosted neural networks to build and maintain data models from which the technology will draw inferences about people, content and contexts. Based on these information-gathering and model-building efforts, VPAs can predict users' needs, build trust and ultimately act autonomously on the user's behalf.

Having our phones predict the exact information we need, when we need it, is something major tech companies have already built into smartphones to a small degree. But each is also making big moves to expand its VPA.

What Apple's doing
Last year, Apple made headlines when it was discovered that the company was diving further into neural networks (a type of machine learning in which computers use algorithms to process information in ways similar to how our brains do) to make Siri's voice recognition much smarter.
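To picture what that parenthetical means in practice, here's a toy sketch of a single artificial "neuron": it takes numeric inputs (the feature values and weights below are made up for illustration), combines them through weighted connections, and squashes the result into a score between 0 and 1. Real speech-recognition networks stack millions of these.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term, squashed by a
    # sigmoid "activation" -- loosely analogous to a neuron firing.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two hypothetical acoustic features scored by one neuron.
score = neuron([0.9, 0.2], [1.5, -0.8], bias=-0.3)
print(score)
```

Training a network amounts to nudging those weights until the scores match known examples; the "learning from conversations" VocalIQ promised is that same adjustment process applied continuously.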

Image source: Apple.

More recently, though, Apple's made a handful of acquisitions that could beef up Siri once again -- and lead to Gartner's prediction of smarter VPAs. In October, Apple purchased natural-language speech recognition company VocalIQ, which made a form of artificial intelligence (AI) that actively learns from conversations rather than just listening to them. The technology could help Siri understand the meaning behind what someone is saying, not just the words themselves.

Apple also recently picked up the deep-learning company Perceptio, which built a system that allows AI to run in apps without having to collect massive amounts of user data. Perceptio was working on technology that helped AI classify photos without being told what they are. But the technology could be applied to other areas where a phone needs to learn something new without specifically being told. 

Aside from its acquisitions, Apple has also made several recent hires in artificial intelligence, including a deep-learning expert from NVIDIA. Of course, not all of Apple's AI moves are geared toward Siri, but considering the iPhone accounts for 63% of Apple's revenue, you can bet the company is working to ensure its assistant outsmarts rival VPAs.

What Alphabet's doing
Alphabet's Google already implements neural net processing in some of its apps, like Google Translate. This allows the app to instantly translate text in images from one language into another, all without having to access the Internet for information.

Image source: Google.

Similarly, Alphabet's Google Photos app uses a deep-learning system called TensorFlow to categorize images in the app (like pictures of beaches), so users can find the images they're looking for much faster.

But Google isn't just interested in deep learning and neural networks to power Photos and Translate: The company also uses its AI systems to become the best predictor of what users want. Google draws heavily on information from its other services, including Gmail and online searches, and then surfaces predictive information through Google Now.

As an article in SearchEngineLand recently mentioned, "Google Now is fully cloud-based. Whatever it learns on any device or in any way you interact with Google all goes into a common profile in the cloud." 

So, Alphabet's Google is already using cloud-based information to power Google Now, and as the company dives into more cloud-based neural networking, you can be sure Google Now's predictions will get even better.

What Microsoft's doing
Microsoft has already tapped into neural networking with its Skype app, which can translate between languages in real-time conversations. And Microsoft's virtual assistant, Cortana, uses neural networks for speech recognition as well.

But the company continues to expand its AI focus, and earlier this year, Microsoft Research revealed that its deep-learning system could classify images across 1,000 possible categories more accurately than humans. Microsoft's deep-learning system had an error rate of 4.94%, while humans had an error rate of 5.1%.
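Those percentages are simpler than they sound: an error rate is just misclassified images divided by total images. The tallies below are hypothetical (Microsoft reported only the rates, not these raw counts), but they show how the comparison works.

```python
def error_rate(wrong, total):
    # Fraction of images the classifier got wrong.
    return wrong / total

# Hypothetical counts chosen to reproduce the reported rates.
machine = error_rate(494, 10_000)  # 4.94%, as Microsoft reported
human   = error_rate(510, 10_000)  # 5.1%, the human benchmark
print(f"{machine:.2%} vs {human:.2%}")  # → 4.94% vs 5.10%
```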

Image source: Microsoft.

As with Apple and Alphabet, Microsoft isn't expanding its use of neural networks and deep learning just so users can have better photo classification. Eventually, the company wants to use these systems to infer what information users want from their mobile devices. 

Microsoft's Cortana uses cloud-based information to interpret what users are asking, pulling from Bing searches, emails, and Web-browsing history. As more of Microsoft Research's findings are put to use in Cortana, the assistant's ability to predict what users want should only improve.

No clear winner yet
It's hard to say which platform is expanding its neural-networking expertise better at this point. Right now, virtual personal assistants still feel like a nice-to-have feature rather than something critical to getting things accomplished throughout the day. Still, if these tech companies continue to pursue neural processing, deep learning, and other AI systems, then VPAs will likely become one of the most important aspects of our future devices.