Artificial intelligence is quickly becoming the next transformational technology. Market intelligence firm Tractica forecasts that revenue generated by AI will soar from $1.4 billion in 2016 to $59.8 billion by 2025.
What may surprise you even more is how pervasive the technology has become over the last several years. A number of recent advances in AI have already made their way into our daily lives. Here are six ways you may already be interacting with this groundbreaking technology -- without even realizing it.
Here's a mind-boggling statistic: Google processes more than 3.6 billion search queries per day -- over 40,000 per second on average. The recent improvements in the relevance of its search results stem directly from advances in AI.
Google was one of the early pioneers of AI, establishing the Google Brain project back in 2011. The research focused on the AI discipline of "deep learning," which uses complex computer models called neural networks. These models are loosely inspired by the structure and function of the human brain and attempt to replicate our capacity to learn. The result was a system with the ability to detect patterns when sorting through massive amounts of data at lightning speed. The hallmark of these AI models is that they learn and improve with each use.
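To make the idea concrete, here is a toy sketch of that "learn and improve with each use" loop: a tiny two-layer neural network nudges its weights after every pass over the data so that its error shrinks. The network, data, and learning rate here are invented for illustration -- Google Brain's models are vastly larger -- but the underlying principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four toy input examples and their labels (the logical AND of the inputs)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

# Randomly initialized weights for a tiny 2 -> 3 -> 1 network
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    # Forward pass: compute the network's current guesses
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    error = out - y
    losses.append(float(np.mean(error ** 2)))

    # Backpropagation: push the error back through each layer
    grad_out = error * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Adjust the weights a little -- this is the "learning" step
    W2 -= 0.5 * (h.T @ grad_out)
    W1 -= 0.5 * (X.T @ grad_h)

print(f"loss before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

Each pass through the loop leaves the network slightly better at the task than the pass before -- the same improve-with-use behavior the article describes, just at miniature scale.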
This breakthrough in AI has led to many developments, including image recognition, voice recognition and synthesis, and natural language processing, among others.
Google has applied these advances to a variety of innovative uses. In cooperation with a Dutch university, it recently used its GoogLeNet AI to review a multitude of medical images, and GoogLeNet successfully identified malignant tumors in breast-cancer biopsy slides 89% of the time, compared to only 73% for human pathologists.
Anyone who has ever uploaded photos on Facebook, Inc. (NASDAQ:FB) has likely seen a small window pop up on their screen and ask if they'd like to tag a friend in the photo -- and then suggest the name of the friend who actually appears in said photo. This helpful (if slightly creepy) feature is the result of a complicated set of AI algorithms based on image recognition and, more specifically, facial recognition.
The system will search through previously uploaded photos and attempt to match the face to someone who has already been identified. The more you use the feature, the more accurate it becomes. Facial recognition examines the physical characteristics of a person's face, like the size and location of prominent facial features, and then uses formulas to match them to existing images. Factors such as lighting, angles, and facial expression affect the accuracy of the result.
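The matching step described above can be sketched in a few lines: reduce each face to a vector of measurements, then suggest the closest previously tagged face -- but only if it is close enough. The names, numbers, and threshold below are invented for illustration; Facebook's real system learns its face measurements with deep neural networks.

```python
import math

# Invented feature vectors for faces that were already tagged by hand
tagged_faces = {
    "Alice": [0.31, 0.72, 0.15, 0.44],
    "Bob":   [0.82, 0.21, 0.67, 0.10],
}

def distance(a, b):
    # Euclidean distance between two sets of facial measurements
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_tag(new_face, threshold=0.3):
    # Search previously tagged faces for the closest match
    name, best = min(tagged_faces.items(),
                     key=lambda kv: distance(new_face, kv[1]))
    # Lighting, angle, and expression shift the measurements, so only
    # suggest a name when the match is close enough to be trusted
    return name if distance(new_face, best) <= threshold else None

print(suggest_tag([0.30, 0.70, 0.18, 0.41]))  # measurements close to Alice's
```

The threshold is what keeps the feature from mis-tagging strangers: a face whose measurements sit far from every tagged vector simply gets no suggestion.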
Nearly every smartphone and mobile device out there comes with its own version of the digital assistant. Apple Inc.'s (NASDAQ:AAPL) iPhone has Siri, Alphabet Inc.'s (NASDAQ:GOOGL) (NASDAQ:GOOG) smartphones have Google Assistant, and Microsoft Corporation's (NASDAQ:MSFT) Windows phones have Cortana -- but these are just the most well-known examples. These virtual helpers employ AI-based technologies like speech recognition and natural language processing, which have seen drastic improvement over the last few years.
Through natural language processing, the AI listens to a user's request, breaks it down into manageable pieces, determines what is being asked, and decides on the appropriate response or action.
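The listen / break down / determine / respond pipeline above can be sketched with a deliberately naive keyword matcher. Real assistants use statistical language models rather than rules like these -- the intents and phrasings below are invented purely to show the shape of the pipeline.

```python
# Hypothetical intents: keyword -> (action name, response template)
INTENTS = {
    "call":   ("make_call", "Calling {}..."),
    "remind": ("set_reminder", "Reminder set: {}"),
    "weather": ("get_weather", "Here's the forecast for {}."),
}

def handle(request: str) -> str:
    words = request.lower().split()          # break into manageable pieces
    for keyword, (action, template) in INTENTS.items():
        if keyword in words:                 # determine what is being asked
            detail = " ".join(w for w in words if w != keyword)
            return template.format(detail)   # decide on the response
    return "Sorry, I didn't understand that."

print(handle("call mom"))
```

A production assistant replaces the keyword test with a trained model that can handle paraphrases ("ring my mother"), but the stages are the same ones the paragraph lists.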
Digital assistants can perform a number of helpful tasks like dictating texts, looking up phone numbers and initiating calls, adding appointments to your calendar, reading email and text messages, and providing reminders.
With the introduction of Alexa, the digital assistant created by Amazon.com, Inc. (NASDAQ:AMZN), the virtual helper made the jump from smartphones to smart speakers. The Amazon Echo smart speaker, which is powered by Alexa, was introduced in 2014, and it quickly became a must-have device among technophiles.
Specifically, a smart speaker is a voice-activated wireless device that features a multi-microphone array and constantly listens for a "wake" word that prompts it to start listening for requests (the wake word for the Echo is "Alexa"). These devices can answer questions, search the internet, stream music, control smart home devices like lights and thermostats, and perform thousands of other functions using third-party apps. The microphones use "far-field" technology to hear commands in noisy environments and across large rooms.
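The always-listening behavior can be sketched as a simple loop: discard everything until the wake word appears, then capture what follows as a request. Real devices run a low-power acoustic model directly on the microphone array; here the "audio" is simulated as a stream of already-transcribed words, with `<pause>` standing in for silence.

```python
WAKE_WORD = "alexa"

def listen(stream):
    requests = []
    capturing = False
    buffer = []
    for word in stream:
        if word == WAKE_WORD:        # wake word heard: start capturing
            capturing = True
            buffer = []
        elif capturing:
            if word == "<pause>":    # silence marks the end of the request
                requests.append(" ".join(buffer))
                capturing = False
            else:
                buffer.append(word)
        # everything else is background chatter and is simply ignored
    return requests

stream = "chatter chatter alexa play jazz <pause> chatter".split()
print(listen(stream))  # → ['play jazz']
```

Note that nothing before the wake word is kept at all -- which is why the wake-word check runs locally on the device, with only the captured request sent to the cloud.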
Smart speakers use the same cloud-based AI systems as their underlying digital assistants. Amazon has introduced a variety of form factors for its devices. Competition finally arrived late last year with the introduction of the Google Home smart speaker, and Apple plans to release its own contender, the HomePod, in December.
Ever wonder how Netflix, Inc. (NASDAQ:NFLX) can suggest movies and TV shows based on your viewing history, or how Amazon can recommend products based on those you've already purchased? You can thank AI. These companies, and many others, use algorithms that sort through massive pools of consumer preference data to find patterns and make recommendations that will appeal to you.
Every Netflix user's home screen is unique to that individual. Netflix uses sophisticated AI-based algorithms that factor in data such as the programs you've previously watched, the programs you've turned off, the time of day, the genres you watch most, the titles you've paused to consider when scrolling through the rows of choices, and more. Because AI gets smarter the more it's used, the recommendations get better and better over time.
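The pattern-finding idea behind those recommendations can be sketched with a bare-bones collaborative filter: score unseen titles by how much viewers with similar taste liked them. The viewers, titles, ratings, and similarity formula below are all invented for illustration -- Netflix's real system weighs far more signals, as described above.

```python
# Invented 1-5 star ratings from three hypothetical viewers
ratings = {
    "ana":   {"Stranger Things": 5, "The Crown": 2, "Narcos": 5},
    "ben":   {"Stranger Things": 5, "Narcos": 4, "Black Mirror": 5},
    "carla": {"The Crown": 5, "Black Mirror": 1},
}

def similarity(a, b):
    # Agreement on commonly rated titles (higher = more similar taste)
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    return sum(4 - abs(ratings[a][t] - ratings[b][t]) for t in shared) / len(shared)

def recommend(user):
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        # Titles the user hasn't seen, weighted by how similar the rater is
        for title, rating in ratings[other].items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + sim * rating
    return max(scores, key=scores.get) if scores else None

print(recommend("ana"))
```

Because ana's ratings line up closely with ben's, ben's favorites carry the most weight in her recommendation -- the same "viewers like you" pattern-matching the article describes, at toy scale.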
Netflix has also revolutionized video encoding, the process of compressing video streams for delivery over the internet. Heavy compression keeps bandwidth low but produces grainy images, a familiar problem for viewers in areas with slow connections. Netflix uses AI to match the level of compression to the content of each scene, compressing aggressively where it won't be noticed and sparingly where it would, improving image quality while reducing buffering.
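The per-scene idea can be sketched as follows: rather than one fixed bitrate for the whole stream, each scene gets only the bits its visual complexity demands. The scene names, complexity scores, and bitrate formula here are invented for illustration; Netflix derives its figures by analyzing the actual video.

```python
# (scene name, visual complexity: 0.0 = static image, 1.0 = fast action)
scenes = [
    ("title card", 0.05),
    ("dialogue", 0.30),
    ("car chase", 0.95),
]

MAX_KBPS = 5000  # hypothetical ceiling for the most demanding scene

def bitrate_for(complexity, floor=300):
    # Simple scenes tolerate heavy compression; busy scenes need more bits
    return max(floor, int(complexity * MAX_KBPS))

plan = {name: bitrate_for(c) for name, c in scenes}
naive_total = MAX_KBPS * len(scenes)      # one fixed high bitrate throughout
adaptive_total = sum(plan.values())
print(plan)
print(f"bits saved vs. fixed bitrate: {naive_total - adaptive_total}")
```

The savings on simple scenes are what let the busy scenes look good even on a slow connection -- quality goes up while total bandwidth goes down.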
While simple facial recognition compares photographic images, the technology in Apple's newest iPhones, called Face ID, takes it to the next level. Face ID uses a combination of infrared depth sensing, high-resolution cameras, and AI software to create a 3D map of your face. It projects more than 30,000 infrared dots that detect the depth of your facial features to create a mathematical model of your face.
This system can work in low light or even darkness, and it adapts to gradual changes in appearance that occur naturally over time. It even defeats attempts to imitate the owner with masks or photos. This level of security is vital, as Face ID can unlock the iPhone and authorize payments through Apple Pay.
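The "adapts to gradual changes" behavior can be sketched as a slowly drifting template: each successful scan nudges the stored face model toward today's face, so a growing beard keeps matching, while a flat photo -- a depth map that differs too much -- is rejected outright. Every number and formula below is invented; Apple's actual matching runs a learned model inside the Secure Enclave.

```python
THRESHOLD = 0.15   # maximum average depth difference for a match (invented)
BLEND = 0.1        # how quickly the stored template adapts (invented)

template = [0.50, 0.62, 0.41, 0.75]   # toy stand-in for a 3D depth map

def unlock(scan):
    global template
    diff = sum(abs(s - t) for s, t in zip(scan, template)) / len(template)
    if diff > THRESHOLD:
        return False                   # mask or photo: depth map doesn't match
    # Successful scan: nudge the template toward today's face, so
    # gradual changes in appearance keep matching over time
    template = [(1 - BLEND) * t + BLEND * s for t, s in zip(template, scan)]
    return True

print(unlock([0.52, 0.60, 0.43, 0.74]))  # owner, slight day-to-day variation
print(unlock([0.10, 0.95, 0.05, 0.20]))  # flat photo held up to the camera
```

Crucially, the template only updates on a successful match -- a rejected scan never drags the stored model toward an impostor's face.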
John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Danny Vena owns shares of Alphabet (A shares), Amazon, Apple, Facebook, and Netflix. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Amazon, Apple, Facebook, and Netflix. The Motley Fool has the following options: long January 2020 $150 calls on Apple and short January 2020 $155 calls on Apple. The Motley Fool has a disclosure policy.