Over the next 25 years, a revolutionary technology will transform our world. It will save us money, boost production of goods fivefold, and make our lives more comfortable. But it will crater wages by 90%, drive whole swaths of skilled workers to starvation, destroy families, and cause wide-scale political and social upheaval, sparking mobs, riots, and armed rebellion.

The year is 1804. The invention: The Jacquard loom.

The Jacquard loom is a wooden contraption that fits to the head of a regular loom. You feed it punch cards, and it controls the weaving patterns for you. It's efficient, cost-effective, and anyone can operate it. No more need for well-paid cotton and silk weavers.

While plunging prices of textiles made 19th-century consumers better off, skilled weavers went from comfortable three-day workweeks to being too poor to own bedding. Many became "Luddites," machine smashers and arsonists, named for their fictional leader, Ned Ludd of Robin Hood's Sherwood Forest.

A Jacquard loom. Image source: George Bruno, Le Tour de la France par deux enfants. Published 1904. Courtesy of John P. Robarts Research Library, University of Toronto.

The Luddites are no longer with us. What remains is the legacy of their hated looms. For, improbably enough, a straight line runs from those very looms to our own modern world of artificial intelligence (AI).

We do live in a world run by AI. It flies our aircraft, allocates financial resources, and decides what news we read.

Soon it will drive our cars, care for our old, and fight wars.

It will make our lives longer, healthier, and more comfortable while destroying millions of jobs.

And all of that is coming sooner than we think.

Banks, customer service departments, and healthcare are already spending billions of dollars each year on AI tools. Growth estimates vary, but within five to 10 years, annual worldwide spending on AI is projected to reach $30 billion to $100 billion.

This time, it's different

After reading story after story about AI's rapid advances, it's hard not to imagine that this technology will transform our world. But AI's abilities are so hyped, its promised benefits so vast, its dangers so dystopian, its direction so hard to predict, and its mechanisms so technical, that we lack a clear view of what's going on. Our understanding lags our awe.

Why is AI transforming our world? What gives it such power to move, inspire, and terrify us?

From the infancy of written history, we've defined ourselves by our ability to guide our actions through reason and organize the world around us.

Aristotle, the first great taxonomist, classified humanity by our intelligence 24 centuries ago. He did so by emphasizing the distinctiveness of our capacity for reason: "Life" we hold in common with mere plants; perception we share with "the horse, the ox, and every animal." But the unique purpose of humans -- what makes us special -- is activity which "follows ... a rational principle."

That view hasn't fundamentally changed since ancient times. Two millennia later, Carl Linnaeus, the Swedish biologist who invented modern biological taxonomy, named our species Homo sapiens -- "wise man." In the preface to his 1758 work Systema Naturae, Linnaeus characterized our species as "always curious and inquisitive, and ever desirous of adding to his useful knowledge."

Modern paleontologists have essentially confirmed what earlier philosophers and scientists said. Indeed, the most up-to-date fossil and archaeological evidence tells us that characteristics of human intelligence -- especially adaptability and large-scale cooperation -- were responsible for our species' survival and eventual dominance.

We learn new things. We understand complex ideas. We transmit knowledge and cooperate through language and culture. And we fashion a multitude of tools to solve a variety of problems.

Human intelligence is wrapped up with our species' success and is interwoven with our understanding of who we are.

And so, artificial intelligence cuts right to the heart of what it is to be human.

AI has the capacity to augment our most distinctive qualities -- rationality, adaptability, ingenuity, mastery over our environment -- and, in the minds of many, to supplant us as the most sapient residents of this world.

This is what makes AI unlike any other revolutionary technology: In principle, it can do everything we do -- and possibly do it better.

The essence of intelligence

Computers have long made us question who we are and the nature of our relationship with machines.

During the 1830s, England entered into the Industrial Revolution's era of big data. Its government began producing vast troves of statistical information about everything from the cost of "pease and beans" to the export of hats. Processing all that information was time-consuming. Its civil service had to employ an army of clerks to read handwritten census records from every single parish, tabulate the data on large sheets of paper, make tick marks, count the ticks, fold the pages over, and convert everything into new tables, over and over again for every statistic the government wanted to know. They were known as "computers." Many suffered nervous breakdowns from all the ticking and counting.

Nineteenth-century big data: In 1891, Great Britain produced 53 pounds of meat per inhabitant. Image source: Michael George Mulhall, The Dictionary of Statistics. Published 1892. Per Brendan Mackie. Courtesy of Harvard University. 

Charles Babbage, a strange polymath known to have been interested in obscure statistics such as the speeds at which men could saw various kinds of wood, the skeletal weights of different mammals, and the burning rate of potash, designed a mechanical computer he called the "Analytical Engine." It would have been a 2-ton contraption composed of hundreds of metal gears, pulleys, and switches, programmed with the same punch-card technology that had revolutionized weaving looms.

Whereas Babbage only meant to replace human clerks with perfect, lifeless counting gears, his friend, Countess Ada Lovelace, could see what a mechanical computer was capable of.

Lady Lovelace helped to popularize Babbage's proposed Analytical Engine and is often credited with being the first computer programmer. But at the same time, she wrote in her notes on a scientific memoir about the Engine that Babbage's machine could never truly be intelligent (emphasis original):

The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.

What later came to be called "Lady Lovelace's objection" was the idea that machines can't be intelligent because they are preprogrammed, and so can't come up with anything on their own or surprise us.

Of course, today's computers come up with things on their own all the time. Waze and Google Maps create navigational routes, Netflix suggests movies to watch, and Siri interprets our questions and provides answers. How do they do it?

AI may have come a long way since the era of cotton mills, high-pressure steam power, and laudanum, but it's not as though researchers have discovered some philosopher's stone or Powder of Life. Computers are not conscious, nor do they have minds which can literally think, understand, and desire things. How can a machine, programmed by humans, be capable of doing such things on its own -- things we never told it explicitly to do, and which we might never even have imagined?

Here's how it all works. Computers run on algorithms. An algorithm is a set of instructions for doing something. For instance, an algorithm for finding a lost remote control might be:

  1. Ask yourself where was the last place you remember using it.
  2. Walk to that place and see if it's there.
  3. If it is, congratulations -- you're done. If not, repeat steps 1 through 3 with the next last place you remember using it.
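The steps above translate almost directly into code. Here's a minimal sketch in Python; the list of places and the way we "check" a spot are invented for illustration:

```python
# A sketch of the lost-remote algorithm above, assuming we can list
# candidate spots in order of how recently we used the remote there.
def find_remote(places_by_recency, is_it_there):
    for place in places_by_recency:     # step 1: next most recent place
        if is_it_there(place):          # step 2: walk over and look
            return place                # step 3: found it -- done
    return None                         # ran out of places to check

# Hypothetical example: the remote turns out to be on the kitchen counter.
places = ["couch cushions", "kitchen counter", "nightstand"]
found = find_remote(places, lambda p: p == "kitchen counter")
print(found)  # -> kitchen counter
```

Notice that the algorithm is just a precise recipe: the computer doesn't "know" anything about remote controls, it simply follows the instructions in order.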

The key to AI is algorithms that don't merely tell a machine what to do but tell the machine how to decide what to do.

Suppose you're not using your feet to find a missing remote control, but rather Google Maps to find the best route from Duluth, Minnesota, to Eau Claire, Wisconsin.

Google could use an unsophisticated algorithm: Check every city neighboring Duluth, check every city neighboring those, and so forth until it finds Eau Claire. But checking that many paths (16 quadrillion) would take a typical computer 350 years and require an amount of memory equivalent to 1% of all the world's data.

A cleverer algorithm can do things much quicker. One such algorithm, known as A*, works like this:

  1. Start in Duluth.
  2. Check every neighboring city, in order of their actual distance from Duluth plus the estimated distance to Eau Claire (based, say, on GPS coordinates).
  3. If the city you're checking is Eau Claire, you're done. If not, repeat steps 2 and 3.

This time, we'll get an answer almost instantaneously.
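The three steps above can be sketched in a few lines of Python. The toy road map below is entirely invented -- the city names, mileages, and straight-line estimates are made up for illustration, not real driving distances -- but the logic is genuine A*: always explore the city whose "distance traveled so far plus estimated distance remaining" is smallest.

```python
import heapq

# Invented toy road map: each city lists (neighbor, miles) pairs.
graph = {
    "Duluth":     [("Superior", 5), ("Hinckley", 75)],
    "Superior":   [("Duluth", 5), ("Spooner", 80)],
    "Hinckley":   [("Duluth", 75), ("Spooner", 60)],
    "Spooner":    [("Superior", 80), ("Hinckley", 60), ("Eau Claire", 70)],
    "Eau Claire": [("Spooner", 70)],
}

# Estimated miles remaining to Eau Claire (in practice, derived from
# GPS coordinates). The estimate must never overshoot the true distance.
estimate = {"Duluth": 140, "Superior": 135, "Hinckley": 110,
            "Spooner": 65, "Eau Claire": 0}

def a_star(start, goal):
    # Priority queue ordered by (miles so far + estimated miles left).
    frontier = [(estimate[start], 0, start, [start])]
    visited = set()
    while frontier:
        _, miles, city, path = heapq.heappop(frontier)
        if city == goal:                 # step 3: reached Eau Claire
            return path, miles
        if city in visited:
            continue
        visited.add(city)
        for neighbor, step in graph[city]:   # step 2: check neighbors
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (miles + step + estimate[neighbor],
                                miles + step, neighbor, path + [neighbor]))
    return None, float("inf")

path, miles = a_star("Duluth", "Eau Claire")
print(path, miles)
```

The distance estimate is what makes A* clever: instead of fanning out blindly in every direction, the search is pulled toward cities that look like progress toward the goal.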

A* was originally devised to help Shakey the Robot avoid bumping into things, but you can use A* to search for the solution to other problems, too. Many researchers program computers to search grammatical combinations with A* when designing software that can understand human language and speak with us.

These examples may seem simple. But algorithms such as A* form the building blocks of machine intelligence. The sophisticated AI tools that we use every day are made up of many such algorithms, embodied in thousands of lines of computer code. It quickly gets complicated.

Making sense of it all

AI has a unique potential to refashion our world. But ironically, despite everything that AI-based programs know about us, most of us know little about it.

So I went looking for answers.

To better understand AI's inner workings and possible future, I enrolled as a part-time student at Georgetown University to study AI computer science and programming. I also reviewed hundreds of books, industry reports, and academic papers. I studied thousands of moves by AlphaGo, a state-of-the-art Go-playing AI program that's been among the most surprising advancements of the past few years. Separately from my research for this series, I spent over 1,000 hours developing my own AI-powered stock-picking system (and corollary coffee dependency).

Luckily, I didn't have to do all this from a knowledge base of zero because I had some useful background experience. I've been programming since I was 11, playing Go since I was 19, and analyzing stocks for The Motley Fool for a decade. Knowing something about each of these fields helped me to put all the pieces of the story together.

Here's what I found: AI is magical. And it's not sorcery.

AI relies on extremely clever instructions whose basic principles anyone can grasp. It's only when you see the outcome of this elegance that its grandeur comes into view.

And so, no matter how much or how little you know about computer science, you too can understand what lies behind AI and catch a glimpse of our future.

Next: Artificial Intelligence's "Holy Grail" Victory


Ilan Moscovitz has no position in any of the stocks mentioned. The Motley Fool owns shares of and recommends Netflix. The Motley Fool has a disclosure policy.