Microsoft (Nasdaq: MSFT) recently applied for a body scan patent that largely describes Kinect sensor technology, but also touches on the potential next generation of Kinect and on a step beyond Kinect avatars: surrogates.

We were a bit surprised to find Microsoft's "Body Scan" patent application (20110109724) in the most recent batch of U.S. PTO patent applications. Filed on Jan. 28, this patent comes well after all relevant Kinect patents and, even on its own, would arrive rather late to be the core Kinect patent. That said, the patent offers few surprises and mainly describes a device capable of detecting human video game players using at least one 3D camera. The abstract reads:

A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
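The detection steps in the abstract can be sketched in code. This is a minimal illustration, not Microsoft's implementation: the tolerance, the background value of 0, and the crude "taller than it is wide" pattern check are all assumptions standing in for the patent's body-model comparison.

```python
from collections import deque

def flood_fill_targets(depth, tolerance=2):
    """Group neighboring pixels with similar depth into candidate targets."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    targets = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or depth[r][c] == 0:  # 0 = no depth reading (assumed)
                continue
            target, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:  # breadth-first flood fill over similar-depth neighbors
                y, x = queue.popleft()
                target.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols and not seen[ny][nx]
                            and depth[ny][nx] != 0
                            and abs(depth[ny][nx] - depth[y][x]) <= tolerance):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            targets.append(target)
    return targets

def looks_human(target, min_height=3, min_aspect=1.5):
    """Toy stand-in for the body-model pattern: humans are tall and narrow."""
    ys = [y for y, _ in target]
    xs = [x for _, x in target]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return height >= min_height and height / width >= min_aspect
```

In the patent, a target that passes the pattern check would then be scanned to generate the skeletal model; here the check simply flags which flood-filled blobs are human-shaped.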

The background description of the patent filing refers to Microsoft's overall claim that natural movements are easier for users than learning the features of a game controller. However, there is one significant difference. This particular patent does not describe a user building an avatar to be represented on the screen. It describes a technology that actually scans a gamer's body to automatically create an avatar -- which we would then call a surrogate, if we take a cue from the 2009 movie Surrogates.

It is especially noteworthy that the patent discusses a virtual body that matches the actual body in certain criteria: Claim 20 of the patent states that "the first processor determines the human target associated with the user in the depth image by flood filling each target in the scene and comparing each flood filled target with a pattern of a body model of a human to determine whether each flood filled target matches the pattern of the body model of the human."

The system is also capable of recognizing objects the actual user may be using during the game process:

In such embodiments, the user of an electronic game may be holding the object such that the motions of the player and the object may be used to adjust and/or control parameters of the game. For example, the motion of a player holding a racket may be tracked and utilized for controlling an on-screen racket in an electronic sports game. In another example embodiment, the motion of a player holding an object may be tracked and utilized for controlling an on-screen weapon in an electronic combat game.
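One way a game could use a tracked object's motion, as the excerpt describes, is to derive a swing from successive 3D positions of the held racket. This is purely an illustrative sketch; the sampling rate, the `max_speed` normalization, and the 0..1 strength mapping are invented here, not taken from the patent.

```python
def swing_speed(samples, dt):
    """Average speed of a tracked object over successive (x, y, z) samples."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(samples, samples[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total / (dt * (len(samples) - 1))

def on_screen_swing(samples, dt, max_speed=10.0):
    """Map tracked racket speed to a 0..1 swing strength for the game."""
    return min(swing_speed(samples, dt) / max_speed, 1.0)
```

A combat game could apply the same idea to a held prop standing in for a weapon, mapping its tracked orientation and speed to the on-screen weapon instead.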

Recognizing a device is not that revolutionary by itself, but imagine what it could do for Kinect: The camera could finally recognize the exact location and direction of a device, similar to what Sony's PS3 Move controllers can do, and the result would be much greater control of sports games, for example. Also, imagine the branding opportunities if you could hold a very specific tennis racket instead of a generic model. In the future, your teenagers may not want just any gun to play a video game; they may want a very specific model. Imagine all the additional sales video-game developers could achieve.

Much of the player rendering appears to be about flood-filling virtual bodies, but it also covers body shapes (which most of us would still want to modify in game environments anyway):

In another embodiment, to determine the location of the shoulders, the bitmask may be parsed downward a certain distance from the head. For example, the top of the bitmask that may be associated with the top of the head may have an X value associated therewith. A stored value associated with the typical distance from the top of the head to the top of the shoulders of a human body may then be added to the X value of the top of the head to determine the X value of the shoulders.
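The shoulder-location step above is simple enough to sketch. The filing calls the downward-parsing coordinate the "X value"; the code below uses the row index of the bitmask for the same axis. The stored head-to-shoulder distance of 4 pixels is purely illustrative -- the patent gives no number.

```python
def top_of_head(bitmask):
    """Row index of the first row containing any target pixel (the head top)."""
    for y, row in enumerate(bitmask):
        if any(row):
            return y
    return None

# Assumed stored value: typical head-to-shoulder distance in pixels at this
# scale. The patent describes such a stored constant but does not specify one.
HEAD_TO_SHOULDER_PX = 4

def shoulder_row(bitmask):
    """Parse downward from the head top by the stored offset, per the filing."""
    head = top_of_head(bitmask)
    return None if head is None else head + HEAD_TO_SHOULDER_PX
```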

The result? We are clearly on a path to project ourselves into virtual environments, beyond simple avatars and beyond the avatars we create today as Miis, predefined players, or Kinect avatars that let us resemble the look we desire in a cartoonish way. A next-generation Kinect with much more powerful sensors, cameras, processors, and graphics engines could fulfill the quest for ultimate realism in video games -- a quest we have pursued with artificial and imaginary characters over the past two decades. In the not-too-distant future, you may be able to see yourself on the video screen, exploring and acting in a virtual world. You could call yourself a surrogate, living in the Matrix. Scary? Possibly. But exciting nevertheless.




This article represents the opinion of the writer, who may disagree with the “official” recommendation position of a Motley Fool premium advisory service. We’re motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.