Dual-image 3D in CG and Software

Where Dual-Images Can be Anything - Less Technical

When someone creates a CG (computer graphics) movie, a video game, or any sort of visual software, they have to create ‘virtual cameras’ to get the images needed for dual-image 3D presentation.  There are several neat advantages that virtual cameras have over real ones: it’s almost as easy to take two pictures of a simulated environment as it is to take one; simulated images are all in focus automatically (which is important in dual-image 3D because it reduces visual discomfort); 3D environments and the images they generate can change with viewer input; custom dual-images can be created instantly; and software can have direct access to any aspect of the viewing geometry in a 3D scene.
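To make the ‘two pictures are almost as easy as one’ point concrete, here is a rough sketch of how a virtual camera can be split into a stereo pair by offsetting it along its own right vector.  The function name and the 65 mm default separation are illustrative assumptions, not taken from any particular engine or package.

```python
# Sketch: deriving a left/right virtual camera pair from a single camera.
# All names here are illustrative, not from any specific engine.

def stereo_cameras(position, forward, up, separation=0.065):
    """Return (left_eye, right_eye) positions offset along the camera's
    right vector.  65 mm is a common default for human eye separation."""
    # right = forward x up (cross product), assuming unit-length inputs
    right = (
        forward[1] * up[2] - forward[2] * up[1],
        forward[2] * up[0] - forward[0] * up[2],
        forward[0] * up[1] - forward[1] * up[0],
    )
    half = separation / 2.0
    left_eye = tuple(p - half * r for p, r in zip(position, right))
    right_eye = tuple(p + half * r for p, r in zip(position, right))
    return left_eye, right_eye

# A camera at the origin looking down -z with +y up:
left, right = stereo_cameras((0, 0, 0), (0, 0, -1), (0, 1, 0))
```

Rendering the scene once from each of the two positions produces the dual-images; everything else about the scene stays the same.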

All of the above statements apply to dual-image 3D created from software.  It’s important to distinguish this from software that is used to make simulated 3D objects (i.e. 3D modeling and animation software), which has been around for so long that there is a bevy of techniques to create single-image 3D environments on a flat screen.  So what we are really looking for are new ways in which software can directly change dual-images.  The most basic premise is that any CG environment can grab an extra perspective for every situation to “make it look more 3D.”  However, this technique only scratches the surface of 3D technology’s relationship to artificially created dual-images.

The best way to describe CG dual-images is that “they can be whatever we want.”  This is a rather broad generalization, but it’s an important one because it doesn’t limit any aspect of the dual-images in any way.  Our minds already assume that dual-images have to match up to certain standards, but that is a limitation we put on them from the perspective of ‘real life’ images.  We must be diligent not to treat CG dual-images as just a 3D perspective addition to single-image displays (see the article “3D Perspective (less technical)” for a detailed understanding of this addition) if we ever want to draw out their full potential.

This article could end here because its true purpose is just to inform software developers of the ‘lack of limitations’ on CG-based dual-image 3D technology.  However, it is also important to discuss some of the basic ways in which dual-images can be changed, because the concepts can act as a starting point for any number of more complex changes.  There are at least four basic ways to change a set of dual-images beyond merely “showing them”; we’ll use the following four descriptive words to represent these changes: reactive, overlay, abstract, and tracking.

Reactive dual-images change with various inputs.  The inputs could be controllers, program parameters, external software and hardware, or anything else that can be programmed to directly affect the way the dual-images are drawn.  Some of the possibilities for creating reactive dual-images are: changing the ‘3Dness’ of single objects for emphasis or unnatural presentation; changing the distance between the dual-image perspectives to change the depth of the whole image or simulate a viewer changing size; reversing the dual-images to instantly reverse the 3D feel of everything (it’s something you have to see to understand, but you can try it by wearing polarized 3D glasses upside down); and changing one image but not the other to show each eye a specific situation (e.g. bruising, a cracked lens on a pair of glasses, covering one eye with a hand or bandage, eye floaters or halo effects, etc…).
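Two of those possibilities, scaling the perspective separation and reversing the images, can be sketched as a tiny piece of render state.  Everything here (the class, the `depth_scale` input, the string stand-ins for rendered frames) is a hypothetical illustration, not any real engine’s API.

```python
# Sketch of "reactive" dual-image state: hypothetical game inputs scale the
# camera separation (overall depth) or swap the eye images (depth reversal).

class DualImageState:
    def __init__(self, base_separation=0.065):
        self.base_separation = base_separation
        self.depth_scale = 1.0      # 0.0 = flat, >1.0 = exaggerated depth
        self.swap_eyes = False      # True reverses the 3D feel entirely

    @property
    def separation(self):
        return self.base_separation * self.depth_scale

    def render(self, draw_eye):
        """draw_eye(offset) renders one perspective; which image goes to
        which eye depends on the swap flag."""
        half = self.separation / 2.0
        left, right = draw_eye(-half), draw_eye(+half)
        return (right, left) if self.swap_eyes else (left, right)

state = DualImageState()
state.depth_scale = 2.0            # e.g. a "shrink the viewer" power-up
frames = state.render(lambda off: f"image@{off:+.4f}")
```

The point of the sketch is that these effects are one or two lines of state on top of an ordinary stereo renderer; the hard part is deciding when and why to trigger them.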

The next type of dual-image change is to use one image as an overlay for the other (or as an underlay, if you prefer; in this case they end up being the same because neither one is really over or under the other).  This is probably the simplest approach to dual-images because it involves starting out with two identical images rather than one each from two different perspectives.  One image can have some extra information or subtle changes while the other remains unchanged.  This is slightly different from just smooshing two images on top of each other, because each eye of the viewer will have one complete image (so you don’t lose anything like you would with a normal overlay).  A few possibilities for this technique are seeing secret or invisible objects, showing bio information for characters, and creating a subtle emphasis on one image that draws both eyes to a certain part of the screen.  This concept could also work with two different perspective dual-images, but then the changes to one image wouldn’t perfectly overlap, so it could become confusing.
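A minimal sketch of the overlay idea, assuming a frame is just a list of drawable items and the overlay routine (`draw_health_bars` here) is a hypothetical stand-in for real compositing:

```python
# Sketch of overlay dual-images: start from one rendered frame, give each
# eye its own copy, and draw extra information into only one of them.

def overlay_dual_images(frame, overlay_fn):
    """Both eyes see the full scene; only the right eye gets the overlay,
    so nothing in the base image is lost for the viewer."""
    left = list(frame)              # identical copy for the left eye
    right = list(frame)
    overlay_fn(right)               # e.g. secret objects, bio information
    return left, right

def draw_health_bars(pixels):
    pixels[0] = "HP"                # toy stand-in for a real overlay pass

left, right = overlay_dual_images(["sky", "tree", "enemy"], draw_health_bars)
```

Because the left eye still receives the untouched frame, the brain keeps the complete scene while the overlay registers as a kind of translucent extra layer.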

The next way to change dual-images is to make abstract objects look 3D.  This mostly refers to taking a 2D image from something like a computer operating system, and then giving the objects different apparent depths via dual-images.  Since the objects aren’t really 3D to begin with, there is no ‘correct’ depth for them, but a clever programmer could give different components useful emphasis through appropriately chosen relative depths.  The concept of abstract dual-images has a variety of uses, such as another way to create emphasis and de-emphasis, creating graphs and charts with depth representing actual numbers, and adding more complex geometry to shapes to give them more personality (e.g. a curled up corner on something that can be picked up and moved, a general screen-distorting effect that makes a 2D display appear to be warped, a well or curved ‘black hole’ for deleting items).
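The basic mechanism behind abstract depth is simply shifting each flat element horizontally in opposite directions for the two eyes.  The sketch below assumes positions in pixels and uses made-up element names and disparity values purely for illustration:

```python
# Sketch: giving flat UI elements apparent depth by shifting each element
# horizontally in opposite directions for the two eyes.

def place_element(x, disparity):
    """Positive disparity pushes an element behind the screen plane,
    negative pulls it out in front.  Returns (left_eye_x, right_eye_x)."""
    half = disparity / 2.0
    return x - half, x + half

# A desktop where the wallpaper sits behind the screen plane and a dragged
# icon floats in front of it:
layout = {
    "wallpaper": place_element(100, +8),   # recedes behind the screen
    "window":    place_element(100,  0),   # sits on the screen plane
    "cursor":    place_element(100, -8),   # pops out toward the viewer
}
```

Since there is no ‘correct’ depth for abstract objects, the disparity values are free parameters; the programmer chooses them for emphasis rather than realism.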

Tracking is the final category for creating custom dual-images.  Tracking means measuring the position of a viewer’s head and eyes so that the dual-images can show them what they should see at new positions.  The idea is already used with some virtual reality goggles, where internal motion sensors adjust what the user sees as if they are looking around a real CG environment.  A broader approach for all types of dual-image 3D displays is to follow a single distant viewer and change the on-screen images to match their immediate location.  There are a few problems with tracking dual-images, notably that the required tracking accuracy has to be very high and that only one person can be tracked for distant dual-images (although it could work for two people if they were each given a single image with ‘2D glasses’ that filtered out the other image, or if they were at divergent positions from a glasses-free display; both techniques could also provide a useful alternative to picture-in-picture technology).
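At its simplest, tracking maps the measured head position directly onto the virtual camera pair, so the screen behaves like a window into the CG environment.  In the sketch below, `head_position` would come from a real tracker; here it is just a plain tuple, and the coordinate convention (meters, relative to the screen center) is an assumption for illustration.

```python
# Sketch of viewer-tracked dual-images: the virtual camera pair follows the
# measured head position one-to-one.

def tracked_cameras(head_position, separation=0.065):
    """Map the tracked head position (meters, relative to screen center)
    straight onto the virtual camera pair."""
    x, y, z = head_position
    half = separation / 2.0
    return (x - half, y, z), (x + half, y, z)

# Viewer leans 0.2 m to the right of the screen's center line:
left, right = tracked_cameras((0.2, 0.0, 0.6))
```

The required accuracy is high precisely because this mapping is one-to-one: any error between the tracked and actual head position shows up directly as the wrong perspective on screen.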

Motion sensitive controllers and general motion tracking technology could also be considered a part of the dual-image tracking category, but only if the motions are tracked one-to-one and used for direct interaction with the apparent 3D environment.  The reason that this concept is limited here is because any other use of motion tracking can just be an arbitrary input for reactive dual-images.  And the difference between exact and arbitrary is the difference between simulating an environment and showing an environment.

One of the reasons why dual-image specific changes are not present in modern video games and software is simply because developers are focused on making the single-image versions of their products.  Once they have completed the single-image versions, additional dual-image 3D effects are merely ‘tacked on’.  This allows them to minimize additional cost and to account for the lack of dual-image 3D display technology available to most of their users, while still providing the basic utilities for upgrading the 3D perspective of an environment.  So the art of creating unique dual-images will only grow once developers start to see that dual-image 3D in CG and software presents a practically untapped branch of 3D technology.

4/26/11

Change is Silver

Author of “How to Make a Holodeck” (5Deck.com)
~A book about infinite-perspective imagery and various ways in which it can be approximated.
Creator of Unili arT (UniliarT.com)
~Creative graphic designs across an array of products, such as t-shirts, mugs, and teddy bears.

Artificial Dual-Images - More Technical

Two cameras can be used to create dual-images for a 3D effect, but that won’t work for imagined scenery.  3D simulation or 3D CG (computer graphics) is a major component of modern dual-image 3D technology for several reasons: two images can be pulled from a CG environment without much more effort than one; focal blur can be eliminated so viewers can look at any part of a screen; 3D environments can be made interactive; custom dual-images can be created on the fly; and simulated environments can mathematically account for changing viewing geometry.

The above statements uniquely apply to dual-image 3D presentations, not to 3D rendering software in general.  3D rendering software has been around for decades, but its application in presenting each eye with a unique view has only recently resurfaced with the increased interest in 3D TV.  A lot of the benefits of CG environments come from their ability to act as programs as well as raw image generators.  It seems like a lot of this capability comes from the no-brainer standpoint of “I have a 3D environment, I can just take two images and I’m done.”  This works for games, applications, and even pre-rendered CG movies (assuming that the original CG environments are still available to be reprocessed), but that is only the simplest extension into dual-image 3D.

The best description for CG dual-images is that “they can be anything.”  It’s a very general statement, but it’s actually the most useful description possible because it doesn’t limit dual-images in any way.  It doesn’t say they can only be created from 3D environments.  It doesn’t say that there is only one way to pull two images from a CG space.  It doesn’t say that those images can’t be manipulated or altered.  What it does say is that any possible use of two images split between a viewer’s eyes is fair game.  Although it’s nice when dual-images match up to give our brains the extra oomph of depth, even that is not a rule.

Although the discussion of artificial dual-images could end here and be essentially complete, let’s consider more specific ways in which unrestricted dual-images could be used.  The first thing to do is firmly establish the difference between 3D CG and dual-image 3D.  “What is 3D? (More Technical)” uses the concepts of Old 3D and New 3D to differentiate the 3D already present in flat images from the simultaneous presentation of two different perspectives.  These concepts will carry over to 3D CG as “Old 3D CG,” which refers to the actual 3D CG environment and its presentation as a single image on a flat screen, and “New 3D CG,” which refers to the presentation of dual-images via CG and software.  We’ll consider four broad categories of New 3D CG: reactive dual-images, overlay dual-images, arbitrary 3D dual-images, and viewer tracking dual-images.

Reactive dual-images are 3D images that change based on one or more inputs.  The most obvious example is a dual-image 3D video game, but that’s only the bare minimum of what would be considered New 3D CG.  As soon as the dual-images change in a different way than a single-image would, and in direct relation to the game content and user inputs, then we’re cooking with reactive dual-images.  Some examples would be creating artificial 3D effects on certain objects (e.g. for emphasis, for an unnatural feeling of depth), altering the camera separation distance (e.g. to change the instantaneous depth, to simulate a change in the size of the viewer), switching the images to create an instantaneous reversal in depth, and using image filters on each dual-image uniquely for different situations (e.g. rain over one eye, damage to one eye, wearing a monocle).

The next category of New 3D CG is overlay dual-images.  The idea is relatively simple.  Start with two identical images, then change one or both of them.  This effect does not create a sense of 3D depth, but instead adds extra information to a scene without obfuscating the original view.  The overlay could be thermal imaging, text and data on top of various objects, a wireframe depth analyzer, or anything else a software designer could conceive.  It seems like you could just create images like that by mushing them together on a single-image display, but that doesn’t take advantage of the fact that we have two images to work with, and so it wouldn’t apply to New 3D CG.  This technique can work with two normal dual-images (instead of one duplicated image), but the artificial changes on top of the perspective difference may be too confusing for most viewers.

The third New 3D CG category is arbitrary 3D dual-images.  This means that anything which would not normally be considered 3D can be presented as such via the added sense of depth from dual-images.  If you look at a computer screen, you can see various shadings on icons and windows that give the display a sort of ‘3Dish’ feel.  But this never really generates a 3D effect akin to Old 3D CG.  However, if dual-images are used to create 3D effects, the difference can be remarkable.  With just a small change between dual-images, a strong sense of depth can be created with abstract objects.  A few example uses of this technique are pop-up text from a page, multi-layered windows on a computer, and simulations of complex 3D topography (e.g. a wrinkle on a photograph, warps into the computer screen, hollowed out areas for placing items in a shopping cart).
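How small that change needs to be follows from the standard viewing geometry: by similar triangles, an object meant to appear at distance D from a viewer whose eyes are separated by e, looking at a screen at distance V, needs an on-screen disparity of d = e(D − V)/D.  The sketch below just encodes that relationship; the numeric viewing distances are illustrative.

```python
# Sketch of the viewing geometry behind arbitrary 3D dual-images.
# d = e * (D - V) / D, where e = eye separation, V = distance to the
# screen, and D = the distance at which the object should appear.

def screen_disparity(eye_sep, view_dist, target_dist):
    """Horizontal on-screen separation between the two eye images
    (positive = uncrossed, behind the screen; negative = in front)."""
    return eye_sep * (target_dist - view_dist) / target_dist

e, V = 0.065, 0.6                         # meters
on_plane = screen_disparity(e, V, 0.6)    # object at the screen: d = 0
behind   = screen_disparity(e, V, 1.2)    # behind the screen: d > 0
in_front = screen_disparity(e, V, 0.3)    # in front: d < 0 (crossed)
```

Note that the disparity can never exceed the eye separation itself (the D → ∞ limit), which is why even a strong depth effect requires only a few millimeters of on-screen difference.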

The last category of New 3D CG is for viewer tracking dual-images.  Any of the New 3D CG effects mentioned so far could use eye tracking devices to determine the position and viewing angle of an individual’s gaze, then alter the perspective origin of the dual-images accordingly.  Some virtual reality goggles already use this technique to simulate motion through an artificial environment (the dual-images move with the viewer, so they can effectively replace real vision for any position and orientation).

An extremely advanced viewer tracking device could also pinpoint the exact location of a gaze (rather than just the relative viewer position and angle) and use it to adjust the depth focus onto the screen at that area (changing the zoom and viewing position simultaneously until the object in focus is apparently at the depth of the screen), or to create a depth sensitive ‘eye-mouse’ cursor for 3D selection purposes (i.e. the cursor would move forward or backwards with focal distance).
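The depth-sensitive ‘eye-mouse’ amounts to triangulating the fixation point from the two eyes’ gaze angles.  The sketch below works in the horizontal plane only and assumes a hypothetical tracker that reports each eye’s angle from straight ahead (positive toward the viewer’s right); real eye trackers report gaze in their own formats.

```python
# Sketch of a depth-sensitive 'eye-mouse': triangulate the fixation point
# from the two eyes' horizontal gaze angles.

import math

def fixation_depth(eye_sep, left_angle, right_angle):
    """Intersect the two gaze rays in the horizontal plane and return the
    distance to the fixation point (None if the gazes are parallel)."""
    # Ray x-positions as a function of depth z:
    #   left eye:  x = -eye_sep/2 + z * tan(left_angle)
    #   right eye: x = +eye_sep/2 + z * tan(right_angle)
    tl, tr = math.tan(left_angle), math.tan(right_angle)
    if tl == tr:
        return None                  # parallel gaze: focused at infinity
    return eye_sep / (tl - tr)

# Eyes 65 mm apart, each rotated ~3.1 degrees inward:
depth = fixation_depth(0.065, math.radians(3.1), math.radians(-3.1))
```

The returned depth is what would drive the cursor forward and backward; the tiny angles involved are one reason the tracking accuracy has to be so high.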

Viewer tracking dual-images is mostly just a reference to “where the viewer is looking,” but it can also encompass motion tracking if those motions are mapped exactly to the perspective and position of the artificial 3D world.  The reason that it only covers motion tracking to this degree is because any situation that arbitrarily assigns motions to dual-image changes falls under the category of reactive dual-images.  In the case of an exact overlap, the viewer tracking dual-images will react and change when the viewer believes they are in direct contact with the images.  This could work for real objects in a game simulation or abstract objects in a software package (like literally ‘dragging and dropping’ folder icons).

There are endless other ways in which dual-images can be retooled in a simulated environment, and that is the whole point of “New 3D CG.”  Although many people may view New 3D CG as a 3D perspective addition to Old 3D CG (the same way New 3D adds to Old 3D as described in the article “3D Perspective (More Technical)”), it offers a whole new realm of possibilities that are unavailable to non-CG dual-images.  Tapping into those possibilities is within reach for every 3D CG developer, so hopefully some of the more creative approaches will pop up for video games, handheld devices, and personal computers in the near future.

Note: In the article above, I used the term “3D” for three “spatial dimensions.”  In “How to Make a Holodeck” the term has time as a possible dimension, so it’s not always the same thing.  I prefer the term 4D for most instances where people use 3D (i.e. for 3D images that change with time), but I still use 3D with its more general meaning for quicker comprehension.
