3D Art on a 2D Surface - Less Technical


Dual-image 3D may seem like a simple extension of ‘single-image’ 3D (showing 3D perspective on a flat image), but it can be its own form of art.  To truly understand how this transition works, it will help to start with single-image 3D in each of its basic forms: recording light from real life (photography and video), recording calculated light in a computer simulation (CG imaging and animation), and drawing 3D images from scratch (graphic design).

The first two ways to create single-image 3D (taking real pictures and capturing simulated light) are almost identical.  A camera captures light from objects in an environment on a small, flat recording surface.  A simulated environment traces virtual object light onto a simulated recording surface.  These processes are already familiar to everyone in the form of the human eye, which has the sole purpose of turning 3D environments into 2D images.

If you think about the 3D world around you in a very general way, it’s actually not too hard to understand.  Everything has a certain position and you can measure the distances between everything.  However, the light in an environment is a little harder to describe.  Every point on every object in a well-lit room emits light in every direction; a person standing in that room receives whatever portion of that light happens to reach their eyes.  This is easy enough to follow, but if you delve deeply enough into this concept you’ll find that there is a gap between how light hits an eye and how an eye perceives light.

Let’s say there is a single point of light on an object and there is a human eye viewing that object from a short distance away (hopefully with a person attached).  When the light from the point spreads out in every direction, it will cover the entire pupil of that eye.  This means that every point on every object will send light to every point inside your eye (or at least all the points it can access through your pupil).  The logical result of this is a bunch of overlapping color blurs.  This is where the anatomy of the human eye comes into play.  Just behind the pupil sits a shapeable lens that bends all of the light from each point back into a point on the inside of the eye (i.e. the retina, which is where vision really occurs).  This means that a point looks like a point again instead of a blob.  Unfortunately, any given lens shape can only focus light from certain distances, so the human eye can only focus at one distance at a time.

The overall effect of each point of light hitting the back of an eye is a little ‘painting’ of light we call vision.  Each aspect of 3D Perspective discussed in “3D Perspective (Less Technical)” occurs in these little paintings of light.  The fact that the same type of 3D perspective occurs in TV screens and in eyeballs means that cameras and eyes are very similar.  Cameras and eyes both have lenses, they both can only focus on a certain range at a given time, and they both collect light on a flat, internal surface.

The biggest difference between a camera and the human eye is what happens after the light is collected.  An eye is connected to a brain, which immediately analyzes the images in the process we know as the sense of sight (you are really seeing the images in your brain, not your eyes, despite how it feels when you look around).  A camera can be connected to a ‘brain’ of sorts, like a small computer, but usually all the computer does is save the image.  The camera image can then be recalled on a flat screen or in a printed photograph, but doing so is already quite different from the sense of sight.  Your eyes don’t see screens and pictures as complete paintings of light, but as flat objects that resemble them.

What does this all mean in terms of making 3D art on a 2D surface?  It means that there are at least two parts to consider for every single 3D image.  The first is how the light is captured and the second is how it is reproduced and displayed for human eyes.  If the end result looks too much like an ‘object,’ the viewer will feel disconnected from the image. 

Now that we have a good grasp of how light is captured and viewed, let’s return to the three main ways to create single-image 3D.  They are photography and video, CG imaging and animation, and graphic design.

Photography is the art of collecting light from a real environment.  It is a very complex subject, so we’ll just focus on a few important points that relate to dual-image 3D.  A single camera can be used once to capture single-image 3D or it can be used twice to capture dual-image 3D.  That means a photographer can take one picture, move a little, and then take another to get the second perspective required for dual-images.  This technique is nice because it doesn’t require a special camera, but it really only works with unchanging scenery because of the time lag between shots.  There are a few specific factors to consider when creating dual-images with a single camera, including the distance between shots, the angle of each shot (which should differ just a little, if at all), and the time between shots.

3D cameras present another option for creating dual-images.  A 3D camera can be two separate lenses and recording surfaces on a single camera or it can be two cameras linked together by an actuating mechanism.  The best thing about 3D cameras is that the images they take are synced in time.  If an object moves, both lenses still capture it in the same position.  This means that 3D cameras are the only way to make dual-image 3D videos (a video is a sequence of images, so a dual-image 3D video is a sequence of dual-images), because taking a video invariably captures motion or changes in scenery.

The biggest concern for any dual-image photographer should be the comfort of the viewer.  If two amazing pictures come together improperly, the resulting 3D effect won’t be amazing at all.  Part of the result is the technique used to display the images, but it’s mostly just a matter of skill and practice in taking good dual-images (there are a variety of rules of thumb dedicated to this purpose).

The second method for creating single-image 3D is through a simulated (computer) environment.  A 3D CG model can be created with a software package.  Typically this is done by defining various 3D shapes and textures that the computer can break down into polygons.  Polygons are like little pieces of textured paper in that the 3D environment sees them as flat objects.  These simulated flat objects can emit light and be recorded just like real objects with a real camera.  So all the program needs is a point of view and a surface for mapping light (just like the pupil and retina in the human eye).  A CG designer can add other special effects to the way the light is recorded, but the end result will always be a single flat image.

The process of grabbing a second image from a CG model is as simple as adding a second viewing point and surface to map a second set of light.  There are also a number of things that a computer can do with dual-images beyond just creating them, some of which are described in the article “Dual-Image 3D in CG and Software (Less Technical).”

The third and final way to create single-image 3D is to make it from scratch as a graphic design.  The idea is that the presentation of a 3D environment has to obey certain geometric rules and that a skilled artist can work backwards from those rules to create the appearance of a 3D environment.  Graphic designs can be made with real art supplies or software, resulting in endless ways to create shapes on a flat surface.  However, the only thing we are interested in for this article is the capacity to extend single-images into dual-images.  Unfortunately, doing so directly is not very easy with graphic designs.  Consider that an artist already has to be talented to mimic a 3D environment, so they will have to be doubly talented to create matching perspectives of that environment.

A more useful application of graphic design with respect to dual-image 3D is that its techniques can be applied to dual-image photographs and CG images.  When a set of dual-images is created (by any of the three means described so far) and saved as two picture files, those pictures can be treated like raw graphic designs.  Filters, overlays, and geometric manipulations can be applied to the pictures as if they didn’t come from a 3D environment.  This can create neat effects like screen ripples, contrast enhancement, and cartoon outlines (note that some of these effects can be pre-conceived in a CG modeling environment, but it’s usually easier to make them in a graphic design).

Once we’ve made the dual-images we want, we still have to present them.  In the case of videos and animations, our choices of presentation are limited to devices that can show changing imagery.  The article “Stereoscopy or ‘Dual-Images’ (Less Technical)” describes many of the displays used for this purpose.  In this case we are more interested in the techniques used to show printed pictures.

Dual-images can be combined into one wider image in side-by-side format so that they can be saved as a single file.  Each image can also be saved individually, but the combined format forces the files to remain connected no matter where they go, and it can give dual-image processing software a cue to its 3D nature.

Side-by-side dual-images can be printed as is, but it’s important to remember which image is for the left eye and which is for the right.  If the images are on the correct side for each eye, a person can ‘uncross’ their eyes to align them or they can view them through a stereoscope (stereoscopes isolate side-by-side images).  If the images are on the opposite sides from the intended eyes, the viewer will have to cross their eyes or wear prismatic glasses to view them.

Dual-images can also be cut into vertical strips and then spliced together to create a single image that works with lenticular lenses or parallax barriers.  This technique works well because viewers don’t have to strain their eyes or use special equipment to view the 3D effect, but it’s also more expensive (due to the cost of the overlay and the process of aligning and bonding it to the image).  As a final option, dual-images can be colorized and merged into a single image for viewing with red-cyan glasses.  Spliced or merged dual-images can be saved into a single picture file, but it’s usually best to keep the original images in side-by-side format and let software do the rest of the work (so that the original images can be recalled for any other printing technique).

There is one more stand-alone topic we should consider: converting a single 3D image into dual-images without access to the original environment.  When only one perspective image is created, the appearance of dual-images can only be guessed.  The easiest way to do this is to figure out which objects are closest to the camera and shift them horizontally between the two images, with nearer objects shifted more.  This process does create a sense of depth, but it looks somewhat unnatural (kind of like a series of flat pictures at different depths).  A more complex and more accurate technique is to create a complete 3D CG environment of the single-image scenery.  The simulation can guess at the shading and coloring of obscured areas and then can skew and modify the original image to account for continuous changes in depth and altered viewing angles.

The creative and artistic applications of dual-image 3D can work for more than just two images (“The Ultimate Future of 3D (More Technical)” has more details on multi-perspective displays).  “How to Make a Holodeck” describes a device called a “Cameramarray” which takes a whole sheet’s worth of pictures (similar devices already exist).  Those pictures can then be displayed on a 3V (3D Visualization System) or 4V (4D Visualization System) to recreate an entire environment.  The array of necessary pictures can also be created in a CG modeling environment, and whether real or simulated, they can all be modified as individual graphic designs.  Dealing with so many images would be demanding, but techniques described in “How to Make a Holodeck” (such as tiered parallel processing) have the potential to push the concepts of dual-image 3D art into an entire environment’s worth of light.

4/29/11

Change is Silver

Author of “How to Make a Holodeck” (5Deck.com)
~A silly technical manual describes serious ways to capture and recreate 3D and 4D light fields.
Creator of Unili arT (UniliarT.com)
~Funny images and random messages may hold deeper meaning across a wide selection of products.

Reference: Wikipedia’s Page on Stereoscopy

3D Art on a 2D Surface - More Technical


The field of dual-image 3D (stereoscopy) can be viewed as a relatively boring doubling of existing art forms, but it can also be seen as a new type of art in itself.  The latter interpretation obviously has more sway with those people who take it seriously, so let’s consider some of the ways in which dual-image 3D can be cultivated as its own form of artistic expression.

First, it’s very important to understand the different ways to create a single image with 3D perspective, because creating dual-image 3D only builds on them.  There are three main ways to produce ‘single-image’ 3D: capturing real light from a real environment, capturing simulated light from a simulated environment, and creating a 3D image without any type of base environment.  These three concepts can be reworded as photography and video, CG imaging and animation, and graphic design.

Taking pictures in a real or simulated environment involves projecting light from objects onto a flat surface.  Whether the light is real or just 1’s and 0’s doesn’t matter.  The basic technique used to capture both types of projection is available to every human in the form of eyeballs.  The function of the human eye is to turn light projected from a 3D environment into a 2D image.

The idea of objects in a 3D environment is not hard to comprehend.  “Things are where they are.”  The relative position and speed of every object is measurable.  Although this is the essence of the physical side of a 3D environment, the light bouncing around it is a little more complex.  When light from the sun, lamps, or other lighting devices bounces off of the objects in an environment, it goes every which way.  A human eye only receives a tiny fraction of the light in a given environment, so it seems like the projection of light into the eye should be simple to trace.  However, the eye works in kind of a roundabout way.

Every point of every object sends out light in every direction (except inward, of course).  This means that every point of light spreads out and is no longer a point.  Unfortunately, the rods and cones in our retinas cannot distinguish direction individually, so if they are exposed directly to the spread-out light from every point in a space, they will all see all the light from every point and it will just be a mess of light.

Human eyes account for undirected light via a lens.  When a single point from a single object sends light throughout the entire area of one of our pupils, the shape-changing lens just behind the pupil bends all that light back together so that it hits our retina as a single point.  Only a small set of rods and cones gets hit by the pinpointed light, so it doesn’t matter what angle it came from.  That is, what was a point to begin with is a point once again.  Unfortunately, the way lenses work means that only similarly distant points of light can be focused properly.  The result is that we can only see clearly at a certain range (i.e. a focal range).
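For readers who want the single formula behind this behavior, it is the standard thin-lens equation (an idealization, but it shows why only one distance can be in focus at a time):

\[ \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f} \]

Here \(d_o\) is the distance from the lens to the point of light, \(d_i\) is the distance from the lens to where that light reconverges, and \(f\) is the focal length of the lens.  In the eye, \(d_i\) is fixed because the retina doesn’t move, so the lens changes shape to change \(f\); light from any point whose distance doesn’t satisfy the equation reconverges in front of or behind the retina and registers as a blur.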

So now we know how light hits the retina on the curved back inside of our eyes.  Each point of a real object covers a small area of the retina and the total effect of every point is like a little painting of light.  Focusing doesn’t undo the crossing of light rays through the pupil, though, so light from each point lands on the part of the retina opposite the center of the pupil.  This means that the images are upside down on your eye; but that’s okay, because your brain expects that and turns the images right side up for you.

The exact pattern that light paints onto your retina is the premise behind all 2D images that have 3D perspective.  Each of the concepts from “3D Perspective (More Technical)” occurs in your eye just as it does on a TV screen.  The reason for the similarity in images is that a camera acts just like an eyeball.  A single camera has a lens that focuses at different distances, just like the lenses in our eyes; and the light that passes through a camera hits a surface that acts just like a retina (a “CCD” typically), with inverted imaging and everything.  This means that pictures from a camera are really just the paintings of light an eye would see in its place.  The biggest difference after that is that the camera doesn’t have a brain, so the photos need to be printed or displayed on a screen to become visible.

Now comes the tricky part.  When the photos taken from a camera are displayed, they become new ‘objects’.  Unlike the images from your two eyes, flat photos and screens are new flat, colored shapes that your brain has to process twice (as an object and as a ‘view’ akin to what your eyes see all the time).  Even if you stood exactly where the camera had been and looked at a huge printed photo of the exact view it captured, you could still tell the difference from any position, because the photo is no longer an integral part of your vision.  It is now a flat object that changes shape at an angle, has a limited focal range, and has unrecoverable focal information outside of that range.

This is all just a really long way of saying that 3D art on a 2D surface is not just a matter of capturing interesting imagery, but presenting that imagery to human eyes in an appealing way.  With that premise in mind we can return to the three basic ways to capture single-image 3D.  Then we can consider how those processes can be extended into dual-image 3D and appealing presentations.

Cameras collect light in lieu of an eye and then digitize it (unless it is an analog camera and is going straight to a film processing lab) so it can be treated as a flat image.  Mastering this process is the art of photography, but extending it to dual-images can be its own form of art.  There are two basic ways to create dual-images in photography.  The first is to use one camera to take two pictures at different positions and times.  The second is to use two separated cameras to take two pictures at the same time.

Using one camera has the significant advantage that single-image cameras are readily available.  A photographer using one camera can take one picture, then move a little and take another.  The amount the person moves (typically the distance between human eyes, more for distant scenery, and less for close-ups), whether they deviate the picture angles (typically to capture the same objects in both pictures), and the time between each picture (a longer setup is more accurate, but things can change between images) are all matters of learned technique.  There are certain rules of thumb, and each different choice can result in a different effect when the photos are printed or displayed.
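To make one of those rules of thumb concrete, here is a minimal sketch in Python of the widely quoted “1/30 rule” for choosing the distance between the two shots.  The function name is made up for illustration, and the divisor is only a starting point that photographers adjust by taste:

    def stereo_baseline(nearest_subject_m, divisor=30.0):
        """Rule-of-thumb spacing between the two camera positions, in meters.

        The '1/30 rule': the baseline should be roughly 1/30 of the
        distance to the nearest subject in the scene.  For a subject 2 m
        away this gives about 6.7 cm, close to human eye spacing;
        distant scenery calls for a much wider baseline.
        """
        return nearest_subject_m / divisor

    print(stereo_baseline(2.0))    # ~0.067 m for a close subject
    print(stereo_baseline(300.0))  # 10 m for a mountain landscape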

The other technique for taking 3D photos is to use a “3D camera” or two cameras that can be synced reliably.  The significant advantage to this technique is that there is no time lapse between the photos so moving objects can be accurately photographed.  This is also the only technique that reliably extends into taking videos.  Videos are really just photographs taken in sequence, so the need for two cameras acting simultaneously is impossible to ignore.

The main concern for dual-images in photography is the comfort of the viewer.  Any two pictures can be paired as dual-images, but only two consistently taken images presented in the proper way can create an effective 3D photograph.

The next way to create single-image 3D is to use a computer to model a 3D environment.  Collecting virtual light from a simulated environment is practically identical to the process of taking a picture.  A 3D CG environment is usually made of tiny, flat, textured polygons (graphic designers rarely create the polygons directly, but use 3D design packages to automatically generate them from larger shapes).  When lots of polygons are placed together they can create almost any shape, with more polygons resulting in smoother and finer shapes.  Each tiny polygon acts just like an object in real life in that the computer sends calculated light rays out in every direction from every point on its surface (sometimes individual light rays aren’t calculated per se, but any standard technique for creating a view from a 3D CG environment simulates their effect).

Once an entire virtual environment has been modeled, programmers and artists can add additional rules to control the behavior of virtual light.  They can make things like shadows, smoke, and mirrors act exactly how they do in real life; or they can turn concepts like energy beams, auras, and space-time ripples into their own set of light-bending rules.  Once the environment has been created, the final step for creating a flat 3D image is to define a point where the light is viewed from (like a pupil or camera lens) and a flat frame for light to be recorded upon (like a retina or CCD).  This can be done for animated scenes just as easily as still ones.  The end result is one or more real images from an artificial environment.

The process of grabbing a single image from a CG model or animation can jump to dual-image 3D by adding a second point and frame to collect virtual light.  The article “Dual-Image 3D in CG and Software (More Technical)” describes a number of ways in which this form of light collection can be enhanced beyond real life photography and video.
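As a minimal sketch of that second point and frame, here is one way to turn a single virtual camera into a left/right pair by sliding it along its own “right” axis.  The function and parameter names are illustrative rather than from any particular rendering package, and production renderers typically also adjust the view frustums for each eye:

    import numpy as np

    def stereo_eye_positions(cam_pos, look_at, up, eye_separation=0.065):
        # Both eyes keep the camera's orientation; each is shifted half
        # the eye separation along the camera's "right" vector.
        cam_pos, look_at, up = map(np.asarray, (cam_pos, look_at, up))
        forward = look_at - cam_pos
        forward = forward / np.linalg.norm(forward)
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        half = (eye_separation / 2.0) * right
        return cam_pos - half, cam_pos + half  # (left eye, right eye)

    # Example: a camera at head height, 2 m back from the origin.
    left_eye, right_eye = stereo_eye_positions(
        [0.0, 1.6, 2.0], [0.0, 1.6, 0.0], [0.0, 1.0, 0.0])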

The third way to create a single 3D image is to create it as a graphic design, or “to make an image without an environment.”  It is the process of using artistic skills, historically proven techniques, and software packages to create the impression of 3D perspective without having access to the exact geometry of the environment being presented.  This is a massive topic far beyond the reach of this article, but it has two important links to the general process of creating 3D.  The first link is that two individual graphic designs can be drawn as a matched pair to get a close approximation of dual-images (although it’s tricky and requires great skill).  The second link is that the images created from real cameras and CG imaging/animation can be treated as graphic designs.

The output of photography and video, CG imaging and animation, and graphic design is the same: flat images.  If any set of dual-images is created and treated as two picture files, then they can be manipulated as flat drawings.  This means that anything an artist or software package could do to a 2D image will work on any form of dual-images.  Some of the techniques are additional graphics filters (like rain or grainy film), color and contrast enhancement, cel-shading (cartoon outline), and so on.
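A minimal sketch of that idea, using the Pillow imaging library (the file names are placeholders): the key point is that whatever edit is chosen, it should be applied identically to both halves of the pair or the 3D effect will suffer.

    from PIL import Image, ImageEnhance

    def enhance_pair(left_path, right_path, contrast=1.3):
        # Dual-images are just two flat pictures, so any 2D edit works,
        # as long as both eyes get exactly the same treatment.
        for path in (left_path, right_path):
            img = Image.open(path)
            img = ImageEnhance.Contrast(img).enhance(contrast)
            img.save(path.replace(".jpg", "_boosted.jpg"))

    enhance_pair("scene_left.jpg", "scene_right.jpg")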

It is possible to create any number of dual-images with the three separate (or combined) techniques for creating single-image 3D.  We can also put those images through post-processing by treating them as graphic designs.  The next thing we want to do is display the results.  There are a number of ways to display dual-image 3D, but for the purposes of this article we are just interested in the unchanging printed forms.  The article “Stereoscopy or ‘Dual-Images’ (More Technical)” describes many different ways to present dual-image 3D videos and animations.

Dual-images can be saved as individual files or as hybrid files.  Hybrid files save both images as if they were a single image.  The images are usually just placed side-by-side as a wider image and then the file is written with special tags that tell software how each image is laid out.  This technique can use any standard picture format and file extension, but JPEGs are the most common for this purpose.
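A minimal sketch of building such a hybrid file with Pillow (file names are placeholders; whether the left-eye view goes on the left or the right depends on the intended viewing method, as discussed below):

    from PIL import Image

    def make_side_by_side(left_path, right_path, out_path="pair_sbs.jpg"):
        left, right = Image.open(left_path), Image.open(right_path)
        assert left.size == right.size, "both views must match in size"
        w, h = left.size
        combo = Image.new("RGB", (2 * w, h))  # one image twice as wide
        combo.paste(left, (0, 0))
        combo.paste(right, (w, 0))
        combo.save(out_path)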

If side-by-side images are printed as is, they require a stereoscope or some other device to isolate the images.  If a stereoscope is available then it is also important to make sure that the images are not on the wrong sides (side-by-side images are typically set up for cross viewing, which is more natural because people naturally cross their eyes when looking at something up close).  If some form of image isolator is not available, the viewer will need to cross or uncross their eyes or wear prismatic glasses.

Dual-images can also be printed as spliced pixel columns, which are made to work with lenticular lenses or parallax barriers.  Using a lens or barrier overlay is ideal for pictures that will have many viewers because it does not require any glasses or actions (i.e. it’s “autostereoscopic”).  Of course, purchasing the proper overlay means the images will cost more to create.  A more traditional option is to colorize the images and overlap them for viewing with red-cyan (anaglyph) 3D glasses.  Any dual-image layout, such as spliced columns or colorized hybrids, can be saved for a specific technique, but the side-by-side layout is the most common because it can easily be processed for use with any of the printing techniques.
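As a minimal sketch of the colorizing step for red-cyan glasses (assuming the common red-filter-over-the-left-eye arrangement; file names are placeholders):

    from PIL import Image

    def make_anaglyph(left_path, right_path, out_path="anaglyph.jpg"):
        # Red channel from the left-eye view; green and blue (together,
        # cyan) from the right-eye view.
        left = Image.open(left_path).convert("RGB")
        right = Image.open(right_path).convert("RGB")
        r, _, _ = left.split()
        _, g, b = right.split()
        Image.merge("RGB", (r, g, b)).save(out_path)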

One final topic for consideration in 3D art is the conversion of single-image 3D to dual-image 3D without the original environment to form the second image.  Since the scenery was never recorded, the second image can only be guessed.  The most basic approach is to separate each object between the dual images by an amount that is inversely proportional to its apparent distance from the camera.  This technique makes the combined image look like a bunch of flat objects at different depths, which is still somewhat unnatural.  A more advanced approach is to create a full 3D CG environment to match the image and then skew each part of the image according to the resulting depth map.  A way to further enhance this approach is to add or remove details from the sides of each object where the viewer should see more or less.  The details can come from a completely modeled CG environment or they can be a simple extension of the object edges (under the assumption that rotating the objects doesn’t change their appearance much).
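Here is a minimal sketch of that most basic approach, shifting pixels by a disparity proportional to nearness.  It deliberately leaves the revealed gaps unfilled, which is exactly why such conversions look like flat layers:

    import numpy as np

    def depth_shift_view(image, depth, max_disparity=16):
        # image: (H, W, 3) array; depth: (H, W) array in [0, 1], 1 = nearest.
        # Nearer pixels shift further; uncovered pixels stay black here.
        h, w = depth.shape
        out = np.zeros_like(image)
        disparity = (depth * max_disparity).astype(int)
        for y in range(h):
            for x in range(w):
                nx = x + disparity[y, x]
                if nx < w:
                    out[y, nx] = image[y, x]
        return out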

The creative and artistic techniques that apply to dual-images can be extended into multi-image 3D (see “The Ultimate Future of 3D (More Technical)” for more details).  For example, a “plenoptic camera” uses bumpy microlenses to take a whole area’s worth of images.  These images can be used with integral imaging (which also uses microlenses) to create a smooth view of a scene from any angle within a limited range.  “How to Make a Holodeck” describes the analogous “Cameramarray,” an array of spherically panoramic cameras that captures a full light field for display on a 4V (4D Visualization System).  CG imaging and post-processing on so many viewing angles would require a lot of raw computer power, but parallel processing could help alleviate some of the burden (“How to Make a Holodeck” discusses a tiered CPU approach for handling dense arrays of images).

Note: In this article the term “3D” stands for three “spatial dimensions.”  In “How to Make a Holodeck” 3D sometimes has time as one of its dimensions.  I prefer 4D for most instances where people use 3D (because time is often an additional factor), but I still use 3D for quick reference to modern technology.

4/29/11

Change is Silver

Author of “How to Make a Holodeck” (5Deck.com)
~A silly technical manual describes serious ways to capture and recreate 3D and 4D light fields.
Creator of Unili arT (UniliarT.com)
~Funny images and random messages may hold deeper meaning across a wide selection of products.

References:
Wikipedia’s Page on Stereoscopy
Wikipedia’s Page on JPEG
Wikipedia’s Page on Plenoptic Cameras
Wikipedia’s Page on Integral Imaging

