Processing refinements discussed include techniques for improving image-loading speed, improved interpolation between images, and the merging of images with LIDAR (Light Detection and Ranging)-based point clouds, meshes, and other polygonal models.
Image loading improvements--We will report on our redesign of the image-fetching algorithm: for instance, the whole CAVEcam image need not be loaded for each GPU, only the fraction that is viewable on that GPU's screen, which can save 95% of the loading time.
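The idea of loading only the viewable fraction can be sketched as a tile-selection step: given a screen's viewing direction and field of view, pick out which vertical strips of the panorama it can actually see. This is a minimal illustration, not the actual CAVEcam loader; the equal-strip tiling scheme and the function name are assumptions.

```python
def visible_tiles(screen_yaw_deg, fov_deg, num_tiles, pano_span_deg=360.0):
    """Return indices of panorama tile columns overlapping a screen's view.

    Assumes (hypothetically) that the panorama is split into num_tiles
    equal vertical strips covering pano_span_deg of yaw.
    """
    tile_span = pano_span_deg / num_tiles
    lo = screen_yaw_deg - fov_deg / 2.0
    hi = screen_yaw_deg + fov_deg / 2.0
    tiles = set()
    # Walk the screen's yaw range in tile-sized steps, wrapping at 360 degrees,
    # and record every tile strip the range touches.
    yaw = lo
    while yaw < hi:
        tiles.add(int((yaw % pano_span_deg) // tile_span))
        yaw += tile_span
    tiles.add(int((hi % pano_span_deg) // tile_span))
    return sorted(tiles)
```

For a screen with a 36-degree field of view and a panorama cut into 20 strips, only about 3 strips need loading, illustrating how skipping the unseen portion yields large savings.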
Image representation techniques--Different techniques for the representation of panoramic image data and its subsequent visualization will be discussed. Among others, images may be treated as composites of textures or as spherical shells of 3D points, opening unique opportunities for the optimization of the rendering pipeline.
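The spherical-shell representation amounts to mapping each panorama pixel onto a point on a sphere around the viewer. A minimal sketch of that mapping for an equirectangular image follows; the axis conventions and function name are assumptions, not the project's actual code.

```python
import math

def pixel_to_sphere(u, v, width, height, radius=1.0):
    """Map an equirectangular pixel (u, v) to a 3D point on a sphere.

    Longitude is assumed to span [-pi, pi] across the image width and
    latitude [-pi/2, pi/2] down the height (a common convention, used
    here only for illustration).
    """
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    x = radius * math.cos(lat) * math.sin(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Rendering the panorama then becomes drawing this point shell (or texturing the sphere), which is where the pipeline-optimization opportunities arise.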
Image interpolation improvements--The CAVEcam panoramic images are a big step forward for photorealism in our VR systems (Cornea, StarCAVE, NexCAVE), but they are not 3D models; rather, they are 2D stereo photographs. They contain no 3D structure, so there is no way to wander around and through scenes as one does in polygon- or point-based models. One way to overcome this is to make CAVEcam movies: series of CAVEcam frames of the same real scene from different viewpoints, for example along a path, with branching. Interpolation between frames would, if fast enough, create the sensation of motion. As in a movie, the choice of frames determines the extent of the viewer's exploration, but this also allows the CAVEcam movie maker to somewhat direct the attention of the user, which is not currently a feature of VR systems. Another concept is to insert an animation or video into a portion of a scene (such as a waterfall) within a larger CAVEcam image.
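The simplest form of interpolation between two captured frames is a per-pixel cross-fade. The sketch below shows only that baseline; real view interpolation along a path would also warp pixels by estimated scene motion, and the representation of frames as nested lists is purely illustrative.

```python
def blend_frames(frame_a, frame_b, t):
    """Linearly cross-fade between two frames at parameter t in [0, 1].

    A hypothetical in-between synthesis step: t=0 returns frame_a's
    values, t=1 returns frame_b's, and intermediate t values give the
    weighted mix shown to the viewer while moving between capture points.
    """
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```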
Merging images with LIDAR meshes--One way to exploit the super-resolution of the CAVEcam imagery is to combine the raw images with some type of depth-map reconstruction. The depth information can come from the stereo pairs themselves, yielding texture maps to be placed on polygonal models (created, perhaps, with Google SketchUp), or it can be derived from LIDAR data.
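Recovering depth from the stereo pairs rests on the standard pinhole-stereo relation depth = f * B / d (focal length times baseline over disparity). The sketch below states that relation; the parameter values in the usage note are illustrative, not calibrated CAVEcam numbers.

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth of a scene point from its stereo disparity.

    disparity_px: pixel offset of the point between left and right images
    baseline_m:   distance between the two camera centers, in meters
    focal_px:     focal length expressed in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    # Pinhole-stereo triangulation: nearer points shift more between views.
    return focal_px * baseline_m / disparity_px
```

For example, with a 0.1 m baseline and a 1000-pixel focal length, a 10-pixel disparity corresponds to a point 10 m away; a dense map of such depths is what gets fused with, or checked against, the LIDAR geometry.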
This presentation covers the above topics and other results of this collaborative research project between KAUST and UCSD.
