2. Research Plans

2.1 Modeling

2.1.1 Geometric Modeling

In the coming year we will continue intensive research into collaborative design, model representations, querying and analysis of models based on their geometric properties, user interfaces and modeling operators, and the time-critical algorithms needed to support such activities.

Design Representation
We will continue to research the analysis of models using the special properties of their sculptured-surface representations. Instead of considering what can be seen of a model from a single position, for instance, we will reverse the question and, for special cases, consider from what domains all of the model can be reached. This work goes beyond visibility into accessibility, since it might be possible to "see" a section of a model but not have a long (or stable) enough arm to reach it.
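
As a rough illustration of the distinction, the sketch below tests a surface sample first for visibility (an unobstructed line of sight) and then for accessibility (visibility plus a reach constraint). The spherical occluders, the single-segment "arm," and all function names are our own simplifying assumptions, not the project's actual algorithms, which operate on the sculptured surfaces themselves.

```python
import numpy as np

def visible(point, eye, occluders, eps=1e-9):
    """True if the segment from eye to point misses every occluding sphere.
    Spheres stand in for surface patches in this sketch."""
    d = point - eye
    length = np.linalg.norm(d)
    d = d / length
    for center, radius in occluders:
        t = np.clip(np.dot(center - eye, d), 0.0, length - eps)
        if np.linalg.norm(eye + t * d - center) < radius:
            return False
    return True

def accessible(point, eye, occluders, reach):
    """Accessibility adds a reach constraint on top of visibility:
    the point must be seen *and* lie within arm's length."""
    return visible(point, eye, occluders) and \
           np.linalg.norm(point - eye) <= reach
```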

Many modeling operators in today's literature have been presented on simplified surfaces, for the most part surfaces that are initially flat and uniformly parametrized. Applying such an operator later in the design process, say after a sequence of modeling operations, may produce quite unintuitive and unintended effects. We are investigating methods, including reparametrization and nonlinear optimization linked with surface analysis and modeling intent, to overcome these difficulties.

We expect to drive further research by modeling complex objects such as the head-mounted display (HMD). We expect to continue research into design and model analysis for manufacturing within the context of this project. Depending on the particular modeling projects (which would also require manufacturing), we will select from among model analyses for new processes and for multistage processes. One example might be process planning the molds for the HMD, which involve tight tolerances and multiple stages.

Subdivision Surfaces
We are investigating methods to construct subdivision schemes for arbitrary tagged meshes with tension and bias parameters. We are also using adaptive subdivision to achieve high frame rates for interactive construction and editing of free-form surfaces based on arbitrary meshes, and we are pursuing a theoretical examination of symmetric subdivision schemes and of stochastic texture generation on subdivision surfaces.
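
A minimal sketch of the adaptive-refinement loop follows. It assumes a hypothetical subdivide_step hook that applies one step of whatever scheme is in use (Loop, Catmull-Clark, or a tagged-mesh variant) and a flatness test based on vertex normals; a production system would more likely use a screen-space error bound.

```python
import numpy as np

def needs_refinement(tri, tol):
    """Refine a triangle only where the mesh is still visibly curved:
    compare each vertex normal against the face normal."""
    v0, v1, v2, normals = tri              # corner positions + vertex normals
    face_n = np.cross(v1 - v0, v2 - v0)
    face_n = face_n / np.linalg.norm(face_n)
    return any(np.dot(face_n, n) < 1.0 - tol for n in normals)

def adaptive_subdivide(tris, subdivide_step, tol, max_depth=5):
    """Apply one scheme step only to flagged triangles, up to a fixed
    depth; flat regions stop refining early, keeping frame rates high."""
    for _ in range(max_depth):
        out, refined = [], False
        for t in tris:
            if needs_refinement(t, tol):
                out.extend(subdivide_step(t))   # user-supplied scheme step
                refined = True
            else:
                out.append(t)
        tris = out
        if not refined:
            break
    return tris
```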

Theoretical Analysis of Ck-Smoothness of Subdivision on Arbitrary Meshes
Using the general framework we have created, we will attempt to derive necessary and sufficient criteria for Ck-continuity that generalize and extend most known conditions in subdivision. In addition, we plan to prove a degree estimate for Ck-continuous polynomial schemes that would generalize an estimate of Reif [REIF95b] and give a practical sufficient condition for smoothness.
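
For concreteness, the well-known spectral necessary condition for tangent-plane continuity at an extraordinary vertex can be checked numerically from the local subdivision matrix, as sketched below. This is only the necessary half of the story (the full criterion also requires regularity and injectivity of the characteristic map), and the function names are ours.

```python
import numpy as np

def smoothness_spectrum(S):
    """Eigenvalues of a local subdivision matrix S, sorted by magnitude."""
    lam = np.linalg.eigvals(S)
    return lam[np.argsort(-np.abs(lam))]

def passes_c1_necessary_test(S, tol=1e-8):
    """Necessary (not sufficient) spectral condition for tangent-plane
    continuity at an extraordinary vertex:
        lambda_0 = 1 > |lambda_1| = |lambda_2| > |lambda_3|,
    with the subdominant pair defining the characteristic map."""
    lam = smoothness_spectrum(S)
    return (abs(lam[0] - 1.0) < tol
            and abs(lam[1]) < 1.0 - tol
            and abs(abs(lam[1]) - abs(lam[2])) < tol
            and abs(lam[2]) > abs(lam[3]) + tol)
```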

Volume Modeling and 3D Morphing
We are developing algorithms to convert CSG models consisting of generative modeling primitives into volume data sets. We are investigating 3D morphing techniques, using level set approaches.
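
A minimal sketch of the conversion, assuming the primitives are available as signed-distance functions: CSG then reduces to pointwise min/max, and the resulting distance volumes admit a naive morph by blending. A full level-set method would instead evolve the interface with a speed function; the blend here is only the simplest stand-in, and all names are illustrative.

```python
import numpy as np

def sphere(center, r):
    """Signed distance to a sphere; negative inside."""
    return lambda p: np.linalg.norm(p - center, axis=-1) - r

def union(a, b):     return lambda p: np.minimum(a(p), b(p))
def intersect(a, b): return lambda p: np.maximum(a(p), b(p))
def subtract(a, b):  return lambda p: np.maximum(a(p), -b(p))

def voxelize(sdf, n=64, lo=-1.0, hi=1.0):
    """Sample a CSG expression of distance functions onto a regular grid."""
    ax = np.linspace(lo, hi, n)
    pts = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
    return sdf(pts)

def naive_morph(vol_a, vol_b, t):
    """Blend two distance volumes; the zero isosurface of the result
    interpolates the shapes.  A true level-set morph would evolve the
    front toward the target rather than blending."""
    return (1.0 - t) * vol_a + t * vol_b
```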

2.1.2 Physically Based Modeling

Linear-Complexity Articulated-Body Models with Dynamic Constraints
We plan to combine Barzel-style dynamic constraints with the Featherstone-style articulated-body modeling approach we have developed. The composite constraint algorithm would have time complexity linear in the "loop-free" parts of the articulated system, much faster than our original approach; we would thus be able to simulate immensely more complex articulated systems than we can currently. The "closed-loop" components of the system would retain cubic computational complexity.
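
The constraint half of such a scheme might look roughly like the sketch below of Barzel-style dynamic constraints, with a critically damped feedback law driving the constraint error to zero. The explicit inverse of M is a placeholder for what a Featherstone-style O(n) pass would compute implicitly, and all symbols are our notation, not the project's.

```python
import numpy as np

def dynamic_constraint_forces(M, J, Jdot_qdot, f_ext, C, Cdot, tau=0.1):
    """Barzel-style dynamic constraints: drive the constraint error C(q)
    to zero with critically damped feedback, then solve for multipliers.
    The dense solve is cubic in the number of constraint rows, which is
    why it should be reserved for loop-closing constraints while the
    tree-structured part runs through a linear-time Featherstone pass."""
    Minv = np.linalg.inv(M)                 # placeholder for the O(n) pass
    desired = -(2.0 / tau) * Cdot - (1.0 / tau ** 2) * C   # feedback law
    A = J @ Minv @ J.T
    b = desired - J @ Minv @ f_ext - Jdot_qdot
    lam = np.linalg.solve(A, b)
    return J.T @ lam        # constraint forces in generalized coordinates
```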

We plan to combine the mathematics of the two systems in a structured model that preserves the generality of the dynamic-constraint method while allowing the improved efficiency of generalized coordinates when applicable. Other methods can be incorporated in the future if they offer a comparative advantage. Automating the model allows constraints to be specified in terms of the desired behavior, independent of the underlying mechanism. The complexity of switching between multiple models can then be hidden from the user, yielding a conceptually simpler interface.

State Machine for Piecewise Modeling
We are extending our PODE (piecewise ordinary differential equation) solver to apply it to much more general physically based state machines. The solver will automatically switch between different representations and states for different modeling regimes. This will help us develop methods for expressive motion and walking of 3D human figures, robust representations of dynamic contact between rigid and flexible objects, and simulations of the instantaneous "impulses" that occur when composite constrained objects collide. We are also extending our developmental modeling, artificial life, and other testbeds to test our methods of unifying flexible, rigid, and fluid systems of objects. Finally, we are collaborating with the MURI project on Mathematical Infrastructure for Robust Virtual Engineering, headed by Caltech's Jon Doyle.
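
A toy version of the control flow, with a bouncing ball as a one-regime example whose transition applies an instantaneous impulse. The regime table, the forward-Euler step, and the restitution coefficient are all illustrative stand-ins for the actual PODE solver.

```python
def integrate_pode(state, regime, regimes, t, t_end, dt):
    """Piecewise-ODE sketch: integrate within one regime until its guard
    fires, apply the transition map (e.g., a collision impulse), then
    continue in the (possibly new) regime."""
    while t < t_end:
        deriv, guard, transition = regimes[regime]
        trial = tuple(x + dt * dx for x, dx in zip(state, deriv(state, t)))
        if guard(trial):                        # event detected in this step
            state, regime = transition(state)   # e.g., reflect the velocity
        else:
            state = trial
        t += dt
    return state

# Bouncing ball, state = (height, velocity): free fall plus an
# instantaneous restitution impulse on ground contact.
regimes = {
    "falling": (
        lambda s, t: (s[1], -9.81),              # dy/dt = v, dv/dt = -g
        lambda s: s[0] < 0.0,                    # guard: ground penetration
        lambda s: ((0.0, -0.8 * s[1]), "falling")  # impulse: bounce
    )
}
final = integrate_pode((1.0, 0.0), "falling", regimes, 0.0, 3.0, 1e-3)
```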

2.2 Rendering

2.2.1 Realistic Rendering

We will continue our work in physical simulation and progress into the perceptual domain, an area largely unexplored in synthetic computer graphics. We are installing a perception laboratory to study these effects.

One major goal of this long-term research is to reduce the computational expense of global-illumination algorithms. An inherent cause of their slowness is that too much time is spent computing scene features that are measurably unimportant and perceptually below the visible threshold of the average human observer. These algorithms can be substantially accelerated, or computed progressively, if we can develop perceptually based error metrics that correctly predict the visibility of scene features. Establishing these techniques will not only allow proper tone mappings but also provide the feedback loop for modifying the physical computations.
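
As the simplest possible instance of such a metric, a Weber-law threshold predicts whether a luminance difference is visible against its background. Real perceptually based metrics are far more elaborate (contrast-sensitivity functions, masking effects), but the feedback loop they enable is the same. The 2% constant and the helper names below are illustrative assumptions.

```python
def weber_threshold(L_adapt, k=0.02):
    """Weber's law: a luminance increment is invisible while dL/L stays
    below roughly 1-2% at photopic adaptation levels.  The constant k is
    illustrative; a real metric uses a full threshold-vs-intensity curve."""
    return k * L_adapt

def feature_is_visible(L_background, dL):
    return abs(dL) > weber_threshold(L_background)

# A global-illumination solver could stop refining an element once its
# estimated residual error drops below the predicted threshold, e.g.:
#   if not feature_is_visible(L_elem, residual_estimate):
#       mark_converged(elem)          # hypothetical solver hook
```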

We believe that by separating the physically based computations from the perceptually based image creation and by experimentally comparing results at each phase of the process, we can ultimately produce images that are visually and measurably indistinguishable from real-world images.

2.2.2 Non-Realistic Rendering

In the coming year we will greatly expand the range of aesthetically pleasing and practical non-realistic effects we can achieve while maintaining interactive rates. We have recently developed fast methods for finding the silhouette curves of highly tessellated polyhedral models and for determining their visibility without Z-buffering. From these silhouette curves we can produce images resembling line drawings done in pencil, charcoal, or ink. Future work will address the problems of quickly determining visible surface regions and placing shading strokes within them to achieve more expressive rendering styles.
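
The core of the silhouette test is standard and easy to state: an edge of a closed model lies on the silhouette exactly when its two adjacent faces face opposite ways with respect to the eye. The brute-force scan below visits every face; the fast methods mentioned above avoid exactly that, so this is a reference sketch only.

```python
import numpy as np

def silhouette_edges(verts, faces, eye):
    """Return the edges of a closed triangle mesh whose adjacent faces
    disagree on front-facing status as seen from `eye`."""
    verts = np.asarray(verts, dtype=float)
    facing, edge_faces = {}, {}
    for fi, f in enumerate(faces):
        v0, v1, v2 = (verts[i] for i in f)
        n = np.cross(v1 - v0, v2 - v0)
        facing[fi] = np.dot(n, eye - v0) > 0.0        # front-facing test
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)
    return [tuple(e) for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```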

2.2.3 Inverse Rendering

Scientific and Mathematical Foundations for Inverting the Rendering Equation (Brown, Caltech, Cornell)
To better understand the potential of image-based methods, we plan to examine them as abstract interpolation problems over higher-dimensional spaces (e.g., the 4-dimensional light field) and attempt to place theoretical limits on what can be accomplished within given error tolerances. Moreover, it is evident that one of the primary challenges for light field approaches is that of representation; without a concise encoding of the four-dimensional function there can be no practical algorithms. One new avenue that we plan to explore is the use of manifolds, which have proven useful in the realm of geometric modeling and may confer some of the same advantages to approximating the light field in higher dimensions.
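
To make the interpolation view concrete, here is a sketch of quadrilinear reconstruction from a discretized two-plane light field; the array layout and grid-unit coordinates are our assumptions, and a concise encoding of the kind discussed above would replace the dense array L.

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Quadrilinear interpolation of a discretized two-plane light field
    L[u, v, s, t]: (u, v) indexes the camera plane and (s, t) the focal
    plane, all four coordinates in continuous grid units."""
    coords = (u, v, s, t)
    base = [int(np.floor(c)) for c in coords]
    frac = [c - b for c, b in zip(coords, base)]
    out = 0.0
    for corner in range(16):               # the 2^4 corners of the 4D cell
        w, pos = 1.0, []
        for d in range(4):
            bit = (corner >> d) & 1
            w *= frac[d] if bit else 1.0 - frac[d]
            pos.append(min(base[d] + bit, L.shape[d] - 1))
        out += w * L[tuple(pos)]
    return out
```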

2.2.4 Image-Based Rendering

In the coming year we will continue our real-problem-driven exploration of telecollaboration and telepresence technologies, again in terms of scene acquisition, reconstruction, and display. With respect to acquisition and reconstruction, we will continue to develop real-time systems that can acquire one user's environment and reproduce it precisely for another user at a remote location, using a mix of traditional depth-extraction and newer image-based methods. We see a continuing challenge in trying to do this unobtrusively, yet in a way that presents the participants with a visually compelling experience.

In the area of traditional depth-extraction techniques, we will use uncertainty measures when displaying depth or geometry information. In general, we feel that confidence measures can be used to produce depth data with smooth transitions, as opposed to data with jumps arising from discrete differences between neighboring samples. For example, depth data could be rendered with varying degrees of transparency corresponding to the confidence of each sample. Such experiments may offer insight into the use of sparse depth data as an aid to modern image-based rendering approaches. We anticipate that the PixelFlow image-generation system will be online by the summer of 1997; PixelFlow should provide a uniquely powerful platform for experimenting with various reconstruction-related rendering techniques.
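
The transparency idea needs little machinery beyond a per-sample mapping from stereo confidence to opacity. A minimal sketch, with illustrative (not measured) cutoffs:

```python
def depth_sample_alpha(confidence, c_min=0.2, c_max=0.9):
    """Map a per-sample depth confidence to an opacity: fully opaque
    above c_max, fully transparent below c_min, linear in between."""
    t = (confidence - c_min) / (c_max - c_min)
    return min(1.0, max(0.0, t))
```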

2.2.5 Merging Forward and Inverse Rendering (UNC, Cornell, University of Pennsylvania)

(See Section 3, Changes in Research Direction)

2.3 Interaction

2.3.1 Extending the Sketch System

Our research in desktop interaction will focus primarily on extending the Sketch system. Alias/Wavefront and Autodesk, as well as a number of other makers of 3D modeling software, have expressed strong interest in incorporating ideas from the Sketch system into their products. In particular, Alias and Autodesk are starting collaborations with the Center and are providing hardware, users, and 3D modeling frameworks so that our techniques can be incorporated more easily into their future products. These relationships will help ground our research in practical problems and provide us with a base of industrial users for usability testing.

For the Sketch system to be usable in "industrial-strength" applications, we must integrate gestural interface components into existing modeling paradigms. This involves supporting more complex and more detailed scenes, in addition to handling further operation types such as texturing and animation. We are also exploring ways to support modeling of free-form shapes (such as human figures). The goal is to allow skilled artists to create such models rapidly by drawing them. In an initial phase, the artist sketches the model as a line drawing, and an approximate model is inferred from the drawing. Additional gestural input can be supplied by the user to guide the inferencing process. In a subsequent phase, the artist refines the surface by drawing bumps and creases directly on it. Shading input will also be processed to modify surface geometry, using techniques adapted from the computer-vision literature.

2.3.2 Haptic Feedback

We want to explore metaphors for haptic user interfaces; in particular, we are not interested in literal simulation of physical environments, as in most haptic demos, or in merely mirroring the physical world. Rather, we are interested primarily in how to present and manipulate features that do not have a unique, intuitive, natural mapping into a haptic form. Simple examples include guiding the user's motion, as in the physical snap-to-grid work done recently in collaboration between Brown and UNC, and gravity relief to alleviate the strain of keeping one's hand in the air for a long time. We believe that the guidance idea in particular can be extended into a very general and useful tool.
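
In its simplest form, snap-to-grid guidance reduces to a spring that engages within a capture radius of the nearest grid point. The sketch below illustrates the idea; the gains and radii are made up for the example and this is not the Brown/UNC implementation.

```python
import numpy as np

def snap_to_grid_force(pos, spacing=0.01, k=200.0, capture=0.4):
    """Haptic snap-to-grid as a guidance force: inside the capture
    radius of the nearest grid point, a spring pulls the user's finger
    toward it; between grid cells, the hand moves freely."""
    pos = np.asarray(pos, dtype=float)
    nearest = np.round(pos / spacing) * spacing
    d = nearest - pos
    if np.linalg.norm(d) < capture * spacing:
        return k * d                 # spring toward the grid point
    return np.zeros_like(pos)        # no guidance away from grid points
```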

We plan to explore and test these ideas in an environment for creating 3D models using a two-finger PHANToM. General issues to be researched include making the haptic interface self-disclosing, working around the limitation of having only two points for manipulation rather than the whole fingertip surface with its tactile feedback, and dealing with users' expectations that because their finger is stuck in a thimble they can use the whole thimble in interacting with the world, not just a point on their fingertip. In addition, applying haptics to the modeling environment will bring research issues of its own: how to use haptics to enhance accuracy and precision, to ease the tasks of selecting objects and choosing which manipulation to perform, and to improve the accuracy and fluidity of further parametrization of an operation (such as surface refinement); and where and how to provide haptic guidance. Needless to say, we expect to build on the experience gained by Fred Brooks and his team in using haptics for molecular modeling.

2.3.3 Interaction for Direct Manipulation of Formal Systems (The Smartboard Project) (Brown-Caltech)

Future efforts will include interactive methods for constructing proofs and performing experiments in other branches of mathematics, such as analysis, topology, differential geometry, and combinatorics, and in areas of computer science such as automata theory. This will entail the combined use of hand gestures and speech, as well as more traditional input mechanisms such as pointing devices.

2.4 Performance

2.4.1 Image Display Technologies

We will continue to develop the best possible head-mounted and fixed display systems. We see HMD work progressing as a collaboration between HMD and optics researchers at UNC and design and modeling researchers at Utah. We anticipate making use of modern optical techniques, possibly via contract (as in the past) with an optical engineering firm. We are also planning to work on high-resolution, wide-field-of-view, immersive fixed displays that require minimal infrastructure. One of our goals is to make these fixed displays as convenient and as high-resolution as looking through ordinary eyeglasses.

2.4.2 Time-Critical Frameworks

We will adapt a degradable terrain-rendering algorithm and a varying-level-of-detail generation algorithm to fit into the framework. While it is possible to use conventional performance prediction (e.g., based on feedback loops), algorithm-specific predictors are more accurate and at the same time straightforward to devise. Similarly, these algorithms benefit from schedulers that take advantage of algorithm-specific features. Another approach we will explore is authoring the entire virtual environment as a single object with a procedurally generated, multiresolution representation. The environment will contain author-supplied information on the application-defined importance of its components. When a user interacts with a scene, it will be simulated and rendered as a function of the user's viewpoint, using lower resolution for more distant or less important components. A procedural representation of the scene components will let the scene contain an arbitrary amount of detail that is generated only when called for by the scheduler.
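
As a sketch of how such a scheduler might trade detail against author-supplied importance, the greedy selection below ranks components by importance over distance and spends a fixed per-frame polygon budget finest-first down the list. The component representation, benefit function, and budget are all our illustrative assumptions.

```python
import math

def choose_lod(components, budget, eye):
    """Greedy time-critical detail selection.  Each component is
    (position, importance, levels), where levels lists polygon counts
    from coarsest to finest; returns (component, chosen_level) pairs."""
    def benefit(c):
        pos, importance, _ = c
        return importance / max(math.dist(eye, pos), 1e-3)
    plan = []
    for c in sorted(components, key=benefit, reverse=True):
        _, _, levels = c
        # Finest level that still fits the remaining budget; fall back
        # to the coarsest level (index 0) when nothing fits.
        level = max((i for i, n in enumerate(levels) if n <= budget),
                    default=0)
        budget -= levels[level]
        plan.append((c, level))
    return plan
```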

2.4.3 Hardware Architectures

First Light with Analog VLSI
We will test our first fully analog architecture for model construction and image display.

2.5 Scientific Visualization

We are developing new scientific-visualization techniques for investigating vector-valued and tensor-valued data. This work includes the acquisition and extraction of diffusion-tensor-valued MR images, vector-valued flow MR images, and scalar- and vector-valued laser images of turbulent flow. We are also continuing our work on new types of partial-volume tissue classification (see Figure 4).

2.5.1 Wavelet Methods for ECG/EEG Visualization and Computational Modeling (Caltech-Utah)

We will develop the wavelet methods needed to solve inverse EEG and ECG problems. For the EEG, these methods will be used to solve Poisson's equation of electrical conduction for the primary current sources in the cerebrum (specifically in the temporal lobe); for the ECG, Laplace's equation for the voltage field on the surface of the heart (epicardium). These methods include such techniques as variational subdivision schemes, spherical wavelet processing of space physics data, construction of multiresolution meshes directly from volume densities, and construction of subdivision-surface wavelets.
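
For reference, the governing equations of the two forward problems take the following standard form; boundary conditions are omitted, and sign conventions vary between formulations.

```latex
% sigma: conductivity tensor, Phi: electric potential,
% I_v: primary current source density.
\nabla \cdot (\sigma \nabla \Phi) = -I_v
    \quad \text{(EEG: Poisson's equation in the head volume)}
\qquad
\nabla \cdot (\sigma \nabla \Phi) = 0
    \quad \text{(ECG: Laplace's equation in the source-free torso)}
```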

2.6 Telecollaboration

We will continue to pursue our goal of giving distance collaborators a compelling sense of common presence in a virtual space, in the limit "making it as real as being there." We are also working to leverage aspects of virtual environments that go beyond real-world simulation, providing added value unique to the virtual environment. For example, we may violate the laws of gravity and let annotations hang in space near a model, or we may share another user's viewpoint, in effect seeing through another user's eyes. Such techniques not only make telecollaboration a more powerful tool but can improve local computer-supported collaborative work as well.

We will continue using the design of electro-optical/mechanical systems as a driving problem. Such a strategy allows us to address two problems at the same time: developing new and more effective display systems and testing the effectiveness of various telepresence and telecollaboration technologies.

We also plan to use structured light to extract depth from scenes with people in them. Preliminary results should be available within a year. In the longer term, we hope to be able to extract and texture map geometry from live video in real time, providing modeled, video-mapped 3D avatars of the televideo participants (and other objects of interest in the scene).
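
Structured-light depth recovery rests on plain triangulation. A 2D sketch under a parallel-axis camera/projector geometry follows; the parameter names and the geometry conventions are our illustrative assumptions, not the planned system's calibration model.

```python
import math

def structured_light_depth(x_cam, f_cam, baseline, theta_proj):
    """Depth by triangulation for one projected stripe.  The camera sits
    at the origin, the projector at distance `baseline` along x, both
    optical axes parallel.  theta_cam and theta_proj are each measured
    from their device's optical axis, converging on the lit point."""
    theta_cam = math.atan2(x_cam, f_cam)      # ray angle from pixel offset
    denom = math.tan(theta_cam) + math.tan(theta_proj)
    return baseline / denom if abs(denom) > 1e-9 else float("inf")
```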

