1.0 Our Research Objectives and Approach
The major goals of our STC are to establish a better scientific foundation for future computer graphics and to help create the basic framework for future interactive graphical and multimedia environments. As we move on to the next generation of computing systems, we will need to: improve computational speeds for display; incorporate physical behavior into our models; extend the definitions of our models to incorporate manufacturing processes; move beyond the current generation of WIMP (windows, icons, menus, and pointing) user interfaces into post-WIMP user interfaces; scale our systems to handle more complex environments; reduce design cycle times; and store, manage, access, and transmit larger amounts of data. Finally, it is necessary to guarantee the validity and accuracy of our simulations according to each application's needs, particularly in medical, scientific, and engineering areas.
2.0 Changes in Research Direction
By far the biggest change in our research direction has been the identification of collaboration within an immersive environment as a vital new field of research. We call this telecollaboration in order to distinguish it from previous work in computer-supported collaborative work (CSCW), which has focused on desktop-oriented applications such as shared whiteboards. Our vision is of a shared or multiparticipant immersive environment that provides a sense of presence -- participants should ideally feel as if they are within the shared environment and that the other participants are there as well, a marked difference from the experience of even the best video conferencing setups.
3.0 Research Accomplishments of the Past Year
3.1 Modeling
The Center has developed an ever-enlarging sense of what is encompassed by modeling: one that includes not only the geometry, constraints, animation, and behavior of objects, but also models of light reflectance and transport phenomena, of interfaces, of inter-object interaction (e.g., developmental modeling), and of perceptual phenomena. This section of the report therefore primarily addresses models of objects; other kinds of models, such as models of lighting phenomena, appear later in the report.
One of the features of the NURBS representation is also one of its drawbacks: the ``tensor product'' formulation. While this formulation lends form and structure, making computation and analysis feasible, it also restricts the regions over which the ``closed'' operators can be defined. Boolean operators (intersection, union), which lead to ``trimmed'' surface models, are usually the last operation performed in creating a model. Once Boolean operations have been applied, a spline model can no longer be warped or subjected to physically based operations, since those operations have so far been defined only for complete spline surface models or for polygon-based models.
As a solution to the problem of torn or trimmed surfaces, we have created and introduced the torn B-spline surface representation as an approach to designing with partial and nonisoparametric feature curves in sculptured surfaces. We call feature curves across which the surface is discontinuous ``tears'' and those across which only the surface's tangent is discontinuous ``creases.'' We are also developing techniques for manipulating and editing the smooth regions, tears, and creases in a homogeneous way. Finally, to make this representation useful, we must extend the results to complex models consisting of many (possibly trimmed) surfaces. We expect further results in the Center's coming sixth year that will demonstrate the use of this representation in modeling applications of practical interest. We believe that such ``torn surface'' representations will form an essential component of the framework for future geometric and physical modeling applications. [ELLE95]
Modeling Surfaces of Arbitrary Topology Using Manifolds
Graphics has used single-patch parameterizations of objects (e.g., the longitude-latitude parameterization of the sphere) in a great many applications, so that phrases like ``uv-coordinates'' have become current. Unfortunately, for surfaces with topology other than that of the plane or the torus, such parameterizations must have singularities; techniques built atop those parameterizations will have either intrinsic or numerical problems at the singular points (pattern mapping is a good example: in a pattern-map onto a sphere, 30% of the pattern maps to only about 13% of the sphere).
Self-Adjusting Constrained Optimization
A weakness of all parametric modeling techniques is that either the user must specify the parametric information or else the modeling system must provide it, which is typically done via defaults built into the algorithms. This arises in interpolation problems for both curves and surfaces and in animation issues, and is a serious issue lying below the surface of most parametric schemes. Few attempts have been made to deal with it, leaving the designer and animator to specify and ``tweak'' values. But knowing the right values to use can require a great deal of technical and mathematical expertise. This is a foundational issue: the parametric representations that are fundamental in much of graphics implicitly generate problems for users; the mathematical tractability of such representations makes them appealing, but the problems associated with the representation are pervasive.
Developmental Modeling
In seeking to develop scientifically based modeling techniques, the Center has created a new type of modeling based on multicellular development. Building on the structured modeling techniques of Barzel (1992), our developmental models combine elements of the chemical, cell lineage, and mechanical models of morphogenesis pioneered by Turing, Lindenmayer, and Odell, respectively. Our developmental models are useful both for scientific prediction in computational biology (as described in the Ph.D. thesis [FLEIt95]) and in computer graphics modeling applications (as shown in the Siggraph 95 paper on cellular textures by Fleischer et al. [FLEI95]).
Developmental modeling is a cell-based modeling technique in which discrete cells are controlled by regulatory elements with conditional elements. The internal state of each cell in the model is represented by a time-varying state vector that is updated by piecewise differential equations. The differential equations are formulated as a sum of contributions from different sources, describing gene transcription, kinetics, and cell metabolism. Each term in the differential equation is multiplied by a (usually) smooth conditional expression that models regulatory processes specific to the process described by that term.
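The update rule described above can be sketched in a few lines. The two-component state, the gating threshold, and all rate constants below are invented for illustration; the Center's actual models are far richer:

```python
import math

def smooth_condition(x, k=10.0):
    """Smooth conditional expression: near 0 when x < 0, near 1 when x > 0."""
    return 1.0 / (1.0 + math.exp(-k * x))

def step_cell(state, dt=0.01):
    """One Euler step for a toy two-component cell state [a, b].
    Each term of the derivative is multiplied by a smooth conditional,
    in the spirit of the formulation above; all constants are invented."""
    a, b = state
    production = smooth_condition(0.5 - b)   # transcription shuts off as b rises
    da = production - 0.1 * a                # synthesis minus decay
    db = 0.2 * a - 0.1 * b                   # a promotes b; b decays
    return [a + dt * da, b + dt * db]

state = [0.0, 0.0]
for _ in range(1000):
    state = step_cell(state)
```

Because the conditional is smooth rather than a hard switch, the right-hand side stays differentiable, which keeps standard ODE integrators well behaved.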
3.2 Rendering
Our research in rendering explores a variety of approaches to the problem of creating a synthetic image as quickly and as accurately as possible. This may involve radical prototypes that rethink the entire approach, as in image-based rendering, in which the traditional polygon is replaced by images. Alternatively, creating an accurate image efficiently may involve careful experimentation to determine the best parameters of a lighting model, as in gonioreflectometer measurements of surface reflection properties. In all cases, the research involves improvements to the fundamental science behind rendering, replacing hacks with physically based algorithms verified by experiments.
Image-Based Rendering
We have been exploring a new method of rendering real-world scenes based on models constructed from photographs of the environment. We have constructed a concise framework for discussing these ``plenoptic models,'' our name for this class of techniques. We demonstrate a new member of this class that renders views of an environment by a simple traversal of a cylindrical model of the scene. This method allows rapid rendering of very complex environments such as a cluttered room or an outdoor scene with foliage. We believe that fast hardware can be built based on this technique that will allow such scenes to be rendered in real time.[MCMI95b] [MCMI95c] [MCMI95a]
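A core operation in such a cylindrical renderer is mapping a viewing direction to a sample location in the cylindrical image. The sketch below assumes a unit cylinder about the y axis and a symmetric vertical field of view; the full method in the cited work involves more (e.g., reprojection between cylinders) and is not reproduced here:

```python
import math

def cylinder_lookup(direction, width, height, vfov=math.pi / 2):
    """Map a 3-D viewing direction to (col, row) in a cylindrical panorama.
    Azimuth around the cylinder picks the column; the height at which the
    ray meets the unit cylinder picks the row. The function name and the
    field-of-view convention are assumptions for this sketch."""
    x, y, z = direction                      # y is 'up'
    azimuth = math.atan2(z, x) % (2 * math.pi)
    col = azimuth / (2 * math.pi) * width
    r = math.hypot(x, z)                     # distance from the cylinder axis
    h = y / r                                # intersection height on the cylinder
    half = math.tan(vfov / 2)                # vertical half-extent of the image
    row = (1.0 - (h + half) / (2 * half)) * height
    return col, row
```

Because the lookup is a few arithmetic operations per pixel with no geometry traversal, rendering cost is independent of scene complexity, which is why cluttered rooms and foliage pose no special problem.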
Global Illumination
We developed a new method for accurately solving the global illumination problem that, in addition to the diffuse interreflections commonly handled by conventional radiosity methods, can also handle energy transport involving arbitrary non-diffuse surfaces. The method uses density estimation techniques and takes advantage of the inherent parallelism in its microscopic view of energy transport. The algorithm has been designed for computing solutions of environments with high geometric complexity (as many as hundreds of thousands of initial surfaces). [SHIR95]
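The flavor of the density-estimation approach can be illustrated in one dimension: trace particles, record where they land, and estimate irradiance with a kernel over the hit points. The emitter, bandwidth, and kernel choice below are illustrative assumptions, not the parameters of the cited method:

```python
import random

def trace_particles(n, seed=1):
    """Stand-in for the particle-tracing pass: each 'photon' lands at a
    position x in [0, 1] on a single patch, carrying equal energy.
    This toy emitter concentrates photons near x = 0.5."""
    rng = random.Random(seed)
    return [min(max(rng.gauss(0.5, 0.15), 0.0), 1.0) for _ in range(n)]

def irradiance_estimate(x, hits, total_energy, bandwidth=0.05):
    """Kernel density estimate of irradiance at x from photon hit points,
    using an Epanechnikov kernel (a common choice in density estimation)."""
    total = 0.0
    for h in hits:
        u = (x - h) / bandwidth
        if abs(u) < 1.0:
            total += 0.75 * (1.0 - u * u)    # Epanechnikov kernel weight
    return total * total_energy / (len(hits) * bandwidth)

hits = trace_particles(10000)
center = irradiance_estimate(0.5, hits, total_energy=1.0)
edge = irradiance_estimate(0.02, hits, total_energy=1.0)
```

The particle-tracing pass is embarrassingly parallel (each photon path is independent), which is the parallelism the text refers to.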
Perceptually Based Lighting Studies
We have conducted visual quality perceptual tests to optimize the kernel functions used to construct an approximate irradiance function for each surface using the density estimation results.
Improved Performance of Lighting
We extended real-time display of simulated environments with global illumination solutions to larger and more complex models; preprocessing techniques reduce the amount of data sent to parallel rendering engines without any appreciable loss in image quality.
Analytic Lighting
We have developed the first analytic method for computing direct lighting effects involving area light sources and a wide range of surfaces from diffuse to highly directional: such effects include illumination from directional luminaires and view-dependent glossy reflection and transmission. The method greatly extends the repertoire of effects that can be computed in closed form.[ARVO95b]
Lighting Effects In The Human Eye
We developed a quantitative model approximating the scattering and diffraction in the human eye and an algorithm based on this model to add glare effects to digital images; the resulting digital point-spread function is psychophysically based and can substantially increase the ``perceived'' dynamic range of computer simulations containing light sources. Applications include night visibility and predicting the effects of distracting light sources. [SPEN95]
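Conceptually, adding glare amounts to convolving the image with a normalized point-spread function whose broad skirt redistributes energy from bright sources into neighboring pixels. The 1-D sketch below uses an invented PSF shape; the model described above is psychophysically derived:

```python
def glare_psf(radius, skirt=0.02):
    """Toy point-spread function: a sharp central peak plus a broad,
    low-amplitude skirt standing in for intraocular scattering.
    Normalized so convolution preserves total energy."""
    taps = [1.0 if i == 0 else skirt / abs(i)
            for i in range(-radius, radius + 1)]
    s = sum(taps)
    return [t / s for t in taps]

def convolve(scanline, psf):
    """Direct 1-D convolution, clipping at the scanline boundaries."""
    r = len(psf) // 2
    out = []
    for i in range(len(scanline)):
        acc = 0.0
        for k, w in enumerate(psf):
            j = i + k - r
            if 0 <= j < len(scanline):
                acc += w * scanline[j]
        out.append(acc)
    return out

# A dim scene with one very bright source in the middle.
scan = [0.01] * 21
scan[10] = 100.0
glared = convolve(scan, glare_psf(5))
```

The skirt makes pixels near the source much brighter than the scene alone would, which is exactly the cue that lets a display of limited dynamic range suggest a far brighter light.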
Lighting Measurements
We installed and calibrated CCD equipment to measure physical environments radiometrically, including full spatial and spectral radiances. This equipment greatly improves the Center's capacity to carry out controlled experiments in the nature of lighting effects and to compare simulated effects with the real world.[FOO95]
Correction of Geometric Perceptual Distortion in Pictures
For many years, linear perspective has been used as an idealization for projecting three-dimensional objects to create two-dimensional pictures, as in photography and computer graphics. We have developed an approach for correcting geometric distortions in computer-generated and photographic images. The resulting projection is superior to linear perspective, particularly for wide-angle images, and represents a long-term contribution to both computer graphics and photography. The approach is based on a mathematical formalization of perceptually desirable properties of pictures; the projection is useful both for computer-generated images and for constructing actual lenses for physical cameras. The work is described in [ZORIt95], and the Center has submitted patent applications for the technique.
From a small set of simple assumptions we obtain perceptually preferable viewing transformations and show that these transformations can be decomposed into a perspective or parallel projection followed by a planar transformation. The decomposition is easily implemented and provides a convenient framework for further analysis of the image mapping.
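The decomposition can be sketched directly: project first, then apply a planar (projective) map to the image plane. With the identity planar transform this reduces to ordinary linear perspective; the perceptually preferred transformations described above would supply a different matrix:

```python
def perspective_project(point, focal=1.0):
    """Standard pinhole projection onto the image plane z = focal."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def planar_transform(p, H):
    """Apply a 2-D projective (planar) transformation, given as a
    3x3 matrix H acting on homogeneous image coordinates."""
    x, y = p
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

# The identity planar transform recovers plain linear perspective.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
p = planar_transform(perspective_project((2.0, 1.0, 4.0)), I)
```

The practical appeal of the decomposition is that the first stage is exactly what graphics hardware and camera optics already do; only the second, purely 2-D stage changes.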
3.3 High-Performance Architectures
Real-world systems can be extremely complex, requiring inordinate amounts of computation to simulate and display. Thus, the Center is exploring high-performance architectures that perform well even with extremely large problems. Our work in high-performance architectures falls into four focus areas: two targeting a general-purpose system and two targeting a specific application.
The Center has developed a scheduling algorithm that handles both multiple computational resources and continuously variable tasks. By using gradient search techniques, our algorithm can quickly find a good schedule for tasks. Note that finding an optimal schedule is a known NP-complete problem, so practical schedulers settle for approximations to the optimal solution. Because our algorithm uses gradient search, the algorithm itself is continuously variable in its complexity and accuracy. Thus the scheduler can schedule itself, preventing scheduling from starving out the application's computation.
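A toy version of such a gradient-search scheduler might look as follows, splitting a fixed time budget between two continuously variable tasks with concave benefit curves (the curves, step size, and iteration count are all invented for illustration):

```python
import math

def schedule(budget, benefits, steps=200, lr=0.01):
    """Gradient-style search for a good split of a frame's time budget.
    `benefits` maps each task's allocation t to a quality value; with
    concave curves, more time always helps but with diminishing returns."""
    n = len(benefits)
    alloc = [budget / n] * n                # start with an even split
    eps = 1e-4
    for _ in range(steps):
        # finite-difference gradient of total benefit
        grad = [(b(a + eps) - b(a)) / eps for b, a in zip(benefits, alloc)]
        mean = sum(grad) / n
        # move time toward tasks with above-average marginal benefit
        alloc = [max(0.0, a + lr * (g - mean)) for a, g in zip(alloc, grad)]
        total = sum(alloc)                  # renormalize to the budget
        alloc = [a * budget / total for a in alloc]
    return alloc

# Task 0 benefits from extra time much more than task 1 does.
alloc = schedule(1.0, [lambda t: math.log(1 + 10 * t),
                       lambda t: math.log(1 + t)])
```

Note the self-scheduling property mentioned above: `steps` is itself a quality/time knob, so the scheduler's own cost can be dialed down when the frame budget is tight.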
Developing Time-Critical Collision Detection Algorithms
The Center has also developed a time-critical approach to collision detection. Collision detection is used by a variety of applications, ranging from games to walkthroughs to scientific visualization to telepresence. Our technique approximates the shapes of objects at multiple levels of detail by using sets of spheres arranged into hierarchies we call ``sphere-trees.'' Sphere-trees can be built automatically by a preprocess that uses medial-axis surfaces, which represent the shapes of objects in skeletal form. The root of a sphere-tree is a single bounding sphere. Collision detection between two bounding spheres is fast, but inaccurate. By traversing the hierarchy of spheres, we check for collisions of spheres that bound successively smaller portions of the object, leading to collision detection that provides more accurate results given more computation time.
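A minimal sphere-tree query might look like the sketch below. The budget-limited recursion and the conservative answer on timeout capture the time-critical idea; building the trees from medial-axis surfaces, as described above, is not shown:

```python
import math

class SphereNode:
    """A node in a sphere-tree: a bounding sphere plus optional children
    that bound successively smaller portions of the object."""
    def __init__(self, center, radius, children=()):
        self.center, self.radius, self.children = center, radius, children

def spheres_overlap(a, b):
    return math.dist(a.center, b.center) <= a.radius + b.radius

def collide(a, b, budget):
    """Time-critical collision query: returns (hit, exact). When the
    budget runs out we report the conservative bounding-sphere answer."""
    if not spheres_overlap(a, b):
        return False, True                  # definitely no collision
    if not a.children and not b.children:
        return True, True                   # leaf-leaf contact is exact
    if budget <= 0:
        return True, False                  # out of time: be conservative
    # descend into a node that has children, preferring the larger sphere
    if a.children and (not b.children or a.radius >= b.radius):
        parent, other = a, b
    else:
        parent, other = b, a
    for child in parent.children:
        hit, exact = collide(child, other, budget - 1)
        if hit:
            return True, exact
    return False, True

# Object A: a root sphere loosely bounding two small leaf spheres.
leaf1 = SphereNode((1.0, 0.0, 0.0), 0.5)
leaf2 = SphereNode((-1.0, 0.0, 0.0), 0.5)
root_a = SphereNode((0.0, 0.0, 0.0), 2.0, (leaf1, leaf2))
probe = SphereNode((2.2, 0.0, 0.0), 0.5)
```

With a zero budget the query reports a (possibly spurious) hit from the root spheres alone; with budget to spare it descends and discovers that no leaf actually touches the probe.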
3.3.2 Hardware Architectures
Analog VLSI
On August 15th, 1995, the Center was granted Patent #5,442,583 for Compensated Analog Multiplier Circuits. This type of multiplier is part of our project for performing computer graphics calculations in analog VLSI hardware. In addition, this past year there has been a breakthrough in analog VLSI techniques at the NSF ERC for Neuromorphic Engineering: a circuit and method have been developed for setting and stably storing analog values -- in other words, creating stable analog memory. This works around one of the key impediments to achieving quantitative calculations in analog VLSI, the lack of stable analog memory. We will be evaluating the breakthrough and seeing how well it fits with teleological circuit approaches. We expect that this may be a key component of analog computations for computer graphics.
3.3.3 Tracking Technology
Tracking continues to be an extremely hard problem, due to the human perceptual system's relative intolerance for lag and inaccuracy. The Center has been attempting to develop techniques to tackle lag, which has been shown to be the largest factor in tracker error. The Center is also developing more useful trackers that are lighter-weight and smaller without sacrificing performance.
Analysis of Head-Motion Prediction
The Center has analyzed the performance of two kinds of prediction methods for head-motion tracking. This information is especially useful when designing tracking hardware for immersive virtual reality. A polynomial extrapolation method with perfect data and a Kalman filter prediction method using noisy data were analyzed in the frequency domain. One result of the analysis is that error grows quadratically with both the prediction interval and the frequency of motion. These analysis methods allow designers to determine the largest acceptable delay between tracker reporting and image display based on the characteristics of a user's motion in a given application. [AZUM95a] [AZUM95b]
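A generic second-order polynomial extrapolator illustrates the idea (this is a textbook predictor, not necessarily the exact one analyzed in the cited work). It reproduces constant-acceleration motion exactly; for oscillatory head motion its error grows with both the lookahead interval and the motion frequency:

```python
def predict(p0, p1, p2, dt, lookahead):
    """Second-order polynomial extrapolation from the last three samples
    (p0 oldest, p2 newest, spaced dt apart). Velocity uses a one-sided
    difference at the newest sample so quadratic motion extrapolates
    exactly."""
    v = (3 * p2 - 4 * p1 + p0) / (2 * dt)   # velocity at the newest sample
    a = (p2 - 2 * p1 + p0) / (dt * dt)      # constant-acceleration estimate
    return p2 + v * lookahead + 0.5 * a * lookahead ** 2

# For x(t) = t^2 sampled at t = 0, 0.1, 0.2, predict t = 0.3 (true value 0.09).
pred = predict(0.0, 0.01, 0.04, 0.1, 0.1)
```

In a head-tracking pipeline, `lookahead` would be set to the total system latency from tracker report to photons on the display.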
Light-Weight Tracker
We have made progress in both hardware and software for a new light-weight optical tracker for virtual reality systems. UNC and Utah collaborated on the design of a novel optical device, the ``hiball,'' which is designed to spot LED beacons that have been placed on the ceiling, an inside-looking-out approach that will allow tracking in very large spaces. The hiball is a metal housing shaped like a dodecahedron (a solid with twelve faces) that places lenses at six of its faces and holds six photodetectors at the opposing faces. After UNC tracker researchers consulted with Utah's experts in manufacturing, the design was improved. Subsequently, several hiballs have been machined at the University of Utah. (See Plate 3)
3.3.4 Radiosity Walkthroughs of Complex Environments
The Center has investigated improving the computation and display of global illumination solutions by leveraging the Center's research in high-performance graphics hardware and in global illumination techniques. The techniques being developed have many uses, particularly in virtual reality applications that display realistic illumination at interactive rates. The team has explored new algorithmic approaches, special-purpose hardware, and parallel processing to generate and display radiosity solutions of building interiors. This research began with an evaluation of the Pixel-Planes hardware and PixelFlow simulators on global illumination solutions of complex environments; weaknesses in current hardware designs were discovered, and improvements for future display hardware were suggested. We sought quality and speed improvements in the display of precomputed radiosity solutions; these results may influence future global illumination algorithms as well as future display hardware. Parallel global illumination algorithms were designed and implemented, and methods for both multiprocessors and networks of workstations were studied.
Interpolation for Interactive Display of Radiosity Solutions
The common method used to render radiosity solutions on graphics accelerators is linear color interpolation, chiefly because it is directly supported by the hardware and is therefore fast. Unfortunately, this method can lead to artifacts, such as Mach banding, and requires careful meshing to give good results. We have implemented and optimized a second-order color interpolation method on Pixel-Planes 5 using that machine's quadratic interpolation hardware, and have used this method to display quadratically interpolated results from discontinuity meshing radiosity. Although second-order interpolation takes longer to compute, less densely meshed models are required for equivalent display quality, resulting in either a net gain in frame rate or better images for the same rendering time. Although PixelFlow, the next machine from UNC, does not support quadratic interpolation, its pixel processors are substantially faster and have more local memory. This extra capability leads us to believe that we can perform cubic color interpolation on PixelFlow, and we are currently investigating algorithms for doing this.
Meshing of Radiosity Solutions
The work on higher-order interpolation brought to light the fact that many of the meshes produced by radiosity solutions are less than optimal for display -- there's too much detail in some areas and not enough in others. We have been investigating methods for generating an efficient illumination mesh, and then applying them to our ``height field'' situation, where two dimensions are the parametric coordinates of a patch and the third dimension is the illumination over the patch. We have investigated a method proposed by Scarlatos for meshing of height-field data, and are now investigating extensions of Varshney's meshing algorithm [VARSH94]. We have also been developing an algorithm that takes a density estimation radiosity solution and generates an efficient mesh.[SHIR95] This algorithm does not have the benefit of precomputed discontinuities, but it is free to place sample points wherever they are needed to capture the detail of the solution efficiently. This same algorithm is being used to develop meshes with increasing levels of detail for use in time-critical computing. Our work with these mesh decimation and generation algorithms will also consider the possibility of generating meshes that use higher-order lighting interpolation. We can test combinations of algorithms and interpolation methods to maximize image quality versus display time.
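Error-driven mesh generation of this kind can be sketched in one dimension: subdivide an interval of the illumination ``height field'' only where linear interpolation misses the function by more than a tolerance. The illumination profile and tolerance below are invented:

```python
def refine(f, a, b, tol, depth=0, max_depth=10):
    """1-D sketch of error-driven mesh refinement: split an interval only
    where linear interpolation misrepresents f by more than tol, so
    sample points cluster where the illumination has detail."""
    mid = 0.5 * (a + b)
    linear = 0.5 * (f(a) + f(b))            # linear interpolant at the midpoint
    if depth >= max_depth or abs(f(mid) - linear) <= tol:
        return [a, b]
    left = refine(f, a, mid, tol, depth + 1, max_depth)
    right = refine(f, mid, b, tol, depth + 1, max_depth)
    return left[:-1] + right                # drop the duplicated midpoint

# A shadow-boundary-like profile: flat, then a steep linear ramp.
profile = lambda x: 0.0 if x < 0.6 else (x - 0.6) * 10.0
mesh = refine(profile, 0.0, 1.0, tol=0.01)
```

The resulting vertex list is dense near the ramp at x = 0.6 and sparse over the flat region, which is precisely the ``detail where it is needed'' behavior sought above.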
Parallel Radiosity Algorithms
We have implemented a parallel-processing version of a global illumination method using density estimation [ZARE95]. The algorithm uses a network of workstations to efficiently compute a global illumination solution using particle tracing. The results of this tracing are then filtered in a parallel local pass, and a mesh is generated for the solution. We have also investigated implementing the ``Path Buffer'' algorithm [WALT95b] on PixelFlow. This algorithm uses Kajiya-style path tracing to calculate a view-dependent illumination solution, and has been implemented in software. We have determined that the algorithm will run on PixelFlow hardware, and implementation is underway using the PixelFlow simulator. The goal is to generate ten or more screen updates per second in a frameless-rendering environment.
3.4 Interaction
3.4.1 Interaction with Complex Design Operators and Data Visualization
Two distinct research efforts have developed from fundamental research in 3D user interfaces. Interactive 3D widgets have been applied to areas such as CAD modeling and data visualization for computational steering. Our 3D CAD modeling widgets allow intuitive specification of various design operators and also help users understand 3D shape and spatial relations in complex scenes.
The other widget project was motivated by a lecture in the Center's televideo course. The Scientific Computing and Imaging (SCI) group at the Utah site initiated a project in direct manipulation of computational medicine visualizations, in particular, simulation of electric fields in the human torso. Interactive exploration requires a clear relationship between the researcher's manipulations and their effect on the data. Direct manipulation provides the researcher with an intuitive interface, since an element's controls are part of the element itself; this increases interactivity and allows more fluid exploration of scientific data, with greater ease of use than traditional interfaces have offered to date.
Optimized Computer-Generated Motions for Animation
In another project, we have continued our work on covariant interpolation. Researchers in computer animation have long sought to move objects in user-desired ways with a minimum of user interaction. Objects moving from one place to another follow a path, often determined by a spline. We would like to let the user specify a characteristic of the object's motion and have the animation system choose a motion path that exhibits that characteristic.
Direct Manipulation of Motion Curves
The Center has also developed techniques for direct manipulation of motion curves. By separating the time of an animation (represented as a 2D monotonic curve) from the parameters being animated (such as the position and orientation, represented as paths through space), an animator can specify such high-level animation concepts as ``reach this point at this time'' or ``go faster at this time.'' The user's manipulations are transformed into displacement functions that can be composed with a path to produce simple, predictable changes to the path. [SNIB95]
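The displacement-function idea can be sketched directly: the edit is stored as a separate function and added to the path, so edits compose and the original curve survives intact. The bump shape and all values below are invented for illustration:

```python
def displaced(path, displacement):
    """Compose a motion path with a displacement function. The user's
    edit lives in `displacement`, so the original path is untouched and
    the change is easy to adjust, blend, or remove."""
    return lambda t: path(t) + displacement(t)

def bump(center, width, height):
    """A local, smooth displacement: nonzero only near `center`."""
    def d(t):
        u = (t - center) / width
        return height * max(0.0, 1.0 - u * u) ** 2
    return d

# Original path is a straight line; the edit says
# "be 2 units higher at t = 0.5" without disturbing the endpoints.
path = lambda t: t
edited = displaced(path, bump(0.5, 0.25, 2.0))
```

Because the bump falls smoothly to zero outside its support, the change is simple and predictable: the path is exact outside the edited interval, which is the behavior the text describes.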
3.5 Scientific Visualization
Scientific visualization is a key component of the Center's activities. As a driving application, it both focuses the Center's other research and is research itself.
3.5.1 User Interfaces for Scientific Visualization
Research funded in part by NASA is directed primarily at developing 3D interaction techniques (or 3D widgets) for manipulating tools used to visualize and navigate through scientific visualization environments. We are using a computational fluid dynamics (CFD) dataset, provided by NASA, of airflow past the body of the space shuttle. This dataset was computed on a curvilinear grid and contains velocity data at each sample point.
Positioning Techniques
The positioning techniques include interactive shadows; object handles aligned with the world coordinate system, the object coordinate system, or the computational grid axes; and data-space handles. These techniques are used to constrain translation to one or two dimensions, and are especially useful for moving objects in three dimensions when only 2D displays and input devices are available.
Other Datasets
While most of our development uses the NASA space shuttle dataset, we have also been experimenting with other data sets in other domains to determine how much the visualization domain affects the demands on the user interface. One dataset we have used is a multifield, time-varying simulation of convection currents in the Earth's mantle computed on a rectilinear grid. Another data set is derived from computational medicine and was mentioned earlier in the section on Interaction.
Flux Ball
The flux ball is a method developed by the Center for visualizing the direction of a fluid flow as it passes through a region of space, in our case a spherical region. As fluid flows into or out of the spherical region, we calculate the angle at which it crosses the boundary and compare this with the normal to the sphere's surface. By sampling this angle at a number of points on the surface of the sphere, we can produce contour lines of similar angles. We draw these contours and color them according to the direction of flow and the magnitude of the angle. The final effect is a set of concentric contours around the sphere oriented in the direction of flow. This is a visually compact representation of complex data.
Advected Ring
Smoke rings are similar to streamlines but do not represent the entire path of a particle through the dataset. Instead, we arrange a set of particles in a ring and advect them all simultaneously through the dataset. At each integration step, we draw a line connecting all of the sample points together. Thus, at the first integration step, we see a ring-shaped object. As this ring of points is advected through the dataset, it deforms according to the vector field data. In order to maintain the ring's visual continuity, if any two adjacent points move too far apart from each other, new points are introduced to fill the gap. Just as with the rake widget, we can see how points that are initially in close formation diverge as they pass through the dataset, so that features such as vortices and divergences are revealed by the ring's deformation. (See Plate 4)
3.5.2 Data Analysis for Visualization
Data interpretation is an important step in visualizing any form of measured data. The Center has been exploring the analysis of two widely used forms of medical imaging, ultrasound and magnetic resonance imaging (MRI).
Reducing Noise Artifacts in Ultrasound
The Center has been developing a method for reducing noise in medical ultrasound images. This work could be extremely valuable, since ultrasound is now an important tool in widespread use in many areas of medicine. Ultrasound has advantages over other medical imaging techniques in that it is cheap, portable, non-invasive, and generally safe. Its primary drawback is that ultrasound images are heavily corrupted by noise, or ``speckle.'' The problem of reducing this noise while preserving edges is hard because ultrasound images contain both large- and small-scale features (e.g., heart walls, small arteries) and important details that must be preserved, such as a small difference in grey levels between two adjoining areas that could signify a lesion. It has been observed that in ultrasound movies the degree of detail visible suddenly seems to decrease when the movie is paused. Thus the Center's technique uses interframe coherence of features to determine what is detail and what is noise.
Geometric Model Extraction from Magnetic Resonance Volume Data
In this work we develop a computational framework and new algorithms for creating geometric models and images of physical objects. Our framework combines magnetic resonance imaging (MRI) research with image processing and volume visualization. This work is extensively interdisciplinary, and has been carried out in close collaboration with the MRI team of the Human Brain Project at the Caltech Biological Imaging Center.[LAIDt95]
Within the model extraction computational framework we measure physical objects yielding vector-valued MRI volume datasets. We process these datasets to identify different materials, and from the classified data we create images and geometric models. New algorithms developed within the framework include a goal-based technique for choosing MRI collection protocols and parameters and a family of Bayesian tissue-classification methods.[GHOS95] (See Plate 1)
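A bare-bones Bayesian voxel classifier of this general kind might look like the sketch below. The tissue statistics are invented, and the independence assumption across channels is a simplification of ours, not necessarily one the Center's methods make:

```python
import math

def gaussian(x, mean, std):
    """Gaussian likelihood of one channel value."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def classify(voxel, tissues):
    """Maximum a posteriori label for one vector-valued MRI voxel.
    Each tissue has a prior probability and, per channel, a (mean, std)
    pair; channels are treated as independent for simplicity."""
    best, best_score = None, -math.inf
    for name, (prior, channels) in tissues.items():
        score = math.log(prior)
        for value, (mean, std) in zip(voxel, channels):
            score += math.log(gaussian(value, mean, std))
        if score > best_score:
            best, best_score = name, score
    return best

# Hypothetical two-channel statistics (e.g., two MRI pulse sequences).
tissues = {
    "white_matter": (0.5, [(0.8, 0.1), (0.3, 0.1)]),
    "gray_matter":  (0.5, [(0.5, 0.1), (0.5, 0.1)]),
}
label = classify((0.78, 0.32), tissues)
```

Working in log space avoids underflow when many channels are combined, and the per-tissue priors are where goal-based protocol selection pays off: protocols chosen to separate the class likelihoods make this decision sharp.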
The goal-based data-collection technique chooses MRI protocols and parameters subject to specific goals for the collected data. Our goals are to make identification of different tissues possible with data collected in the shortest possible time. Our method compares results across different collection protocols, and is fast enough to use for steering the data-collection process.
The computational framework for building geometric models allows computer graphics users to more easily create models with internal structure and a high level of detail. Applications arise in a variety of fields including computer graphics modeling, biological modeling, anatomical studies, medical diagnosis, CAD/CAM, robotics, and computer animation.
Converting Isosurfaces to Smooth Surfaces
By using manifolds, as described in the modeling section, we were able to efficiently produce smooth representations of isosurfaces in volume data to a given level of accuracy. This is the first step in further research on converting representations and visualizations to alternative forms. [GRIM95b]
3.6 Telecollaboration
The Center has been using multiway televideo to facilitate collaboration for several years now. While this has been a success, we would like to be able to do more. Video conferencing lets us talk about our work, but it can be difficult to show it to one another, more difficult than if we were all sitting in the same room. Some of our initial results include improving our abilities to share resources and providing a shared virtual environment.
Remote Interactive Use of Graphics Engines
We are now able to use UNC's unique Pixel-Planes facility directly from Utah's Alpha_1 design system. An interface between Utah's Alpha_1 modeling system and the UNC Pixel-Planes 5 computer keeps a Pixel-Planes 5 copy of a model under design in Alpha_1 constantly updated, so that it can be independently viewed. The rapid rendering of sculptured models on Pixel-Planes supports this research. In the long run, we want to expand into virtual environment interfaces for collaborative design, in which the user can move around or within a complex model (immersively) and interactively refine it.
Shared Virtual Environments
We have built a prototype system in which two or more people who are at separate locations can be immersed in a shared virtual environment. The goal of such a system is to allow remote participants to collaborate on tasks such as medical consultation and mechanical design review. Both participants wear a head-mounted display that shows the shared environment (e.g., medical data for a patient) and also shows representations of the other participants. In the prototype system, the representation of one's collaborator is a simple polygonal human model that has real-time video of the other user's face placed on the model's head. The model's movement matches that of the collaborator, and one can watch this person's face as he/she is talking. The video of a participant's face is captured by a camera suspended on the head-mounted display.
3.7 Standards
The Center has been contributing its experience in 3D graphics to the evolving standard for 3D graphics on the World-Wide Web, VRML (for Virtual Reality Modeling Language). At present, VRML is simply a file format for 3D graphics, but with help from Center researchers, VRML is developing into a full-fledged mechanism for describing geometry and behavior for distributed, multi-participant virtual worlds. Our work in VRML is driven by our early explorations of its capabilities in the setting of a large hypermedia environment.
Large Multimedia Web Sites
WAXweb is a large multimedia web site, including text in four languages, images, sound clips, video clips, and 3D environments, all interlinked. The support software for WAXweb was developed by Center researchers and included the first known use of VRML on the Web. [MEYE95b]
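At the time, a VRML world was simply a plain text file describing a scene graph. A minimal, hypothetical example in VRML 1.0 syntax (for illustration only; not taken from WAXweb):

```vrml
#VRML V1.0 ascii
Separator {
    Material { diffuseColor 1 0 0 }   # red surface material
    Sphere { radius 1 }               # unit sphere at the origin
}
```

Browsers parse such a file into a scene graph for interactive 3D viewing, and hyperlink nodes (WWWAnchor) let a world link to other Web documents, which is what made WAXweb's interlinked 3D environments possible.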
The VRML Architecture Group
Work on WAXweb and the Center's experience with graphics systems and standards efforts led to a Center researcher's membership in the VRML Architecture Group (VAG), the standards organization for VRML. The VAG's activities include clarifying the existing specification and describing extensions. Extensions expected for VRML soon include support for multimedia and a revised system model that is more widely portable. The next major release of the VRML specification (VRML 2.0) will support interactive behaviors in the virtual environment, including support for interactive widgets, collision detection, and modeling tools. The subsequent planned release (VRML 3.0) will provide multi-participant distribution. Center researchers have published papers at the first VRML conference on effective ways to provide these capabilities. [MEYE95a] [MEYE95c]
RBML
The Center has also been the only outside consultant on Microsoft's proposal for adding behavior to VRML. The specification of Microsoft's system, called RBML (Reactive Behavior Modeling Language), has just been publicly released. Microsoft researchers relied on the Center's experience in graphics APIs and behavior specification for valuable feedback on the initial release.
4.0 Research Plans for the Coming Year
The Center's plans for the coming year call for re-examining and, where appropriate, updating our research plans. This year is an ideal time to re-evaluate the Center's direction, as the Directorship has recently passed from Don Greenberg to Andy van Dam (as discussed in Section 5, Management).
4.1 Mathematical Foundations
We have determined that we need to strengthen our focus on research issues in the mathematical foundations of computer graphics. This includes new techniques and approaches to modeling, rendering, and simulation that combine into one general-purpose framework such elements as differential geometry, constrained optimization, integral equations, partial differential equations, the mechanics of solids, and the physics of light.
Parallel Implementations of Interval Analysis
We are researching methods to exploit parallelism in our interval analysis calculations, in collaboration with the Center for Computational Biology. That group has created CC++, a parallel version of C++ that appears well suited to this application.
Wavelets on Surfaces
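The property that makes wavelets attractive can be seen in a toy sketch (illustrative Python, not Center code) of the 1-D Haar transform, the simplest wavelet basis: smooth or piecewise-constant data yields mostly zero detail coefficients, which can be discarded for compression.

```python
# Illustrative sketch only: full 1-D Haar wavelet decomposition and its
# inverse, for signals whose length is a power of 2.

def haar_forward(data):
    """Return [overall average, detail coefficients coarsest-first]."""
    coeffs = []
    approx = list(data)
    while len(approx) > 1:
        avgs = [(approx[i] + approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        diffs = [(approx[i] - approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        coeffs = diffs + coeffs        # prepend so coarser detail levels come first
        approx = avgs
    return approx + coeffs

def haar_inverse(coeffs):
    """Reconstruct the original signal from haar_forward's output."""
    approx = coeffs[:1]
    rest = coeffs[1:]
    while rest:
        n = len(approx)
        details, rest = rest[:n], rest[n:]
        # each (average, difference) pair expands back into two samples
        approx = [v for a, d in zip(approx, details) for v in (a + d, a - d)]
    return approx
```

For example, the eight-sample signal [4, 4, 4, 4, 6, 6, 6, 6] transforms to [5, -1, 0, 0, 0, 0, 0, 0]: two nonzero coefficients represent it exactly. The constructions planned for surfaces and BRDFs generalize this idea from 1-D signals to functions defined on surfaces.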
We plan to develop wavelet constructions for representing functions defined on surfaces; examples include multi-resolution surfaces for computer graphics modeling, functions for characterizing bi-directional reflectance distribution functions (or BRDFs) of real materials, and wavelet-based methods for global illumination. Wavelets have proven to be powerful bases for use in numerical analysis and signal processing, since they require only a small number of coefficients to represent general functions and large data sets accurately. This allows compression and efficient computations.
Interval Analysis
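The guarantee at the core of interval analysis -- every computed interval provably encloses the true result -- can be shown with a toy interval-arithmetic class (a sketch for illustration, not the Center's testbed):

```python
# Illustrative sketch only: minimal interval arithmetic. Every operation
# returns bounds guaranteed to enclose the true result.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi

# Evaluating f(x) = x*x - x over [0, 1] in interval arithmetic:
x = Interval(0.0, 1.0)
f = x * x - x      # a guaranteed enclosure of the true range [-0.25, 0]
```

The enclosure is loose because x appears twice (the classic dependency problem), but it is never incorrect; subdividing [0, 1] and recursing tightens the bounds, and those independent subintervals are exactly what makes the approach amenable to parallel implementation.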
We also plan to implement and test a parallel version of an interval analysis testbed in CC++ on heterogeneous networks of workstations and on Paragon parallel supercomputers. Interval analysis is a powerful new approach to computer-assisted geometric computation and modeling. The main advantage of the approach is that it allows specification, rendering, and analysis of many kinds of shapes, graphical interactions, and optimization problems found in computer graphics. Interval analysis algorithms seem highly suited to parallel implementation.
4.2 Modeling
As our modeling work has been proceeding well, we plan to build on it and on our improved mathematical basis by addressing the following areas:
Higher-Dimensional Manifolds
We plan to extend manifold technology to higher-dimensional objects, including configuration spaces for complex assemblies, and to develop usable tools for expressing ideas of differential geometry on computer-graphics manifolds.
Continued Work on Covariant Interpolation
We will continue to develop splining and interpolation methods in nonlinear spaces, improving their efficiency and applicability.
Continued Work on Correction of Geometric Perceptual Distortion in Pictures
As part of our work on the mathematical foundations of computer graphics, we are developing additional methods to correct geometric distortions in computer-generated and photographic images. This work focuses particularly on additional soft constraints for non-structural conditions that reduce distortion, as well as on conformal mapping techniques.
Continued Work in Structured and Physically Based Modeling
We are researching structured methods for physically based hierarchy, multipoint collisions, and other simulation methods that will be useful for computer graphics and computational biology.
Texel Research
We will be researching methods for utilizing texels (texture elements) in rendering (such as Kajiya's fur-rendering algorithm), as well as ways to incorporate texels into conventional ray tracers. This will be useful for making scientific visualizations of mammals (most of which have fur) from MRI data.
4.3 Rendering
With the addition of our light measurement lab, we will continue work on rendering by extending our existing work and verifying its accuracy experimentally.
Measure BRDFs
We will measure the first BRDFs for both isotropic and anisotropic sample materials, within experimentally limited ranges of incident angles, and make the results publicly available.
Verify Lighting Accuracy
We will compare CCD-captured, full-spectral images of physical scenes with images simulated from models of the same scenes using global illumination techniques, in order to calibrate global illumination algorithms.
Improve Representations of BRDFs
We will develop more accurate and compact representations of BRDFs for both Monte Carlo and finite-element methods of global illumination. Currently there is no computationally convenient way to capture all of the degrees of freedom of BRDFs, whether through theoretical models or from physical measurements.
Improve Density Estimation
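In density estimation approaches to global illumination, light particles are traced through the scene and their hit points on surfaces are converted into an illumination estimate with a kernel density estimator. A 1-D sketch of that kernel step (illustrative only; the Epanechnikov kernel here is an assumption, not necessarily the Center's choice):

```python
# Illustrative sketch only: kernel density estimation over 1-D particle
# hit locations, the core conversion step from hits to a density estimate.

def density_estimate(hits, x, bandwidth):
    """Estimate hit density at position x from 1-D hit locations,
    using an Epanechnikov kernel of the given bandwidth."""
    total = 0.0
    for h in hits:
        u = (x - h) / bandwidth
        if abs(u) < 1.0:
            total += 0.75 * (1.0 - u * u)   # Epanechnikov kernel weight
        # hits farther than one bandwidth away contribute nothing
    return total / (len(hits) * bandwidth)
```

With many hits the estimate converges to the underlying hit density, and hence to the illumination, without any mesh of the scene -- which is why the method can cope with geometry that is difficult for traditional radiosity meshing.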
We plan to extend the density estimation method for global illumination to handle large, complex models robustly, including geometry that is difficult to handle with traditional radiosity techniques.
Combine Density Estimation and Discontinuity Meshing
We will investigate combining discontinuity meshing for direct lighting (to capture shadow boundaries) with density estimation techniques in the more smoothly varying intermediate regions.
Perceptual Studies of Lighting Accuracy
We plan to conduct a series of perceptual studies to investigate the relationships between the accuracy of the computational models used in rendering algorithms and the visual fidelity of the resulting images. From these studies, we will derive perceptual error metrics that will help us create efficient rendering algorithms that maintain the highest possible levels of visual quality for given levels of computational resources.
Omission of Lighting Detail
We plan to make interactive display of large architectural models with realistic lighting possible by geometrically simplifying models where changes in lighting are too small to notice. The expected factor-of-ten increase in performance will allow smooth motion through virtual buildings and large vehicle interiors.
4.4 High-Performance Architectures
Our work in high-performance architectures will continue developing time-critical algorithms.
Time-Critical Rendering
We will continue development of techniques for time-critical rendering that degrade visual characteristics of less perceptual importance in order to maintain real-time performance. We plan to incorporate the time-critical rendering techniques into a framework for time-critical applications using scheduling algorithms to budget time for rendering and simulation within real-time constraints.
Reduce Lag in a Tracker
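Why a Kalman filter removes the need to batch sightings can be seen in a deliberately simplified scalar sketch (the real tracker estimates full 3-D pose; this is illustration only): each measurement updates the running estimate the moment it arrives, weighted by its uncertainty.

```python
# Illustrative sketch only: a scalar Kalman-style update fusing one new
# measurement into the current estimate, so no batch of sightings is needed.

def kalman_update(est, var, measurement, meas_var):
    """Fuse one measurement (with variance meas_var) into the current
    estimate est (with variance var); returns the updated pair."""
    gain = var / (var + meas_var)            # how much to trust the measurement
    new_est = est + gain * (measurement - est)
    new_var = (1.0 - gain) * var             # uncertainty shrinks with each update
    return new_est, new_var
```

Starting from estimate 0.0 with variance 1.0, a single measurement of 10.0 (variance 1.0) moves the estimate to 5.0 and halves the variance; each LED sighting can therefore be folded in individually rather than waiting for a batch.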
We will increase the reporting rate of an optical tracking system by roughly a factor of 15 by incorporating each new sighting of an LED beacon into the best estimate of position as it arrives. A Kalman filter eliminates the need to wait until a large number of sightings can be collected, and will reduce the latency from 45 to 2 milliseconds.
4.5 Interaction
Our user interface work will move into virtual environments and strive to produce useful interfaces to some of the Center's new modeling techniques.
3D User Interfaces for VR
We will explore the use of our 3D user interface technology in virtual reality. This project will draw on our extensive knowledge of desktop user interfaces, but will also involve constructing new 3D user interface tools and interaction techniques designed specifically for VR environments.
Non-Exclusive Collaboration
We plan to investigate user interface mechanisms that allow distributed participants to modify the same object simultaneously. Rather than presenting a user interface with a ``lock'' as in chalk-passing protocols, we wish to provide a seamless experience -- when a participant wishes to modify an object, she simply does so, grabbing it and making the changes as in the real world (rather than first needing to grab the ``chalk'' and then grab the object to be modified).
User Interface for a Manifold Surface
We plan to develop a user interface that leverages the power of the manifold-based surface model. Because of the flexibility of the model, general shapes can be built quickly, while still allowing detailed and precise refinements. Previous user interfaces either built general shapes easily or allowed precise refinements, but not both.
4.6 Scientific Visualization
The Center will be continuing its work in scientific visualization, expanding into new domains and improving prior work.
Scientific Visualization in Immersive Environments
VR user interfaces for scientific visualization are in many ways similar to user interfaces for other VR applications. For instance, the three basic tasks -- picking objects, manipulating objects, and navigating the viewpoint -- are all important. However, the scientific visualization domain presents application-specific requirements that we must consider. For instance, the visualization tools that we place in a dataset have many parameters that should be accessible to the user of the system. We will investigate what makes a good interface to these parameters for scientists, one that works within the limitations of input and output devices.
Remote Microscope Control
We are developing interactive software to remotely control a high-resolution microscope at the University of California, San Diego (the CMDA Project) with Professor Mark Ellisman. Scientists will be able to interactively search and focus on portions of specimens using a high-resolution electron microscope, receiving volumetric and surface data at varying resolutions. The project is a collaborative effort among the San Diego Supercomputer Center, the San Diego Microscopy and Imaging Resource (SDMIR), and the Graphics and Visualization Center. The hardware/software environment will be used by NIH researchers throughout the country.
Volume Reconstruction of Ultrasound
An intravascular cardiology project is being conducted by the Center and Stanford's Department of Cardiology. Preliminary results show volume reconstruction from intravascular ultrasound imaging within the arteries of a beating heart. [LENG95] Results will enable cardiologists to evaluate appropriate procedures ranging from balloon angioplasty to atherectomy to bypass surgery. The work is being conducted with Professor Richard Popp at Stanford.
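The core of such a reconstruction can be sketched (illustrative only, not the project's code; a real system must also register slices against the cardiac cycle): 2-D ultrasound slices with known positions are accumulated into a regular 3-D voxel grid, averaging where samples overlap.

```python
# Illustrative sketch only: composite positioned 2-D slices into a voxel
# grid volume[z][y][x], averaging samples that land in the same voxel.

def reconstruct_volume(slices, depth, height, width):
    """slices: list of (z_index, 2-D intensity list); returns the voxel grid."""
    total = [[[0.0] * width for _ in range(height)] for _ in range(depth)]
    count = [[[0] * width for _ in range(height)] for _ in range(depth)]
    for z, image in slices:
        for y in range(height):
            for x in range(width):
                total[z][y][x] += image[y][x]
                count[z][y][x] += 1
    # average overlapping samples; voxels no slice touched stay 0.0
    return [[[total[z][y][x] / count[z][y][x] if count[z][y][x] else 0.0
              for x in range(width)] for y in range(height)] for z in range(depth)]
```

The resulting grid can then be handed to standard volume rendering or isosurface extraction for display.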
Improve Tissue Classification
We will continue to develop a wider variety of tissue classification methods and goal-based methods. For example, we plan to develop classification methods for thin (subvoxel) sheets of tissue and for tracking thin filaments of stained material. (This last method presupposes chemical staining methods, to be developed in the Caltech Biological Imaging Center, that provide sufficient signal from such small subvoxel structures.) In addition, we will develop methods to automatically calibrate the MRI machine and will explore different imaging modalities.
Continued Work on Model Extraction from Data and Tissue Classification
As part of the collaborations with the Human Brain Project, we will continue our work with high-resolution data and goal-based methods.
4.7 Telecollaboration
Our plan for research in telecollaboration has two foci: a prototype telecollaboration facility, and research into techniques for reproducing environments and objects remotely. The prototype will focus on real-world applications that require ``collaboration,'' while the long-term research will focus on providing the ``tele'' capability. The research plan is designed to leverage the Center's experience with telecollaboration, as well as the existing televideo infrastructure.
4.8 Standards
The Center's standards work will continue as the VAG develops new ideas for successive VRML revisions. In addition, the Center is considering hosting a VRML Consortium that would give industry a forum to express its needs in the continuing evolution of VRML. The Center would provide leadership and an impartial, vendor-neutral home.