Stereoscopic Displays and Virtual Reality Systems VII
Proceedings of the SPIE Volume 3957

Introduction

"Stereoscopic Displays and Virtual Reality Systems VII", volume 3957 of Proceedings of SPIE, combines the presentations from the two complementary conferences: "Stereoscopic Displays and Applications XI" and "The Engineering Reality of Virtual Reality 2000". The two conferences were held sequentially during the IS&T/SPIE Electronic Imaging 2000 Symposium, as part of the Photonics West 2000 Symposium, and are combined in this single proceedings volume due to the close connection of the topics covered in the two conferences.

This year, the Stereoscopic Displays and Applications (SD&A) conference was once again held over a three-day period, from Monday 24th January through Wednesday 26th, combining technical paper presentations with the demonstration session, the keynote presentation, and the conference dinner. There was plenty of opportunity for authors and attendees to meet and talk. It is pleasing to see how these conference events and the conference website help to reinforce a sense of community among those working in the field of stereoscopic imaging. On the technical side, the presentations and papers continued to track recent developments in the field, both in terms of technology and in relation to human factors issues.

This year's conference commenced with a half-day session on Stereoscopic Vision and Human Factors, chaired by John Merritt. The session started with a paper asking whether stereoscopic displays might cause eye damage. Although the authors were not able to give a definitive answer, they did provide a very good starting point for answering this perennial question. Perception of stereoscopic displays was a common theme of the other five papers in this session.

The next session, Medical Applications, was chaired by Mike Weissman. It covered an area in which interest in, and use of, stereoscopic technology continues to grow. Three papers covered topics including stereoscopic retinal topography, an integral-photography-based autostereoscopic display, and the use of stereoscopic technology for surgical planning.

The third session, on Digital Stereoscopic Imaging, was chaired by Janusz Konrad. Three of the papers in this session dealt with the determination and use of disparity information from stereoscopic images. The fourth paper discussed a 3D-measurement system based on a common computer graphics platform.
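
For readers new to the area, the relationship between disparity and depth that underlies such work can be illustrated with a short sketch. The example below is purely illustrative and is not taken from any of the papers in this session; the function name and parameters are our own, and it assumes a rectified, parallel-axis camera pair with the focal length expressed in pixels and the baseline in metres.

    # Illustrative sketch only (not from any paper in this session):
    # depth from disparity for a rectified, parallel-axis stereo camera pair.
    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Return the depth in metres of a scene point.

        disparity_px    : horizontal disparity x_left - x_right, in pixels
        focal_length_px : camera focal length expressed in pixels
        baseline_m      : separation of the two camera centres, in metres
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a point in front of the cameras")
        return focal_length_px * baseline_m / disparity_px

    # Example: a 12-pixel disparity with f = 800 px and a 65 mm baseline
    # corresponds to a depth of roughly 4.3 m.
    print(depth_from_disparity(12, 800, 0.065))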

The final session of the day, Standards in Stereoscopic Imaging, started with a paper discussing the classification of various stereoscopic display methods. The session continued with a panel discussion on the setting of standards relevant to the stereoscopic imaging community. The session was chaired by Mike Weissman; the panel members were Andrew Woods, Lenny Lipton, and John Roberts, and there was good input from the floor. The panel session is discussed in more detail in the separate report by Mike Weissman.

Monday evening saw our regular conference dinner, which was, as in previous years, a well-attended event.

Tuesday started with a double session on Autostereoscopic Displays, which lasted almost the full day. This area remains a topic of intense interest and continuing innovation. The morning session was chaired by Shojiro Nagata and contained six papers; the afternoon session was chaired by Vivian Walworth and contained five. The papers covered a wide range of topics, including new techniques for autostereoscopy, head-tracking systems, fabrication of parts for autostereoscopic displays, and volumetric displays.

The seventh session of the conference, Stereoscopic Teleoperation, chaired by Andrew Woods, contained three papers discussing the results of experiments that examined teleoperator performance under varying stereoscopic viewing conditions. One of the papers provided interesting results questioning the conventional method of setting the convergence distance in stereoscopic teleoperation systems; hopefully this will prompt further research into this important issue.

The final session of the day, New Developments in Stereoscopic Imaging, contained two papers. They discussed a method for improving the performance of π-cells and a stereoscopic video system that combined four video cameras to provide improved central versus peripheral resolution. This session was chaired by Andrew Woods.

The third day of the conference started with a session titled "Stereoscopic Display Applications," chaired by Stephen Benton. This session made very good use of stereoscopic presentation techniques, with four of the five papers using the stereoscopic video projection, stereoscopic computer projection, or Stereojet projection facilities. Two notable topics were an autostereoscopic teleconferencing system that allowed natural eye contact to be maintained and an analysis of single-lens stereoscopic cameras showing their similarity to dual-lens stereoscopic cameras.

A continuing feature of the conference is the keynote presentation, in which an invited speaker is asked to discuss a high-profile or historic area of stereoscopic imaging. This year our keynote presenter was Chris Condon, a pioneer in the design of single-strip stereocinematography systems for feature films (e.g. the StereoVision system) and in the presentation of special-venue stereoscopic films. As well as reviewing the current status of large-screen stereoscopic cinema systems, Chris shared his view that the stereoscopic imaging community should move away from using the term "3D", because many other areas unrelated to stereoscopic imaging also use it. Chris also believed that we should avoid the use of "cheap" cardboard glasses: although such glasses are not actually that cheap, they give stereoscopic projection a "cheap" or "novelty" image. He also made a case for the licensing of individuals and companies offering stereoscopic imaging to public audiences. Chris's presentation was well received and provided an interesting conclusion to the SD&A conference.

A major highlight of this year's conferences was once again the demonstration session of Stereoscopic and VR Technologies and Applications, where speakers could provide hands-on, up-close experience of the hardware and software mentioned during the paper sessions. The session was chaired by Andrew Woods and Mike Weissman. Demonstrations included:

* Steve Aubrey from Aubrey Imaging (Santa Clara, California) displayed some of his large anaglyph phantograms of the cities of San Francisco and San Jose.

* Vivian Walworth from Rowland Institute for Science (Cambridge, Massachusetts) displayed some of their latest Stereojet transparencies.

* Ed Silver from San Francisco Imaging Services (San Francisco, California) displayed a wide selection of transparency- and reflection-mode Stereojet images.

* Graham Woodgate of Sharp Labs Europe (Oxford, UK) demonstrated their parallax-barrier-based autostereoscopic display, which provides a special alignment strip at the bottom of the image to allow viewers to easily align themselves with the correct viewing zone.

* Bryan Costales and Marcia Flynt of SL3D Inc. (Boulder, Colorado) demonstrated their stereoscopic microscope using the single-lens 3D approach, along with a range of SL3D 35mm slides.

* Phil Harman from Dynamic Digital Depth (Perth, Australia) demonstrated their field-sequential demultiplexer.

* The DV120 3D video standards converter from Curtin University (Perth, Australia) was demonstrated.

* David Mark from Mark Resources LLC (San Francisco, California) demonstrated a selection of stereoscopic prints in lenticular, anaglyph, and parallax-barrier format.

* Boris Starosta (Charlottesville, Virginia) demonstrated a wide selection of anaglyph stereoscopic prints, ranging from wholly computer-generated images, to 2D-to-3D conversions, to photographs shot in 3D.

* Toshiki Gunji from Ibaraki University (Ibaraki, Japan) demonstrated the "Scope Cache" system, illustrating how the problem of time lag between the measurement of a viewer's head position and the subsequent display of images on a computer screen can be reduced by advanced algorithms.

* Oliver Schmidt from Vision Drei (Mainz, Germany) demonstrated their field-sequential 3D frame-doubler.

* Dresden University of Technology demonstrated their eye-position detection system for use in autostereoscopic display systems.

* Susumu Nakajima from Tokyo University (Tokyo, Japan) showed an integral photography based autostereoscopic display as an example of the use of stereoscopic technology for surgical planning.

Stereoscopic photographs of these demonstrations can be seen at the SD&A conference web site: http://www.stereoscopic.org

A continuing highlight of the SD&A conference is the use of stereoscopic presentation techniques to augment authors' presentations. Many of this year's presentations were accompanied by large-screen stereoscopic images using stereoscopic video projection, polarized slide projectors, or Stereojet transparencies. We greatly appreciate the support of Brad Nelson of Nelsonex (Los Gatos, California), who provided the 3D Black Screen stereoscopic rear-screen video projection system for the duration of the SD&A conference. We must also thank David Mark for providing the dual-video-output computer for stereoscopic computer projection and Ed Silver of San Francisco Imaging Services for offering free Stereojet transparencies for authors' presentations.

A range of 3D videos were also shown during the conference, including:

* The Color of Gold by Jan Welt of Ice Man Cinema (Anchorage, Alaska)

* Mousetrapped by Ron Labbe of Studio 3D (Maynard, Massachusetts)

* A corporate demonstration video by C3D Television (Venice, California)

* A selection of other presentations that have been seen at previous conferences.

The official web site for the Stereoscopic Displays and Applications conference continues to provide a public face and focal point for related activities during the interval between yearly conferences. The site contains a wealth of information about past conferences, including proceedings listings and galleries of photographs taken at past conferences. The site is kept up to date with the latest news about the upcoming conferences. Point your browser to:

http://www.stereoscopic.org

Again this year, The Engineering Reality of Virtual Reality conference capped off the week of stereoscopic display papers by focusing on practical and novel advances in the field of virtual reality and immersive environments. Presentations throughout the day served as a reminder of the wide range of disciplines being called upon to further the state of the art. Advances are being made not only in the visual realm, but also in the audio, tactile, and cognitive areas. Specifically:

Mr. Nagata opened the morning session, which focused on telepresence. Mr. Gunji presented the first paper, discussing the "Scope Cache" system developed at Ibaraki University, which eliminates the time lag between the operation of a telepresence camera and the display of the resulting images.
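
The preface does not describe how the system achieves this, but one common way to hide such latency, offered here only as an illustrative assumption and not as a description of the Scope Cache method, is to cache a remote camera image wider than the displayed view and to crop the displayed window locally from that cache according to the most recent viewing input. A minimal sketch of that cropping step follows; all names and parameters are hypothetical.

    # Hypothetical latency-hiding sketch: crop the displayed window from a
    # cached wide-field image using the latest viewing input. This is an
    # illustrative assumption, NOT the Scope Cache implementation.
    import numpy as np

    def crop_view(wide_image, pan_deg, tilt_deg, deg_per_pixel=0.1,
                  window=(480, 640)):
        """Cut the displayed window out of a cached wide-field image.

        wide_image       : H x W x C array cached from the remote camera
        pan_deg/tilt_deg : latest viewing direction relative to image centre
        deg_per_pixel    : assumed angular resolution of the cached image
        window           : (height, width) of the window actually displayed
        """
        h, w = wide_image.shape[:2]
        win_h, win_w = window
        # Convert the viewing direction into a pixel offset from the centre.
        cx = w // 2 + int(pan_deg / deg_per_pixel)
        cy = h // 2 + int(tilt_deg / deg_per_pixel)
        # Clamp so the window stays inside the cached image.
        x0 = min(max(cx - win_w // 2, 0), w - win_w)
        y0 = min(max(cy - win_h // 2, 0), h - win_h)
        return wide_image[y0:y0 + win_h, x0:x0 + win_w]

    # Example: display a 640 x 480 window from a cached 1920 x 1080 frame.
    cached = np.zeros((1080, 1920, 3), dtype=np.uint8)
    view = crop_view(cached, pan_deg=5.0, tilt_deg=-2.0)
    print(view.shape)  # (480, 640, 3)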

Mr. Shibuichi of Waseda University in Tokyo developed a system that allows a teleconferencing system to present a frontal face view, providing correct eye contact for natural communication among remote users. It accomplishes this by synthesizing an arbitrarily oriented view from several images, solving the problem of extracting the same feature point across those images.

Anna Plooy and John Wann's research on supporting natural prehension in teleoperation found a perplexing lack of performance benefit for interactive tasks when motion parallax is available compared with when it is not. Continuing work at the University of Reading aimed at resolving this result was discussed.

Dave Pape began a presentation on building a VR narrative by discussing general issues involved in creating art and heritage projects in virtual reality, focusing on XP, a new authoring system developed at UIC for use with the CAVE. Josephine Anstey continued the presentation by showing a videotape focused on a particular interactive narrative entitled "The Thing Growing."

Marco Lanzagorta detailed a collaborative system called the Responsive Advance Graphical Environment, which is hosted in a GROTTO room and on a workbench display. The system can be used to train Special Forces, to study military operations, and as a command-and-control device for rescue operations. These capabilities were demonstrated via a Special Forces hostage rescue operation known as Operation Nimrod.

With technology becoming smaller and more mobile, broadly deployed augmented-reality systems based on wireless links are becoming possible. Tino Pyssysalo of the University of Oulu outlined an adaptive streaming protocol that takes account of the quality and reliability of typical wireless links and networks when they are used for augmented reality.

Simple two-handed modeling of virtual objects, as if they were made of clay, is also closer to becoming feasible. Elke Moritz of the University of Kaiserslautern, working at the University of California at Davis, described a prototype system for immersive clay modeling that takes novice users only a few minutes to learn and allows modeling with a number of virtual tools.

Duncan Stevenson gave an overview of the challenges and benefits associated with small-scale, hand-immersive virtual environments that use haptic constraints. This gave perspective to the software implementation details of Matthew Hutchins' paper on software components for such a system, and introduced the concept of a constrained proxy.

As the fidelity of simulation increases, a need for higher-fidelity visualization is being felt. An immersive room and associated software library are being developed by the Naval Research Lab to meet this need. The Navy's research effort was described by Marco Lanzagorta and Rob Rosenberg; a binary metallic alloy, solar magnetic field lines, Langmuir-Blodgett monolayers, and shock-wave detonations were given as examples of the more than 25 data sets that have been explored using their system.

Working with industrial partners is key to turning new technologies into useful tools, and it is central to Duncan Stevenson's work with hapto-visual systems. Participants stood up and tried a hands-on exercise demonstrating how haptic devices feel and how critical force constraints are for constructive work. A prototype system that is scheduled to become a fielded product was discussed.

How large can such systems be made? Francis Bogsanyi of the Australian National University spoke to this point by exploring the requirements of collocation and scale for hapto-visual environments. Findings included the degradation of collocation fidelity as size increases, the usefulness of larger working volumes, and the need to utilize a pen offset.

Why limit input to only one hand? Falko Kuester's work at UC Davis allows for two-handed interaction with virtual clay on the Designer's Workbench. The system was created to help close the digital gap in the traditional design cycle and to facilitate cooperation between designer and engineer. Techniques such as surface-deforming magnets and hand-surface interpolation were highlighted.

A focus on the usability of products and applications for the user is at the center of Mario Doulis' work on user interaction tools for BMW's interactive stereo-projection wall at the Fraunhofer IAO institute. A principal finding was the value of a wireless device that allows free movement through virtual environments. A live demonstration of a prototype input device and its software was given.

VR systems for surgical planning have been presented at this conference in the past. This year Kevin Montgomery presented a paper that brings surgical planning tools and information displays into the operating room. His paper, "An Augmented Reality Environment for Intraoperative Assistance," detailed techniques such as HMD tracking to increase the experienced resolution and the placement of live endoscopic imagery aligned with the axis of manipulation.

It is satisfying to see the increasing popularity of the two conferences. Attendance has continued to increase each year, thanks to the diligent efforts of the conference co-chairs, conference committee, the authors, and those who provided equipment for the conferences and the demonstration session. The conferences also owe their success to the attendees, who represent a broad cross-section of the stereoscopic and VR technology community and initiate many important discussions during the sessions. Finally, we would like to express our continued appreciation for the efficient and competent logistics support provided by SPIE and IS&T personnel, who helped in many ways to make the conferences proceed successfully.

John O. Merritt, Stephen A. Benton, Andrew J. Woods, Mark T. Bolas.


