
  Advance Conference Program:

Stereoscopic Displays and Applications XXIX

The World's Premier Conference for 3D Innovation

Videos of many of the presentations at the conference are available for free viewing by clicking on the "Video" icons in the program list below.

Monday-Wednesday 29-31 January 2018
Hyatt Regency San Francisco Airport Hotel, Burlingame, California, USA.

To be published open-access as part of the IS&T Proceedings of Electronic Imaging.

Part of IS&T's International Symposium on Electronic Imaging: Science and Technology
Sunday-Thursday 28 January - 1 February 2018 · Hyatt Regency San Francisco Airport, Burlingame, California, USA.

[ Advance Program: Day 1, Day 2, Day 3, Keynote 1, Keynote 2, Demonstration Session, 3D Theatre, Discussion Forum ]   [ Register, Short Course ]

 

Projection Sponsors: DepthQ 3D by Lightspeed Design, Christie


3D Theater Partners: LA 3-D Movie Festival, 3-D Film Archive


Conference Chairs: Gregg Favalora, Draper (United States);
Nicolas S. Holliman, University of Newcastle (United Kingdom);
Takashi Kawai, Waseda University (Japan); and
Andrew J. Woods, Curtin University (Australia).

Founding Chair: John O. Merritt, The Merritt Group (United States).

Program Committee:

Neil A. Dodgson, Victoria University of Wellington (New Zealand);
Davide Gadia, Univ. degli Studi di Milano (Italy);
Hideki Kakeya, Univ. of Tsukuba (Japan);
Stephan Keith, SRK Graphics Research (United States);
Michael Klug, Magic Leap, Inc. (United States);
John D. Stern, Intuitive Surgical, Retired (United States);
Björn Sommer, University of Konstanz (Germany); and
Chris Ward, Lightspeed Design (United States).


Monday 29th January 2018

SESSION 1
Stereoscopic Developments
Session Chair: Takashi Kawai, Waseda University (Japan)
Mon. 8:50 - 10:10 am

8:50 am: Use of VR to assess and treat weaknesses in human stereoscopic vision, Benjamin Backus, James Blaha, Manish Gupta, Brian Dornbos, and Tuan Tran, Vivid Vision, Inc. (United States) [SD&A-109]
video manuscript

9:10 am: Emotional effects of car-based motion representations with stereoscopic images, Ryo Kodama and Nobushige Fujieda, Toyota Central R&D Labs, Inc.; Jo Inami, Yusuke Hasegawa, and Takashi Kawai, Waseda University (Japan) [SD&A-110]
manuscript

9:30 am: Mid-air imaging technique for architecture in public space, Ayaka Sano and Naoya Koizumi, The University of Electro-Communications (Japan) [SD&A-111]
video manuscript

9:50 am: A Refocus-Interface for Diminished Reality Work Area Visualization, Hideo Saito, Keio University; Momoko Maezawa and Shohei Mori, Keio University (Japan) [SD&A-112]
video manuscript

SD&A Conference Opening Remarks
by Andrew Woods, Curtin University (Australia)
Mon. 10:10 - 10:20 am
video manuscript

Coffee Break Mon. 10:20 - 10:50 am

SESSION 2
Autostereoscopic Displays 1: Light-field
Session Chair: John Stern, Intuitive Surgical, Inc. (United States)
Mon. 10:50 am - 12:30 pm

10:50 am: Initial work on development of an open Streaming Media Standard for Field of Light Displays (SMFoLD), Jamison Daniel, Benjamin Hernandez Arreguin, Oak Ridge National Laboratory; Stephen Kelley, C. E. (Tommy) Thomas, Paul Jones, Third Dimension Technologies; Chris Chinnock, Insight Media (United States) [SD&A-140]
video manuscript

11:10 am: Simulation tools for light-field displays based on a micro-lens array, Weitao Song, Nanyang Technological University (Singapore) and Advanced Innovation Center for Future Visual Entertainment (China); Dongdong Weng, Yue Liu, and Yongtian Wang, Advanced Innovation Center for Future Visual Entertainment and Beijing Institute of Technology (China) [SD&A-141]
manuscript

11:30 am: Full-parallax spherical light field display using mirror array, Hiroaki Yano and Tomohiro Yendo, Nagaoka University of Technology (Japan) [SD&A-142]
video manuscript

11:50 am: Fast calculation method for full-color computer-generated hologram with real objects captured by a depth camera, Yu Zhao1, Shahinur Alam1, Seok-Hee Jeon2, and Nam Kim1; 1Chungbuk National University and 2Incheon National University (Republic of Korea) [SD&A-250]
video manuscript

12:10 pm: Conversion of sparsely-captured light field into alias-free full-parallax multiview content, Suren Vagharshakyan, Robert Bregovic, Atanas Gotchev, and Erdem Sahin, Tampere University of Technology (Finland); Gwangsoon Lee, ETRI (Republic of Korea) [SD&A-144]
video manuscript

Lunch Break Mon. 12:30 - 2:00 pm

EI 2018 Opening Plenary Mon. 2:00 to 3:00 pm


Overview of Modern Machine Learning and Deep Neural Networks - Impact on Imaging and the Field of Computer Vision

Dr. Greg Corrado, co-founder of Google Brain and Principal Scientist at Google

            video

Dr. Corrado is a senior research scientist interested in biological neuroscience, artificial intelligence, and scalable machine learning. He has published in fields ranging across behavioral economics, neuromorphic device physics, systems neuroscience, and deep learning. At Google he has worked for some time on brain-inspired computing, and most recently has served as one of the founding members and the co-technical lead of Google's large-scale deep neural networks project.


Coffee Break 3:00 - 3:30 pm

SD&A Keynote Session 1
Session Chair: Nick Holliman, Newcastle University (UK)
Mon. 3:30 - 4:30 pm


What use is 'time-expired' disparity and optic flow information to a moving observer?

Andrew Glennerster, University of Reading (United Kingdom)   [SD&A-388]

video

Abstract:
It is clear that optic flow is useful to guide an observer's movement and that binocular disparity contributes too (e.g. Roy, Komatsu and Wurtz, 1992). Both cues are important in recovering scene structure. What is less clear is how the information might be useful after a few seconds, when the observer has moved to a new vantage point and the egocentric frame in which the information was gathered is no longer applicable. One answer, pursued successfully in computer vision, is to interpret any new binocular disparity and optic flow information in relation to a 3D reconstruction of the scene (Simultaneous Localisation and Mapping, SLAM). Then, as the estimate of the camera pose is updated, the 3D information computed from earlier frames is always relevant. No one suggests that animals carry out visual SLAM, at least not in the way that computer vision implements it, and yet we have no serious competitor models. Reinforcement learning is just beginning to approach 3D tasks such as navigation and to build representations that are quite unlike a 3D reconstruction. I will describe psychophysical tasks from our VR lab where participants point to unseen targets after navigating to different locations. There are large systematic biases in performance on these tasks that rule out (in line with other evidence) the notion that humans build a stable 3D reconstruction of the scene that is independent of the task at hand. I will discuss some indications of what the visual system might do instead.

Biography:
Prof. Andrew Glennerster studied medicine at Cambridge before working briefly with Michael Morgan at UCL, then doing a DPhil and an EU-funded postdoc with Brian Rogers on binocular stereopsis (1989 - 1994). He held an MRC Career Development Award (1994 - 1998) with Andrew Parker in Physiology at Oxford, including a year with Suzanne McKee at the Smith-Kettlewell Eye Research Institute, San Francisco. He continued work with Andrew Parker on a Royal Society University Research Fellowship (1999 - 2007), which allowed him to set up a virtual reality laboratory to study 3D perception in moving observers, funded for 12 years by the Wellcome Trust. He moved to Psychology at Reading in 2005, first as a Reader and now as a Professor, where the lab is now funded by EPSRC.

EI 2018 Symposium Reception
The annual Electronic Imaging All-Conference Reception provides a wonderful opportunity to get to know and interact with new and old SD&A colleagues.
Plan to join us for this relaxing and enjoyable event.
Mon. 5:00 - 6:00 pm

SD&A 3D Theatre
Session Chairs: John Stern, Intuitive Surgical, Inc. (United States); Chris Ward, Lightspeed Design, Inc. (United States); and Andrew Woods, Curtin University (Australia)
Mon. 6:00 to 7:30 pm

This ever-popular session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theater Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.


SD&A Conference Annual Dinner Mon. 7:50 pm to 10:00 pm

The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.

Tuesday 30th January 2018

SESSION 3
Stereoscopic Applications: VR to Immersive Analytics in Bioinformatics 1 (Joint Session)
Session Chair: Björn Sommer, University of Konstanz (Germany)

This session is jointly sponsored by: Stereoscopic Displays and Applications XXIX and The Engineering Reality of Virtual Reality 2018.

Tue. 8:50 - 10:10 am

8:50 am: Mesoscopic rigid body modeling of the ExtraCellular Matrix's self assembly, Hua Wong, Nicolas Belloy, and Manuel Dauchez, University of Reims Champagne-Ardenne (France) [SD&A-189]
video manuscript

9:10 am: Semantics for an integrative and immersive pipeline combining visualisation and analysis of molecular data, Mikael Trellet2, Nicolas Ferey2, Patrick Bourdot2, and Marc Baaden1; 1IBPC and 2LIMSI (France) [SD&A-190]
video manuscript

9:30 am: 3D-stereoscopic modeling and visualization of a Chlamydomonas reinhardtii cell, Niklas Biere4, Mehmood Ghaffar4, Daniel Jäger4, Anja Doebbe4, Nils Rothe4, Karsten Klein2,3, Ralf Hofestädt4, Falk Schreiber1,3, Olaf Kruse4, and Björn Sommer1,3; 1University of Konstanz, 2University of Konstanz, Germany, 3Monash University (Australia), and 4Bielefeld University (Germany) [SD&A-191]
manuscript

9:50 am: Immersive analysis and visualization of redox signaling pathways integrating experiments and computational modelling, Alexandre Maes2, Karen Druart1, Sean Guégan1, Xavier Martinez1,3, Christophe Marchand2, Stéphane Lemaire2, and Marc Baaden1; 1Laboratoire de Biochimie Théorique, CNRS, UPR9080, Univ Paris Diderot, Sorbonne Paris Cité, PSL Research University, 2Institut de Biologie Physico-Chimique, UMR8226, CNRS, Sorbonne Universités, UPMC Université Paris 06, and 3CNRS-LIMSI, VENISE team, Univ Paris-Sud (France) [SD&A-192]
video manuscript

Coffee Break Tues. 10:30 - 10:50 am

SESSION 4
Autostereoscopic Displays 2: Volumetric, Integral, Stackable, and Holographic
Session Chair: Gregg Favalora, Draper (United States)
Tue. 10:50 am - 12:30 pm

10:50 am: Recent progress in volumetric 3D digital light photoactivatable dye displays, Shreya Patel, Jian Cao, Anthony Spearman, Cecilia O'Brien, and Alexander Lippert, Southern Methodist University (United States) [SD&A-246]
video    

11:10 am: Integral imaging system using locally controllable point light source array, Hayato Watanabe, Masahiro Kawakita, Naoto Okaichi, Hisayuki Sasaki, and Tomoyuki Mishina, Science and Technology Research Laboratories, NHK (Japan Broadcasting Corporation) (Japan) [SD&A-247]
manuscript

11:30 am: Mobile integral imaging display using three-dimensional scanning, Munkh-Uchral Erdenebat1, Ki-Chul Kwon1, Erkhembaatar Dashdavaa1, Jong-Rae Jeong2, and Nam Kim1; 1Chungbuk National University and 2Suwon Science College (Republic of Korea) [SD&A-248]
video manuscript

11:50 am: Constructing Stackable Multiscopic Display Panels Using Microlenses and Optical Waveguides, Hironobu Gotoda, National Institute of Informatics (Japan) [SD&A-249]
video    

12:10 pm: Angular and Spatial Sampling Requirements in 3D Light Field Displays, Hong Hua, The University of Arizona (United States) [SD&A-143]
video    


Lunch Break Tues. 12:30 - 2:00 pm

EI Plenary Session 2 Tue. 2:00 to 3:00 pm

Fast, Automated 3D Modeling of Buildings and Other GPS-Denied Environments

Avideh Zakhor, Qualcomm Chair & Professor at U.C. Berkeley

      video

Dr. Zakhor is the CEO and founder of Indoor Reality, a Silicon Valley startup with products in 3D reality capture and visual and metric documentation of building interiors. Zakhor has been a faculty member at U.C. Berkeley since 1988, where she holds the Qualcomm Chair in the Electrical Engineering and Computer Science Department. She co-founded OPC Technology in 1996, which was acquired by Mentor Graphics (Nasdaq: MENT) in 1998, and UrbanScan Inc. in 2005, which was acquired by Google (Nasdaq: GOOGL) in 2007. UrbanScan created the first fully automated 3D outdoor mapping system for 3D exterior models of buildings in urban environments. She has received a number of best paper awards in 3D computer vision, image processing, and signal processing; is an IEEE Fellow; and received the Presidential Young Investigator Award from President George Herbert Walker Bush in 1992.


Coffee Break Tues. 3:00 - 3:30 pm

SD&A Discussion Forum
360° Imaging Should Be 3D - But Why And How?
Tues. 3:30 - 4:30 pm

Moderator: Andrew Woods, Curtin University

Panellists: Dan Sandin (The University of Illinois at Chicago), Greg Dawe (University of California San Diego), and Eric Kurland (3-D Space).

SESSION 5
Stereoscopic Applications: VR to Immersive Analytics in Bioinformatics 2
Session Chair: Marc Baaden, IBPC (France)
Tue. 4:30 - 5:30 pm

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, and Stereoscopic Displays and Applications XXIX.

4:30 pm: Interactive molecular graphics for augmented reality using HoloLens, Christoph Müller, Michael Krone, Markus Huber, Verena Biener, Guido Reina, Daniel Weiskopf, and Thomas Ertl, University of Stuttgart (Germany) [SD&A-288]
video manuscript

4:50 pm: Molecular Dynamics Visualization (MDV): Stereoscopic 3D display of biomolecular structure and interactions using the Unity game engine, Michael Wiebrands, Chris Malajczuk, Andrew Woods, Andrew Rohl, and Ricardo Mancera, Curtin University (Australia) [SD&A-289]
video manuscript


Symposium Demonstration Session
Tues. 5:30 - 7:30 pm

A symposium-wide demonstration session will be open to attendees 5:30 to 7:30 pm Tuesday evening. Demonstrators will provide interactive, hands-on demonstrations of a wide range of products related to Electronic Imaging. The demonstration session hosts a vast collection of stereoscopic products, providing a perfect opportunity to witness a wide array of stereoscopic displays with your own two eyes.

More information: http://www.stereoscopic.org/demo/index.html.

Wednesday 31st January 2018

SESSION 6
Stereoscopic History
Session Chair: Nicolas Holliman, University of Newcastle (United Kingdom)
Wed. 8:50 - 9:10 am

8:50 am: The History of Stereoscopic Video Games for the Consumer Electronic Market, Ilicia Benoit, NYSA (United States) [SD&A-290]
manuscript

SD&A Keynote Session 2
Session Chair: Nick Holliman, Newcastle University (UK)
Wed. 9:10 - 10:10 am

Over fifty years of working with stereoscopic 3D systems – anecdotes, insights, and advice
illustrated by many examples of stereoscopic imagery, both good and bad

John O. Merritt, The Merritt Group (USA)  

video

Abstract:
Stereoscopic 3D has been around for well over a century, so how could 3D systems still not be completely perfected and accepted by now? Why does stereo 3D popularity wax and wane in cycles? Why were 3D home TVs readily available in retail stores a few years ago, but now are not to be found? This presentation will discuss "lessons learned" in the course of over fifty years of working with stereo 3D capture and display systems designed for a variety of applications, such as research comparing 2D vs. 3D task performance for telerobotic anthropomorphic manipulators, remotely-driven off-road vehicles, undersea remote work systems, endoscopic and minimally invasive surgical imaging, aerial refueling via 3D video, and others. Common mistakes are described, such as inappropriately applying to 3D imaging practices developed for 2D imaging: "pulling convergence" along with "pulling focus," and using shallow depth of field (DOF) in 3D as it is used in 2D to provide monocular depth cues and direct the viewer's attention. Also discussed are the surprisingly frequent and persistent instances of inadvertently swapped left and right image channels, creating reversed binocular depth cues that conflict with monocular depth cues. Also mentioned are simple methods for mitigating the perennial problem of conflicts between ocular focus distance (accommodation) and binocular fixation distance (convergence), and an interesting trick for minimizing frame violations.

Biography:
John O. Merritt, Senior Consulting Scientist at The Merritt Group, is an internationally recognized expert in the design and evaluation of stereoscopic 3D displays, specializing in the practical application of research in sensory and perceptual science for improvements in visual comfort and other human factors issues related to 3D display systems. His early work using 3D displays in satellite reconnaissance as a Naval Air Intelligence Officer, combined with his many years as a visual human-factors design and evaluation consultant, makes him uniquely qualified to assess the strengths and weaknesses of advanced stereo 3D systems. He has extensive experience comparing task performance in 3D vs. 2D evaluation research studies and is the author of many technical reports and papers in the areas of vision research, binocular night vision devices, image-quality standards, photo-interpretation, simulator displays, visual fatigue, and evaluation of stereo 3D systems for minimally invasive surgery and other telepresence/telerobotic systems. He is a member of the Human Factors and Ergonomics Society, a Fellow and Senior Member of the SPIE, and is the Founding Chair of the Stereoscopic Displays and Applications conference (SD&A), held annually in the San Francisco Bay Area since 1990.

Coffee Break Wed. 10:10 - 10:40 am

Immersive Imaging Keynote (Joint Session):
Session Chairs: Gordon Wetzstein (Stanford University) and Nitin Sampat (Rochester Institute of Technology)

This session is jointly sponsored by: Photography, Mobile, and Immersive Imaging 2018; The Engineering Reality of Virtual Reality 2018; and Stereoscopic Displays and Applications XXIX.
Wed. 10:40 - 11:20 am

Real-time capture of people and environments for immersive computing  

Shahram Izadi, PerceptiveIO, Inc. (United States)   [PMII-320]

Dr. Shahram Izadi is co-founder and CTO of perceptiveIO, a new Bay Area startup working on bleeding-edge research and products at the intersection of real-time computer vision, applied machine learning, novel displays, sensing, and human-computer interaction. Prior to perceptiveIO, Dr. Izadi was a research manager at Microsoft, managing a team of researchers and engineers, called Interactive 3D Technologies, working on moonshot projects in the area of augmented and virtual reality and natural user interfaces.

 

SESSION 7
Immersive Imaging (Joint Session)
Session Chairs: Gordon Wetzstein (Stanford University) and Nitin Sampat (Rochester Institute of Technology)

This session is jointly sponsored by: Photography, Mobile, and Immersive Imaging 2018; The Engineering Reality of Virtual Reality 2018; and Stereoscopic Displays and Applications XXIX

Wed. 11:20 am - 12:40 pm

11:20 am: SpinVR: Towards Live-Streaming 3D Virtual Reality Video, Donald Dansereau, Robert Konrad, Aniq Masood, and Gordon Wetzstein, Stanford University (United States) [PMII-350]
video

11:40 am: Towards a full parallax cinematic VR system, Haricharan Lakshman, Dolby Labs (United States) [PMII-351]

12:00 pm: Perceptual Evaluation of Six Degrees of Freedom Virtual Reality Rendering from Stacked Omnistereo Representation, Jayant Thatte and Bernd Girod, Stanford University (United States) [PMII-352]
video manuscript

12:20 pm: Image systems simulation for 360° camera rigs, Trisha Lian, Joyce Farrell, and Brian Wandell, Stanford University (United States) [PMII-353]
video manuscript

Lunch Break Wed. 12:40 - 2:00 pm

EI Plenary Session 3 Wed. 2:00 - 3:00 pm


Ubiquitous, Consumer AR Systems to Supplant Smartphones

Ronald T. Azuma, Intel Labs (USA)

      video

Dr. Ronald T. Azuma, Intel Labs researcher and Augmented Reality pioneer, will share his vision for achieving ubiquitous, consumer AR systems. Recent large investments in Augmented Reality reflect the commercial interest in its inherent potential to replace current smartphone technology, but much remains to be done. In his talk, Dr. Azuma gives a vision for achieving this goal, which requires not just solving numerous technical challenges but also determining new, compelling AR experiences that will establish AR as a new platform and novel form of media. Currently, Dr. Azuma leads a team in Intel Labs that designs and prototypes novel experiences and key enabling technologies for new forms of media. These technology areas include computational imaging and photography, computational displays, and head-worn displays. Dr. Azuma is recognized as a pioneer and innovator in Augmented Reality, and he has held prominent leadership roles in that research area, including leading and implementing research projects and demonstrations in areas such as AR, visualization, and mobile applications.


Coffee Break Wed. 3:00 - 3:30 pm

SESSION 8
Visualization Facilities (Joint Session)
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Andrew Woods, Curtin University (Australia)

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2018, and Stereoscopic Displays and Applications XXIX.

Wed. 3:30 - 5:10 pm

3:30 pm: xREZ Art + Science Lab - Facilities Presentation, Ruth West, University of North Texas (United States) [ERVR-392]

3:50 pm: CADwalk: life-size MR-AR-VR design experience - optimising and validating mission critical work environments, Gerhard Kimenkowski, CADwalk Global Pty Ltd (Australia) [SD&A-393]

4:10 pm: When One Is Not Enough: Cross-Platform And Collaborative Developments At The Emerging Analytics Center, Dirk Reiners, Carolina Cruz-Neira, and Carsten Neumann, University of Arkansas at Little Rock (United States) [ERVR-394]
video

4:30 pm: Multiplatform VR case study - Beacon Virtua, Andrew Woods1, Nick Oliver1, and Paul Bourke2; 1Curtin University and 2University of Western Australia (Australia) [SD&A-395]
video

4:50 pm: What Will We See Next? Current Visualization Facilities Trends and Future Considerations, Mike Pedersen, Mechdyne Corp. (United States) [SD&A-396]
video

SD&A Conference Closing Remarks
by Nicolas Holliman, Newcastle University (United Kingdom)
Wed. 5:10 pm - 5:30 pm
video

Stereoscopic Displays and Applications XXIX Interactive Papers (Poster) Session
The following works will be presented at the EI 2018 Symposium Interactive Papers Session on Wednesday evening, from 5:30 pm to 7:00 pm. Refreshments will be served.
Wed. 5:30 - 7:00 pm

  • Computer-generated holography method based on orthographic projection using depth camera, Yan-Ling Piao1, Seo-Yeon Park1, Hui-Ying Wu1, Sang-Keun Gil2, and Nam Kim1; 1Chungbuk National University and 2Suwon University (Republic of Korea) [SD&A-410]
    manuscript
  • Full-parallax and high-quality multiview 3D image acquisition method using camera slider, Byeong-Jun Kim, Ki-Chul Kwon, Jae-Min Lee, Young-Tae Lim, and Nam Kim, Chungbuk National University (Republic of Korea) [SD&A-411]
    manuscript
  • Projection type light field display using undulating screen, Masahiro Kajimoto and Tomohiro Yendo, Nagaoka Univ. of Technology (Japan) [SD&A-412]
    manuscript
  • Study of eye tracking type super multi-view display using time division multiplexing, Yuta Takahashi and Tomohiro Yendo, Nagaoka University of Technology (Japan) [SD&A-413]
    manuscript


Changes:
* (Mon 3:30 pm) Thin Form-Factor Super Multiview Head-Up Display System, Ugur Akpinar, Erdem Sahin, Olli Suominen, and Atanas Gotchev, Tampere University of Technology (Finland) [SD&A-157] (Withdrawn)
* [SD&A-143] and [SD&A-250] swapped
* (Tue 4:30 pm) Combining molecular dynamics simulations augmented and virtual reality visualization: a perfect synergy to understand molecular interactions and structure, Rebeca Garcia-Fandino1, Angel Pineiro2, M. Jesús Pérez3, and Alejandro Pan3; 1University of Porto (Portugal), 2Santiago de Compostela University, and 3MD.USE Innovative Solutions S.L. (Spain) [SD&A-287] (Withdrawn)



Stereoscopic Displays and Applications Conference




* Advertisers are not directly affiliated with or specifically endorsed by the Stereoscopic Displays and Applications conference.
Maintained by: Andrew Woods
Revised: 8 March, 2022.