  Advance Conference Program:

SD&A 2019

The World's Premier Conference for 3D Innovation

Monday-Wednesday 14-16 January 2019
Hyatt Regency San Francisco Airport Hotel, Burlingame, California, USA.

To be published open-access as part of the IS&T Proceedings of Electronic Imaging.

Part of IS&T's International Symposium on Electronic Imaging: Science and Technology
Sunday-Thursday 13-17 January 2019 • Hyatt Regency San Francisco Airport, Burlingame, California, USA.

[ Advance Program: Day 1, Day 2, Day 3, Keynote 1, Keynote 2, Keynote 3, Demonstration Session, 3D Theatre]   [ Register, Short Course ]


Projection Sponsors:     DepthQ 3D by Lightspeed Design       Christie

3D Theater Partners:     LA 3-D Movie Festival       3-D Film Archive

Conference Chairs: Gregg Favalora, Draper (United States);
Nicolas S. Holliman, University of Newcastle (UK);
Takashi Kawai, Waseda University (Japan);
Andrew J. Woods, Curtin University (Australia).

Founding Chair: John O. Merritt, The Merritt Group (United States).

Program Committee:

Neil A. Dodgson, Victoria University of Wellington (New Zealand);
Davide Gadia, Univ. degli Studi di Milano (Italy);
Hideki Kakeya, Univ. of Tsukuba (Japan);
Stephan Keith, SRK Graphics Research (United States);
Michael Klug, Magic Leap, Inc. (United States);
John D. Stern, Intuitive Surgical, Retired (United States);
Björn Sommer, University of Konstanz (Germany);
Chris Ward, Lightspeed Design (United States).

Monday 14th January 2019

Stereoscopic Developments
Session Chair: Takashi Kawai, Waseda University (Japan)
Mon. 8:50 - 10:20 AM

8:50 am: 3D image processing - From capture to display (Invited), Toshiaki Fujii, Nagoya University (Japan) [SD&A-625]

9:10 am: 3D TV based on spatial imaging (Invited), Masahiro Kawakita, Hisayuki Sasaki, Naoto Okaichi, Masanori Kano, Hayato Watanabe, Takuya Oomura, and Tomoyuki Mishina, NHK Science and Technology Research Laboratories (Japan) [SD&A-626]

9:30 am: Stereoscopic capture and viewing parameters: Geometry and perception (Invited), Robert Allison and Laurie Wilcox, York University (Canada) [SD&A-627]

9:50 am: 30 Years of SD&A - Milestones and statistics, Andrew Woods, Curtin University (Australia)

SD&A Conference Opening Remarks
by Andrew Woods, Curtin University (Australia)
Mon. 10:10 - 10:20 am

Coffee Break Mon. 10:20 - 10:50 am

Autostereoscopic Displays I
Session Chair: Gregg Favalora, Draper (United States)
Mon. 10:50 am - 12:30 pm

10:50 am: A Full-HD super-multiview display with a deep viewing zone, Hideki Kakeya and Yuta Watanabe, University of Tsukuba (Japan) [SD&A-628]

11:10 am: A 360-degrees holographic true 3D display unit using a Fresnel phase plate, Levent Onural, Bilkent University (Turkey) [SD&A-629]

11:30 am: Electro-holographic light field projector modules: progress in SAW AOMs, illumination, and packaging, Gregg Favalora, Michael Moebius, Valerie Bloomfield, John LeBlanc, and Sean O'Connor, Draper (United States) [SD&A-630]

11:50 am: Thin form-factor super multiview head-up display system, Ugur Akpinar, Erdem Sahin, Olli Suominen, and Atanas Gotchev, Tampere University of Technology (Finland) [SD&A-631]

12:10 pm: Dynamic multi-view autostereoscopy, Yuzhong Jiao, Man Chi Chan, and Mark P. C. Mok, ASTRI (Hong Kong) [SD&A-632]

Lunch Break Mon. 12:30 - 2:00 pm

Monday EI Plenary Mon. 2:00 to 3:00 pm

Autonomous Driving Technology and the OrCam MyEye

Amnon Shashua, CEO and CTO, Mobileye, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situational awareness, and language processing to enable blind and visually impaired people to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in exact sciences in 2005. In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for a driving assistance system, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is of a transformative nature and has the potential to change the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the CEO and CTO of Mobileye and a Senior Vice President of Intel Corporation leading Intel's Autonomous Driving Group. In 2010 Prof. Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.

Coffee Break Mon. 3:00 - 3:30 pm

Autostereoscopic Displays II
Session Chair: John Merritt, The Merritt Group (United States)
Mon. 3:30 - 3:50 pm

Spirolactam rhodamines for multiple color volumetric 3D digital light photoactivatable dye displays, Maha Aljowni, Uroob Haris, Bo Li, Cecilia O'Brien, and Alexander Lippert, Southern Methodist University (United States) [SD&A-633]

SD&A Keynote 1
Session Chair: Andrew Woods, Curtin University (Australia)
Mon. 3:50 - 4:50 pm

From Set to Theater: Reporting on the 3D Cinema Business and Technology Roadmaps

Tony Davis, RealD (United States)   [SD&A-388]

Abstract:
Since the most recent incarnation of stereoscopic cinema began in 2005, the push for brighter, clearer, and more engaging 3D in theaters has not let up. Filmmakers continue to push the limits of the visual experience, and technology groups have been working on solutions that are not only beautiful, but also address long-standing problems in cinema. Audiences demand flawless and ever-improving experiences and will vote with their feet if theaters don’t deliver on their expectations. RealD has been working with many partners throughout the industry to bring a better experience to audiences. This effort starts very early in the concept of a movie. The more a movie is designed for a real 3D experience, the better it will draw audiences in. There are early technical decisions to make, including native-capture 3D vs. post-conversion and, more recently, frame rate decisions. Later in post-production, more decisions can be made that have substantial impact on the viewer’s experience. Finally, in theatrical exhibition, light is key. Brighter exhibition draws the audience in, enhances color perception, and can reveal the artistic intent of the filmmaker. New advancements in polarization-preserving screens and wide-throw light-recycling modulation systems promise to take stereoscopic theater experiences to a new level.

Tony Davis is the VP of Technology at RealD, where he works with an outstanding team to perfect the cinema experience from set to screen. He holds a Master's in Electrical Engineering from Texas Tech University, specializing in advanced signal acquisition and processing. After several years working as a Technical Staff Member at Los Alamos National Laboratory, Mr. Davis was Director of Engineering for a highly successful line of medical and industrial X-ray computed tomography systems at 3M. Later, he was the founder of Tessive, a company dedicated to the improvement of temporal representation in motion picture cameras.

EI 2019 Symposium Reception
The annual Electronic Imaging All-Conference Reception provides a wonderful opportunity to get to know and interact with new and old SD&A colleagues. Plan to join us for this relaxing and enjoyable event.
Mon. 5:00 - 6:00 pm

SD&A 3D Theatre
Session Chairs: John Stern, Intuitive Surgical, Inc. (United States); Chris Ward, Lightspeed Design, Inc. (United States); and Andrew Woods, Curtin University (Australia)
Mon. 6:00 to 7:30 pm

This ever-popular session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theater Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.

SD&A Conference Annual Dinner Mon. 7:50 pm to 10:00 pm

The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.

Tuesday 15th January 2019

Light Field Imaging and Displays
Session Chair: Hideki Kakeya, University of Tsukuba (Japan)
Tue. 8:50 - 10:10 AM

8:50 am: Light-field display architecture and the heterogeneous display ecosystem, Thomas Burnett, FoVI3D (United States) [SD&A-634]

9:10 am: Understanding ability of 3D integral displays to provide accurate out-of-focus retinal blur with experiments and diffraction simulations, Ginni Grover, Oscar Nestares, and Ronald Azuma, Intel Corporation (United States) [SD&A-635]

9:30 am: EPIModules on a geodesic: Toward 360-degree light-field imaging, Harlyn Baker, EPIImaging, LLC (United States) [SD&A-636]

9:50 am: A photographing method of Integral Photography with high angle reproducibility of light rays, Shotaro Mori, Yue Bao, and Norigi Oishi, Tokyo City University Graduate School (Japan) [SD&A-637]

Coffee Break Tues. 10:10 - 10:50 am

Stereoscopic Vision Testing
Session Chair: John Stern, Intuitive Surgical, Inc. (United States)
Tue. 10:50 - 11:30 am

10:50 am: Operational based vision assessment: Stereo acuity testing research and development, Marc Winterbottom, James Gaska, and Steven Hadley, U.S. Air Force School of Aerospace Medicine; Eleanor O'Keefe, Elizabeth Shoda, KBRwyle; Maria Gavrilescu, Peter Gibbs, Defence Science & Technology (Australia); Mackenzie Glaholt, Defence Research and Development Canada (Canada); Asao Kobayashi, Aeromedical Laboratory, Japan Air Self Defense Force (Japan); Amanda Douglass, Deakin University (Australia); Charles Lloyd, Visual Performance, LLC (United States) [SD&A-638]

11:10 am: Operational based vision assessment: Evaluating the effect of stereoscopic display crosstalk on simulated remote vision system depth discrimination, Eleanor O'Keefe, Alexander Van Atta, KBRwyle; Charles Lloyd, Visual Performance; Marc Winterbottom, U.S. Air Force School of Aerospace Medicine (United States) [SD&A-639]

SD&A Keynote 2
Session Chair: Nick Holliman, Newcastle University (UK)
Tues. 11:30 am - 12:30 pm

What good is imperfect 3D?

Miriam Ross, Victoria University of Wellington (New Zealand)   [SD&A-640]

Abstract:
As we celebrate the 30th anniversary of the Stereoscopic Displays and Applications Conference we encounter three decades of pioneering, exhilarating and steadfast research that has kept abreast of stereoscopy as it has moved into the digital era. For good reason, this research strives to enhance content delivery and the user experience. It strives for perfection. Or, as close to perfection as one can come when working with the complexities of binocular vision. Is there value, then, in glancing askew and looking for the moments in stereoscopy’s lengthy history when perfection has not been the aim? What can an imperfect three-dimensional representation add to our understanding of how we use stereoscopy? In the 1850s, David Brewster asked viewers using the stereoscope to switch left and right eye images to look for inverted depth. He expected this process to confirm his visual theories, but viewers became fascinated with the anomalies that occurred when volumes would not invert and objects did not seem correctly positioned. More recently, Jean-Luc Godard’s Adieu au langage (Goodbye to Language, 2014) diverged the cameras so dramatically that each eye was presented with a completely different scene. The rogue shot, painful to view, laid bare the very process of stereoscopic construction with little care for supporting the film’s narrative. With all their imperfections, these playful approaches to stereoscopic imaging asked viewers to see in different ways and consider their own role in the process of perception. Spanning the 160 years between these examples are many more cases of 'imperfect 3D,' from stereoscopic painting to the new virtual reality. Each enriches the way viewers interact with stereoscopy and come to understand its value as a unique imaging system.

Dr. Miriam Ross is Senior Lecturer in the Film Programme at Victoria University of Wellington. She works with new technologies to combine creative methodologies and traditional academic analysis. She is the author of South American Cinematic Culture: Policy, Production, Distribution and Exhibition (2010) and 3D Cinema: Optical Illusions and Tactile Experiences (2015) as well as publications and creative works relating to film industries, mobile media, virtual reality, stereoscopic media, and film festivals.

Lunch Break Tues. 12:30 - 2:00 pm

Tuesday EI Plenary Tue. 2:00 to 3:00 pm

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality

Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of, and the tremendous recent progress toward, the development of head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. She will particularly focus on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of NSF Career Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students shared a total of 8 "Best Paper" awards in various IEEE, SPIE and SID conferences. Dr. Hua received her Ph.D. degree in Optical Engineering from the Beijing Institute of Technology in China in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.


Coffee Break Tues. 3:00 - 3:30 pm

Visualization Facilities (Joint Session)
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Björn Sommer, University of Konstanz (Germany)

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

Tues. 3:30 - 5:30 pm

3:30 pm: Tiled stereoscopic 3D display wall - Concept, applications and evaluation, Björn Sommer, Alexandra Diehl, Karsten Klein, Philipp Meschenmoser, David Weber, Michael Aichem, Daniel Keim, and Falk Schreiber, University of Konstanz (Germany) [SD&A-641]

3:50 pm: The quality of stereo disparity in the polar regions of a stereo panorama, Daniel Sandin, Electronic Visualization Lab (EVL), University of Illinois at Chicago, California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego; Tom DeFanti, Electronic Visualization Lab (EVL), University of Illinois at Chicago; Alexander Guo, Ahmad Atra, Electronic Visualization Lab (EVL), University of Illinois at Chicago; Haoyu Wang, Maxine Brown, The University of Illinois at Chicago; Dick Ainsworth, Ainsworth & Partners, Inc. (United States) [SD&A-642]

4:10 pm: Simulated landscapes, 3D temples, and floating epigraphy: An immersive virtual map of medieval Angkor, Thomas Chandler, Monash University (Australia) [SD&A-643]

4:30 pm: Opening a 3-D museum - A case study of 3-D SPACE, Eric Kurland, 3-D SPACE (United States) [SD&A-644]

4:50 pm: State of the art of multi-user virtual reality display systems, Juan Munoz Arango, Dirk Reiners, and Carolina Cruz-Neira, University of Arkansas at Little Rock (United States) [SD&A-645]

5:10 pm: StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors, Dominique Meyer, Christopher Mc Farland, Eric Lo, Gregory Dawe, Ji Dai, Truong Nguyen, Falko Kuester, Tom DeFanti, University of California, San Diego; Daniel Sandin, Haoyu Wang, Maxine Brown, The University of Illinois at Chicago; Harlyn Baker, EPIImaging, LLC (United States) [SD&A-646]

Symposium Demonstration Session
Tues. 5:30 - 7:30 pm

A symposium-wide demonstration session will be open to attendees 5:30 to 7:30 pm Tuesday evening. Demonstrators will provide interactive, hands-on demonstrations of a wide range of products related to electronic imaging. The demonstration session hosts a vast collection of stereoscopic products, providing a perfect opportunity to witness a wide array of stereoscopic displays with your own two eyes.


Wednesday 16th January 2019

360, 3D, and VR (Joint Session)
Session Chairs: Neil Dodgson, Victoria University of Wellington (New Zealand) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

Wed. 8:50 - 10:10 am

8:50 am: Enhanced head-mounted eye tracking data analysis using super-resolution, Qianwen Wan, Aleksandra Kaszowska, Karen Panetta, Holly Taylor, Tufts University; Sos Agaian, CUNY/ The College of Staten Island (United States) [SD&A-647]

9:10 am: Effects of binocular parallax in 360-degree VR images on viewing behavior, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan) [SD&A-648]

9:30 am: Visual quality in VR head mounted device: Lessons learned with StarVR headset, Bernard Mendiburu, Starbreeze (United States) [SD&A-649]

9:50 am: Time course of sickness symptoms with HMD viewing of 360-degree videos (JIST-first), Jukka Häkkinen, University of Helsinki (Finland); Fumiya Ohta, Takashi Kawai, Waseda University (Japan) [SD&A-650]

Industry Exhibition Wed. 10:00 am - 7:30 pm

Coffee Break Wed. 10:10 - 10:50 am

Autostereoscopic Displays III
Session Chair: Chris Ward, Lightspeed Design, Inc. (United States)
Wed. 10:50 - 11:30 am

10:50 am: Head-tracked patterned-backlight autostereoscopic (virtual reality) display system, Jean-Etienne Gaudreau, PolarScreens Inc. (Canada) [SD&A-651]

11:10 am: The looking glass: A new type of superstereoscopic display, Shawn Frayne, Looking Glass Factory, Inc. (United States) [SD&A-652]

SD&A Keynote 3
Session Chair: Andrew Woods, Curtin University (Australia)
Wed. 11:30 am - 12:40 pm

Beads of reality drip from pinpricks in space

Mark Bolas, Microsoft Corporation (United States)   [SD&A-653]

Abstract:
As we engineer electronic imaging technologies to convince pairs of eyes to see virtual worlds, we realize that the ultimate challenge is to impel minds to yield to a reality that has been virtual all along. This talk explores the light-field as a model for how people create mental maps of reality, and how such maps are easily folded.

Mark Bolas loves perceiving and creating synthesized experiences. To feel, hear and touch experiences impossible in reality and yet grounded as designs that bring pleasure, meaning and a state of flow. His work with Ian McDowall, Eric Lorimer and David Eggleston at Fakespace Labs; Scott Fisher and Perry Hoberman at USC's School of Cinematic Arts; the team at USC's Institute for Creative Technologies; Niko Bolas at SonicBox; and Frank Wyatt, Dick Moore and Marc Dolson at UCSD informed results that led to his receipt of both the IEEE Virtual Reality Technical Achievement and Career Awards.

Conference Closing Remarks

Lunch Break Wed. 12:40 - 2:00 pm

Wednesday EI Plenary Wed. 2:00 - 3:00 pm

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality

Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


Coffee Break Wed. 3:00 - 3:30 pm

Light Field Imaging and Display (Joint Session)
Session Chair: Gordon Wetzstein, Stanford University (United States)

This session is jointly sponsored by the EI Steering Committee.

Wed. 3:30 - 5:30 pm

3:30 pm: Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States) [EISS-706]

4:10 pm: The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States) [EISS-707]

4:30 pm: Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States) [EISS-708]

4:50 pm: Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States) [EISS-709]

5:10 pm: Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States) [EISS-710]

Stereoscopic Displays and Applications XXX Interactive Posters Session
The following works will be presented at the EI 2019 Symposium Interactive Papers Session on Wednesday evening, from 5:30 pm to 7:00 pm. Refreshments will be served.
Wed. 5:30 - 7:00 pm

  • A comprehensive head-mounted eye tracking review: Software solutions, applications, and challenges, Qianwen Wan, Aleksandra Kaszowska, Karen Panetta, Holly Taylor, Tufts University; Sos Agaian, CUNY/ The College of Staten Island (United States) [SD&A-654]
  • A study on 3D projector with four parallaxes, Shohei Yamaguchi and Yue Bao, Tokyo City University (Japan) [SD&A-655]
  • Saliency map based multi-view rendering for autostereoscopic displays, Yuzhong Jiao, Man Chi Chan, and Mark P. C. Mok, ASTRI (Hong Kong) [SD&A-656]
  • Semi-automatic post-processing of multi-view 2D-plus-depth video, Braulio Sespede, Margrit Gelautz, TU Wien; Florian Seitner, Emotion3D (Austria) [SD&A-657]

Stereoscopic Displays and Applications Conference


Maintained by: Andrew Woods
Revised: 8 December, 2018.