Stereoscopic Displays and Applications XXXI: Final Program
The World's Premier Conference for 3D Innovation
Videos of many of the presentations at the conference are available for free viewing by clicking on the special "Video" icons in the program list below.
Monday-Wednesday 27-29 January 2020
Hyatt Regency San Francisco Airport Hotel, Burlingame, California, USA.
To be published open-access as part of the IS&T Proceedings of Electronic Imaging.
Part of IS&T's International Symposium on Electronic Imaging: Science and Technology
Sunday-Thursday 26-30 January 2020 · Hyatt Regency San Francisco Airport, Burlingame, California, USA.
[ Advance Program: Day 1, Day 2, Day 3, Keynote 1, Keynote 2, Keynote 3, Demonstration Session, 3D Theatre]
[ Register, Short Course ]
Projection Sponsors: [sponsor logos]
3D Theater Partners: [partner logos]
Program Committee:
Neil A. Dodgson, Victoria University of Wellington (New Zealand);
Davide Gadia, Univ. degli Studi di Milano (Italy);
Hideki Kakeya, Univ. of Tsukuba (Japan);
Stephan Keith, SRK Graphics Research (United States);
John D. Stern, Intuitive Surgical, Retired (United States);
Bjorn Sommer, Royal College of Art, London (United Kingdom);
Chris Ward, Lightspeed Design (United States)
SESSION 1
Human Factors in Stereoscopic Displays (Joint Session)
Session Chairs: Jeffrey Mulligan, NASA Ames Research Center (United States) and Nicolas Holliman, University of Newcastle (United Kingdom)
This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Stereoscopic Displays and Applications XXXI.
Mon. 8:45 am - 10:10 am
8:45 am: Conference Welcome
8:50 am: Stereoscopic 3D optic flow distortions caused by mismatches between image acquisition and display parameters (JIST-first), Alex Hwang and Eli Peli, Harvard Medical School (United States) [HVEI-009]
9:10 am: The impact of radial distortions in VR headsets on perceived surface slant (JIST-first), Jonathan Tong, Laurie Wilcox, and Robert Allison, York University (Canada) [HVEI-010]
9:30 am: Visual fatigue assessment based on multitask learning (JIST-first), Danli Wang, Chinese Academy of Sciences (China) [SD&A-011]
9:50 am: Depth sensitivity investigation on multi-view glasses-free 3D display, Di Zhang1, Xinzhu Sang2, and Peng Wang2; 1Communication University of China and 2Beijing University of Posts and Telecommunications (China) [SD&A-012]
Coffee Break
Mon. 10:10 am - 10:50 am
SD&A 2020: Welcome and Introduction
Host: Andrew Woods, Curtin University (Australia)
Mon. 10:50 am - 11:10 am
SESSION 2
Autostereoscopy I
Session Chair: Bjorn Sommer, Royal College of Art (United Kingdom)
Mon. 11:10 am - 12:30 pm
11:10 am: Morpholo: A hologram generator algorithm, Enrique Canessa, ICTP (Italy) [SD&A-053]
11:30 am: Remastering 360° 3D videos into 16:9 3D format, Andrew Woods, Curtin University (Australia)
HoloExtension - AI-based 2D backwards compatible super-multiview display technology, Rolf-Dieter Naske, psHolix AG (Germany) [cancelled]
11:50 am: Application of a high resolution autostereoscopic display for medical purposes, Kokoro Higuchi, Ayuki Hayashishita, and Hideki Kakeya, University of Tsukuba (Japan) [SD&A-055]
12:10 pm: Monolithic surface-emitting electroholographic optical modulator, Gregg E. Favalora, Michael G. Moebius, Joy C. Perkinson, Elizabeth J. Brundage, William A. Teynor, Steven J. Byrnes, James C. Hsiao, William D. Sawyer, Dennis M. Callahan, Ian W. Frank1, and John J. LeBlanc, The Charles Stark Draper Laboratory, Inc. (1Employee of Draper at the time the work was performed) [SD&A-403]
Lunch Break
Mon. 12:30 - 2:00 pm
Monday EI Plenary
Mon. 2:00 - 3:10 pm
Imaging the unseen: Taking the first picture of a black hole
Katie Bouman, Assistant Professor in the Computing and Mathematical Sciences Department at the California Institute of Technology (United States)
This talk will present the methods and procedures used to produce the first image of a black hole from the Event Horizon Telescope. It has been theorized for decades that a black hole will leave a "shadow" on a background of hot gas. Taking a picture of this black hole shadow could help to address a number of important scientific questions, both on the nature of black holes and the validity of general relativity. Unfortunately, due to its small size, traditional imaging approaches require an Earth-sized radio telescope. In this talk, I discuss techniques we have developed to photograph a black hole using the Event Horizon Telescope, a network of telescopes scattered across the globe. Imaging a black hole's structure with this computational telescope requires us to reconstruct images from sparse measurements, heavily corrupted by atmospheric error. The resulting image is the distilled product of an observation campaign that collected approximately five petabytes of data over four evenings in 2017. I will summarize how the data from the 2017 observations were calibrated and imaged, explain some of the challenges that arise with a heterogeneous telescope array like the EHT, and discuss future directions and approaches for event horizon scale imaging.
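For a taste of the computation involved, the sketch below is a minimal illustrative toy, not the EHT pipeline: it reconstructs a tiny image from sparse, noisy Fourier samples with an ISTA-style sparse solver. The image size, uv-coverage fraction, noise level, and sparsity weight are all assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 32                                    # toy n x n sky image
    x_true = np.zeros((n, n))
    x_true[12:20, 12:20] = 1.0                # a bright source on empty sky

    # Keep only ~15% of Fourier components (sparse uv-coverage), add noise.
    mask = rng.random((n, n)) < 0.15
    y = np.fft.fft2(x_true) * mask
    y += 2.0 * (rng.standard_normal((n, n)) +
                1j * rng.standard_normal((n, n))) * mask

    # ISTA: gradient step on the data term, then soft-threshold (sparsity).
    lam, step = 0.02, 1.0
    x = np.zeros((n, n))
    for _ in range(200):
        resid = np.fft.fft2(x) * mask - y
        x = x - step * np.real(np.fft.ifft2(resid))   # adjoint of masked FFT
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))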
Katie Bouman is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. Before joining Caltech, she was a postdoctoral fellow at the Harvard-Smithsonian Center for Astrophysics. She received her PhD in EECS from MIT, where she worked in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and her bachelor's degree in electrical engineering from the University of Michigan. The focus of her research is on using emerging computational methods to push the boundaries of interdisciplinary imaging.
Coffee Break
Mon. 3:10 - 3:30 pm
SD&A Keynote 1
Session Chair: Takashi Kawai, Waseda University (Japan)
Mon. 3:30 - 4:30 pm
High frame rate 3D: challenges, issues, and techniques for success
Larry Paul, Christie Digital Systems (United States) [SD&A-065]
Abstract:
Paul will share some of his more than 25 years of experience in the development of immersive 3D display systems, discussing the challenges, issues, and successes in creating, displaying, and experiencing 3D content for audiences. Topics will range from dome and curved-screen projection systems, to 3D in use at Los Alamos National Labs, to working with Ang Lee on "Billy Lynn's Long Halftime Walk" and "Gemini Man" in 4K 3D at 120 Hz per eye, as well as his work with Doug Trumbull on the 3D Magi format. Paul will explore the important relationships between the perception of 3D and resolution, frame rate, viewing distance, field of view, motion blur, shutter angle, color, contrast, "HDR", and image brightness, and how these factors combine to make effective 3D complex. In addition, he will discuss his expertise with active and polarized 3D systems and "color-comb" 6P 3D projection systems, and explain the additional value of expanded color volume and its inter-relationship with HDR in the reproduction of accurate color.
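One relationship the talk touches on can be made concrete with simple arithmetic: exposure time is (shutter angle / 360) / frame rate, so motion smear shrinks as frame rate rises. The calculation below is our own back-of-the-envelope sketch (the object speed is an assumed figure, not from the talk):

    def blur_px(speed_px_per_s, fps, shutter_deg=180.0):
        """Motion smear in pixels: object speed times exposure time."""
        exposure_s = (shutter_deg / 360.0) / fps
        return speed_px_per_s * exposure_s

    speed = 2000.0   # assumed: an object crossing a 4K screen in ~2 seconds
    for fps in (24.0, 60.0, 120.0):
        print(f"{fps:5.0f} fps, 180 deg shutter: {blur_px(speed, fps):5.1f} px smear")
    # 24 fps smears ~42 px; 120 Hz per eye cuts it to ~8 px,
    # one reason high-frame-rate 3D looks sharper in motion.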
Biography:
Larry Paul is a technologist with more than 25 years of experience in the design and deployment of high-end specialty themed-entertainment, giant-screen, visualization, and simulation projects. He has a passion for and expertise with true high-frame-rate, multi-channel, high-resolution 2D and 3D display solutions, and is always focused on solving specific customer challenges and improving the visual experience. He holds six patents. A life-long transportation enthusiast, he was on a crew that restored a WWII flying wing, has rebuilt numerous classic cars, and has driven over 300,000 miles in electric vehicles over the course of more than 21 years.
Electronic Imaging Symposium Reception
The annual Electronic Imaging All-Conference Reception provides a wonderful opportunity to get to know and interact with new and old SD&A colleagues.
Plan to join us for this relaxing and enjoyable event.
Mon. 5:00 - 6:00 pm
SD&A 3D Theatre
Producers: John Stern, retired (United States); Dan Lawrence, Lightspeed Design Group (United States); Andrew Woods, Curtin University (Australia); and Chris Ward, Lightspeed Design Group (United States).
Mon. 6:00 - 7:30 pm
This ever-popular session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theater Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.
SD&A Conference Annual Dinner
Mon. 7:50 - 10:00 pm
The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.
SESSION 3
Autostereoscopy II
Session Chair: Gregg Favalora, Draper Laboratory (United States)
Tue. 8:50 - 10:10 am
8:50 am: Dynamic zero-parallax-setting techniques for multi-view autostereoscopic display, Yuzhong Jiao, Mark Mok, Kayton Cheung, Man Chi Chan, and Tak Wai Shen, United Microelectronics Centre (Hong Kong) [SD&A-098]
9:10 am: Projection type 3D display using spinning screen, Hiroki Hayakawa and Tomohiro Yendo, Nagaoka University of Technology (Japan) [SD&A-099]
9:30 am: Full-parallax 3D display using time-multiplexing projection technology, Takuya Omura, Hayato Watanabe, Naoto Okaichi, Hisayuki Sasaki, and Masahiro Kawakita, NHK (Japan Broadcasting Corporation) (Japan) [SD&A-100]
9:50 am: Light field display using wavelength division multiplexing, Masaki Yamauchi and Tomohiro Yendo, Nagaoka University of Technology (Japan) [SD&A-101]
Coffee Break and Industry Exhibition
Tue. 10:10 - 10:50 am
SESSION 4
Stereoscopic Image Processing
Session Chair: Nicolas Holliman, University of Newcastle (United Kingdom)
Tue. 10:50 am - 12:30 pm
10:50 am: Objective and subjective evaluation of a multi-stereo 3D reconstruction system, Christian Kapeller1,2, Braulio Sespede2, Matej Nezveda3, Matthias Labschütz4, Simon Flöry4, Florian Seitner3, and Margrit Gelautz2; 1Austrian Institute of Technology, 2Vienna University of Technology, 3emotion3D GmbH, and 4Rechenraum e.U. (Austria) [SD&A-138]
11:10 am: Flow map guided correction to stereoscopic panorama, Haoyu Wang, Daniel Sandin, and Dan Schonfeld, University of Illinois at Chicago (United States) [SD&A-139] [cancelled]
11:30 am: Spatial distance-based interpolation algorithm for computer generated 2D+Z images, Yuzhong Jiao, Kayton Cheung, and Mark Mok, United Microelectronics Centre (Hong Kong) [SD&A-140]
11:50 am: Processing legacy underwater stereophotography for new applications, Patrick Baker1, Trevor Winton2, Daniel Adams3, and Andrew Woods3; 1Western Australian Museum, 2Flinders University of South Australia, and 3Curtin University (Australia) [SD&A-141]
12:10 pm: Update from the 3-D SPACE Museum - A 2nd case study, Eric Kurland, 3-D SPACE (United States)
Multifunctional stereoscopic machine vision system with multiple 3D outputs, Vasily Ezhov, Natalia Vasilieva, Peter Ivashkin, and Alexander Galstian, GPI RAS (Russian Federation) [SD&A-142] [cancelled]
Lunch Break and Industry Exhibition
Tue. 12:30 - 2:00 pm
Tuesday EI Plenary
Tue. 2:00 - 3:10 pm
Imaging in the Autonomous Vehicle Revolution
Gary Hicok, Senior Vice President of hardware development at NVIDIA (United States)
To deliver on the myriad benefits of autonomous driving, the industry must be able to develop self-driving technology that is truly safe. Through redundant and diverse automotive sensors, algorithms, and high-performance computing, the industry is able to address this challenge. NVIDIA brings together AI deep learning with data collection, model training, simulation, and a scalable, open autonomous vehicle computing platform to power high-performance, energy-efficient computing for functionally safe self-driving. Imaging capabilities for AVs have improved so rapidly that cameras are now the cornerstone AV sensors. Much like the human brain processes visual data taken in by the eyes, AVs must be able to make sense of a constant flow of information, which requires high-performance computing to respond to the flow of sensor data. This presentation will delve into how these developments in imaging are being used to train, test, and operate safe autonomous vehicles. Attendees will walk away with a better understanding of how deep learning, sensor fusion, surround vision, and accelerated computing are enabling this deployment.
Gary Hicok is senior vice president of hardware development at NVIDIA, and is responsible for Tegra System Engineering, which oversees Shield, Jetson, and DRIVE platforms. Prior to this role, Hicok served as senior vice president of NVIDIA's Mobile Business Unit. This vertical focused on NVIDIA's Tegra mobile processor, which was used to power next-generation mobile devices as well as in-car safety and infotainment systems. Before that, Hicok ran NVIDIA's Core Logic (MCP) Business Unit also as senior vice president. Throughout his tenure with NVIDIA, Hicok has also held a variety of management roles since joining the company in 1999, with responsibilities focused on console gaming and chipset engineering. He holds a BSEE degree from Arizona State University and has authored 33 issued patents.
Coffee Break and Industry Exhibition
Tue. 3:10 - 3:30 pm
SESSION 5
3D Developments
Session Chair: John D. Stern, Retired (United States)
Tue. 3:30 - 4:10 pm
3:30 pm: CubicSpace: A reliable model for proportional, comfortable and universal capture and display of stereoscopic content, Nicholas Routhier, Mindtrick Innovations Inc. (Canada) [SD&A-154]
3:50 pm: A camera array system based on DSLR cameras for autostereoscopic prints, Tzung-Han Lin, Yu-Lun Liu, Chi-Cheng Lee, and Hsuan-Kai Huang, National Taiwan University of Science and Technology (Taiwan) [SD&A-155]
SD&A Keynote 2
Session Chair: Gregg Favalora, Draper (United States)
Tue. 4:10 - 5:10 pm
Challenges and solutions for multiple viewer stereoscopic displays
Kurt Hoffmeister, Mechdyne Corporation (United States) [SD&A-400]
Abstract:
Many 3D experiences, such as movies, are designed for a single viewer perspective, which means all viewers must share that one perspective view. Any viewer positioned away from the design eye point sees a skewed perspective and a less comfortable stereoscopic viewing experience. For the many situations where multiple perspectives are desired, we ideally want perspective viewpoints unique to each viewer's position and head orientation. Today there are several possible multi-viewer solutions available, including personal head-mounted displays (HMDs), multiple overlapped projection displays, and high-frame-rate projection. Each type of solution and application has its own pros and cons, such that there is no one ideal solution. This presentation will discuss the need for multi-viewer solutions as a key challenge for stereoscopic displays and multiple-participant applications; it will review some historical approaches, the challenges of the technologies used and their implementation, and finally some current solutions readily available. As we all live and work in a collaborative world, it is only natural that our virtual reality and data visualization experiences should account for multiple viewers. For collocated participants there are several solutions now available that build on years of previous development; some can also accommodate remote participants. The intent of this presentation is an enlightened look at multiple-viewer stereoscopic display solutions.
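To make the core geometry concrete: head-tracked multi-viewer systems render an asymmetric (off-axis) frustum per tracked eye, so perspective stays correct away from any fixed design eye point. The sketch below is a generic illustration of that calculation, not Mechdyne's implementation; the wall size and eye positions are invented for the example.

    import numpy as np

    def off_axis_frustum(eye, half_w, half_h, near=0.1):
        """Near-plane frustum extents (l, r, b, t) for one tracked eye.

        Assumes a flat screen of size 2*half_w x 2*half_h centered at
        the origin in the z = 0 plane, with the viewer at positive z.
        """
        scale = near / eye[2]            # eye-to-screen distance is eye[2]
        left   = (-half_w - eye[0]) * scale
        right  = ( half_w - eye[0]) * scale
        bottom = (-half_h - eye[1]) * scale
        top    = ( half_h - eye[1]) * scale
        return left, right, bottom, top

    # Two tracked viewers get two different frusta; a high-frame-rate
    # projector can time-multiplex the resulting image pairs.
    for eye in (np.array([0.0, 0.0, 2.0]), np.array([1.2, -0.3, 2.5])):
        print(off_axis_frustum(eye, half_w=2.0, half_h=1.25))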
Biography:
As a co-founder of Mechdyne Corporation, Kurt Hoffmeister has been a pioneer and worldwide expert in large-screen virtual reality and simulation system design, installation, and integration. A licensed professional engineer with several patents, Hoffmeister was in charge of evaluating and implementing new AV/IT technology and components in Mechdyne's solutions. He has contributed to well over 500 Mechdyne projects, including more than 30 projects each representing over $1 million of investment, and for the past 20 years has been involved in nearly every Mechdyne project in a variety of capacities: researcher, consultant, systems designer, and systems engineer. Before co-founding Mechdyne, he spent 10 years in technical and management roles with the Michelin Tire Company's North American Research Center, was an early employee and consultant at Engineering Animation, Inc. (now a division of Siemens), and was a researcher at Iowa State University. Since retiring in 2018, Kurt's role at Mechdyne has been technology consultant, serving as a highly experienced resource for Mechdyne project teams.
Symposium Demonstration Session
Demonstration Chair: Bjorn Sommer, Royal College of Art (United Kingdom)
Tue. 5:30 - 7:30 pm
Demonstrations
A symposium-wide demonstration session will be open to attendees 5:30 to 7:30 pm Tuesday evening. Demonstrators will provide interactive, hands-on demonstrations of a wide range of products related to electronic imaging.
The demonstration session hosts a vast collection of stereoscopic products, providing a perfect opportunity to witness a wide array of stereoscopic displays with your own two eyes.
More information: http://www.stereoscopic.org/demo/index.html
KEYNOTE: Imaging Systems and Processing (Joint Session)
This session is jointly sponsored by: Imaging Sensors and Systems 2020, The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI.
Wed. 8:50 - 9:30 am
Mixed reality guided neuronavigation for non-invasive brain stimulation treatment
Christoph Leuze, Stanford University (United States) [ISS-189]
Abstract:
Medical imaging is used extensively worldwide to visualize the internal anatomy of the human body. Because medical imaging data is traditionally displayed on separate 2D screens, an intermediary or well-trained clinician is needed to translate the location of structures in the imaging data to their actual location in the patient's body. Mixed reality can solve this issue by visualizing the internal anatomy in the most intuitive manner possible: directly projecting it onto the actual organs inside the patient. At the Incubator for Medical Mixed and Extended Reality (IMMERS) at Stanford, we are connecting clinicians and engineers to develop techniques for visualizing medical imaging data directly overlaid on the relevant anatomy inside the patient, making navigation and guidance both simpler and safer for the clinician. In this presentation I will talk about different projects we are pursuing at IMMERS and go into detail about a project on mixed reality neuronavigation for non-invasive brain stimulation treatment of depression. Transcranial magnetic stimulation is a non-invasive brain stimulation technique that is used increasingly to treat depression and a variety of neuropsychiatric diseases. To be effective, the clinician needs to stimulate specific brain networks, which requires accurate stimulator positioning. At Stanford we have developed a method that allows the clinician to "look inside" the brain to see functional brain areas using a mixed reality device, and I will show how we are currently using this method to perform mixed reality-guided brain stimulation experiments.
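The overlay step described above rests on a rigid registration between imaging space and the tracked world: once a rotation and translation are known, any anatomical point can be mapped onto the patient. The sketch below is schematic, not IMMERS code; the transform values and target coordinates are placeholders.

    import numpy as np

    def rigid_transform(R, t):
        """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Placeholder registration: 10 degree rotation about z, 5 cm offset.
    theta = np.deg2rad(10.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    mri_to_world = rigid_transform(R, np.array([0.05, 0.0, 0.0]))

    target_mri = np.array([0.03, -0.02, 0.11, 1.0])   # stimulation target, metres
    target_world = mri_to_world @ target_mri
    print("render the overlay marker at:", target_world[:3])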
Biography:
Christoph Leuze is a research scientist in the Incubator for Medical Mixed and Extended Reality at Stanford University, where he focuses on techniques for visualizing MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain, and works closely with clinicians on techniques to visualize and register medical imaging data to the real world using optical see-through augmented reality devices such as the Microsoft Hololens and the Magic Leap One. Prior to joining Stanford, he worked on high-resolution brain MRI measurements at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, for which he was awarded the Otto Hahn Medal by the Max Planck Society for outstanding young researchers.
SD&A 3D Theater - Spotlight Session
Session Chair: John D. Stern, Retired (United States)
This session is an opportunity to take an extended look at some highlights from the Monday night 3D Theater session.
Wed. 9:40 - 10:10 am
Coffee Break and Industry Exhibition
Wed. 10:10 - 10:50 am
SESSION 6
Stereoscopic Perception and VR
Session Chair: Takashi Kawai, Waseda University (Japan)
Wed. 10:50 am - 12:10 pm
10:50 am: Evaluating the stereoscopic display of visual entropy glyphs in complex environments, Nicolas Holliman, University of Newcastle (United Kingdom) [SD&A-243]
11:10 am: Evaluating user experience of 180 and 360 degree images, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan) [SD&A-244]
11:30 am: Visual quality in VR head mounted device: Lessons learned making professional headsets, Bernard Mendiburu, Varjo (Finland) [SD&A-245]
11:50 am: The single image stereoscopic auto-pseudogram - Classification and theory, Ilicia Benoit, National 3-D Day (United States) [SD&A-246]
Lunch Break and Industry Exhibition
Wed. 12:40 - 2:00 pm
Wednesday EI Plenary
Wed. 2:00 - 3:10 pm
Quality Screen Time: Leveraging Computational Displays for Spatial Computing
Douglas Lanman, Director of Display Systems Research, Facebook Reality Labs (United States)
Displays pervade our lives and take myriad forms, spanning smart watches, mobile phones, laptops, monitors, televisions, and theaters. Yet, in all these embodiments, modern displays remain largely limited to two-dimensional representations. Correspondingly, our applications, entertainment, and user interfaces must work within the limits of a flat canvas. Head-mounted displays (HMDs) present a practical means to move forward, allowing compelling three-dimensional depictions to be merged seamlessly with our physical environment. As personal viewing devices, head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays that must support a full audience. Viewing optics, display components, rendering algorithms, and sensing elements may all be tuned for a single user. It is the latter aspect that most differentiates from the past, with individualized eye tracking playing an important role in unlocking higher resolutions, wider fields of view, and more comfortable visuals than past displays. This talk will explore such "computational display" concepts and how they may impact VR/AR devices in the coming years.
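One way to see why individualized eye tracking unlocks resolution: acuity falls off steeply with eccentricity, so a foveated renderer needs full detail only where the tracked gaze lands. The sketch below uses a common linear acuity-falloff approximation; it is our assumption for illustration, not a Facebook Reality Labs model.

    def relative_resolution(ecc_deg, e2=2.3):
        """Fraction of foveal detail resolvable at a given eccentricity,
        using the linear model MAR(e) = MAR0 * (1 + e / e2)."""
        return 1.0 / (1.0 + ecc_deg / e2)

    for ecc in (0, 5, 15, 30):
        print(f"{ecc:2d} deg off-gaze: {relative_resolution(ecc):6.1%} of foveal detail")
    # At 30 deg the eye resolves ~7% of foveal detail, so most of a wide
    # field of view can be shaded at a fraction of full resolution.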
Douglas Lanman is the Director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in applied physics with honors from Caltech in 2002, and his MS and PhD in electrical engineering from Brown University in 2006 and 2010, respectively. He was a Senior Research Scientist at NVIDIA Research from 2012 to 2014, a Postdoctoral Associate at the MIT Media Lab from 2010 to 2012, and an Assistant Research Staff Member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.
Coffee Break and Industry Exhibition
Wed. 3:10 - 3:30 pm
SESSION 7
Visualization Facilities (Joint Session)
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Andrew Woods, Curtin University (Australia)
This session is jointly sponsored by: Stereoscopic Displays and Applications XXXI and The Engineering Reality of Virtual Reality 2020.
Wed. 3:30 - 4:10 pm
3:30 pm: Immersive design engineering, Bjorn Sommer, Chang Lee, and Savina Torrisi, Royal College of Art (United Kingdom) [SD&A-265]
3:50 pm: Using a random dot stereogram as a test image for 3D demonstrations, Andrew Woods, Wesley Lamont, and Joshua Hollick, Curtin University (Australia) [SD&A-266]
SD&A Keynote 3
Session Chairs: Andrew Woods, Curtin University (Australia) and Margaret Dolinsky, Indiana University (United States)
This session is jointly sponsored by: Stereoscopic Displays and Applications XXXI and The Engineering Reality of Virtual Reality 2020.
Wed. 4:10 - 5:10 pm
Social holographics: Addressing the forgotten human factor
Bruce Dell, CEO, Euclideon Holographics (Australia) [ERVR-295]
Abstract:
With all the hype and excitement surrounding Virtual and Augmented Reality, many people forget that while powerful technology can change the way we work, the human factor seems to have been left out of the equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external environment, causing a wide variety of problems. "See-Through" technology is still in its infancy. In this submission we argue that the importance of the social factor outweighs the headlong rush towards better and more realistic graphics, particularly in the design, planning and related engineering disciplines. Large-scale design projects are never the work of a single person, but modern Virtual and Augmented Reality systems forcibly channel users into single-user simulations, with only very complex multi-user solutions slowly becoming available. In our presentation, we will present three different Holographic solutions to the problems of user isolation in Virtual Reality, and discuss the benefits and downsides of each new approach.
Biography:
Bruce Dell is CEO of Euclideon Holographics Pty Ltd, located in Brisbane, Australia, and is Australia's most publicised inventor, with over 3,000 media articles worldwide.
SD&A Conference Closing Remarks
EI Symposium Interactive Posters Session
Posters from the wide range of EI 2020 Symposium conferences.
Refreshments will be served.
Wed. 5:30 - 7:00 pm
See other Electronic Imaging Symposium events at: www.ElectronicImaging.org