Advance Conference Program:
The World's Premier Conference for 3D Innovation
Held as part of:
The IS&T International Symposium on Electronic Imaging: Science and Technology
11-26 January 2022
- Fully Online -
To be published open-access as part of the IS&T Proceedings of Electronic Imaging.
3D Theater Partners:
Conference Chairs:
Gregg Favalora, Draper (United States)
Nicolas S. Holliman, King's College London (United Kingdom)
Takashi Kawai, Waseda University (Japan)
Andrew J. Woods, Curtin University (Australia)
Founding Chair:
John O. Merritt (in memoriam), The Merritt Group (United States)
Program Committee:
Neil A. Dodgson, Victoria University of Wellington (New Zealand)
Justus Ilgner, University Hospital Aachen (Germany)
Eric Kurland, 3-D SPACE Museum (United States)
Bjorn Sommer, University of Konstanz (Germany)
John D. Stern, Intuitive Surgical, Retired (United States)
Chris Ward, Lightspeed Design, Inc. (United States)
Laurie Wilcox, York University (Canada)
* Most times quoted below are in the New York timezone (UTC-5). Please adjust for your own timezone. All SD&A conference and EI symposium events and activities are being held fully online.
[ Quick Links: SD&A Session 1, SD&A Session 2, SD&A Session 3, SD&A Keynote, 3D Theater, Short Course ]
[ ERVR Session 1, ERVR Session 2 ]
[ Register ]
Short Course: Stereoscopic Imaging Fundamentals - Part 1
More information (separate payment required for short course)
18:30 - 20:45 * New York Timezone
Short Course: Stereoscopic Imaging Fundamentals - Part 2
More information (separate payment required for short course)
18:30 - 20:45 * New York Timezone
SD&A 3D Theater Session
The 3D Theater Session at each year's Stereoscopic Displays and Applications conference showcases the wide variety of 3D content that is being used, produced, and exhibited around the world.
There are three separately scheduled screenings to suit different time zones around the world; all three screenings show the same content.
The screenings will be streamed via YouTube in both red/cyan anaglyph and 3DTV-compatible over-under formats - be sure to choose the correct 3D stream.
To get ready for the event, obtain a pair of red (left) / cyan (right) anaglyph glasses, or warm up your 3DTV with appropriate 3D glasses at the ready!
Registration for the SD&A conference is paid, but registration for just the SD&A 3D Theater session is free via this link.
You'll receive the YouTube links for the session once you register.
| Screening | New York Timezone (UTC-5) | Tokyo Timezone (UTC+9) | Paris Timezone (UTC+1) |
| First Screening (Premiere) - Free 3D Theater registration: Click here | 19:30 ‑ 20:30 Sat 15 Jan | 09:30 ‑ 10:30 Sun 16 Jan | 01:30 ‑ 02:30 Sun 16 Jan |
| Second Screening - Free 3D Theater registration: Click here | 05:30 ‑ 06:30 Sun 16 Jan | 19:30 ‑ 20:30 Sun 16 Jan | 11:30 ‑ 12:30 Sun 16 Jan |
| Final Screening - Free 3D Theater registration: Click here | 13:30 ‑ 14:30 Sun 16 Jan | 03:30 ‑ 04:30 Mon 17 Jan | 19:30 ‑ 20:30 Sun 16 Jan |
More information.
EI Plenary 1
10:00 - 11:10 * New York Timezone
IS&T Welcome
PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town
Eric R. Fossum,
Dartmouth College (United States)
Abstract: The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor: one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create grayscale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed within just a few years.
Biography: Eric R. Fossum is best known for the invention of the CMOS image sensor "camera-on-a-chip" used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize, considered by many to be the Nobel Prize of engineering, from HRH Prince Charles, "for the creation of digital imaging sensors," along with three others. He was inducted into the National Inventors Hall of Fame and elected to the National Academy of Engineering, among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.
Electronic Imaging 2022 Online Welcome Reception
11:10 - 11:40 * New York Timezone
EI Plenary 2
10:00 - 11:15 * New York Timezone
IS&T Awards
PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges
Larry Matthies,
Jet Propulsion Laboratory (United States)
Abstract: This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth's moon, and Saturn's moon, Titan.
Biography: Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989) before joining JPL, where he has supervised the Computer Vision Group for 21 years, spending the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that have impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE's Robotics and Automation Award for his contributions to robotic space exploration.
Electronic Imaging 2022 Online Poster Session
11:20 - 12:20 * New York Timezone
Stereoscopic Displays and Applications SESSION 1
SD&A Keynote Session
Session Chair: Andrew Woods, Curtin University (Australia)
Moderator: Bjorn Sommer, Royal College of Art (United Kingdom)
Mon. 10:00 - 11:05 * New York Timezone
10:00 * : Conference Introduction
SD&A Keynote
10:05 * New York Timezone
Tasks, traps, and tricks of a minion making 3D magic
John R. Benson, Illumination Entertainment (France) [SD&A-267] [Presentation Only]
Abstract: "Um, this movie is going to be in stereo, like, 3D? Do we have to wear the glasses? How do we do that? How expensive is it going to be? And more importantly, if I buy that tool you wanted, can you finish the movie a week faster? No, ok, then figure it out for yourself. Go on, you can do it. We have faith…" And so it begins. From Coraline to Sing 2, with Despicable Me, Minions, Pets, and a few Dr. Seuss films, John Benson has designed the look and developed processes for making the stereo films of Illumination Entertainment both cost efficient and beneficial to the final film, whether as 2D or 3D presentations. He will discuss his workflow and design thoughts, as well as the philosophy of how he uses stereo visuals as a storytelling device and definitely not a gimmick.
Biography: John R. Benson began his professional career in the camera department, shooting motion control and animation for "Pee-wee's Playhouse" in the mid-1980s. He has been a visual effects supervisor for commercials in New York and San Francisco, managed the CG commercials division for Industrial Light and Magic, and was a compositor for several films, including the Matrix sequels and Peter Jackson's "King Kong". After "Kong", he helped design the motion control systems and stereo pipeline for Laika's "Coraline". Since 2009, he has been working for Illumination Entertainment in Paris, France as the Stereographic Supervisor for the "Despicable Me" series, "Minions", several Dr. Seuss adaptations, the "Secret Life of Pets" series, and both "Sing" films. Together, the Illumination projects have grossed over $6.7 billion worldwide.
SD&A Invited Presenter 1
10:45 * New York Timezone
Multiple independent viewer stereoscopic projection (Invited)
Steve Chapman, Digital Projection Limited (United Kingdom) [SD&A-268] [Presentation Only]
Abstract: One of the major shortcomings of immersive visualization is the problem of isolation. Although the cost and performance of head-mounted displays have improved greatly over recent years, they still suffer from fundamental issues: they fail to blend real and virtual environments, potentially resulting in discomfort, disorientation, and disassociation from other participants.
Combinations of stereo projection and head-tracking have been used to provide a single observer with a convincing virtual environment that can be updated in real time to maintain the correct perspective as the viewer moves within the model.
Multi-view technology extends the capability of stereo projection and head-tracking to enable multiple viewers to observe the same model and environment, each from a perspective appropriate to their changing position. Viewers can move independently and see one another's movements and articulations, maintaining natural human interaction and collaboration.
Multi-view has been achieved by the development of very high frame rate, high-resolution projection and fast-switching active glasses. Together, these have enabled time-division-multiplexed presentation of images to multiple observers.
To date we have demonstrated and installed systems where up to six independent stereo views are displayed at native 4K resolution, flicker-free (120 fps per viewer), with excellent greyscale, colour, and luminance.
Biography: Steve Chapman studied physics and mathematics (BSc Hons) at Leeds University (UK), followed by a master's degree in applied optics at the University of Salford. He joined the Rank Organisation in 1994 and was a founding member of the newly formed Digital Projection Limited in 1996. Digital Projection (and previously Rank) was Texas Instruments' founding partner in the development of DLP projection technology; the first moving images were displayed at its site in Manchester, UK. Having been instrumental in the development of multiple generations of projectors as the technology evolved, Steve is now Head of R&D at Digital Projection Limited.
Session Break
Mon. 11:05 - 11:30 * New York Timezone
Engineering Reality of Virtual Reality SESSION 1
View/Narrative/Actions in Virtual Reality
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
Mon. 11:30 - 12:30 * New York Timezone
11:30 * : Novel view synthesis in embedded virtual reality devices, Laurie Van Bogaert1, Daniele Bonatto1,2, Sarah Fachada1, and Gauthier Lafruit1; 1Université Libre de Bruxelles and 2Vrije Universiteit Brussel (Belgium) [ERVR-269]
11:50 * : ReCapture: A virtual reality interactive narrative experience concerning photography, perspectives, and self-understanding, Indira Avendano, Stephanie Carnell, and Carolina Cruz-Neira, University of Central Florida (United States) [ERVR-270]
12:10 * : Erasmus XR – Immersive experiences in European academic institutions, Adnan Hadziselimovic, University of Malta (Malta) [ERVR-271]
Session Break
Mon. 12:30 - 13:00 * New York Timezone
SD&A SESSION 2
Stereoscopic Applications I
Session Chair: Nicolas Holliman, King's College London (United Kingdom)
Moderator: Gregg Favalora, The Charles Stark Draper Laboratory, Inc. (United States)
Mon. 13:00 - 14:20 * New York Timezone
13:00 * : The association of vision measures with simulated air refueling task performance using a stereoscopic display, Eleanor O'Keefe1, Matthew Ankrom1, Charles Bullock2, Eric Seemiller1, Marc Winterbottom2, Jonelle Knapp2, and Steven Hadley2; 1KBR and 2US Air Force (United States) [SD&A-289]
13:20 * : Towards an immersive virtual studio for innovation design engineering, Bjorn Sommer1, Ayn Sayuti2, Zidong Lin1, Shefali Bohra1, Emre Kayganaci1, Caroline Yan Zheng1, Chang Hee Lee3, Ashley Hall1, and Paul Anderson1; 1Royal College of Art (United Kingdom), 2Universiti Teknologi MARA (UiTM) (Malaysia), and 3Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea) [SD&A-290]
SD&A Invited Presenter 2
13:40 * New York Timezone
Underwater 360 3D cameras: A summary of Hollywood and DoD applications (Invited)
Casey Sapp, Blue Ring Imaging (United States) [SD&A-291] [Presentation Only]
Abstract: Since 2015, when Casey Sapp created his first underwater 360 3D camera array, his projects have spanned Hollywood, natural history, science, autonomous vehicle navigation, and the DoD. Casey will provide a brief history of underwater 360 camera technology, focusing on underwater 360 3D camera technology's increasing adoption in the marketplace.
Biography: Casey Sapp is CEO of Blue Ring Imaging. His background is in underwater technology innovation for Hollywood, creating many first-of-their-kind systems, including the first 360 3D underwater camera system, the first underwater 360 live broadcast (GMA), the first ROV VR piloting system (MBARI), the first 360 cinematic camera on a submarine, and the highest-resolution underwater cinematic camera system in the world (MSG Sphere). Blue Ring Imaging has been awarded three SBIRs to expand its ROV VR offering in the Navy and Air Force for teleoperation and artificial intelligence applications.
SD&A Invited Presenter 3
14:00 * New York Timezone
Why Simulated Reality will be the driver for the Metaverse and 3D immersive visualization in general (Invited)
Maarten Tobias, Dimenco B.V. (the Netherlands) [SD&A-292] [Presentation Only]
Abstract: Ease of use and access to devices will largely determine how people are able to experience the Metaverse and immersive content. 3D displays that need no wearables and that interact with the end user so that they feel part of the experience will provide exactly that. If such 3D display technology is relatively low cost, can be easily integrated into existing devices, and leverages known 3D formats and XR standards such as OpenXR, it will drive the adoption of immersive content in general.
The large tech companies pushing this immersive experience will in particular drive demand for accessible and cost-effective display solutions. Although their focus is often on head-mounted devices, the market agrees that the majority of devices is, and will remain, devices as we know them today, such as laptops, mobile phones, and monitors. An important requirement for the transition to immersive devices is therefore that they can still be used as normal devices. Switchable display technology is thus a must to facilitate this transition and offer an easily accessible immersive experience.
Biography: Maarten holds a master's in strategic management from Tilburg University and held various commercial and strategic roles at Philips before founding Dimenco in 2010. He introduced the concept of Simulated Reality (SR) in 2017 and has since worked closely with the technical and commercial teams to implement this vision. Since 2019 he has also been a member of the supervisory board of Morphotonics, and he advises several start-ups on strategic challenges.
Session Break
Mon. 14:40 - 18:00 * New York Timezone
Engineering Reality of Virtual Reality SESSION 2
Simulation, Embodiment, and Active Shooters in VR
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
Mon. 18:00 - 19:00 * New York Timezone
18:00 * : VirtualForce: Simulating writing on a 2D-surface in virtual reality, Ziyang Zhang and Jurgen P. Schulze, University of California, San Diego (United States) [ERVR-297]
18:20 * : A systematic review of embodied information behavior in shared, co-present extended reality experiences, Kathryn Hays1, Ruth West1, Christopher Lueg2, Arturo Barrera1, Lydia Ogbadu Oladapo1, Olumuyiwa Oyedare1, Julia Payne1, Mohotarema Rashid1, Jennifer Stanley1, and Lisa Stocker1; 1University of North Texas and 2University of Illinois at Urbana-Champaign (United States) [ERVR-298]
18:40 * : Immersive virtual reality training module for active shooter events, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States) [ERVR-299]
Session Break
Mon. 19:00 - 19:15 * New York Timezone
SD&A SESSION 3
Stereoscopic Applications II
Session Chair: Andrew Woods, Curtin University (Australia)
Moderator: Takashi Kawai, Waseda University (Japan)
Mon. 19:15 - 20:15 * New York Timezone
19:15 * : Evaluation and estimation of discomfort during continuous work with mixed reality systems by deep learning, Yoshihiro Banchi, Kento Tsuchiya, Masato Hirose, Ryu Takahashi, Riku Yamashita, and Takashi Kawai, Waseda University (Japan) [SD&A-309]
19:35 * : 360° see-through full-parallax light-field display using Holographic Optical Elements, Reiji Nakashima and Tomohiro Yendo, Nagaoka University of Technology (Japan) [SD&A-310]
19:55 * : An aerial floating naked-eye 3D display using crossed mirror arrays, Yoshihiro Sato, Yuto Osada, and Yue Bao, Tokyo City University (Japan) [SD&A-311]
EI Plenary 3 and Discussion Panel
10:00 - 11:15 * New York Timezone
PLENARY: Physics-based Image Systems Simulation
Joyce Farrell,
Executive Director, Stanford Center for Image Systems Engineering, Stanford University; CEO and Co-founder, ImagEval Consulting (United States)
Abstract: Three quarters of a century ago, visionaries in academia and industry saw the need for a new field called photographic engineering and formed what would become the Society for Imaging Science and Technology (IS&T). Thirty-five years ago, IS&T recognized the massive transition from analog to digital imaging and created the Symposium on Electronic Imaging (EI). IS&T and EI continue to evolve by cross-pollinating electronic imaging in the fields of computer graphics, computer vision, machine learning, and visual perception, among others. This talk describes open-source software and applications that build on this vision. The software combines quantitative computer graphics with models of optics and image sensors to generate physically accurate synthetic image data for devices that are being prototyped. These simulations can be a powerful tool in the design and evaluation of novel imaging systems, as well as for the production of synthetic data for machine learning applications.
Biography: Joyce Farrell is a senior research associate and lecturer in the Stanford School of Engineering and the executive director of the Stanford Center for Image Systems Engineering (SCIEN). Joyce received her BS from the University of California at San Diego and her PhD from Stanford University. She was a postdoctoral fellow at NASA Ames Research Center, New York University, and Xerox PARC, before joining the research staff at Hewlett Packard in 1985. In 2000 Joyce joined Shutterfly, a startup company specializing in online digital photofinishing, and in 2001 she formed ImagEval Consulting, LLC, a company specializing in the development of software and design tools for image systems simulation. In 2003, Joyce returned to Stanford University to develop the SCIEN Industry Affiliates Program.
The Brave New World of Virtual Reality: A Panel Discussion
Advances in electronic imaging, computer graphics, and machine learning have made it possible to create photorealistic images and videos. In the future, one can imagine that it will be possible to create a virtual reality that is indistinguishable from real-world experiences. This panel discusses the benefits of this brave new world of virtual reality and how we can mitigate the risks that it poses.
The goal of the panel discussion is to showcase state-of-the-art synthetic imagery, learn how this progress benefits society, and discuss how we can mitigate the risks the technology also poses. After brief demonstrations of the state of the art, the panelists will discuss these benefits and risks.
Moderated by: Joyce Farrell
Confirmed Panelists:
Matthias Niessner, professor, Technical University of Munich, on creating photorealistic avatars
Paul Debevec, director of research, creative algorithms and technology at Netflix, on Project Shoah
Hany Farid, professor, University of California, Berkeley, on digital forensics
See other Electronic Imaging Symposium events at: www.ElectronicImaging.org