Advance Conference Program:
The World's Premier Conference for 3D Innovation
Videos of many of the presentations at the conference are available for free viewing via the "Video" icons in the program listing below.
Monday-Wednesday 9-11 February 2015
Hilton San Francisco Union Square Hotel, San Francisco, California, USA.
To be published as Proceedings of Electronic Imaging, IS&T/SPIE Vol. 9391
Part of IS&T/SPIE's International Symposium on Electronic Imaging: Science and Technology
Sunday-Thursday 8-12 February 2015 • Hilton San Francisco Union Square Hotel,
San Francisco, California, USA
Program Committee:
Neil A. Dodgson, Univ. of Cambridge (United Kingdom);
Davide Gadia, Univ. degli Studi di Milano (Italy);
Hideki Kakeya, Univ. of Tsukuba (Japan);
John D. Stern, Intuitive Surgical, Retired (United States);
Vivian K. Walworth, StereoJet Inc. (United States);
Chris Ward, Lightspeed Design (United States);
Michael A. Weissman, Perspective Systems (United States);
Samuel Z. Zhou, IMAX Corp. (China).
SESSION 1
High Parallax Displays
Session Chair: Hideki Kakeya, Univ. of Tsukuba (Japan)
Mon. 8:30 to 9:10 am
8:30 am: Enhancement of the effective viewing window for holographic display with amplitude-only SLM, Geeyoung Sung, Jungkwuen An, Hong-Seok Lee, Sun Il Kim, Song Hoon, Juwon Seo, Hojung Kim, Wontaek Seo, Chil-Sung Choi, U-in Chung, Samsung Advanced Institute of Technology (Korea, Republic of) [9391-1]
8:50 am: A full parallax 3D display with restricted viewing zone tracking viewer's eye, Naoto Beppu, Tomohiro Yendo, Nagaoka Univ. of Technology (Japan) [9391-2]
SD&A Welcome and Opening Remarks
Mon. 9:10 to 9:20 am
SD&A Keynote Session 1
Session Chair: Nicolas S. Holliman, The Univ. of York (United Kingdom)
Mon. 9:20 to 10:20 am
A Stereoscope for the PlayStation Generation [9391-50]
Ian Bickerstaff, Sony Computer Entertainment (United Kingdom)
Abstract: After many years of waiting, virtual reality will soon be available for home use. Smartphones have given us small, high-quality displays and accurate movement tracking, while the games industry has given us the real-time graphics power needed to drive these displays. In addition, advances in technologies such as free-form optics and binaural audio processing have arrived at just the right time.
More than just viewing images on a screen, the aim of ventures such as Sony Computer Entertainment's Project Morpheus is to produce a system that convinces wearers that they have been transported to another place, and the display system is a vital component. Ever since the beginning of photography, equipment has been created to achieve this goal: an 1850s Brewster stereoscope contains many design features found in the latest HMDs. In both, near ortho-stereoscopic viewing conditions ensure that subjects appear life-sized and with realistic depth placement. Unlike a monitor or cinema screen, images are always seen from an optimum viewing position, with keystone distortion and vertical parallax kept to a minimum. A far greater range of depth can be viewed comfortably on a head-mounted display than is possible on a conventional screen.
Unlike Victorian stereoscopes, the latest devices offer a wide field of view using techniques pioneered by Eric Howlett with his LEEP VR system in the early 1980s. Screen edges are pushed into peripheral vision so that the concept of a stereo window is no longer relevant. Pincushion distortion is used to increase visual acuity in the centre of vision, mimicking the characteristics of the human eye.
To complete the illusion, high-frequency data from accelerometers and gyros are fused with lower-frequency camera data to provide accurate, low-latency tracking of the viewer's head position and orientation. Ingenious new techniques create the illusion of zero latency, drastically reducing the potential for viewer disorientation.
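As a rough illustration of this kind of sensor fusion (purely illustrative, with hypothetical names; the actual Morpheus tracking pipeline is not described here), a minimal complementary-filter sketch blends integrated gyro rates with slower absolute camera measurements:

    import numpy as np

    def fuse_orientation(gyro_rates, camera_angles, dt, alpha=0.98):
        # gyro_rates: high-rate angular velocity samples (rad/s), one per IMU tick
        # camera_angles: absolute orientation measurements (rad); np.nan on ticks
        #     where no camera frame is available (the camera runs slower than the IMU)
        # dt: IMU sample period (s); alpha: weight given to the integrated gyro estimate
        angle = 0.0
        fused = []
        for rate, cam in zip(gyro_rates, camera_angles):
            angle += rate * dt                             # low-latency gyro integration
            if not np.isnan(cam):                          # occasional camera fix arrives
                angle = alpha * angle + (1 - alpha) * cam  # blend in to cancel gyro drift
            fused.append(angle)
        return np.array(fused)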
So how do we create images for these displays? From a software point of view, the main challenge is to achieve guaranteed high frame rates while avoiding pixel aliasing. Using stereography to manage 3D settings is not required, though; in fact, any unexpected departure from ortho-stereoscopic viewing could lead to viewer disorientation.
One challenge relevant to this conference is how to photograph and display real-world subjects in a virtual reality system. Even basic 360-degree photography is difficult enough without also capturing the three dimensions necessary for these displays. Multi-camera rigs generate image-stitching errors across the joins, caused by the very parallax necessary for binocular depth cues. An even more fundamental problem is how these images should be encoded: how can the parallax baked into an image be correct for every viewing direction? It is surprising that, despite the maturity of conventional 3D photography, capturing 360-degree 3D images is still in its infancy.
Virtual reality technology is developing faster now than ever before but the more we discover, the more we realise how little we actually know. This presents an enormous opportunity for everyone to define the knowledge that will be so important in the future.
Author Biography:
Ian Bickerstaff is the technical director of Sony Computer Entertainment's Immersive Technology Group and a founding member of the company's virtual reality effort, Project Morpheus. His background is in stereoscopic 3D and visual simulation. For 15 years he worked in the aerospace industry, developing immersive displays for military flight simulators, and he played a major part in developing the world's first Formula 1 driving simulator for a top motor racing team. At Sony, after developing virtual camera systems for a major driving game, he helped introduce stereoscopic functionality to the PlayStation 3 platform. He is now producing technology to bring virtual reality to the living room.
Coffee Break
10:20 - 10:50 am
SESSION 2
Camera Designs
Session Chair: John D. Stern, Intuitive Surgical, Inc. (Retired) (United States)
Mon. 10:50 am to 12:30 pm
10:50 am: 3D UHDTV contents production with 2/3 inch sensor cameras, Alaric C. Hamacher, Sunil P. Pardeshi, Kwangwoon Univ. (Korea, Republic of); Taeg Keun Whangboo, Gachon Univ. (Korea, Republic of); SeungHyun Lee, Kwangwoon Univ. (Korea, Republic of) [9391-3]
11:10 am: Integral three-dimensional capture system with enhanced viewing angle by using camera array, Masato Miura, Naoto Okaichi, Jun Arai, Tomoyuki Mishina, NHK Japan Broadcasting Corp. (Japan) [9391-4]
11:30 am: A stereoscopic lens for digital cinema cameras, Lenny Lipton, Leonardo IP (United States); John A. Rupkalvis, StereoScope International (United States) [9391-5]
11:50 am: A novel optical design for light field acquisition using camera array, Mei Zhang, Zheng Geng, Zhaoxing Zhang, Institute of Automation (China) [9391-6]
12:10 pm: Real-time viewpoint image synthesis using strips of multi camera images, Munekazu Date, Hideaki Takada, Akira Kojima, Nippon Telegraph and Telephone Corp. (Japan) [9391-7]
Lunch Break
12:30 - 2:00 pm
SESSION 3
Applications
Session Chair: Takashi Kawai, Waseda Univ. (Japan)
Mon. 2:00 - 3:20 pm
2:00 pm: Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation, Jonathan H. Purdy, Univ. of Bradford (United Kingdom); Alexander Foss, Nottingham Univ. Hospitals NHS Trust (United Kingdom); Richard M. Eastgate, The Univ. of Nottingham (United Kingdom); Daisy MacKeith, Nottingham Univ. Hospitals NHS Trust (United Kingdom); Nicola Herbison, The Univ. of Nottingham (United Kingdom); Anthony Vivian, Nottingham Univ. Hospitals NHS Trust (United Kingdom) [9391-8]
2:20 pm: Stereoscopic visualization of 3D volumetric data for patient-individual skull base prosthesis prior to manufacturing, Justus F. Ilgner, Martin Westhofen, Univ. Hospital Aachen (Germany) [9391-9]
2:40 pm: Visual perception and stereoscopic imaging: an artist's perspective, Steve Mason, Yavapai College (United States) [9391-10]
3:00 pm: Assessing the benefits of stereoscopic displays to visual search: methodology and initial findings, Hayward J. Godwin, Univ. of Southampton (United Kingdom); Nicolas S. Holliman, The Univ. of York (United Kingdom); Tamaryn Menneer, Simon P. Liversedge, Univ. of Southampton (United Kingdom); Kyle R. Cave, Univ. of Massachusetts Amherst (United States); Nicholas Donnelly, Univ. of Southampton (United Kingdom) [9391-11]
Coffee Break
3:20 - 4:00 pm
SESSION 4
Light Field Displays
Session Chair: Hideki Kakeya, Univ. of Tsukuba (Japan)
Mon. 4:00 to 5:20 pm
4:00 pm: Small form factor full parallax tiled light field display, Zahir Y. Alpaslan, Hussein S. El-Ghoroury, Ostendo Technologies, Inc. (United States) [9391-12]
4:20 pm: Load-balancing multi-LCD light field display, Xuan Cao, Zheng Geng, Mei Zhang, Xiao Zhang, Institute of Automation (China) [9391-13]
4:40 pm: Light field display simulation for light field quality assessment, Rie Matsubara, Zahir Y. Alpaslan, Hussein S. El-Ghoroury, Ostendo Technologies, Inc. (United States) [9391-14]
5:00 pm: Integration of real-time 3D capture, reconstruction, and light-field display, Zhaoxing Zhang, Zheng Geng, Tuotuo Li, Institute of Automation (China); Yongchun Liu, Nanjing Univ. of Aeronautics and Astronautics (China); Xiao Zhang, Jiangsu Univ. (China) [9391-15]
SD&A 3D Theatre
Session Chairs: Chris Ward, Lightspeed Design, Inc. (United States); John D. Stern, Intuitive Surgical, Retired (United States); Andrew J. Woods, Curtin Univ. of Technology (Australia)
Mon. 5:30 to 7:30 pm
This ever-popular event allows attendees to see large-screen examples of 3D content from around the world.
Program announced at the conference. 3D glasses provided.
SD&A Conference Annual Dinner
Mon. 7:50 to 10:00 pm
The annual informal dinner for SD&A attendees: an opportunity to meet with colleagues and discuss the latest advances. The dinner is no-host (attendees pay their own way). Information on the venue and cost will be provided at the conference.
EI Plenary Session and Society Award Presentations
Tue. 8:30 to 9:50 am
"Analyzing Social Interactions through Behavioral Imaging" [9391-500]
James M. Rehg, Georgia Institute of Technology (United States)
Abstract: Beginning in infancy, individuals acquire the social and communication skills that are vital for a healthy and productive life. Children with developmental delays face great challenges in acquiring these skills, resulting in substantial lifetime risks. Children with an Autism Spectrum Disorder (ASD) represent a particularly significant risk category, due both to the increasing rate of diagnosis of ASD and to its consequences. Since the genetic basis for ASD is unclear, the diagnosis, treatment, and study of the disorder depend fundamentally on the observation of behavior. In this talk, I will describe our research agenda in Behavioral Imaging, which targets the capture, modeling, and analysis of social and communicative behaviors between children and their caregivers and peers. We are developing computational methods and statistical models for the analysis of vision, audio, and wearable sensor data. Our goal is to develop a new set of capabilities for the large-scale collection and interpretation of behavioral data. I will describe several research challenges in multi-modal sensor fusion and statistical modeling which arise in this area, and present illustrative results from the analysis of social interactions with children and adults.
SESSION 5
Autostereoscopic Displays
Session Chair: Hideki Kakeya, Univ. of Tsukuba (Japan)
Tue. 10:10 - 11:30 am
10:10 am: A large 1D retroreflective autostereoscopic display, Quinn Y. Smithwick, Disney Research, Los Angeles (United States); Nicola Ranieri, ETH Zürich (Switzerland) [9391-16]
10:30 am: Time-sequential lenticular display with layered LCD panels, Hironobu Gotoda, National Institute of Informatics (Japan) [9391-17]
10:50 am: Dual side transparent OLED 3D display using Gabor super-lens, Sergey Chestak, Dae-Sik Kim, Sung-Woo Cho, SAMSUNG Electronics Co., Ltd. (Korea, Republic of) [9391-18]
11:10 am: 360-degree three-dimensional flat panel display using holographic optical elements, Hirofumi Yabu, Osaka City Univ. (Japan); Kayo Yoshimoto, Osaka Univ. (Japan); Hideya Takahashi, Osaka City Univ. (Japan); Kenji Yamada, Osaka Univ. (Japan) [9391-19]
SD&A Keynote Session 2
Session Chair: Nicolas S. Holliman, The Univ. of York (United Kingdom)
Tue. 11:30 am - 12:30 pm
What is stereoscopic vision good for? [9391-49]
Jenny C. A. Read, Newcastle Univ. (United Kingdom)
Abstract: Stereoscopic vision has been described as "one of the glories of nature". Humans can detect disparities between the two eyes' images which are less than the diameter of one photoreceptor. But when we close one eye, the most obvious change is the loss of peripheral vision rather than any alteration in perceived depth. Many people are stereoblind without even realising the fact. So what is stereoscopic vision actually good for? In this wide-ranging keynote address, I will consider some possible answers, discussing some of the uses stereo vision may have in three different domains: in evolution, in art and in medicine.
Stereo vision incurs significant costs, e.g. the duplication of resources to cover a region of the visual field twice, and the neuronal tissue needed to extract disparities. Nevertheless, it has evolved in many animals including monkeys, owls, horses, sheep, toads and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. I will discuss my lab's attempts to gain insight into these questions by studying stereo vision in an insect system, the praying mantis.
In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. For example, by reducing the salience of the picture surface, it can affect our ability to correct for factors such as oblique viewing. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing.
Stereo vision also has uses in ophthalmology. The sub-photoreceptor level of stereoacuity referred to in my opening paragraph requires the entire visual system to be functioning optimally: the optics and retina of both eyes, the brain areas which control eye movements, the muscles which move the eyes, and the brain areas which extract disparity. Thus, assessing stereoscopic vision provides an immediate, non-invasive assessment of binocular visual function. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.
Author Biography:
Jenny Read leads a multidisciplinary team researching many aspects of vision, especially 3D or stereoscopic vision. Her research interests include how vision may be altered in clinical conditions like strabismus or epilepsy, how viewers perceive depth in 3D displays, and how 3D vision works in insects. Read's scientific career began with a doctorate in theoretical astrophysics at Oxford University before moving into visual neuroscience. After four years in the USA working at the National Institutes of Health, in 2005 she returned to the UK with a research fellowship from the Royal Society, Britain's national academy of science. She is now a Reader (Associate Professor) in Vision Science at Newcastle University's Institute of Neuroscience in the beautiful North East of England.
Lunch Break
12:30 - 2:00 pm
SESSION 6
Human Factors and Performance
Session Chair: John O. Merritt, The Merritt Group (United States)
Tue. 2:00 - 3:20 pm
2:00 pm: Subjective contrast sensitivity function assessment in stereoscopic viewing of Gabor patches, Johanna Rousson, Jérémy Haar, Barco N.V. (Belgium); Ljiljana Platiša, Univ. Gent (Belgium); Arnout Vetsuypens, Bastian Piepers, Tom R. Kimpe, Barco N.V. (Belgium); Wilfried Philips, Univ. Gent (Belgium) [9391-20]
2:20 pm: An objective method for 3D quality prediction using perceptual thresholds and acceptability, Darya Khaustova, Orange SA (France); Olivier Le Meur, Univ. de Rennes 1 (France); Jerome Fournier, Emmanuel Wyckens, Orange SA (France) [9391-21]
2:40 pm: Disparity modification in stereoscopic images for emotional enhancement, Takashi Kawai, Daiki Atsuta, Sanghyun Kim, Waseda Univ. (Japan); Jukka P. Häkkinen, Univ. of Helsinki (Finland) [9391-22]
3:00 pm: Preference for motion and depth in 3D film, Brittney A. Hartle, York Univ. (Canada); Arthur Lugtigheid, Univ. of Southampton (United Kingdom); Ali Kazimi, Robert S. Allison, Laurie M. Wilcox, York Univ. (Canada) [9391-23]
Coffee Break
3:20 - 4:00 pm
SESSION 7
Visual Comfort Studies
Session Chair: Takashi Kawai, Waseda Univ. (Japan)
Tue. 4:00 - 5:20 pm
4:00 pm: Microstereopsis is good, but orthostereopsis is better: precision alignment task performance and viewer discomfort with a stereoscopic 3D display, John P. McIntire, Paul R. Havig, Air Force Research Lab. (United States); Lawrence K. Harrington, Ball Aerospace & Technologies Corp. (United States); Steve T. Wright, U.S. Air Force (United States); Scott N. J. Watamaniuk, Wright State Univ. (United States); Eric L. Heft, Air Force Research Lab. (United States) [9391-24]
4:20 pm: Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays, Sangwook Baek, Chulhee Lee, Yonsei Univ. (Korea, Republic of) [9391-25]
4:40 pm: Subjective and objective evaluation of visual fatigue on viewing 3D display continuously, Danli Wang, Yaohua Xie, Yang Lu, Institute of Software (China) [9391-26]
5:00 pm: Study of objective parameters of 3D visual fatigue based on RDS related tasks, Yi Huang, Yue Liu, Bochao Zou, Dongdong Weng, Beijing Institute of Technology (China) [9391-27]
Symposium Demonstration Session and Interactive Paper Session
Tue. 5:30 - 7:30 pm
Demonstrations
A symposium-wide demonstration session will be open to attendees from 5:30 to 7:30 pm on Tuesday evening. Demonstrators will provide interactive, hands-on demonstrations of a wide range of products related to Electronic Imaging. The session will include a dedicated "Stereoscopic Displays and Applications" area hosting a large collection of stereoscopic products, a perfect opportunity to see a wide array of stereoscopic displays with your own two eyes.
Posters
The poster session, with authors present, will be held Tuesday evening.
The full listing of poster papers appears below.
Interactive Paper (Poster) Session
Interactive papers will be placed on display after 10:00 am on Tuesday. An interactive paper session, with authors present at their papers, will be held Tuesday evening.
Refreshments will be served.
The Interactive Paper (Poster) Session runs in parallel with the SD&A / EI Symposium Demonstration Session.
Tue. 5:30 - 7:00 pm
- Enhancement of viewing angle with homogenized brightness for autostereoscopic display with lens-based directional backlight, Takuya Mukai, Hideki Kakeya, Univ. of Tsukuba (Japan) [9391-40]
- Effect of Petzval curvature on integral imaging display, Ganbat Baasantseren, National Univ. of Mongolia (Mongolia) [9391-41]
- [9391-42] moved to Session 8
- Free-viewpoint video synthesis from mixed resolution multi-view images and low resolution depth maps, Takaaki Emori, Nagoya Univ. Graduate School of Engineering (Japan); Mehrdad Panahpour Tehrani, Keita Takahashi, Nagoya Univ. (Japan); Toshiaki Fujii, Nagoya Univ. Graduate School of Engineering (Japan) [9391-43]
- Formalizing the potential of stereoscopic 3D user experience in interactive entertainment, Jonas Schild, Consultant (Germany) [9391-44]
- Development of binocular eye tracker system via virtual data, Frank Hofmeyer, Sara Kepplinger, Technische Univ. Ilmenau (Germany); Manuel Leonhardt, Nikolaus Hottong, Hochschule Furtwangen Univ. (Germany) [9391-45]
- Two CCD cameras stereoscopic position measurement for multi fiber positioners on ground-based telescope, Zengxiang Zhou, Hongzhuan Hu, Jianping Wang, Jiaru Chu, Zhigang Liu, Univ. of Science and Technology of China (China) [9391-46]
- Usability of stereoscopic view in teleoperation, Wutthigrai Boonsuk, Eastern Illinois Univ. (United States) [9391-47]
- Using binocular and monocular properties for the construction of a quality assessment metric for stereoscopic images, Mohamed-Chaker Larabi, Univ. of Poitiers (France); Iana Iatsun, XLIM-SIC (France) [9391-48]
- Dynamic mapping for multiview autostereoscopic displays, Jing Liu, Univ. of California, Santa Cruz (United States); Tom Malzbender, Cultural Heritage Imaging (United States); Siyang Qin, Bipeng Zhang, Che-An Wu, James Davis, Univ. of California, Santa Cruz (United States) [9391-51]
EI Plenary Session and EI Conference Award Presentations
Wed. 8:30 to 9:50 am
What Makes Big Visual Data Hard?
by Alexei (Alyosha) Efros, Univ. of California, Berkeley (United States) [9391-501]
Abstract: There are an estimated 3.5 trillion photographs in the world, of which 10% have been taken in the past 12 months. Facebook alone reports 6 billion photo uploads per month. Every minute, 72 hours of video are uploaded to YouTube. Cisco estimates that in the next few years, visual data (photos and video) will account for over 85% of total internet traffic. Yet we currently lack effective computational methods for making sense of all this mass of visual data. Unlike easily indexed content, such as text, visual content is not routinely searched or mined; it's not even hyperlinked. Visual data is the Internet's "digital dark matter" [Perona, 2010]: it's just sitting there! In this talk, I will first discuss some of the unique challenges that make Big Visual Data difficult compared to other types of content. In particular, I will argue that the central problem is the lack of a good measure of similarity for visual data. I will then present some of our recent work that aims to address this challenge in the context of visual matching, image retrieval, visual data mining, and interactive visual data exploration.
Coffee Break
9:50 - 10:10 am
SESSION 8
Image Processing
Session Chair: Davide Gadia, Univ. degli Studi di Milano (Italy)
Wed. 10:10 - 11:30 am
10:10 am: Multi-view stereo image synthesis using binocular symmetry based global optimization, Hak Gu Kim, Yong Ju Jung, Soosung Yoon, Yong Man Ro, KAIST (Korea, Republic of) [9391-28]
10:30 am: Depth assisted compression of full parallax light fields, Danillo Graziosi, Zahir Y. Alpaslan, Hussein S. El-Ghoroury, Ostendo Technologies, Inc. (United States) [9391-29]
10:50 am: A 3D mosaic algorithm using disparity map, Bo Yu, Hideki Kakeya, Univ. of Tsukuba (Japan) [9391-30]
11:10 am: Data conversion from multi-view cameras to layered light field display for aliasing-free 3D visualization, Toyohiro Saito, Keita Takahashi, Mehrdad P. Tehrani, Toshiaki Fujii, Nagoya Univ. (Japan) [9391-42]
Post inserted object calibration for stereo video rectification, Weiming Li, Samsung Advanced Institute of Technology (China) [9391-31]
SD&A Discussion Forum
Wed. 11:30 am - 12:30 pm
VR and 3D: Is good 3D necessary for good VR?
Stereoscopic 3D is now a well-established tool for filmmaking in cinema and television. The stereography of good 3D has been an important part of this process, enabling the best content to produce depth sensations that are both compelling and entertaining. We know that this success has come from an understanding of what is good in human terms, in display terms, and in content terms. Our panel of distinguished researchers in the fields of 3D and VR will discuss the role good 3D has in the VR experience, and in particular whether good stereoscopic 3D is necessary for VR to be immersive and compelling.
Moderator:
Lenny Lipton (Leonardo IP)
Panellists:
Ian Bickerstaff (Sony Computer Entertainment),
Carolina Cruz-Neira (University of Arkansas at Little Rock),
Margaret Dolinsky (Indiana University), and
Gordon Wetzstein (Stanford University).
Lunch Break
12:30 to 2:00 pm
SESSION 9
Multi-View and Integral Imaging Displays
Session Chair: Hideki Kakeya, Univ. of Tsukuba (Japan)
Wed. 2:00 - 3:20 pm
2:00 pm: A new type of multiview display, René de la Barré, Fraunhofer-Institut für Nachrichtentechnik Heinrich-Hertz-Institut (Germany); Silvio Jurk, Technical Univ. Berlin (Germany); Mathias Kuhlmey, Fraunhofer-Institut für Nachrichtentechnik Heinrich-Hertz-Institut (Germany) [9391-32]
2:20 pm: Compact multi-projection 3D display using a wedge prism, Byoungho Lee, Soon-gi Park, Chang-Kun Lee, Seoul National Univ. (Korea, Republic of) [9391-33]
2:40 pm: Integral 3D display using multiple LCDs, Naoto Okaichi, Masato Miura, Jun Arai, Tomoyuki Mishina, NHK Japan Broadcasting Corp. (Japan) [9391-34]
3:00 pm: A super multi-view display with small viewing zone tracking using directional backlight, Jin Miyazaki, Tomohiro Yendo, Nagaoka Univ. of Technology (Japan) [9391-35]
Coffee Break
3:20 - 3:50 pm
SESSION 10
Image Production and Perception
Session Chair: Davide Gadia, Univ. degli Studi di Milano (Italy)
Wed. 3:50 - 5:10 pm
3:50 pm: Real object-based 360 degree integral-floating display using multi depth camera, Munkh-Uchral Erdenebat, Erkhembaatar Dashdavaa, Ki-Chul Kwon, Kwan-Hee Yoo, Nam Kim, Chungbuk National Univ. (Korea, Republic of) [9391-36]
4:10 pm: Multi-layer 3D imaging using multiple viewpoint images and depth map, Hidetsugu Suginohara, Hirotaka Sakamoto, Satoshi Yamanaka, Mitsubishi Electric Corp. (Japan); Shiro Suyama, Univ. of Tokushima (Japan); Hirotsugu Yamamoto, Utsunomiya Univ. (Japan), The Univ. of Tokushima (Japan) [9391-37]
4:30 pm: Evaluation of vision training using 3D play game, Jungho Kim, Soon Chul Kwon, Kwang-Chul Son, SeungHyun Lee, Kwangwoon Univ. (Korea, Republic of) [9391-38]
4:50 pm: Partially converted stereoscopic images and the effects on visual attention and memory, Sanghyun Kim, Waseda Univ. (Japan); Hiroyuki Morikawa, Aoyama Gakuin Univ. (Japan); Reiko Mitsuya, Takashi Kawai, Waseda Univ. (Japan); Katsumi Watanabe, The Univ. of Tokyo (Japan) [9391-39]
SD&A Prizes and Closing Remarks
Nicolas S. Holliman, The Univ. of York (United Kingdom)
Wed. 5:10 to 5:30 pm
Meal Break
Wed. 5:30 to 8:00 pm
Time to grab a meal in a local San Francisco restaurant (while all the EI conference chairs have their big annual planning session). Don't forget to come back at 8:00 pm for the All-Conference Dessert Reception.
All-Conference Dessert Reception
Wed. 8:00 to 9:30 pm
The annual Electronic Imaging All-Conference Reception provides a wonderful opportunity to get to know and interact with new and old SD&A colleagues. Plan to join us for this relaxing and enjoyable event.