IEEE International Conference on Computational Photography 2019

The University of Tokyo, Japan
May 15-17, 2019

The field of Computational Photography seeks to create new photographic functionalities and experiences that go beyond what is possible with traditional cameras and image processing tools. The IEEE International Conference on Computational Photography is organized with the vision of fostering the community of researchers, from many different disciplines, working on computational photography.

Future Direction


ICCP 2019 Tokyo has ended successfully.
See you at ICCP 2020 in St. Louis!

Access

4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
Institute of Industrial Science, The University of Tokyo, Convention Hall (An block 2nd floor)

From Narita International Airport (NRT)

Transit

From Tokyo International Airport (HND)

Transit

Housing

Shibuya is a major terminal station close to the conference venue; we recommend booking one of the hotels below or any other accommodation near the station.

Shibuya Tokyu REI Hotel
JR-EAST Hotel Mets Shibuya
HOTEL UNIZO Tokyo Shibuya

Awards

Best Paper Award

Video from Stills: Lensless Imaging with Rolling Shutter
Nicholas Antipa, Patrick Oare, Emrah Bostan, Ren Ng, and Laura Waller

Demo Award

Slope Disparity Gating using a Synchronized Projector-Camera System
Tomoki Ueda, Hiroyuki Kubo, Suren Jayasuriya, Takuya Funatomi, and Yasuhiro Mukaigawa

Poster Award

PhaseCam3D - Learning Phase Masks for Passive Single View Depth Estimation
Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, and Ashok Veeraraghavan

Program

Wednesday May-15

9:00 am - Registration
10:30 am - 10:45 am Welcome
10:45 am - 11:45 am
Session 1: NLOS Imaging
Session Chair: Aswin C Sankaranarayanan (CMU)
Thermal Non-Line-of-Sight Imaging
Tomohiro Maeda (MIT)*; Yiqin Wang (UCLA); Ramesh Raskar (MIT); Achuta Kadambi (UCLA)
SNLOS: Non-line-of-sight Scanning through Temporal Focusing
Adithya Pediredla (Rice University)*; Akshat Dave (Rice University); Ashok Veeraraghavan (Rice University)
Corner Occluder Computational Periscopy: Estimating a Hidden Scene from a Single Photograph
Sheila W Seidel (Boston University)*; Yanting Ma (Boston University); John Murray-Bruce (Boston University); Charles Saunders (Boston University); Bill Freeman (MIT); Christopher Yu (Charles Stark Draper Laboratory); Vivek Goyal (Boston University)
12:00 pm - 1:30 pm Lunch
1:30 pm - 2:30 pm
Session 2: Imaging with Novel Optics and Sensors
Session Chair: Sara Abrahamsson (UCSC)
Wavelet Tree Parsing with Freeform Lensing
Vishwanath Saragadam Raja Venkata (Carnegie Mellon University)*; Aswin Sankaranarayanan (Carnegie Mellon University)
STORM: Super-resolving Transients by OveRsampled Measurements
Ankit Raghuram (Rice University)*; Adithya Pediredla (Rice University); Srinivasa Narasimhan (Carnegie Mellon University, USA); Ioannis Gkioulekas (Carnegie Mellon University); Ashok Veeraraghavan (Rice University)
2:30 pm - 3:00 pm Coffee break
3:00 pm - 4:00 pm
Session 3: Video and High Speed Photography
Session Chair: Ashok Veeraraghavan (Rice University)
Video from Stills: Lensless Imaging with Rolling Shutter
Nicholas Antipa (University of California Berkeley)*; Patrick Oare (UC Berkeley); Emrah Bostan (University of California, Berkeley); Ren Ng (UC Berkeley); Laura Waller (UC Berkeley)
A Bit Too Much? High Speed Imaging from Sparse Photon Counts
Paramanand Chandramouli (University of Siegen)*; Samuel Burri (EPFL); Claudio Bruschini (EPFL); Edoardo Charbon (EPFL); Andreas Kolb (University of Siegen)
Wireless Software Synchronization of Multiple Distributed Cameras
Sameer Ansari (Google)*; Neal Wadhwa (Google); Rahul Garg (Google); Jiawen Chen (Google)
4:00 pm - Poster & Demo 1
6:00 pm - Reception at ape cucina naturale (An block 1F)

Thursday May-16

9:15 am - 10:35 am
Session 4: 3D Photography
Session Chair: Achuta Kadambi (UCLA)
PhaseCam3D — Learning Phase Masks for Passive Single View Depth Estimation
Yicheng Wu (Rice University)*; Vivek Boominathan (Rice University); Huaijin Chen (Rice University); Aswin Sankaranarayanan (Carnegie Mellon University); Ashok Veeraraghavan (Rice University)
Episcan360: Active Epipolar Imaging for Live Omnidirectional Stereo
Wil Hamilton (Carnegie Mellon University); James Bourne (Carnegie Mellon University); Jeff McMahill (Carnegie Mellon University); Joan Campoy (Carnegie Mellon University); Herman Herman (Carnegie Mellon University); Srinivasa G Narasimhan (Carnegie Mellon University)*
Spatio-temporal Phase Disambiguation in Depth Sensing
Takahiro Kushida (Nara Institute of Science and Technology)*; Kenichiro Tanaka (NAIST); Takahito Aoto (University of Tsukuba); Takuya Funatomi (Nara Institute of Science and Technology); Yasuhiro Mukaigawa (NAIST)
Depth From Texture Integration
Mark Sheinin (Technion)*; Yoav Schechner (Technion)
10:35 am - 11:00 am Coffee break
11:00 am - 12:00 pm
Keynote: Recognizing and Understanding Visual Appearance from Micro Scale to Global Scale
Kavita Bala (Cornell University)

Session Chair: Suren Jayasuriya (Arizona State University)
12:00 pm - 1:30 pm Lunch
1:30 pm - 2:30 pm
Session 5: Imaging for Microscopy
Session Chair: Adithya Kumar Pediredla (Rice University)
Miniature 3D Microscope and Reflectometer for Space Exploration
Gustav M Pettersson (KTH Royal Institute of Technology)*; Michael Dille (NASA Ames Research Center); Sara Abrahamsson (University of California, Santa Cruz); Uland Wong (NASA Ames Research Center)
Data-Driven Design for Fourier Ptychographic Microscopy
Michael Kellman (UC Berkeley)*; Emrah Bostan (University of California, Berkeley); Michael Chen (UC Berkeley); Laura Waller (UC Berkeley)
Invited Talk: Image-activated cell sorting
Keisuke Goda (Department of Chemistry, University of Tokyo; Institute of Technological Sciences, Wuhan University)
2:30 pm - 3:00 pm Coffee break
3:00 pm - 4:00 pm
Keynote: Computational Illusion: Various Behaviors of Impossible Objects
Kokichi Sugihara (Meiji University)

Session Chair: Hajime Nagahara (Osaka University)
4:00 pm - Poster & Demo 2
7:00 pm - Banquet at Gonpachi Shibuya (E. Space Tower 14F)

Friday May-17

9:30 am - 10:30 am
Session 6: Reconstruction beyond 3D
Session Chair: Oliver S. Cossairt (Northwestern University)
4D X-Ray CT Reconstruction using Multi-Slice Fusion
Soumendu Majee (Purdue University)*; Thilo Balke (Purdue University); Craig Kemp (Eli Lilly and Company); Gregery T Buzzard (Purdue University); Charles Bouman (Purdue University)
Mirror Surface Reconstruction Using Polarization Field
Jie Lu (ShanghaiTech University); Yu Ji (Plex-VR); Jingyi Yu (ShanghaiTech University); Jinwei Ye (Louisiana State University)*
Slope Disparity Gating using a Synchronized Projector-Camera System
Tomoki Ueda (Nara Institute of Science and Technology)*; Hiroyuki Kubo (Nara Institute of Science and Technology); Suren Jayasuriya (Arizona State University); Takuya Funatomi (Nara Institute of Science and Technology); Yasuhiro Mukaigawa (NAIST)
10:30 am - 11:00 am Coffee break
11:00 am - 12:00 pm
Keynote: Deep Image Processing
Vladlen Koltun (Intel)

Session Chair: Sanjeev Koppal (University of Florida)
12:00 pm - 1:30 pm Lunch
1:30 pm - 2:50 pm
Session 7: High-Performance / High Dynamic Range Photography
Session Chair: Mark Sheinin (Technion)
Invited Talk: Spectral Signature Analysis of Real Scenes
Imari Sato (National Institute of Informatics)
Stereoscopic Dark Flash for Low-light Photography
Jian Wang (Carnegie Mellon University); Tianfan Xue (Google); Jonathan Barron (-); Jiawen Chen (Google)*
A fast, scalable, and reliable deghosting method for extreme exposure fusion
Ram K Prabhakar (Indian Institute of Science)*; Rajat Arora (IISc); Adhitya Swaminathan (Indian Institute of Technology (Banaras Hindu University)); Kunal Singh (IITR); Venkatesh Babu RADHAKRISHNAN (Indian Institute of Science)
Invited Talk: Sensor x DNN
Tomoo Mitsunaga (Sony Semiconductor Solutions Corporation)
2:50 pm - 3:20 pm Concluding remarks and awards ceremony

Demo

Day 1 & Day 2 (15th-16th May)
1. Joint optimization for compressive video sensing and reconstruction under hardware constraints
Michitaka Yoshida, Akihiko Torii, Masatoshi Okutomi, Kenta Endo, Yukinobu Sugiyama, Rin-ichiro Taniguchi, and Hajime Nagahara
2. Local Zoo Animal Stamp Rally Application using Image Recognition
Kaoru Kimura, Rabarison M. Felicia, Kohei Kawanaka, Hiroki Tanioka, and Tetsushi Ueta
3. Reproduction of Takigi Noh Based on Anisotropic Reflection Rendering of Noh Costume with Dynamic Illumination
Shiro Tanaka and Hiromi T. Tanaka
4. Super Field-of-View Lensless Camera by Coded Image Sensors
Tomoya Nakamura, Keiichiro Kagawa, Shiho Torashima, and Masahiro Yamaguchi
5. FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing
Lu Fang and Lei Han
6. Slope Disparity Gating using a Synchronized Projector-Camera System
Tomoki Ueda, Hiroyuki Kubo, Suren Jayasuriya, Takuya Funatomi, and Yasuhiro Mukaigawa
7. Episcan360: Active Epipolar Imaging for Live Omnidirectional Stereo
Wil Hamilton, James Bourne, Jeff McMahill, Joan Campoy, Herman Herman, and Srinivasa G Narasimhan
8. Cinematic Virtual Reality with Head-Motion Parallax
Jayant Thatte and Bernd Girod
9. Coded Two-Bucket Cameras for Computer Vision
Mian Wei, Zhengfan Xia, Navid Sarhangnejad, Gairik Dutta, Nikita Gusev, Rahul Gulve, Roman Genov, and Kiriakos N. Kutulakos
10. A single-photon camera with 97 kfps time-gated 24 Gphotons/s 512 x 512 SPAD pixels for computational imaging and time-of-flight vision
Kazuhiro Morimoto, Arin Can Ulku, Michel Antolović, Claudio Bruschini, and Edoardo Charbon

Poster

1. RESTORATION OF FOGGY AND MOTION-BLURRED ROAD SCENES
Thangamani Veeramani, Ambasamudram N. Rajagopalan, and Guna Seetharaman
2. Light-field camera based on single-pixel camera type acquisitions
T. Gregory, Matthew P. Edgar, Graham M. Gibson, M. J. Padgett, and P.-A. Moreau
3. Improving Animal Recognition Accuracy using Deep Learning
Kohei Kawanaka, Rabarison M. Felicia, Hiroki Tanioka, Masahiko Sano, Kenji Matsuura, and Tetsushi Ueta
4. Player Tracking in Sports Video using 360 degree camera
Hiroki Tanioka, Kenji Matsuura, Stephen Githinji KARUNGARU, Naka Gotoda, Tomohiro Kai, Tomohito Wada, and Yohei Takai
5. A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras
Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, and Jingyi Yu
6. Improved Illumination Correction that Preserves Medium Sized Objects
Anders Hast and Andrea Marchetti
7. Dense Light Field Reconstruction from Sparse Sampling Using Residual Network
Mantang Guo, Hao Zhu, Guoqing Zhou, and Qing Wang
8. Full View Optical Flow Estimation Leveraged from Light Field Superpixel
Hao Zhu, Xiaoming Sun, Qi Zhang, Qing Wang, Antonio Robles-Kelly, Hongdong Li, and Shaodi You
9. Functional CMOS Image Sensor with flexible integration time setting among adjacent pixels
Kenta Endo, Yukinobu Sugiyama, Michitaka Yoshida, Hajime Nagahara, Kento Kaneta, Keisuke Uchida, Yasuhito Yoneta, and Masaharu Muramatsu
10. Focus Manipulation Detection via Photometric Histogram Analysis
Can Chen, Scott McCloskey, and Jingyi Yu
11. Learning-Based Framework for Capturing Light Fields through a Coded Aperture Camera
Yasutaka Inagaki, Keita Takahashi, Toshiaki Fujii, and Hajime Nagahara
12. A Dataset for Benchmarking Time-Resolved Non-Line-of-Sight Imaging
Miguel Galindo, Julio Marco, Matthew O’Toole, Gordon Wetzstein, Diego Gutierrez, and Adrian Jarabo
13. Skin-based identification from multispectral image data using CNNs
Takeshi Uemori, Atsushi Ito, Yusuke Moriuchi, Alexander Gatto, and Jun Murayama
14. A Bio-inspired Metalens Depth Sensor
Qi Guo, Zhujun Shi, Yao-Wei Huang, Emma Alexander, Federico Capasso, and Todd Zickler
15. A method for passive, monocular distance measurement of virtual image in VR/AR
Lihui Wang, Yunpu Hu, Hongjin Xu, and Masatoshi Ishikawa
16. Polarimetric Camera Calibration Using an LCD Monitor
Zhixiang Wang, Yinqiang Zheng, and Yung-Yu Chuang
17. Ellipsoidal path connections for time-gated rendering
Adithya Pediredla, Ashok Veeraraghavan, and Ioannis Gkioulekas
18. Moving Frames Based 3D Feature Extraction of RGB-D Data
Chang Liu, Jun Qiu, and Lina Wu
19. Diffusion Equation Based Parameterization of Light Field and Its Computational Imaging Model
Jun Qiu and Chang Liu
20. Ray-Space Projection Model for Light Field Camera
Qi Zhang, Jinbo Ling, Qing Wang, and Jingyi Yu
21. Refraction-free Underwater Active One-shot Scan using Light Field Camera
Kazuto Ichimaru and Hiroshi Kawasaki
22. Speckle Based Pose Estimation Considering Depth Information for 3D Measurement of Texture-less Environment
Hiroshi Higuchi, Hiromitsu Fujii, Atsushi Taniguchi, Masahiro Watanabe, Atsushi Yamashita, and Hajime Asama
23. Acceleration of 3D Measurement of Large Structures with Ring Laser and Camera via FFT-based Template Matching
Momoko Kawata, Hiroshi Higuchi, Hiromitsu Fujii, Atsushi Taniguchi, Masahiro Watanabe, Atsushi Yamashita, and Hajime Asama
24. Unifying the refocusing algorithms and parameterizations for traditional and focused plenoptic cameras
Charlotte Herzog, Ombeline de La Rochefoucauld, Fabrice Harms, Philippe Zeitoun, and Xavier Granier
25. DeepToF: Off-the-Shelf Real-Time Correction of Multipath Interference in Time-of-Flight Imaging
Julio Marco, Quercus Hernandez, Adolfo Muñoz, Yue Dong, Adrian Jarabo, Min H. Kim, Xin Tong, and Diego Gutierrez
26. GigaVision: When Gigapixel Videography Meets Computer Vision
Lu Fang, Xiaoyun Yuan, and Qionghai Dai
27. Light Field Messaging with Deep Photographic Steganography
Eric Wengrowski and Kristin Dana
28. Non-blind Image Restoration Based on Convolutional Neural Network
Kazutaka Uchida, Masayuki Tanaka, and Masatoshi Okutomi
29. Gradient-Based Low-Light Image Enhancement
Masayuki Tanaka, Takashi Shibata, and Masatoshi Okutomi
30. PhaseCam3D - Learning Phase Masks for Passive Single View Depth Estimation
Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, and Ashok Veeraraghavan
31. Hyperspectral Imaging with Random Printed Mask
Yuanyuan Zhao, Hui Guo, Zhan Ma, Xuemei Hu, Xun Cao
32. Wood Pixels for Near-Photorealistic Parquetry
Julian Iseringhausen, Michael Weinmann, Weizhen Huang and Matthias Hullin
33. CloudCT: Spaceborne scattering tomography by a large formation of small satellites for improving climate predictions
Yoav Y. Schechner, Ilan Koren, and Klaus Schilling
34. Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification
Julie Chang and Gordon Wetzstein
35. Reflective and Fluorescent Separation under Narrow-Band Illumination
Koji Koyamatsu, Daichi Hidaka, Takahiro Okabe, and Hendrik Lensch
36. Beyond Volumetric Albedo --- A Surface Optimization Framework for Non-Line-of-Sight Imaging
Chia-Yin Tsai, Aswin C. Sankaranarayanan, and Ioannis Gkioulekas
37. Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-based Modeling
Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, and Yasuyuki Matsushita
38. Lunar surface image restoration using U-Net based deep neural networks
Hiya Roy, Subhajit Chaudhury, Toshihiko Yamasaki, Danielle DeLatte, Makiko Ohtake, and Tatsuaki Hashimoto
39. A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction
Shumian Xin, Sotiris Nousias, Kyros Kutulakos, Aswin Sankaranarayanan, Srinivasa Narasimhan, and Ioannis Gkioulekas
40. Learning to Separate Multiple Illuminants in a Single Image
Zhuo Hui, Ayan Chakrabarti, Kalyan Sunkavalli, and Aswin C. Sankaranarayanan
42. Adaptive Lighting for Data Driven Non-line-of-sight 3D Localisation
Sreenithy Chandran and Suren Jayasuriya
43. Directionally Controlled Time-of-Flight Ranging for Mobile Sensing Platforms
Francesco Pittaluga, Zaid Tasneem, Ayan Chakrabarti, and Sanjeev J. Koppal
44. 360 Panorama Synthesis from Sparse Set of Images with Unknown FOV
Julius Surya Sumantri and In Kyu Park
45. Classification and Restoration of Compositely Degraded Images using Deep Learning
Jung Un Yun, Hajime Nagahara, and In Kyu Park
46. Programmable spectrometry -- per-pixel material classification using learned filters
Vishwanath Saragadam and Aswin C. Sankaranarayanan
47. SNLOS: Non-line-of-sight Scanning through Temporal Focusing
Adithya Pediredla, Akshat Dave, and Ashok Veeraraghavan
48. STORM: Super-resolving Transients by OveRsampled Measurements
Ankit Raghuram, Adithya Pediredla, Ioannis Gkioulekas, Srinivasa Narasimhan, and Ashok Veeraraghavan
49. Thermal Non-Line-of-Sight Imaging
Tomohiro Maeda, Yiqin Wang, Ramesh Raskar, Achuta Kadambi
50. Spatio-temporal Phase Disambiguation in Depth Sensing
Takahiro Kushida, Kenichiro Tanaka, Takahito Aoto, Takuya Funatomi, Yasuhiro Mukaigawa
51. Depth From Texture Integration
Mark Sheinin, Yoav Schechner
52. Mirror Surface Reconstruction Using Polarization Field
Jie Lu, Yu Ji, Jingyi Yu, Jinwei Ye
Day 1 (15th May)
1. RESTORATION OF FOGGY AND MOTION-BLURRED ROAD SCENES
Thangamani Veeramani, Ambasamudram N. Rajagopalan, and Guna Seetharaman
3. Improving Animal Recognition Accuracy using Deep Learning
Kohei Kawanaka, Rabarison M. Felicia, Hiroki Tanioka, Masahiko Sano, Kenji Matsuura, and Tetsushi Ueta
5. A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras
Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, and Jingyi Yu
7. Dense Light Field Reconstruction from Sparse Sampling Using Residual Network
Mantang Guo, Hao Zhu, Guoqing Zhou, and Qing Wang
9. Functional CMOS Image Sensor with flexible integration time setting among adjacent pixels
Kenta Endo, Yukinobu Sugiyama, Michitaka Yoshida, Hajime Nagahara, Kento Kaneta, Keisuke Uchida, Yasuhito Yoneta, and Masaharu Muramatsu
11. Learning-Based Framework for Capturing Light Fields through a Coded Aperture Camera
Yasutaka Inagaki, Keita Takahashi, Toshiaki Fujii, and Hajime Nagahara
13. Skin-based identification from multispectral image data using CNNs
Takeshi Uemori, Atsushi Ito, Yusuke Moriuchi, Alexander Gatto, and Jun Murayama
15. A method for passive, monocular distance measurement of virtual image in VR/AR
Lihui Wang, Yunpu Hu, Hongjin Xu, and Masatoshi Ishikawa
17. Ellipsoidal path connections for time-gated rendering
Adithya Pediredla, Ashok Veeraraghavan, and Ioannis Gkioulekas
19. Diffusion Equation Based Parameterization of Light Field and Its Computational Imaging Model
Jun Qiu and Chang Liu
21. Refraction-free Underwater Active One-shot Scan using Light Field Camera
Kazuto Ichimaru and Hiroshi Kawasaki
23. Acceleration of 3D Measurement of Large Structures with Ring Laser and Camera via FFT-based Template Matching
Momoko Kawata, Hiroshi Higuchi, Hiromitsu Fujii, Atsushi Taniguchi, Masahiro Watanabe, Atsushi Yamashita, and Hajime Asama
25. DeepToF: Off-the-Shelf Real-Time Correction of Multipath Interference in Time-of-Flight Imaging
Julio Marco, Quercus Hernandez, Adolfo Muñoz, Yue Dong, Adrian Jarabo, Min H. Kim, Xin Tong, and Diego Gutierrez
27. Light Field Messaging with Deep Photographic Steganography
Eric Wengrowski and Kristin Dana
29. Gradient-Based Low-Light Image Enhancement
Masayuki Tanaka, Takashi Shibata, and Masatoshi Okutomi
31. Hyperspectral Imaging with Random Printed Mask
Yuanyuan Zhao, Hui Guo, Zhan Ma, Xuemei Hu, Xun Cao
33. CloudCT: Spaceborne scattering tomography by a large formation of small satellites for improving climate predictions
Yoav Y. Schechner, Ilan Koren, and Klaus Schilling
37. Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-based Modeling
Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, and Yasuyuki Matsushita
39. A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction
Shumian Xin, Sotiris Nousias, Kyros Kutulakos, Aswin Sankaranarayanan, Srinivasa Narasimhan, and Ioannis Gkioulekas
43. Directionally Controlled Time-of-Flight Ranging for Mobile Sensing Platforms
Francesco Pittaluga, Zaid Tasneem, Ayan Chakrabarti, and Sanjeev J. Koppal
45. Classification and Restoration of Compositely Degraded Images using Deep Learning
Jung Un Yun, Hajime Nagahara, and In Kyu Park
47. SNLOS: Non-line-of-sight Scanning through Temporal Focusing
Adithya Pediredla, Akshat Dave, and Ashok Veeraraghavan
48. STORM: Super-resolving Transients by OveRsampled Measurements
Ankit Raghuram, Adithya Pediredla, Ioannis Gkioulekas, Srinivasa Narasimhan, and Ashok Veeraraghavan
49. Thermal Non-Line-of-Sight Imaging
Tomohiro Maeda, Yiqin Wang, Ramesh Raskar, Achuta Kadambi
Day 2 (16th May)
2. Light-field camera based on single-pixel camera type acquisitions
T. Gregory, Matthew P. Edgar, Graham M. Gibson, M. J. Padgett, and P.-A. Moreau
4. Player Tracking in Sports Video using 360 degree camera
Hiroki Tanioka, Kenji Matsuura, Stephen Githinji KARUNGARU, Naka Gotoda, Tomohiro Kai, Tomohito Wada, and Yohei Takai
6. Improved Illumination Correction that Preserves Medium Sized Objects
Anders Hast and Andrea Marchetti
8. Full View Optical Flow Estimation Leveraged from Light Field Superpixel
Hao Zhu, Xiaoming Sun, Qi Zhang, Qing Wang, Antonio Robles-Kelly, Hongdong Li, and Shaodi You
10. Focus Manipulation Detection via Photometric Histogram Analysis
Can Chen, Scott McCloskey, and Jingyi Yu
12. A Dataset for Benchmarking Time-Resolved Non-Line-of-Sight Imaging
Miguel Galindo, Julio Marco, Matthew O’Toole, Gordon Wetzstein, Diego Gutierrez, and Adrian Jarabo
14. A Bio-inspired Metalens Depth Sensor
Qi Guo, Zhujun Shi, Yao-Wei Huang, Emma Alexander, Federico Capasso, and Todd Zickler
16. Polarimetric Camera Calibration Using an LCD Monitor
Zhixiang Wang, Yinqiang Zheng, and Yung-Yu Chuang
18. Moving Frames Based 3D Feature Extraction of RGB-D Data
Chang Liu, Jun Qiu, and Lina Wu
20. Ray-Space Projection Model for Light Field Camera
Qi Zhang, Jinbo Ling, Qing Wang, and Jingyi Yu
22. Speckle Based Pose Estimation Considering Depth Information for 3D Measurement of Texture-less Environment
Hiroshi Higuchi, Hiromitsu Fujii, Atsushi Taniguchi, Masahiro Watanabe, Atsushi Yamashita, and Hajime Asama
24. Unifying the refocusing algorithms and parameterizations for traditional and focused plenoptic cameras
Charlotte Herzog, Ombeline de La Rochefoucauld, Fabrice Harms, Philippe Zeitoun, and Xavier Granier
26. GigaVision: When Gigapixel Videography Meets Computer Vision
Lu Fang, Xiaoyun Yuan, and Qionghai Dai
28. Non-blind Image Restoration Based on Convolutional Neural Network
Kazutaka Uchida, Masayuki Tanaka, and Masatoshi Okutomi
30. PhaseCam3D - Learning Phase Masks for Passive Single View Depth Estimation
Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, and Ashok Veeraraghavan
32. Wood Pixels for Near-Photorealistic Parquetry
Julian Iseringhausen, Michael Weinmann, Weizhen Huang and Matthias Hullin
34. Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification
Julie Chang and Gordon Wetzstein
35. Reflective and Fluorescent Separation under Narrow-Band Illumination
Koji Koyamatsu, Daichi Hidaka, Takahiro Okabe, and Hendrik Lensch
36. Beyond Volumetric Albedo --- A Surface Optimization Framework for Non-Line-of-Sight Imaging
Chia-Yin Tsai, Aswin C. Sankaranarayanan, and Ioannis Gkioulekas
38. Lunar surface image restoration using U-Net based deep neural networks
Hiya Roy, Subhajit Chaudhury, Toshihiko Yamasaki, Danielle DeLatte, Makiko Ohtake, and Tatsuaki Hashimoto
40. Learning to Separate Multiple Illuminants in a Single Image
Zhuo Hui, Ayan Chakrabarti, Kalyan Sunkavalli, and Aswin C. Sankaranarayanan
42. Adaptive Lighting for Data Driven Non-line-of-sight 3D Localisation
Sreenithy Chandran and Suren Jayasuriya
44. 360 Panorama Synthesis from Sparse Set of Images with Unknown FOV
Julius Surya Sumantri and In Kyu Park
46. Programmable spectrometry -- per-pixel material classification using learned filters
Vishwanath Saragadam and Aswin C. Sankaranarayanan
50. Spatio-temporal Phase Disambiguation in Depth Sensing
Takahiro Kushida, Kenichiro Tanaka, Takahito Aoto, Takuya Funatomi, Yasuhiro Mukaigawa
51. Depth From Texture Integration
Mark Sheinin, Yoav Schechner
52. Mirror Surface Reconstruction Using Polarization Field
Jie Lu, Yu Ji, Jingyi Yu, Jinwei Ye

Keynotes

Recognizing and Understanding Visual Appearance from Micro Scale to Global Scale
Kavita Bala (Cornell University)
Abstract: TBA
Speaker bio: Kavita Bala is the Chair of the Computer Science Department at Cornell University. She received her S.M. and Ph.D. from the Massachusetts Institute of Technology (MIT), and her B.Tech. from the Indian Institute of Technology (IIT, Bombay). She co-founded GrokStyle (acquired by Facebook), and is a faculty Fellow with the Atkinson Center for a Sustainable Future.
Bala specializes in computer vision and computer graphics, leading research in recognition and visual search; material modeling and acquisition using physics and learning; and material and lighting perception. Bala's work on scalable rendering, Lightcuts, is the core production rendering engine in Autodesk's cloud renderer, and her instance recognition research was the core technology of GrokStyle's visual search engine. Bala has co-authored the graduate-level textbook "Advanced Global Illumination".
Bala has served as the Editor-in-Chief of Transactions on Graphics (TOG), is on the Papers Advisory Group for SIGGRAPH, and chaired SIGGRAPH Asia 2011.

Computational Illusion: Various Behaviors of Impossible Objects
Kokichi Sugihara (Meiji University)
Impossible objects originally meant imaginary 3D structures that could not exist as actual physical objects, but mathematical study has revealed that they are not necessarily impossible. The meaning of the term has therefore changed: "impossible objects" are now actual 3D structures whose behaviors appear to be impossible due to visual illusion. In this talk, various classes of impossible objects are shown. They include "impossible motion objects", which create apparently inconsistent motion; "ambiguous objects", whose appearances change drastically in a mirror; "partly invisible objects", part of which disappears in a mirror; and "deformable objects", which appear to deform as we change the viewpoint. They may give us new insights into our visual systems.
Speaker bio: Kokichi Sugihara received his Doctor of Engineering degree from the University of Tokyo in 1980, and worked at the Electrotechnical Laboratory of Japan's Ministry of International Trade and Industry, Nagoya University, and the University of Tokyo before moving to Meiji University in 2009. His research area is mathematical engineering. In his research on computer vision, he found a method for constructing 3D objects from "impossible figures", and extended his research interest to human vision and optical illusion. He is also active as an illusion artist, creating various impossible objects. He won first prize three times (2010, 2013 and 2018) and second prize twice (2015 and 2016) in the Best Illusion of the Year Contest. He is the author of "Machine Interpretation of Line Drawings" (MIT Press) and a coauthor of "Spatial Tessellations: Concepts and Applications of Voronoi Diagrams" (Wiley and Sons).

Deep Image Processing
Vladlen Koltun (Intel)
Deep learning initially appeared relevant primarily to higher-level signal analysis, such as image recognition or object detection. But recent work has clarified that image processing is not immune and may benefit substantially from the ability to reliably optimize multi-layer function approximators. I will review a line of work that investigates applications of deep networks to image processing. First, I will discuss the remarkable ability of convolutional networks to fit a variety of image processing operators. Next, I will present approaches that replace much of the traditional image processing pipeline with a deep network, with substantial benefits for applications such as low-light imaging and computational zoom. One take-away is that deep learning is a surprisingly exciting and consequential development for image processing.
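As a toy illustration of the first point (convolutional networks fitting image-processing operators), here is a minimal sketch, assuming PyTorch, that trains a small CNN to approximate a fixed Gaussian blur. It is our own example, not code from the talk:

```python
# Minimal sketch (assumes PyTorch): fit a tiny CNN to a fixed
# image-processing operator. A 5x5 Gaussian blur stands in for the
# operators discussed in the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_blur(x: torch.Tensor) -> torch.Tensor:
    """The 'traditional' operator the network learns to approximate."""
    k = torch.tensor([1., 4., 6., 4., 1.])
    k = torch.outer(k, k)
    k = (k / k.sum()).view(1, 1, 5, 5)
    return F.conv2d(x, k, padding=2)

# A small fully convolutional network (1-channel image in, 1-channel image out).
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    x = torch.rand(8, 1, 64, 64)                 # random training "images"
    loss = F.mse_loss(net(x), gaussian_blur(x))  # match the operator's output
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step}: MSE {loss.item():.6f}")
```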
Speaker bio: Vladlen Koltun is a Senior Principal Researcher and the director of the Intelligent Systems Lab at Intel. His lab conducts high-impact basic research on intelligent systems. Vladlen received a PhD in 2002 for new results in theoretical computational geometry, spent three years at UC Berkeley as a postdoc in the theory group, and joined the Stanford Computer Science faculty in 2005 as a theoretician. He joined Intel in 2015 to establish a new lab devoted to basic research.

Invited Speakers

White Light Imaging with Diffractive Optical Elements
Wolfgang Heidrich (KAUST)
Co-designing optics and computational methods provides access to new regions of the optical design space, promising improved imaging performance and increased flexibility. Computational imaging with diffractive optics in particular shows great promise for lighter, more compact, flexible, and powerful imaging systems. In this talk I will outline some recent advances that promise to make diffractive optics competitive for full-color imaging with small and lightweight form factors, using modern optimization and machine learning techniques for joint optical design and computational reconstruction.
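To make the idea of joint optical design and computational reconstruction concrete, the following is a minimal sketch under our own assumptions (PyTorch; the diffractive element is reduced to a learnable point-spread function), not the speaker's actual pipeline:

```python
# Minimal sketch (assumes PyTorch): jointly optimize a stand-in "optical
# element" (a learnable PSF) and a CNN reconstruction, end to end.
import torch
import torch.nn as nn
import torch.nn.functional as F

psf_logits = nn.Parameter(torch.zeros(1, 1, 9, 9))  # parameterizes the PSF
recon = nn.Sequential(                               # computational reconstruction
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam([psf_logits, *recon.parameters()], lr=1e-3)

for step in range(300):
    scene = torch.rand(4, 1, 64, 64)                 # random training scenes
    # Softmax keeps the PSF nonnegative and energy-conserving.
    psf = F.softmax(psf_logits.view(1, -1), dim=1).view(1, 1, 9, 9)
    sensor = F.conv2d(scene, psf, padding=4)         # simulated measurement
    loss = F.mse_loss(recon(sensor), scene)          # end-to-end objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the simulated measurement is differentiable in the PSF parameters, the same reconstruction loss shapes both the "optics" and the algorithm, which is the essence of the co-design the abstract describes.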
Speaker bio: Wolfgang Heidrich is a Professor of Computer Science and the Director of the Visual Computing Center at King Abdullah University of Science and Technology (KAUST). He accepted this position in 2014, after 13 years as a faculty member at the University of British Columbia. Dr. Heidrich received his PhD from the University of Erlangen in 1999, and then worked as a Research Associate in the Computer Graphics Group of the Max-Planck-Institute for Computer Science in Saarbrucken, Germany, before joining UBC in 2000. Dr. Heidrich's research interests lie at the intersection of imaging, optics, computer vision, computer graphics, and inverse problems. His more recent interest is in computational imaging, focusing on hardware-software co-design of the next generation of imaging systems, with applications such as high-dynamic-range imaging, compact computational cameras, and hyperspectral cameras, to name just a few. Dr. Heidrich's work on high-dynamic-range displays served as the basis for the technology behind Brightside Technologies, which was acquired by Dolby in 2007. Dr. Heidrich has served on numerous program committees for top-tier conferences such as Siggraph, Siggraph Asia, Eurographics, and EGSR, and in 2016 he chaired the papers program for both Siggraph Asia and the International Conference on Computational Photography (ICCP). Dr. Heidrich is the recipient of a 2014 Humboldt Research Award.

Image-activated cell sorting
Keisuke Goda (Department of Chemistry, University of Tokyo; Institute of Technological Sciences, Wuhan University)
A fundamental challenge of biology is to understand the vast heterogeneity of cells, particularly how the spatial architectures of cells are linked to their physiological functions. Unfortunately, conventional technologies such as fluorescence-activated cell sorting are limited in uncovering these relations. In this talk, I introduce our machine intelligence technology known as "Intelligent Image-Activated Cell Sorting" [Cell 175, 266 (2018)], which builds on a radically different architecture to realize real-time image-based intelligent cell sorting at an unprecedented rate. This technology integrates high-throughput cell microscopy, focusing, sorting, and deep learning on a hybrid software-hardware data-management infrastructure, enabling real-time automated operation for data acquisition, data processing, intelligent decision-making, and actuation. I also show the broad utility of the technology for real-time image-activated sorting of microalgal and blood cells, based on intracellular protein localization and cell-cell interaction, from large heterogeneous populations for studying photosynthesis and atherothrombosis, respectively. The technology is highly versatile and is expected to enable machine-based scientific discovery in biological, pharmaceutical, and medical sciences.
Speaker bio: Keisuke Goda is currently a professor of physical chemistry in the Department of Chemistry at the University of Tokyo and an adjunct professor in the Institute of Technological Sciences at Wuhan University. His research focuses on the development of discovery-enabling technologies based on molecular imaging and spectroscopy together with microfluidics and computational analytics to push the frontier of science. He obtained a B.A. degree summa cum laude from the University of California, Berkeley in 2001 and a Ph.D. degree from the Massachusetts Institute of Technology (MIT) in 2007, both in physics. At MIT, he worked on the development of gravitational wave detectors in the LIGO group, which led to the Nobel Prize in Physics (2017). After several years of research at Caltech and UCLA, he joined the University of Tokyo as a professor. He has co-launched two tech startups. He has received numerous prizes and awards including Eiichi Takano Award, WIRED Audi Innovation Award, MEXT Young Scientist Award, JSPS Prize, Japan Academy Medal, and Yomiuri Gold Medal. He has published more than 250 papers and holds more than 20 patents. His work has been featured by the media including Nature, Science, Cell, BBC, NHK, Wired Magazine, TIME Magazine, and Scientific American. He served as Chemistry Department Chair at the University of Tokyo (2016 - 2017), Co-Chair of the IEEE Photonics Society's Los Angeles Chapter (2007 – 2011), Founding Chair of Southern California Japanese Scholars Forum (2007 – 2012), Conference Co-Chair of SPIE Photonics Asia (2014) and Symposium Chair of Optics & Photonics Japan (2014 – 2015) and currently serves as Conference Chair of SPIE Photonics West BIOS (2015 – 2019). He also serves as an Associate Editor of APL Photonics (American Institute of Physics) and a Guest Editor of Cytometry Part A (Wiley). For his global leadership and contribution to the photonics community worldwide, he was selected by World Economic Forum as a Young Global Leader (2014) and by AERA as one of the top 100 leaders in Japan.

Spectral Signature Analysis of Real Scenes
Imari Sato (National Institute of Informatics)
The spectral reflectance of objects provides innate information about material properties that has proven useful in applications such as classification, synthetic relighting, and medical imaging, to name a few. In recent years, fluorescence analysis of scenes has received attention. What makes fluorescence different from ordinary reflection is the transfer of energy from one wavelength to another. It is well known that the fluorescence excitation-emission characteristics of many organic objects can serve as a kind of "fingerprint" for detecting the presence of specific substances in classification tasks. In this talk, I will present a coded illumination approach whereby light spectra are learned such that key visual fluorescent features can be easily seen for material classification. I will also introduce scene analysis based on hyperspectral reflectance, including an analytical spectral appearance model of wet surfaces for recovering the original surface color and the degree of wetness from a single observation, and a novel depth recovery method based on the absorption of near-infrared light in water.
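For background on the last point, attenuation of near-infrared light along a water path is commonly modeled by the Beer-Lambert law, which is what makes depth recoverable from observed intensity. The following is a generic sketch of that relation, not necessarily the exact model used in the talk:

```latex
% Beer-Lambert attenuation along a water path of length d, where
% I_0(\lambda) is the radiance before absorption and \alpha(\lambda)
% is the absorption coefficient of water at wavelength \lambda:
I(\lambda) = I_0(\lambda)\, e^{-\alpha(\lambda)\, d}
\qquad\Longrightarrow\qquad
d = \frac{1}{\alpha(\lambda)} \ln\frac{I_0(\lambda)}{I(\lambda)}
```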
Speaker bio: Imari Sato received her BS degree in policy management from Keio University in 1994. After studying at the Robotics Institute of Carnegie Mellon University as a visiting scholar, she received her MS and Ph.D. degrees in Interdisciplinary Information Studies from the University of Tokyo in 2002 and 2005, respectively. In 2005, she joined the National Institute of Informatics, where she is currently a professor. Concurrently, she serves as a visiting professor at the Tokyo Institute of Technology and a professor at the University of Tokyo. Her primary research interests are in the fields of computer vision (physics-based vision, image-based modeling) and computer graphics (image-based rendering, augmented reality). She has received various research awards, including the Young Scientists' Prize of the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology (2009), and the Microsoft Research Japan New Faculty Award (2011).

Sensor x DNN
Tomoo Mitsunaga (Sony Semiconductor Solutions Corporation)
The recent dramatic evolution of image understanding and machine vision technologies has been driven by deep neural networks (DNNs) and vast computing power. This evolution is now extending to the edge of the information network, where huge numbers of sensors operate alongside sensor signal processing. The presenter introduces a survey of recent DNN-based approaches to sensor signal processing and summarizes what we can expect from this change.
Speaker bio: Tomoo Mitsunaga received his B.E. and M.E. degrees in biophysical engineering from Osaka University, Japan, in 1989 and 1991, respectively. He has been working for Sony Corporation since 1991. He studied computer vision and computational photography as a visiting scholar with Prof. Shree Nayar at Columbia University from 1997 to 1999. Since 2016 he has served as General Manager, leading a development team working on signal processing for image sensors.

Submission timeline

Dec. 10, 2018 [Extended to Dec. 17, 2018 (23:59 PST)] - Paper submission deadline
Dec. 17, 2018 (23:59 PST) - Supplemental material due
Feb. 11-15, 2019 - Rebuttal period
Feb. 26, 2019 - Paper decisions
Mar. 29, 2019 [Extended to Apr. 5, 2019] - Camera-ready versions due

Posters/Demos timeline

Mar. 25, 2019 [Extended to Mar. 29, 2019 (23:59 PST)] - Posters/demos deadline
Apr. 5, 2019 - Posters/demos decisions

Registration

April 22 (23:59 JST) [Extended] - Advance registration ends
May 17 - Late registration ends
May 15-17 - ICCP 2019

Registration

Categories

All fees include 8% Japanese Consumption Tax (JCT).

Registration type      [Extended] Before April 22 (23:59 JST)    After April 22 (23:59 JST)
IEEE Student Member    15,000 JPY                                 21,000 JPY
IEEE Member            25,000 JPY                                 35,000 JPY
Student Non-Member     21,000 JPY                                 27,000 JPY
Non-Member             35,000 JPY                                 45,000 JPY
IEEE Life Member       15,000 JPY                                 15,000 JPY
Note: Please register online. On-site registration is not available.

Refund policy:
No refunds. We are happy to accommodate changes in the name of the registrant.

Submission

Paper submission

(The submission deadline has passed.)

IEEE International Conference on Computational Photography (ICCP 2019) seeks high quality submissions in all areas related to computational photography. The field of computational photography seeks to create new photographic and imaging functionalities and experiences that go beyond what is possible with traditional cameras and image processing tools. The IEEE International Conference on Computational Photography is organized with the vision of fostering the community of researchers, from many different disciplines, working on computational photography. We welcome all submissions that introduce new ideas to the field including, but not limited to, those in the following areas:

  • Advanced Image Processing
  • Camera Arrays and Multiple Images
  • Coded Aperture Imaging
  • Coherent Light Imaging
  • Compressive Sensing
  • Computational Displays
  • Computational Illumination
  • Computational Imaging for Microscopy
  • Computational Optics (wavefront coding, compressive optical sensing, digital holography, …)
  • High-performance Imaging (high-speed, hyper-spectral, high-dynamic range, thermal, confocal, polarization,…)
  • Imaging and Illumination Hardware
  • Imaging for Health and Biology
  • Machine Learning Techniques for Camera Design
  • Machine Learning Techniques for Image Processing
  • Mobile Imaging
  • Novel Imaging and Illumination Techniques for User Interfaces
  • Organizing and Exploiting Photon / Video Collections
  • Scientific Imaging and Videography
  • Structured Light and Time-of-flight Imaging

How to Submit

Paper format
Download the author's kit. Submissions should be full papers in the IEEE transactions format. The paper must be uploaded as a single PDF file no larger than 20 MB.
Paper length
A typical ICCP paper is 6-8 pages. This is a rough guideline, and no strict maximum length is imposed; reviewers will be instructed to weigh the contribution of a paper relative to its length.
Supplementary material
Supplementary material can also be submitted. It must be uploaded as a single zip file of at most 100 MB.
Policies
The reviewing process will be double blind. Submissions must present original unpublished work. Furthermore, work submitted to ICCP cannot be submitted to another forum (journal, conference or workshop) during the ICCP reviewing period (Dec. 2018 – Feb. 2019).


Poster/Demo submission

(The submission deadline has passed.)

We are now accepting poster and demo submissions. Whereas ICCP papers must describe original research, posters and demos give an opportunity to showcase previously published or yet-to-be-published work to a larger community. Submissions should be emailed to . More details below.

ICCP brings together researchers and practitioners from the multiple fields that computational photography intersects: computational imaging, computer graphics, computer vision, optics, art, and design. We therefore invite you to present your work to this broad audience during the ICCP posters session. Whereas ICCP papers must describe original research, the posters and demos give an opportunity to showcase previously published or yet-to-be-published work to a larger community. Specifically, we seek posters presenting:

  • Recent research on computational photography previously published in another venue. This is your chance to present your work to the full computational photography audience!
  • Late-breaking technical results and research, including, but not limited to, progress in computational algorithms, optical system design, and innovative applications.

In addition to posters, we also welcome:

  • Demos of working computational photography prototypes and tools and software platform and/or imaging instrumentation utilizing computational photography techniques, including both research and commercial systems.
  • Artwork and visual designs that relate to or utilize computational photography or video. Mediums include, but are not limited to, photographs, videos, multimedia presentations, and installations.

The list of accepted and presented posters and demos is announced on our conference website, which serves as a record of presentation.

Submissions should include one or more paragraphs describing the proposed poster/demo/artwork, as well as author names and affiliations. We strongly encourage you to submit supporting materials, such as published papers, images or other media, videos, demos, and websites describing the work. Attachments under 5 MB are accepted; otherwise, please provide a URL or use an attachment delivery service like Dropbox or Hightail. Please email your submission directly to .

Proposal submission deadline: [Extended] March 29th, 2019
Proposal acceptance notification: Apr. 5th, 2019

Presentation instruction

Oral presentation instructions:

The total duration of each talk (both regular paper and invited) is 20 minutes, including 3 minutes for Q&A. The presenter(s) should plan to speak for no more than 17 minutes; please be mindful of finishing your presentation within the designated time slot. You should bring your own laptop for your presentation. The projector will have HDMI and D-sub connectors.

We do not provide any display adapters. If you need a specific display adapter (e.g., USB-C to HDMI or D-sub), please remember to bring your own. The projector does not support high resolutions, so please set your resolution and refresh rate to standard values.

Please introduce yourself to the session chair at least 10 minutes before the start of your session, and test the laptop-projector connection at the podium.

Poster presentation instructions:

The poster format is A0 portrait. Adhesive material and/or pins will be provided for mounting the posters to the boards. If you have special requirements, please contact the ICCP 2019 Demo/Poster Chairs (demopostersiccp2019@gmail.com) as soon as possible. We will try to accommodate your requests as much as possible.

Odd poster numbers are allocated for presentation on 15th May.

Even poster numbers are allocated for presentation on 16th May.

Poster presenters can install their posters anytime prior to the poster session on the corresponding date.

Demo presentation instructions:

The demo booth dimensions are roughly 1800 mm x 1800 mm. Two 1530 mm x 890 mm (H x W) panels will be provided for each demo for posters and other materials. Adhesive material and/or pins will be provided for mounting the posters to the panels. If you have special requirements, please contact the ICCP 2019 Demo/Poster Chairs (demopostersiccp2019@gmail.com) as soon as possible. We will try to accommodate your requests as much as possible.

Demo presentations will take place on both 15th and 16th May.

Demo presenters can install their demos from 9:30 am on 15th May and may leave the installation overnight for the presentation on the 16th (the presentation hall will be locked overnight). Demos may be uninstalled on the 17th.

Team

General Chair
Yasuhiro Mukaigawa
NAIST
Program Chairs
Hajime Nagahara
Osaka University
 
Mohit Gupta
University of Wisconsin-Madison
 
Matthias Hullin
University of Bonn
Local Arrangement
Yoichi Sato
The University of Tokyo
 
Yusuke Matsui
The University of Tokyo
 
Keita Higuchi
The University of Tokyo
Publication
Jean-François Lalonde
Université Laval
Transaction
Oliver Cossairt
Northwestern University
Poster/Demo
Yasuyuki Matsushita
Osaka University
 
Jingyi Yu
University of Delaware
Industry
Shinsaku Hiura
University of Hyogo
 
Ashok Veeraraghavan
Rice University
Finance
Takuya Funatomi
NAIST
 
Aswin Sankaranarayanan
Carnegie Mellon University
Web/Social Media
Hiroyuki Kubo
NAIST
Technical Support Members
Tsuyoshi Takatani
NAIST
 
Takahiro Kushida
NAIST

Program Committee

Achuta Kadambi
UCLA
Andrew Adams
Google
Adrian Jarabo
University of Zaragoza
Anat Levin
Technion
Ashok Veeraraghavan
Rice University
Aswin Sankaranarayanan
Carnegie Mellon University
Atul Ingle
U. Wisconsin-Madison
Ayan Chakrabarti
Washington University in St. Louis
Belen Masia
University of Zaragoza
Clem Karl
Boston University
David Lindell
Stanford University
Diego Gutierrez
University of Zaragoza
Donald Dansereau
University of Sydney
Felix Heide
Princeton University
Gordon Wetzstein
Stanford University
Guy Satat
MIT Media Lab
Ioannis Gkioulekas
Carnegie Mellon University
Itzik Malkiel
Tel Aviv University
Ivo Ihrke
Inria Bordeaux
Jian Wang
Carnegie Mellon University
Jiawen Chen
Google
Jinwei Gu
Nvidia
Jonathan Barron
Google Research
Jue Wang
Face++ (Megvii)
Kyros Kutulakos
University of Toronto
Laura Waller
UC Berkeley
M. Salman Asif
University of California, Riverside
Martin Fuchs
Hochschule der Medien
Matthew O'Toole
Carnegie Mellon University
Miki Rubinstein
Google Inc.
Kaushik Mitra
IIT Madras
Nicolas Bonneel
CNRS
Nima Khademi Kalantari
Texas A&M University
Oliver Cossairt
Northwestern University
Aydogan Ozcan
UCLA
Paolo Favaro
University of Bern
Qing Wang
Northwestern Polytechnical University
Rick Szeliski
Facebook
Shaodi You
Data61-CSIRO
Kalyan Sunkavalli
Adobe Research
Tali Treibitz
University of Haifa
Jun Tanida
Osaka University
Wolfgang Heidrich
KAUST
Xiang Huang
Argonne National Laboratory
Xing Lin
UCLA
Yebin Liu
Tsinghua University
Yoichi Sato
University of Tokyo
Zhan Yu
Adobe
Katherine Bouman
MIT
Vishwanath Saragadam Raja Venkata
CMU

Sponsors

Gold Sponsors
Silver Sponsors
Bronze Sponsors
General Sponsor
Technical Co-sponsorship
Call for Sponsors
Benefits by sponsorship level (Platinum / Gold / Silver / Bronze):
  Full admissions (free registrations): 4 / 3 / 2 / 1
  Demo table (if requested): 1 / 1 / 1 / 1
  Logo on website and all printed materials: all levels
  Special recognition during banquet and other ceremonies: all levels
  Customized events*: Platinum only
Costs (incl. sales tax, US Dollar or Japanese Yen):
  Platinum: 10,000 USD / 1,100,000 JPY
  Gold: 7,500 USD / 825,000 JPY
  Silver: 5,000 USD / 550,000 JPY
  Bronze: 2,500 USD / 275,000 JPY
*Industry chairs will work with Platinum sponsors to customize the sponsor's involvement; examples include sponsorship of the best paper award, student grants, or the banquet dinner.
Industry Chairs: Shinsaku Hiura, Ashok Veeraraghavan
Contact : iccp2019-support@is.naist.jp
Call for Sponsors (PDF)

Statements

IEEE Computer Society Open Conference Statement

Equity, Diversity, and Inclusion are central to the goals of the IEEE Computer Society and all of its conferences. Equity at its heart is about removing barriers, biases, and obstacles that impede equal access and opportunity to succeed. Diversity is fundamentally about valuing human differences and recognizing diverse talents. Inclusion is the active engagement of Diversity and Equity.

A goal of the IEEE Computer Society is to foster an environment in which all individuals are entitled to participate in any IEEE Computer Society activity free of discrimination. For this reason, the IEEE Computer Society is firmly committed to team compositions in all sponsored activities, including but not limited to, technical committees, steering committees, conference organizations, standards committees, and ad hoc committees that display Equity, Diversity, and Inclusion.

IEEE Computer Society meetings, conferences and workshops must provide a welcoming, open and safe environment, that embraces the value of every person, regardless of race, color, sex, sexual orientation, gender identity or expression, age, marital status, religion, national origin, ancestry, or disability. All individuals are entitled to participate in any IEEE Computer Society activity free of discrimination, including harassment based on any of the above factors.

IEEE Event Conduct and Safety Statement

IEEE believes that science, technology, and engineering are fundamental human activities, for which openness, international collaboration, and the free flow of talent and ideas are essential. Its meetings, conferences, and other events seek to enable engaging, thought provoking conversations that support IEEE’s core mission of advancing technology for humanity. Accordingly, IEEE is committed to providing a safe, productive, and welcoming environment to all participants, including staff and vendors, at IEEE-related events.

IEEE has no tolerance for discrimination, harassment, or bullying in any form at IEEE-related events. All participants have the right to pursue shared interests without harassment or discrimination in an environment that supports diversity and inclusion. Participants are expected to adhere to these principles and respect the rights of others.

IEEE seeks to provide a secure environment at its events. Participants should report any behavior inconsistent with the principles outlined here to on-site staff, security or venue personnel, or to eventconduct@ieee.org.