Professor Nagahara Hajime
Intelligence and Sensing (D3 Center)
Computer Science
2001 Ph.D. in Engineering, Graduate School of System Engineering, Osaka University
2001 Research Associate, Japan Society for the Promotion of Science
2003 Assistant Professor, Graduate School of Engineering Science, Osaka University
2007-2008 Visiting Researcher, Columbia University, USA
2010 Associate Professor, Faculty of Information Science and Electrical Engineering, Kyushu University
2016-2017 Visiting Researcher, Columbia University, USA
2017 Professor, Institute for Datability Science, Osaka University
Theme
Computational Photography
Researchers in the computer vision and image processing communities have long used images from regular cameras as input. A standard camera is designed to produce images for human viewing, so there is no guarantee that the image it captures is the best input for machine vision tasks. We proposed new camera systems, called "computational cameras," consisting of special optics and sensors designed for specific tasks such as depth sensing, high-speed imaging, and object recognition. These unique cameras capture images better suited to the task and drastically improve performance.
Deep Sensing
A deep neural network (DNN) is a powerful tool for solving computer vision tasks such as object recognition, scene understanding, and image reconstruction. It drastically improves accuracy over classical methods because the feature extractor and classifier are designed by training on the target data. However, DNNs have been used only in the digital domain of the imaging pipeline, as feature extraction and classification models applied after the image is captured and digitized. The optics and sensors in the analog layer, by contrast, are still designed by hand based on theoretical or empirical analysis, and there is no guarantee that these designs and hardware parameters are optimal for the target applications and tasks. In this research, we propose a new framework called “deep sensing,” as shown in the figure. The framework also models the analog layer as part of the neural network and jointly optimizes the optics and sensor design parameters of the camera, together with the reconstruction and classification models, using a single training procedure.
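The joint analog/digital optimization at the heart of deep sensing can be illustrated with a minimal NumPy sketch, not the actual deep sensing implementation: a learnable "coded mask" stands in for the analog optics, a logistic classifier for the digital model, and one shared gradient step updates both. The data, mask, and training loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scenes": 16-pixel signals with synthetic binary labels
# (illustrative stand-in data, not from the actual system).
n, d, m = 200, 16, 4                         # samples, scene pixels, measurements
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Analog layer: a learnable coded mask M compresses each scene to m measurements.
M = rng.normal(scale=0.1, size=(m, d))
# Digital layer: a logistic classifier w on the measurements.
w = rng.normal(scale=0.1, size=m)

def predict(M, w, X):
    z = np.clip((X @ M.T) @ w, -30.0, 30.0)  # sense with M, then classify with w
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid probability

lr = 0.5
for _ in range(1000):
    S = X @ M.T                              # simulated sensor measurements
    g = (predict(M, w, X) - y) / n           # logistic-loss gradient w.r.t. logits
    grad_w = S.T @ g                         # gradient for the digital classifier
    grad_M = np.outer(w, X.T @ g)            # gradient for the analog mask
    w -= lr * grad_w                         # the same training step updates
    M -= lr * grad_M                         # both layers jointly

acc = float(np.mean((predict(M, w, X) > 0.5) == y))
print(f"training accuracy: {acc:.2f}")
```

Replacing the linear mask with a differentiable model of real optics (e.g., an exposure pattern or point spread function) and the classifier with a DNN yields the full framework described above.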
Physics-based Vision
We can measure and understand a scene from an image because light carries information from reflection, refraction, and scattering at object surfaces. A standard camera, however, captures only an RGB image, which records just the intensity of the light; the other information the light carries is lost, creating ambiguity between the image and the scene. We proposed using multimodal light information, such as hyperspectral, interference, polarization, and time-of-flight measurements, and modeled the physical interactions between light and objects. Based on this physics-based analysis of light interactions, we realized better shape reconstruction and scene understanding.
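As one concrete example of such physics-based modeling, the intensity seen through a rotating linear polarizer follows a sinusoid I(φ) = I_avg + A·cos(2(φ − φ0)), whose phase φ0 relates to the polarization azimuth of the surface. The sketch below is a hypothetical NumPy example with synthetic measurements, not the method from this research; it recovers the phase by linear least squares.

```python
import numpy as np

# Ground-truth polarization parameters (synthetic, for illustration).
I_avg, A, phi0 = 1.0, 0.3, np.deg2rad(40.0)   # mean, amplitude, azimuth

# Intensities observed through a linear polarizer at angles 0..165 degrees.
phis = np.deg2rad(np.arange(0.0, 180.0, 15.0))
I = I_avg + A * np.cos(2.0 * (phis - phi0))

# I(phi) is linear in [1, cos 2phi, sin 2phi], so fit by least squares.
B = np.stack([np.ones_like(phis), np.cos(2 * phis), np.sin(2 * phis)], axis=1)
c0, c_cos, c_sin = np.linalg.lstsq(B, I, rcond=None)[0]

# The phase of the fitted sinusoid is the polarization azimuth.
phi0_est = 0.5 * np.arctan2(c_sin, c_cos)
print(f"estimated azimuth: {np.rad2deg(phi0_est):.1f} deg")
```

With noisy real measurements the same least-squares fit applies per pixel, and the recovered phase constrains the surface normal for shape reconstruction.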
Contact
E-mail: nagahara@ids.
TEL: S*6068
The four-digit phone numbers are extensions used inside Osaka University. The phone numbers from outside Osaka University are as follows: S: 06-6879-xxxx, S*: 06-6105-xxxx and T: 06-6850-xxxx.
The domain name is omitted from e-mail addresses. Please add “osaka-u.ac.jp” to each e-mail address.