Smart Multimedia Beyond the Visible Spectrum
Huixing Zhou, Xidian University, email@example.com
Hanlin Qin, Xidian University, firstname.lastname@example.org
Images and videos widely used in multimedia are mostly acquired by grayscale or color sensors that capture data in the visible spectrum. Many multimedia applications, such as mobile phones and autonomous driving, however, also require imaging capability beyond the visible spectrum in order to exploit information in the ultraviolet and infrared ranges, and/or to provide high spectral resolution that better captures the reflectance properties of objects. In these scenarios, both passive and active sensors may be required to provide complementary and rich information about the scene. These requirements lead to the adoption of sensors in different modalities, such as ultraviolet, infrared, multispectral, hyperspectral, and LiDAR.
While new sensing technologies have greatly expanded the scope and capability of traditional multimedia systems, processing, analysing, and understanding the captured images and videos remain challenging tasks. In particular, each type of data has its own unique properties. This calls for smart multimedia technologies in which domain-specific knowledge is embedded into the data processing, and for new methods that enable knowledge transfer between domains and fusion of data from different modalities.
The goal of this special session is to provide a forum for researchers and developers in the multimedia community to present novel and original research on processing data beyond the visible spectrum. Topics of interest include, but are not limited to:
- Processing of image and video data captured in ultraviolet, infrared, multispectral, and hyperspectral form
- Image denoising
- Multimodal image registration
- Object detection, recognition, and tracking
- Image classification
- Multimodal data fusion
- Knowledge transfer and domain adaptation