3D Learning: Mohamed Daoudi

To be added


Learning & Applications in Remote Sensing


Organizers: Irene Cheng, University of Alberta, locheng@ualberta.ca and Alvin Sun, University of Alberta, xinyao1@ualberta.ca

Advances in sensor technology have broadened the scope of earth observation. High-precision, long-range data acquisition devices can now be mounted on the ground or in space, including satellites, planes, and Unmanned Aerial Vehicles (UAVs) such as drones, to capture dynamics on the earth's surface, benefiting sectors such as infrastructure monitoring and planning, energy operation safety, environmental and climate protection, and precision agriculture. Researchers have studied both vision-based and learning-based data analytics techniques to handle the rapidly increasing and diverse sensor data. The objective of this Special Session is to inspire and explore, discuss challenges, and learn from current findings, in order to define a more effective and collaborative R&D direction and translate learnt concepts to real-world scenarios.


Topics include, but are not limited to:

·       Remote sensing acquisition devices and methods

·       Remote robotics and pattern recognition

·       Remote sensing data analytics methodology

·       Remote sensing applications

·       Remote sensing outcome validation metrics

·       Internet of Things and mobile network for remote sensing

·       Hyperspectral and multispectral imagery

·       Integration and fusion of remote sensing data

·       Remote sensing data transmission and cloud processing


Smart Multimedia Beyond the Visible Spectrum

Organizers: Huixing Zhou, Xidian University, hxzhou@mail.xidian.edu.cn and Hanlin Qin, Xidian University, hlqin@mail.xidian.edu.cn

Images and videos widely used in multimedia are mostly captured by grayscale or color sensors operating in the visible spectrum. However, many multimedia applications, such as mobile phones and autonomous driving, also require imaging capability beyond the visible spectrum, either to exploit information in the ultraviolet and infrared range or to provide high spectral resolution that better captures the reflectance properties of objects. In these scenarios, both passive and active sensors may be required to provide complementary, rich information about the scene. These requirements lead to the adoption of sensors in different modalities, such as ultraviolet, infrared, multispectral, hyperspectral, LiDAR, and so on.

While new sensing technologies have greatly expanded the scope and capability of traditional multimedia systems, processing, analysing, and understanding the captured images or videos remains challenging. In particular, each type of data has its own unique properties. This calls for smart multimedia technologies, including AI techniques, that embed domain-specific knowledge into the data processing, and for new methods that enable knowledge transfer between domains and fusion of data from different modalities.

The goal of this special session is to provide a forum for researchers and developers in the multimedia community to present novel and original research in processing data beyond the visible spectrum. The topics include but are not limited to:

(1) Processing of image and video data captured in ultraviolet, infrared, multispectral, hyperspectral, LiDAR, and SAR modalities

(2) Image denoising

(3) Multimodal image registration

(4) Object detection, recognition, and tracking

(5) Image classification

(6) Multimodal data fusion

(7) Knowledge transfer and domain adaptation


Smart Multimedia Haptic Training Simulation

Session Chairs: Sylvain Bouchigny, CEA LIST, France; Arnaud Lelevé, INSA Lyon, France; Florence Zara, LIRIS, Université Lyon 1, France


Many professions require dexterous manipulation and therefore initial and continuing hands-on training. For example, in the medical context, simulators such as animals, cadavers, or phantoms have been a convenient way to learn by trial for decades. Yet these training resources are expensive, not continuously available, may raise ethical issues, and provide a limited set of study cases to practice on. These difficulties limit the opportunities for trainees to perform hands-on training during their curriculum. It is therefore necessary to provide cost-efficient solutions that facilitate hands-on practice on any study case, at any time, as often as necessary.

For a decade, Virtual Reality (VR) simulators have been designed to overcome the aforementioned drawbacks. Such devices can be parameterized online, making it possible to provide an unlimited set of study cases and, further, to adapt the difficulty level to a specific learning curve. VR simulators have progressively improved to provide trainees with more realistic environments, first in 2D and more recently in 3D. With haptic training simulators, the additional force feedback provides realistic interaction, which has been demonstrated to be efficient training for advanced tasks in some medical contexts. Airplane pilot simulators are an example of a widespread solution for hands-on training in difficult situations without taking any risk and with the ability to objectively assess each performance; they have become a necessary step before training on real planes. In a medical training context, the development of such simulators also creates the need to simulate, in real time, the behavior of organs interacting with each other and with surgical tools.

This special session aims to provide a forum for researchers and developers in the multimedia community to present novel and original research on providing effective haptic feedback in the context of medical training simulators.

The topics include but are not limited to:

  • Haptic rendering
  • Computer graphics
  • Virtual/augmented/mixed reality
  • Variable Stiffness Actuators
  • Multimodal simulation
  • Training simulation
  • Motion capture/analysis, cognitive performance


Sylvain Bouchigny is a researcher in Human-Computer Interaction at the CEA LIST Institute, France. Trained in physics, he received an M.Eng. in scientific instrumentation from the National Engineering School in Caen in 1998 and a Ph.D. in nuclear physics from the University of Paris Sud 11 (Orsay) in 2004. In 2007, he joined CEA LIST to work on physics applied to human interaction, which was closer to his interests. His research focuses on multimodal human-computer interaction, haptics, and virtual environments applied to education, training, and rehabilitation. He has conducted projects on tangible interactions on interactive tables for education and post-stroke rehabilitation and, for the last ten years, has led the development of a VR haptic platform for surgical education.

Arnaud Lelevé was an associate professor at INSA Lyon from 2001 to 2022 and is now a full professor. He received his Ph.D. in Robotics in 2000 from the Université de Montpellier, France. He first worked in a computer science laboratory on remote-lab systems and then joined the Robotics team of the Ampère lab in 2011. He has conducted numerous R&D projects, including INTELO (a mobile robot for bridge inspection) and Greenshield (which aims to replace pesticides with farming robots in crops), as well as medical-robotics research projects such as SoHappy (a pneumatic master for tele-echography). He has also participated in the development of hands-on training projects such as SAGA (a birth simulator) and PeriSim (an epidural needle insertion simulator). He has strong skills in applied mechatronics and real-time computer science, and solid experience in scientific program management.

Florence Zara has been an associate professor in computer graphics, animation, and virtual reality at LIRIS, Université Lyon 1, France, since 2005. She defended her HDR (Accreditation to Supervise Research) in Computer Science at Lyon in 2020. Before that, she defended her Ph.D. at Grenoble in 2003, which combined high-performance computing, physical modeling, and virtual reality in the ID-IMAG laboratory. She then spent two years at LSIIT, Strasbourg, working on 4D data visualization.

Her research focuses on the development of training simulators for medical gestures. Her expertise includes animation for computer graphics, topological and physical modeling of 3D deformable objects, parallel algorithms, biomechanical simulation of soft tissues, mechanical simulation of the pregnant pelvic system, and the coupling of numerical simulations with haptic devices. She co-leads the Origami team in the LIRIS laboratory and has twice co-organized the VRIPHYS (Virtual Reality Interaction and Physical Simulation) workshop in Lyon.


Smart Homes and Cities: the Future through Smart Technological Innovations

Organizers: Omar Mata, Tecnológico de Monterrey, omar.mata@tec.mx, and Pedro Ponce, Tecnológico de Monterrey, pedro.ponce@tec.mx


This session explores the transformative influence of technological advancements on residential habitats and urban environments. We delve into the convergence of the Internet of Things (IoT), Artificial Intelligence (AI), smart infrastructure, architectural design supplemented with supportive technologies, and virtual reality, providing an in-depth examination of the forthcoming landscape of smart homes and cities. The topics extend beyond the evolution of smart home technologies and sustainable strategies for smart cities, venturing into cybersecurity in an interconnected world, energy, security, healthcare, and the broader societal implications of these advances. Esteemed experts in machine learning, haptic devices, urban planning, sustainability, technology, and related disciplines will present stimulating outcomes and case studies and participate in insightful discussions. We invite you to join our exploration of how these technologies are revolutionizing our living environments and how we can optimally leverage their capabilities while mitigating their potential risks.


Smart Haptics

Organizers: Cédric Dumas, IMT Atlantique, France, cedric.dumas@imt-atlantique.fr and Carlos Rossa, Carleton University, Canada, rossa@sce.carleton.ca

Recent advancements in haptics are shaping the way people interact with a continuously growing virtual world and are revolutionizing human-computer interaction in several sectors spanning healthcare and industry. Haptic, tactile, and force feedback devices provide new ways to perceive a virtual or teleoperated environment, enabling more intuitive and precise information flow and control of cyber-physical systems. This Special Session solicits original research in the area of "Smart Haptics" and will gather results from a variety of fields pertaining to haptics and its applications. It will provide an opportunity for haptics researchers to present their latest work, share ideas, and network with other researchers in the field.


Topics include, but are not limited to:

  • Haptics
  • Tactile and force feedback devices
  • Modelling, design, and control of haptic devices and systems
  • Telerobotics and telepresence
  • Mobile and wearable haptic devices
  • Evaluation of haptic devices
  • Haptics for virtual and augmented reality systems
  • Integration of haptic feedback in virtual reality simulation
  • Applications of haptics in healthcare and wellness technologies
  • Applications of haptics in the industry sector
  • Intelligent control of haptic systems