Smart Multimedia Beyond the Visible Spectrum

Session Chairs:

Submission code: 9kwqg

Images and videos used in multimedia are mostly captured by grayscale or color sensors operating in the visible spectrum. Many multimedia applications, such as mobile phones and autonomous driving, however, also require imaging capability beyond the visible spectrum, in order to exploit information in the ultraviolet and infrared ranges and/or to provide high spectral resolution that better captures the reflectance properties of objects. In these scenarios, both passive and active sensors may be needed to provide complementary, information-rich views of the scene. These requirements have led to the adoption of sensors in different modalities, such as ultraviolet, infrared, multispectral, hyperspectral, and LiDAR.

While new sensing technology has greatly expanded the scope and capability of traditional multimedia systems, processing, analysing, and understanding the captured images and videos remains a challenging task. In particular, each type of data has its own unique properties. This calls for smart multimedia technologies that embed domain-specific knowledge into the data processing, and for new methods that enable knowledge transfer between domains and fusion of data from different modalities.
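As a minimal illustration of the kind of multimodal fusion this session targets, the Python sketch below combines a visible-band image and a co-registered thermal image by late fusion: a toy histogram feature is extracted per modality and the two vectors are concatenated into one joint descriptor. The feature choice, array shapes, and helper names are assumptions made for this example, not a prescribed method.

    import numpy as np

    def extract_features(image: np.ndarray, bins: int = 16) -> np.ndarray:
        """Toy per-modality feature: a normalized intensity histogram."""
        hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
        return hist / max(hist.sum(), 1)

    def fuse_features(visible: np.ndarray, thermal: np.ndarray) -> np.ndarray:
        """Late fusion by concatenating per-modality feature vectors.
        Assumes both inputs are co-registered and scaled to [0, 1]."""
        return np.concatenate([extract_features(visible),
                               extract_features(thermal)])

    # Hypothetical co-registered captures of the same scene.
    visible = np.random.rand(64, 64)   # stand-in for a visible-band image
    thermal = np.random.rand(64, 64)   # stand-in for a thermal (infrared) image
    print(fuse_features(visible, thermal).shape)  # (32,) joint descriptor

In practice the per-modality features would come from learned encoders, but the fusion point (early, late, or intermediate) remains a central design choice.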

The goal of this special session is to provide a forum for researchers and developers in the multimedia community to present novel and original research in processing data beyond the visible spectrum. The topics include but are not limited to:

  • Processing of image and video data captured in ultraviolet, infrared, multispectral, hyperspectral, and SAR modalities.
  • Image denoising
  • Multimodal image registration
  • Object detection, recognition, and tracking
  • Image classification
  • Multimodal data fusion
  • Knowledge transfer and domain adaptation

Huixin Zhou is currently a professor and the vice-dean of the School of Physics and Optoelectronic Engineering at Xidian University, Xi’an, Shaanxi, China. He received his Ph.D. degree from Xidian University in 2004 and has worked there since. His current research focuses on optoelectronic imaging and real-time image processing, object detection and tracking, and multispectral/hyperspectral image processing.

He is a senior member of the Photoelectronic Technology Professional Committee of the Chinese Society of Astronautics, a senior member of the Chinese Optical Society, and a member of the Optical Society of America and the IEEE. He is an Associate Editor of Infrared Physics & Technology. He has published more than 100 research papers and holds more than 20 patents and 10 software copyrights. In 2015, he was awarded the first prize for technology innovation in colleges and universities by the Ministry of Education of China.


Affect-Driven Smart Multimedia

Submission code: wxq77

Affect (emotions, sentiments, and moods) plays a crucial role in daily work, since it can influence the performance and outcome of both individual and group activities. Affective states are experienced continuously and communicated through direct or computer-mediated interactions. Emotion AI is a wide-ranging branch of computer science concerned with building systems that can recognize and react to human affect.

Affect is especially relevant for multimedia systems that aim to involve people in a broad range of activities (including creating, communicating, interacting, and navigating), where both personality and affect play a crucial role. Successful multimedia technologies, systems, and applications leave users experiencing positive affect. Leveraging affect-awareness in multimedia therefore enhances both the user experience and product quality.
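As a deliberately simple sketch of affect sensing from text (one form of the "affect sensing without sensors" topic listed below), the snippet scores user feedback against a tiny hand-made valence lexicon. The lexicon and its scores are invented for illustration; real Emotion AI systems rely on far richer models and signals.

    # Toy lexicon mapping words to valence in [-1, 1]; invented for this sketch.
    VALENCE = {"love": 1.0, "great": 0.8, "fine": 0.2,
               "slow": -0.4, "broken": -0.8, "hate": -1.0}

    def affect_score(message: str) -> float:
        """Average valence of known words; 0.0 means neutral or unknown."""
        words = [w.strip(".,!?").lower() for w in message.split()]
        hits = [VALENCE[w] for w in words if w in VALENCE]
        return sum(hits) / len(hits) if hits else 0.0

    for msg in ["I love this editor!", "The export is slow and broken."]:
        print(f"{affect_score(msg):+.2f}  {msg}")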

This special session is a forum for researchers and practitioners interested in the role of affect in multimedia to meet, present, and discuss their work, and to share their perspectives on the various aspects of Emotion AI applied to multimedia. High-quality contributions from both academia and industry are invited, covering research results and experience reports on Emotion AI in multimedia, including algorithms, applications, empirical studies, theoretical models, and tools for supporting affect-awareness. Topics of interest include but are not limited to:

  • Impact of affect on individual and group user experience.
  • The role of affect in the multimedia creation ecosystem.
  • Leveraging users’ affective feedback to improve software, tools, and processes.
  • Design, development, and evaluation of tools and datasets for supporting Affect-Driven Smart Multimedia.
  • Reusable software frameworks, APIs, and patterns for designing and maintaining Affect-Driven Smart Multimedia.
  • Modeling of affect in multimedia systems.
  • Affect sensing without sensors: message boards, issue tracking, social media.
  • Implications of emotion awareness for different genders, cultures, and ages.
  • Implications of emotion awareness for cross-cultural, mixed-gender, and mixed-age groups in a global environment.
  • Methodologies, standards, and replications of prior studies from diverse areas.

Dr. Javier Gonzalez-Sanchez is a faculty member in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University. His research sits at the intersection of software engineering and human-computer interaction, focusing on developing and advancing approaches for building human-centric technologies, particularly intelligent and self-adaptive systems that understand and respond to cognitive and affective human factors. His vision is to provide insights that influence how software engineers design and build emerging software systems leveraging pervasive, context-aware, and deep learning capabilities. Dr. Gonzalez-Sanchez received a Ph.D. in Computer Science from Arizona State University, a Master's in Electrical Engineering from the Center for Research and Advanced Studies (CINVESTAV) of the Mexican National Polytechnic Institute, and a B.S. in Computer Engineering from the Universidad de Guadalajara. He is a senior member of the ACM and of the IEEE, and is currently an ACM Distinguished Speaker.

Helen Chavez is a faculty member in the Department of Academic and Student Affairs and the Department of Computer Science at Arizona State University. Her research interests focus on human-computer interaction, affective computing, educational technology, and engineering education, particularly as applied to improving the learner experience and user experience in general. Helen holds a B.S. in Computer Systems and an M.S. in Computer Science from the Tecnológico de Monterrey, Mexico, and a Ph.D. in Computer Science from Arizona State University.


Smart Multimedia Haptic Training Simulation

Session Chairs:

Submission code: t7swy

Many professions require dexterous manipulation and therefore both initial and continuing hands-on training. In the medical context, for example, simulators such as animals, cadavers, and phantoms have for decades been a convenient way to learn by trial. Yet these training resources are expensive, not continuously available, may raise ethical issues, and provide only a limited set of study cases to practice on. These difficulties limit trainees' opportunities for hands-on practice during their curriculum. Cost-efficient solutions are needed that enable hands-on practice on any study case, at any time, as often as necessary.

For a decade, Virtual Reality (VR) simulators have been designed to overcome these drawbacks. With such devices, which can be parameterized online, it becomes possible to provide an unlimited set of study cases and, further, to adapt the difficulty level to a specific learning curve. VR simulators have progressively improved to provide trainees with a more realistic environment, first in 2D and more recently in 3D. In haptic training simulators, the additional force feedback provides realistic interaction, which has been shown to be effective for training advanced tasks in some medical contexts. Airplane pilot simulators are an example of a widespread solution for hands-on training on difficult situations without taking any risk, with the added ability to objectively assess each performance; they have become a necessary step before training on real planes.
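To make force feedback concrete, here is a minimal sketch of penalty-based haptic rendering, a standard textbook scheme: once the device proxy penetrates a virtual wall, the commanded force follows a spring-damper law F = k·x − b·v. The gains and positions are illustrative values, not tuned for any real device.

    def wall_force(position: float, velocity: float,
                   wall: float = 0.0, k: float = 800.0, b: float = 2.0) -> float:
        """Penalty-based rendering of a virtual wall located at `wall`.
        position, velocity: proxy state along one axis (m, m/s)
        k, b: illustrative stiffness (N/m) and damping (N·s/m) gains
        Returns the force (N) to command on the haptic device."""
        penetration = wall - position          # > 0 once inside the wall
        if penetration <= 0.0:
            return 0.0                         # free space: no feedback
        return k * penetration - b * velocity  # damped spring push-back

    # Hypothetical servo-loop step: proxy 2 mm inside the wall.
    print(wall_force(position=-0.002, velocity=0.05))  # ~1.5 N

Such a loop typically runs at around 1 kHz, since lower update rates make stiff virtual surfaces feel soft or unstable.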

This special session aims to provide a forum for researchers and developers in the multimedia community to present novel and original research on providing effective haptic feedback. The topics include but are not limited to:

  • Haptic rendering
  • Computer graphics
  • Virtual/augmented/mixed reality
  • Variable Stiffness Actuators
  • Multimodal simulation
  • Training simulation
  • Motion capture/analysis, cognitive performance

Sylvain Bouchigny is a researcher in Human-Computer Interaction at the CEA LIST institute, France. Trained in physics, he received an M.Eng. in scientific instrumentation from the National Engineering School in Caen in 1998 and a Ph.D. in nuclear physics from the University of Paris Sud 11 (Orsay) in 2004. In 2007, he joined CEA LIST to work on physics applied to human interaction, which was closer to his interests. His research focuses on multimodal human-computer interaction, haptics, and virtual environments applied to education, training, and rehabilitation. He has conducted projects on tangible interactions on interactive tables for education and post-stroke rehabilitation and, for the last ten years, has led the development of a VR haptic platform for surgical education.

Arnaud Lelevé has been an associate professor at INSA Lyon since 2001. He received his Ph.D. in Robotics in 2000 from the Université de Montpellier, France. He first worked in a computer science laboratory on remote-lab systems, then joined the Ampère lab in 2011, where he has led the Robotics team since 2016. He has conducted numerous R&D projects, including INTELO (a mobile robot for bridge inspection) and the Greenshield project (which aims to replace pesticides with farming robots in the crops), as well as medical-robotics research projects such as SoHappy (a pneumatic master for tele-echography). He has also participated in the development of hands-on training projects such as SAGA (a birth simulator) and PeriSim (an epidural needle insertion simulator). He has strong skills in applied mechatronics and real-time computer science, and solid experience in scientific program management.

Carlos Rossa is an Associate Professor in the Department of Systems and Computer Engineering at Carleton University, Ottawa, ON, Canada. He received his B.Eng. and M.Sc. degrees in Mechanical Engineering from the Ecole Nationale d'Ingénieurs de Metz, Metz, France, both in 2010, and his Ph.D. in Mechatronics and Robotics from Sorbonne Université (UPMC), Paris, France, in December 2013, under the auspices of the Commissariat à l'Energie Atomique. His research interests include medical robotics, image-guided surgery, instrumentation, and haptics.


Connected Citizens, Communities, and Cities

Session Chairs:

Submission code: ximmj

This workshop discusses methodologies, technologies, and frameworks that improve the performance of connected devices integrated into smart homes and buildings. These connected devices are developed according to citizens' needs.

Sometimes, however, the solutions proposed on the market are not fully accepted and adopted, and citizens' expectations and perceptions are not met. The workshop also focuses on how citizens can be interconnected through smart connected devices to improve the quality of life in connected communities, the essential building blocks of smart cities. Since citizens generate a significant amount of digital information daily, it is possible to learn more about their needs and about specific social features such as personality and behavior.

Furthermore, citizens can be classified according to their behavior and personality so that they and their connected devices communicate efficiently. This requires fundamental and supporting technologies to be integrated into a responsive online communication system. AI, gamification, vision systems, voice systems, and serious games can also be integrated to engage citizens in connected communication structures, enabling social, sensing, innovative, and sustainable devices that provide and promote solutions tailored to citizens' needs. Such products have to include special functions for social and communication activities in households and public places.

In addition, advanced citizen interfaces can run on different hardware and software platforms, and they can be built using AI algorithms in the cloud or on mobile devices. When a smart community is implemented, citizens' needs must be addressed using reconfigurable, dynamic communication topologies that adapt online to updated data from citizens. Smart communities and citizens therefore require innovative and creative solutions in connected devices. Accordingly, this workshop presents the latest technological proposals for connected devices deployed in smart homes and buildings as methodical building blocks of smart cities.
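As a hedged sketch of one idea above, reconfiguring a communication topology from updated citizen data, the snippet below groups citizens by a toy behavior score and reconnects each group to a matching device profile on every update. The thresholds, profile names, and scores are all invented for illustration.

    from collections import defaultdict

    # Hypothetical behavior scores streamed from citizens' devices.
    citizens = {"ana": 0.9, "ben": 0.4, "eva": 0.1, "kim": 0.7}

    def profile(score: float) -> str:
        """Invented thresholds mapping a behavior score to a device profile."""
        if score >= 0.7:
            return "proactive-assistant"
        if score >= 0.3:
            return "on-demand"
        return "minimal-notifications"

    def rebuild_topology(scores):
        """Recompute which citizens each device profile serves.
        Intended to run whenever updated online data arrives, so the
        topology adapts dynamically instead of being fixed at deployment."""
        topology = defaultdict(list)
        for name, score in scores.items():
            topology[profile(score)].append(name)
        return dict(topology)

    print(rebuild_topology(citizens))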

Authors are invited to submit contributions including, but not limited to the following areas:

  • Smart interfaces
  • Artificial intelligence for smart cities and citizens
  • Smart cities based on connected devices
  • Smart communities on connected devices
  • Data fusion in smart homes and buildings
  • Social communication systems
  • Social robotics in smart homes and buildings
  • Cyber-physical systems in smart homes and buildings
  • Digital twins for modeling smart homes and cities
  • Citizens' social behavior
  • Vision systems for connected devices
  • Voice systems for connected devices
  • Energy systems for connected devices
  • Security systems for connected devices
  • Artificial Intelligence for classification of human features
  • Older people's needs in smart homes and buildings using connected devices
  • Public transportation using connected devices
  • Interfaces for connected social devices
  • Education systems for smart communities

Pedro Ponce is a control systems and automation engineer, with a Ph.D. (2002) and a master's degree (1998) in Electrical Engineering focusing on control systems. For 12 years he worked in control and automation, designing, integrating, and testing control and automation solutions and products. In 2002, he joined the Tecnológico de Monterrey, Mexico City campus, as a senior researcher in control systems and automation. He has written 15 books, 10 book chapters, and around 240 research papers, and holds two patents in the United States and two in Mexico. His main research areas are control systems, digital systems, power electronics, electric systems, robotics, artificial intelligence, and mechatronics.

Sergio Castellanos is an assistant professor at the University of Texas at Austin and leads the RESET (Rapid, Equitable & Sustainable Energy Transitions) Lab, analyzing decarbonization pathways for emerging economies, data-driven sustainable transportation approaches, and equitable clean tech adoption and deployment strategies. With collaborators, his interdisciplinary projects have been awarded international prizes (United Nations' Data for Climate Action Challenge), won national competitions (México), and gathered media attention (Forbes, Greentech Media). Prior to UT Austin, he worked as a researcher at UC Berkeley leading bi-national (US-Mexico) projects, helping to bridge the clean energy technology gap between these two countries. Sergio holds an Engineering Ph.D. from MIT.


From Blockchain to Metaverse

Session Chairs:

Submission code: 5xmd9

Blockchain technology has been a popular topic of discussion throughout the world in recent years. We believe that blockchain technology has the potential to serve as a trust mechanism not just for Bitcoin transactions but also for finance, healthcare, supply chain management, and even crypto artworks. Furthermore, we believe that smart contracts can help computer scientists execute programs with worldwide consensus. As a result, countless decentralized applications have been founded in recent years, along with numerous inventions.
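To ground the claim that a blockchain provides a trust mechanism through data integrity, here is a minimal hash-chain sketch: each block's hash covers its payload and its predecessor's hash, so altering any past record invalidates every later link. It is a teaching toy under simplifying assumptions; real chains add consensus, signatures, and Merkle trees.

    import hashlib, json

    def block_hash(index: int, prev_hash: str, payload: dict) -> str:
        """Hash a block's contents together with its predecessor's hash."""
        record = json.dumps({"i": index, "prev": prev_hash, "data": payload},
                            sort_keys=True)
        return hashlib.sha256(record.encode()).hexdigest()

    # Build a tiny chain of hypothetical transactions.
    chain, prev = [], "0" * 64  # all-zero genesis predecessor
    for i, payload in enumerate([{"pay": 5}, {"pay": 7}, {"pay": 2}]):
        h = block_hash(i, prev, payload)
        chain.append({"i": i, "prev": prev, "data": payload, "hash": h})
        prev = h

    # Tampering with block 1 is detected when hashes are re-verified.
    chain[1]["data"]["pay"] = 700
    valid = all(block_hash(b["i"], b["prev"], b["data"]) == b["hash"]
                for b in chain)
    print("chain valid:", valid)  # False after tampering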

The Metaverse is a synthesis of several virtual worlds and cyberspace, and the next generation of human habitat. The term alludes to a fully developed digital world that exists independently of the analogue world in which we live. The Metaverse is frequently associated with user-generated content, gamification, virtual and augmented reality, and various other fascinating multimedia topics.

Both Blockchain and the Metaverse are reshaping how the Internet operates. While Blockchain provides data integrity, decentralization, and consensus, the Metaverse infuses cyberspace with a wealth of experience. The two technologies work in tandem to pave the way for the new Internet.

In this special session, authors are invited to submit contributions including, but not limited to, the following areas:

  • Blockchain middleware and development tools for Smart Multimedia 
  • Blockchain-based applications & services for Smart Multimedia 
  • Decentralized application development for Smart Multimedia 
  • Smart contracts and verification for Smart Multimedia 
  • Distributed database technologies for Blockchain
  • Artificial intelligence for Metaverse 
  • Metaverse creation tools 
  • Metaverse perception infrastructure
  • Applications & Services in Metaverse  
  • Creation of a virtual idol and a virtual YouTuber 
  • Digital twins technology in smart cities 
  • Metaverse education system

Xiao Wu is currently the director of the Yangtze River Blockchain International Innovation Center and CEO of WhiteMatrix Inc. He received his M.Sc. and B.Sc. from the University of Alberta in 2015 and 2010, respectively. In 2018, he founded WhiteMatrix Inc., which has received four rounds of financing from AntGroup, Hashkey Capital, Neo Global Capital, and others. In 2020, Xiao Wu won the first prize of the China Blockchain Development Contest.

Dr. Wei Cai is currently an assistant professor of Computer Engineering in the School of Science and Engineering at The Chinese University of Hong Kong, Shenzhen. He serves as the director of the Human-Cloud Systems Laboratory and of the CUHK(SZ)-White Matrix Joint Metaverse Laboratory. He received his Ph.D., M.Sc., and B.Eng. degrees from The University of British Columbia (UBC), Seoul National University, and Xiamen University in 2016, 2011, and 2008, respectively. Before joining CUHK-Shenzhen, he was a postdoctoral research fellow in the Wireless Networks and Mobile Systems (WiNMoS) Laboratory at UBC. He has completed research visits at Academia Sinica (Taiwan), The Hong Kong Polytechnic University, and the National Institute of Informatics, Japan. Dr. Cai has published more than 60 journal and conference papers in the areas of cloud computing, edge computing, interactive multimedia, and blockchain systems. He serves as an associate editor of IEEE Transactions on Cloud Computing. He was a recipient of the 2015 Chinese Government Award for Outstanding Self-Financed Students Abroad, the UBC Doctoral Four-Year Fellowship from 2011 to 2015, and the Brain Korea 21 Scholarship. He also received the best student paper award from ACM BSCI 2019 and best paper awards from CCF CBC 2018, IEEE CloudCom 2014, SmartComp 2014, and CloudComp 2013.

Dr. Han Qiu received the B.E. degree from the Beijing University of Posts and Telecommunications, Beijing, China, in 2011, the M.S. degree from Telecom-ParisTech (Institute Eurecom), Biot, France, in 2013, and the Ph.D. degree in computer science from the Department of Networks and Computer Science, Telecom-ParisTech, Paris, France, in 2017. He worked as a postdoc and a research engineer with Telecom Paris and LINCS Lab from 2017 to 2020. Currently, he is an assistant professor at the Institute for Network Sciences and Cyberspace, Tsinghua University, Beijing, China. His research interests include AI security, blockchain, big data security, and the security of intelligent transportation systems.


3D Visualization and Image Understanding

Session Chairs:

Submission code: b76a2

Researchers are facing complex challenges as computer vision applications move from 2D to 3D. Much of our work relies on 2D projections of 3D objects and scenes, which makes accurate, interpretable visualization of 3D models and data an integral part of researchers' workflows. Moreover, the extra dimension dramatically increases the complexity of computer vision tasks, necessitating novel solutions that increase algorithm efficiency when dealing with 3D data. Example strategies include building dimensionality reduction into the algorithm, or establishing a simplified approximation of the real-world input before processing. Many of these solutions require that the researcher has excellent visualization tools, a deep understanding of the problem to be solved, and intricate knowledge of the sensing device used. As dimensionality increases, there is also a greater risk of producing results that are highly sensitive to changes in one of the input dimensions or model parameters; intelligent means of properly locating and orienting the sensed image in the search space is therefore a prerequisite to many real-world tasks. Similarly, proper design and evaluation of the model to ensure robustness to small perturbations in the input becomes paramount.
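As a hedged example of the "simplified approximation of the real-world input" strategy mentioned above, the sketch below voxel-downsamples a point cloud with NumPy: points are binned into a coarse cubic grid, and each occupied voxel is replaced by the centroid of its points. The voxel size and the random cloud are arbitrary illustrative choices.

    import numpy as np

    def voxel_downsample(points: np.ndarray, voxel: float = 0.05) -> np.ndarray:
        """Replace each occupied voxel with the centroid of its points.
        points: (N, 3) array of xyz coordinates
        voxel:  edge length of the cubic voxel grid (illustrative value)"""
        keys = np.floor(points / voxel).astype(np.int64)  # voxel index per point
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        inverse = inverse.ravel()  # flatten (shape varies across NumPy versions)
        counts = np.bincount(inverse).astype(float)
        out = np.zeros((counts.size, 3))
        for dim in range(3):  # per-voxel coordinate sums, one axis at a time
            out[:, dim] = np.bincount(inverse, weights=points[:, dim])
        return out / counts[:, None]

    # Hypothetical dense scan reduced before downstream processing.
    cloud = np.random.rand(100_000, 3)
    print(voxel_downsample(cloud).shape)  # far fewer points than the input

Reducing input size this way trades geometric detail for tractability before any heavier 3D processing runs.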

This special session is a forum for researchers and practitioners of automated 3D perception and visualization to present and discuss their work with others interested in artificial intelligence algorithms and applications. High-quality contributions from both academia and industry, covering research results and experience, including algorithms, applications, empirical studies, theoretical models, and tools for 3D visualization and image understanding, are invited. Topics of interest include but are not limited to:

  • Novel methods for interpreting and visualizing 3D models and data.
  • Applications of 3D image sensing and visualization.  
  • Efficient algorithms and techniques for processing and interpreting 3D input data.
  • 2.5-dimension computer vision advantages and limitations.
  • Multi-modal 3D or 2.5D sensing including LiDAR, ultrasound, CT, and thermal imaging.
  • Promoting and evaluating robustness of 3D perception tasks to noise, orientation, and other real-world sensing issues.
  • Software frameworks, APIs, and patterns for designing, building, visualizing and evaluating 3D perception solutions.
  • Cultural and societal implications of improved 3D perception and visualization.
  • Techniques and strategies to promote deeper automated understanding of the 3D world.

Dr. Najjaran is a Professor of Mechanical and Mechatronics Engineering and Head of the Advanced Control and Intelligent Systems (ACIS) laboratory at the University of British Columbia (UBC). He received his Ph.D. from the Department of Mechanical and Industrial Engineering at the University of Toronto in 2002. His research focuses on the analysis and design of mechatronics and control systems, with broad applications including unmanned ground and aerial vehicles, industrial automation, and microelectromechanical systems. Over the past decade, he and his students have contributed to multiple aspects of the safe and reliable operation of robots through computer vision, artificial intelligence, and machine learning techniques. Dr. Najjaran is a registered P.Eng. in BC, a Fellow of the CSME, and the President of Advanced Engineering Solutions Inc., which provides design and technical consultation services to the automation industry.

Brandon Graham-Knight is a Ph.D. student researching artificial intelligence and computer vision. He is a member of the Advanced Control and Intelligent Systems (ACIS) laboratory at the University of British Columbia (UBC). He is also currently working at the BC Cancer Agency as a research intern, evaluating AI tools for possible clinical application.