Satellite Workshops
Workshop 1: Edge Intelligence: Smart, Efficient, and Scalable Solutions for IoT, Wearables, and Embedded Devices (SEEDS)
Organizers: Pasquale Coscia, Università degli Studi di Milano, Italy, Konstantinos N. Plataniotis, University of Toronto, Canada, Nikolaos Boulgouris, Brunel University of London, United Kingdom, Pai Chet Ng, Singapore Institute of Technology, Singapore
Date and Time: TBA
Location: TBD
Website: https://sites.google.com/view/seeds2025
Abstract: We are entering a new era of Artificial Intelligence (AI), in which performance and computational efficiency are the two foundational pillars for complex systems comprising hundreds or even thousands of agents. The conventional paradigm, focused mainly on final performance, poses significant challenges: high-capacity networks drive up costs and consume substantial power because of the vast amounts of data they process. Addressing these challenges with tiny, low-power devices reduces dependence on data streaming to powerful servers, enables faster inference, and supports privacy-preserving processing. This workshop aims to gather researchers and practitioners to explore key issues such as efficient algorithms, enhanced robustness, and the development of resilient systems able to operate effectively in unpredictable environments. Emphasis will be placed on efficient methodologies for model and data optimization, including techniques such as distillation, and on strategies for improving trustworthiness through explainable methods. It will highlight cutting-edge research and practical solutions that enable the deployment of AI models on resource-constrained, intelligent devices. By providing an inclusive platform, this workshop will facilitate discussions on recent advancements in the AI and Artificial Intelligence of Things (AIoT) domains.
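As one concrete illustration of the model-optimization techniques in scope, the following is a minimal sketch of knowledge distillation, in which a compact student network is trained to match the softened outputs of a larger teacher; the temperature, loss weighting, and PyTorch formulation are illustrative assumptions, not methods prescribed by the workshop.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft teacher targets with hard labels (T and alpha are illustrative)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student's softened predictions
        F.softmax(teacher_logits / T, dim=1),       # teacher's softened targets
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term keeps a comparable gradient magnitude
    hard = F.cross_entropy(student_logits, labels)  # standard supervised term
    return alpha * soft + (1 - alpha) * hard
```

The distilled student can then run on a low-power device while inheriting much of the teacher's accuracy, which is exactly the performance-versus-efficiency trade-off the workshop targets.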
Workshop 2: Transparent Image Processing (TIP)
Organizers: Chang-Su Kim (changsukim@korea.ac.kr), Korea University, Yeong Jun Koh (yjkoh@cnu.ac.kr), Chungnam National University, Jiaying Liu (liujiaying@pku.edu.cn), Peking University, Xinchao Wang (xinchao@nus.edu.sg), National University of Singapore
Date and Time: TBA
Location: TBD
Website: https://uinone.github.io/tip/
Abstract: Image processing has many subtopics, including image acquisition, representation, enhancement, restoration, analysis, and compression. Various analytical tools and algorithms have traditionally been developed for these subtopics. These tools and algorithms are “transparent” in that they can be clearly described by mathematical formulae or logical reasoning. On the other hand, deep learning based on big data has proven to be a strong, effective tool for image processing. It provides excellent performance in many image processing tasks and keeps surpassing its own results with deeper networks and bigger data. However, these deep learning techniques are rather “opaque” because they focus on end-to-end training, and the intermediate states of the networks are unexplainable in many cases. This success of data-oriented deep learning, however, has the undesirable side effect of suppressing new ideas for analytical tools; an interesting idea may not be sufficiently tested and improved simply because its initial result is experimentally inferior to so-called state-of-the-art deep learning techniques, which demand immensely more computational resources.
Hence, this workshop intends to provide a meeting place for presenting and discussing analytical image processing tools, especially transform, decomposition, and clustering tools. However, this workshop is open to other image processing papers as well, provided that they include new ideas for analytical tools, i.e., transparent parts.
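As a concrete example of such a transparent tool, the 2D discrete cosine transform is given in closed form, and every intermediate coefficient can be inspected. The sketch below (SciPy is an illustrative choice) keeps only the largest-magnitude DCT coefficients of an image block and reconstructs an approximation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_approximate(block, keep=10):
    """Keep the `keep` largest-magnitude DCT coefficients of a block.
    Every step is a closed-form, inspectable operation."""
    coeffs = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm="ortho")

block = np.random.rand(8, 8)       # stand-in for an 8x8 image block
approx = dct_approximate(block)    # reconstruction from 10 coefficients
```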
This workshop and another ICIP 2025 workshop titled “Learning Beyond Deep Learning (LBDL)” are sister workshops. TIP covers learning- and non-learning-based image processing algorithms with an emphasis on algorithmic transparency. In contrast, LBDL focuses on learning-based models that deviate from deep learning in part or in whole.
Workshop 3: Third IEEE Workshop on Coding for Machines
Organizers: Changsheng Gao, Nanyang Technological University, Singapore, Ying Liu, Santa Clara University, USA, Heming Sun, Yokohama National University, Japan, Hyomin Choi, InterDigital, USA, Fengqing Maggie Zhu, Purdue University, USA, Ivan V. Bajić, Simon Fraser University, Canada
Date and Time: TBA
Location: TBD
Website: https://www.ieeecfm.org/home
Abstract: Multimedia signals have traditionally been processed for human consumption, but with the rise of machine-to-machine (M2M) communication, a shift toward efficient machine-based analysis is necessary. Applications such as autonomous navigation, surveillance, and smart infrastructure require new approaches to compression and processing. Standardization efforts, including MPEG VCM, MPEG FCM, and JPEG AI, aim to optimize coding for machines, achieving significant bit savings while maintaining inference accuracy. However, challenges remain, including trade-offs in coding efficiency, multi-task compatibility, and security. This workshop brings together researchers from academia and industry to explore cutting-edge theories, methods, and applications in coding for machines.
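To make the central trade-off concrete, learned coding for machines is commonly formulated as a rate-distortion optimization in which the distortion term is the downstream task loss rather than pixel fidelity. The following is a minimal sketch of such a training objective; the entropy-model interface and the lambda value are illustrative assumptions, not a specification of any of the standards named above.

```python
import torch
import torch.nn as nn

class MachineCodingObjective(nn.Module):
    """L = R + lambda * D_task: trade bitrate against downstream inference accuracy."""
    def __init__(self, lam=0.01):
        super().__init__()
        self.lam = lam

    def forward(self, likelihoods, task_loss):
        # Estimated rate in bits: -sum(log2 p(y_hat)) under a learned entropy model
        rate = -torch.log2(likelihoods).sum()
        return rate + self.lam * task_loss
```

Sweeping lambda traces out the balance between bit savings and inference accuracy discussed above.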
Workshop 4: First International Workshop on “Real-Time Implementation and Lightweight GNNs for Conventional and Event-based Cameras”
Organizers: Thierry Bouwmans, Associate Professor (HDR), Laboratoire MIA, La Rochelle Université, France, Tomaz Kryjak, Associate Professor, AGH University, Krakow, Poland, Mohamed Shehata, Professor, University of British Columbia, Okanagan Campus, Canada, Ananda S. Chowdhury, Professor, Jadavpur University, Kolkata, India, Badri N. Subudhi, Associate Professor, Indian Institute of Technology, Jammu, India
Date and Time: TBA
Location: TBD
Website: https://sites.google.com/view/rt-gnns-2025/accueil
Abstract: Object classification and detection from a video stream captured by conventional or event-based cameras is a fundamental step in applications such as visual surveillance of human activities, observation of animal and insect behaviors, human-machine interaction, and all kinds of advanced mobile robotics perception systems. A large number of graph neural networks (GNNs) for the detection and classification of moving objects have been published, outperforming conventional deep learning approaches. Many scientific efforts reported in the literature have progressively improved these methods in applications whose challenges are becoming ever more complex. Yet no algorithm is able to simultaneously address all the key challenges present in long video sequences, as encountered in real cases.
The top background subtraction methods currently compared on CDnet 2014 are based on deep convolutional neural networks. Their main drawbacks, however, are their computational and memory requirements, as well as their supervised nature, which requires labeling a large amount of data. In addition, their performance decreases significantly on unseen videos. Thus, the current top algorithms are not practicable in real applications despite their high moving object detection performance.
In recent years, GNNs have also been increasingly used in object detection, object tracking, and mobile robot navigation. Their ability to model spatial and temporal dependencies makes them well suited for these applications, especially in dynamic environments, where relationships between objects and scene elements must be continuously updated. However, real-time deployment of GNN-based solutions remains a challenge, as they often require significant computational resources, limiting their practicality in embedded and resource-constrained environments. To date, only a few works have addressed real-time and lightweight GNN algorithms.
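For readers unfamiliar with event-based graph pipelines, one common way to feed event data to a GNN is to treat each event (x, y, t, polarity) as a node and connect spatio-temporally nearby events; the radius and time scaling below are illustrative assumptions, not the method of any particular paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def events_to_graph(events, radius=5.0, t_scale=1e3):
    """events: (N, 4) array of (x, y, t, polarity) tuples.
    Nodes are events; edges link pairs within a spatio-temporal radius."""
    # Scale time so temporal distance is commensurate with pixel distance
    coords = np.column_stack([events[:, 0], events[:, 1], events[:, 2] * t_scale])
    edges = cKDTree(coords).query_pairs(r=radius, output_type="ndarray")  # (E, 2)
    node_features = events[:, 3:4]  # polarity as the initial node feature
    return node_features, edges
```

Keeping such graphs sparse is one of the main levers for the lightweight, real-time GNNs this workshop targets.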
Workshop 5: Bridging the Gap: Advanced Data Processing for Natural Disaster Management – Integrating Visual and Non-Visual Insights
Organizers: Dr. Dmitriy Shutin, German Aerospace Center (DLR), Dmitriy.shutin@dlr.de, Dr. Vasileios Mygdalis, Department of Informatics, Aristotle University of Thessaloniki, Greece, mygdalisv@csd.auth.gr
Date and Time: TBA
Location: TBD
Website: https://icarus.csd.auth.gr/cfp-brigding-icip25-workshop/
Abstract: In recent years, the world has witnessed an increasing frequency and intensity of natural disasters, with unprecedented floods and intense forest fires causing devastating impacts on communities, ecosystems, and infrastructure. Our ability to respond effectively to such events depends strongly, among other things, on timely and accurate processing of diverse sources of data, ranging from satellite imagery and camera-equipped drones to in-situ sensor networks and weather models. The use of imagery from satellite and airborne platforms, along with well-established classical and AI-based image processing methods, has long been part of the standard natural disaster management toolkit. Nonetheless, a significant gap remains between the processing of visual data (such as images and videos of events) and its non-visual counterparts, exemplified by meteorological data, smoke models based on chemical sensor readings, and social media reports. Bridging this gap is essential for improving disaster management strategies and ensuring rapid, well-informed decision-making. This workshop aims to explore innovative techniques for integrating visual and non-visual data sources in natural disaster management. Contributions are encouraged in the following areas:
- Joint image/video processing and computational fluid dynamics for local weather modeling
- Modeling smoke propagation around fires constrained with visual information clues
- Fire nest detection through visual and robotic olfaction methods for smoke source localization
- Numerical flood simulation models constrained by satellite or drone imagery
- Joint analysis of social media posts and visual/in-situ information
While the focus is on forest fires and floods, other extreme events are also relevant. The workshop seeks to bring together researchers from various technical fields and IEEE societies to address data processing challenges in natural disaster scenarios. By investigating cutting-edge cross-domain approaches, participants will gain novel insights into methods for predicting, monitoring, and mitigating the impacts of disasters, improving situational awareness, and enabling faster, more accurate crisis management.
Workshop 6: Innovative Approaches in Image and Signal Processing for Autonomous Vehicles: Integrating Incremental Learning and Explainable AI
Organizers: Dr. Lucio Marcenaro, Associate Professor, Department of Electrical, Electronic, Telecommunications Engineering and Naval Architecture (DITEN), University of Genoa, Italy, Dr. Maheshkumar H. Kolekar, Associate Professor, Electrical Engineering Department, IIT Patna, India, Dr. Marcus Greiff, Research Scientist, Human Interactive Driving, Toyota Research Institute, Los Altos, California, USA
Date and Time: TBA
Location: TBD
Website: https://sites.google.com/view/ispav25
Abstract: Explainable AI (xAI), incremental learning, and sophisticated image and signal processing are essential for enhancing the performance, reliability, and safety of autonomous vehicles (AVs) as they develop further. Conventional AV systems frequently use static models, which might not be able to effectively adjust to novel surroundings or deal with the dynamic nature of actual driving situations. Additionally, to maintain transparency and confidence in the decision-making processes, AI models in autonomous cars increasingly need to be interpretable. The purpose of this workshop is to bring together engineers, researchers, and business experts to examine how explainable AI and incremental learning may advance image and signal processing methods for autonomous vehicles.
The workshop will cover a wide range of subjects, including autonomous systems engineering, robotics, AI ethics, computer vision, and machine learning. Each of these areas is a rapidly evolving field with its own set of challenges, research questions, and technological innovations. A dedicated workshop makes possible focused conversations and a deeper look into the particular subtleties and emerging trends within each of these areas, discussions that might not receive the same degree of attention at the general ICIP conference. Incremental learning, particularly in the context of autonomous vehicles, is a niche subject that involves the development of algorithms capable of adapting to new data streams without requiring complete retraining of the model.
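As a minimal illustration of that idea, scikit-learn's partial_fit interface updates a model on successive mini-batches without retraining from scratch; the classifier choice and the synthetic stream below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # a linear model that supports incremental updates
classes = np.array([0, 1])            # all classes must be declared on the first call

rng = np.random.default_rng(0)
for _ in range(10):                   # simulate a stream of sensor-feature batches
    X_batch = rng.normal(size=(32, 8))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)  # update, no full retrain
```

Real AV pipelines would of course operate on learned image or signal features and must also guard against catastrophic forgetting, a well-known difficulty of incremental learning.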
Workshop 7: Generative AI for Forensics and Security Applications
Organizers: Dr. Maheshkumar H. Kolekar, Associate Professor, Electrical Engineering Department, IIT Patna, mahesh@iitp.ac.in, Dr. Lucio Marcenaro, Associate Professor, Department of Electrical, Electronic, Telecommunications Engineering and Naval Architecture (DITEN), University of Genoa, lucio.marcenaro@unige.it
Date and Time: TBA
Location: TBD
Website: https://sites.google.com/view/genai-fs-2025
Abstract: The advent of generative AI is transforming the fields of forensics and security by enabling novel capabilities in data synthesis, anomaly detection, and enhanced image analysis. This workshop on “Generative AI for Forensics & Security Applications” seeks to bring together researchers, practitioners, and policymakers to explore cutting-edge advancements and address critical challenges. The workshop will focus on using generative models, such as GANs, diffusion models, and transformer-based architectures, to improve forensic evidence generation, counter deepfakes, and bolster security systems through robust and adaptive synthetic data generation.
This workshop is essential because it tackles domain-specific challenges in security and forensic imaging. By focusing on security imaging applications, it complements the conference’s theme, “Imaging in the Age of Generative AI,” by emphasizing real-world impacts and ethical implications in high-stakes environments. This dedicated venue will generate momentum by promoting research into deepfake detection, adversarial robustness, privacy-preserving video analytics, and secure data synthesis for criminal investigations. The workshop will feature a balanced composition of peer-reviewed papers, keynote presentations by prominent experts in generative AI and security, and a panel discussion.
Topics of interest include, but are not limited to:
- Synthetic Data Generation for Security and Forensic Applications
- Deepfake Detection and Mitigation in Forensic Investigations
- Privacy-Preserving Generative Models for Surveillance Systems
- Adversarial Attacks and Defenses in Generative Security Systems
- AI-Generated Evidence and Legal Implications in Digital Forensics
- Real-Time Anomaly Detection Using Generative AI in Security Monitoring
- Biometric Synthesis and Recognition for Secure Authentication
- Ethical Considerations and Bias Mitigation in AI-Driven Forensics
- Generative AI for Scene Reconstruction and Predictive Analysis
- Generative AI in IoT and Edge-Based Security Applications
Workshop 8: Point Cloud Compression – Advances in Technology Development
Organizers: Zhan Ma, Nanjing University, mazhan@nju.edu.cn, Dandan Ding, Hangzhou Normal University, DandanDing@hznu.edu.cn, Zhu Li, University of Missouri, zhu.li@ieee.org
Date and Time: TBA
Location: TBD
Website: https://pcc-icip2025.github.io/
Abstract: Recent advances in sensor technologies and algorithms, e.g., LiDAR and radar systems and ultra-high-resolution camera arrays, have facilitated point cloud acquisition and processing in vast applications such as autonomous machinery and immersive communication. Because point cloud data often comprise an enormous number of irregular, unstructured points in 3D space, efficient point cloud compression is highly desirable for enabling these services, especially in networked applications. In this workshop, we will review the latest advances in point cloud compression, including coding tool development, (in-loop or post-)processing filters, quality assessment, and standardization progress.
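As a toy illustration of the structuring step that most geometry codecs perform first, the sketch below quantizes raw 3D points onto a voxel grid and removes duplicates, trading geometric precision for a representation that is far easier to compress; the grid pitch is an illustrative assumption.

```python
import numpy as np

def voxelize(points, pitch=0.05):
    """Quantize (N, 3) points to a voxel grid and deduplicate.
    A coarser pitch means fewer unique voxels (lower rate, higher distortion)."""
    voxels = np.unique(np.floor(points / pitch).astype(np.int64), axis=0)
    return (voxels + 0.5) * pitch      # reconstruct each voxel at its center

pts = np.random.rand(100000, 3)        # stand-in for a captured point cloud
rec = voxelize(pts)
print(len(pts), "->", len(rec), "points after quantization")
```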
Workshop 9: Computer Vision for Ecological and Biodiversity Monitoring (CV-EBM)
Organizers: University of Lincoln, UK: Dr James Brown, Associate Professor in Computer Science (He/Him); Dr Petra Bosilj, Assistant Professor in Computer Science (She/Her); Dr Wenting Duan, Assistant Professor in Computer Science; Dr Lan Qie, Assistant Professor in Ecology and Conservation; Dr Hongrui Shi, Postdoctoral Research Associate; Mx Villanelle O’Reilly, PhD Candidate (They/Them). University of Oxford, UK: Dr Katrina J Davis, Associate Professor in Conservation Biology; Dr Rob Salguero-Gómez, Associate Professor in Ecology; Prof Ben Sheldon, Professor of Ornithology; Prof Graham Taylor, Professor of Mathematical Biology; Dr Georgios Voulgaris, Postdoctoral Researcher in Deep Learning and Spatial Ecology
Date and Time: TBA
Location: TBD
Website: https://cvebm.blogs.lincoln.ac.uk/
Abstract: We are facing a global environmental crisis caused by anthropogenic climate change and the destruction of habitats and ecosystems due to agriculture and urbanization. To understand the extent of this impact and environmental responses to intervention, it is necessary to monitor ecosystems through the collection and identification of patterns in various data, including remote sensing imagery, ground-level imagery, and video.
This workshop will bring together leading experts from both the computational and ecological research communities to share the latest innovations, datasets, and applications in a focused forum with two keynote speakers alongside regular paper submissions.
Workshop 10: Learning Beyond Deep Learning (LBDL)
Organizers: C.-C. Jay Kuo, University of Southern California, USA, email: jckuo@usc.edu, Ling Guan, Toronto Metropolitan University (formerly Ryerson University), Canada, email: lguan@ee.ryerson.ca
Date and Time: TBA
Location: TBD
Website: https://mcl.usc.edu/learning-beyond-deep-learning-lbdl/
Abstract: Artificial intelligence and machine learning technologies have developed rapidly over the last decade. Although deep learning networks have significantly impacted application domains such as computer vision, natural language processing, autonomous driving, and robot navigation, they have several limitations. They are mathematically intractable, vulnerable to adversarial attacks, and demand a lot of training data. Furthermore, their training is computationally intensive, and their large model sizes make deployment on mobile and edge devices a significant challenge. Developing new machine learning paradigms beyond deep learning is therefore highly desirable. We intend to use this workshop to attract researchers with common interests and generate momentum for future breakthroughs. The new learning paradigms will feature one or more of the following characteristics: interpretability, smaller model sizes, lower computational complexity, and high performance.
This workshop and another workshop titled “Transparent Image Processing (TIP)” of ICIP 2025 are sister workshops. LBDL focuses on learning-based models that deviate from deep learning in part or in whole. In contrast, TIP covers learning- and non-learning-based image processing algorithms with an emphasis on algorithmic transparency.
Technical Program Committee Members
- Lei Gao, Toronto Metropolitan University, Canada
- Dongwoo Kang, Hongik University, Korea
- Jewon Kang, Ewha Womans University, Korea
- Ming-Sui Lee, National Taiwan University, Taiwan
- Jianquan Liu, NEC Corporation, Japan
- Xiaofeng Liu, Yale University, USA
- Bojan Mihaljevic, Universidad Politécnica de Madrid, Spain
- Paisarn Muneesawang, Mahidol University, Thailand
- Witold Pedrycz, University of Alberta, Canada
- Simon Pun, Chinese University of Hong Kong (Shenzhen), China
- Yuzhuo Ren, Nvidia, USA
- Xinchao Wang, National University of Singapore, Singapore
- Harry Yang, Hong Kong University of Science and Technology, Hong Kong
- Niclas Zeller, Hochschule Karlsruhe University of Applied Sciences, Germany
Workshop 11: Advanced Research on Online Evolutive Learning for Image Processing
Organizers: Liang Song, Fudan University, China, songl@fudan.edu.cn, Kostas N. Plataniotis, University of Toronto, Canada, kostas@ece.utoronto.ca, Yang Liu, Fudan University, China, yang_liu20@fudan.edu.cn, Jiangchuan Liu, Simon Fraser University, Canada, jcliu@sfu.ca, Victor C. M. Leung, University of British Columbia, Canada, VLeung@ece.ubc.ca
Date and Time: TBA
Location: TBD
Website: https://sites.google.com/view/oel-ipworkshop/
Abstract: Online Evolutive Learning (OEL), a paradigm-shifting methodology that enables autonomous, dynamic model optimization through multi-agent collaboration and environmental adaptation, is reshaping the field of image processing. By integrating theoretical advancements with real-world deployment workflows, spanning communication protocols, edge-device constraints, and real-time adaptation, the workshop aims to establish OEL as a foundational framework for next-generation vision technologies. Emerging applications in collaborative perception, edge-cloud visual analytics, multimodal LLMs, and generative AI highlight OEL’s transformative potential for real-time vision systems. The workshop will bridge critical gaps between conventional image processing and evolving OEL architectures (e.g., end-cloud coordination, perception-control integration), addressing challenges in lifelong learning, heterogeneous data fusion, and resource-efficient deployment. It promotes standardized evaluation frameworks for OEL-driven systems while fostering ethical development through transparent data practices.
Workshop 12: Artificial Intelligence in Precision, Autonomous and Sustainable Agriculture (AIPASA)
Organizers: Jie Liu, Harbin Institute of Technology, China (jieliu@hit.edu.cn), Wen Hu, UNSW, Australia (wen.hu@unsw.edu.au), Feng Zhao, Northeast Forestry University, China (fzhao@nefu.edu.cn), Rongqiang Zhao, Harbin Institute of Technology (zhaorq@hit.edu.cn)
Date and Time: TBA
Location: TBD
Website: https://aipasa25hotcrp.cse.unsw.edu.au/
Abstract: Image and video data are pivotal in emerging smart agriculture. The workshop aims to bring together researchers and practitioners working in the areas of artificial intelligence and data science for precision, autonomous, and sustainable agriculture, broadly defined. Agriculture, facing the challenges of climate change, an aging labor force, and soil degradation, presents great research opportunities for AI and data science. Remote sensing, hyperspectral imaging, and autonomous driving and operations in agriculture also provide new content and rich scenarios for image and video processing.
Topics of interest include, but are not limited to:
- Image and video processing in agricultural applications
- Remote sensing and information processing for agriculture
- Multispectral and hyperspectral image/video processing for agriculture
- LLM and multi-modal large models for agriculture
- Self-driving farm vehicles and precision operations
- Computer vision and navigation techniques for farm robots
- Case studies of AI and data science for smart farms
Workshop 13: 2nd Integrating Image Processing with Large-Scale Language/Vision Models for Advanced Visual Understanding (LVLM)
Organizers: Yong Man Ro, KAIST, South Korea, Hak Gu Kim, Chung-Ang University, South Korea, Wen-Huang Cheng, National Taiwan University, Taiwan
Date and Time: TBA
Location: TBD
Website: https://carai.kaist.ac.kr/lvlm
Abstract: This workshop aims to bridge the gap between conventional image processing techniques and the latest advancements in large-scale vision and language models. Recent developments in large-scale models have revolutionized image processing tasks, significantly enhancing capabilities in visual object understanding, image classification, and generative image synthesis. Furthermore, these large-scale models have opened new avenues for human-machine multimodal interactive dialogue systems, where the synergy between visual and linguistic processing enables more intuitive and dynamic interactions.
This workshop will provide a platform for researchers and practitioners to explore how cutting-edge large-scale models integrate with image processing methods and foster innovation across diverse applications. Discussions will extend beyond conventional tasks to address the role of vision-language models in Generative AI and their use in multimodal systems, such as virtual assistants that interact seamlessly using images, text, and speech.
Workshop 14: Time-Resolved Computational Imaging
Organizers: Miguel Heredia Conde (https://ihct.uni-wuppertal.de/en/team/detail/heredia-conde) and Peter Vouras (https://sagroups.ieee.org/sps-sasc/)
Date and Time: TBA
Location: TBD
Website: https://sagroups.ieee.org/sps-sasc/icip-2025-workshop-on-time-resolved-computational-imaging/
Abstract: The ICIP 2025 workshop “Time-Resolved Computational Imaging” focuses on new techniques that leverage high-resolution measurements of time delay to produce images of exceptional quality and information content. For example, these images may provide depth and 3-dimensional views of objects in a factory that a robot can interpret; the short sketch after the topic list below illustrates the basic depth-from-delay relation. Engineers and scientists who attend this workshop will be better positioned to contribute to this emerging field and to leverage time-resolved imaging technology in their research. Topics of particular interest include:
- Ultrafast “light-in-flight” imaging
- Non-line-of-sight (NLOS) imaging
- Computational radar imaging (SAR, ISAR)
- Time-of-flight and 3D imaging
- Time-resolved hyperspectral imaging
- Time-resolved medical imaging
- Computational acoustic imaging systems, including microphone arrays and SONAR systems
- Photoacoustic imaging
- FLIM and computational time-resolved microscopy
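As referenced above, the relation underlying several of these modalities is that a measured round-trip delay maps directly to distance through the propagation speed. A minimal sketch for the pulse-based time-of-flight case (a deliberate simplification of practical ToF systems) is:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_s):
    """Pulse-based time-of-flight: the signal travels out and back,
    so depth is half the round-trip distance."""
    return C * round_trip_s / 2.0

print(tof_depth(10e-9))  # a 10 ns round trip corresponds to ~1.5 m
```

The same relation, with the speed of sound substituted for C, underlies the acoustic modalities listed above.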
Workshop 15: Generative AI for World Simulations and Communications & Celebrating 40 Years of Excellence in Education: Honoring Professor Aggelos Katsaggelos
Organizers: Haohong Wang, General Manager, TCL Research America, USA (haohongwang@gmail.com), Sotirios A. Tsaftaris, Professor, University of Edinburgh, UK (S.Tsaftaris@ed.ac.uk), Maggie Zhu, Associate Professor, Purdue University, USA (zhu0@purdue.edu), Joon Ki Paik, Professor, Chung-Ang University, Korea (paikj@cau.ac.kr), Andrew Segall, Head of Video Coding Standards, Amazon, USA (andrew@andrewsegall.com), Zhu Li, Professor, University of Missouri, Kansas City, USA (lizhu@umkc.edu)
Date and Time: TBA
Location: TBD
Website:
Abstract: The rapid advancement of Generative AI (GenAI) is driving transformative changes across a variety of fields within the general scope of world simulations and communications, such as film production, gaming, social media, training and education, virtual and augmented reality, customer service, immersive human-computer interaction, and home entertainment. However, many challenges and limitations remain in today’s GenAI advances: hallucination in text generation, inconsistency in video generation, camera control and directability in AI storytelling, quality and precision in 3D world acquisition, the realistic look of generated human facial details, natural body mechanics simulation, natural simulation of human interactions, and understanding of physical norms in human interactions with the environment, to name just a few.
This workshop aims to foster breakthroughs and address critical problems of GenAI in various industries and applications that are relevant to world simulations and communications. We welcome submissions that examine the technical challenges and opportunities of GenAI, as well as the ethical and legal implications of using this technology.
The topics of this workshop include, but are not limited to:
- Advances in generative AI models and representations, such as Diffusion models, neural fields, Gaussian splatting, etc.
- 3D world simulations from multi-modality, such as text, image, audio, video and graphics
- World communications via multi-modality, such as text, image, audio, video and graphics
- GenAI based multi-modal human interactions and communications
- GenAI based content representation, coding and communications
- Advances in GenAI Applications relevant to world simulations and communications
- Evaluation metrics and benchmarks for generative models
- Regulatory and policy frameworks relevant to generative AI for world simulations
- Economic impacts and business models enabled by generative AI for world simulations
Workshop 16: Optimizing Deep Learning Architectures for Advanced Hyperspectral Data and Spectral Analysis
Organizers: Dr. Emanuela Marasco, George Mason University, Dr. Thirimachos Bourlai, University of Georgia’s School of Electrical and Computer Engineering
Date and Time: TBA
Location: TBD
Website: https://dl-hsa.com/
Abstract: Hyperspectral imaging has undergone remarkable advancements in recent years, shifting from labor-intensive, time-consuming processing methods to efficient, real-time analysis techniques. This workshop focuses on leveraging deep learning to tackle the unique challenges posed by hyperspectral imaging, including its inherent spectral complexity, high dimensionality, and the critical task of preserving spectral band integrity, an aspect often overlooked in conventional methods. The workshop will explore intelligent algorithms for automated data interpretation, advanced data fusion techniques for multi-source integration, and strategies to enhance spectral continuity in model outputs. Topics such as transfer learning, automated image classification, and segmentation will also be highlighted, showcasing their role in advancing hyperspectral imaging capabilities. This workshop aims to foster innovation and collaboration by bringing together leading researchers and practitioners advancing hyperspectral imaging and applied AI across diverse sectors such as healthcare, environmental monitoring, agriculture, public safety, forensic sciences, and defense.
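One simple way to quantify the spectral-integrity concern raised above is the spectral angle between an original and a reconstructed per-pixel spectrum; the metric choice and the synthetic cube below are illustrative assumptions, not workshop requirements.

```python
import numpy as np

def spectral_angle(ref, est, eps=1e-12):
    """Angle (radians) between spectra along the band axis; 0 means the
    spectral shape is preserved regardless of overall brightness."""
    num = np.sum(ref * est, axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0))

ref = np.random.rand(64, 64, 200)                # H x W x bands stand-in cube
est = ref + 0.01 * np.random.randn(64, 64, 200)  # perturbed reconstruction
print(spectral_angle(ref, est).mean())
```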
Topics
- Intelligent Algorithms for Automated Hyperspectral Data Analysis
- Optimizing Deep Learning Architectures for Hyperspectral Imaging
- Spectral Continuity Preservation in Deep Learning Models
- Data Fusion and Multi-Source Hyperspectral Analysis
- Transfer Learning for Hyperspectral Image Analysis
- Automated Hyperspectral Image Classification and Segmentation