Industry Keynote Speeches
We are honored and delighted to host the following IEEE ICIP 2025 Keynote Speeches:
Why Is Visual Intelligence Still Limited in the Age of AI?
Dr. Charbel Rizk
Bio: Dr. Rizk is the Founder and CEO of Oculi, leading the revolution in Vision AI at the edge. He was previously an Associate Research Professor at Johns Hopkins ECE and Principal Professional Staff at JHU APL. He has been recognized as a top innovator, thought leader, and successful Principal Investigator / S&T manager. He has spent most of his career developing autonomous systems where vision was always the bottleneck. This experience motivated the development of a new architecture for artificial (machine/computer) vision that combines the best of both worlds: the efficiency of biology and the speed of machines. Dr. Rizk has successfully collaborated with various FFRDCs, government labs, academia, and industry partners of various sizes. He is a senior member of IEEE and AIAA, and a member of AUVSI and OSA.
Abstract: Despite the proliferation of ultra-high-resolution imaging sensors, advanced processing capabilities, and abundant memory resources, efficient visual intelligence remains elusive in today’s AI-driven environments. Human vision still surpasses computer vision systems by approximately 40,000 times in energy efficiency, and reducing machine-vision latency significantly increases system cost and complexity. Given these remarkable technological advancements, why does machine vision lag so significantly behind human visual perception? The critical difference lies in the underlying architecture. Human vision operates via a highly efficient, real-time programmable vision sensor (the eye), coupled with a dynamic processing platform (the brain) that optimally allocates computational resources based on incoming sensory data. While brain-inspired processing has been widely studied, this presentation specifically addresses innovations in the sensor itself.
We introduce a novel vision sensor architecture that merges biological efficiency with computational speed, centered around our patented IntelliPixel® technology. IntelliPixel integrates processing and memory at the pixel level, enabling programmable and parallel processing. This architecture significantly reduces latency and power consumption, while inherently safeguarding privacy by processing data directly at its source.
Our presentation will detail the OCULI SPU S-series, the first product family based on this architecture, showcasing applications with substantial reductions in latency and energy use, resulting in lower overall costs (bill of materials). Importantly, the IntelliPixel architecture is sensor- and modality-agnostic, compatible with CMOS manufacturing techniques, colloidal quantum dots (CQD), dual-color sensing, and various infrared detectors. It also supports time-of-flight (TOF) depth sensing, addressing significant bandwidth challenges inherent in depth-based imaging.
As the global technological landscape shifts toward greater autonomy and smart systems, effective visual intelligence at the edge becomes crucial. Achieving this requires a fundamental transition from conventional imaging sensors—originally designed for human vision—to dedicated programmable vision sensors optimized specifically for machine and AI-driven applications. Our work presents the world’s first programmable vision sensor specifically engineered for these next-generation visual intelligence needs.

Evolution of Video and Video Delivery Technologies
Yuriy Reznik
Bio: Yuriy Reznik is a Technology Fellow and Vice President of Research at Brightcove, Inc. Previously, he held engineering and management positions at InterDigital, Inc. (2011-2016), Qualcomm Inc. (2005-2011), and RealNetworks, Inc. (1998-2005). In 2008, he was a Visiting Scholar at the Information Systems Laboratory at Stanford University. Since 2001, he has also been involved in the work of the ITU-T SG16 and MPEG standards committees and has contributed to several multimedia coding and delivery standards, including ITU-T H.264 / MPEG-4 AVC, MPEG-4 ALS, ITU-T G.718, ITU-T H.265 / MPEG HEVC, and MPEG DASH.
Several technologies, standards, and products that Yuriy Reznik has helped to develop (RealAudio / RealVideo, ITU-T H.264 / MPEG-4 AVC, Zencoder, Brightcove CAE, and MPEG-DASH) have been recognized by the NATAS Technology & Engineering Emmy Awards.
Yuriy Reznik holds a Ph.D. degree in Computer Science from Kyiv University. He is a senior member of IEEE, a senior member of SPIE, and a member of the ACM, AES, and SMPTE. He is a co-author of over 150 conference and journal papers, and co-inventor of over 80 granted US patents.
Abstract: In this presentation, I will provide a historical overview of key inventions and concepts that have shaped the architecture of modern video systems. These will include the development of cameras, photography, kinetoscopes, terrestrial television, cable television, and modern Internet streaming technologies like HLS and DASH systems. Additionally, I will explore how the concept of “video” has evolved—from the photographic capture of natural scenes to content enhanced through artistic techniques, computer processing, and most recently, Generative AI (GenAI). Finally, I will discuss how shifts in the creation of visual content, along with advances in user device processing capabilities, could influence the future of video delivery technologies. This will encompass changes in content production, quality assessment metrics, encoding, video rendering, and methods of delivery.
