TY - CONF
T1 - Lossy encoding of time-aggregated neuromorphic vision sensor data based on point cloud compression
AU - Adhuran, Jayasingam
AU - Khan, Nabeel
AU - Martini, Maria G.
N1 - Note: Published in: Del Bue, Alessio, Canton, Cristian, Pont-Tuset, Jordi, and Tommasi, Tatiana (eds.) (2025) Computer Vision - ECCV 2024 Workshops: Milan, Italy, September 29-October 4, 2024, Proceedings, Part XXIV. Cham, Switzerland: Springer. ISSN 0302-9743. ISBN 9783031924590.
PY - 2024/9/29
AB - Neuromorphic vision sensors capture visual scenes by reporting only light intensity changes, in the form of spikes or events represented by their location in the (x, y) plane, a timestamp and a polarity (positive or negative change). This enables extremely high temporal resolution and high dynamic range, as well as a compact representation of visual data, while the sensors themselves operate with very limited energy requirements. Such data can be compressed further prior to transmission, e.g., in an Internet of Things scenario. We have shown in previous work that lossless compression can be achieved by appropriately representing the data as a point cloud and adopting point cloud compression. In this paper, we show that we can compress the data much further if we accept minor losses in the data representation. For this purpose, we propose a modification of a classical point cloud encoder and define quality metrics specific to this use case. Results are reported in terms of achievable compression ratios for a specific compression level and different time aggregation intervals, and in terms of spatial and temporal distortion versus bits per event, supporting coding decisions based on the trade-off between quality and bitrate.
KW - Computer science and informatics
DO - 10.1007/978-3-031-92460-6_6
M3 - Paper
T2 - Workshop on Neuromorphic Vision: Advantages and Applications of Event Cameras (NeVi 2024)
Y2 - 29 September 2024
ER -
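
For readers unfamiliar with the representation the abstract describes, the sketch below shows one plausible way to turn a stream of events (x, y, timestamp, polarity) into a time-aggregated point cloud that an octree-based point cloud encoder could consume. This is an illustrative assumption, not code from the paper: the events_to_point_cloud function name, the (N, 4) event array layout, the aggregation interval and the sensor resolution are all hypothetical.

    # Illustrative sketch (not from the cited paper): time-aggregating
    # neuromorphic events into a point cloud.
    import numpy as np

    def events_to_point_cloud(events: np.ndarray, interval_us: float):
        """Map events to 3D points (x, y, time-bin index), keeping
        polarity as a per-point attribute.

        events: (N, 4) array with columns x, y, timestamp (us), polarity.
        interval_us: time aggregation interval in microseconds (assumed).
        """
        x, y, t, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
        # Quantise timestamps into aggregation bins; every event in a bin
        # shares one z coordinate, which makes the resulting cloud regular
        # enough for an octree-based codec to exploit.
        z = np.floor((t - t.min()) / interval_us)
        points = np.stack([x, y, z], axis=1)
        # Events that collapse onto the same (x, y, bin) voxel become a
        # single point; in this sketch that is already a mild lossy step.
        points, idx = np.unique(points, axis=0, return_index=True)
        return points, p[idx]

    # Usage with synthetic events on a 346x260 grid (e.g. a DAVIS346 sensor).
    rng = np.random.default_rng(0)
    ev = np.column_stack([
        rng.integers(0, 346, 1000),          # x
        rng.integers(0, 260, 1000),          # y
        np.sort(rng.uniform(0, 1e5, 1000)),  # timestamps in microseconds
        rng.integers(0, 2, 1000),            # polarity (0 or 1)
    ])
    pts, pol = events_to_point_cloud(ev, interval_us=1000.0)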