
Abrardo, A., Barni, M., & Magli, E. (2011). Low-complexity predictive lossy compression of hyperspectral and ultraspectral images. In Proceedings of ICASSP 2011, IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 797-800). New York: IEEE. doi: 10.1109/ICASSP.2011.5946524.

Low-complexity predictive lossy compression of hyperspectral and ultraspectral images

Abrardo, Andrea; Barni, Mauro
2011

Abstract

Lossy compression of hyperspectral and ultraspectral images is traditionally performed using 3D transform coding. This approach yields good performance, but its complexity and memory requirements are unsuitable for onboard compression. In this paper we propose a low-complexity lossy compression scheme based on prediction, uniform-threshold quantization, and rate-distortion optimization. Its performance is competitive with that of state-of-the-art 3D transform coding schemes, but at dramatically lower complexity. The algorithm limits the scope of errors and is amenable to parallel implementation, making it suitable for onboard compression at high throughputs.
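The core ingredients named in the abstract (inter-band prediction followed by uniform-threshold quantization of the residual, which bounds the per-pixel reconstruction error) can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's exact algorithm: the previous spectral band is used as a stand-in predictor, and rate-distortion optimization and entropy coding are omitted.

```python
import numpy as np

def quantize_band(band, ref_band, delta):
    """Hypothetical simplification: predict each pixel with the co-located
    pixel of the previous band, then quantize the residual with a uniform
    quantizer of step 2*delta + 1. This guarantees a per-pixel absolute
    reconstruction error of at most delta (near-lossless coding)."""
    step = 2 * delta + 1
    residual = band.astype(np.int64) - ref_band.astype(np.int64)
    # Round each residual to the nearest multiple of the quantization step.
    return np.round(residual / step).astype(np.int64)

def reconstruct_band(indices, ref_band, delta):
    """Invert the quantizer: add the dequantized residual to the prediction."""
    step = 2 * delta + 1
    return ref_band.astype(np.int64) + indices * step

# Illustrative use on synthetic 12-bit data.
rng = np.random.default_rng(0)
ref = rng.integers(0, 4096, size=(8, 8))
band = rng.integers(0, 4096, size=(8, 8))
idx = quantize_band(band, ref, delta=4)
rec = reconstruct_band(idx, ref, delta=4)
```

Because each pixel is coded from its own prediction residual, a corrupted index affects only that pixel's reconstruction, and disjoint image regions can be quantized in parallel, which is consistent with the error-containment and throughput claims of the abstract.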
ISBN: 978-1-4577-0539-7
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11365/4954
Warning: the data shown have not been validated by the university.