A Two-Stage GAN for High-Resolution Retinal Image Generation and Segmentation

Paolo Andreini; Giorgio Ciano; Simone Bonechi; Caterina Graziani; Veronica Lachi; Alessandro Mecocci; Franco Scarselli; Monica Bianchini
2022-01-01

Abstract

In this paper, we use Generative Adversarial Networks (GANs) to synthesize high-quality retinal images, along with the corresponding semantic label-maps, to be used in place of real images when training a segmentation network. Unlike previous proposals, we employ a two-step approach: first, a progressively growing GAN is trained to generate semantic label-maps, which describe the blood vessel structure (i.e., the vasculature); second, an image-to-image translation approach is used to obtain realistic retinal images from the generated vasculature. Adopting a two-stage process simplifies the generation task, so that the network training requires fewer images and, consequently, less memory. Moreover, learning is effective: with only a handful of training samples, our approach generates realistic high-resolution images, which can be successfully used to enlarge small available datasets. Comparable results were obtained by employing only synthetic images in place of real data during training. The practical viability of the proposed approach was demonstrated on two well-established benchmarks for retinal vessel segmentation, both containing a very small number of training samples, achieving better performance than state-of-the-art techniques.
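
To make the two-stage pipeline described in the abstract concrete, the following minimal PyTorch sketch shows how synthetic (image, label-map) pairs could be produced and then paired for segmentation training. The LabelMapGenerator and LabelToImageTranslator modules are hypothetical placeholders standing in for the paper's progressively growing GAN (stage 1) and image-to-image translation network (stage 2); adversarial training is omitted, so this is an illustrative sketch under those assumptions, not the authors' implementation.

# Minimal sketch of the two-stage synthesis pipeline described in the abstract.
# The tiny networks below are placeholders, not the paper's architectures
# (a progressively growing GAN and an image-to-image translation network).
import torch
import torch.nn as nn

class LabelMapGenerator(nn.Module):
    """Stage 1 (placeholder): maps a latent vector to a vessel label-map."""
    def __init__(self, latent_dim=128, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, size * size),
            nn.Sigmoid(),  # per-pixel probability of "vessel"
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, self.size, self.size)

class LabelToImageTranslator(nn.Module):
    """Stage 2 (placeholder): turns a vessel label-map into an RGB retinal image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, label_map):
        return self.net(label_map)

# Once both stages are trained (adversarial losses omitted here), synthetic
# (image, label-map) pairs can replace real data when training a
# vessel-segmentation network.
stage1 = LabelMapGenerator()
stage2 = LabelToImageTranslator()
z = torch.randn(4, 128)                      # batch of latent codes
fake_labels = stage1(z)                      # synthetic vasculature maps
fake_images = stage2(fake_labels)            # corresponding retinal images
print(fake_images.shape, fake_labels.shape)  # (4, 3, 64, 64), (4, 1, 64, 64)

In this sketch the two stages are chained only at sampling time; the abstract's key point is that splitting label-map generation from image synthesis makes each generative task simpler, so each network can be trained with few images and limited memory.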
Andreini, P., Ciano, G., Bonechi, S., Graziani, C., Lachi, V., Mecocci, A., et al. (2022). A Two-Stage GAN for High-Resolution Retinal Image Generation and Segmentation. Electronics, 11(1). https://doi.org/10.3390/electronics11010060
Files for this item:

File: electronics-11-00060-v2.pdf
Access: open access
Type: publisher's PDF
License: Creative Commons
Size: 409.3 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1175487