Corradini, B.T., Cullen, B., Gallegati, C., Marziali, S., Alessio D'Inverno, G., Bianchini, M., et al. (2026). Training dynamics of GANs through the lens of persistent homology. Neurocomputing, 661. https://doi.org/10.1016/j.neucom.2025.131976
Training dynamics of GANs through the lens of persistent homology
Barbara Toniella Corradini; Caterina Gallegati; Sara Marziali; Monica Bianchini; Franco Scarselli
2026-01-01
Abstract
Generative Adversarial Networks (GANs) aim to produce realistic samples by mapping a low-dimensional latent space with a known distribution to a high-dimensional data space, exploiting an adversarial training mechanism. However, without an effective characterisation of the generative process, such models face significant challenges in training and architecture selection. In this work, we propose a Topological Data Analysis (TDA)-based approach, using persistent homology, which can provide such a characterisation: the topological information of a data manifold is summarised by its persistence diagram, and the evolution of its topological features is tracked throughout training. Our approach is applied across multiple GAN architectures on two benchmark datasets, where we demonstrate that conventional metrics such as the Fréchet Inception Distance and intrinsic dimension estimates cannot adequately capture the quality of generated samples. Instead, our results confirm that a topological description of the generative process within GANs successfully captures training convergence and mode collapse. Finally, the layer-wise topological analysis determines the role each layer plays in the generative process, and may provide future guidance for the refinement of architectures. Code available at https://github.com/bcorrad/genfold25.git.
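The abstract's core ingredient — summarising a point cloud's topology as a persistence diagram — can be illustrated for the simplest case, H0 (connected components), without any TDA library. The following is a minimal sketch, not the paper's implementation (see the linked repository for that): in a Vietoris–Rips filtration, every H0 feature is born at scale 0 and dies at the edge weight that merges its component, so the diagram is obtained from the minimum spanning tree of the pairwise-distance graph. The function name `h0_diagram` and the two-cluster example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def h0_diagram(points):
    """Return H0 (birth, death) pairs of a point cloud under the
    Vietoris-Rips filtration, one pair per finite-lifetime component
    (Kruskal's algorithm with union-find; one component never dies)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((dist[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)        # a component dies when two merge
        if len(deaths) == n - 1:    # spanning tree complete
            break
    return [(0.0, w) for w in deaths]

# Two well-separated Gaussian clusters: the diagram shows one long-lived
# H0 feature (the gap between clusters) among many short-lived ones,
# the kind of signature the paper tracks across training epochs.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                   rng.normal(5.0, 0.1, (20, 2))])
diagram = h0_diagram(cloud)
longest_life = max(death for _, death in diagram)
```

In a mode-collapse scenario the long-lived features corresponding to separate modes would disappear from the generated samples' diagram, which is the kind of qualitative change a diagram-level comparison can detect while a single scalar score may not.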
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| 1-s2.0-S0925231225026487-main (1)_compressed.pdf | Open access | Publisher's PDF | Creative Commons | 4.47 MB | Adobe PDF |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/11365/1302995
