
Zugarini, A., Pasqualini, L., Melacci, S., & Maggini, M. (2021). Generate and Revise: Reinforcement Learning in Neural Poetry. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-8). New York: Institute of Electrical and Electronics Engineers Inc. doi: 10.1109/IJCNN52387.2021.9533573.

Generate and Revise: Reinforcement Learning in Neural Poetry

Zugarini, A.; Pasqualini, L.; Melacci, S.; Maggini, M.
2021-01-01

Abstract

Writers, poets, and singers usually do not create their compositions in a single breath. Text is revisited, adjusted, modified, and rephrased, often multiple times, to better convey the meanings, emotions, and feelings that the author wants to express. Among the written arts, poetry is probably the one that requires the most elaboration, since the composition must formally respect predefined meter and rhyming schemes. In this paper, we propose a framework to generate poems that are repeatedly revisited and corrected, as humans do, in order to improve their overall quality. We frame the problem of revising poems in the context of Reinforcement Learning and, in particular, solve it with Proximal Policy Optimization. Our model generates poems from scratch and learns to progressively adjust the generated text to match a target criterion. We evaluate this approach on the task of matching a rhyming scheme, without any information about which words are responsible for creating rhymes or how to coherently alter the poem's words. The proposed framework is general and, with appropriate reward shaping, can be applied to other text generation problems.
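To make the reward-shaping idea concrete, the sketch below shows one possible reward for the rhyme-matching task the abstract describes: score a stanza by how many groups of a target scheme (e.g. "ABAB") actually rhyme. This is a minimal illustration, not the paper's actual reward function; the name `rhyme_reward` and the crude two-character-suffix rhyme test are assumptions (a real system would use phonetic transcriptions to detect rhymes).

```python
def rhyme_reward(lines, scheme="ABAB"):
    """Fraction of rhyme groups in `scheme` whose line endings agree.

    Hypothetical reward sketch: two lines are treated as rhyming when the
    last words share their final two characters (a crude proxy for rhyme).
    """
    # Last word of each verse, lowercased.
    endings = [line.strip().split()[-1].lower() for line in lines]

    # Group the ending suffixes by their scheme label (A, B, ...).
    groups = {}
    for label, end in zip(scheme, endings):
        groups.setdefault(label, []).append(end[-2:])

    # A group "rhymes" when all of its suffixes are identical.
    rhyming = sum(1 for suffixes in groups.values() if len(set(suffixes)) == 1)
    return rhyming / len(groups)
```

Under this shaping, a revision step that changes a line ending so its suffix matches its scheme partner increases the reward, which is the signal a PPO agent could optimize while revising the poem.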
2021
ISBN: 978-1-6654-3900-8
Files in this item:
File: melacci_IJCNN2021b.pdf (Adobe PDF, 1.84 MB) — not available
Type: Publisher's PDF
License: NOT PUBLIC - Private/restricted access

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1206725