Gee, L., Zugarini, A., Rigutini, L., Torroni, P. (2022). Fast Vocabulary Transfer for Language Model Compression. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 409–416). Association for Computational Linguistics (ACL). doi: 10.18653/v1/2022.emnlp-industry.41
Fast Vocabulary Transfer for Language Model Compression
Rigutini, Leonardo (Supervision)

2022-01-01
Abstract
Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.

| File | Type | License | Size | Format | |
|---|---|---|---|---|---|
| 2022.emnlp-industry.41.pdf (open access) | Publisher's PDF | Creative Commons | 469.08 kB | Adobe PDF | View/Open |
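The abstract does not spell out the transfer mechanism, so the following is only a minimal sketch of one plausible vocabulary-transfer step: initializing the embeddings of a new, compact (e.g. domain-specific) vocabulary by averaging the general-domain embeddings of each new token's sub-pieces. All names (`transfer_vocabulary`, the toy tokenizer, the fallback scale) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def transfer_vocabulary(old_emb, old_tokenize, new_vocab, dim, rng=None):
    """Sketch of embedding initialization for a new vocabulary.

    Each new token is split into sub-pieces with the old tokenizer; its
    embedding is the mean of the old embeddings of those sub-pieces.
    Tokens with no known sub-piece fall back to a small random vector.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    new_emb = {}
    for token in new_vocab:
        pieces = [p for p in old_tokenize(token) if p in old_emb]
        if pieces:
            new_emb[token] = np.mean([old_emb[p] for p in pieces], axis=0)
        else:
            new_emb[token] = rng.normal(scale=0.02, size=dim)
    return new_emb


# Toy general-domain embeddings and a naive fixed-width sub-piece tokenizer.
old_emb = {"ab": np.array([1.0, 0.0]), "cd": np.array([0.0, 1.0])}
sub_pieces = lambda t: [t[i:i + 2] for i in range(0, len(t), 2)]
new_emb = transfer_vocabulary(old_emb, sub_pieces, ["abcd"], dim=2)
# "abcd" is initialized as the mean of the "ab" and "cd" embeddings: [0.5, 0.5]
```

In this sketch the new vocabulary's embedding matrix can be much smaller than the original one, which is where the size reduction described in the abstract would come from when combined with other compression techniques.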
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1245834