Design Proteins Using Large Language Models: Enhancements and Comparative Analyses

Zeinalipour K.; Jamshidi N.; Bianchini M.; Maggini M.; Gori M.
2024-01-01

Abstract

Pre-trained LLMs have demonstrated substantial capabilities across a range of conventional natural language processing (NLP) tasks, such as summarization and entity recognition. In this paper, we explore the application of LLMs to the generation of high-quality protein sequences. Specifically, we adopt a suite of pre-trained LLMs, including Mistral-7B, Llama-2-7B, Llama-3-8B, and Gemma-7B, to produce valid protein sequences; all of these models are publicly available. Unlike previous work in this field, our approach utilizes a relatively small dataset comprising 42,000 distinct human protein sequences. We retrain these models to process protein-related data, ensuring the generation of biologically feasible protein structures. Our findings demonstrate that, even with limited data, the adapted models perform comparably to established protein-focused models such as the ProGen variants, ProtGPT2, and ProLLaMA, which were trained on millions of protein sequences. To validate and quantify the performance of our models, we conduct comparative analyses employing standard metrics such as pLDDT, RMSD, TM-score, and REU. Furthermore, we commit to making the trained versions of all four models publicly available, fostering greater transparency and collaboration in the field of computational biology.
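
As a rough illustration of the fine-tuning setup described in the abstract, the sketch below shows how a publicly available causal LLM could be retrained on a plain-text file of protein sequences using Hugging Face Transformers. The checkpoint name, the file path "human_proteins.txt", and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch (assumptions noted above), not the authors' released code.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"   # assumed checkpoint; any of the four base LLMs could be swapped in
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical input: one amino-acid sequence per line, e.g. "MKTAYIAKQR...".
with open("human_proteins.txt") as fh:
    sequences = [line.strip() for line in fh if line.strip()]
dataset = Dataset.from_dict({"text": sequences})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # standard next-token objective

args = TrainingArguments(output_dir="protein-llm", num_train_epochs=1,
                         per_device_train_batch_size=1, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()

After training, model.generate() can sample new sequences from a short amino-acid prompt, which could then be assessed with structure-based metrics such as pLDDT or TM-score, in line with the evaluation described in the abstract.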
Zeinalipour, K., Jamshidi, N., Bianchini, M., Maggini, M., Gori, M. (2024). Design Proteins Using Large Language Models: Enhancements and Comparative Analyses. In Proceedings of the 1st Workshop on Language + Molecules (L+M 2024) (pp.35-48). Association for Computational Linguistics (ACL) [10.18653/v1/2024.langmol-1.5].
File in this record: 2024.langmol-1.5.pdf (editorial PDF, open access, Creative Commons license, 4.77 MB, Adobe PDF)

Use this identifier to cite or link to this item: https://hdl.handle.net/11365/1274334