Data Knowledge Engineering | Volume 14 Issue 6, June 2025 | Pages: 1166-1176
Parameter-Efficient Fine-Tuning for Generative AI: Implementations, Best Practices, and Trade-Offs
Abstract: Large language models (LLMs) have revolutionized natural language processing (NLP), yet full fine-tuning remains computationally expensive. Parameter-efficient fine-tuning (PEFT) techniques address this by updating only a small fraction of a model's parameters. This paper provides a comparative analysis of key PEFT strategies, including Adapters, Low-Rank Adaptation (LoRA), Prompt Tuning, and Prefix Tuning, discussing their best practices, trade-offs, and empirical evaluations. We present a real-world implementation demonstrating the efficacy of these methods on a text classification task.
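To make the core idea of LoRA concrete, the following is a minimal NumPy sketch (not taken from the paper) of a frozen linear layer augmented with a trainable low-rank update. The class name, rank `r`, and scaling factor `alpha` are illustrative choices; practical implementations would use a deep-learning framework such as PyTorch with the Hugging Face `peft` library.

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA layer: y = x @ (W + (alpha / r) * B @ A).T

    The pretrained weight W stays frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) are trained. Trainable parameters
    drop from d_out * d_in to r * (d_in + d_out).
    """
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                    # frozen pretrained weight (d_out, d_in)
        self.r, self.alpha = r, alpha
        d_out, d_in = W.shape
        self.A = rng.normal(0.0, 0.02, size=(r, d_in))  # small random init
        self.B = np.zeros((d_out, r))                    # zero init, so the
        # adapted layer initially reproduces the frozen layer exactly

    def forward(self, x):
        scale = self.alpha / self.r
        return x @ (self.W + scale * self.B @ self.A).T

    def trainable_params(self):
        return self.A.size + self.B.size


# Usage sketch: a 64x64 pretrained weight adapted at rank 4
W = np.random.default_rng(1).normal(size=(64, 64))
layer = LoRALinear(W, r=4, alpha=8)
x = np.ones((2, 64))
# Before any training, B = 0, so the output matches the frozen layer
assert np.allclose(layer.forward(x), x @ W.T)
# 512 trainable parameters instead of 4096
assert layer.trainable_params() == 4 * (64 + 64)
```

Because `B` is initialized to zero, fine-tuning starts from the pretrained model's behavior, which is one reason LoRA training tends to be stable.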
Keywords: Generative AI, Parameter-efficient fine-tuning (PEFT), Adapters, Low-Rank Adaptation (LoRA), Prompt Tuning, Prefix Tuning, Natural Language Processing (NLP)
How to Cite: Chandan Singh Troughia, Sriraman Suryanarayanan, "Parameter-Efficient Fine-Tuning for Generative AI: Implementations, Best Practices, and Trade-Offs", Volume 14 Issue 6, June 2025, International Journal of Science and Research (IJSR), Pages: 1166-1176, https://www.ijsr.net/getabstract.php?paperid=SR25612082136, DOI: https://dx.doi.org/10.21275/SR25612082136