Table of Contents
LLM Fine-Tuning
PEFT
RLHF
LLM Fine-Tuning
Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU
Fine Tuning LLMs on a Single Consumer Graphic Card
Phinetuning 2.0
Code LoRA from Scratch (see the LoRA layer sketch after this list)
PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware
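In the spirit of the "Code LoRA from Scratch" entry above, here is a minimal sketch of a LoRA-wrapped linear layer (not that post's exact code; the class name LoRALinear and the dimensions are illustrative): the pretrained weight is frozen and a trainable low-rank update scaled by alpha / r is added on top.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is initialized with small random values, B with zeros, so the
        # LoRA path starts as a no-op (as in the LoRA paper).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen path plus the scaled low-rank trainable path.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a projection layer; only lora_A and lora_B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 768))
```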
PEFT
2024-10 Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?
2024-03 Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
2024-03 GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (see the gradient-projection sketch after this list)
2023-12 Batched Low-Rank Adaptation of Foundation Models
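A minimal sketch of the gradient low-rank projection idea behind GaLore (not the paper's official optimizer, which uses Adam; here a simple SGD-with-momentum update is used and the class name LowRankProjectedSGD is hypothetical): each 2-D gradient is projected onto a rank-r subspace refreshed periodically via SVD, the optimizer state lives in that subspace, and the update is projected back to full rank.

```python
import torch

class LowRankProjectedSGD:
    """SGD with momentum where 2-D gradients are updated in a low-rank subspace."""
    def __init__(self, params, lr=1e-3, rank=4, update_proj_every=200):
        self.params = [p for p in params if p.requires_grad]
        self.lr = lr
        self.rank = rank
        self.update_proj_every = update_proj_every
        self.step_count = 0
        self.proj = {}      # id(param) -> projection matrix P of shape (m, r)
        self.momentum = {}  # id(param) -> momentum buffer in the low-rank space

    @torch.no_grad()
    def step(self):
        for p in self.params:
            if p.grad is None:
                continue
            g = p.grad
            if g.ndim != 2:                          # only project matrices
                p.add_(g, alpha=-self.lr)
                continue
            key = id(p)
            # Periodically refresh the projection subspace from the current gradient.
            # Resetting the momentum buffer on refresh is a simplification.
            if key not in self.proj or self.step_count % self.update_proj_every == 0:
                U, _, _ = torch.linalg.svd(g, full_matrices=False)
                self.proj[key] = U[:, : self.rank]                     # (m, r)
                self.momentum[key] = torch.zeros(
                    self.rank, g.shape[1], device=g.device, dtype=g.dtype)
            P = self.proj[key]
            g_low = P.T @ g                          # (r, n) gradient in the subspace
            buf = self.momentum[key]
            buf.mul_(0.9).add_(g_low)                # momentum in the low-rank space
            p.add_(P @ buf, alpha=-self.lr)          # project the update back to full rank
        self.step_count += 1
```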
RLHF
2024-01 WARM: On the Benefits of Weight Averaged Reward Models
2024-01 Secrets of RLHF in Large Language Models Part II: Reward Modeling
2024-01 Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation
2024-01 [SPO] A Minimaximalist Approach to Reinforcement Learning from Human Feedback
2024-01 ARGS: Alignment as Reward-Guided Search
2023-12 [DPO] Direct Preference Optimization: Your Language Model is Secretly a Reward Model (see the DPO loss sketch after this list)
2023-10 Vanishing Gradients in Reinforcement Finetuning of Language Models
2023-10 [IPO] A General Theoretical Paradigm to Understand Learning from Human Preferences
2023-06 Secrets of RLHF in Large Language Models Part I: PPO
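A minimal sketch of the DPO objective from the Direct Preference Optimization paper (the function name dpo_loss is illustrative, and the inputs are assumed to be precomputed summed per-token log-probabilities of the chosen and rejected responses under the policy and the frozen reference model):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """All inputs are 1-D tensors of summed response log-probs, shape (batch,)."""
    # The implicit rewards are the policy-vs-reference log-ratios, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: push chosen above rejected.
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
    return loss, chosen_rewards.detach(), rejected_rewards.detach()
```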