Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
Published May 24, 2023
By Younes Belkada (ybelkada), Tim Dettmers (timdettmers), Artidoro Pagnoni (artidoro), Sylvain Gugger (sgugger), and Sourab Mangrulkar (smangrul)

LLMs are known to be large, and running or training them on consumer hardware is a huge challenge for users and accessibility.
Our LLM.int8 blog post showed how the techniques in the LLM.int8 paper were integrated into transformers using the bitsandbytes library. As we strive to make models even more accessible to anyone, we decided to collaborate with bitsandbytes again and allow users to run models in 4-bit precision.
This covers the large majority of HF models, in any modality (text, vision, multi-modal, etc.). Users can also train adapters on top of 4-bit models, leveraging tools from the Hugging Face ecosystem.
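To make this concrete, here is a minimal sketch of loading a model in 4-bit precision through the bitsandbytes integration in transformers. The model id and the quantization options (NF4 quantization type, bfloat16 compute dtype) are illustrative choices for this example, not requirements.

```python
# Minimal sketch (assumed model id and settings): load a causal LM in 4-bit precision
# using the bitsandbytes integration in transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-350m"  # hypothetical example; any causal LM on the Hub follows the same pattern

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the weights to 4 bits at load time
    bnb_4bit_quant_type="nf4",              # NF4 quantization type (optional setting)
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the matmul compute (optional setting)
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # dispatch the quantized weights onto the available GPU(s)
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you prefer not to build an explicit config, the `load_in_4bit=True` flag can also be passed directly to `from_pretrained`.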
This is a new method introduced today in the QLoRA paper by Dettmers et al. The abstract of the paper is as follows:

We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU.
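As a rough illustration of that recipe, the sketch below attaches LoRA adapters to the 4-bit model loaded above using the peft library. The rank, scaling factor, and target module names are assumptions for this example and depend on the model architecture.

```python
# Minimal sketch (assumed hyperparameters): train LoRA adapters on top of the frozen
# 4-bit base model using peft, following the QLoRA recipe at a high level.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Freeze the quantized weights and prepare the model for k-bit training
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices receive gradients
```

Gradients flow through the frozen 4-bit weights into the adapters, so only the adapter parameters, a tiny fraction of the model, are updated during finetuning, which is what keeps the memory footprint low.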
