[{"text": "Hugging Face\nBack to blog\nMaking LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA\nPublished May 24, 2023\nUpdate on GitHub\nybelkada\nYounes Belkada\ntimdettmers\nTim Dettmers guest\nartidoro\nArtidoro Pagnoni guest\nsgugger\nSylvain Gugger\nsmangrul\nSourab Mangrulkar\n\nLLMs are known to be large, and running or training them in consumer hardware is a huge challenge for users and accessibility."}, {"text": "Our LLM.int8 blogpost showed how the techniques in the LLM.int8 paper were integrated in transformers using the bitsandbytes library. As we strive to make models even more accessible to anyone, we decided to collaborate with bitsandbytes again to allow users to run models in 4-bit precision."}, {"text": "This includes a large majority of HF models, in any modality (text, vision, multi-modal, etc.). Users can also train adapters on top of 4bit models leveraging tools from the Hugging Face ecosystem."}, {"text": "This is a new method introduced today in the QLoRA paper by Dettmers et al. The abstract of the paper is as follows:\n\nWe present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance."}, {"text": "QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters~(LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU."}]