Dataset columns (type, observed min-max):
- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0-878
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k
The CodeLlama BASE is strangely fantastic general purpose for finetuning!
103
I'm seeing a sudden, significant jump in quality when using CodeLlama 13B and 34B as bases for finetuning (the BASE model, not the Python or Instruct variants). You can so easily finetune it into anything; it's very flexible. Don't let the "Code" fool you. Strangely, even without any finetuning, you can simply use a system prompt on the base model and it will follow it very nicely, as if it were already finetuned, while previous bases (especially LLaMA 1) would, as expected, just go into a never-ending schizophrenic twist. CodeLlama is, in my opinion, one of the best bases right now for further tweaking. Definitely try it.
2023-08-30T22:46:59
https://www.reddit.com/r/LocalLLaMA/comments/165tb0q/the_codellama_base_is_strangely_fantastic_general/
FPham
self.LocalLLaMA
2023-08-31T13:47:51
0
{}
165tb0q
false
null
t3_165tb0q
/r/LocalLLaMA/comments/165tb0q/the_codellama_base_is_strangely_fantastic_general/
false
false
self
103
null
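The observation in the post above (a base model following a plain-text system prompt) can be tried with nothing more than string assembly. A minimal sketch; the `### User:`/`### Assistant:` turn labels below are an arbitrary convention for illustration, not an official CodeLlama template:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a simple system-style prompt for a raw base model.

    Base models have no chat template, so the instructions are just
    prepended as plain text; the model continues from the final label.
    """
    return (
        f"{system.strip()}\n\n"
        f"### User:\n{user.strip()}\n\n"
        f"### Assistant:\n"
    )

prompt = build_prompt(
    "You are a concise assistant that answers in one sentence.",
    "Why is the sky blue?",
)
print(prompt.endswith("### Assistant:\n"))  # True
```

The resulting string would be fed to the base model as-is; with LLaMA 1 the continuation tends to drift, while the poster reports CodeLlama base stays in character.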
Does ram speed and size matter when your GPU can load the model?
1
Hi, I'm new to local LLM stuff. Like the title says, I was wondering if RAM speed and size affect text-generation performance. For instance, if an RTX 3060 can load a 13B model, will adding more RAM boost performance? I'm planning on setting up my PC like this:
- CPU: Intel i5-13600K
- M/B: Gigabyte B660M Aorus Pro
- RAM: DDR4 16GB 3200MHz
- GPU: RTX 3060 12GB
2023-08-30T23:51:23
https://www.reddit.com/r/LocalLLaMA/comments/165uuep/does_ram_speed_and_size_matter_when_your_gpu_can/
Sufficient_Bit_3312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165uuep
false
null
t3_165uuep
/r/LocalLLaMA/comments/165uuep/does_ram_speed_and_size_matter_when_your_gpu_can/
false
false
self
1
null
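The short answer implied by the question: if the whole model fits in VRAM, system RAM speed barely matters; it only matters when layers are offloaded to the CPU. A back-of-the-envelope fit check (the 1.2 overhead factor for KV cache and CUDA buffers is a guess, not a measured number):

```python
def model_vram_gb(n_params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold the weights, with ~20% headroom
    assumed for the KV cache and CUDA buffers."""
    weight_bytes = n_params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 13B model quantized to 4 bits per weight:
print(round(model_vram_gb(13, 4), 1))  # ~7.8 GB -> fits in a 12 GB RTX 3060
```

Under that estimate a 4-bit 13B model fits entirely on the 3060, so faster DDR4 would mainly help only if a larger model spills into system RAM.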
Using 2 GPUs?
1
I'm currently interested in buying another GPU, as my 4070 Ti lacks the VRAM needed for the best models. Will this even work? And if it does, do I need two of the same GPUs? Thanks for any responses.
2023-08-31T00:17:18
https://www.reddit.com/r/LocalLLaMA/comments/165vg4p/using_2_gpus/
marv34001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165vg4p
false
null
t3_165vg4p
/r/LocalLLaMA/comments/165vg4p/using_2_gpus/
false
false
self
1
null
Formatting Training Datasets? Getting pwned
1
I've been training LLaMA-2-7B-bf16-sharded models with datasets off HuggingFace inside Google Colab. The training loss goes down and they work well; I've tried a couple of different datasets without problems. Then I run the same notebook code with my own dataset (uploaded to HuggingFace) and all sorts of bizarre things happen: each epoch takes WAY longer, the training loss jumps around, the training loss sometimes starts at 0.000, etc. I'm completely self-taught off Reddit and YouTube, so I'm ignorant of a lot of best practices. After looking at a lot of other datasets, I'm beginning to **suspect that the formatting of my data file is off**?? Maybe the ###Human/###Assistant thing? I wanted to share it here and get some feedback / get ripped a new one by anyone kind enough / aggressive enough to indulge me. All insight is appreciated.

The dataset is 5,000 diary entries plus 5,000 keyword lists, one per diary entry. A typical input/output pair looks like this: {diary entry} : {list of keywords from diary entry}. I have formatted the file as JSONL as such:

{“text”: “###Human: I cooked dinner tonight. It’s become such a routine, but I still put effort into it. I made his favorite, chicken parmesan. I set the table with candles, hoping to create a romantic atmosphere. But he barely looked up from his phone, engrossed in his stupid game..### Assistant: [‘routine’,‘candles’,‘romantic’,‘connection’,‘intimacy’]“}

In my HuggingFace account there are just these two files: .gitattributes and DiaryData_5k.jsonl
2023-08-31T01:36:34
https://www.reddit.com/r/LocalLLaMA/comments/165x9if/formatting_training_datasets_getting_pwned/
TaleOfTwoDres
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165x9if
false
null
t3_165x9if
/r/LocalLLaMA/comments/165x9if/formatting_training_datasets_getting_pwned/
false
false
self
1
null
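One common cause of the symptoms described above is malformed JSONL: the example line quoted in the post uses curly “smart quotes” (typically pasted from a word processor), which standard JSON parsers reject. Writing the file with `json.dumps` guarantees straight ASCII quotes and proper escaping. A minimal sketch (the file name and sample text are illustrative, not the real diary data):

```python
import json

# One example pair in the post's ###Human/### Assistant style:
pairs = [
    {"text": "### Human: I cooked dinner tonight.\n### Assistant: ['routine', 'candles']"},
]

# Write one JSON object per line -- json.dumps emits straight quotes.
with open("diary_sample.jsonl", "w", encoding="utf-8") as f:
    for row in pairs:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Validate: every line must parse and carry a "text" key.
with open("diary_sample.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f, 1):
        assert "text" in json.loads(line), f"line {i} malformed"
```

Running the same validation loop over the real DiaryData_5k.jsonl would immediately reveal whether the curly quotes (or an inconsistent `###Human` vs `### Assistant` marker) are breaking the loader.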
How to set up CodeLlama on Exllama
1
I've been trying to set up various extended-context models on Exllama and I just want to make sure I'm doing things properly. I can get longer responses out of the box if I raise the max seq len, but the responses start to get weird/unreliable after 4k tokens. Is there anything else I need to do to get better responses? By extension, is there anything I need to do for the vicuna-1.5-16k models? I've been setting compress_pos_emb to 4.0, which gives me the longer context, but things still get weird at times and I just want to make sure I'm doing things correctly.
2023-08-31T01:57:55
https://www.reddit.com/r/LocalLLaMA/comments/165xqw4/how_to_set_up_codellama_on_exllama/
a_slay_nub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165xqw4
false
null
t3_165xqw4
/r/LocalLLaMA/comments/165xqw4/how_to_set_up_codellama_on_exllama/
false
false
self
1
null
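For linear RoPE scaling of the kind the post describes, the conventional rule of thumb is that `compress_pos_emb` equals the target context divided by the model's native training context (4096 for Llama 2), which is consistent with the 4.0 the poster uses for the 16k models. A one-line sketch of that arithmetic:

```python
def compress_pos_emb(target_ctx: int, native_ctx: int = 4096) -> float:
    """Linear RoPE scaling factor: squeeze target_ctx position indices
    into the model's native training range."""
    return target_ctx / native_ctx

print(compress_pos_emb(16384))  # 4.0, matching the value used for vicuna-1.5-16k
```

Note that for CodeLlama specifically, the long context comes from a raised RoPE frequency base rather than linear compression, so the two knobs should not be conflated.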
Perplexity of Q4_K_M vs GPTQ 64G?
1
Anyone have two identical models in these quants who can run perplexity in ooba? Something quick like PTB_NEW? The test can run in HF. It might shed some light on whether it's better to get the GPTQ of a 70B or the GGML/GGUF. The Q4 is the largest that fits in 48GB, extra context notwithstanding.
2023-08-31T02:33:20
https://www.reddit.com/r/LocalLLaMA/comments/165ykbp/perplexity_of_q4_k_m_vs_gptq_64g/
a_beautiful_rhind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165ykbp
false
null
t3_165ykbp
/r/LocalLLaMA/comments/165ykbp/perplexity_of_q4_k_m_vs_gptq_64g/
false
false
self
1
null
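For interpreting whatever numbers such a comparison produces: perplexity is the exponential of the mean per-token negative log-likelihood, so even small differences between quantization formats compound over every generated token. A minimal sketch of the formula (the sample values are made up):

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(nlls) / len(nlls))

# Hypothetical per-token NLLs from an eval run:
print(round(perplexity([1.6, 1.7, 1.5, 1.8]), 3))  # ~5.207
```

A lower score is better; when comparing Q4_K_M against GPTQ, the two runs must use the same text and the same context length for the numbers to be comparable at all.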
Compute Express Link aka CXL
1
Is CXL memory something that will help make large models easier to run, or not really? https://www.asteralabs.com/product-details/aurora-a-series/
2023-08-31T02:54:40
https://www.reddit.com/r/LocalLLaMA/comments/165z17q/compute_express_link_aka_cxl/
Ergosyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165z17q
false
null
t3_165z17q
/r/LocalLLaMA/comments/165z17q/compute_express_link_aka_cxl/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CKuF0-D924_Ztr8tBFLdsMJyTAWJW-zYH5Pn48BocOk', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=108&crop=smart&auto=webp&s=738f18ab1491c35a85da4b0e5058c714c4ca85c6', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=216&crop=smart&auto=webp&s=a80910f775ecbe2de1ae81daaf5b9d40d597096c', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=320&crop=smart&auto=webp&s=e361720378a3ad7f46af1922df962ab366e450c6', 'width': 320}, {'height': 521, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=640&crop=smart&auto=webp&s=da7dca6fe35a395f35651fc436d90f657b228904', 'width': 640}], 'source': {'height': 550, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?auto=webp&s=929935f618e712fe1a3950f763b4e32c8b1beb93', 'width': 675}, 'variants': {}}]}
WizardLM vs. Phind-CodeLlama - Test yourself Battleground
1
[removed]
2023-08-31T03:26:04
https://www.reddit.com/r/LocalLLaMA/comments/165zphb/wizardlm_vs_phindcodellama_test_yourself/
VideoTo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165zphb
false
null
t3_165zphb
/r/LocalLLaMA/comments/165zphb/wizardlm_vs_phindcodellama_test_yourself/
false
false
self
1
null
[R] LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models
1
2023-08-31T03:26:18
https://i.redd.it/2opuwgax6dlb1.png
ntortellini
i.redd.it
1970-01-01T00:00:00
0
{}
165zpn9
false
null
t3_165zpn9
/r/LocalLLaMA/comments/165zpn9/r_lminfinite_simple_onthefly_length/
false
false
https://a.thumbs.redditm…zq8JNoJm6s98.jpg
1
{'enabled': True, 'images': [{'id': 'YaivsmwidzU1zSPZw6YOL4YfR58kZiimrOnfFCkoEtE', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=108&crop=smart&auto=webp&s=c1778e96cbe101240b1aa235185235ff2ffff212', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=216&crop=smart&auto=webp&s=1a08c2ded60e67cc4a215f256a72b38251fc2983', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=320&crop=smart&auto=webp&s=1ef6d16cd9b1ba4b6308b0a474c46dbeec9e9129', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=640&crop=smart&auto=webp&s=47b4db7d7afc78bb15abd43d61c3cf2edafafe53', 'width': 640}, {'height': 447, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=960&crop=smart&auto=webp&s=fbb2d5c0eaffb60383733f00ce44c883a47e9113', 'width': 960}, {'height': 503, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=1080&crop=smart&auto=webp&s=c8b37c5312276ab6a005c22d58f4fe5465501491', 'width': 1080}], 'source': {'height': 944, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?auto=webp&s=3133f1e92134aa577334833ab48ea976435d29bc', 'width': 2024}, 'variants': {}}]}
Lightweight LLama variants for Mobile applications
1
Are there LLaMA 2 or other LLaMA variants (Camel, Alpaca) that can be used on mobile devices? I'm looking for lightweight models, preferably with Python bindings, so that I can run them in my Jupyter notebook.
2023-08-31T03:34:04
https://www.reddit.com/r/LocalLLaMA/comments/165zvle/lightweight_llama_variants_for_mobile_applications/
thesithlord27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165zvle
false
null
t3_165zvle
/r/LocalLLaMA/comments/165zvle/lightweight_llama_variants_for_mobile_applications/
false
false
self
1
null
LLama 2 7B and 13B in the browser via WebGPU
1
2023-08-31T03:50:43
https://thiggle.com/local-llm
sublimefunk
thiggle.com
1970-01-01T00:00:00
0
{}
16607w6
false
null
t3_16607w6
/r/LocalLLaMA/comments/16607w6/llama_2_7b_and_13b_in_the_browser_via_webgpu/
false
false
https://a.thumbs.redditm…DaM-Y-ZlKAF4.jpg
1
{'enabled': False, 'images': [{'id': 'qD0XgrR1PRzf6wj7d5HKfhz2TB6LOaoRDoPJiDUjdsw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=108&crop=smart&auto=webp&s=0fdc6e2a2852dac7fc2c7b298e8e5bbb0707ab47', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=216&crop=smart&auto=webp&s=9cff88c1c1af206334448b2dd31788c27eb09e98', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=320&crop=smart&auto=webp&s=e53a56036df71f612508c6da4142cf85ce0428dd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=640&crop=smart&auto=webp&s=9501dc791927f60b66972aed2cb37882b12c2070', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=960&crop=smart&auto=webp&s=96938ee5600234c44006af7adf6643ce159a70ac', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=1080&crop=smart&auto=webp&s=e6cb76507f47d3bf409b60c80ca28d23968bfc89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?auto=webp&s=484943f6ccb44d960a7bfb5e19002c16ddc4eaa9', 'width': 1200}, 'variants': {}}]}
Looking for testers: I'm hosting open-source LLMs for free.
1
I'm working on a [project](https://www.fullmetal.ai) that's a distributed network of hosted LLMs. I believe this can be useful for those who don't have a 1000+ USD/mo budget to host their own LLM. Also, it's much easier & quicker than setting up your own VM. If you're building a startup/experimental app that requires open-source LLM, I will gladly provide free API access, assuming your usage is relatively low (100k tokens/day or less). Please DM me if you are interested. Happy to answer any questions. Thanks!
2023-08-31T03:54:10
https://www.reddit.com/r/LocalLLaMA/comments/1660abe/looking_for_testers_im_hosting_opensource_llms/
m0dE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1660abe
false
null
t3_1660abe
/r/LocalLLaMA/comments/1660abe/looking_for_testers_im_hosting_opensource_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ibz-WbgWLTq9fNGmdvXvmXTzV2aIzevVHMd_bWL6pI8', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=108&crop=smart&auto=webp&s=457a261efe0bdaad9b0facd7c5344552c3630b55', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=216&crop=smart&auto=webp&s=b51603741608f2575127261f0b3a8bcccda3ba67', 'width': 216}, {'height': 153, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=320&crop=smart&auto=webp&s=8cb897cee67687a0c47b8a1686bb71b02b5cefc2', 'width': 320}, {'height': 306, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=640&crop=smart&auto=webp&s=3f32f849c0e7f2519c0692fe6354ce96bb977afb', 'width': 640}, {'height': 459, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=960&crop=smart&auto=webp&s=4240a126c8e18e326f265d4a81f100ec4c1f7740', 'width': 960}, {'height': 517, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=1080&crop=smart&auto=webp&s=aa311e4d22eadc1367aa7febaa641c75bf25b2c1', 'width': 1080}], 'source': {'height': 911, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?auto=webp&s=1b966e536cb16d12efedee8f2341bc7593891f86', 'width': 1902}, 'variants': {}}]}
Is LLaMA 2 34B not coming?
1
Is LLaMA 2 34B not coming? They seem to have a code version but why not a regular model?
2023-08-31T04:10:50
https://www.reddit.com/r/LocalLLaMA/comments/1660mht/is_llama_2_34b_not_coming/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1660mht
false
null
t3_1660mht
/r/LocalLLaMA/comments/1660mht/is_llama_2_34b_not_coming/
false
false
self
1
null
Llama 2
1
[removed]
2023-08-31T04:34:16
https://www.reddit.com/r/LocalLLaMA/comments/16612oc/llama_2/
BadriMLJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16612oc
false
null
t3_16612oc
/r/LocalLLaMA/comments/16612oc/llama_2/
false
false
self
1
null
Llama2 13B - 4070ti
1
Hello! I'm new to the local LLMs topic, so don't judge me. I set up the oobabooga WebUI from GitHub and tested some models; I tried Llama 2 13B (TheBloke's version from HF). I tested the chat GGML and the GPU-optimized GPTQ (both with the correct model loader). With the default settings for the model loader I'm waiting about 3 seconds until the response stream starts. I thought with my 4070 Ti it would be much faster. I double-checked the CUDA installation and everything seems fine. I'm using WSL2 inside Windows 11 (I like Linux more than Windows); could that be the reason for the response delay?
2023-08-31T04:45:44
https://www.reddit.com/r/LocalLLaMA/comments/1661ag6/llama2_13b_4070ti/
Able_Stop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1661ag6
false
null
t3_1661ag6
/r/LocalLLaMA/comments/1661ag6/llama2_13b_4070ti/
false
false
self
1
null
How to determine max model size for 12 Gb VRAM & 32Gb RAM?
1
[removed]
2023-08-31T04:53:59
https://www.reddit.com/r/LocalLLaMA/comments/1661g3x/how_to_determine_max_model_size_for_12_gb_vram/
Ok-Conversation-2418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1661g3x
false
null
t3_1661g3x
/r/LocalLLaMA/comments/1661g3x/how_to_determine_max_model_size_for_12_gb_vram/
false
false
default
1
null
Best Way to do this?
1
[removed]
2023-08-31T04:54:46
https://www.reddit.com/r/LocalLLaMA/comments/1661gmc/best_way_to_do_this/
himaw26303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1661gmc
false
null
t3_1661gmc
/r/LocalLLaMA/comments/1661gmc/best_way_to_do_this/
false
false
self
1
null
Is this contextsize limit I am hitting? kcpp really slows down.
1
[removed]
2023-08-31T05:34:07
https://www.reddit.com/r/LocalLLaMA/comments/16626i3/is_this_contextsize_limit_i_am_hitting_kcpp/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16626i3
false
null
t3_16626i3
/r/LocalLLaMA/comments/16626i3/is_this_contextsize_limit_i_am_hitting_kcpp/
false
false
self
1
null
Suggested reading?
1
[removed]
2023-08-31T05:43:49
https://www.reddit.com/r/LocalLLaMA/comments/1662cs7/suggested_reading/
Seclusion72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1662cs7
false
null
t3_1662cs7
/r/LocalLLaMA/comments/1662cs7/suggested_reading/
false
false
self
1
null
Meta Research publishes LM-Infinite Paper
1
[deleted]
2023-08-31T06:14:05
https://arxiv.org/abs/2308.16137
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1662wjh
false
null
t3_1662wjh
/r/LocalLLaMA/comments/1662wjh/meta_research_publishes_lminfinite_paper/
false
false
https://a.thumbs.redditm…9TLahCsBMC-0.jpg
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Creating a chatbot for work using open source vs openai
1
I was thinking of showing some initiative and creating a chatbot for the company I work for. My company hosts thousands of clients' product catalogs, and the antique search engine isn't exactly the most user-friendly or speedy way to find what you're looking for. Say I'm searching for product X that does ABC, but the company doesn't make product X anymore and I need a similar product that does exactly ABC. That's where the chatbot will shine: it will recommend similar products from other clients with catalogs on our site. Let's say I get a spreadsheet with every client and their catalogs with all the specs of their products. How much trouble would it be to create and deploy a chatbot capable of the scenario I presented (and more) with OpenAI vs open source?
2023-08-31T06:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1662zq6/creating_a_chatbot_for_work_using_open_source_vs/
Erdeem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1662zq6
false
null
t3_1662zq6
/r/LocalLLaMA/comments/1662zq6/creating_a_chatbot_for_work_using_open_source_vs/
false
false
self
1
null
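Either way (OpenAI or open source), the "find a similar product" scenario above is usually built as embedding search: embed every product's specs once, embed the query, and rank by cosine similarity. A toy sketch; in practice the vectors would come from an embedding model, and the product names and numbers below are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy catalog: each product is a (hypothetical) embedding of its specs.
catalog = {
    "Product X": [0.9, 0.1, 0.3],
    "Product Y": [0.88, 0.12, 0.28],
    "Product Z": [0.1, 0.9, 0.5],
}

query = [0.9, 0.1, 0.3]  # embedding of "product that does ABC"
best = max(catalog, key=lambda name: cosine(query, catalog[name]))
print(best)  # Product X is the nearest match
```

The chatbot layer then just phrases the top-ranked products as a recommendation; the retrieval step is identical whether the LLM behind it is hosted or local.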
RoPE Freq Base for CodeLLaMA
1
I've seen a few contradictory statements on what the value of RoPE freq base in CodeLlama models should be. Is there any reason the RoPE freq base should not be set to 10^6 for CodeLlama even if you are not using long context? Does it even make any difference? Has anyone tried running a perplexity test with the value set at 1 and at 10^6?
2023-08-31T06:29:08
https://i.redd.it/qgb1bbmz3elb1.jpg
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
166366q
false
null
t3_166366q
/r/LocalLLaMA/comments/166366q/rope_feq_base_for_codellama/
false
false
https://b.thumbs.redditm…eYhaRoGBW6VA.jpg
1
{'enabled': True, 'images': [{'id': 'rx1yuBljknMD1wbXooHDs6XcDuvG8NcWdVBVDyLgrio', 'resolutions': [{'height': 15, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=108&crop=smart&auto=webp&s=e844163c330903c9c07c83ef2aee48da844cff70', 'width': 108}, {'height': 30, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=216&crop=smart&auto=webp&s=406f67a19baa9ccf70373b7c1c5dbdea58d71366', 'width': 216}, {'height': 45, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=320&crop=smart&auto=webp&s=f05f95609b36686432a9aae084d82164150c85e9', 'width': 320}, {'height': 90, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=640&crop=smart&auto=webp&s=229b621aef06d14ab46f21bd2f87a4fdb918c500', 'width': 640}, {'height': 135, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=960&crop=smart&auto=webp&s=a85d74fcb2dca765d141494b493d597b67eeee7a', 'width': 960}, {'height': 152, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=1080&crop=smart&auto=webp&s=2355b1085e3d57a713989a896c59c4dc044012c8', 'width': 1080}], 'source': {'height': 300, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?auto=webp&s=70685d02e709ff687ec4e024d096fc3f00cad2df', 'width': 2129}, 'variants': {}}]}
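For intuition on what the freq base changes: RoPE rotates each dimension pair at an inverse frequency of 1/base^(2i/dim), so raising the base from the LLaMA default of 10,000 to CodeLlama's 1,000,000 slows every rotation down, stretching the usable position range. A small sketch of that formula (dim=128 is the per-head dimension typical of these models):

```python
def inv_freq(base: float, dim: int):
    """Per-pair RoPE inverse frequencies: 1 / base^(2i/dim)."""
    return [1.0 / base ** (2 * i / dim) for i in range(dim // 2)]

lo = inv_freq(10_000.0, 128)      # LLaMA default base
hi = inv_freq(1_000_000.0, 128)   # CodeLlama base
# The slowest (highest-index) rotation slows by roughly 90x:
print(round(lo[-1] / hi[-1]))
```

Since CodeLlama was trained with the 10^6 base, keeping it even at short context matches training; only an actual perplexity run (as the poster asks) would quantify the difference.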
L0 Airdrop Odyssey: Navigating the Crypto Unknown
1
[removed]
2023-08-31T06:44:11
https://www.reddit.com/r/LocalLLaMA/comments/1663fmz/l0_airdrop_odyssey_navigating_the_crypto_unknown/
Maximum-Unhappy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1663fmz
false
null
t3_1663fmz
/r/LocalLLaMA/comments/1663fmz/l0_airdrop_odyssey_navigating_the_crypto_unknown/
false
false
self
1
null
Your best model?
1
I'm running on a KVM virtual server (24GB RAM) and my best models so far are:
7B: Llama-2-Chat
13B: Airoboros L2 2.1
What about yours?
2023-08-31T07:08:31
https://www.reddit.com/r/LocalLLaMA/comments/1663v5e/your_best_model/
MichaelBui2812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1663v5e
false
null
t3_1663v5e
/r/LocalLLaMA/comments/1663v5e/your_best_model/
false
false
self
1
null
Q&A bot with conversation memory
1
[removed]
2023-08-31T07:56:07
https://www.reddit.com/r/LocalLLaMA/comments/1664p7a/qa_bot_with_conversation_memory/
anindya_42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1664p7a
false
null
t3_1664p7a
/r/LocalLLaMA/comments/1664p7a/qa_bot_with_conversation_memory/
false
false
self
1
null
oobabooga WebUI, how to load Airoboros or RuGPT?
1
[removed]
2023-08-31T08:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1665imc/oobabooga_webui_how_to_load_airoboros_or_rugpt/
Hatred_grows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1665imc
false
null
t3_1665imc
/r/LocalLLaMA/comments/1665imc/oobabooga_webui_how_to_load_airoboros_or_rugpt/
false
false
https://b.thumbs.redditm…OlHXWVzV8Qgg.jpg
1
{'enabled': False, 'images': [{'id': 'BSGypydhj3aqZI6EFooEI4Yg4z_M69pfAA9SmTL6gTg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=108&crop=smart&auto=webp&s=1363007b542f1f33e3a73dfb37d0de15806c13b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=216&crop=smart&auto=webp&s=fa7ac271ff46a1d704ff9f51a23e76ba10377419', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=320&crop=smart&auto=webp&s=16d086d39690fb3e1e2399ddc951ba8dc968a6da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=640&crop=smart&auto=webp&s=940c0499a630d9bfae458e598159d1e86a0a0b4f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=960&crop=smart&auto=webp&s=835d633a92d542e9dc0ad9dd5498c42e76778f34', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=1080&crop=smart&auto=webp&s=8a31e9ca330491987c6ed91c6ef8a6ef5df63d10', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?auto=webp&s=57862c6a47716edc1659a496ea94ae277ba44ee1', 'width': 1200}, 'variants': {}}]}
Easiest way to fine-tune local llama on local documents?
1
[removed]
2023-08-31T10:11:37
https://www.reddit.com/r/LocalLLaMA/comments/166729y/easiest_way_to_finetune_local_llama_on_local/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166729y
false
null
t3_166729y
/r/LocalLLaMA/comments/166729y/easiest_way_to_finetune_local_llama_on_local/
false
false
self
1
null
General guidance on my project please.
1
Could someone please help flesh out the steps I need to take to get my project underway? Here is the info: I have a rented Ubuntu server (Ryzen 5900X, 64GB RAM) that I can access remotely. No graphics card and no graphical interface on the server. I want to run an uncensored LLM on this rig. I tried downloading koboldcpp plus some LLaMA model, but kobold has a graphical interface and it was super slow through X11 and an Xming server.

1. How would I run an LLM on Ubuntu with only the command line?
2. How would I give it a persistent character?
3. Is LangChain what I need?
4. Do I need to set up a code interpreter on the server to run it all?

I think I just need the "big picture" steps to understand how it all fits together. Thanks.
2023-08-31T10:15:01
https://www.reddit.com/r/LocalLLaMA/comments/16674h6/general_guidance_on_my_project_please/
toorik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16674h6
false
null
t3_16674h6
/r/LocalLLaMA/comments/16674h6/general_guidance_on_my_project_please/
false
false
self
1
null
convert WizardCoder-15B-V1.0 pytorch_model.bin to "gguf" format
1
I know that there is a gguf WizardCoder model online, but I want to try a different quantization. I tried `llama.cpp`'s `convert`, but it seems the WizardCoder config.json lacks some parameters, at least:
- hidden_size
- num_hidden_layers
- intermediate_size
2023-08-31T10:54:30
https://www.reddit.com/r/LocalLLaMA/comments/1667uy3/convert_wizardcoder15bv10_pytorch_modelbin_to/
RobotEntropy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1667uy3
false
null
t3_1667uy3
/r/LocalLLaMA/comments/1667uy3/convert_wizardcoder15bv10_pytorch_modelbin_to/
false
false
self
1
null
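The likely explanation for the missing keys: WizardCoder-15B is built on StarCoder (a GPT-BigCode architecture), not LLaMA, so its config.json uses GPT-2-family names like `n_embd`/`n_layer` instead of `hidden_size`/`num_hidden_layers`, and the LLaMA-only convert script can't read it. A quick sketch for checking a config before attempting conversion (the abbreviated config dict below is illustrative):

```python
def check_llama_compatible(config: dict) -> bool:
    """llama.cpp's convert script expects LLaMA-style config keys;
    GPT-BigCode/StarCoder configs name the same quantities differently."""
    required = {"hidden_size", "num_hidden_layers", "intermediate_size"}
    return required.issubset(config)

# A StarCoder-style config (keys abbreviated) fails the check:
starcoder_cfg = {"architectures": ["GPTBigCodeForCausalLM"], "n_embd": 6144, "n_layer": 40}
print(check_llama_compatible(starcoder_cfg))  # False
```

So the fix is not patching config.json but using a converter that supports the GPT-BigCode architecture (at the time, a separate starcoder conversion path existed alongside llama.cpp).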
What kinda of models could I train and run with 2x 2080 TI gpus?
1
What kinds of models could I train and run with 2x 2080 Ti GPUs? Could I finetune a 35B model with those?
2023-08-31T12:40:33
https://www.reddit.com/r/LocalLLaMA/comments/166a2sg/what_kinda_of_models_could_i_train_and_run_with/
MaleficentArgument51
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166a2sg
false
null
t3_166a2sg
/r/LocalLLaMA/comments/166a2sg/what_kinda_of_models_could_i_train_and_run_with/
false
false
self
1
null
What kinda of models could I train and run with 2x 2080 TI gpus?
1
What kinds of models could I train and run with 2x 2080 Ti GPUs? Could I finetune a 35B model with those?
2023-08-31T12:40:47
https://www.reddit.com/r/LocalLLaMA/comments/166a2zn/what_kinda_of_models_could_i_train_and_run_with/
MaleficentArgument51
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166a2zn
false
null
t3_166a2zn
/r/LocalLLaMA/comments/166a2zn/what_kinda_of_models_could_i_train_and_run_with/
false
false
self
1
null
AI tool to classify sentences (not sentiment)
1
Hi all, I've got a ton of sentences that I'd like to analyze. Mostly I'm wondering if there's an AI tool that looks at whether a sentence is structurally correct. I'll define what I want below. I've been through a lot on Hugging Face and have a feeling I just don't know what to search for. They're conversations which I have split into utterances and can put back together as a conversation.

Necessary: look at a sentence and say whether it makes sense (return a score of how "sensible" it is).

Nice to have: return counts for parts of speech (number of nouns, verbs, adjectives, etc.); check the sentence both on its own and as part of a conversation; find errant punctuation or characters. Spelling and grammar checking would be cool but not necessary (returning the issues found).
2023-08-31T12:50:11
https://www.reddit.com/r/LocalLLaMA/comments/166aama/ai_tool_to_classify_sentences_not_sentiment/
I_M_Scott
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166aama
false
null
t3_166aama
/r/LocalLLaMA/comments/166aama/ai_tool_to_classify_sentences_not_sentiment/
false
false
self
1
null
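The "makes sense" requirement above needs a learned acceptability model (classifiers fine-tuned on the CoLA corpus are the usual search term on Hugging Face), but the "errant punctuation" nice-to-have is purely mechanical. A stdlib-only sketch of that part; the specific checks below are examples, not an exhaustive rule set:

```python
import re

def punctuation_issues(sentence: str) -> list[str]:
    """Flag a few mechanical punctuation problems. This heuristic does
    NOT judge whether the sentence is sensible -- that needs a model."""
    issues = []
    if re.search(r"[!?.,;:]{2,}", sentence.replace("...", "")):
        issues.append("repeated punctuation")
    if re.search(r"\s+[!?.,;:]", sentence):
        issues.append("space before punctuation")
    if not sentence.rstrip().endswith((".", "!", "?")):
        issues.append("missing terminal punctuation")
    return issues

print(punctuation_issues("This is fine ,, mostly"))  # flags all three checks
```

POS counts, the other nice-to-have, are a tagger job (spaCy or NLTK) rather than an LLM job; the acceptability score is the only piece that really calls for a transformer.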
Cerebras, G42's Inception, and MBZUAI announce Jais a 13B parameter model that trained on a new 395 billion token Arabic-English-Code dataset
1
2023-08-31T13:07:32
https://huggingface.co/inception-mbzuai
maroule
huggingface.co
1970-01-01T00:00:00
0
{}
166ap2h
false
null
t3_166ap2h
/r/LocalLLaMA/comments/166ap2h/cerebras_g42s_inception_and_mbzuai_announce_jais/
false
false
https://b.thumbs.redditm…f5HV6lg_mXHo.jpg
1
{'enabled': False, 'images': [{'id': '0aRd9X0kpVtiEBxYuP4bF4GEppnt9yPik8OjEWz4f8Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=108&crop=smart&auto=webp&s=3c552bf6c07ba0f3989a232364d6258df098be04', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=216&crop=smart&auto=webp&s=904d47d74618feea6878c701444996db7f0e2f6e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=320&crop=smart&auto=webp&s=79b5a67418f6837dd8fe9a28090227f3dbc38c38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=640&crop=smart&auto=webp&s=0da75da274ba1b7921401ff5ab42f436c956cdc5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=960&crop=smart&auto=webp&s=efd117a4b24e583e23dcac2e28ed01c2ec28aec1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=1080&crop=smart&auto=webp&s=78bb1e7e7b1053cc950420d2c21d6878f2c22589', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?auto=webp&s=756b29c8465848d4af246bcdd66820a4dff1db02', 'width': 1200}, 'variants': {}}]}
Code Llama digression
1
I use Code Llama with llama.cpp. I do not know why, but sometimes Code Llama digresses a lot. It arbitrarily changes its name from the one given in `prompts/chat-with-bob.txt`. Yesterday, Code Llama changed from "Bob" to "Doctor" (visible in the prompt) and started a medical consultation. Today, it changed its name from Bob to "Art", "who" is obviously a computer tech. What am I doing wrong? What can I do to stop these annoying digressions?
2023-08-31T13:47:21
https://www.reddit.com/r/LocalLLaMA/comments/166bnj1/code_llama_digression/
RobotEntropy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166bnj1
false
null
t3_166bnj1
/r/LocalLLaMA/comments/166bnj1/code_llama_digression/
false
false
self
1
null
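What's happening above is characteristic of a base model: it doesn't know it is "Bob", it just continues the transcript, and once Bob's turn ends it will happily invent a new speaker. Chat frontends handle this by cutting generation at a reverse prompt (llama.cpp's interactive mode uses `-r "User:"` for exactly this). A pure-Python sketch of that truncation logic:

```python
def trim_at_stops(generation: str, stops: list[str]) -> str:
    """Cut the model's continuation at the first stop string, so it
    cannot wander off and start speaking as a new character."""
    cut = len(generation)
    for s in stops:
        idx = generation.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return generation[:cut].rstrip()

raw = "Paris is the capital of France.\nDoctor: And how are we feeling today?"
print(trim_at_stops(raw, ["\nUser:", "\nDoctor:", "\nArt:"]))
# -> Paris is the capital of France.
```

You can't enumerate every name the model might invent, so the usual stop list is the turn markers themselves ("\nUser:", "\nBob:") plus a low temperature; an instruct-tuned variant digresses far less.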
Is anyone using Llama 2 for serious financial work?
1
[removed]
2023-08-31T13:57:59
https://www.reddit.com/r/LocalLLaMA/comments/166bwrv/is_anyone_using_llama_2_for_serious_financial_work/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166bwrv
false
null
t3_166bwrv
/r/LocalLLaMA/comments/166bwrv/is_anyone_using_llama_2_for_serious_financial_work/
false
false
default
1
null
I want to deploy my fine tuned model like a chatbot
1
I don't mind if it's a paid service. I recently fine-tuned a model and now want to deploy it so my client can test it with some users. I tried Replicate but got totally lost on how to push the model there. I would appreciate any advice from you guys.
2023-08-31T14:07:29
https://www.reddit.com/r/LocalLLaMA/comments/166c5fq/i_want_to_deploy_my_fine_tuned_model_like_a/
_Sneaky_Bastard_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166c5fq
false
null
t3_166c5fq
/r/LocalLLaMA/comments/166c5fq/i_want_to_deploy_my_fine_tuned_model_like_a/
false
false
self
1
null
How much would you be willing to pay for a RTX 4090 with 48GB of VRAM
1
[removed] [View Poll](https://www.reddit.com/poll/166dgq7)
2023-08-31T14:58:44
https://www.reddit.com/r/LocalLLaMA/comments/166dgq7/how_much_would_you_be_willing_to_pay_for_a_rtx/
tripmine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166dgq7
false
null
t3_166dgq7
/r/LocalLLaMA/comments/166dgq7/how_much_would_you_be_willing_to_pay_for_a_rtx/
false
false
self
1
null
4-bit + ExLlama on H100 or A100
1
I have heard that the H100 gives a drastic inference speed boost compared to 3090s (up to 30x). I tested a 13B parameter model (4-bit + ExLlama) on the H100 but got only about a 30% speed boost. All GPUs were running on RunPod. Is this normal, or am I missing something?
2023-08-31T15:28:04
https://www.reddit.com/r/LocalLLaMA/comments/166e8cp/4_bit_exlamma_on_h100_or_a100/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166e8cp
false
null
t3_166e8cp
/r/LocalLLaMA/comments/166e8cp/4_bit_exlamma_on_h100_or_a100/
false
false
self
1
null
Falcon-40B on 2 NVIDIA RTX A6000 48GB
1
I want to run inference with Falcon-40B-instruct, and I have 2 NVIDIA A6000s with 48GB each. Do you know if I can "combine" the memory of these GPUs to run this model?
2023-08-31T15:38:08
https://www.reddit.com/r/LocalLLaMA/comments/166eiat/falcon40b_on_2_nvidia_rtx_a6000_48gb/
rancidog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166eiat
false
null
t3_166eiat
/r/LocalLLaMA/comments/166eiat/falcon40b_on_2_nvidia_rtx_a6000_48gb/
false
false
self
1
null
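On the combining question: frameworks like HF transformers/Accelerate can shard a model across both cards (e.g. `device_map="auto"`), and simple arithmetic shows why 2x48GB is enough for Falcon-40B at fp16 (a rough sizing sketch, ignoring activation overhead):

```python
# Rough fit check for Falcon-40B across 2 x 48 GB A6000s (fp16, weights only).
params_b = 40          # billions of parameters
bytes_per_param = 2    # fp16/bf16
weights_gb = params_b * bytes_per_param   # 80 GB of weights
total_vram = 2 * 48                       # 96 GB across both cards
headroom = total_vram - weights_gb        # left over for KV cache/activations
print(weights_gb, total_vram, headroom)   # 80 96 16
```

In practice, `AutoModelForCausalLM.from_pretrained(..., device_map="auto", torch_dtype=torch.bfloat16)` with Accelerate installed places layers on both GPUs automatically, and 8-bit loading would roughly halve the weight footprint.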
Has anyone managed to use fill-in-the-middle with CodeLlama in 4-bit?
1
I looked into ExLlama and others; there seems to be a feature request in llama.cpp. What about other libs?
2023-08-31T16:50:42
https://www.reddit.com/r/LocalLLaMA/comments/166gcx2/has_anyone_manged_to_use_fill_in_the_middle_with/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166gcx2
false
null
t3_166gcx2
/r/LocalLLaMA/comments/166gcx2/has_anyone_manged_to_use_fill_in_the_middle_with/
false
false
self
1
null
Local LLM roleplay using raw transformers library
1
Hi everyone, I'm having some difficulty using the transformers library with 'airoboros-l2-13b'. My goal is to give a 'persona' to the AI and talk with it, but based on the 'prompt template' recommended for airoboros, I really don't get how to do it. I don't want to use any front-end or text UI, because I want to use the AI's text output in Python code later. Thank you, I hope the community can help me.
2023-08-31T16:54:42
https://www.reddit.com/r/LocalLLaMA/comments/166ggja/local_llm_roleplay_using_raw_transformers_library/
Possible-Ball-3423
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166ggja
false
null
t3_166ggja
/r/LocalLLaMA/comments/166ggja/local_llm_roleplay_using_raw_transformers_library/
false
false
self
1
null
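One way to approach the persona question without any front-end is to build the prompt string yourself and pass it to `model.generate`. A hedged sketch of the prompt assembly, assuming the commonly cited airoboros "A chat ... USER: ... ASSISTANT:" format (verify against the model card; the persona text is made up):

```python
# Sketch of assembling an airoboros-style prompt with a persona injected into
# the system line. Check the exact template against the model card -- this
# follows the commonly cited "A chat ... USER: ... ASSISTANT:" format.
def build_prompt(persona: str, history: list[tuple[str, str]], user_msg: str) -> str:
    system = f"A chat between a curious user and {persona}."
    turns = "".join(f" USER: {u} ASSISTANT: {a}" for u, a in history)
    return f"{system}{turns} USER: {user_msg} ASSISTANT:"

prompt = build_prompt(
    "Ada, a sarcastic starship engineer",
    [("Who are you?", "Ada. Who's asking?")],
    "Can you fix the warp core?",
)
print(prompt)
```

After generation, cut the continuation at the next `USER:` and append the (user, assistant) pair to `history` before the next turn.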
🤖 Agenta: LLaMA-Compatible Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
1
2023-08-31T17:18:37
https://v.redd.it/rrzoiej79hlb1
resiros
/r/LocalLLaMA/comments/166h29a/agenta_llamacompatible_opensource_platform_for/
1970-01-01T00:00:00
0
{}
166h29a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rrzoiej79hlb1/DASHPlaylist.mpd?a=1696180716%2CY2M4MzE5NTBkYTliODAwYjFhOGQ3ZGEyNWJmZjM1NTliZjExNWI5N2ViNTYwZjVjMzM0ZjJiNGM3N2EwZGU5OA%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/rrzoiej79hlb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/rrzoiej79hlb1/HLSPlaylist.m3u8?a=1696180716%2CNDJlZjU1ODVmMjhjMWI1ZDEzZjI1YzE2YmU4YTI0MjY1MmIxYjE0M2FkYTA1ZWIyYjUyMzRmMGM0MDYwMDIwMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rrzoiej79hlb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_166h29a
/r/LocalLLaMA/comments/166h29a/agenta_llamacompatible_opensource_platform_for/
false
false
https://b.thumbs.redditm…jPKH7fvCKLsg.jpg
1
{'enabled': False, 'images': [{'id': 'hghi4oqfuhZuhSekD3-ctTZO2Hvfwh-szT7SqZIahYQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=108&crop=smart&format=pjpg&auto=webp&s=7a6086b320bd28f15684d42824839c707fd11d72', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=216&crop=smart&format=pjpg&auto=webp&s=df67366d1c55fa28f1cbd8e9090ccea7406563c5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=320&crop=smart&format=pjpg&auto=webp&s=06431391dbecfe1c5eaec21d115c57c8bb29961c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=640&crop=smart&format=pjpg&auto=webp&s=77a7e8a5499765ba60c9532e9195d39c22eef47a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=960&crop=smart&format=pjpg&auto=webp&s=595faf6e29b20e58c7c71b634bace06f448253a3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=1080&crop=smart&format=pjpg&auto=webp&s=171874c28ea069c40a739074a0de6d87e3763933', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?format=pjpg&auto=webp&s=48f9f965e9b87d6db99a6654bc780b4fe11e67d9', 'width': 3840}, 'variants': {}}]}
Model parallelism with LoRA
1
I've been experimenting with fine-tuning Llama2 models using 3 A6000 GPUs, and I've been surprised to discover that none of the widely-discussed model parallelism methods actually distribute compute and memory across all the cards. Using HF Accelerate with `device_map='auto'` distributes the memory across cards, but it doesn't actually work in parallel. Only one card is actually used at a time. You can see this by running `nvidia-smi dmon` while the model is training (look at the `sm` column). Deepspeed zero3 and PyTorch FSDP don't take advantage of LoRA, because (AFAICT) they don't properly handle the frozen layers, and as a result the memory usage of the activations and optimiser states is not distributed across the GPUs. This is discussed here: https://github.com/pytorch/pytorch/issues/91165 . Has anyone here found a good way to fine-tune large Llama2 models on multiple GPUs, where the model training doesn't fit on a single GPU, and that spreads the compute over the GPUs?
2023-08-31T17:23:05
https://www.reddit.com/r/LocalLLaMA/comments/166h6bx/model_parallelism_with_lora/
jeremyhoward
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166h6bx
false
null
t3_166h6bx
/r/LocalLLaMA/comments/166h6bx/model_parallelism_with_lora/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QQsNn7b-lo6lk-hu0XsOUKBoGabYmEoJdf2Nqjgj3Ts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=108&crop=smart&auto=webp&s=edf14f36e15f2da0afac14595f5e398c4425771c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=216&crop=smart&auto=webp&s=51ea6e6c0e684ca9b10d204984fe389e5c90b7de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=320&crop=smart&auto=webp&s=d86ed787ae726ac32f9af4f164b88e0b0c120199', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=640&crop=smart&auto=webp&s=88bbd4d69b812d696c0f5704ec8e369c36ef424d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=960&crop=smart&auto=webp&s=dabd0ae491d2119a6177b5e8cd400bb381400ce4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=1080&crop=smart&auto=webp&s=2768b38b3f4199e41208ffdd2162c2d9bebe97c3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?auto=webp&s=4850ef10d7471e700a15971f347ba0d3d627c045', 'width': 1200}, 'variants': {}}]}
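The behaviour described above (memory spread out, compute serialized) is exactly naive layer splitting: with one batch in flight, only one stage is ever busy. A toy utilization model, a sketch rather than a measurement, illustrates why, and why GPipe-style microbatching recovers most of the lost time:

```python
# Toy pipeline model: N stages (GPUs) and M microbatches per step.
# device_map='auto' effectively runs M=1, so each GPU is busy ~1/N of the
# time, matching the one-active-card-at-a-time pattern in `nvidia-smi dmon`.
def pipeline_utilization(n_stages: int, n_microbatches: int) -> float:
    total_ticks = n_stages + n_microbatches - 1   # fill + drain the pipeline
    busy_ticks = n_microbatches                   # ticks each stage spends working
    return busy_ticks / total_ticks

print(pipeline_utilization(3, 1))   # 1/3: naive layer splitting on 3 GPUs
print(pipeline_utilization(3, 8))   # 0.8: microbatching overlaps the stages
```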
I compared a few different Code Llama variants locally
1
2023-08-31T17:26:49
http://www.xethub.com/blog/comparing-code-llama-models-locally-macbook/
semicausal
xethub.com
1970-01-01T00:00:00
0
{}
166h9rd
false
null
t3_166h9rd
/r/LocalLLaMA/comments/166h9rd/i_compared_a_few_different_code_llama_variants/
false
false
https://b.thumbs.redditm…lVFOpkOwqA5I.jpg
1
{'enabled': False, 'images': [{'id': 'tLlKuE5EoPs6lN46OBoqyTS43XgbiyuXtsh2PGLKmmI', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=108&crop=smart&auto=webp&s=471c7bcf4ed32308d966385ed67170af64845c87', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=216&crop=smart&auto=webp&s=6486ada79914c3a30493c9a6273630dc84e485be', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=320&crop=smart&auto=webp&s=d54fc7a1eb9c2395b2059e23b3c88797ea9215f1', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=640&crop=smart&auto=webp&s=75fe4be5f89b32715235f54a7f7c41b3ea577788', 'width': 640}, {'height': 528, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=960&crop=smart&auto=webp&s=566438dcc0cffcf0bcef28c2efa3ce98239a467c', 'width': 960}, {'height': 594, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=1080&crop=smart&auto=webp&s=c6d22eb12fc5314fd42e7bca5548ea52629f7179', 'width': 1080}], 'source': {'height': 1400, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?auto=webp&s=8da6a8c5ed24c5bc7b6e91504e4dbcaf446812e7', 'width': 2544}, 'variants': {}}]}
Code Llama 34B F16 at 20t/s on a MacBook
1
2023-08-31T17:55:31
https://twitter.com/ggerganov/status/1697262700165013689
sleeper-2
twitter.com
1970-01-01T00:00:00
0
{}
166i0sw
false
{'oembed': {'author_name': 'Georgi Gerganov', 'author_url': 'https://twitter.com/ggerganov', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Full F16 precision 34B Code Llama at &gt;20 t/s on M2 Ultra <a href="https://t.co/7diki8zes4">pic.twitter.com/7diki8zes4</a></p>&mdash; Georgi Gerganov (@ggerganov) <a href="https://twitter.com/ggerganov/status/1697262700165013689?ref_src=twsrc%5Etfw">August 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ggerganov/status/1697262700165013689', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_166i0sw
/r/LocalLLaMA/comments/166i0sw/code_llama_34b_f16_at_20ts_on_a_macbook/
false
false
https://b.thumbs.redditm…-VTMut6DV9pI.jpg
1
{'enabled': False, 'images': [{'id': 'yDjqZZNr5Jhf8s-3eNrIbB_jkTuhTJPTbC4EGUIkLbQ', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/NHxQ79GDr5I3zzp9gT4sRc89fSL4CF4IuCfj7VHtPUc.jpg?width=108&crop=smart&auto=webp&s=f592c4a04583e809ecf686b210850d7cb842eaf1', 'width': 108}], 'source': {'height': 104, 'url': 'https://external-preview.redd.it/NHxQ79GDr5I3zzp9gT4sRc89fSL4CF4IuCfj7VHtPUc.jpg?auto=webp&s=59852efad297170f970101d77885529bc28ea52b', 'width': 140}, 'variants': {}}]}
Finetuning Chat LLM (Llama2-chat): Data set best practices
1
What are some best practices when it comes to finetuning an LLM (Llama2-chat, via the Axolotl repo) with regard to data quantity, quality, depth, etc., to make the finetuning and interaction meaningful? The goal is to feed it n snippets from various document sources so it can act as an assistant for those documents. Note: I can't use embeddings at this time because of privacy, so I will be using custom-built datasets formatted in JSONL as per the repo's requirements.
2023-08-31T18:14:39
https://www.reddit.com/r/LocalLLaMA/comments/166ij98/finetuning_chat_llm_llama2chat_data_set_best/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166ij98
false
null
t3_166ij98
/r/LocalLLaMA/comments/166ij98/finetuning_chat_llm_llama2chat_data_set_best/
false
false
self
1
null
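On the mechanics side, writing the snippet dataset out as JSONL is straightforward; a sketch assuming an alpaca-style dataset `type:` in the Axolotl config (the field names and the sample pair are assumptions, so match them to your config):

```python
import json

# Sketch: write (question, answer) snippet pairs to an instruction-tuning
# JSONL. The field names ("instruction"/"input"/"output") assume an
# alpaca-style dataset `type:` in the Axolotl config; adjust to match yours.
snippets = [
    ("What does clause 4.2 cover?", "Clause 4.2 covers early termination fees."),
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for question, answer in snippets:
        record = {"instruction": question, "input": "", "output": answer}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

One record per line keeps the file streamable, and `ensure_ascii=False` preserves any non-ASCII text from the source documents.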
128k Context Llama 2 Finetunes Using YaRN Interpolation (successor to NTK-aware interpolation) and Flash Attention 2
1
[removed]
2023-08-31T18:37:54
https://www.reddit.com/r/LocalLLaMA/comments/166j59j/128k_context_llama_2_finetunes_using_yarn/
bloc97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166j59j
false
null
t3_166j59j
/r/LocalLLaMA/comments/166j59j/128k_context_llama_2_finetunes_using_yarn/
false
false
https://b.thumbs.redditm…R2DWLLirOhXI.jpg
1
{'enabled': False, 'images': [{'id': '--9zHPHUP3AoAb8GNz4v4pSPddJXXlQHm5Cx9Vu6KXE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=108&crop=smart&auto=webp&s=9894ee258ab24c10cb56f13f2be2bc34a93d3d23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=216&crop=smart&auto=webp&s=cb3ca9e912a24f60e7d34f8cf08eb8f351d7f1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=320&crop=smart&auto=webp&s=4b3a81eee53693053ab7d7c7137d00568ead31c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=640&crop=smart&auto=webp&s=4bdeb98ef1692b06616bd95c3451b76249e5ef64', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=960&crop=smart&auto=webp&s=25323a8ba371b2bebbd9bdd143e0babeb930542e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=1080&crop=smart&auto=webp&s=8bbad01d3051277a56e1f5009ef1d7c8d4890763', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?auto=webp&s=971f608eea6a4b13b74f4274ece84fc988c6b44a', 'width': 1200}, 'variants': {}}]}
128k Context Llama 2 Finetunes Using YaRN Interpolation (successor to NTK-aware interpolation) and Flash Attention 2
1
[removed]
2023-08-31T18:46:28
https://www.reddit.com/r/LocalLLaMA/comments/166jdbh/128k_context_llama_2_finetunes_using_yarn/
bloc97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166jdbh
false
null
t3_166jdbh
/r/LocalLLaMA/comments/166jdbh/128k_context_llama_2_finetunes_using_yarn/
false
false
https://b.thumbs.redditm…YcDLftbA-3hE.jpg
1
{'enabled': False, 'images': [{'id': 'BZSrezHRZHYsRr1vcKM9NmhztB0BCRyk3SjbicIY0FI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=108&crop=smart&auto=webp&s=bbd7d402b8eedcc23d9c16ce44e970b394a6fac4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=216&crop=smart&auto=webp&s=869af62fb0fb86e2ec91f12a3bea793705fad380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=320&crop=smart&auto=webp&s=b13481f76c96163673c0cf6120261685fe70a858', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=640&crop=smart&auto=webp&s=78b3ec0c68ba56c1dbb733d5362b6d5043843fd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=960&crop=smart&auto=webp&s=cb897ea69152ba6411085d3bb2681b5cd96da525', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=1080&crop=smart&auto=webp&s=5cf8be8e1f8f7f77ff694c034ecb4cc96e1a73ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?auto=webp&s=ddcbc2d2b859fdbed940d6f361fb507dd19df742', 'width': 1200}, 'variants': {}}]}
Llama-2 with 128k context length thanks to YaRN
1
2023-08-31T18:47:28
https://twitter.com/EnricoShippole/status/1697317625116742119?s=20
hackerllama
twitter.com
1970-01-01T00:00:00
0
{}
166je92
false
{'oembed': {'author_name': 'Enrico Shippole', 'author_url': 'https://twitter.com/EnricoShippole', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Releasing Yarn-Llama-2-13b-128k, a Llama-2 model, trained for 128k context length using YaRN scaling. The model was trained in collaboration with u/bloc97 and <a href="https://twitter.com/theemozilla?ref_src=twsrc%5Etfw">@theemozilla</a> of <a href="https://twitter.com/NousResearch?ref_src=twsrc%5Etfw">@NousResearch</a> and <a href="https://twitter.com/Void13950782?ref_src=twsrc%5Etfw">@Void13950782</a> of <a href="https://twitter.com/AiEleuther?ref_src=twsrc%5Etfw">@AiEleuther</a>. <a href="https://t.co/CmvZgHdJEF">pic.twitter.com/CmvZgHdJEF</a></p>&mdash; Enrico Shippole (@EnricoShippole) <a href="https://twitter.com/EnricoShippole/status/1697317625116742119?ref_src=twsrc%5Etfw">August 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/EnricoShippole/status/1697317625116742119', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_166je92
/r/LocalLLaMA/comments/166je92/llama2_with_128k_context_length_thanks_to_yarn/
false
false
https://b.thumbs.redditm…jU-PM_iEK9kA.jpg
1
{'enabled': False, 'images': [{'id': 'V6VT4I1rRJrroUZRSWQkoDyJhHPsirqib-AblynNy30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/993FeXP1LyP0fGszQgeS_S6P1DeKojDFcf685MGBzLo.jpg?width=108&crop=smart&auto=webp&s=d8b4eda9903ec1b3deac06d7405c3cbd97bce0a9', 'width': 108}], 'source': {'height': 70, 'url': 'https://external-preview.redd.it/993FeXP1LyP0fGszQgeS_S6P1DeKojDFcf685MGBzLo.jpg?auto=webp&s=65568ca4c71cd6a5d35684f3e28fc49f247847f0', 'width': 140}, 'variants': {}}]}
128k Context Llama 2 Finetunes Using YaRN Interpolation (successor to NTK-aware interpolation) and Flash Attention 2
1
GitHub (Includes links to models and preprint): [https://github.com/jquesnelle/yarn](https://github.com/jquesnelle/yarn) arXiv link: coming soon! Demo (Multiple-choice quiz on a novel of \~110k context): [https://colab.research.google.com/drive/1p7iNUQMbVGYWqrKMHvPPO4Q13fB5mwDF?usp=sharing](https://colab.research.google.com/drive/1p7iNUQMbVGYWqrKMHvPPO4Q13fB5mwDF?usp=sharing) This entire project is the culmination of 2 months of hard work from me, u/emozilla, EnricoShippole and honglu2875. (And a lot of compute, even though we are still heavily compute starved...) These models aren't fully converged yet; the base models have only been further pretrained for 400 steps (\~1.7B tokens), compared to the 1000 steps in Meta's PI paper. However, given that we have an improved interpolation method, the non-converged results are already superior to PI. We are claiming SoTA for open-source 128k context models. The GitHub repo provides the code and datasets that allow anyone to completely reproduce the results in the paper from scratch. We strongly believe in fully open-source and transparent research, and are releasing everything under the MIT license. (Except the models, which are bound under Meta's license) Note that these are base models, not yet instruction-tuned, and the 13b-128k model can already achieve a 1-shot accuracy of \~52% on the Sherlock Holmes book quiz demo (the model has never seen long context QA); this tests the model's understanding of the story. All of our metrics point to these models being the new SoTA for long context models (see Experiments section of paper), even if the models aren't fully trained yet. We expect performance to improve given more training. Stay tuned! All models include a ready-to-use implementation of FA2 if run using `trust_remote_code=True` in the transformers library. The 13b model requires approximately 360GB of VRAM (e.g. 8x48GB or 4x80GB) for the full 128k context size.
Passkey retrieval results are not yet in the paper (still running), but preliminary results show >80% across the entire 128k context. Also big thanks to the entire Nous Research team, Stability AI, CarperAI, Eleuther AI, a16z and PygmalionAI for their insights and generous support of compute resources that enabled the completion of this research. (If I'm forgetting anyone please let me know asap!) We're also not forgetting everyone from the open-source community that participated and contributed in the discussions and code implementations on all social media and code sharing platforms. I say thanks to all of you! I would like to end this post with us all having a big round of applause for everyone! [As always, a PPL chart for good measure...](https://preview.redd.it/ith1xv7dshlb1.png?width=1209&format=png&auto=webp&s=95bd68feb05cb2a36bc97979f96d49aeb141b7dc) P.S. We need more compute in order to release fully converged 7b, 13b models and a 70b model. 128k context requires so much VRAM during training, it's insane... (For training, these models barely fit in 128 80GB A100s using DeepSpeed and FA2) If anyone is feeling generous enough to provide large scale training compute, we will have the 70b model out in no time.
2023-08-31T18:52:03
https://www.reddit.com/r/LocalLLaMA/comments/166jik4/128k_context_llama_2_finetunes_using_yarn/
bloc97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166jik4
false
null
t3_166jik4
/r/LocalLLaMA/comments/166jik4/128k_context_llama_2_finetunes_using_yarn/
false
false
https://b.thumbs.redditm…YcDLftbA-3hE.jpg
1
{'enabled': False, 'images': [{'id': 'BZSrezHRZHYsRr1vcKM9NmhztB0BCRyk3SjbicIY0FI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=108&crop=smart&auto=webp&s=bbd7d402b8eedcc23d9c16ce44e970b394a6fac4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=216&crop=smart&auto=webp&s=869af62fb0fb86e2ec91f12a3bea793705fad380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=320&crop=smart&auto=webp&s=b13481f76c96163673c0cf6120261685fe70a858', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=640&crop=smart&auto=webp&s=78b3ec0c68ba56c1dbb733d5362b6d5043843fd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=960&crop=smart&auto=webp&s=cb897ea69152ba6411085d3bb2681b5cd96da525', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=1080&crop=smart&auto=webp&s=5cf8be8e1f8f7f77ff694c034ecb4cc96e1a73ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?auto=webp&s=ddcbc2d2b859fdbed940d6f361fb507dd19df742', 'width': 1200}, 'variants': {}}]}
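Two numbers worth pulling out of the post above: the RoPE scaling factor implied by 128k, and why that context length is so VRAM-hungry. A back-of-envelope sketch, assuming Llama-2's 4096-token pretraining context and the 13b model's published shape (40 layers, hidden size 5120):

```python
# Back-of-envelope numbers for 128k context, assuming Llama-2's 4096-token
# pretraining window and Llama-2-13b's shape (40 layers, hidden size 5120).
ctx = 128 * 1024
scale = ctx / 4096                               # RoPE extension factor
layers, hidden, kv_tensors, fp16_bytes = 40, 5120, 2, 2
kv_cache_gb = ctx * layers * hidden * kv_tensors * fp16_bytes / 1024**3
print(round(scale), round(kv_cache_gb))  # 32x scaling; ~100 GB of KV cache per sequence
```

A 32x position extension and roughly 100 GB of fp16 KV cache for a single full-length sequence go a long way toward explaining both the interpolation method and the quoted ~360 GB inference footprint.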
Trouble moving from Llama to Llama 2
1
I'm using the procedure outlined in this colab notebook: [https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO) It works great. It's the first one I've found that seems to really produce good results. The only problem is, the model is Llama, and I want to use Llama 2. I've subbed a couple of different Llama 2 models in, and I can't get any of them to work. The loss fluctuates wildly up and down. I've tried adjusting the learning rate and a few other hyperparams, as well as using a variety of data sets. Can anyone see what I'm missing?
2023-08-31T19:02:55
https://www.reddit.com/r/LocalLLaMA/comments/166jszj/trouble_moving_from_llama_to_llama_2/
Nathanielmhld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166jszj
false
null
t3_166jszj
/r/LocalLLaMA/comments/166jszj/trouble_moving_from_llama_to_llama_2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
3D artist - is there any model for Blender?
1
I've seen a few scripts connecting Blender to the OpenAI API. Are there any local models trained on Blender? I don't care if it "connects" to Blender. Just having something I can ask questions to locally and privately would be phenomenal.
2023-08-31T19:58:34
https://www.reddit.com/r/LocalLLaMA/comments/166l9bq/3d_artist_is_there_any_model_for_blender/
JebryyathHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166l9bq
false
null
t3_166l9bq
/r/LocalLLaMA/comments/166l9bq/3d_artist_is_there_any_model_for_blender/
false
false
self
1
null
RTX 4060 Ti 16 GB Users: Viable for 33/34b Models on ExLlama/GGML?
1
Has anyone tried using this GPU with ExLlama for 33/34b models? What's your experience? Additionally, I'm curious about offloading speeds for GGML/GGUF. Please share the tokens/s with specific context sizes. TIA!
2023-08-31T21:02:34
https://www.reddit.com/r/LocalLLaMA/comments/166mykp/rtx_4060_ti_16_gb_users_viable_for_3334b_models/
Xhehab_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166mykp
false
null
t3_166mykp
/r/LocalLLaMA/comments/166mykp/rtx_4060_ti_16_gb_users_viable_for_3334b_models/
false
false
self
1
null
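A quick sizing check suggests why 16 GB is tight for 33/34B fully on-GPU (rough numbers; real quantized files vary, and the KV cache plus CUDA context need a few GB on top):

```python
# Rough on-GPU fit check for a 34B model on a 16 GB card (approximate;
# real quantized files vary, and KV cache/CUDA context need a few GB too).
def model_weights_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

for bits in (4.0, 3.0):
    need = model_weights_gb(34, bits)
    fits = "fits" if need <= 14 else "does not fit"
    print(f"~{bits}-bit: {need:.2f} GB of weights -> {fits} with headroom on 16 GB")
```

Four-bit weights alone overshoot the card, which is why GGML/GGUF partial offload (or a 3-bit quant) is the usual route here.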
conceptofmind/Yarn-Llama-2-13b-128k · Hugging Face
1
2023-08-31T21:04:00
https://huggingface.co/conceptofmind/Yarn-Llama-2-13b-128k
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
166n016
false
null
t3_166n016
/r/LocalLLaMA/comments/166n016/conceptofmindyarnllama213b128k_hugging_face/
false
false
https://b.thumbs.redditm…BC3KLP66Tj5A.jpg
1
{'enabled': False, 'images': [{'id': '0zs2x8DfIxFl6IvCGjBc6nQCxvmLW425TKuxLZ_bNlk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=108&crop=smart&auto=webp&s=6dfe0d4329687120bca5a26454aa97241489780f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=216&crop=smart&auto=webp&s=0eebb006a38f8836dbfb99b84abee455fdfee90e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=320&crop=smart&auto=webp&s=c41068ea2066908735531212b13ce18308dd05c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=640&crop=smart&auto=webp&s=3240b908a7baddfc89cbe780727fd04e1b2d26eb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=960&crop=smart&auto=webp&s=928d842fb1f4ea90e9d5713013f8678da001ee91', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=1080&crop=smart&auto=webp&s=676ab128786b88e64ef6991b787efb72667563e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?auto=webp&s=0078ceb25a4704ea81bf3b0c91445dc043d072ca', 'width': 1200}, 'variants': {}}]}
What is the best community/group to join if I want to connect with ChatGPT/LLM application developers?
1
I'm looking for a community that is mostly or exclusively made up of **makers**/**builders** who are collaborating together and launching prototypes, say every few months. Thanks!
2023-08-31T21:30:42
https://www.reddit.com/r/LocalLLaMA/comments/166nprz/what_is_the_best_communitygroup_to_join_if_i_want/
AndreeSmothers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166nprz
false
null
t3_166nprz
/r/LocalLLaMA/comments/166nprz/what_is_the_best_communitygroup_to_join_if_i_want/
false
false
self
1
null
Seeking Opinions: Best Open-Source Model for Q/A and Summarization on Financial Documents (between 13-40B)
1
Hello guys! I'm currently on the lookout for the best open-source language model for tackling question-answering (Q/A) and summarization tasks specifically tailored to financial documents. I'm looking for models with between 13 and 40 billion parameters. I've already tested llama2-chat-13b and mpt-30b-instruct. I'd love to hear your experiences and opinions on which models have performed well in this domain and which haven't. Thanks in advance for your input!
2023-08-31T22:01:27
https://www.reddit.com/r/LocalLLaMA/comments/166oii7/seeking_opinions_best_opensource_model_for_qa_and/
GregLeSang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166oii7
false
null
t3_166oii7
/r/LocalLLaMA/comments/166oii7/seeking_opinions_best_opensource_model_for_qa_and/
false
false
self
1
null
Shoutout to thebloke for ranking on HuggingFace leaderboard with a gptq model, Genz 70B
1
[removed]
2023-08-31T22:46:19
https://www.reddit.com/r/LocalLLaMA/comments/166pn34/shoutout_to_thebloke_for_ranking_on_huggingface/
multiverse_fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166pn34
false
null
t3_166pn34
/r/LocalLLaMA/comments/166pn34/shoutout_to_thebloke_for_ranking_on_huggingface/
false
false
https://b.thumbs.redditm…z4qZ0u-rT0NY.jpg
1
null
Simplest python/local llm example?
1
I'm trying to get my head around whether this is feasible. Can I download a model from Hugging Face directly (say, Nous-Hermes) onto my machine (a Mac) and use a simple Python script to interact with it? Or must I use an intermediary like oobabooga or something to communicate with it? I'd love to find a minimal example. Thanks!
2023-08-31T23:58:01
https://www.reddit.com/r/LocalLLaMA/comments/166reh7/simplest_pythonlocal_llm_example/
FahrenheitUnrequited
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166reh7
false
null
t3_166reh7
/r/LocalLLaMA/comments/166reh7/simplest_pythonlocal_llm_example/
false
false
self
1
null
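On the minimal-example question: no intermediary UI is required. One path is `llama-cpp-python` (`from llama_cpp import Llama`, pointed at a GGUF file); another is running llama.cpp's bundled `./server -m model.gguf` and POSTing JSON to it from the standard library. The sketch below only builds the request payload so it stays runnable without a model download; the `/completion` field names match llama.cpp's server API at the time of writing, so verify against your build:

```python
import json

# Build the JSON body for llama.cpp server's /completion endpoint. Field
# names (prompt, n_predict, temperature) match llama.cpp's server API at the
# time of writing -- verify against your build. No UI layer involved.
payload = json.dumps({
    "prompt": "### Instruction: Say hello.\n### Response:",
    "n_predict": 64,
    "temperature": 0.7,
})
print(payload)
# To send it (requires `./server -m model.gguf` running locally):
# import urllib.request
# req = urllib.request.Request("http://localhost:8080/completion",
#                              payload.encode(), {"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```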
Is there a general model size where q2 quants don't say nonsense?
1
[This is a 13B. I think it's understandable since it's a q2 model, but it's still shocking in the moment. It was coherent up to this point.](https://preview.redd.it/pgom9nvv7jlb1.png?width=894&format=png&auto=webp&s=b793a5e9e9ea49197d814d321dde24444b65ca16)
2023-09-01T00:06:28
https://www.reddit.com/r/LocalLLaMA/comments/166rm7k/is_there_a_general_model_size_where_q2_quants/
multiverse_fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166rm7k
false
null
t3_166rm7k
/r/LocalLLaMA/comments/166rm7k/is_there_a_general_model_size_where_q2_quants/
false
false
https://b.thumbs.redditm…nx8EJTPjyC3E.jpg
1
null
New to LLMs and I have questions about video cards
1
Are two cards with 16GB of VRAM each basically the same as a single 32GB card? Could I run a >16GB model by spreading it across two cards, or does it need to run on one single card with enough RAM to handle it? I see MI25s on ebay for around $100, are they worth it? I can't get a $500 or $1000 card but I can spend $100, maybe $200. Would it make sense to get two MI25s?
2023-09-01T00:47:34
https://www.reddit.com/r/LocalLLaMA/comments/166slcm/new_to_llms_and_i_have_questions_about_video_cards/
timschwartz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166slcm
false
null
t3_166slcm
/r/LocalLLaMA/comments/166slcm/new_to_llms_and_i_have_questions_about_video_cards/
false
false
self
1
null
Llama 2 Platform APIs with per-token pricing - are there any I'm not aware of?
1
I'm interested in finding the best Llama 2 API service - I want to use Llama 2 as a cheaper/faster alternative to gpt-3.5-turbo in an application I'm building. I have bursty requests and a lot of time without users, so I really don't want to host my own instance of Llama 2; it's only viable for me if I can pay per-token and have someone else manage compute (otherwise I'd just use gpt-3.5-turbo!).

So far, here's my understanding of the market for hosted Llama 2 APIs:

* [Deepinfra](https://deepinfra.com/pricing) - only available option with no dealbreakers; well-priced at just over half of gpt-3.5-turbo average pricing (but currently slower than gpt-3.5-turbo, and a relatively unknown company)
* [MosaicML](https://www.mosaicml.com/inference) - no open sign-up (have to submit a request form), and pricing for llama-2-70b-chat is actually slightly higher than gpt-3.5-turbo anyway
* [Replicate](https://replicate.com/replicate/llama-2-70b-chat) - great service for image gen models, but for LLMs it's so inefficient to run on a single GPU with pay-per-second billing that my cost estimates for it are 10-100x the price of gpt-3.5-turbo
* Amazon Bedrock - not live yet, can't find pricing, unclear if it'll have Llama 2 at launch anyway (potentially depends on Jassy and Zuck making friends)

Anything else I should be aware of? Here's my current pricing table ($ per 1M tokens):

| Provider | Model Name | Input | Output | Combined (4:1 input:output assumption) |
|-----------|------------------|-------|--------|----------------------------------------|
| OpenAI | GPT-3.5 Turbo | 1.50 | 2.00 | 1.60 |
| deepinfra | llama-2-70b-chat | 1.00 | 1.00 | 1.00 |
| mosaicml | llama-2-70b-chat | 2.00 | 2.00 | 2.00 |
| replicate | llama-2-70b-chat | 0.00 | 208.84 | 41.77 |
| replicate | llama-2-13b-chat | 0.00 | 89.98 | 18.00 |
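For reference, the "Combined" column above can be reproduced from the 4:1 input:output assumption (80% of tokens billed at the input price, 20% at the output price):

```python
# Blended per-1M-token price under a 4:1 input:output token mix,
# matching the "Combined" column of the table above.
def blended_price(input_price: float, output_price: float,
                  input_share: float = 0.8) -> float:
    """Weighted average of input and output prices per 1M tokens."""
    return input_share * input_price + (1 - input_share) * output_price
```

For example, `blended_price(1.50, 2.00)` gives the 1.60 figure for gpt-3.5-turbo.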
2023-09-01T01:37:23
https://www.reddit.com/r/LocalLLaMA/comments/166tp5q/llama_2_platform_apis_with_pertoken_pricing_are/
mikachip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166tp5q
false
null
t3_166tp5q
/r/LocalLLaMA/comments/166tp5q/llama_2_platform_apis_with_pertoken_pricing_are/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gdv5Bh89JWXxDpkAlTrk_zCb-qJxrREhAWY8_SsUogU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?width=108&crop=smart&auto=webp&s=ec134d5f1c4f53b9f67d5942dbd314037af38666', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?width=216&crop=smart&auto=webp&s=1061b54db08292c4fbc9fae7d80cdb4981b4e4cb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?width=320&crop=smart&auto=webp&s=d936cb000bed7a92e6285dd0683d5d881ece9b08', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?auto=webp&s=1cc6f0f38166af4c6d724b3e743d8ec60cc03085', 'width': 512}, 'variants': {}}]}
If you feed an LLM with a fragment of its own output, it'll tend to reproduce the fragment literally.
1
I noticed an odd behavior. In order to summarize a long text I decided to iterate over bunches of paragraphs, one bunch at a time, and have the LLM generate a summary for each bunch until it's finished. It occurred to me that it would be nice to provide the summary of the previous bunch as context for each new bunch, so that the LLM can make more sense of the text to be summarized. But to my surprise the LLM refuses to create a summary of the new text; it just rewrites the summary provided as context literally. I'm using nous-hermes-llama2-13b.ggmlv3.q4_K_M, which works great for other tasks using the Instruction/Response template. I also tried a few other models (orca) and it happened there as well. My intuition is that the model is overly sensitive to its own style of text, in the sense that it triggers a very high signal mathematically speaking. Even more so because the original text I'm trying to summarize is a transcription of a spoken interview, which is obviously less structured and more chaotic. A bit disappointed by this behavior overall. Anyone else noticed it?
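One mitigation worth trying is to fence the carried-over summary behind an explicit delimiter and instruction, so the model is less tempted to treat it as the text to "summarize" (i.e. copy). A sketch of that loop, where `summarize` stands in for whatever local model call is already being used:

```python
# Chunked summarization with the previous summary explicitly marked as
# background context, separated from the new text by a delimiter.

def chunk_paragraphs(paragraphs, per_chunk=4):
    """Group paragraphs into fixed-size bunches."""
    return [paragraphs[i:i + per_chunk]
            for i in range(0, len(paragraphs), per_chunk)]

def rolling_summary(paragraphs, summarize, per_chunk=4):
    summary = ""
    for bunch in chunk_paragraphs(paragraphs, per_chunk):
        prompt = (
            "Background (context only, do NOT repeat it):\n"
            f"{summary}\n"
            "---\n"
            "Summarize ONLY the text below:\n"
            + "\n".join(bunch)
        )
        summary = summarize(prompt)
    return summary
```

No guarantee this fully cures the copying, but separating "context" from "task" with hard delimiters tends to weaken the literal-repetition attractor.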
2023-09-01T01:52:01
https://www.reddit.com/r/LocalLLaMA/comments/166u0ig/if_you_feed_an_llm_with_a_fragment_of_its_own/
Responsible_Warning3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166u0ig
false
null
t3_166u0ig
/r/LocalLLaMA/comments/166u0ig/if_you_feed_an_llm_with_a_fragment_of_its_own/
false
false
self
1
null
Converting HuggingFace Models to GGUF/GGML | Tutorial
1
2023-09-01T02:06:12
https://www.substratus.ai/blog/converting-hf-model-gguf-model/
samosx
substratus.ai
1970-01-01T00:00:00
0
{}
166ubn5
false
null
t3_166ubn5
/r/LocalLLaMA/comments/166ubn5/converting_huggingface_models_to_ggufggml_tutorial/
false
false
https://a.thumbs.redditm…HrUEn7Wq4u30.jpg
1
{'enabled': False, 'images': [{'id': 'wm8asZPwMNjKu2oMaTYZapHi2pDCrsnaUBEG7KVDlpQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?width=108&crop=smart&auto=webp&s=66b33151536e8f150f0a75d6f01a889bdf71a44a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?width=216&crop=smart&auto=webp&s=40a15fa12f410ac44e6c3f8cbd52a8f58cd3aa8b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?width=320&crop=smart&auto=webp&s=8f250150d9fe6c5e8618a9ab6286adb50f3dabb2', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?auto=webp&s=8e71385885030934a82bc08382d74696d21301c9', 'width': 512}, 'variants': {}}]}
Cheapest Llama2 chatbot solution costs only $4/mon
1
2023-09-01T02:07:38
https://news.ycombinator.com/item?id=37341332
nalaginrut
news.ycombinator.com
1970-01-01T00:00:00
0
{}
166ucqb
false
null
t3_166ucqb
/r/LocalLLaMA/comments/166ucqb/cheapest_llama2_chatbot_solution_costs_only_4mon/
false
false
default
1
null
Best method or tool for data extraction, transformation and and structuring for use in an LLM memory database?
1
Hey all, I want to clean a fairly large data file and then use it as a memory database for a chat model. I tried with code interpreter, however as the file is so large it was unsuccessful. I'm sure I've seen a few methods recently for this specific case but can't remember what they were. Would appreciate any help!
2023-09-01T02:54:37
https://www.reddit.com/r/LocalLLaMA/comments/166vbtb/best_method_or_tool_for_data_extraction/
sardoa11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166vbtb
false
null
t3_166vbtb
/r/LocalLLaMA/comments/166vbtb/best_method_or_tool_for_data_extraction/
false
false
self
1
null
easiest tool for running Code LLama on CPU instead of GPU?
1
I tried installing the Windows KoboldCPP, which seems like it might sort of work with the 7B Code Llama model, but when I load the large model, it crashes. Is there another simple way of doing it that doesn't leave me in WSL dependency hell? Any Windows-based tools for running an LLM with CPU and system RAM?
2023-09-01T03:40:53
https://www.reddit.com/r/LocalLLaMA/comments/166w9lg/easiest_tool_for_running_code_llama_on_cpu/
Cunninghams_right
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166w9lg
false
null
t3_166w9lg
/r/LocalLLaMA/comments/166w9lg/easiest_tool_for_running_code_llama_on_cpu/
false
false
self
1
null
Finetuned Open Source LLM models Marketplace?
1
I'm trying to satisfy a use case my company has for LLMs by making an application in my free time, but it needs to be open source so we can run it locally, so API calls are out of the picture. I can finetune a foundational model, but before I do, can I search for specific fine-tuned models on HuggingFace? I find their search mechanism difficult to navigate when trying to find specific fine-tuned models. I have to click randomly and read the description most of the time. Is there a marketplace, or potential for a marketplace, for specific fine-tuned models? Is there an easy no-code way to fine-tune a foundational model if I have the dataset? I want my non-dev mates to be able to play around!
2023-09-01T04:58:42
https://www.reddit.com/r/LocalLLaMA/comments/166xrfr/finetuned_open_source_llm_models_marketplace/
PigWedgion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166xrfr
false
null
t3_166xrfr
/r/LocalLLaMA/comments/166xrfr/finetuned_open_source_llm_models_marketplace/
false
false
self
1
null
Llama-2 HF 7B for downstream tasks using LoRA
1
Hey guys, I am fine-tuning Llama-2 HF 7B for downstream tasks using LoRA on 1 A100 SXM 40GB GPU. I am new to LoRA and unable to understand a few things. Can someone provide me a link to a good explanation of LoRA along with documentation of every argument it takes? Also, I am unable to understand:

1. What is the effect of lora_alpha?
2. How to decide the best rank?
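For what it's worth, `lora_alpha` is just a scaling factor on the low-rank update: the adapter adds ΔW = (lora_alpha / r) · B·A to the frozen weights, so the ratio alpha/r controls how strongly the adapter perturbs the base model. A tiny pure-Python sketch of that arithmetic (the matrix values are made up):

```python
# What lora_alpha does, in plain arithmetic: the adapter's update is
# Delta_W = (lora_alpha / r) * (B @ A), where B is (m x r), A is (r x n),
# and r is the LoRA rank.

def matmul(B, A):
    """Naive matrix product of B (m x r) and A (r x n)."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def lora_delta(B, A, lora_alpha: float, r: int):
    """Effective weight update added to the frozen layer."""
    scale = lora_alpha / r
    return [[scale * x for x in row] for row in matmul(B, A)]
```

So doubling `lora_alpha` at fixed rank doubles the adapter's influence; many recipes simply set alpha to 2×r and tune the learning rate instead.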
2023-09-01T05:02:05
https://www.reddit.com/r/LocalLLaMA/comments/166xtth/llama2_hf_7b_for_downstream_tasks_using_lora/
Excellent-Screen-836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166xtth
false
null
t3_166xtth
/r/LocalLLaMA/comments/166xtth/llama2_hf_7b_for_downstream_tasks_using_lora/
false
false
self
1
null
Sojee: My own little dual-stage prompt embedding chatbot that can be quickly customized to any particular corpus.
1
tl;dr: My business and I created a simple self-hosted Llama-2 chatbot called "Sojee" that uses two stages of prompt embedding to first classify a question into one of a number of topics, then load a prompt based on that topic and re-ask the question. I've MIT licensed the source code and published it at [https://github.com/ChiapasEDI/SojeeChatClient](https://github.com/ChiapasEDI/SojeeChatClient). This is the C#/Blazor Server front-end; it relies on a running text-generation-webui instance as a backend with the API enabled. This chatbot can be customized to just about any purpose. In fact, I made a 35+ minute video [https://youtu.be/pjNjdcRf2TE](https://youtu.be/pjNjdcRf2TE) of me going through the specific business requirements (self-hosted, accurate and easily maintained), some basics of AI self-hosting (about 10 minutes), compiling and running Sojee (about another 10 minutes), and then customizing Sojee so that instead of answering questions on the product's corpus, it answers questions on several topics about diamonds, copied and pasted straight from the Diamond Wikipedia page. text-generation-webui (thank you u/Oobabooga and contributors!) serves as the back-end with its very simple webhook API, and by separating the front-end and back-end, it's very easy to swap out the model I use for another. I actually found OpenAssistant Llama-2 13B Orca 8K (8-bit quantization) to be the best all around, and this was after a *lot* of experimentation. But the *correct* model is really whatever understands your corpus text and is able to answer questions on it. Since this chat client supports quickly switching between different topics, the supported context size is less of a barrier - if you can split your corpus into 10-15 topics, then you can have the full context size dedicated to each topic.
However, if you ask a question that spans *two* topics, get ready for some hallucination, because only one topic gets loaded in at a time. One way to mitigate this is to dedicate some percentage of your context size in *each* topic to generalized info common to all of the topics - this way, the language model doesn't have to guess as much about what the info in those other topics might be about. Since you can carve your corpus into multiple topics, and thus use smaller embedded prompts, this also enables smaller GPUs to run Sojee, or lets you dedicate more VRAM to answering the questions. I found a good rule of thumb is to have the model consume 1/3rd, and no more than 1/2, of available GPU RAM; the rest the model will use for the context.

A few notes about "carving" your own corpus into topics:

1. I basically allocated about 650 tokens for question and answer space from the 8192 token budget.
2. The default Sojee topics "business" and "automation" use much less than the 8192 tokens, but the "reference" topic (the product's API reference) used every bit of it - thus the instructions in the index.razor page to reload the interface for each question on that topic, as there's no token budget for a real conversation - it can answer a single question before the embedded prompt starts to lose text at the top.
3. I note it very briefly in the video, but there are a lot of hyperparameters that can be overridden on a per-topic basis - temperature, top_k, etc. - just by putting temperature:0.7 before the dashed line in the embedded prompt. I put seed:1 in each topic for QA purposes, which helps to predict what the answer to any particular question is going to be.
4. The "initial" topic is required, and sharp eyes may notice that the text for "business" and "initial" is largely the same, the difference being that the "initial" prompt has some extra text directing the AI model to classify a question instead of answering it.
5. One "topic" got cut late in the game - I had a 12KB PowerShell script and early on, it seemed to be able to extract functional bits of it following a template I provided, which I thought was pretty cool. On second thought, however, I questioned how practical this was, but the real dealbreaker was that 10% of the time the AI model would hallucinate brand new API calls. I found that echoing back the syntax of a single API call, like in the "reference" topic, was about as close to coding as I would trust the AI model to go without it hallucinating stuff.

I don't want to make any big claims about Sojee - it's very buggy, needs actual code comments and needs more work! Honestly, it was much more of a learning exercise to prove to myself that I've learned something in the last couple months, and I need to move on to other things right now. But, seeing that there is a lot of interest in self-hosted corporate chatbots, I thought someone might get something useful out of it if I shared my own small discoveries.
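The two-stage flow described above reduces to a small skeleton: stage 1 asks the model to classify the question into a topic, stage 2 re-asks it with that topic's embedded prompt loaded in. A hedged sketch where `ask_model` stands in for the text-generation-webui API call and the topic names/prompts are illustrative:

```python
# Dual-stage prompt routing: classify, then answer with the topic prompt.
# Prompts here are placeholders for the real per-topic embedded prompts.

TOPIC_PROMPTS = {
    "business": "You answer questions about the company and its product...",
    "reference": "You answer questions about the product's API reference...",
}

def route(question, ask_model, topic_prompts=TOPIC_PROMPTS, initial="business"):
    topics = ", ".join(topic_prompts)
    topic = ask_model(
        f"Classify this question into one of [{topics}]. "
        f"Answer with the topic name only.\n{question}"
    ).strip().lower()
    if topic not in topic_prompts:      # fall back if the model rambles
        topic = initial
    return topic, ask_model(f"{topic_prompts[topic]}\n\n{question}")
```

The fallback mirrors the required "initial" topic: if classification fails, the question still gets answered with a default prompt.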
2023-09-01T05:13:03
https://www.reddit.com/r/LocalLLaMA/comments/166y1a5/sojee_my_own_little_dualstage_prompt_embedding/
alittleteap0t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166y1a5
false
null
t3_166y1a5
/r/LocalLLaMA/comments/166y1a5/sojee_my_own_little_dualstage_prompt_embedding/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PzrQBGiSqgMVDQ9iBiCzmnZjQXsOSIUdtT7a-T5T6BE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=108&crop=smart&auto=webp&s=549202db489f84de1928e07171bee1bf5ffc749b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=216&crop=smart&auto=webp&s=a6fb6b30c3997254c974208ab9dc60b335439871', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=320&crop=smart&auto=webp&s=fea6fdba32309a4a619a6ebd71596f2d82ecf57f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=640&crop=smart&auto=webp&s=25688c8059d9bb086c77148b755466ce0cbe379c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=960&crop=smart&auto=webp&s=57a01d0a9ec48ed5a56bc3460c93b66c22f7bac5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=1080&crop=smart&auto=webp&s=8e67783e75ada275b902f90805b5c91c5756b2ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?auto=webp&s=8a1e01b69a06ad7db60351d9243a3b33896bee27', 'width': 1200}, 'variants': {}}]}
Abuse Detection by LLM
1
I am hitting a roadblock when I try to get an LLM (Llama) to recognize hate speech or offensive words. It refuses to comply with the prompts. The idea is to identify hate speech, flag it, and replace it with non-offensive words. Is there any way around this?
2023-09-01T06:09:24
https://www.reddit.com/r/LocalLLaMA/comments/166z0rm/abuse_detection_by_llm/
thesithlord27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166z0rm
false
null
t3_166z0rm
/r/LocalLLaMA/comments/166z0rm/abuse_detection_by_llm/
false
false
self
1
null
Tips for fine tuning Replit-Code-3B
1
I'm planning to fine tune (not LoRA but full fine tuning) the Replit-Code-3B model on a proprietary API (in Python) for code completion. I'm planning to integrate it into a VSCode extension like GitHub Copilot. The reason I chose Replit-Code-3B was ALiBi, so I can scale the context window when doing code completion. Since I'm fine tuning the whole model, I assume that aside from my sample data (which is probably small), I should include a generic Python dataset to make sure fine tuning does not cause deterioration in the model's general Python performance. However, I'm not sure what Python dataset I should use. Do you have any ideas or other tips I should consider? Thanks a lot!
2023-09-01T07:01:14
https://www.reddit.com/r/LocalLLaMA/comments/166zvvf/tips_for_fine_tuning_replitcode3b/
Acrobatic-Site2065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166zvvf
false
null
t3_166zvvf
/r/LocalLLaMA/comments/166zvvf/tips_for_fine_tuning_replitcode3b/
false
false
self
1
null
Chat history and White Spaces after response
1
So I have been trying to build a chatbot using the Llama-2-7b GPTQ model. I found gptq-4bit-128g-actorder_True to be doing well for a single question and answer. Now I wanted the model to remember the conversation, so I just used a loop to keep feeding the response back to it as part of the prompt, in the same template as given in the huggingface model card. But the issue is it keeps exceeding the 2048 token limit of the model after around 2-3 questions. Is there a way to increase this limit, or a workaround? Once the limit is exceeded it just gives me white spaces as output, or something like: [[[][[[[... Another issue I have been facing is the white spaces. Even without the conversation history, sometimes it gives me white spaces at the end of my response. I think this is also a reason why I am exceeding the token limit when using the history. Is there any way to stop the model from generating white spaces?
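A common workaround for the 2048-token ceiling is a rolling window: keep only the most recent turns that fit a token budget. A sketch, where `len(text.split())` is a crude stand-in for a real tokenizer count (an underestimate — in practice, count with the model's own tokenizer):

```python
# Rolling conversation window: drop the oldest turns until the
# remaining history fits under a token budget.

def count_tokens(text: str) -> int:
    return len(text.split())  # rough proxy, not a real tokenizer

def fit_history(turns, budget=1800):
    """Keep the newest turns whose combined count fits the budget."""
    kept, total = [], 0
    for turn in reversed(turns):          # walk newest-first
        cost = count_tokens(turn)
        if total + cost > budget:
            break
        kept.append(turn)
        total += cost
    return list(reversed(kept))           # restore chronological order
```

The budget is set below the model's limit (e.g. 1800 of 2048) to leave room for the system prompt and the new generation.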
2023-09-01T07:03:08
https://www.reddit.com/r/LocalLLaMA/comments/166zx44/chat_history_and_white_spaces_after_response/
IamFuckinTomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166zx44
false
null
t3_166zx44
/r/LocalLLaMA/comments/166zx44/chat_history_and_white_spaces_after_response/
false
false
self
1
null
Why are the answers getting dumber as the discussion continues?
1
New to the topic. Why, when I start a discussion with a chatbot, are its answers good enough - almost perfect grammar, good consistency of words and sentences - but as the discussion continues, it begins to give out nonsense: mangled word agreement and malformed sentences, although they still don't lose their meaning and it retains the thread of the narrative?

I use derivatives of Llama-2, such as Airoboros-70B, WizardLM-30B and others. But in one way or another, this manifests in all models. I don't consider options below 30B at all, because it is useless to communicate with them on general topics - their answers do not stand up to any criticism.
2023-09-01T07:21:20
https://www.reddit.com/r/LocalLLaMA/comments/167088h/why_are_the_answers_getting_dumber_as_the/
Hatred_grows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167088h
false
null
t3_167088h
/r/LocalLLaMA/comments/167088h/why_are_the_answers_getting_dumber_as_the/
false
false
self
1
null
psa: vLLM gptq branch is twice as fast as llama.cpp
1
*but it's a massive pain to set up

RTX 3060, prompt: "USER: write a book about ducks\n\nASSISTANT:" temp 0.8 topp 0.95 vicuna 16k 13b, ymmv:

vllm:

    INFO 09-01 09:24:30 llm_engine.py:394] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 23.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 16.3%, CPU KV cache usage: 0.0%

llama.cpp (40/43 layers offloaded, cublas):

    llama_print_timings: load time = 7965.47 ms
    llama_print_timings: sample time = 204.59 ms / 917 runs ( 0.22 ms per token, 4482.16 tokens per second)
    llama_print_timings: prompt eval time = 268.07 ms / 18 tokens ( 14.89 ms per token, 67.15 tokens per second)
    llama_print_timings: eval time = 72101.74 ms / 916 runs ( 78.71 ms per token, 12.70 tokens per second)
    llama_print_timings: total time = 72704.57 ms

linux or WSL required. gptq branch: https://github.com/chu-tianxiang/vllm-gptq

streaming decoding returns garbage, but both llm and llm_engine work
2023-09-01T07:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1670e97/psa_vllm_gptq_branch_is_twice_as_fast_as_llamacpp/
LoSboccacc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1670e97
false
null
t3_1670e97
/r/LocalLLaMA/comments/1670e97/psa_vllm_gptq_branch_is_twice_as_fast_as_llamacpp/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xto1_SHsAuaFcYccSHNPrBvMZC281tY-WYnO2-LaI6c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=108&crop=smart&auto=webp&s=0293c32c3c1ef4aedb54b38ad09e094ac4f05bf5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=216&crop=smart&auto=webp&s=2a336131b74ef5dad80f296f2dab41cf0d6eb472', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=320&crop=smart&auto=webp&s=e9a4fbeb2ee21378d5100ae553fc6dbbd97dc93e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=640&crop=smart&auto=webp&s=69b7decd849e08e1d2ac76bbd480215508cb14e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=960&crop=smart&auto=webp&s=bde753fa3a3df71d46271bfdc59a797f71e17023', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=1080&crop=smart&auto=webp&s=459fb631385630c585a9afba9a4d520bddf5b9f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?auto=webp&s=add70bcddb97510bf260bdee7f889ae49b6c681f', 'width': 1200}, 'variants': {}}]}
continue button question - do I really need to press it after 10 list..?
1
Hi guys, firstly - so excited that we can run GPT-style models locally, it's incredibly fun! I was testing 7B models to fit my 6GB VRAM and they are fun and fast, but now I'm mainly using airoboros-65b-gpt4 -> with 64GB RAM, I'm able to get nice answers in a little time... like I ask and then, after a few minutes, I read the whole answer - that's OK for me, BUT...

There is a continue button which I have to click. Like when I ask it to list 20 options, it lists 10 and then stops; I need to click continue to list the other 10. Why? Is there any way to get rid of it, or is it some AI thing?

BTW I'm using Pinokio -> great automated installation of AI under Windows.

Thank you for your help!
2023-09-01T08:01:52
https://www.reddit.com/r/LocalLLaMA/comments/1670wls/continue_button_question_do_i_really_need_to/
ovnf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1670wls
false
null
t3_1670wls
/r/LocalLLaMA/comments/1670wls/continue_button_question_do_i_really_need_to/
false
false
self
1
null
Fine-Tuning Llama-2-13b Model: Is a 5-Hour Training Time Reasonable?
1
I have an A100 80GB GPU, and I've set my training with the following parameters:

- model_name: "meta-llama/Llama-2-13b-hf"
- use_4bit: True
- per_device_train_batch_size: 8
- optim: "paged_adamw_32bit"
- learning_rate: 2e-4
- max_seq_length: 1024
- num_training_epochs: 3

I've started the training, and it's showing that it will take approximately 5 hours to complete. Since I'm fine-tuning such a large model for the first time, I'm not sure if this is a good time or if it's considered too long.
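Whether 5 hours is reasonable mostly comes down to total optimizer steps times seconds per step. A back-of-envelope sketch — the dataset size and per-step time below are hypothetical, plug in your own numbers (the trainer's progress bar usually shows seconds per iteration):

```python
# Back-of-envelope training time: steps = (examples // batch) * epochs,
# hours = steps * sec_per_step / 3600. Example numbers are made up.

def training_hours(num_examples, batch_size, epochs, sec_per_step):
    steps = (num_examples // batch_size) * epochs
    return steps * sec_per_step / 3600

# e.g. 12,000 examples, batch 8, 3 epochs at ~4 s/step comes to ~5 hours
```

If your own numbers line up with the 5-hour estimate, the run is behaving normally; if they're far off, look at data loading or sequence-length padding overhead.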
2023-09-01T08:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1670wwb/finetuning_llama213b_model_is_a_5hour_training/
Pritish-Mishra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1670wwb
false
null
t3_1670wwb
/r/LocalLLaMA/comments/1670wwb/finetuning_llama213b_model_is_a_5hour_training/
false
false
self
1
null
LLM Engine with GPTQ, OpenAI API and GPU first
1
I'm looking for a specific piece of software to run an LLM behind an API (locally). Currently I need the following:

- GPTQ support
- single and multi GPU
- OpenAI-like API (drop-in replacement)
- proper threading

So far I tried the following:

- oobabooga: no threading, can only handle a single API call at a time. Also there seem to be some bugs here and there with the OpenAI API variation
- aphrodite engine: no GPTQ support yet; aside from that it is blazingly fast
- localai: looked into this starting yesterday, but GPU support seems experimental and I prefer no Docker reliance

Do you guys have any other options in mind which I can look at?
2023-09-01T08:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1671r1z/llm_engine_with_gptq_openai_api_and_gpu_first/
AWAS666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1671r1z
false
null
t3_1671r1z
/r/LocalLLaMA/comments/1671r1z/llm_engine_with_gptq_openai_api_and_gpu_first/
false
false
self
1
null
LMoE: Airoboro's MoE implementation
1
Found no Reddit post about it. From the airoboros [README](https://github.com/jondurbin/airoboros/tree/main#lmoe): >LMoE is the simplest architecture I can think of for a mixture of experts. It doesn't use a switch transformer, doesn't require slicing and merging layers with additional fine-tuning, etc. It just dynamically loads the best PEFT/LoRA adapter model based on the incoming request. > >By using this method, we can theoretically crowdsource generation of dozens (or hundreds/thousands?) of very task-specific adapters and have an extremely powerful ensemble of models with very limited resources on top of a single base model (llama-2 7b/13b/70b). Seems really promising.
2023-09-01T09:16:53
https://www.reddit.com/r/LocalLLaMA/comments/16724y3/lmoe_airoboros_moe_implementation/
noioiomio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16724y3
false
null
t3_16724y3
/r/LocalLLaMA/comments/16724y3/lmoe_airoboros_moe_implementation/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Bc__o2V1-hocKmae0-c4X66wQnibBEb9e6D2OseCBU8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=108&crop=smart&auto=webp&s=076daf3a51c41beac862a251215034b912824285', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=216&crop=smart&auto=webp&s=0ec31ea4491ba35346cb7460aeb2c7b76f70a710', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=320&crop=smart&auto=webp&s=d67189c86637253b9cf770397594175afe44b447', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=640&crop=smart&auto=webp&s=e2236e1f44d3e4b9a758e6fd123d5f5ef14c926f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=960&crop=smart&auto=webp&s=865b9b8a114934a1b292ae58d87b7acc54fde2b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=1080&crop=smart&auto=webp&s=1e42df67c90a0e922911f8752a124e059a1d0b08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?auto=webp&s=3a94c9b8822872e1b8f92c8a12f5e6d55210f9f9', 'width': 1200}, 'variants': {}}]}
Fine-Tuning Llama-2-13b Model: Is a 5-Hour Training Time Reasonable?
1
[removed]
2023-09-01T09:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1672vfc/finetuning_llama213b_model_is_a_5hour_training/
Pritish-Mishra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1672vfc
false
null
t3_1672vfc
/r/LocalLLaMA/comments/1672vfc/finetuning_llama213b_model_is_a_5hour_training/
false
false
self
1
null
Dataset from C++ and VB for finetuning
1
Hi, I have code written in C++ and VB that I use to program microcontrollers & other electronics. How should I format my code so it can be used for finetuning? Should I convert the whole codebase into strings and turn it into something like the code-alpaca format? Is there any tool that can convert my code into a dataset?
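One way to do the conversion yourself: emit one JSON record per source file, with a per-file description as the instruction. The field names below follow the alpaca convention; the instruction text is something you'd write (or generate) per file — the example values are illustrative:

```python
import json

# Turn (instruction, code) pairs into code-alpaca-style records and
# write them out as a JSON array, the format most finetuning scripts ingest.

def to_alpaca(instruction: str, code: str, inp: str = "") -> dict:
    return {"instruction": instruction, "input": inp, "output": code}

def write_dataset(records, path):
    """records: iterable of (instruction, code) or (instruction, code, input)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([to_alpaca(*r) for r in records], f, indent=2)
```

From there it's mostly manual work: the quality of the per-file instructions matters far more than the conversion mechanics.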
2023-09-01T10:00:31
https://www.reddit.com/r/LocalLLaMA/comments/1672w1h/dataset_from_c_and_vb_for_finetuning/
alelallele
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1672w1h
false
null
t3_1672w1h
/r/LocalLLaMA/comments/1672w1h/dataset_from_c_and_vb_for_finetuning/
false
false
self
1
null
Getting real tired of these NVIDIA drivers
1
Just wanted to make a post to complain, really. It's already absurd that I've had to stay on an NVIDIA driver from March just for better performance. But today, I had to install the latest driver to get significantly improved performance in *Starfield*. Normally, I don't bother with the Game Ready Drivers, because they don't usually help that much with the games they're made for and sometimes introduce bugs into others. But this one gives you a solid 10 FPS or so, so I felt it was warranted. Only to see my ExLlama performance in Ooba drop to llama.cpp levels. So now, I'm tweaking settings in *Starfield* to eke out enough FPS to make up for switching back. Anyone know a way around this problem, by any chance? The whole thing is giving me flashbacks to the early 2000s, I swear to God...
2023-09-01T10:10:00
https://www.reddit.com/r/LocalLLaMA/comments/1673291/getting_real_tired_of_these_nvidia_drivers/
smile_e_face
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1673291
false
null
t3_1673291
/r/LocalLLaMA/comments/1673291/getting_real_tired_of_these_nvidia_drivers/
false
false
self
1
null
Guys what should i choose ?
1
2023-09-01T11:29:38
https://i.redd.it/5xl2ca2hqmlb1.png
Merchant_Lawrence
i.redd.it
1970-01-01T00:00:00
0
{}
1674jhk
false
null
t3_1674jhk
/r/LocalLLaMA/comments/1674jhk/guys_what_should_i_choose/
false
false
https://b.thumbs.redditm…D_A1pva8B40M.jpg
1
{'enabled': True, 'images': [{'id': '_zMHMoOHd7-zzEhUj7m9tE1LJLwqCZXeiF64p88lFrs', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/5xl2ca2hqmlb1.png?width=108&crop=smart&auto=webp&s=bb5b1e0d137be30c53b76f51be29ebb37b48584e', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/5xl2ca2hqmlb1.png?width=216&crop=smart&auto=webp&s=41c7c02d2118f92dc027e1ed1b893ec30c129889', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/5xl2ca2hqmlb1.png?width=320&crop=smart&auto=webp&s=904edbde1bac901e5c71ba4bcbef7ad82f48a726', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/5xl2ca2hqmlb1.png?width=640&crop=smart&auto=webp&s=cffba6932adf77d0138beebb383fd51dfb0f656d', 'width': 640}, {'height': 650, 'url': 'https://preview.redd.it/5xl2ca2hqmlb1.png?width=960&crop=smart&auto=webp&s=5a0204e4cbc877f05bc9c2df65d0a88811baf796', 'width': 960}], 'source': {'height': 671, 'url': 'https://preview.redd.it/5xl2ca2hqmlb1.png?auto=webp&s=d83f8fa5a2abf385abe03d8d05c0ec380b49440a', 'width': 990}, 'variants': {}}]}
How to load Pytorch Raw LLMs models ?
1
[removed]
2023-09-01T11:37:10
https://www.reddit.com/r/LocalLLaMA/comments/1674oy1/how_to_load_pytorch_raw_llms_models/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1674oy1
false
null
t3_1674oy1
/r/LocalLLaMA/comments/1674oy1/how_to_load_pytorch_raw_llms_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fm8o6JYKIqFJhNSvDfBOR5BHCQoNc6UiBxUb6l_Riyw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?width=108&crop=smart&auto=webp&s=86b9526935141d736987ef8662e109bfa241099b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?width=216&crop=smart&auto=webp&s=4792b850ee297dd7ea255b26eb392a427070fd78', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?width=320&crop=smart&auto=webp&s=58a7f875bee446b9827bb34dea9ef4e95ee73298', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?width=640&crop=smart&auto=webp&s=a28c723d58ce23313bd60b2c14b08b3081482750', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?width=960&crop=smart&auto=webp&s=bf8242d211a229558ad82a2a8721841e191da9ec', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?width=1080&crop=smart&auto=webp&s=e729a2c4c12c1a4cd28bcdba923f7d9d80927f95', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ykzHf971NEmjiqjfKj_VzOnuCJqypHQio7h5NBxEZdw.jpg?auto=webp&s=862ca5a115f976e5b6176549f68a537065a2d43d', 'width': 1200}, 'variants': {}}]}
Vicuna training question (about training time inference)
1
Hi, while trying to train a local chat model the Vicuna way, I'm confused about the following points. If I understand correctly, Vicuna masks the user's turns when building the target (label): given chat data between users A and B — A1, B1, A2, B2, A3, B3 — the target becomes ###, B1, ###, B2, ###, B3. But during training, it seems the model's outputs given A1 are A1, b1, a2, b2, a3, b3, and training pushes b1, b2, b3 to be close to B1, B2, B3. What's unclear to me: b2 and b3 are generated given a2 and a3, so there are many cases where b2 and b3 are not close to B2 and B3 yet are still good sentences. Even more, because a2 and a3 can lead the conversation somewhere totally different, B2 and B3 may not be good targets at all in many cases. How does this way of training work?
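Two points may resolve the confusion. First, training uses teacher forcing: each B token is predicted from the *ground-truth* prefix (A1, B1, A2, ...), not from text the model generated itself, so b2 is never conditioned on a model-written a2. Second, the masking is typically implemented by setting user-turn labels to the loss's ignore index. A minimal, hedged sketch — token ids and span bookkeeping here are illustrative, not Vicuna's actual code:

```python
IGNORE_INDEX = -100  # PyTorch's CrossEntropyLoss skips positions with this label

def build_labels(token_ids, turn_spans):
    """turn_spans: list of (start, end, speaker) covering token_ids.
    Only the assistant's ("B") tokens keep their ids; user tokens are masked,
    so the loss is computed exclusively on B1, B2, B3."""
    labels = [IGNORE_INDEX] * len(token_ids)
    for start, end, speaker in turn_spans:
        if speaker == "B":
            labels[start:end] = token_ids[start:end]
    return labels

# Toy conversation: A1 A1 B1 B1 A2 A2 B2 B2 (two tokens per turn)
tokens = [10, 11, 20, 21, 12, 13, 22, 23]
spans = [(0, 2, "A"), (2, 4, "B"), (4, 6, "A"), (6, 8, "B")]
labels = build_labels(tokens, spans)
# -> [-100, -100, 20, 21, -100, -100, 22, 23]
```

The model still *attends* to the masked A tokens as context; they just contribute no loss term.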
2023-09-01T11:51:32
https://www.reddit.com/r/LocalLLaMA/comments/1674zb4/vicuna_training_question_about_training_time/
Realistic_Carrot_438
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1674zb4
false
null
t3_1674zb4
/r/LocalLLaMA/comments/1674zb4/vicuna_training_question_about_training_time/
false
false
self
1
null
What models can I use speculative sampling with?
1
What kind of draft and target models can we use with speculative sampling? Can I use this in conjunction with adaptors (QLoRA) on one or both of the models?
2023-09-01T12:16:06
https://www.reddit.com/r/LocalLLaMA/comments/1675ihh/what_models_can_i_use_speculative_sampling_with/
LiquidGunay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1675ihh
false
null
t3_1675ihh
/r/LocalLLaMA/comments/1675ihh/what_models_can_i_use_speculative_sampling_with/
false
false
self
1
null
Check out open source AI Agent community with LLAM 2 supported
1
2023-09-01T12:29:12
https://illa.ai/
silencerxyz
illa.ai
1970-01-01T00:00:00
0
{}
1675shu
false
null
t3_1675shu
/r/LocalLLaMA/comments/1675shu/check_out_open_source_ai_agent_community_with/
false
false
default
1
null
How to setup a chat like service for many users?
1
I am able to get the llama.cpp server working for multiple users for single LLM calls, but can't get chat to work, even for a single user. Any tutorial or alternative would be appreciated. Additionally, could someone link me to a resource on how I would buffer requests if there are too many queries to the model, i.e. set up some sort of queue for users? (I'm learning how to write APIs but still a noob.)
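For the queueing part, one common pattern is a single worker draining an asyncio queue, so concurrent users wait their turn instead of hitting the backend at once. A hedged sketch — `run_model` is a stand-in for whatever llama.cpp server call you actually make, and the queue size is an arbitrary back-pressure limit:

```python
import asyncio

async def run_model(prompt: str) -> str:
    await asyncio.sleep(0)          # placeholder for the real inference call
    return f"echo: {prompt}"

async def worker(queue: asyncio.Queue):
    """Single consumer: serializes all model calls in FIFO order."""
    while True:
        prompt, fut = await queue.get()
        fut.set_result(await run_model(prompt))
        queue.task_done()

async def main():
    queue: asyncio.Queue = asyncio.Queue(maxsize=8)  # cap pending requests
    asyncio.create_task(worker(queue))
    futs = []
    for prompt in ["hi", "there"]:
        fut = asyncio.get_running_loop().create_future()
        await queue.put((prompt, fut))  # blocks when the queue is full
        futs.append(fut)
    return [await f for f in futs]

results = asyncio.run(main())
```

In a real API server each HTTP handler would put its request on the queue and await its future; frameworks like FastAPI make this straightforward.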
2023-09-01T12:47:21
https://www.reddit.com/r/LocalLLaMA/comments/16766pb/how_to_setup_a_chat_like_service_for_many_users/
LiquidGunay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16766pb
false
null
t3_16766pb
/r/LocalLLaMA/comments/16766pb/how_to_setup_a_chat_like_service_for_many_users/
false
false
self
1
null
They told me to run GPT at home... now it sounds terrible.
1
2023-09-01T13:25:31
https://v.redd.it/us3ukp53bnlb1
Nondzu
/r/LocalLLaMA/comments/16773sf/they_told_me_to_run_gpt_at_home_now_it_sounds/
1970-01-01T00:00:00
0
{}
16773sf
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/us3ukp53bnlb1/DASHPlaylist.mpd?a=1696253135%2CNmUxOTNhNDFlNzJlZDUyYzM1ZjZkZDAzMmY3OTY1NzRkMjhmOWI4Zjc5N2FmMGM3OTkzMjkyMDMzYzFiYjZkZQ%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/us3ukp53bnlb1/DASH_1080.mp4?source=fallback', 'height': 1920, 'hls_url': 'https://v.redd.it/us3ukp53bnlb1/HLSPlaylist.m3u8?a=1696253135%2CMTkxZWNlY2VhZTdmMTE2N2I1NzY3YWYzNjRjNjhhZmQ5OGE1ZGEwMWJiNjU1MjZiYTNjYjJmNTU0ZjE2ZjdlMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/us3ukp53bnlb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_16773sf
/r/LocalLLaMA/comments/16773sf/they_told_me_to_run_gpt_at_home_now_it_sounds/
false
false
https://b.thumbs.redditm…12xVtAn64qZY.jpg
1
{'enabled': False, 'images': [{'id': 'DyHl139pFxSfV0vGewsML6eEa2ixvvtyssIOSiwEaV0', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?width=108&crop=smart&format=pjpg&auto=webp&s=0609cdfc9f8f92ef9a6ba7324a5c9aa337be836f', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?width=216&crop=smart&format=pjpg&auto=webp&s=04263abb359ddd99eb7d95fd922d118e8b497b66', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?width=320&crop=smart&format=pjpg&auto=webp&s=fc83bc0a1c9effdce78441a449dd4884bc8bebe7', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?width=640&crop=smart&format=pjpg&auto=webp&s=740756c2e982677dfc0d7caa8f1fea66cbbf5d42', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?width=960&crop=smart&format=pjpg&auto=webp&s=41589ad6ff431e2bcd50a283342e32fea730b0ba', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7733127b5512936bd3853d41ae766daf1be2b762', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/J4edBlSGD7DlUJqBucbiDy6okCTyc4qY8slesZBSYPg.png?format=pjpg&auto=webp&s=c1b3c9a3dab61ccaaca7754c0dded1c2fb1617a4', 'width': 1080}, 'variants': {}}]}
Can anyone tell me why when using cublas the model apparently has two or more extra layers when compared with clblas?
1
[removed]
2023-09-01T14:36:00
https://www.reddit.com/r/LocalLLaMA/comments/1678vrc/can_anyone_tell_me_why_when_using_cublas_the/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1678vrc
false
null
t3_1678vrc
/r/LocalLLaMA/comments/1678vrc/can_anyone_tell_me_why_when_using_cublas_the/
false
false
self
1
null
Some guidance with choosing a finetuning method/library, please?
1
With all the different model architectures and training methods coming out + quantization to complicate things further, I’m a bit lost. For inference, I have a 3060 with 12GB of VRAM and mostly run quantized 13B GPTQ models from TheBloke, SuperHOT variants for the 8k max context size when available. I’d like to use whatever finetuning method would allow me to merge the resulting weights back into one of these models. Ideally, it would also run on my GPU, but I can rent a Paperspace instance or something if necessary. Any guidance would be appreciated, tried a few LoRA and QLoRA libraries without much luck.
2023-09-01T15:00:38
https://www.reddit.com/r/LocalLLaMA/comments/1679imo/some_guidance_with_choosing_a_finetuning/
GeneriAcc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1679imo
false
null
t3_1679imo
/r/LocalLLaMA/comments/1679imo/some_guidance_with_choosing_a_finetuning/
false
false
self
1
null
Does 4-bit hurt a model a lot ?
1
How much does 4-bit quantization affect LLM output quality?
2023-09-01T15:24:53
https://www.reddit.com/r/LocalLLaMA/comments/167a5pb/does_4bit_hurt_a_model_a_lot/
snwfdhmp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167a5pb
false
null
t3_167a5pb
/r/LocalLLaMA/comments/167a5pb/does_4bit_hurt_a_model_a_lot/
false
false
self
1
null
Trying to run llama 2 on Raspberry PI4 B 4GB RAM
1
[removed]
2023-09-01T15:33:10
https://www.reddit.com/r/LocalLLaMA/comments/167adf3/trying_to_run_llama_2_on_raspberry_pi4_b_4gb_ram/
LaurensWissels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167adf3
false
null
t3_167adf3
/r/LocalLLaMA/comments/167adf3/trying_to_run_llama_2_on_raspberry_pi4_b_4gb_ram/
false
false
self
1
null
Running llama 2 model on my Raspberry PI4 with 4GB RAM
1
[removed]
2023-09-01T15:36:29
https://www.reddit.com/r/LocalLLaMA/comments/167agfm/running_llama_2_model_on_my_raspberry_pi4_with/
LaurensWissels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167agfm
false
null
t3_167agfm
/r/LocalLLaMA/comments/167agfm/running_llama_2_model_on_my_raspberry_pi4_with/
false
false
self
1
null
Trying to run llama2 7B on Raspberry PI4 4GB RAM.
1
[removed]
2023-09-01T15:42:15
https://www.reddit.com/r/LocalLLaMA/comments/167alra/trying_to_run_llama2_7b_on_raspberry_pi4_4gb_ram/
LaurensWissels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167alra
false
null
t3_167alra
/r/LocalLLaMA/comments/167alra/trying_to_run_llama2_7b_on_raspberry_pi4_4gb_ram/
false
false
self
1
null
Trying to run llama2 7B on Raspberry PI4 4GB RAM.
1
[removed]
2023-09-01T15:43:22
https://www.reddit.com/r/LocalLLaMA/comments/167amqz/trying_to_run_llama2_7b_on_raspberry_pi4_4gb_ram/
LaurensWissels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167amqz
false
null
t3_167amqz
/r/LocalLLaMA/comments/167amqz/trying_to_run_llama2_7b_on_raspberry_pi4_4gb_ram/
false
false
self
1
null
Good news - Fine-tuning is improving more than just HumanEval results
1
We are seeing some evidence that recent fine-tuning efforts are greatly improving model performance on challenging out-of-sample LeetCode problems! See below for WizardCoder / Phind vs. CodeLlama on over ~400 recent LeetCode problems. It's good to see confirmation on an orthogonal dataset like this, as it is really easy to over-fit a single dataset. I personally am a lot less skeptical about these methodologies now. https://preview.redd.it/ashb8eub4olb1.png?width=1424&format=png&auto=webp&s=0afb88792faee6684b047e292ccfa279cd8cfc99 *\*Note, the analysis framework still needs hardening; if you'd like to help out with model inference / evaluation, please check the repo here -* [*https://github.com/emrgnt-cmplxty/zero-shot-replication*](https://github.com/emrgnt-cmplxty/zero-shot-replication)
2023-09-01T16:10:27
https://www.reddit.com/r/LocalLLaMA/comments/167bcrx/good_news_finetuning_is_improving_more_than_just/
docsoc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167bcrx
false
null
t3_167bcrx
/r/LocalLLaMA/comments/167bcrx/good_news_finetuning_is_improving_more_than_just/
false
false
https://b.thumbs.redditm…Xwh3DyreD5IE.jpg
1
{'enabled': False, 'images': [{'id': 'sP5e9zh0_bEoq0UZQkuR2inqgR16dGDoZc6pDD75ufY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?width=108&crop=smart&auto=webp&s=1659b30384771d03620883a5b0a2482cc8fa859f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?width=216&crop=smart&auto=webp&s=74c91bf5bedd364f1565c1cf688ca00e1f32cc61', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?width=320&crop=smart&auto=webp&s=f1177f9426dea232a3d1444448477c27e138391e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?width=640&crop=smart&auto=webp&s=306691ba38f10138a3846f18692194119439cdae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?width=960&crop=smart&auto=webp&s=a284e3c69d19b06ae337934b08fbcdf5d3417ff6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?width=1080&crop=smart&auto=webp&s=fb2600022b238fde6d367fe61aa3a87683f780be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tULe496XTwLkVz3h1oezh7Ddy5K43uj1vRGd4qNvNuw.jpg?auto=webp&s=5835903aafe8818bc2334a8827300f5951a7ce18', 'width': 1200}, 'variants': {}}]}
Finetune llama2 chat 7B 4bit on windows
1
Hi, I am trying to fine-tune llama2-7B-chat with 4-bit quantization on a Windows 11 machine. I am struggling with bitsandbytes (0.41.0) since it is not compiling with GPU support. I tried to modify the main.py file in bitsandbytes as stated [here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) but it still does not work. This is the output of `python -m bitsandbytes`: https://preview.redd.it/lylwtzzbaolb1.png?width=1564&format=png&auto=webp&s=650c9f3a9243c222a9e236c183e9a27ca404db16 Any idea what I am doing wrong? torch.cuda.is_available() returns True (CUDA version 11.7). Thank you in advance :)
2023-09-01T16:45:52
https://www.reddit.com/r/LocalLLaMA/comments/167cafq/finetune_llama2_chat_7b_4bit_on_windows/
Mindless-Picture-430
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167cafq
false
null
t3_167cafq
/r/LocalLLaMA/comments/167cafq/finetune_llama2_chat_7b_4bit_on_windows/
false
false
https://b.thumbs.redditm…budlyYEuxQOg.jpg
1
null
Help with objective tokens per second measurement
1
Hi guys, I am doing a project that aims to run LLMs locally on less powerful devices such as Raspberry Pis, Orange Pis or mini PCs. I am trying to measure both output quality (using EleutherAI's lm-evaluation-harness) and token generation speed. To do so I would like to be able to objectively measure the tokens/s for different runtimes, e.g. llama.cpp and MLC, with and without different quantisation methods. I have tried searching online but could not find much information about this. Is there currently an easy and objective way to do this, or must it be done manually (e.g. timing the entire response and counting the tokens)?
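Absent a standard tool, the manual measurement described above is easy to wrap around any backend. A minimal sketch — `generate` and `count_tokens` are stand-in callables for whatever runtime and tokenizer you use, not a specific library's API; for comparable numbers across backends you'd also want to time prompt processing separately from generation:

```python
import time

def measure_tokens_per_second(generate, count_tokens, prompt):
    """Time a single generation and report decoded tokens per second."""
    start = time.perf_counter()
    output = generate(prompt)
    elapsed = time.perf_counter() - start
    return count_tokens(output) / elapsed

# Demo with stand-ins: a fake backend and a whitespace "tokenizer".
def fake_generate(prompt):
    time.sleep(0.05)                 # pretend inference latency
    return "four tokens exactly here"

tps = measure_tokens_per_second(fake_generate, lambda s: len(s.split()), "hi")
```

Averaging over several runs and fixing the prompt and max-new-tokens settings makes the comparison between llama.cpp and MLC fairer.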
2023-09-01T16:50:56
https://www.reddit.com/r/LocalLLaMA/comments/167cf4x/help_with_objective_tokens_per_second_measurement/
zDraco_Meteor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167cf4x
false
null
t3_167cf4x
/r/LocalLLaMA/comments/167cf4x/help_with_objective_tokens_per_second_measurement/
false
false
self
1
null
Training LLaMA-2 for Keyword Extraction
1
I’m struggling with training a LLaMA-2-7b model. I have tried both the chat and base models, and have been training for 4 or 5 days without much encouraging success. My task is simple keyword extraction: input is a journal entry, and the output should be a list of emotional keywords from the journal entry. The closest I’ve come is with the LLaMA-2-7b-chat-hf model. For really long journal entries, it outputs a keyword list that is about 80% correct. For other entries, it’s TOTALLY OFF! Not even close. I trained LLaMA-1 successfully, but LLaMA-2 is another beast. It seems like an easy task, so I am frustrated at how difficult it is. Looking for any guidance. I'm using an A100 on this [colab notebook](https://github.com/brevdev/notebooks/blob/main/llama2-finetune.ipynb). Data prompt template:

    Perform the following task and return results that satisfy their requirements.

    ### INSTRUCTION:
    Identify a list of emotional keywords from the following text entry.

    ### INPUT:
    {text_entry}

    ### OUTPUT:
    {keyword_list}

Model / LoRA settings:

    training_args = TrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        logging_steps=50,
        max_steps=1000,
        logging_dir="./logs",
        save_strategy="steps",
        save_steps=50,
        evaluation_strategy="steps",
        eval_steps=50,
        do_eval=True,
    )

    peft_config = LoraConfig(
        lora_alpha=16,
        lora_dropout=0.1,
        r=64,
        bias="none",
        task_type="CAUSAL_LM",
    )
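To make "80% correct" vs. "TOTALLY OFF" measurable across runs, a simple set-based F1 over the keyword lists is a common choice. A minimal sketch — the function name and normalization are illustrative, not from the notebook above:

```python
def keyword_f1(predicted, gold):
    """Set-based F1 between two keyword lists (case/whitespace-insensitive)."""
    p = {k.strip().lower() for k in predicted}
    g = {k.strip().lower() for k in gold}
    if not p or not g:
        return 0.0
    overlap = len(p & g)
    precision = overlap / len(p)
    recall = overlap / len(g)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. keyword_f1(["Sad ", "happy"], ["sad", "angry"]) -> 0.5
```

Scoring a held-out set after each checkpoint (the config above already evaluates every 50 steps) would show whether the erratic outputs are under-training or over-fitting.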
2023-09-01T18:06:56
https://www.reddit.com/r/LocalLLaMA/comments/167eg5k/training_llama2_for_keyword_extraction/
TaleOfTwoDres
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167eg5k
false
null
t3_167eg5k
/r/LocalLLaMA/comments/167eg5k/training_llama2_for_keyword_extraction/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nf6hhCof57OeXlwlZR_dbzuaYw9VL4waumu_bFzey3g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?width=108&crop=smart&auto=webp&s=5b2f4d140d4f6cf19272ffba8ae91b0871b78eec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?width=216&crop=smart&auto=webp&s=81d0193f0dcd9bd8472c40e8e663b823ba6b2525', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?width=320&crop=smart&auto=webp&s=ef60d9265af3d24fcafa6dfb399ac28fe3c6bd1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?width=640&crop=smart&auto=webp&s=3b51d56d13bfe61e73205c33aaa1a501bbbfec17', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?width=960&crop=smart&auto=webp&s=b204c763c3fd0c42d2ed8cfbdf5980aaac6d40b3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?width=1080&crop=smart&auto=webp&s=9ceb355e7d6b209b839cb45003b71fda19db9bc7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UFk5rEODI5hy4_q8Y3zq1KZrDQBwGEhbet7EJE3o_K0.jpg?auto=webp&s=cd68eef6101717552f1c8bfb7fef34b45e7aa6bd', 'width': 1200}, 'variants': {}}]}
References to develop Q&A on docs using M2
1
[removed]
2023-09-01T18:12:06
https://www.reddit.com/r/LocalLLaMA/comments/167el5m/references_to_develop_qa_on_docs_using_m2/
5rest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167el5m
false
null
t3_167el5m
/r/LocalLLaMA/comments/167el5m/references_to_develop_qa_on_docs_using_m2/
false
false
self
1
null