Dataset columns (name: type, observed range):

title: string (length 1-300)
score: int64 (0-8.54k)
selftext: string (length 0-40k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29)
url: string (length 0-878)
author: string (length 3-20)
domain: string (length 0-82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18)
gilded: int64 (0-2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646-1.8k)
name: string (length 10)
permalink: string (length 33-82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4-213)
ups: int64 (0-8.54k)
preview: string (length 301-5.01k)
Multigpu AMD?
1
Is multi-GPU support a thing for AMD in llama.cpp?
2023-09-06T18:02:10
https://www.reddit.com/r/LocalLLaMA/comments/16br650/multigpu_amd/
YogsMinecraftFav
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16br650
false
null
t3_16br650
/r/LocalLLaMA/comments/16br650/multigpu_amd/
false
false
self
1
null
Can I quantize a model that doesn't fit in my GPU's VRAM? What about RAM?
1
Basically the title. There are some models that are not quantized on Hugging Face and it would be nice to try them. Alternatively, I'm thinking about cloud GPU instances e.g. from AWS just to quantize the models. What else can I do?
2023-09-06T18:28:24
https://www.reddit.com/r/LocalLLaMA/comments/16brvb9/can_i_quantize_the_model_that_doesnt_fit_in_vram/
number73q
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16brvb9
false
null
t3_16brvb9
/r/LocalLLaMA/comments/16brvb9/can_i_quantize_the_model_that_doesnt_fit_in_vram/
false
false
self
1
null
Worth getting a 24GB MacBook Air?
1
[removed]
2023-09-06T18:39:13
https://www.reddit.com/r/LocalLLaMA/comments/16bs5eq/worth_getting_24gb_macbook_air/
Necessary_Ad_9800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16bs5eq
false
null
t3_16bs5eq
/r/LocalLLaMA/comments/16bs5eq/worth_getting_24gb_macbook_air/
false
false
self
1
null
Best model to run with this?
2
I have an i7 6700K and 3 GTX 1070s. I want a general-purpose model, mostly for accurate facts and coding. What do you guys propose?
2023-09-06T20:14:59
https://www.reddit.com/r/LocalLLaMA/comments/16buohg/best_model_to_run_with_this/
BlackFireAlex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16buohg
false
null
t3_16buohg
/r/LocalLLaMA/comments/16buohg/best_model_to_run_with_this/
false
false
default
2
null
POLL - What do YOU use local models for?
1
I’m a graduate researcher in the ML space and I've been following LocalLLaMA closely for months now. I love the fact that you guys are racing towards a localized solution to GPT but I still don’t get why people use worse, slower localized models when they can just use GPT? I understand the content restrictions/hobbyist/learning category but what *else* is there? More specifically, what’s only possible **locally**? I've included options below but if there are others — lmk in the comments! [View Poll](https://www.reddit.com/poll/16bx8c7)
2023-09-06T21:49:08
https://www.reddit.com/r/LocalLLaMA/comments/16bx8c7/poll_what_do_you_use_local_models_for/
BuilderDull323
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16bx8c7
false
null
t3_16bx8c7
/r/LocalLLaMA/comments/16bx8c7/poll_what_do_you_use_local_models_for/
false
false
self
1
null
Anyone else thinking about getting a Mac Studio w/ 192GB to run quantized Falcon-180B?
1
In Germany this would set me back **6.639,00€.** I can't even imagine what the price tag on an Nvidia setup would be.
2023-09-06T22:11:22
https://www.reddit.com/r/LocalLLaMA/comments/16bxt9x/anyone_else_thinking_about_getting_a_mac_studio_w/
Prince-of-Privacy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16bxt9x
false
null
t3_16bxt9x
/r/LocalLLaMA/comments/16bxt9x/anyone_else_thinking_about_getting_a_mac_studio_w/
false
false
self
1
null
Falcon 180B initial CPU performance numbers
1
Thanks to Falcon 180B using the same architecture as Falcon 40B, llama.cpp already supports it (although the conversion script needed [some changes](https://github.com/ggerganov/llama.cpp/pull/3049)). I thought people might be interested in seeing performance numbers for some different quantisations, running on an AMD EPYC 7502P 32-core processor with 256GB of RAM (and no GPU). I'll also post in the comments the responses the different quants gave to the prompt; feel free to upvote the answer you think is best.

For q4_K_M quantisation:

    llama_print_timings:        load time =   6645.40 ms
    llama_print_timings:      sample time =    278.27 ms /   200 runs   (   1.39 ms per token,  718.72 tokens per second)
    llama_print_timings: prompt eval time =   7591.61 ms /    13 tokens (  583.97 ms per token,    1.71 tokens per second)
    llama_print_timings:        eval time = 185915.77 ms /   199 runs   (  934.25 ms per token,    1.07 tokens per second)
    llama_print_timings:       total time = 194055.97 ms

For q6_K quantisation:

    llama_print_timings:        load time =  53526.48 ms
    llama_print_timings:      sample time =    749.78 ms /   428 runs   (   1.75 ms per token,  570.83 tokens per second)
    llama_print_timings: prompt eval time =   4232.80 ms /    10 tokens (  423.28 ms per token,    2.36 tokens per second)
    llama_print_timings:        eval time = 532203.03 ms /   427 runs   ( 1246.38 ms per token,    0.80 tokens per second)
    llama_print_timings:       total time = 537415.52 ms

For q8_0 quantisation:

    llama_print_timings:        load time = 128666.21 ms
    llama_print_timings:      sample time =    249.20 ms /   161 runs   (   1.55 ms per token,  646.07 tokens per second)
    llama_print_timings: prompt eval time =  13162.90 ms /    13 tokens ( 1012.53 ms per token,    0.99 tokens per second)
    llama_print_timings:        eval time = 448145.71 ms /   160 runs   ( 2800.91 ms per token,    0.36 tokens per second)
    llama_print_timings:       total time = 462491.25 ms
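As a sanity check, the tokens-per-second figures in these logs are just runs divided by elapsed time; a small sketch recomputing them from the eval-time lines reported for the three quantisations:

```python
def tokens_per_second(elapsed_ms: float, runs: int) -> float:
    """Recover llama.cpp's tokens-per-second figure from an eval-time line."""
    return runs / (elapsed_ms / 1000.0)

# (elapsed ms, runs, reported tok/s) from the eval lines above
for elapsed_ms, runs, reported in [
    (185915.77, 199, 1.07),  # q4_K_M
    (532203.03, 427, 0.80),  # q6_K
    (448145.71, 160, 0.36),  # q8_0
]:
    assert round(tokens_per_second(elapsed_ms, runs), 2) == reported
```

The reported per-token milliseconds are simply the reciprocal of the same ratio.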
2023-09-06T22:43:50
https://www.reddit.com/r/LocalLLaMA/comments/16bynin/falcon_180b_initial_cpu_performance_numbers/
logicchains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16bynin
false
null
t3_16bynin
/r/LocalLLaMA/comments/16bynin/falcon_180b_initial_cpu_performance_numbers/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dDQAcDgqQim5nIGfeOwzICqhdFCpEZADaohE3T4_NpA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?width=108&crop=smart&auto=webp&s=42077dc424f1f5df732db158518b53d685322ee7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?width=216&crop=smart&auto=webp&s=e74b645dfab3f07f9c0eda7a364a220c95209a8b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?width=320&crop=smart&auto=webp&s=99a63b8c8262a2c0e3999c0ca9c70fb34e053803', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?width=640&crop=smart&auto=webp&s=5c534be3e2ed3256a984d22a4162f3e193da2189', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?width=960&crop=smart&auto=webp&s=7c4aecee92ce170a13913304043e1e7bedf33acc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?width=1080&crop=smart&auto=webp&s=1cd2d49b2c834c506973e04509500e03384a5667', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2brcMt95dMosTV0n9c814-jXgClhacWQFqBSYft2LuE.jpg?auto=webp&s=15507ca32fb0fd5eb2a408caec3c95942a9462a7', 'width': 1200}, 'variants': {}}]}
Copyright and Fair Use: Important Notice of Inquiry by the US Copyright Office
1
Please make your voices heard by submitting comments on how you use AI and how you think copyright issues should be handled so as not to destroy the open-source local model ecosystem. In my opinion, the real dystopia will be when politicians sit down with big tech lobbyists and big rights holders and decide that training as it is currently done, for free and open-source models and others, is illegal. Then the big players would actually win, since they have enough resources to license datasets and will certainly do so willingly and gladly if it is clear that the jurisdiction keeps all the small players and open source out. It's the easiest way to build a moat and force people to pay thousands for these tools. So please make your voices heard and share the link.

> The Copyright Office issued a notice of inquiry in the Federal Register seeking public comment on questions about copyright law and policy issues raised by AI systems. Initial comments are due by October 18, 2023. Reply comments are due November 15, 2023.

https://www.copyright.gov/newsnet/2023/1017.html?loclr=twcop

Link to the comment submission form: https://www.regulations.gov/commenton/COLC-2023-0006-0001
2023-09-06T23:36:07
https://www.reddit.com/r/LocalLLaMA/comments/16bzxqj/copyright_and_fair_use_important_notice_of_iquiry/
PinPuzzleheaded8525
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16bzxqj
false
null
t3_16bzxqj
/r/LocalLLaMA/comments/16bzxqj/copyright_and_fair_use_important_notice_of_iquiry/
false
false
self
1
null
Running Llama 2 via replicate and hugging face
1
I recently signed up to create a simple ReAct agent using Llama 2 70B HF chat. I'm using the Pro account along with the Inference API, and it seems like generation stops after about 10 tokens. Anyone know what could be the issue? I've asked on HF but still no response. I have also tried using Replicate, and while it does generate more content, it also cuts off in the middle after about 100 tokens. Is this an issue with the API, or is this normal for these models?
2023-09-06T23:37:25
https://www.reddit.com/r/LocalLLaMA/comments/16bzyu6/running_llama_2_via_replicate_and_hugging_face/
hi87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16bzyu6
false
null
t3_16bzyu6
/r/LocalLLaMA/comments/16bzyu6/running_llama_2_via_replicate_and_hugging_face/
false
false
self
1
null
MLC-AI suggestion
1
Okay so, MLC-AI is an amazing tool, but installation can be annoying to finish. I keep almost completing the setup and then failing. I'll get it eventually, but I'm suggesting that someone willing, with the time and skills, make an installer version of MLC-AI. This would make usage a lot easier. Anyone disagree? If so, I'm curious why. And if anyone does build it, feel free to DM me, because I'd love to test it out.
2023-09-07T00:36:32
https://www.reddit.com/r/LocalLLaMA/comments/16c1cxe/mlcai_suggestion/
CombinationPure7696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c1cxe
false
null
t3_16c1cxe
/r/LocalLLaMA/comments/16c1cxe/mlcai_suggestion/
false
false
self
1
null
Why we need to run AI on our own computers.
1
Although I can see the reasoning behind wanting to make AIs inoffensive to avoid criticism, the largest AIs are lobotomized by their filters and are completely incapable of creating interesting stories. I asked Claude 2 to create a scenario based on the videogame Castlevania: Symphony of the Night. The castle was completely empty. Eventually I asked the AI out of character why, and it told me that it was unable to create any depiction of monsters or combat of any type. I get the impression that, due to its restrictions, it is incapable of creating conflict, which is what makes stories interesting. Am I right in thinking this? What do you think?
2023-09-07T00:59:50
https://www.reddit.com/r/LocalLLaMA/comments/16c1v9y/why_we_need_to_run_ai_on_our_own_computers/
Abscondias
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c1v9y
false
null
t3_16c1v9y
/r/LocalLLaMA/comments/16c1v9y/why_we_need_to_run_ai_on_our_own_computers/
false
false
self
1
null
Best model that only requires 8GB RAM?
1
Which models are the best trade-off between capabilities, speed and ram required?
2023-09-07T01:04:22
https://www.reddit.com/r/LocalLLaMA/comments/16c1z10/best_model_that_only_requires_8gb_ram/
--Explosion--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c1z10
false
null
t3_16c1z10
/r/LocalLLaMA/comments/16c1z10/best_model_that_only_requires_8gb_ram/
false
false
self
1
null
What are the best models so far?
1
What do you all think are the top 3 models right now in terms of lack of censorship, math/code abilities, role play, fantasy, storytelling, and human-like chatting?
2023-09-07T01:34:24
https://www.reddit.com/r/LocalLLaMA/comments/16c2n9p/whats_are_the_best_models_so_far/
CultivatedHorror
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c2n9p
false
null
t3_16c2n9p
/r/LocalLLaMA/comments/16c2n9p/whats_are_the_best_models_so_far/
false
false
self
1
null
Total noob looking to finetune base llama-2-7b in colab
1
Running in Colab because my PC is a potato, I'm looking to fine-tune the 7B base model on a mix of unstructured and structured data (just planning to experiment with some different methods). I was hoping to be able to use the oobabooga webui for the finetuning. I'm just after a guide to the process I need to follow; I'm completely new to fine-tuning, so beginner-friendly instructions would be great :)
2023-09-07T01:44:28
https://www.reddit.com/r/LocalLLaMA/comments/16c2vhn/total_noob_looking_to_finetune_base_llama27b_in/
Goatman117
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c2vhn
false
null
t3_16c2vhn
/r/LocalLLaMA/comments/16c2vhn/total_noob_looking_to_finetune_base_llama27b_in/
false
false
self
1
null
What are your go to questions when testing a new model? Will compile a doc with them all if people are interested.
1
I find myself searching for prompts/questions to stress-test new models and evaluate how they perform and what they do better compared to other models, but I don't have a centralised place where I keep them. It would be great to get the community's input and suggestions, from which I'm happy to compile a doc along with any personal findings/results.
2023-09-07T03:05:03
https://www.reddit.com/r/LocalLLaMA/comments/16c4maj/what_are_your_go_to_questions_when_testing_a_new/
sardoa11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c4maj
false
null
t3_16c4maj
/r/LocalLLaMA/comments/16c4maj/what_are_your_go_to_questions_when_testing_a_new/
false
false
self
1
null
Sam Altman on Threads: Some great stuff to show developers on November 6 in San Francisco
1
“on november 6, we’ll have some great stuff to show developers! (no gpt-5 or 4.5 or anything like that, calm down, but still i think people will be very happy…) - Sam Altman https://www.threads.net/@sama/post/Cw3OHU-PEMZ/?igshid=NTc4MTIwNjQ2YQ==
2023-09-07T03:08:39
https://openai.com/blog/announcing-openai-devday
noiseinvacuum
openai.com
1970-01-01T00:00:00
0
{}
16c4p0a
false
null
t3_16c4p0a
/r/LocalLLaMA/comments/16c4p0a/sam_altman_on_threads_some_great_stuff_to_show/
false
false
https://b.thumbs.redditm…y1sRVtHJytLw.jpg
1
{'enabled': False, 'images': [{'id': '82_6mfQmpIjAgPjSHhr94VPx0lLWXRTNSsuHzZiY0KM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ABUHdG7sHz4gRNC7dAwWXDavSG8ryCLPu1wtUMgZxaE.jpg?width=108&crop=smart&auto=webp&s=213d3f961c6bc0b78e8c330b1c758b426c9b3969', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ABUHdG7sHz4gRNC7dAwWXDavSG8ryCLPu1wtUMgZxaE.jpg?width=216&crop=smart&auto=webp&s=9601fac6433d661cfb6e8243fee9aae742b07468', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ABUHdG7sHz4gRNC7dAwWXDavSG8ryCLPu1wtUMgZxaE.jpg?width=320&crop=smart&auto=webp&s=8b970547b983ed3b04d7344742ef780667590a02', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ABUHdG7sHz4gRNC7dAwWXDavSG8ryCLPu1wtUMgZxaE.jpg?width=640&crop=smart&auto=webp&s=518e8c26c46348af35fac10cea3c64cb3094270d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ABUHdG7sHz4gRNC7dAwWXDavSG8ryCLPu1wtUMgZxaE.jpg?width=960&crop=smart&auto=webp&s=5195c7a1b6e6d78c0569fb95cca7c126e24d09b4', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/ABUHdG7sHz4gRNC7dAwWXDavSG8ryCLPu1wtUMgZxaE.jpg?auto=webp&s=f3bd6bbb57211c3d0011fc5b0fc1a22a2fca4cf2', 'width': 1000}, 'variants': {}}]}
Dual 3090 with SLI or single 4090?
1
I'd like to run Llama 2 locally, and I am thinking of getting two 3090s to amass 48GB of memory so I can run larger models. Do you think it is a good idea, or should I stick to a single 4090? Any ideas are appreciated.
2023-09-07T03:08:52
https://www.reddit.com/r/LocalLLaMA/comments/16c4p68/dual_3090_with_sli_or_single_4090/
Necessary-Increase-4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c4p68
false
null
t3_16c4p68
/r/LocalLLaMA/comments/16c4p68/dual_3090_with_sli_or_single_4090/
false
false
self
1
null
Installing Llama2 on a Windows Laptop
1
I documented the steps I took to get llama 2 running locally on my windows laptop. I hope it's useful for someone. All feedback is welcome.
2023-09-07T04:45:38
https://blog.orenfromberg.tech/llama2-on-windows-laptop/
o_m_f_g
blog.orenfromberg.tech
1970-01-01T00:00:00
0
{}
16c6lz0
false
null
t3_16c6lz0
/r/LocalLLaMA/comments/16c6lz0/installing_llama2_on_a_windows_laptop/
false
false
default
1
null
LLaMA2 weights still haven't become widely available. Why?
1
[removed]
2023-09-07T05:57:29
https://www.reddit.com/r/LocalLLaMA/comments/16c7w08/llama2_weights_still_havent_become_widely/
AskingForMyMumWhoHDL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c7w08
false
null
t3_16c7w08
/r/LocalLLaMA/comments/16c7w08/llama2_weights_still_havent_become_widely/
false
false
self
1
null
Falcon180B - Stringent Filtering
1
This model was trained on an open-source database created by web crawling. Let's talk about what sort of content is and is not in that database. For example, they state that "stringent filtering" was used. Is that a technical term or something more subjective? There is a Curation Rationale, but what determined what constitutes "adult sites" for URLs? Also, this is a "causal decoder-only model". What can we tell from that?

> (3) Neutral filtering. To avoid introducing further undesirable biases into the model, we avoid using ML-based filtering outside of language identification (Dodge et al., 2021; Welbl et al., 2021). We stick to simple rules and heuristics, and use only URL filtering for adult content.

This was determined in the U.A.E., which is famously more conservative in its views than other countries.

https://www.tii.ae/
https://huggingface.co/datasets/tiiuae/falcon-refinedweb

> Falcon RefinedWeb is a massive English web dataset built by TII and released under an ODC-By 1.0 license. See the 📓 paper on arXiv for more details. RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in-line or better than models trained on curated datasets, while only relying on web data. RefinedWeb is also "multimodal-friendly": it contains links and alt texts for images in processed samples. This public extract should contain 500-650GT depending on the tokenizer you use, and can be enhanced with the curated corpora of your choosing. This public extract is about ~500GB to download, requiring 2.8TB of local storage once unpacked.
2023-09-07T06:38:15
https://www.reddit.com/r/LocalLLaMA/comments/16c8l2p/falcon180b_stringent_filtering/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c8l2p
false
null
t3_16c8l2p
/r/LocalLLaMA/comments/16c8l2p/falcon180b_stringent_filtering/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PKdc450c95xlg0En-DWUCVYuM_QtZRC9Y2Yl-XZyRPI', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Ao48-80YWPT7nCF1vclJDT30_8a4pXdQ-CkbkVjBVds.jpg?width=108&crop=smart&auto=webp&s=e2cfa0efbbb880dcd401a0383b766b6f703d5ba7', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/Ao48-80YWPT7nCF1vclJDT30_8a4pXdQ-CkbkVjBVds.jpg?width=216&crop=smart&auto=webp&s=7aaf3b25c2076b9258ba98d9d1dca2289867397e', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/Ao48-80YWPT7nCF1vclJDT30_8a4pXdQ-CkbkVjBVds.jpg?width=320&crop=smart&auto=webp&s=d16834228b411e8932eab9bcf9bc235c034e8543', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/Ao48-80YWPT7nCF1vclJDT30_8a4pXdQ-CkbkVjBVds.jpg?width=640&crop=smart&auto=webp&s=46893583ff3652d94b3db462b8b9602e8403f50d', 'width': 640}, {'height': 550, 'url': 'https://external-preview.redd.it/Ao48-80YWPT7nCF1vclJDT30_8a4pXdQ-CkbkVjBVds.jpg?width=960&crop=smart&auto=webp&s=6d0696d775b25acee8472e29a566975de0845ab8', 'width': 960}], 'source': {'height': 602, 'url': 'https://external-preview.redd.it/Ao48-80YWPT7nCF1vclJDT30_8a4pXdQ-CkbkVjBVds.jpg?auto=webp&s=0d5b839dab74df86189fa7981b63686f7f93d863', 'width': 1050}, 'variants': {}}]}
Mac Studio M2 Ultra with 192GB of RAM
1
I got the new Mac Studio with the Apple M2 Ultra and 192GB of RAM. What models can I run? Any suggestions? I want to see what this thing can do. So far I have tried: ollama, which is awesome and integrates well with Mac to get started really quickly (one click); and llama.cpp - tried various models with this, not sure which is best, looking for suggestions.
2023-09-07T07:44:12
https://www.reddit.com/r/LocalLLaMA/comments/16c9ody/mac_studio_m2_ultra_with_192gb_of_ram/
bladeolson26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c9ody
false
null
t3_16c9ody
/r/LocalLLaMA/comments/16c9ody/mac_studio_m2_ultra_with_192gb_of_ram/
false
false
self
1
null
Is anyone finetuning with ROCM yet?
1
Actually I would even be happy with CPU finetuning, but CPU + ROCm is really what I'm looking for. I've merged a few choice datasets and tried to train with the Platypus scripts, but it seems CUDA is required by the bitsandbytes library for training. Unless maybe there is some option I'm not aware of, or a build flag.
2023-09-07T07:55:12
https://www.reddit.com/r/LocalLLaMA/comments/16c9ult/is_anyone_finetuning_with_rocm_yet/
FaustAg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c9ult
false
null
t3_16c9ult
/r/LocalLLaMA/comments/16c9ult/is_anyone_finetuning_with_rocm_yet/
false
false
self
1
null
Need Help with LLM integrations
1
Hey guys, where should I start for the following: having an internal language model that has access to the internet in real time, and integrating it with self-hosted apps like task managers or home assistants. Langchain? I'm new :) I have Oobabooga installed with a lot of cool models and extensions, but that's it.
2023-09-07T07:56:16
https://www.reddit.com/r/LocalLLaMA/comments/16c9v7g/need_help_with_llm_integrations/
PTwolfy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16c9v7g
false
null
t3_16c9v7g
/r/LocalLLaMA/comments/16c9v7g/need_help_with_llm_integrations/
false
false
self
1
null
Multiple k80 for LLM
1
Hey, is it a good idea to buy multiple K80s to run large models locally? You get 24GB of VRAM for 120 on eBay. If you compare it to other GPUs, it's very cheap, so I thought it could be a great budget setup. I have a server Xeon from 2014 that I don't really use - would it be possible to use multiple cards with it? I have multiple PCIe slots. I would be really grateful if you would share your experience :) This is my first post, I hope I did everything right.
2023-09-07T08:30:37
https://www.reddit.com/r/LocalLLaMA/comments/16caeaz/multiple_k80_for_llm/
benboo187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16caeaz
false
null
t3_16caeaz
/r/LocalLLaMA/comments/16caeaz/multiple_k80_for_llm/
false
false
self
1
null
What's the training cost for the Chinese LLaMA-Alpaca?
1
Hello, I've been checking out the Chinese Alpaca project on GitHub at [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2), and it seems quite interesting. I've also read the technical report available at [https://arxiv.org/abs/2304.08177](https://arxiv.org/abs/2304.08177).

In their report, they mentioned using 16 Nvidia A40 cards, but they didn't provide details about the time and cost involved in training.

Do you have any estimates for the training cost on a 120G Chinese dataset for the LLaMA 7, 13, and 70B parameter models?
2023-09-07T08:42:43
https://www.reddit.com/r/LocalLLaMA/comments/16cal1u/whats_the_training_cost_for_the_chinese/
hich9n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cal1u
false
null
t3_16cal1u
/r/LocalLLaMA/comments/16cal1u/whats_the_training_cost_for_the_chinese/
false
false
self
1
{'enabled': False, 'images': [{'id': '_moWo-uOJVPrRFesRGAWl25GDfROFeyZQ1HLr4L1oMY', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?width=108&crop=smart&auto=webp&s=737eb0b166c00eade7a94a1a507525ed74d37ee2', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?width=216&crop=smart&auto=webp&s=7e957b6c290c96699e31a42ffa440fea1eb913f1', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?width=320&crop=smart&auto=webp&s=2fb5d926b18f495de9c28b55a561c607907ad0e7', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?width=640&crop=smart&auto=webp&s=347c04cd524677894c0f4f822fc541d7ceb409be', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?width=960&crop=smart&auto=webp&s=38c97ca70b63c7969ab653f9c3f22fd3ef81854a', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?width=1080&crop=smart&auto=webp&s=dfdf9c8d66951fb607393e1b7a9bd9dc2042d43f', 'width': 1080}], 'source': {'height': 610, 'url': 'https://external-preview.redd.it/ZO7LBviQWQrJqh9WlRQBnFu4GHQP0oRWzLq_iMKXtmo.jpg?auto=webp&s=f01d58817efedf4a78d11077b82d840406342888', 'width': 1240}, 'variants': {}}]}
Yet another RAG system - implementation details and lessons learned
1
Having a large knowledge base in Obsidian and a sizable collection of technical documents, for the last couple of months I have been trying to build a RAG-based QnA system that would allow effective querying.

After the initial implementation using a standard architecture (structure-unaware, format-agnostic recursive text splitters and cosine similarity for semantic search), the results were a bit underwhelming. Throwing a more powerful LLM at the problem helped, but not by an order of magnitude (the model was able to reason better about the provided context, but if the context wasn't relevant to begin with, obviously it didn't matter).

Here are implementation details and tricks that helped me achieve significantly better quality. I hope it will be helpful to people implementing similar systems. Many of them I learned by reading suggestions from this and other communities, while others were discovered through experimentation.

Most of the methods described below are implemented here - [GitHub - snexus/llm-search: Querying local documents, powered by LLM](https://github.com/snexus/llm-search/tree/main).

## Pre-processing and chunking

* Document format - the best quality is achieved with a format where the logical structure of the document can be parsed - titles, headers/subheaders, tables, etc. Examples of such formats include markdown, HTML, or .docx.
* PDFs, in general, are hard to parse due to the multiple ways of representing the internal structure - for example, it can be just a bunch of images stacked together. In most cases, expect to be able to split by sentences.
* Content splitting:
  * Splitting by logical blocks (e.g., headers/subheaders) improved the quality significantly. It comes at the cost of format-dependent logic that needs to be implemented. Another downside is that it is hard to maintain an equal chunk size with this approach.
  * For documents containing source code, it is best to treat the code as a single logical block. If you need to split the code in the middle, make sure to embed metadata providing a hint that the different pieces of code are related.
* Metadata included in the text chunks:
  * Document name.
  * References to higher-level logical blocks (e.g., pointing to the parent header from a subheader in a markdown document).
  * For text chunks containing source code - indicating the start and end of the code block and optionally the name of the programming language.
* External metadata - added as external metadata in the vector store. These fields allow dynamic filtering by chunk size and/or label:
  * Chunk size.
  * Document path.
  * Document collection label, if applicable.
* Chunk sizes - as many people mentioned, there appears to be high sensitivity to the chunk size. There is no universal chunk size that will achieve the best result, as it depends on the type of content, how generic/precise the question asked is, etc.
  * One of the solutions is embedding the documents using multiple chunk sizes and storing them in the same collection.
  * During runtime, querying against these chunk sizes and dynamically selecting the size that achieves the best score according to some metric.
  * Downside - increases the storage and processing time requirements.

## Embeddings

* There are multiple embedding models achieving the same or better quality as OpenAI's ADA - for example, `e5-large-v2` - it provides a good balance between size and quality.
* Some embedding models require certain prefixes to be added to the text chunks AND the query - that's the way they were trained, and they presumably achieve better results compared to not appending these prefixes.

## Retrieval

* One of the main components that allowed me to improve retrieval is a **re-ranker**. A re-ranker scores the text passages obtained from a similarity (or hybrid) search against the query, producing a numerical score indicating how relevant each text passage is to the query. Architecturally, it is different (and much slower) than a similarity search, but is supposed to be more accurate. The results can then be sorted by the re-ranker's score before stuffing into the LLM.
* A re-ranker can be costly (time-consuming and/or requiring API calls) to implement using LLMs, but is efficient using cross-encoders. It is still slower, though, than a cosine similarity search and can't replace it.
* Sparse embeddings - I took the general idea from [Getting Started with Hybrid Search | Pinecone](https://www.pinecone.io/learn/hybrid-search-intro/) and implemented sparse embeddings using SPLADE. This particular method has the advantage that it can minimize the "vocabulary mismatch problem." Despite having large dimensionality (32k for SPLADE), sparse embeddings can be stored and loaded efficiently from disk using Numpy's sparse matrices.
* With sparse embeddings implemented, the next logical step is a **hybrid search** - a combination of sparse and dense embeddings to improve the quality of the search.
* Instead of following the method suggested in the blog (a weighted combination of sparse and dense embeddings), I followed a slightly different approach:
  * Retrieve the **top k** documents using SPLADE (sparse embeddings).
  * Retrieve the **top k** documents using similarity search (dense embeddings).
  * Create a union of the documents from the sparse and dense searches. Usually there is some overlap between them, so the number of documents is almost always smaller than 2*k.
  * Re-rank all the documents (sparse + dense) using the re-ranker mentioned above.
  * Stuff the top documents, sorted by the re-ranker score, into the LLM as the most relevant documents.
* The justification behind this approach is that it is hard to compare the scores from sparse and dense embeddings directly (as suggested in the blog - they rely on magical weighting constants), but the re-ranker should explicitly be able to identify which document is more relevant to the query.

Let me know if the approach above makes sense or if you have suggestions for improvement. I would be curious to know what other tricks people used to improve the quality of their RAG systems.
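The retrieval flow described above (union of sparse and dense top-k hits, then a re-ranker pass) can be sketched roughly like this. The scoring callables are placeholders standing in for SPLADE, the dense embedder, and a cross-encoder; this is a sketch of the idea, not code from the linked repo:

```python
def hybrid_retrieve(query, docs, sparse_score, dense_score, rerank_score, k=5):
    """Union of top-k sparse and top-k dense hits, ordered by a re-ranker.

    sparse_score / dense_score / rerank_score: (query, doc) -> float,
    standing in for SPLADE, a dense embedding similarity, and a cross-encoder.
    """
    top_sparse = sorted(docs, key=lambda d: sparse_score(query, d), reverse=True)[:k]
    top_dense = sorted(docs, key=lambda d: dense_score(query, d), reverse=True)[:k]
    # Overlap between the two lists means the union is usually smaller than 2*k.
    candidates = list(dict.fromkeys(top_sparse + top_dense))
    # Final ordering comes from the slower but more accurate re-ranker score.
    return sorted(candidates, key=lambda d: rerank_score(query, d), reverse=True)
```

The top few documents of the returned list are what gets stuffed into the LLM context; note that only the re-ranker score is used for the final ordering, so the incomparable sparse and dense scores never need to be mixed.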
2023-09-07T09:42:10
https://www.reddit.com/r/LocalLLaMA/comments/16cbimi/yet_another_rag_system_implementation_details_and/
snexus_d
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
16cbimi
false
null
t3_16cbimi
/r/LocalLLaMA/comments/16cbimi/yet_another_rag_system_implementation_details_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MeblWRNLgid1XqyXFjpt8gLUNEK7ekbOzg8lh4SKeiU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?width=108&crop=smart&auto=webp&s=ecec9a6701031fe25d7b5aa298768d167fd87704', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?width=216&crop=smart&auto=webp&s=55689aeb6eedc38c758ff77487b46b8c583820fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?width=320&crop=smart&auto=webp&s=0bccd22ca96154cae239956337adb13080222d19', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?width=640&crop=smart&auto=webp&s=0010c5c50066ee9d654c8164cb842f222942daa9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?width=960&crop=smart&auto=webp&s=d7a18768d1c859fad94ac36175cf3c64e7b22621', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?width=1080&crop=smart&auto=webp&s=2730ba1c60973f11790f0a58fb7fb5d543ce75c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f1tUNO-HogVOrRyVzFchESE7oVrt3GtyV5bzW5EmUXs.jpg?auto=webp&s=ce80d0b39f990f6aa2fd5d737baebb3601e23d44', 'width': 1200}, 'variants': {}}]}
Named Entity Recognition
1
Does anyone worked on named entity recognition from pdf using Llama2 ?
2023-09-07T10:45:42
https://www.reddit.com/r/LocalLLaMA/comments/16ccl42/named_entity_recognition/
BadriMLJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ccl42
false
null
t3_16ccl42
/r/LocalLLaMA/comments/16ccl42/named_entity_recognition/
false
false
self
1
null
Do you guys use function calling?
1
Hey everyone 👋 I'm working on some LLM tooling and I have a few questions. My end goal is to open source it once I've proven functionality.

- What do you mainly use your local LLMs for?
- How many of you are using "function calling" features?

A couple of examples:

- Instructing your LLM to call a Google Maps API to add context anytime a geographical location is mentioned
- Instructing your LLM to "tweet this thought on X: {tweet}", which would call the X API
- Instructing your LLM to call an internal API or script

ChatGPT currently has the plug-in marketplace in beta stage, which is a similar idea.
2023-09-07T10:58:47
https://www.reddit.com/r/LocalLLaMA/comments/16ccszw/do_you_guys_use_function_calling/
LeverageDeez
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ccszw
false
null
t3_16ccszw
/r/LocalLLaMA/comments/16ccszw/do_you_guys_use_function_calling/
false
false
self
1
null
TheBloke released Falcon 180B quantised
1
[removed]
2023-09-07T11:43:37
https://www.reddit.com/r/LocalLLaMA/comments/16cdo0v/thebloke_released_falcon_180b_quantised/
aminedjeghri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cdo0v
false
null
t3_16cdo0v
/r/LocalLLaMA/comments/16cdo0v/thebloke_released_falcon_180b_quantised/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm9vXnbYH1D2q2UAzyM9xGWs8TkF9bJMT1KfGUfHvW0U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=108&crop=smart&auto=webp&s=1f167fd015526ffb37f14eeadff59515a2aec318', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=216&crop=smart&auto=webp&s=2b439e981bb998331a358f95f67e6fcd951a2446', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=320&crop=smart&auto=webp&s=8f1bde1305ad4ec9c8f647806df18809f656ad37', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=640&crop=smart&auto=webp&s=e46fe4d04ddd62c570c13b60fd78c2cb712813a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=960&crop=smart&auto=webp&s=c3ab4c2ea957418be87c1af0123a6cc04fcb43bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=1080&crop=smart&auto=webp&s=921214516a10741a098ebe9add133fc073e33099', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?auto=webp&s=f24044d99d884072f3e2521275649e8fed23378f', 'width': 1200}, 'variants': {}}]}
Enhancing My Educational Content App with Fact-Checking Capabilities – Need Guidance!
1
Hey there, fellow developers! I'm working on an educational content app powered by GPT, and it's been going great so far. Users can interact with a PDF document, thanks to embeddings, vector stores, and all the fancy stuff. But now, I want to take it up a notch and add a fact-checking feature. Here's the challenge: I have a PDF with educational content, and I also have a separate text file that outlines guidelines on how to fact-check the document. It's like a set of instructions saying, "Here's how you should fact-check this." What I want is for users to hit a "fact check" button, and GPT should analyze the PDF document according to the guidelines provided in that text file. But here's where I'm stuck – how do I make GPT understand and follow those guidelines? I know fine-tuning is a thing, but it usually involves a "prompt and response" format, which doesn't quite fit my scenario. My guidelines are more like rules to follow, not prompts for generating responses. So, devs, any ideas on how to make this happen? I'm all ears for your suggestions and guidance.
2023-09-07T11:46:16
https://www.reddit.com/r/LocalLLaMA/comments/16cdpyz/enhancing_my_educational_content_app_with/
bsnshdbsb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cdpyz
false
null
t3_16cdpyz
/r/LocalLLaMA/comments/16cdpyz/enhancing_my_educational_content_app_with/
false
false
self
1
null
Which sentence transformer is the best one for similarity search and document retrieval
1
So I have been using two sentence transformers: 'sentence-transformers/all-MiniLM-L12-v2' and 'sentence-transformers/all-mpnet-base-v2'. I thought they were both working well and I could use either of them for good document retrieval results. But I tried their hosted inference APIs and the results were pretty disappointing. I mean, shouldn't the sentence "The person is not happy" be the least similar one? Is there any other model I could use that will give me better results? mpnet-base had better results, but I am still not satisfied.

[This is 'sentence-transformers/all-MiniLM-L12-v2''s result](https://preview.redd.it/ww4juy2wmtmb1.png?width=675&format=png&auto=webp&s=6e0b09a5972dee27637ad5cc16ec64d41cc62d22)
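For context on what these similarity scores are, the comparison ultimately reduces to cosine similarity between embedding vectors. A stdlib-only sketch with made-up 3-d vectors (real embeddings come from the model's `encode()` call):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors only - whether "not happy" lands near "happy" depends entirely
# on the model; many bi-encoders underweight negation, which is likely what
# the disappointing hosted-API results reflect.
happy = [0.9, 0.1, 0.0]
not_happy = [0.7, 0.2, 0.3]
print(cosine_similarity(happy, not_happy))
```

A common workaround is keeping the bi-encoder for fast retrieval and adding a cross-encoder re-ranker on top, since cross-encoders are generally better at handling negation.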
2023-09-07T11:50:19
https://www.reddit.com/r/LocalLLaMA/comments/16cdsv6/which_sentence_transformer_is_the_best_one_for/
IamFuckinTomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cdsv6
false
null
t3_16cdsv6
/r/LocalLLaMA/comments/16cdsv6/which_sentence_transformer_is_the_best_one_for/
false
false
https://b.thumbs.redditm…8JezDqsyoBLc.jpg
1
null
OpenChat 3.2 SUPER is Here!
1
🚀 **OpenChat 3.2 SUPER is Here!** 🚀 We're delighted to announce the official release of OpenChat 3.2 SUPER, the latest advancement in our AI conversation models. Engineered to harness imperfect data, this model employs conditional and weighted fine-tuning techniques inspired by offline reinforcement learning. Operating within the confines of the same 80K mixed-quality ShareGPT dataset as Vicuna 1.0, OpenChat 3.2 SUPER surpasses all Llama-2-based 13B open-source models including Llama-2-13B-chat, WizardLM 1.2, and Vicuna 1.5, showcasing its exceptional capabilities. 🏆 OpenChat has achieved remarkable recognition! It's ranked #1 on AgentBench among open-source models and also secures the top spot on both MT-bench and AlpacaEval among 13B models. 🔎 Discover the power of OpenChat 3.2 SUPER on GitHub and Huggingface: - GitHub: [OpenChat](https://github.com/imoneoi/openchat) - Huggingface: [OpenChat 3.2 SUPER weights](https://huggingface.co/openchat/openchat_v3.2_super) Stay tuned for more exciting updates, and as always, we appreciate your continued support. Let's continue to revolutionize AI conversation together!
2023-09-07T11:59:25
https://www.reddit.com/r/LocalLLaMA/comments/16cdze5/openchat_32_super_is_here/
imonenext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cdze5
false
null
t3_16cdze5
/r/LocalLLaMA/comments/16cdze5/openchat_32_super_is_here/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nwcMQPf1xfio39uvxcE3aNTf0W1x4rJbls7ewGkMEBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?width=108&crop=smart&auto=webp&s=61dbfa86d616c4efdc2c6707eb0aeb5809b654ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?width=216&crop=smart&auto=webp&s=b33ee4e5d2eff758d484ed2c2e85798d651c7eb3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?width=320&crop=smart&auto=webp&s=d116b3738a3da35d8660c1860de8dd7b7b6cf9d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?width=640&crop=smart&auto=webp&s=672d235d34888586a5aacce3408017d0120c529c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?width=960&crop=smart&auto=webp&s=609b28ecebb73f0abdecbb45f9198f2a7ff4db5b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?width=1080&crop=smart&auto=webp&s=f3adf482e3c0b069664d24f3deca633368db85e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/paB15dw9RZjMyQFo7IrwWrq3Fw2dVx-1hBznpp6d5Eg.jpg?auto=webp&s=c1c3f595dd2e95b37d738b3357931a5c6ae6340b', 'width': 1200}, 'variants': {}}]}
Two Ways to Download and Access Llama 2 Locally
1
2023-09-07T12:21:01
https://medium.com/towards-data-science/two-ways-to-download-and-access-llama-2-locally-8a432ed232a4
anabildea
medium.com
1970-01-01T00:00:00
0
{}
16cefad
false
null
t3_16cefad
/r/LocalLLaMA/comments/16cefad/two_ways_to_download_and_access_llama_2_locally/
false
false
https://b.thumbs.redditm…nhxZV_MzAwlE.jpg
1
{'enabled': False, 'images': [{'id': '4INURiuRaLWbL99EyyJj8BzS3V9jfvABPfbi9dGdhMg', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/Nvaw36rwoqqX-0ReB1jGfHQt1QmuuGRhToS2IFDGQ9Q.jpg?width=108&crop=smart&auto=webp&s=d661c33e067792f4a8d07ab13d506a5adec19d0c', 'width': 108}, {'height': 220, 'url': 'https://external-preview.redd.it/Nvaw36rwoqqX-0ReB1jGfHQt1QmuuGRhToS2IFDGQ9Q.jpg?width=216&crop=smart&auto=webp&s=7f455a5685fa5c05a34282f5baa988a64dd4add5', 'width': 216}, {'height': 326, 'url': 'https://external-preview.redd.it/Nvaw36rwoqqX-0ReB1jGfHQt1QmuuGRhToS2IFDGQ9Q.jpg?width=320&crop=smart&auto=webp&s=38b2e5f309846ab8b95dcee148824bdaeeb09ded', 'width': 320}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/Nvaw36rwoqqX-0ReB1jGfHQt1QmuuGRhToS2IFDGQ9Q.jpg?auto=webp&s=3f701d65c9cbc2fa5f893960e7ed0e87482e07ab', 'width': 529}, 'variants': {}}]}
Building llama.cpp on a Windows Laptop
1
2023-09-07T12:49:49
https://blog.orenfromberg.tech/llama-cpp-on-windows/
o_m_f_g
blog.orenfromberg.tech
1970-01-01T00:00:00
0
{}
16cf0lf
false
null
t3_16cf0lf
/r/LocalLLaMA/comments/16cf0lf/building_llamacpp_on_a_windows_laptop/
false
false
default
1
null
Seeking Advice: Best LLM Model for Consistent Long-Form Storytelling with 8GB VRAM
1
Hey writers and AI enthusiasts, I'm diving into a lengthy storytelling project and need recommendations for the best LLM model that's suitable for consistent, long-form storytelling while staying within an 8GB VRAM constraint.
2023-09-07T14:06:24
https://www.reddit.com/r/LocalLLaMA/comments/16cgs1r/seeking_advice_best_llm_model_for_consistent/
Nihigh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cgs1r
false
null
t3_16cgs1r
/r/LocalLLaMA/comments/16cgs1r/seeking_advice_best_llm_model_for_consistent/
false
false
self
1
null
Low level Python bindings for GGML
1
“Python bindings for the ggml tensor library for machine learning. ⚠️ This project is in a very early state and currently only offers the basic low-level bindings to ggml”
2023-09-07T15:15:40
https://github.com/abetlen/ggml-python
Acrobatic-Site2065
github.com
1970-01-01T00:00:00
0
{}
16cig98
false
null
t3_16cig98
/r/LocalLLaMA/comments/16cig98/low_level_python_bindings_for_ggml/
false
false
https://b.thumbs.redditm…Ydh8IFsEnmeE.jpg
1
{'enabled': False, 'images': [{'id': 'tJgKjhkWHld2iswFj-9P2smT9BlmaPCOt3uy2_gEuyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?width=108&crop=smart&auto=webp&s=c50ab5aaabf069feaf97f1e680c13c89a4a53a91', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?width=216&crop=smart&auto=webp&s=71281b89dc7a40b9180cbd52aea2d8702e68eaa8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?width=320&crop=smart&auto=webp&s=5ce54e5eb680077196c505eb9150d7d80b5b6eb9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?width=640&crop=smart&auto=webp&s=67e2308d41cc2a29dbb46b40b55c1cafbbe2731d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?width=960&crop=smart&auto=webp&s=42b61db5e222dc419621fe5ea8ac5e36f4905093', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?width=1080&crop=smart&auto=webp&s=15e44b2315ded35b962a17e2dd2e47939ca976eb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OFEcX-9h_4f0Eu-5AOmoUTGFQPn4f6O-150yDlyPwB0.jpg?auto=webp&s=e2053cf65407b7272a46bfd5bde628b6005ea044', 'width': 1200}, 'variants': {}}]}
Where can I find information on chat formats?
1
I have a bot running that sits in a multi-user chat server and sees everything. I'm running through the ooba API in instruct-chat mode and while it's fine, I don't think it's being formatted correctly. First thing is the system message; the ooba API doesn't seem to have any field for a system message, so I'm just stuffing it in the instruct message. And then I'm just doing user messages in my own format like `User: <the_username> the message` and then I have the bot respond (hopefully) in the same format `Assistant: <bot_name> the message`. This is fine but I have coherency issues all the time. It works way better on gpt3. So I'm thinking I probably need to control the format more tightly and submit a raw prompt instead of using the chat API, but I'm having a really hard time finding hard information about the various formats. Is there some resource I can look at where things like Llama-2/vicuna/alpaca formats are defined?
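For reference, the Llama-2 chat format wraps each turn like this (a minimal sketch - exact BOS/EOS token handling varies by backend, and a backend like ooba may add those tokens for you):

```python
def llama2_prompt(system: str, user: str) -> str:
    # Llama-2 chat template: system prompt inside <<SYS>> tags,
    # the user turn wrapped in [INST] ... [/INST]; the model's
    # reply is generated after the closing [/INST].
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(llama2_prompt("You are a helpful assistant.", "Hello!"))
```

Alpaca-style models instead use `### Instruction:` / `### Response:` headers, and Vicuna uses `USER:` / `ASSISTANT:` turns - the model cards on Hugging Face usually state the expected template.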
2023-09-07T15:29:32
https://www.reddit.com/r/LocalLLaMA/comments/16cisn6/where_can_i_find_information_on_chat_formats/
__SlimeQ__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cisn6
false
null
t3_16cisn6
/r/LocalLLaMA/comments/16cisn6/where_can_i_find_information_on_chat_formats/
false
false
self
1
null
Is there a feasible way to have an offline training method to summarize video and audio?
1
It seems that everything I've seen relies on an openai api key for analysis. I assume there's a great reason, although I thought I might learn something if I asked this question. Thanks for your time.
2023-09-07T15:45:02
https://www.reddit.com/r/LocalLLaMA/comments/16cj6je/is_there_a_feasible_way_to_have_an_offline/
Fine_Classroom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cj6je
false
null
t3_16cj6je
/r/LocalLLaMA/comments/16cj6je/is_there_a_feasible_way_to_have_an_offline/
false
false
self
1
null
Need Help - torch.cuda.OutOfMemoryError When Training Models on 1xRTXA6000 (48GB VRAM) - Need Help
1
Hello everyone, I'm encountering a persistent issue while trying to train various models like conceptofmind/LLongMA-2-7b-16k, conceptofmind/LLongMA-2-13b-16k, etc., on my machine equipped with a single RTX A6000 GPU with 48GB of VRAM. The error message I get is as follows:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.76 GiB (GPU 0; 47.54 GiB total capacity; 45.18 GiB already allocated; 1.41 GiB free; 45.20 GiB reserved in total by PyTorch)

The message also suggests setting max_split_size_mb to avoid fragmentation and refers to the Memory Management and PYTORCH_CUDA_ALLOC_CONF documentation. This issue occurs every time I try to train any of these models. I've tried various solutions but to no avail.

Latest runs:

https://preview.redd.it/tw3jtacm1vmb1.png?width=1904&format=png&auto=webp&s=40ea50e2a38d59c1857d9ecb7c8cac417314985d

**Questions:**

1. How much VRAM is actually required to train these models?
2. Is my 48GB of VRAM insufficient for this task?
3. Can anyone recommend a model that would allow me to generate articles with at least 10,000 characters without running into this issue?

Any help or suggestions would be greatly appreciated! Thank you!
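Two notes that may help (back-of-envelope, not a guaranteed fix): full fine-tuning a 7B model in mixed precision with Adam needs roughly 16 bytes per parameter (weights + gradients + optimizer states), i.e. on the order of 112 GB, so 48 GB is genuinely insufficient without LoRA/QLoRA or offloading. As for the fragmentation suggestion in the error message itself, the allocator config must be set before PyTorch touches CUDA:

```python
import os

# Caps the size of cached blocks the CUDA caching allocator will split,
# which can reduce fragmentation. Must be set before CUDA is initialized,
# i.e. before the first `import torch` in the training script.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ...only then import torch and build the model/trainer as usual.
```

The same setting can also be exported as an environment variable in the shell before launching the script.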
2023-09-07T16:30:36
https://www.reddit.com/r/LocalLLaMA/comments/16ckbpw/need_help_torchcudaoutofmemoryerror_when_training/
vnvrx1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ckbpw
false
null
t3_16ckbpw
/r/LocalLLaMA/comments/16ckbpw/need_help_torchcudaoutofmemoryerror_when_training/
false
false
https://a.thumbs.redditm…ItjtEe9dRyu8.jpg
1
{'enabled': False, 'images': [{'id': 'vOusbxPFiJVxUN432QzhL8m7SV5_gP6demzVwNJsZ_w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?width=108&crop=smart&auto=webp&s=ad0c44ee085f2cedf39d56de781de9f145b7d8e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?width=216&crop=smart&auto=webp&s=9168bc777250c7f27215897da0bf9593cc6b010c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?width=320&crop=smart&auto=webp&s=6d676748ef95d217895a48d8997b81d27c670e5c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?width=640&crop=smart&auto=webp&s=687d34275531ea59955191c4b1d8d4843e67c5e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?width=960&crop=smart&auto=webp&s=c2dc4a619a4d3fc4ae841ebb6a26711d96f6ef14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?width=1080&crop=smart&auto=webp&s=2d84a77d0ff2a80495fae51d2a4b9638fdbec936', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BekbsTewtCHFmbC60p4J9lXeC4xc7Qg1cZhTrT6Hxhk.jpg?auto=webp&s=8b0fcd6f5a20ee8d089fc306b23996969d27a6a7', 'width': 1200}, 'variants': {}}]}
(10) 1660s clustered $6k, (1) is $150ish on eBay, so $1500-2000 for (10), is the frame and wiring worth $4k? I’m just trying to understand the spread.
1
2023-09-07T16:30:54
https://www.reddit.com/gallery/16ckc0a
PoppinJayLee
reddit.com
1970-01-01T00:00:00
0
{}
16ckc0a
false
null
t3_16ckc0a
/r/LocalLLaMA/comments/16ckc0a/10_1660s_clustered_6k_1_is_150ish_on_ebay_so/
false
false
https://a.thumbs.redditm…8VccKcRDwLx4.jpg
1
null
Falcon 180B is the first open source model that supports running multiple agents!
1
[removed]
2023-09-07T17:00:06
https://www.reddit.com/r/LocalLLaMA/comments/16cl25q/falcon_180b_is_the_first_open_source_model_that/
No_Marionberry312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cl25q
false
null
t3_16cl25q
/r/LocalLLaMA/comments/16cl25q/falcon_180b_is_the_first_open_source_model_that/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Jn5i6kcN4QCYhs8BTnlUOqpVVoo-EIWt30_uGMEfzys', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OETPv1ITRzK2DVpvCOEoYNP8Jh6Fv9O3J5KSzNm2CJI.jpg?width=108&crop=smart&auto=webp&s=5f491526276fda685d0a96531a6bc1d2ad76046b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/OETPv1ITRzK2DVpvCOEoYNP8Jh6Fv9O3J5KSzNm2CJI.jpg?width=216&crop=smart&auto=webp&s=cc0750610b6395149f96ea7c10bd9bfa9f76f9d0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/OETPv1ITRzK2DVpvCOEoYNP8Jh6Fv9O3J5KSzNm2CJI.jpg?width=320&crop=smart&auto=webp&s=b34812c7289e5f4905aefdbeac43b47a9cd25f68', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/OETPv1ITRzK2DVpvCOEoYNP8Jh6Fv9O3J5KSzNm2CJI.jpg?auto=webp&s=c826c2c3f3173e95e73cbe2f8b5d122a2a97890d', 'width': 480}, 'variants': {}}]}
Falcon 180B on the older Mac M1 Ultra 128 Gb
1
First, get the Mac M2 Ultra 192 GB if you want to do this - it's the most economical option. If you, like me, bought earlier in this product cycle, then you're stuck with the M1 Ultra 128 GB. But it can still run the [Falcon 180B GGUF](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF) if you use Q3 instead of Q4. My experience is that the recommendedMaxWorkingSetSize value for a Mac chip corresponds to the maximum GGUF size it can load. For the M1 Ultra 128 GB it is 98.3 GB. For the M2 Ultra 192 GB it is a little less than 142.5 GB. As a benchmark, the file size for Q4_K_M is 108.48 GB, while Q3_K_L is 91.99 GB. The latter runs great; the former fails.

A quick test:

> A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
>
> ### Human: A psychologist goes to a village in Iowa and decides to compare foot size to intelligence. He notes that in general, as foot size increases, so does intelligence. How can this be?
>
> ### Assistant: This is an example of a spurious correlation. In reality, there is no causal relationship between foot size and intelligence. However, it just so happens that as you grow older, both your foot size and intelligence tend to increase. Thus, we see a correlation between the two variables even though there is no direct link between them.

Verdict: Falcon 180B Q3_K_L got it right - throwing kids' feet in the mix forces a relationship.

And here's the time to process:

> llama_print_timings: load time = 105863.29 ms
> llama_print_timings: sample time = 93.24 ms / 67 runs ( 1.39 ms per token, 718.59 tokens per second)
> llama_print_timings: prompt eval time = 8866.13 ms / 81 tokens ( 109.46 ms per token, 9.14 tokens per second)
> llama_print_timings: eval time = 13619.78 ms / 66 runs ( 206.36 ms per token, 4.85 tokens per second)
> llama_print_timings: total time = 22753.92 ms
> Output generated in 22.95 seconds (2.88 tokens/s, 66 tokens, context 77, seed 119132506)

Then the Sally test:

> A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
>
> ### Human: Sally is a girl. She has three brothers. Each of her brothers has the same two sisters. How many sisters does Sally have?
>
> ### Assistant: Let's break this down step by step:
>
> 1. We know that Sally is a girl.
> 2. Then we are told that she has three brothers.
> 3. Next, we learn that each of her brothers has the same two sisters.
> 4. Since all three brothers share the same two sisters, it means they only have two sisters in total.
> 5. Therefore, Sally also has just two sisters.
>
> Final answer: Sally has two sisters.

Not great, but it is spelling out its reasoning. Barring NVIDIA completely annihilating its profit margin and stock price by making high-VRAM cards affordable, I think Macs are going to be the best inference engines.
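The rule of thumb above (the GGUF file must fit under recommendedMaxWorkingSetSize) is a one-line check. Note that the GB limit used below is the value quoted in this post, not an Apple-documented constant:

```python
def fits_in_working_set(gguf_size_gb: float, max_working_set_gb: float) -> bool:
    """True if the quantized model file should fit in Metal's working set."""
    return gguf_size_gb <= max_working_set_gb

M1_ULTRA_128GB = 98.3  # recommendedMaxWorkingSetSize observed on M1 Ultra 128 GB

print(fits_in_working_set(91.99, M1_ULTRA_128GB))   # Q3_K_L
print(fits_in_working_set(108.48, M1_ULTRA_128GB))  # Q4_K_M
```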
2023-09-07T17:57:26
https://www.reddit.com/r/LocalLLaMA/comments/16cm537/falcon_180b_on_the_older_mac_m1_ultra_128_gb/
Thalesian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cm537
false
null
t3_16cm537
/r/LocalLLaMA/comments/16cm537/falcon_180b_on_the_older_mac_m1_ultra_128_gb/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm9vXnbYH1D2q2UAzyM9xGWs8TkF9bJMT1KfGUfHvW0U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=108&crop=smart&auto=webp&s=1f167fd015526ffb37f14eeadff59515a2aec318', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=216&crop=smart&auto=webp&s=2b439e981bb998331a358f95f67e6fcd951a2446', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=320&crop=smart&auto=webp&s=8f1bde1305ad4ec9c8f647806df18809f656ad37', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=640&crop=smart&auto=webp&s=e46fe4d04ddd62c570c13b60fd78c2cb712813a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=960&crop=smart&auto=webp&s=c3ab4c2ea957418be87c1af0123a6cc04fcb43bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?width=1080&crop=smart&auto=webp&s=921214516a10741a098ebe9add133fc073e33099', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gK4iYHzpNuxfVlgoTVg-osS1qL3RZVQ_lGt3MXJdGpU.jpg?auto=webp&s=f24044d99d884072f3e2521275649e8fed23378f', 'width': 1200}, 'variants': {}}]}
Options for running Falcon 180B on (kind of) sane hardware?
1
So we've all seen the release of the new Falcon model and the hardware requirements for running it. There are a few threads on here right now about successes involving the new Mac Studio 192GB and an AMD EPYC 7502P with 256GB. Respect to the folks running these, but neither of them seems realistic for most people. While it might not be possible to ever run this model on "regular" hardware, I'm sure we'd all appreciate the attempt at making this more runnable on lower-end setups. So, what magic options exist to downsize a 180B model without giving it a full-on lobotomy along the way? What can we come up with collectively? There are the various levels of quantization, and I've seen mention of pruning reducing the size maybe in half? What else is there? If we wanted to be really aggressive about this, what's the best we can do?
2023-09-07T18:06:30
https://www.reddit.com/r/LocalLLaMA/comments/16cm9d0/options_for_running_falcon_180b_on_kind_of_sane/
Chance-Device-9033
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cm9d0
false
null
t3_16cm9d0
/r/LocalLLaMA/comments/16cm9d0/options_for_running_falcon_180b_on_kind_of_sane/
false
false
self
1
null
Do I understand correctly that when running a dual GPU setup (for koboldcpp) only one GPU can do inference/generation and the second GPU is just there to provide additional VRAM?
1
[removed]
2023-09-07T18:23:00
https://www.reddit.com/r/LocalLLaMA/comments/16cmgty/do_i_understand_correctly_that_when_running_a/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cmgty
false
null
t3_16cmgty
/r/LocalLLaMA/comments/16cmgty/do_i_understand_correctly_that_when_running_a/
false
false
self
1
null
Rag vs Finetuning vs simple context feeding for technical documentation of software
1
[removed]
2023-09-07T18:24:38
https://www.reddit.com/r/LocalLLaMA/comments/16cmhnf/rag_vs_finetuning_vs_simple_context_feeding_for/
Embarrassed-Cicada94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cmhnf
false
null
t3_16cmhnf
/r/LocalLLaMA/comments/16cmhnf/rag_vs_finetuning_vs_simple_context_feeding_for/
false
false
self
1
null
Releasing Persimmon-8B
1
2023-09-07T18:50:10
https://www.adept.ai/blog/persimmon-8b
jetRink
adept.ai
1970-01-01T00:00:00
0
{}
16cmvw8
false
null
t3_16cmvw8
/r/LocalLLaMA/comments/16cmvw8/releasing_persimmon8b/
false
false
default
1
null
My KoboldCPP Settings Using Code Llama That Are Giving Me Great Results
1
[removed]
2023-09-07T19:10:37
https://www.reddit.com/r/LocalLLaMA/comments/16cnd1w/my_koboldcpp_settings_using_code_llama_that_are/
LocoLanguageModel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cnd1w
false
null
t3_16cnd1w
/r/LocalLLaMA/comments/16cnd1w/my_koboldcpp_settings_using_code_llama_that_are/
false
false
https://b.thumbs.redditm…5RJtX3tt1M_c.jpg
1
null
Code Completion (similar to GitHub Copilot)
1
My goal is to have code completion similar to GitHub Copilot, and I've been using quantized WizardCoder-15B to try to achieve this. In this [example vscode extension](https://github.com/mzbac/wizardCoder-vsc/blob/7a7c362d558118d29e5b066e2dc05f792cfa046b/src/WizardCoderInlineCompletionItemProvider.ts#L45C5-L45C5) the text passed to the model is:

`<fim_prefix>${textAboveCursor}<fim_suffix>${textBelowCursor}<fim_middle>`

This seems to work for cases where there is text above the cursor and no text below it, but as soon as there is text below the cursor the result from the model is always an empty string. Thinking it was odd to have middle at the end, I also tried changing the order to:

`<fim_prefix>${textAboveCursor}<fim_middle>${textBelowCursor}<fim_suffix>`

but this gives the same result (an empty string response from the model). Is code completion not the intended use of WizardCoder? Am I using the [special tokens from WizardCoder](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0/blob/main/tokenizer_config.json) incorrectly?
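For reference, the StarCoder family (from which WizardCoder inherits its tokenizer) uses prefix-suffix-middle ordering: the suffix goes before `<fim_middle>`, and the model generates the middle after that token - so the first template above is the conventional one. A minimal sketch; note that instruction tuning is often reported to degrade fill-in-the-middle ability, so the base StarCoder may behave better for infilling (an assumption worth testing):

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    # StarCoder-style PSM ordering: generation happens after <fim_middle>,
    # conditioned on both the text before and after the cursor.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

print(fim_prompt("def add(a, b):\n    ", "\n    return result"))
```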
2023-09-07T19:10:46
https://www.reddit.com/r/LocalLLaMA/comments/16cnd7n/code_completion_similar_to_github_copilot/
kintrith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cnd7n
false
null
t3_16cnd7n
/r/LocalLLaMA/comments/16cnd7n/code_completion_similar_to_github_copilot/
false
false
self
1
{'enabled': False, 'images': [{'id': '_UOjMbRnsN3UU4zHQIkPi7pzza6wQi7i824wzDSBS0g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?width=108&crop=smart&auto=webp&s=7afc718c509b08866c0e7440dada6e5677912082', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?width=216&crop=smart&auto=webp&s=a2a6741c0c55ff0018cf448f13ce4a37ba82fc3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?width=320&crop=smart&auto=webp&s=e52285b44fa1abbfce89fd04ca88c655268ea5ed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?width=640&crop=smart&auto=webp&s=d37ec00ef7739f1d6fbc6333de319aa85f00e117', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?width=960&crop=smart&auto=webp&s=2f90db2fd7a2ef00ea675e6b33dd5685b421946f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?width=1080&crop=smart&auto=webp&s=5812ae7ea19f1dc8eb96fc9488c502bd80cdca77', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JTxgYJtlqHpCSrWOFQouZS1rn4FS6Nf_WgFLYVwimlw.jpg?auto=webp&s=f54997b235c43d6789b92b0d51407e6903140025', 'width': 1200}, 'variants': {}}]}
Help with Stable Diffusion XL and LLM hardware upgrade...
1
I need to make an upgrade here, and I have two choices: upgrade my existing PC with a new mainboard, CPU, and RAM and keep using my 6GB 1660 Super (I'm currently running a very old board and 3rd-gen i5), or buy a server with the ability to add tons of RAM down the line if necessary. Will my GPU even work on a server board? I have little experience with server hardware and would appreciate any input on these ideas! Just trying to make the upgrade with minimal regrets!
2023-09-07T19:13:55
https://www.reddit.com/r/LocalLLaMA/comments/16cng74/help_with_stable_diffusion_xl_and_llm_hardware/
jchacakan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cng74
false
null
t3_16cng74
/r/LocalLLaMA/comments/16cng74/help_with_stable_diffusion_xl_and_llm_hardware/
false
false
self
1
null
All The Way Open Source!
1
Hey fellow LLM lovers! 👋 I just had to share something that's been bugging me since the arrival of Falcon 180B, 🤔 and I'd love to hear your thoughts on it. 💭 So, here's the deal. I was under the impression that a 180B parameter model was too heavy for local machines to handle. 💻 But, surprise! People are now running it locally, and my mind is blown. 🤯 This got me thinking: 🤔 if some of us (though not everyone) can run a 180B parameter model on our own computers, could we also run OpenAI's ChatGPT locally if it were open source? 🤔 I'd love to hear your thoughts on this! 💭
2023-09-07T20:41:19
https://www.reddit.com/r/LocalLLaMA/comments/16cptlg/all_the_way_open_source/
DannyBrownMz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cptlg
false
null
t3_16cptlg
/r/LocalLLaMA/comments/16cptlg/all_the_way_open_source/
false
false
self
1
null
Quad RTX A4500 vs RTX A6000
1
TL;DR: For larger models, A6000, A5000 Ada, or quad A4500, and why? I have convinced my boss to set up a test bed for running models (GPU-based) on site at my company. The use case is LlamaIndex-style interaction with company engineering documents. Due to budget constraints I can't exceed $5,000; however, I already have an engineering machine that's used for FEA analysis with an Asus WS X299 SAGE/10G motherboard, an i9-9980XE, and 128 GB RAM. So all I need is a video card (or up to 4). From what I have researched, if I want to consider testing larger models and speed is not as important as quality of response, the primary metric is VRAM. Our IT department sources from CDW and I have a quote for several different cards. The only options under $5,000 that I would consider are: A6000 (48GB) for $4480, A4500 (20GB) for $1050, A5000 Ada (32GB) for $3590. Also for context, at the end of the trial, if it all goes well, we will price out a higher-end build with A100s or something like that in next year's budget, so future upgrades to this machine don't need to be considered. Quad A4500s give me 80GB VRAM; however, if my target is to be able to run 70B models, I see some people running them on the A6000, but I don't understand the limits. So, is it a dumb idea to go with quad A4500s vs the A6000? Is there a better option under $5k? Is the A5000 Ada something I should consider (e.g., to run 30B models significantly faster)? There's a lot of talk about consumer cards, but I would like to hear thoughts on these options.
2023-09-07T22:09:09
https://www.reddit.com/r/LocalLLaMA/comments/16cs6fd/quad_rtx_a4500_vs_rtx_a6000/
Smerfj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cs6fd
false
null
t3_16cs6fd
/r/LocalLLaMA/comments/16cs6fd/quad_rtx_a4500_vs_rtx_a6000/
false
false
self
1
null
Are there any tiny (1-3b) models finetuned for coding available in GGUF format?
1
I'm experimenting with speculative sampling. The draft model I'm using right now is codellama-7b-instruct-4b. The main model is wizardcoder-34b. The primary purpose of a draft model in speculative sampling, when used for coding, is to speed up common patterns in code where both the large and the small model would output the same thing. E.g., # open file "out.log" should almost always result in something like this: # open file "out.log" with open('out.log') as f: # ... Even tiny autocomplete models are perfectly capable of generating patterns like that. My computer is just not quite powerful enough to run a 7B at amazing speeds, so I'm looking for an even smaller model in GGUF format. Does such a model exist? I checked TheBloke's Hugging Face page but couldn't find anything like it.
2023-09-07T22:17:13
https://www.reddit.com/r/LocalLLaMA/comments/16csdq6/are_there_any_tiny_13b_models_finetuned_for/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16csdq6
false
null
t3_16csdq6
/r/LocalLLaMA/comments/16csdq6/are_there_any_tiny_13b_models_finetuned_for/
false
false
self
1
null
Best OpenAI-API-compatible application server
4
So I already have several LLMs up and running serving OpenAI-compatible APIs, and am looking for an application server that connects to those APIs while serving the user with a clean and neat web interface. I tried hard, but most LLM chat apps are all-inclusive heavyweights: they all come with some form of LLM backend. I need nothing fancy; just a simple chat client would work. I have been going through these: * text-generation-webui: it's OK as an admin interface, but not for average users. * lollms-ui: similar to text-generation-webui * gpt4all-chat: not a web app server, but a clean and nice UI similar to ChatGPT. Unfortunately, no API support. * llm-as-chatbot: for cloud apps, and it's Gradio-based, not the nicest UI * [local.ai](https://local.ai): multiplatform local app, not a web app server, no API support * [faraday.dev](https://faraday.dev): not a web app server, character chatting. * llama-chat: local app for Mac Any better suggestions?
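If nothing off-the-shelf fits, a thin client against any of those OpenAI-compatible endpoints is only a few lines. A sketch using Python's stdlib — the URL and model name are placeholders for whatever your backend actually exposes:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

def build_chat_request(model, user_message, system="You are a helpful assistant."):
    """Build the JSON body expected by an OpenAI-style /v1/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

def chat(model, user_message):
    """POST one chat turn and return the assistant's reply text."""
    body = json.dumps(build_chat_request(model, user_message)).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage (needs a server running at API_URL):
#   print(chat("local-model", "Hello!"))
```

Any minimal web front end could then just call `chat()` server-side, keeping the UI as thin as you like.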
2023-09-07T22:40:58
https://www.reddit.com/r/LocalLLaMA/comments/16csz5n/best_openai_api_compatible_application_server/
--dany--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16csz5n
false
null
t3_16csz5n
/r/LocalLLaMA/comments/16csz5n/best_openai_api_compatible_application_server/
false
false
self
4
{'enabled': False, 'images': [{'id': 'm700koTmer0UoiMGYQ9DSkYJhMlt4-VvVeiR2LNIzxo', 'resolutions': [{'height': 18, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?width=108&crop=smart&auto=webp&s=4b0a701530d3e2c366f5327b425530afe2bf37d1', 'width': 108}, {'height': 36, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?width=216&crop=smart&auto=webp&s=821f3f1eb3f88b0466163abfa85d9f5fb14cb09a', 'width': 216}, {'height': 54, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?width=320&crop=smart&auto=webp&s=26ac6b01b572721c4eea756c9cb4be6eb47fb980', 'width': 320}, {'height': 108, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?width=640&crop=smart&auto=webp&s=22d6a361137c677144a034c908df1be59f616539', 'width': 640}, {'height': 162, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?width=960&crop=smart&auto=webp&s=0ff8b435b669d1527b3c5caccd67697117cd496e', 'width': 960}, {'height': 182, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?width=1080&crop=smart&auto=webp&s=a0076f310a1a9f13527e351247583a9de492ccf0', 'width': 1080}], 'source': {'height': 573, 'url': 'https://external-preview.redd.it/2laARYRmzqEA4GSj9GhMXjI561TT2O2Ma0450c_lZGk.jpg?auto=webp&s=08ba096453f419afbee40446156aed4ecf0b3d98', 'width': 3384}, 'variants': {}}]}
Detailed Log of My Findings and Failures Training LLaMA-2-7b on keyword extraction
1
Over the last two weeks, we spent dozens of hours on this subreddit and YouTube tutorials trying to train a LLaMA-2-7b-hf on a custom dataset to perform a single, routine task (keyword extraction). We thought this would be easy, but it was WAY HARDER than we expected. We went down a lot of wrong turns, and many of the notebooks/code examples either didn't work or were improperly documented. We eventually cobbled together methods and code from many sources and it finally worked. Thought we'd share our findings and mistakes to help others. We uploaded our notebook to GitHub, which you can find [here](https://github.com/pickaxeproject/llama2/blob/main/llama2-finetune.ipynb); you're welcome to use it. Hopefully it saves you some time! **Much of the code in this notebook is borrowed from other walkthroughs. There are a few key changes that took us a while to figure out, and so we were inspired to share.** # What you'll need... * About **10K inputs and outputs** you'd like to train the model on. We generated these using some Python scripts and OpenAI's GPT-3.5. For us, the average total token count of each input/output pair was \~300, and **no pair was longer than 800 tokens**. * A **Runpod account**, and about \~$25. That's what it cost us, once we had everything figured out. We tried training on Google Colab but it didn't have the juice required in our case. * **ChatGPT** open in another tab. This notebook works for us as of September 2023 but there's no guarantee it'll work for you. With any luck, you (and ChatGPT) should be able to overcome any obstacles that arise. * About **2 hours** of setup and the patience to wait **5-8 hours** for results. # Getting set up in Runpod 1. Make an account (at runpod.io) and fund it 2. Select an **A100** (it's what we used; use a lesser GPU at your own risk) from the **Community Cloud** (it doesn't really matter, but it's slightly cheaper) 3. For template, select **Runpod Pytorch 2.0.1** 4. Wait a minute or so for it to load up 5. 
Click **connect** 6. Click on the button to connect to **Jupyter Lab \[Port 8888\]** 7. Create a **new notebook,** and you should be ready to go! # Preparing your data This is probably the most important and frustrating part! Of course you want to make sure your input/output data is high quality, and that you have enough of it (\~10K rows, for us). Once you've got that, you'll want to format it. You're gonna want to format it \***exactly like** [**this**](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/blob/main/openassistant_best_replies_train.jsonl)\*. I would recommend downloading the jsonl file and taking a look. You need a .jsonl file structured like this: {"text": "### Human: YOURINSTRUCTIONHERE: YOURINPUT1HERE ### Assistant: YOUROUTPUT1HERE"} {"text": "### Human: YOURINSTRUCTIONHERE: YOURINPUT2HERE ### Assistant: YOUROUTPUT2HERE"} {"text": "### Human: YOURINSTRUCTIONHERE: YOURINPUT3HERE ### Assistant: YOUROUTPUT3HERE"} Here's an explanation of the above: 1. **YOURINPUTXHERE:** Your inputs. If you were doing keyword extraction, this would be the text you're extracting from 2. **YOUROUTPUTXHERE:** Your outputs, properly formatted. If you wanted your keywords as a list, these would be the training outputs formatted like this: \["yes","no","cool"\] 3. **YOURINSTRUCTIONHERE:** This is the instruction. We think of it as a short (one or two sentence) reminder to the model of what to do. It'll really help training go faster. It should be the same for every row if you're training on a specific task. For keyword extraction, it'd be something like: "Extract relevant keywords from the following text." This isn't the only way to do it, but it's the way we finally got things to work. Make sure your file **doesn't have any extra lines or characters**, and that your data is **sanitized properly**. The formatting can be really annoying. We used a Python script to generate the file. 
Once you have it, **name it** **data.jsonl** and drag and drop it **into your Jupyter Lab directory**. # Access to Llama 2 (and the license) The great thing about Llama 2 is that it has a commercial license. But you have to go to Meta and accept that license. 1. Make a Hugging Face account (if you don't have one already) 2. \***Request access** [**here**](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)\* and make sure you use the same email as your Hugging Face account (**very important**) 3. You should get an email from Meta within 5 minutes. 4. Now, you'll need to \***request access** [**here**](https://huggingface.co/meta-llama/Llama-2-7b-hf)\* for the model on Hugging Face. 5. Once they give it to you (it usually takes about an hour), go to your user settings, **Access Tokens**, and create (and copy) a new access token. # Training Code After all the set-up (which really shouldn't be slept on) you arrive at the meat of it. I uploaded our notebook to GitHub, which you can find [here](https://github.com/pickaxeproject/llama2/blob/main/llama2-finetune.ipynb). You can upload this notebook and use it directly or just copy and paste it cell by cell into your own notebook. # The Actual Training This took between **2 and 8 hours** for us depending on the amount of data and other factors. It is currently configured to go to **10K steps**, saving a **checkpoint after every 500 steps**. These checkpoints can be **very large**, on the order of **1-2GB**. It is noted above where you could change this as needed. Even if you stop early, you can **restart training** like this: trainer.train(resume\_from\_checkpoint=True) Further, you're saving a **checkpoint every 500 steps**, so you can run the model and test performance as you go along! # Running Inference as you go You can stop anytime, and run the model from any checkpoint (multiple of 500) using the code here! 
# Running Inference Later (maybe in Google Colab) We found that you have to adjust settings slightly to get things to run in Google Colab. Likewise, every environment will differ. What follows is the code we used, in Google Colab. If you're doing it in Colab, **don't forget to change the instance type to T4**. Hope this is helpful to some! Feel free to ask questions.
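The data.jsonl format described above can be generated with a short script like this — the two sample pairs are invented placeholders, to be swapped for your own \~10K pairs:

```python
import json

INSTRUCTION = "Extract relevant keywords from the following text."

def make_record(instruction, inp, out):
    # One line of data.jsonl, in the "### Human ... ### Assistant" template above.
    return {"text": f"### Human: {instruction}: {inp} ### Assistant: {out}"}

def write_jsonl(pairs, path="data.jsonl"):
    # One JSON object per line, no extra lines or characters.
    with open(path, "w", encoding="utf-8") as f:
        for inp, out in pairs:
            f.write(json.dumps(make_record(INSTRUCTION, inp, out)) + "\n")

# Placeholder keyword-extraction pairs (invented examples):
pairs = [
    ("Llama 2 is a family of open large language models.",
     '["Llama 2", "language models"]'),
    ("Runpod rents cloud GPUs by the hour.",
     '["Runpod", "cloud GPUs"]'),
]
print(json.dumps(make_record(INSTRUCTION, *pairs[0])))
```

Round-tripping each line through `json.loads` before training is a cheap way to catch the sanitization problems mentioned above.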
2023-09-08T00:43:22
https://www.reddit.com/r/LocalLLaMA/comments/16cvwlh/detailed_log_of_my_findings_and_failures_training/
plausibleSnail
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cvwlh
false
null
t3_16cvwlh
/r/LocalLLaMA/comments/16cvwlh/detailed_log_of_my_findings_and_failures_training/
false
false
self
1
{'enabled': False, 'images': [{'id': 'v8muD4_pa1qi57RQUEfMuVrZc3ko9uAemMXEC9KAMjM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?width=108&crop=smart&auto=webp&s=483a8c2d3f44a29985ed148a16760da0edf6b842', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?width=216&crop=smart&auto=webp&s=b075355b02e038477c551798a34d110829af72f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?width=320&crop=smart&auto=webp&s=ea9c8d766f3401538fe303229e94858a290b3cf7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?width=640&crop=smart&auto=webp&s=172ade12442d65fcd543efb0ce16477658e2d263', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?width=960&crop=smart&auto=webp&s=6559f04f97b3444a7a97f0e43e9b29aa26bc407c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?width=1080&crop=smart&auto=webp&s=baca8ee8eb18d0ea6657acf5a18af5d0f20ae062', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9IJn69D7ewYEcEFNwuOaVa7UEu1LsquxzqAsjfnnF64.jpg?auto=webp&s=e665373b839607d3065c5b5211d5085417b9ca20', 'width': 1200}, 'variants': {}}]}
Easiest way of making a local app with local LLM (no fine tuning)?
2
I would like to make scripts playing with local LLMs. I don't want to play with fine-tuning or tweak the models (the M2 probably can't handle it, plus it is incompatible with half the Python AI ecosystem), I just want to type "how do you feel about this?" and get an answer. I don't even need streaming; all I want is to get the answer. I don't need tokens/second, anything, just the raw answer. How would you do this on a MacBook (M2 Max - 32GB)? I thought about: - Invoking the llama.cpp process from the terminal, programmatically (maybe too much work? This also wouldn't be versatile) - Grabbing something like Dalai that makes a web server from llama.cpp, so I can switch between OpenAI and my local GPT, which would call a local server? - Is there an even better way? Probably NodeJS would be easier, but if there is a trick in Python, I'm open too. All I could find was tutorials on how to fine-tune, not exactly how to run the standard model via Python, which is what I want (from a MacBook, so I can't compile the models in PyTorch, I guess). LangChain is not my favorite piece of software, but if it can do this, I'm fine too. Edit: what I'm looking for is basically OpenAI.chat.completion() but local.
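For the "OpenAI.chat.completion() but locally" goal, llama-cpp-python gives roughly that shape of call from Python on a Mac. A minimal sketch — the model path is an assumption (any chat-tuned GGUF file works), and loading is kept inside a function so nothing heavy runs on import:

```python
def extract_answer(response):
    # The response dict follows the OpenAI chat-completion shape.
    return response["choices"][0]["message"]["content"]

def load_llm(model_path="models/llama-2-13b-chat.Q4_K_M.gguf"):
    # pip install llama-cpp-python
    # n_gpu_layers=-1 offloads all layers to Metal on Apple Silicon.
    from llama_cpp import Llama
    return Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=-1, verbose=False)

def ask(llm, question):
    """Blocking, non-streaming chat call: question in, raw answer out."""
    response = llm.create_chat_completion(
        messages=[{"role": "user", "content": question}]
    )
    return extract_answer(response)

# Usage (needs the GGUF file on disk):
#   llm = load_llm()
#   print(ask(llm, "How do you feel about this?"))
```

Because the response dict mirrors OpenAI's, switching between the local model and the OpenAI API can be a one-line swap in the calling code.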
2023-09-08T02:39:53
https://www.reddit.com/r/LocalLLaMA/comments/16cygw7/easiest_way_of_making_a_local_app_with_local_llm/
bernaferrari
self.LocalLLaMA
2023-09-08T02:52:07
0
{}
16cygw7
false
null
t3_16cygw7
/r/LocalLLaMA/comments/16cygw7/easiest_way_of_making_a_local_app_with_local_llm/
false
false
self
2
null
Best 1B - 3B chat model with 4k+ context?
1
I've been thinking of running a local LLM server (CPU only) on my old server with a Ryzen 3 2200G CPU and 16GB RAM. Any recommendations for the best 1B - 3B (preferably 3B) chat/instruct models with 4k (or more) context?
2023-09-08T03:09:02
https://www.reddit.com/r/LocalLLaMA/comments/16cz1ou/best_1b_3b_chat_model_with_4k_context/
Ok_Swordfish_1696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16cz1ou
false
null
t3_16cz1ou
/r/LocalLLaMA/comments/16cz1ou/best_1b_3b_chat_model_with_4k_context/
false
false
self
1
null
Why is ExLlama not working?
1
Hello, I recently downloaded the 13B Manticore model from TheBloke. I followed the steps exactly and did everything on the list. For some reason, though, the model still runs pretty slowly. 7B models run completely fine with the ExLlama loader, but not the 13B models. Is there any particular reason why this is? Here's the link to the model: [https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ)
2023-09-08T04:13:44
https://www.reddit.com/r/LocalLLaMA/comments/16d0c7i/why_is_exlamma_not_working/
ColossalsMassivePP
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d0c7i
false
null
t3_16d0c7i
/r/LocalLLaMA/comments/16d0c7i/why_is_exlamma_not_working/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iQGbrj214uMWPK-HotPrJejCPrSK0prSJcDKxxhAgRk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?width=108&crop=smart&auto=webp&s=544b494e915a04bb1b5da1882a4c57f3328ca7e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?width=216&crop=smart&auto=webp&s=a6b1e42f1da630210c0baa3d7fb11b8653c843f6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?width=320&crop=smart&auto=webp&s=a94ac8e2ff321b7635f38412f4abe0b347b50141', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?width=640&crop=smart&auto=webp&s=8d6526c7ba3e41e67f49e4d1f3fd267aeba747ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?width=960&crop=smart&auto=webp&s=44fbfdc0337aed206069c9843e7032e70183940a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?width=1080&crop=smart&auto=webp&s=34ccd1c69d13c3c352adae9693f30ec4eda5b770', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ld8czjiPIFwxUj7g05le420WGMIiILmr0nomQA_t_3I.jpg?auto=webp&s=89420c6171e3030d6205b54812904513bd118538', 'width': 1200}, 'variants': {}}]}
Guide: build llama.cpp on Windows with AMD GPUs, using ROCm
1
Steps for building llama.cpp on Windows with ROCm. 1. Check if your GPU is supported here: https://rocmdocs.amd.com/en/latest/release/windows_support.html. Things go really easily if your graphics card is supported. You need to note the `gfx` identifier. Either way, download and install. You don't need to update your display driver. 2. Download Visual Studio Community and install it. I haven't tested with MSYS64 or others, but just picked Visual Studio to be frictionless. Install MSVC 2022, C++ ATL, Security Issue, Profiling, CMake and Address Sanitizer. I believe you can get away with just the CMake toolkit and MSVC 2022 3. Download and install Git for Windows 4. Download and install Strawberry Perl. This is because hipcc is a Perl script and is used to build various things. 5. At last, download the release from llama.cpp. At the time of writing, the recent release is llama.cpp-b1198. Unzip and enter the folder. I downloaded and unzipped it to: C:\llama\llama.cpp-b1198\llama.cpp-b1198, after which I created a directory called build, so my final path is this: C:\llama\llama.cpp-b1198\llama.cpp-b1198\build 6. Once all this is done, you need to set the paths of the programs installed in 2-4. This is so that they are in the environment and we don't really need to fiddle around with stuff. Here is a list of their relevant exes: - CMake and Ninja: C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja - Git: C:\Program Files\Git\bin - Perl: C:\Strawberry\perl\bin You can set these using Settings -> Environment Variables (in the search box in Settings) -> Path (and then Edit). Add the above one by one using the New button. Once this is done, open a PowerShell window and use the cmdlet `Get-Command` to check if the tools have been exported into the environment successfully. 7. 
If the above is done without any hitches, go to the llama.cpp build folder. Open PowerShell (since I'm more comfortable with it) and execute the build command, replacing gfx with gfx1030 etc. for whatever your gfx id is: `cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER="clang.exe" -DCMAKE_CXX_COMPILER="clang++.exe" -DAMDGPU_TARGETS="gfx"` For people who haven't got a supported graphics card (like me with a 6800M), you'd need to recompile the Tensile library. I'll go over this in steps 9 and above. If everything has been installed and configured correctly, you would see these lines during the build configure process: ``` -- hip::amdhip64 is SHARED_LIBRARY -- Performing Test HIP_CLANG_SUPPORTS_PARALLEL_JOBS -- Performing Test HIP_CLANG_SUPPORTS_PARALLEL_JOBS - Success -- hip::amdhip64 is SHARED_LIBRARY -- HIP and hipBLAS found . . . -- Build files have been written to: C:/llama/llama.cpp-b1198/llama.cpp-b1198/build ``` 8. Build stuff now! `cmake --build . -j 16` You can replace 16 with the number of threads you've got, to make the build process faster. Go to the bin folder and use main.exe to run your models! Don't forget to use -ngl to offload models to your GPU. You will also need to state your device ID in case you've got multiple GPUs on your system (like an integrated and a discrete one) using this: `$env:HIP_VISIBLE_DEVICES=1`. Replace 1 with whatever you've got. 9. Compiling rocBLAS for unsupported GPUs. You're still going to need the dependencies and stuff we installed in 2-4. It's just that we need to compile the library that performs tensor operations and the like on your GPU. We are just going to recompile rocBLAS and Tensile using fallback mode to enable GPU support. BTW, you're gonna need Python on your PATH. 
It's easy to install from the Windows Store, so that it just exists on your system without much more fiddling; otherwise download and install it from python.org, find the .exe file location and set it in the environment. Anyhoo, marching on ahead: ``` git clone https://github.com/ROCmSoftwarePlatform/rocBLAS cd rocBLAS git checkout rocm-5.5.1 cd .. git clone https://github.com/ROCmSoftwarePlatform/Tensile cd Tensile git checkout rocm-5.5.1 ``` Eh, the ROCm version here isn't really going to matter. We clone rocBLAS and the Tensile library in the Git commands above. Download this file into the Tensile folder; it enables fallback-architecture lazy-loading and lets us compile for non-supported graphics cards. https://raw.githubusercontent.com/ulyssesrr/docker-rocm-xtra/f25f12835c1d0a5efa80763b5381accf175b200e/rocm-xtra-rocblas-builder/patches/Tensile-fix-fallback-arch-build.patch This patch can be applied using the Git command: `git apply Tensile-fix-fallback-arch-build.patch` Open cmd.exe in the rocBLAS folder, and run `python rmake.py -a gfx1031 --lazy-library-loading --no-merge-architectures -t C:\llama\Tensile`. Replace 1031 with your card's number. This number can be obtained by searching AMD's specification list or just across the internet. Alternatively, use GPU Caps Viewer. This entire series of steps can take ~15 minutes to an hour or so. Open `x64 Native Tools Command Prompt for VS 2022` as administrator, go to the rocBLAS directory and run `cmake --install build\release --prefix "C:\Program Files\AMD\ROCm\5.5"` where the location after --prefix is where ROCm is installed. If this is successful (sorry, I forgot to keep a log for this), you should be able to perform 7 and 8 above without any hitch! That's it!
2023-09-08T05:14:03
https://www.reddit.com/r/LocalLLaMA/comments/16d1hi0/guide_build_llamacpp_on_windows_with_amd_gpus_and/
fatboy93
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d1hi0
false
null
t3_16d1hi0
/r/LocalLLaMA/comments/16d1hi0/guide_build_llamacpp_on_windows_with_amd_gpus_and/
false
false
self
1
null
What are your thoughts on Corsair c8 dmatrix cards?
1
Hey all, What are your thoughts on the new Corsair C8 cards? They seem very promising for LLM use cases. Do you think these might ever end up in the retail space?
2023-09-08T05:37:54
https://www.reddit.com/r/LocalLLaMA/comments/16d1wx4/what_are_your_thoughts_on_corsair_c8_dmatrix_cards/
jhanjeek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d1wx4
false
null
t3_16d1wx4
/r/LocalLLaMA/comments/16d1wx4/what_are_your_thoughts_on_corsair_c8_dmatrix_cards/
false
false
self
1
null
Can a new MacBook with the M2 Max chip run LLaMA models locally?
1
Hi all, I’m new to this community and haven’t actually run LLaMA yet, but it’s definitely something I want to do once I have enough time to play around with it, so sorry if this question comes off wrong. I am looking to buy a new laptop and I want it to be able to run some LLaMA models (as big as possible). Would the new MacBook Pro with the M2 Max chip and 96GB unified memory be up to the task? If so, up to what model would it be good for (7B, 33B, etc.)? Also, one last question: when I see RAM requirements here, can I make a 1:1 comparison with unified memory? For example, if it says that some model needs 40GB RAM to run, can I equate that to 40GB unified memory? Thank you all again.
2023-09-08T06:02:01
https://www.reddit.com/r/LocalLLaMA/comments/16d2c8n/can_a_new_macbook_with_the_m2_max_chip_run_llama/
KarlTheFourth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d2c8n
false
null
t3_16d2c8n
/r/LocalLLaMA/comments/16d2c8n/can_a_new_macbook_with_the_m2_max_chip_run_llama/
false
false
self
1
null
Can I run a quantized Falcon 180B with 2x4090 and 128GB RAM with llama.cpp?
1
This would give me 128+48 GB of combined RAM and VRAM, so I would hope it is enough for a 4-bit quantized version of Falcon 180B with some offloading. Thanks
2023-09-08T06:24:03
https://www.reddit.com/r/LocalLLaMA/comments/16d2r8m/can_i_run_a_quantized_falcon_180b_with_2x4090_and/
franzdegoil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d2r8m
false
null
t3_16d2r8m
/r/LocalLLaMA/comments/16d2r8m/can_i_run_a_quantized_falcon_180b_with_2x4090_and/
false
false
self
1
null
Bringing open-source models to production
1
Did you manage to bring one of these models to production? If yes, for which use case? Do you have any suggestions?
2023-09-08T06:26:03
https://www.reddit.com/r/LocalLLaMA/comments/16d2sg6/bringing_opensource_models_to_production/
HorrorNo8851
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d2sg6
false
null
t3_16d2sg6
/r/LocalLLaMA/comments/16d2sg6/bringing_opensource_models_to_production/
false
false
self
1
null
Yet another reason why Open Source LLM is better than ChatGPT
1
2023-09-08T06:39:43
https://www.reddit.com/gallery/16d30i3
m0dE
reddit.com
1970-01-01T00:00:00
0
{}
16d30i3
false
null
t3_16d30i3
/r/LocalLLaMA/comments/16d30i3/yet_another_reason_why_open_source_llm_is_better/
false
false
nsfw
1
null
Any ideas? Pdf -> Q&A form -> embedding -> vector db -> similarity search -> chat gpt 3.5 answer
1
I want to build a chatbot for a company with a knowledge base of driver's licence theory. Speed is important, as are few hallucinations. Any ideas for applications to use, or other ways of doing it? If you have done this successfully, I'll gladly pay for some guidance on Skype. I was thinking of doing something like this: PDF -> Q&A form -> embedding -> vector DB -> similarity search -> GPT-3.5 answer
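The "embedding -> vector db -> similarity search" middle of that pipeline is, at its core, nearest-neighbour lookup over chunk vectors. A toy end-to-end sketch — the bag-of-words "embedding" here is a deliberate stand-in for a real embedding model, and the two chunks are invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would use a proper
    sentence-embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, chunks):
    """Similarity search: return the chunk whose vector is closest to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

chunks = [
    "You must carry your licence while driving.",
    "Headlights are required between sunset and sunrise.",
]
print(most_similar("when do I need headlights", chunks))
```

The retrieved chunk is then pasted into the GPT-3.5 prompt as context, which is the main lever against hallucination: the model answers from the retrieved theory text rather than from memory.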
2023-09-08T06:57:16
https://www.reddit.com/r/LocalLLaMA/comments/16d3ayv/any_ideas_pdf_qa_form_embedding_vector_db/
sexychipss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d3ayv
false
null
t3_16d3ayv
/r/LocalLLaMA/comments/16d3ayv/any_ideas_pdf_qa_form_embedding_vector_db/
false
false
self
1
null
How much context does Falcon 180B have?
1
Hi guys! I’m curious how much context the new Falcon 180B has. And by the way, which open-source model has a really good long context? Thanks
2023-09-08T08:36:43
https://www.reddit.com/r/LocalLLaMA/comments/16d4zve/how_much_context_does_falcon_180b_has/
Classic-Rise4742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d4zve
false
null
t3_16d4zve
/r/LocalLLaMA/comments/16d4zve/how_much_context_does_falcon_180b_has/
false
false
self
1
null
Is there a GUI I can run locally?
1
[removed]
2023-09-08T08:42:12
https://www.reddit.com/r/LocalLLaMA/comments/16d53a7/is_there_a_gui_i_can_run_locally/
CheapBison1861
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d53a7
false
null
t3_16d53a7
/r/LocalLLaMA/comments/16d53a7/is_there_a_gui_i_can_run_locally/
false
false
self
1
null
Distributed LLM?
1
I was doing some reading about how GPT-4 reportedly works using a combination of several smaller models working in parallel. I never really found something like this, so I was curious if it has ever been attempted in the self-hosted world. Let’s use one of my clients as an example, who has a net cafe with about 30 gaming PCs. Does a system exist that would allow him to offer up GPU resources to a collective pool running an LLM when they are not in use by a customer? I know crypto and AI people tend to be at odds with each other pretty often, but I am actually kind of surprised that there are not really any “mining pool equivalents” for AI people who are exceptionally VRAM-endowed. I am not referring to Google Colab, but more of a decentralized effort that anyone can participate in. I bet it has something to do with bandwidth and throughput, but there should be a way to do it by breaking things up into a marginally predictable range like a DAG would in crypto.
2023-09-08T08:47:38
https://www.reddit.com/r/LocalLLaMA/comments/16d56fq/distributed_llm/
SocietyTomorrow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d56fq
false
null
t3_16d56fq
/r/LocalLLaMA/comments/16d56fq/distributed_llm/
false
false
self
1
null
Request: HF Space (f16-hf to .gguf)
1
If someone knows how to set this up for CPU, it would help save time when testing new models. I don't know how much storage is given on the free tier, but hopefully enough to convert the largest models?
2023-09-08T08:52:18
https://www.reddit.com/r/LocalLLaMA/comments/16d5909/request_hf_space_f16hf_to_gguf/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d5909
false
null
t3_16d5909
/r/LocalLLaMA/comments/16d5909/request_hf_space_f16hf_to_gguf/
false
false
self
1
null
Smartest way to deploy Llama 2 in the cloud for a bunch of users?
1
Hello, I'm a student working on a project to implement a large language model (currently Llama 2-Chat) for internal documents. I started with the localGPT repo ([https://github.com/PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT)) and made some minor changes myself. The GGML models (provided by TheBloke) worked fine; however, I can't utilize the GPU on my own hardware, so answer times are pretty long. It seems rather complicated to get cuBLAS running on Windows. Currently I'm trying to run the new GGUF models with the current version of llama-cpp-python, which is probably another topic. Back to topic: the goal is to run the prototype in a cloud with better performance and availability. I wonder what is the smartest way to accomplish this? We will most likely use AWS, but there are many different instances and at the moment it is a bit overwhelming. * We would like to deploy the 70B-Chat Llama 2 model; however, we would need lots of VRAM. I guess prices would be very high just because of the high amount of memory needed. I read there aren't many AWS instances that could run such a big model. * I have never hosted a program in the cloud. Are there any pitfalls when I dockerize my application? AFAIK Docker is supposed to avoid compatibility problems. * Which OS for the server? I only use Windows on my own machines; however, it seems Linux / Ubuntu is the standard. This could make things easier for enabling the GPU (which I can't get done on my Windows machine); however, I have no experience with Linux. If we use Ubuntu, how can I make sure that everything works beforehand? It should be no problem if I emulate the environment via a Docker container, right? * How can an application be scaled on the server? At the moment my program is a simple Python script. How can I make this script available in parallel when, for example, 2 users want to chat? I guess the instance won't be able to do this automatically. Thanks for your help :)
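On the last question (serving 2 users from one Python script): a common pattern is to keep a single worker thread that owns the model and serialize requests through a queue, so concurrent chats don't fight over one GPU. A minimal stdlib sketch — run_model is a placeholder for the real generate call:

```python
import queue
import threading

def run_model(prompt):
    # Placeholder for the actual LLM inference call.
    return f"answer to: {prompt}"

def worker(jobs):
    """Single owner of the model: pulls (prompt, reply-queue) jobs forever."""
    while True:
        prompt, reply = jobs.get()
        if prompt is None:  # shutdown signal
            break
        reply.put(run_model(prompt))

def ask(jobs, prompt):
    """Called concurrently by request handlers; blocks until the worker answers."""
    reply = queue.Queue(maxsize=1)
    jobs.put((prompt, reply))
    return reply.get()

jobs = queue.Queue()
threading.Thread(target=worker, args=(jobs,), daemon=True).start()
print(ask(jobs, "hello from user 1"))
print(ask(jobs, "hello from user 2"))
jobs.put((None, None))  # shut the worker down
```

A web framework in front (one request handler per user) would simply push into the same queue, so scaling the number of simultaneous users never means loading the model twice.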
2023-09-08T09:09:27
https://www.reddit.com/r/LocalLLaMA/comments/16d5ih6/smartest_way_to_deploy_llama_2_in_the_cloud_for_a/
Purity1212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d5ih6
false
null
t3_16d5ih6
/r/LocalLLaMA/comments/16d5ih6/smartest_way_to_deploy_llama_2_in_the_cloud_for_a/
false
false
self
1
{'enabled': False, 'images': [{'id': 'E_5FnNIg3kev5qwuM1RHsFObVWuLWAPnaZQfti6BdXA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?width=108&crop=smart&auto=webp&s=cfef3b68b563ea10a56a29711353ae7f2a9223d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?width=216&crop=smart&auto=webp&s=6a370c26c3256986132a101e660810fa3e17831b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?width=320&crop=smart&auto=webp&s=d743a89fd5c563f6b68b6f6634651ef416b55844', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?width=640&crop=smart&auto=webp&s=5c4b793975a55c270a94499e513ff389bfbb230f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?width=960&crop=smart&auto=webp&s=68c58d70fb80ba2459270bc05fb2887366eee9ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?width=1080&crop=smart&auto=webp&s=d2a9961ca8752220bbe4f296c705b3118422de1f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/geIUP19Rkxzkd2uslaTQAztufxdZtaTfzPXwsqzJ41w.jpg?auto=webp&s=040be893eeb0f59f40f3decf610fd915aa7bf08e', 'width': 1200}, 'variants': {}}]}
Instruct-Tune Llama2 on single instruction
1
Hello, I'm trying to instruct-tune Llama 2 on a single instruction: extract date-time mentions from the input. Do you have any advice about the size of the dataset and which model I should use?
2023-09-08T10:00:57
https://www.reddit.com/r/LocalLLaMA/comments/16d6e96/instructtune_llama2_on_single_instruction/
enqidu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d6e96
false
null
t3_16d6e96
/r/LocalLLaMA/comments/16d6e96/instructtune_llama2_on_single_instruction/
false
false
self
1
null
Backwards is to Context, as Forwards is to ...?
1
[removed]
2023-09-08T11:18:01
https://www.reddit.com/r/LocalLLaMA/comments/16d7t82/backwards_is_to_context_as_forwards_is_to/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d7t82
false
null
t3_16d7t82
/r/LocalLLaMA/comments/16d7t82/backwards_is_to_context_as_forwards_is_to/
false
false
self
1
null
Need Help Setting Up a Server for future LLAMA 140B Support for 20 Users Simultaneously
1
Hi, I've been contemplating a project for quite some time, and I need your assistance in setting up a server. My goal is to create a server capable of providing future LLAMA 140B support to 20 users simultaneously. However, I lack experience in this area and don't know where to begin. I'm looking to gather recommendations and experiences from individuals who have knowledge about what I should consider when embarking on this project. Here are some of my questions: 1. What hardware specifications should I choose? What are your recommendations for the processor, RAM, storage, GPU (NVIDIA and AMD pros and cons), etc.? 2. Do you have any budget-related advice? I need your help with this project, so any information, suggestions, and experiences you can share would be greatly appreciated.
2023-09-08T11:25:22
https://www.reddit.com/r/LocalLLaMA/comments/16d7y63/need_help_setting_up_a_server_for_future_llama/
PickkNickk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d7y63
false
null
t3_16d7y63
/r/LocalLLaMA/comments/16d7y63/need_help_setting_up_a_server_for_future_llama/
false
false
self
1
null
M2 Max 96gb recommendations
1
Do you guys have any recommendations for coding models that run well on an M2 Max with 96GB?
2023-09-08T11:25:25
https://www.reddit.com/r/LocalLLaMA/comments/16d7y7o/m2_max_96gb_recommendations/
SvanseHans
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d7y7o
false
null
t3_16d7y7o
/r/LocalLLaMA/comments/16d7y7o/m2_max_96gb_recommendations/
false
false
self
1
null
A small test I did with falcon-180b-chat.Q2_K.gguf (at home on consumer grade hardware)
1
text-generation-webui loader: llama.cpp, n-gpu-layers: 10. 18.8 GB VRAM usage, 10.5 GB RAM usage (seems odd, I don't know how Ubuntu calculates that). My system hardware: GPU: RTX 3090, CPU: Ryzen 3950, RAM: 128 GB
2023-09-08T11:34:20
https://v.redd.it/uzxb30hpp0nb1
frapastique
/r/LocalLLaMA/comments/16d84ap/a_small_test_i_did_with_falcon180bchatq2_kgguf_at/
1970-01-01T00:00:00
0
{}
16d84ap
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/uzxb30hpp0nb1/DASHPlaylist.mpd?a=1696851262%2CYjc4ZjFjOTFmYThmMmRmZDA3YjAwN2EyMzNiOWYxYzIwNTU2NWI4ZjFlOWIzYjhkYWU5MjU4Zjc3NzI2MGFkMw%3D%3D&v=1&f=sd', 'duration': 78, 'fallback_url': 'https://v.redd.it/uzxb30hpp0nb1/DASH_720.mp4?source=fallback', 'height': 1280, 'hls_url': 'https://v.redd.it/uzxb30hpp0nb1/HLSPlaylist.m3u8?a=1696851262%2CY2E2MWUzNTIyNGRkMjA5OWVjOTRhZDUyMTg4M2Q3ZjQ0MjE5ZGIzZmJjNzllYWMzNjU3NjBkMmNjNmYwOGRmYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uzxb30hpp0nb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}}
t3_16d84ap
/r/LocalLLaMA/comments/16d84ap/a_small_test_i_did_with_falcon180bchatq2_kgguf_at/
false
false
https://external-preview…cfe26f1138a64c14
1
{'enabled': False, 'images': [{'id': 'dnJlcTc5ZHBwMG5iMRnAPs6niOHFgKB8gmzCcHvZ9MLoJH9Dvn-77eUaFHID', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dnJlcTc5ZHBwMG5iMRnAPs6niOHFgKB8gmzCcHvZ9MLoJH9Dvn-77eUaFHID.png?width=108&crop=smart&format=pjpg&auto=webp&s=c2ef77599648fb0622d558f092dc7e20970c9415', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/dnJlcTc5ZHBwMG5iMRnAPs6niOHFgKB8gmzCcHvZ9MLoJH9Dvn-77eUaFHID.png?width=216&crop=smart&format=pjpg&auto=webp&s=ca452bf4f19e123cbb6fc3f164e6a2e7fdff0b98', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/dnJlcTc5ZHBwMG5iMRnAPs6niOHFgKB8gmzCcHvZ9MLoJH9Dvn-77eUaFHID.png?width=320&crop=smart&format=pjpg&auto=webp&s=1aa144281ecebaa57a040a6aaad543823da762b8', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/dnJlcTc5ZHBwMG5iMRnAPs6niOHFgKB8gmzCcHvZ9MLoJH9Dvn-77eUaFHID.png?width=640&crop=smart&format=pjpg&auto=webp&s=18855500ae2d704d3703b074613546e51e4f5f74', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/dnJlcTc5ZHBwMG5iMRnAPs6niOHFgKB8gmzCcHvZ9MLoJH9Dvn-77eUaFHID.png?format=pjpg&auto=webp&s=1f80088d9e9a1d75039d58bc4ca327d77acfd50d', 'width': 886}, 'variants': {}}]}
Best LLaMa model to use with my older gaming hardware?
1
[removed]
2023-09-08T12:25:02
https://www.reddit.com/r/LocalLLaMA/comments/16d96mt/best_llama_model_to_use_with_my_older_gaming/
Alone_Sock_5303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d96mt
false
null
t3_16d96mt
/r/LocalLLaMA/comments/16d96mt/best_llama_model_to_use_with_my_older_gaming/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=108&crop=smart&auto=webp&s=96da2c256b06310619199b215ff7567afa27ee58', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=216&crop=smart&auto=webp&s=dcf240044368f708dc0d750badeebda2aa691840', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=320&crop=smart&auto=webp&s=c599e541e4322daef672cbc02a23caa268ce7a37', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=640&crop=smart&auto=webp&s=44654defcfaeda2a2f81d1711a6a01541805fe51', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=960&crop=smart&auto=webp&s=827015bf3dd3f28e66d59efa9228d6b755907d6d', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=1080&crop=smart&auto=webp&s=1382ec95526d9b1eec98203c2cb753b9c47060f2', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?auto=webp&s=e8f48766fae002673b469ec4740e89f7a0c7191f', 'width': 1140}, 'variants': {}}]}
Need help selecting model/s and preset tweaking.
1
[removed]
2023-09-08T12:26:30
https://www.reddit.com/r/LocalLLaMA/comments/16d97rf/need_help_selecting_models_and_preset_tweaking/
Alone_Sock_5303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d97rf
false
null
t3_16d97rf
/r/LocalLLaMA/comments/16d97rf/need_help_selecting_models_and_preset_tweaking/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=108&crop=smart&auto=webp&s=96da2c256b06310619199b215ff7567afa27ee58', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=216&crop=smart&auto=webp&s=dcf240044368f708dc0d750badeebda2aa691840', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=320&crop=smart&auto=webp&s=c599e541e4322daef672cbc02a23caa268ce7a37', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=640&crop=smart&auto=webp&s=44654defcfaeda2a2f81d1711a6a01541805fe51', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=960&crop=smart&auto=webp&s=827015bf3dd3f28e66d59efa9228d6b755907d6d', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=1080&crop=smart&auto=webp&s=1382ec95526d9b1eec98203c2cb753b9c47060f2', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?auto=webp&s=e8f48766fae002673b469ec4740e89f7a0c7191f', 'width': 1140}, 'variants': {}}]}
Self aware software / OS
1
Hi all, given everyone spends so much time mucking around with configuration, and documentation confusion in general, is anyone working on a self-aware operating environment? It seems kind of ironic to me that people are producing obscure interfaces when the end products are designed to be "reasonable," why not close the loop? The whole pile would then be generally easier to use and learn from. I guess training the model on the specific OS configuration and software would be the way to do this?
2023-09-08T12:27:12
https://www.reddit.com/r/LocalLLaMA/comments/16d98ak/self_aware_software_os/
nostriluu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d98ak
false
null
t3_16d98ak
/r/LocalLLaMA/comments/16d98ak/self_aware_software_os/
false
false
self
1
null
How to train LLMs with wikipedia articles? is not Q/A or classification so...
1
Some foundational models are trained on big text corpora, like Wikipedia articles. I wonder how this data is used to train models? I can understand question/answer datasets: you input the "question" and the model should output the "answer" (or best effort). Classification also seems obvious: you input a text, and the output should be "happy/sad/angry/..." or whatever the classes are. But for an article? I asked ChatGPT and it said it works roughly sentence by sentence, like: Input: ["Albert", "_Einstein", "_was", "_a", "_theoretical", "_physicist", "."] Output: ["▁He", "▁is", "▁best", "▁known", "▁for", "▁his", "▁theory", "▁of", "▁relativity", "."] Is this correct? So you just need to mix all the sentences from the article? This is a Wikipedia dataset: https://huggingface.co/datasets/wikipedia Thanks for any guidance!
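For what it's worth, causal language models are usually trained with next-token prediction over the running text itself, not with separate input/output sentence pairs: the target sequence is just the input shifted by one token. A toy sketch (a whitespace split stands in for a real subword tokenizer, which is an assumption for illustration):

```python
# Next-token prediction: training pairs come from the raw article text --
# at every position the model sees the prefix and must predict the next token.
text = "Albert Einstein was a theoretical physicist"
tokens = text.split()  # toy "tokenizer"; real training uses subword tokens

# (context, target) pairs derived from a single sentence:
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in pairs:
    print(context, "->", target)
```

In practice the whole corpus is concatenated and cut into fixed-length chunks, and the shift-by-one loss is computed over every position in each chunk at once.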
2023-09-08T12:28:45
https://www.reddit.com/r/LocalLLaMA/comments/16d99fz/how_to_train_llms_with_wikipedia_articles_is_not/
julio_oa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16d99fz
false
null
t3_16d99fz
/r/LocalLLaMA/comments/16d99fz/how_to_train_llms_with_wikipedia_articles_is_not/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FA1jWFXulT7lZyXWPZdjFlylZlzFqYUjDP8dzgiFAak', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?width=108&crop=smart&auto=webp&s=f463a50853c35dc69ac41b76a79699bd0349efc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?width=216&crop=smart&auto=webp&s=9bd4d32881af8580e745bd6d007de2d2fe2e4dd1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?width=320&crop=smart&auto=webp&s=c2e3a747bee025113e54dbfef63f7c00e1e4962c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?width=640&crop=smart&auto=webp&s=4e67365611cfa6401ee0bf317c89c4eaafb418aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?width=960&crop=smart&auto=webp&s=39ebf4797748e4876ea48dd0b0794c134b92961c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?width=1080&crop=smart&auto=webp&s=2e682be3f781e8977b2a3f48474bdae4c6b7a4a8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Oi2z79u6NJfBy5gv8hSvsJuqAhBTEz6ZiGcsMZEmPIs.jpg?auto=webp&s=ed836ec251abee61dad84b3fdcfcdd875af650c8', 'width': 1200}, 'variants': {}}]}
FLM-101B: An Open LLM and How to Train It with $100K Budget
1
2023-09-08T13:28:30
https://arxiv.org/abs/2309.03852
AlterandPhil
arxiv.org
1970-01-01T00:00:00
0
{}
16danhb
false
null
t3_16danhb
/r/LocalLLaMA/comments/16danhb/flm101b_an_open_llm_and_how_to_train_it_with_100k/
false
false
https://b.thumbs.redditm…dxDpLoGh20cs.jpg
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Got a second 3090, but seems there is no point running 70B?
1
Hi all, I finally got my hands on a second 3090, and it seems that for coding all available 70B models (i.e. Lemur, Wizard) are worse than the 15B Wizard, not to mention CodeLlama. In terms of casual chat, general knowledge questions, and summarization, I was not able to see enough improvement to justify the smaller context size. Is there something I'm missing, or are there no good 70B finetunes yet?
2023-09-08T13:50:04
https://www.reddit.com/r/LocalLLaMA/comments/16db5tt/got_a_second_3090_but_seems_there_is_no_point/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16db5tt
false
null
t3_16db5tt
/r/LocalLLaMA/comments/16db5tt/got_a_second_3090_but_seems_there_is_no_point/
false
false
self
1
null
How to run exllama in Google Colab without Text Generation WebUI
1
Hi, I'm a newbie (like, very very new). Has anyone run exllama in Google Colab without Text Generation WebUI? No need for the frontend, just the code to call the respective function and get the generated text in a variable. Thank you so much!
2023-09-08T14:35:46
https://www.reddit.com/r/LocalLLaMA/comments/16dc9k4/how_to_run_exllama_in_google_colab_without_text/
NegotiationTime3595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dc9k4
false
null
t3_16dc9k4
/r/LocalLLaMA/comments/16dc9k4/how_to_run_exllama_in_google_colab_without_text/
false
false
self
1
null
Math Proficient Language Models
1
**TLDR:** Are there language models compatible with a 24 GB RAM GPU setup (potentially through quantization) proficient in numerical and statistical reasoning? Hello everyone, I'm working with a detailed markdown report that dissects a dataset statistically, segmented into five key parts: 1. **Dataset Overview**: Enlisting column names and types, coupled with general observations. 2. **Statistical Analysis**: Detailing various statistical parameters (mean, median, mode, etc.) for each feature. 3. **Correlation Study**: Offering a correlation matrix and identifying feature pairs with high correlation. 4. **Regression/Classification**: Implementing a rudimentary linear regression or random tree classifier and presenting relevant metrics. 5. **Feature Importance**: Executing feature importance algorithms and delineating the significance of each feature. The report, as it stands, contains uninterpreted data. I'd like to use a language model capable of annotating different sections of the report with insightful observations, hypothetical relationships based on column names, and analysis of statistical distributions of features. I utilized the llama2-13B model with 4-bit quantization. Unfortunately, it failed to provide accurate interpretations, such as incorrectly analyzing the relationship between median and mean values to deduce data symmetry. Is there someone who has successfully utilized a language model adept in mathematical and statistical understanding, compatible with a 24 GB RAM GPU setup, and capable of offering accurate numerical considerations? Your insights and suggestions would be greatly appreciated!
2023-09-08T14:40:22
https://www.reddit.com/r/LocalLLaMA/comments/16dcdit/math_proficient_language_models/
PinballOscuro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dcdit
false
null
t3_16dcdit
/r/LocalLLaMA/comments/16dcdit/math_proficient_language_models/
false
false
self
1
null
How to train a Language LoRA?
1
I'd like to try to improve the language capabilities of Llama 2. Specifically, I'd like it to become better at understanding and writing Danish, and I hope to do that with a LoRA. But what form should the dataset take to improve its language capabilities? Does it have to be in the form of an instruction and answer, like the Alpaca dataset? Or do I just feed it a big text file in Danish with a lot of different random text? I have a 6 GB text file with lines consisting of a random sentence in English followed by the same sentence in Danish, and I should be able to parse it into a JSON format: [ { "somekey": "somevalue", "key2": "value2" }, { // etc } ] But I'm not sure if that is the way to go if I just want to improve the language capabilities. And would I use the English text as the key, and the Danish text as the value? I saw someone on YouTube using the Alpaca data format and then a translation service to translate it, but I don't think it would really improve the existing Danish in the Llama model by much, since auto-translations to Danish are rarely as good as those to major languages like German, French, or Spanish. If anyone has good or bad experiences with language LoRAs, I'd be eager to hear them!
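One possible shape for such a dataset is to wrap each English/Danish pair as an Alpaca-style translation instruction. A minimal sketch under the assumption that the pairs can be read from the corpus file (the single example sentence and the exact field names `instruction`/`input`/`output` follow the Alpaca convention and are illustrative):

```python
import json

# Hypothetical parallel corpus: each entry is (English, Danish).
# In the real 6 GB file these would be parsed from the sentence pairs.
pairs = [
    ("The weather is nice today.", "Vejret er dejligt i dag."),
]

# Wrap each pair as an Alpaca-style instruction record.
records = [
    {
        "instruction": "Translate the following sentence to Danish.",
        "input": en,
        "output": da,
    }
    for en, da in pairs
]

dataset = json.dumps(records, ensure_ascii=False, indent=2)
print(dataset)
```

Whether instruction-style pairs beat plain continued pretraining on raw Danish text is exactly the open question here; the snippet only shows how to get the parallel corpus into the instruction format if you go that route.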
2023-09-08T15:35:09
https://www.reddit.com/r/LocalLLaMA/comments/16dds7t/how_to_train_a_language_lora/
nixudos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dds7t
false
null
t3_16dds7t
/r/LocalLLaMA/comments/16dds7t/how_to_train_a_language_lora/
false
false
self
1
null
Side by side: MythoMax 13B vs. Falcon 180B [Dark fantasy prompt]
1
[removed]
2023-09-08T15:52:57
https://www.reddit.com/r/LocalLLaMA/comments/16de8ko/side_by_side_mythomax_13b_vs_falcon_180b_dark/
whtne047htnb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16de8ko
false
null
t3_16de8ko
/r/LocalLLaMA/comments/16de8ko/side_by_side_mythomax_13b_vs_falcon_180b_dark/
false
false
default
1
null
Discussion with Tom Campbell, author of My Big TOE -- when does it make sense to prefer relationships with LLMs/AI chatbots over relationships with humans?
1
2023-09-08T16:10:17
https://v.redd.it/j2r9kcyl22nb1
verdelyi
/r/LocalLLaMA/comments/16deos8/discussion_with_tom_campbell_author_of_my_big_toe/
1970-01-01T00:00:00
0
{}
16deos8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/j2r9kcyl22nb1/DASHPlaylist.mpd?a=1696867821%2CNjUwMDRlZTA2OGNhNDM1ZWY0ZjI1OGQxYjMzZWUyYWUzMzZjY2UzYmQyOGQ3ZmUzZTE0N2EwZDRkZWQxODdmZg%3D%3D&v=1&f=sd', 'duration': 189, 'fallback_url': 'https://v.redd.it/j2r9kcyl22nb1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/j2r9kcyl22nb1/HLSPlaylist.m3u8?a=1696867821%2CZmUwMWU4ZDI2NmYzOGIyNDdjMDYyMWMxMjI4MWNmZWZlZGY5NDM3MDdjNDg3MjFhMjMzNWRjYzk2YmZmOTE0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j2r9kcyl22nb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_16deos8
/r/LocalLLaMA/comments/16deos8/discussion_with_tom_campbell_author_of_my_big_toe/
false
false
https://b.thumbs.redditm…1d4qtMM6tuCw.jpg
1
{'enabled': False, 'images': [{'id': '-ot9_9B5gz56xlBth4iiDCID8WDWp6FOSSMzaeMYTH4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b263640574a520da492585ea6e92ac2d432c822', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?width=216&crop=smart&format=pjpg&auto=webp&s=f629840f449b1f51cc157c4a90be2e8749f6a15b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?width=320&crop=smart&format=pjpg&auto=webp&s=f0f3e73997a8882ac5a88fc85f0934a794449533', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?width=640&crop=smart&format=pjpg&auto=webp&s=a8ece3c2630cdb7ef8cf4ab2987e259477238038', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?width=960&crop=smart&format=pjpg&auto=webp&s=4c7f4f9530bd26619bf8cbf666388c83e2ca47b3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=46f0aadb086e9708e3354cf45195f9675c3d9743', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/pB-x8n3qm8Fl6YBRiwocXZCmEfna-2dXEAul8tbamVg.png?format=pjpg&auto=webp&s=7d456e3669b97a1ad3a65d989f8a205b6487cca1', 'width': 1280}, 'variants': {}}]}
Fine-tuning Llama 2 70B on 2 GPUs
1
So I am QLoRA fine-tuning Llama 2 70B on two GPUs. I somehow managed to make it work. However, the GPUs seem to peak in utilization in sequence: one will be at 100%, and then the other will be at 100%. Is this the intended behavior, or am I doing something wrong? I had hoped for slightly higher utilization.
2023-09-08T16:13:43
https://www.reddit.com/r/LocalLLaMA/comments/16dervd/finetuning_lama2_70b_on_2_gpu/
ComplexIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dervd
false
null
t3_16dervd
/r/LocalLLaMA/comments/16dervd/finetuning_lama2_70b_on_2_gpu/
false
false
self
1
null
CLI tool to benchmark 100+LLMs response, response time, cost
1
Hi r/LocalLLaMA I built a CLI tool to benchmark 100+ LLMs for a given question. Output allows you to compare responses, response time and cost. Try it here: [https://github.com/BerriAI/litellm/blob/main/cookbook/benchmark/readme.md](https://github.com/BerriAI/litellm/blob/main/cookbook/benchmark/readme.md) &#x200B; [CLI Output](https://preview.redd.it/pl4vp8ry32nb1.png?width=2312&format=png&auto=webp&s=956dfbf9f7884bfd436f335a6adf52b032a083cc) Simply select your LLMs, enter your API keys, LLM configs and run python3 benchmark.py Happy completion()!
2023-09-08T16:16:51
https://www.reddit.com/r/LocalLLaMA/comments/16deuph/cli_tool_to_benchmark_100llms_response_response/
Comfortable_Dirt5590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16deuph
false
null
t3_16deuph
/r/LocalLLaMA/comments/16deuph/cli_tool_to_benchmark_100llms_response_response/
false
false
https://b.thumbs.redditm…dMROqSdZIFsA.jpg
1
{'enabled': False, 'images': [{'id': 'ZjjQvwP8rnnvCTR0O6LXqEmuppiQJY9zV7VM3iR1g3M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?width=108&crop=smart&auto=webp&s=9aea425b42f2aa232de9d63b2c9232572b901009', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?width=216&crop=smart&auto=webp&s=9f02ce8e409c072ede798c4350af642f3fd5bc13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?width=320&crop=smart&auto=webp&s=66669f6482cd12cdc5ffc15c9e776e0f6e2a7a15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?width=640&crop=smart&auto=webp&s=b1c1628eea4e03b64a7e6e35140a77dc84ecccad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?width=960&crop=smart&auto=webp&s=cd47c7b3eb94dd8e1a856bc3e1a256962bb23824', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?width=1080&crop=smart&auto=webp&s=f346e89e8fd742bffd55f9b02750a2cbb1214083', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WoWsgBarV7pJ_QNbTHyvJsS0v31zAspms2baMkqvzbk.jpg?auto=webp&s=9318d99b651b240b9ebc4dcc273847beeae1e48e', 'width': 1200}, 'variants': {}}]}
awesome llama models
1
I’m getting lost on the plenty of different fine tunes of llama are available on HF, is there any curated list explaining which model in TheBloke’s HF profile is tuned for?
2023-09-08T16:23:30
https://www.reddit.com/r/LocalLLaMA/comments/16df0ra/awesome_llama_models/
SK33LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16df0ra
false
null
t3_16df0ra
/r/LocalLLaMA/comments/16df0ra/awesome_llama_models/
false
false
self
1
null
The Trade-Off Between Model Size and Effectiveness in Local LLMs: How Can We Make Powerful Models More Accessible?
1
Greetings, fellow LLaMa herders! As someone who has been meticulously following the rapid advancements in the realm of local Large Language Models (LLMs), I've observed a persistent issue that we, as a community, need to address. While the progress has been nothing short of remarkable, these local LLMs still lag significantly behind OpenAI's GPT-3.5, which has 175 billion parameters. The recent release of the Falcon 180B model shows that we can create models of comparable size, but the hardware requirements for running such a model are astronomical. We're talking about a model that demands around 400GB of VRAM, a requirement far beyond the reach of most individuals, who might at best have access to a couple of RTX 4090 GPUs, each with a mere 24GB of VRAM. #### Understanding the Training Process Before diving into the trade-off between size and effectiveness, let's consider how these models are trained. They're built on trillions of tokens from the internet, encompassing a wide range of topics. This helps the model learn language, form logic, and acquire domain-specific knowledge. The larger the model, the more it can store, and the better it performs. However, this approach has a significant downside. The trillions of training tokens we have scraped from the internet include a lot of highly specialized topics, ranging from Familial Mediterranean Fever to category theory to the Fermi Paradox, which unnecessarily bloats the model's size, as all of this information is somehow stored in the model's parameters. Consider the analogy of how a child learns. In the early stages, a child's learning is focused primarily on grasping the fundamentals of language and acquiring a foundational layer of knowledge that enables logical reasoning. It's only after these basics are firmly in place that the child moves on to more specialized or niche topics. Shouldn't we adopt a similar approach for LLMs? #### Potential Solutions 1. **Pruning Large Models**: One approach could be to train a large model like Falcon 180B and then prune it to remove unnecessary information. However, this seems like a daunting task, and I don't even know if it is possible. 2. **Specialized Training Sets**: An alternative, and in my opinion more promising, strategy would be to curate specialized training sets that focus predominantly on language understanding and foundational knowledge. This could pave the way for smaller, more manageable models—perhaps in the ballpark of 13 billion parameters—that are both coherent in their responses and accessible to the average user, as they are both effective and hardware-friendly. I believe the second approach holds promise for creating models that are both coherent and accessible. What are your thoughts on this?
2023-09-08T16:30:42
https://www.reddit.com/r/LocalLLaMA/comments/16df70w/the_tradeoff_between_model_size_and_effectiveness/
barbarous_panda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16df70w
false
null
t3_16df70w
/r/LocalLLaMA/comments/16df70w/the_tradeoff_between_model_size_and_effectiveness/
false
false
self
1
null
llama-cpp-guidance: A python package that enables you to use Guidance with llama-cpp-python
1
2023-09-08T16:57:47
https://github.com/nicholasyager/llama-cpp-guidance
trickyager
github.com
1970-01-01T00:00:00
0
{}
16dfv1w
false
null
t3_16dfv1w
/r/LocalLLaMA/comments/16dfv1w/llamacppguidance_a_python_package_that_enables/
false
false
https://b.thumbs.redditm…ZCX1wY-5fiSs.jpg
1
{'enabled': False, 'images': [{'id': 'CeZDt2k4hYDw3St9mXhK4EosNPKj6Ya9OJWzdThyEUw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?width=108&crop=smart&auto=webp&s=857f8708d6612c0a626204f06785146d17fb9169', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?width=216&crop=smart&auto=webp&s=a6d654d0eddbbfd4bcbd523631de1cbf1a25e8b6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?width=320&crop=smart&auto=webp&s=601ea83c13ba66aac3e84a6f487f0332844e8ac3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?width=640&crop=smart&auto=webp&s=d2c60cad2682b68562e1c2a0a595917da1749f57', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?width=960&crop=smart&auto=webp&s=b7ad7c3c921fd0150183d04f6f22ea83ef86fe68', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?width=1080&crop=smart&auto=webp&s=f0c443a0636e62210ed309fe09396718bd5707da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uiKJxNyfY3536anVIBzL_2LfbPMsh0K9rE4NH83s_zw.jpg?auto=webp&s=e40c4bb354399f6f4bda5627120fcd2f2c173bf7', 'width': 1200}, 'variants': {}}]}
Has anyone been successful telling a Llama-2 model to override its math?
1
[removed]
2023-09-08T17:10:19
https://www.reddit.com/r/LocalLLaMA/comments/16dg6ns/has_anyone_been_successful_telling_llama2_model/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dg6ns
false
null
t3_16dg6ns
/r/LocalLLaMA/comments/16dg6ns/has_anyone_been_successful_telling_llama2_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zCZRl5IHUtOgLP6kzFh5KDelz3UJxd-L1dGiQwqBJPA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/lu8NhtJOUgHqqyYcKduymZVmYs5CzX1ZypRdadhK53Y.jpg?width=108&crop=smart&auto=webp&s=bc9b56bb720581fe5bc2b4328ca1ff86069f2e0e', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/lu8NhtJOUgHqqyYcKduymZVmYs5CzX1ZypRdadhK53Y.jpg?auto=webp&s=4044cf9262375528c6c5b13faec0d4309f97dace', 'width': 200}, 'variants': {}}]}
Falcon 180b performance on a Mac M2 Ultra w/192GB
1
Hey,

Figured some folks would find this interesting. Using llama.cpp & TheBloke's Falcon 180B Chat GGUF ([https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF)). Fastest model is `Q4_0` at about 2 tokens/second.

https://i.redd.it/vg581owpi2nb1.gif

All runs generated 200 tokens; "mem required" is the llm_load_tensors figure (+ 320.00 MB per state in every case).

| Quant | Mem required (MB) | Tokens/s | Gen time (s) | Context | Seed |
|-------|-------------------|----------|--------------|---------|------|
| Q2_K | 70536.36 | 1.38 | 145.28 | 79 | 1759407072 |
| Q3_K_L | 87721.75 | 1.70 | 117.39 | 79 | 1136538994 |
| Q3_K_M | 81230.28 | 1.49 | 134.27 | 79 | 251348613 |
| Q3_K_S | 74164.25 | 1.33 | 150.22 | 79 | 330699490 |
| Q4_0 | 96779.66 | 2.05 | 97.72 | 282 | 614401052 |
| Q4_K_M | 103455.55 | 1.82 | 110.13 | 485 | 1068172880 |
| Q4_K_S | 96779.66 | 2.09 | 95.65 | 79 | 436631851 |
| Q5_0 | 118064.76 | 1.55 | 129.09 | 79 | 433094704 |
| Q5_K_M | 124920.54 | 1.48 | 135.39 | 79 | 801488217 |
| Q5_K_S | 118064.76 | 1.50 | 132.95 | 79 | 1991044959 |
| Q6_K | 140680.17 | 1.42 | 141.24 | 79 | 1709395580 |
| Q8_0 | 181920.04 | n/a | n/a | n/a | n/a |

Q8_0 put RAM at 191GB used, generated 1 token and stopped, then started compressing RAM.
2023-09-08T17:39:24
https://www.reddit.com/r/LocalLLaMA/comments/16dgxa4/falcon_180b_performance_on_a_mac_m2_ultra_w192gb/
ferminaut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dgxa4
false
null
t3_16dgxa4
/r/LocalLLaMA/comments/16dgxa4/falcon_180b_performance_on_a_mac_m2_ultra_w192gb/
false
false
https://b.thumbs.redditm…0_FB3kdSSKCE.jpg
1
null
Is it possible to get the text-generation-webui, with any model, to read a local file like a Python script?
1
I just started messing around with local AI models. I have a decent computer running 32 GB RAM and an RTX 4090 card. I have installed text-generation-webui locally via the Anaconda command line, and I have several models downloaded, for example the Hugging Face model "TheBloke\_WizardCoder-Python-13B-V1.0-GPTQ". I can run text-generation-webui and access it via the localhost website, and from there I can load the model. It seems to work: I can ask it basic questions and it gives me answers, e.g. "make a Python script that can count from 1 to 100". What I am unable to figure out is: if you have a local file on your computer that is too big to copy and paste into the prompt, how do you get it to load that local file? Or maybe this isn't possible? Any advice or tips would be appreciated.
2023-09-08T19:06:47
https://www.reddit.com/r/LocalLLaMA/comments/16dj5ri/is_it_possible_to_get_the_textgenerationwebui/
jb2kewl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dj5ri
false
null
t3_16dj5ri
/r/LocalLLaMA/comments/16dj5ri/is_it_possible_to_get_the_textgenerationwebui/
false
false
self
1
null
Is it possible to get a locally installed AI using text-generation-webui to read local files on your computer?
1
I just started messing around with local AI models. I have a decent computer running 32 GB RAM and an RTX 4090 card. I have installed text-generation-webui locally via the Anaconda command line, and I have several models downloaded, for example the Hugging Face model "TheBloke\_WizardCoder-Python-13B-V1.0-GPTQ". I can run text-generation-webui and access it via the localhost website, and from there I can load the model. It seems to work: I can ask it basic questions and it gives me answers, e.g. "make a Python script that can count from 1 to 100". What I am unable to figure out is: if you have a local file on your computer that is too big to copy and paste into the prompt, how do you get it to load that local file? Or maybe this isn't possible? Any advice or tips would be appreciated.
2023-09-08T19:08:32
https://www.reddit.com/r/LocalLLaMA/comments/16dj796/is_it_possible_to_get_the_local_installed_ai/
jb2kewl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dj796
false
null
t3_16dj796
/r/LocalLLaMA/comments/16dj796/is_it_possible_to_get_the_local_installed_ai/
false
false
self
1
null
Why Do You Not Use Open Source AI? (Or do you?)
1
[removed] [View Poll](https://www.reddit.com/poll/16djv3d)
2023-09-08T19:34:24
https://www.reddit.com/r/LocalLLaMA/comments/16djv3d/why_do_you_not_use_open_source_ai_or_do_you/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16djv3d
false
null
t3_16djv3d
/r/LocalLLaMA/comments/16djv3d/why_do_you_not_use_open_source_ai_or_do_you/
false
false
self
1
null
How to infuse domain-specific knowledge into LLaMA?
1
I am exploring the possibilities of infusing domain-specific knowledge into LLMs, and so far I have used LLaMA v1. I have fine-tuned it on DBLP subgraph-to-text data (which I generated), using LoRA. DBLP's knowledge graph contains publications, their authors, and their affiliations. By fine-tuning I wanted to test the ability of LLMs to retain knowledge, and I will eventually compare against a dedicated question-answering system on knowledge graphs (KGQA).

Results:

- the fine-tuned model just produced numbers as strings (essentially just hallucinating)
- some generated responses were mostly wrong (random paper names, authors' fields of research, and their affiliations all wrong)

My understanding:

- this is the wrong way to do it.
- I learned that PEFT methods are mainly useful for adapting to task-specific behaviours, and all the tutorials and blogs on the internet conclude that. Instead of PEFT, should I use full fine-tuning?

Discussion:

- How do I evaluate this knowledge infusion? (metrics)
- What resources are required for full fine-tuning of LLaMA?

I will appreciate any resources or suggestions on approaching this problem. Thanks!
2023-09-08T19:39:19
https://www.reddit.com/r/LocalLLaMA/comments/16djzeg/how_to_infuse_domain_specific_knowledge_into_llama/
omiz_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16djzeg
false
null
t3_16djzeg
/r/LocalLLaMA/comments/16djzeg/how_to_infuse_domain_specific_knowledge_into_llama/
false
false
self
1
null
AWS Inferentia2 Accelerators
1
Has anyone used AWS' Inferentia2 Accelerators for their local LLM? I don't see any support for them in oobabooga's platform. However, they are about half the cost of GPUs on AWS.
2023-09-08T20:24:07
https://www.reddit.com/r/LocalLLaMA/comments/16dl484/aws_inferentia2_accelerators/
SatoshiReport
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dl484
false
null
t3_16dl484
/r/LocalLLaMA/comments/16dl484/aws_inferentia2_accelerators/
false
false
self
1
null
Using a RAG Model for Semantic Search & Document Question and Answering
1
[removed]
2023-09-08T20:32:55
https://www.reddit.com/r/LocalLLaMA/comments/16dlc8r/using_a_rag_model_for_semantic_search_document/
JabootieeIsGroovy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dlc8r
false
null
t3_16dlc8r
/r/LocalLLaMA/comments/16dlc8r/using_a_rag_model_for_semantic_search_document/
false
false
self
1
null
Settings for Oobabooga for 13b chronos-hermes L2 GPTQ for an RTX 3090? I keep getting an out of memory error.
1
I've searched on here for info but I can't figure it out. Every single model I load gives an out-of-memory error; I've tried 4-bit quant 30b/33b models and 13b models. I tried various loaders like ExLlama and the others in the dropdown that I recognized the name of. Nothing works.

System specs:

1. RTX 3090
2. GTX 1070
3. GTX 1050 (display out)
4. 28GB DDR3 system RAM, 80GB page file
5. 256GB SATA SSD with about 100GB free after several models installed, plus a 2TB spinning-rust disk
6. Intel E3-1270 V3 (malaise-era Intel quad, slightly better than the 4790K on the same socket)

Could someone please screencap their settings page for any good RP models, because I understand the configuration part zero. Stable Diffusion was a lot easier to get running. I was hoping to try Airoboros 33b, WizardLM 30b, and Chronos-Hermes L2 13b, but I've used none so far and am open to suggestions. Thank you for your help.
2023-09-08T20:34:50
https://www.reddit.com/r/LocalLLaMA/comments/16dldyu/settings_for_oobabooga_for_13b_chronoshermes_l2/
RottieCumGutter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dldyu
false
null
t3_16dldyu
/r/LocalLLaMA/comments/16dldyu/settings_for_oobabooga_for_13b_chronoshermes_l2/
false
false
self
1
null
Using 13b on 8gb vram?
1
[removed]
2023-09-08T21:28:09
https://www.reddit.com/r/LocalLLaMA/comments/16dmt3o/using_13b_on_8gb_vram/
hot_cat22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16dmt3o
false
null
t3_16dmt3o
/r/LocalLLaMA/comments/16dmt3o/using_13b_on_8gb_vram/
false
false
self
1
null