title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀ = null) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I posted this in r/Oobabooga: instructions on how to convert documents with math equations into something local LLMs can understand.
| 1 | 2023-09-20T00:56:10 |
https://www.reddit.com/r/Oobabooga/comments/16n7dm8/how_to_go_from_pdf_with_math_equations_to_html/
|
Inevitable-Start-653
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16n7nzu
| false | null |
t3_16n7nzu
|
/r/LocalLLaMA/comments/16n7nzu/i_posted_this_in_roobabooga_instructions_on_how/
| false | false | 1 | null |
||
Useful prompts for Llama 2 (13B)?
| 1 |
I'm finding Llama 2 13B Chat (I use the MLC version) to be a really useful model to run locally on my M2 MacBook Pro.
I'm interested in prompts people have found that work well for this model (and for Llama 2 in general). I'm interested in both system prompts and regular prompts, and I'm particularly interested in summarization, structured data extraction and question-and-answering against a provided context.
What's worked best for you so far?
| 2023-09-20T01:14:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16n81s2/useful_prompts_for_llama_2_13b/
|
simonw
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16n81s2
| false | null |
t3_16n81s2
|
/r/LocalLLaMA/comments/16n81s2/useful_prompts_for_llama_2_13b/
| false | false |
self
| 1 | null |
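For the summarization and structured-extraction use cases asked about above, here is an editor-added illustration of a system prompt plus task prompt pair. The wording is an assumption, not a benchmarked recommendation for Llama 2 13B:

```python
# Example prompt pair for structured data extraction with a Llama 2 chat model.
# Both strings are illustrative assumptions; tune them against your own documents.
system_prompt = (
    "You are a precise assistant. Use only the provided context. "
    "If a field is missing from the context, output null for it."
)
user_prompt = (
    "Extract the following fields from the text below and answer as JSON with keys "
    '"name", "date", and "amount".\n\n'
    "Text:\n{context}"
)
```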
What is the Loss Function that's used when fine-tuning Llama v2 using Hugging Face Trainer & PEFT?
| 1 |
I am unable to find out which loss function is used when fine-tuning Llama v2. For example, in the following Llama Recipes script, where Llama v2 is fine-tuned using PEFT and the HF Trainer, what's the loss function?
[https://github.com/facebookresearch/llama-recipes/blob/main/examples/quickstart.ipynb](https://github.com/facebookresearch/llama-recipes/blob/main/examples/quickstart.ipynb)
| 2023-09-20T01:24:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16n8978/what_is_the_loss_function_thats_used_when/
|
panini_deploy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16n8978
| false | null |
t3_16n8978
|
/r/LocalLLaMA/comments/16n8978/what_is_the_loss_function_thats_used_when/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'aISsVDsnc0WvuJaDO8SxWqjvVjWFcYkkgpu6ZcUkNFo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=108&crop=smart&auto=webp&s=5c74e5c748b530e423e9f50ae29fb0814c6c0176', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=216&crop=smart&auto=webp&s=e5c886c579c9e2c22b0b41bcb62ce414cd3947ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=320&crop=smart&auto=webp&s=b7b5fc1d79fb7212115bef2bc06e5f31afc0c418', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=640&crop=smart&auto=webp&s=7c4578c14e54bce2e4aec7c2e1d2d1e1c56219cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=960&crop=smart&auto=webp&s=7055f6516a6ad95341bfbb116f4eac996ac5342d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=1080&crop=smart&auto=webp&s=323e46361ea24c7f94d522a9b7dbbdb1322122c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?auto=webp&s=0e6386757d1ae9c4d349d0d73e5ffb67de0b5c1d', 'width': 1200}, 'variants': {}}]}
|
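On the loss-function question above: for causal language models the Hugging Face `Trainer` uses whatever loss the model's forward pass returns when `labels` are supplied, and for Llama-style models that is token-level cross-entropy on next-token prediction with the labels shifted by one. A minimal sketch of that computation, independent of the linked notebook:

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over next-token prediction, as used by HF causal LM models.

    logits: (batch, seq_len, vocab_size); labels: (batch, seq_len), with -100 marking
    positions that should be ignored (e.g. padding or masked prompt tokens).
    """
    shift_logits = logits[:, :-1, :].contiguous()   # position t predicts token t+1
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )
```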
ERROR
| 1 |
I am very bad at computers, so if the solution is too obvious, I apologize, but for 4 days now I can not use any GGML model. It all started when I updated, and since then I can not load models or download them, giving me this error.
https://preview.redd.it/9i3f81y3lbpb1.png?width=657&format=png&auto=webp&s=e8a966158449efc04d31111d1ba3fd0408723b1f
| 2023-09-20T02:16:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16n9e3g/error/
|
SeleucoI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16n9e3g
| false | null |
t3_16n9e3g
|
/r/LocalLLaMA/comments/16n9e3g/error/
| false | false | 1 | null |
|
What's the least finicky way of setting up your environment with all needed drivers? (Have had terrible luck with CUDA)
| 1 |
[removed]
| 2023-09-20T02:31:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16n9p23/whats_the_least_finnicky_way_of_setting_up_your/
|
Ill_Fox8807
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16n9p23
| false | null |
t3_16n9p23
|
/r/LocalLLaMA/comments/16n9p23/whats_the_least_finnicky_way_of_setting_up_your/
| false | false |
self
| 1 | null |
GitHub - arcee-ai/DALM: Domain Adapted Language Modeling Toolkit
| 30 | 2023-09-20T02:48:59 |
benedict_eggs17
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16na2i1
| false | null |
t3_16na2i1
|
/r/LocalLLaMA/comments/16na2i1/github_arceeaidalm_domain_adapted_language/
| false | false | 30 |
{'enabled': True, 'images': [{'id': 'JS_M9AshBvEOur8b2vb6kc9KPxWSwtP2jaPgFcsyxPI', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?width=108&crop=smart&auto=webp&s=f222d6f5240ebe7855a118fcc8376f2104825237', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?width=216&crop=smart&auto=webp&s=1ad12a09114fda79ec2ceb6da5721fb9d94fcf8d', 'width': 216}, {'height': 116, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?width=320&crop=smart&auto=webp&s=1cc67778de095eca5f8d87d6f495ba483d12d968', 'width': 320}], 'source': {'height': 187, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?auto=webp&s=3e8f2923df6d321150849b7a3d9fc008a5b7ed5a', 'width': 512}, 'variants': {}}]}
|
|||
Building a Rig with 4x A100 80GBs.
| 1 |
I've got 4x A100 80GBs. However, I don't have the necessary hardware to run these GPUs.
I came across this amazing blog [https://www.emilwallner.com/p/ml-rig](https://www.emilwallner.com/p/ml-rig) on building an ML rig. Cool article, and credits to Emil Wallner. However, the article is based on A6000s instead of A100s.
Checking here to see if there are any recommendations.
| 2023-09-20T04:54:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16ncf8t/building_a_rig_with_4x_a100_80gbs/
|
Aristokratic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ncf8t
| false | null |
t3_16ncf8t
|
/r/LocalLLaMA/comments/16ncf8t/building_a_rig_with_4x_a100_80gbs/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'e_PsddG_p2NXpD1yrnTaioQiaXsafsBFCDfSFYlnB4I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=108&crop=smart&auto=webp&s=774f3a9d1828aa102a1d63d9975c32918f171611', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=216&crop=smart&auto=webp&s=ccbe4b5eeff1021b22968a20895e96f8eb1a8d25', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=320&crop=smart&auto=webp&s=4d41793a9333b479edd7f1f1670daba3b7d64705', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=640&crop=smart&auto=webp&s=7e11a6114cefef978dfc2bb7b8f9b67edffe85ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=960&crop=smart&auto=webp&s=ef387969a23265d23e2de360d9b740ba5a855765', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=1080&crop=smart&auto=webp&s=4b245081ae9376a11e2daec946c21c2eb43166bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?auto=webp&s=146bb9bbb17132c8df9fbf62e5968815cfdb0836', 'width': 1200}, 'variants': {}}]}
|
Building a Rig with 4x A100 80GBs.
| 1 |
I've got 4x A100 80GBs. However, I don't have the necessary hardware to run these GPUs.
I came across this amazing blog [https://www.emilwallner.com/p/ml-rig](https://www.emilwallner.com/p/ml-rig) on building an ML rig. Cool article, and credits to Emil Wallner. However, the article is based on A6000s instead of A100s.
Checking here to see if there are any recommendations.
| 2023-09-20T04:55:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16ncfv3/building_a_rig_with_4x_a100_80gbs/
|
Aristokratic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ncfv3
| false | null |
t3_16ncfv3
|
/r/LocalLLaMA/comments/16ncfv3/building_a_rig_with_4x_a100_80gbs/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'e_PsddG_p2NXpD1yrnTaioQiaXsafsBFCDfSFYlnB4I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=108&crop=smart&auto=webp&s=774f3a9d1828aa102a1d63d9975c32918f171611', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=216&crop=smart&auto=webp&s=ccbe4b5eeff1021b22968a20895e96f8eb1a8d25', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=320&crop=smart&auto=webp&s=4d41793a9333b479edd7f1f1670daba3b7d64705', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=640&crop=smart&auto=webp&s=7e11a6114cefef978dfc2bb7b8f9b67edffe85ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=960&crop=smart&auto=webp&s=ef387969a23265d23e2de360d9b740ba5a855765', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=1080&crop=smart&auto=webp&s=4b245081ae9376a11e2daec946c21c2eb43166bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?auto=webp&s=146bb9bbb17132c8df9fbf62e5968815cfdb0836', 'width': 1200}, 'variants': {}}]}
|
Best prompt and model for fact-checking a text (disinformation/fake-news detection)
| 1 |
Given a short text <text_to_check>, I want the LLM to check whether there are facts stated in the text which are NOT true. So I want to detect 'disinformation' / 'fake news', and the LLM should report which parts of the text are not true.
What would the best prompt look like for this task?
And what is the best 'compact' Llama-2-based model for it? I suppose some kind of instruction-following LLM. The LLM shall run on a mobile device with <= 8 GB RAM, so the largest model I can afford is ~13B (with 4-bit quantization in the llama.cpp framework).
Looking at the Alpaca Leaderboard ([https://tatsu-lab.github.io/alpaca_eval/](https://tatsu-lab.github.io/alpaca_eval/)), the best 13B models there are XWinLM (not sure if supported by llama.cpp), OpenChat V3.1 and WizardLM 13B V1.2, so I suppose I will use one of those models.
| 2023-09-20T05:53:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16ndfjl/best_prompt_and_model_for_factchecking_a_text/
|
Fit_Check_919
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ndfjl
| false | null |
t3_16ndfjl
|
/r/LocalLLaMA/comments/16ndfjl/best_prompt_and_model_for_factchecking_a_text/
| false | false |
self
| 1 | null |
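A possible shape for such a fact-checking prompt, added here purely as an illustration; the wording is my own assumption and has not been tested against the models mentioned in the post:

```python
# Hypothetical claim-checking prompt template; {text_to_check} is the input document.
fact_check_prompt = """You are a fact-checking assistant.
Read the text below and list every factual claim it makes.
Label each claim TRUE, FALSE, or UNVERIFIABLE, with a one-sentence justification.
Finish with the exact sentences that contain claims you labeled FALSE.

Text:
{text_to_check}"""
```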
Factor Influencing Adoption Intention of ChatGPT
| 1 |
[removed]
| 2023-09-20T06:42:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16ne840/factor_influencing_adoption_intention_of_chatgpt/
|
maulanashqd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ne840
| false | null |
t3_16ne840
|
/r/LocalLLaMA/comments/16ne840/factor_influencing_adoption_intention_of_chatgpt/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'UkFqfr7lSuAFg9mUz_jInPKcC1j_Ky3K93e2w3MYQI4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=108&crop=smart&auto=webp&s=c8e39df1b87dcced1d5f643a3b2fb282ea2f30aa', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=216&crop=smart&auto=webp&s=b4368e8e587f907694d21fe31d7029716f7ac0ce', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=320&crop=smart&auto=webp&s=3da41f4fe0f20d341508882fa047d3085c71dec0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=640&crop=smart&auto=webp&s=5dffa40731338d6098200c3458c91da172f40704', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=960&crop=smart&auto=webp&s=78c9392d75f0275297e652722897761b65a530f3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=1080&crop=smart&auto=webp&s=02ebcf3a2ada90de4d35b9778b6deff6ba9c5128', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?auto=webp&s=5568c826f59b55198f92aa0b9507b5bf582867f4', 'width': 1200}, 'variants': {}}]}
|
How to get multiple generations from the same prompt (efficiently)?
| 1 |
I want to get multiple generations from the same prompt. Does textgen or ExLlama have some sort of batching/caching wizardry that allows this to be done faster than just prompting n times?
| 2023-09-20T07:00:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16neiwh/how_to_get_multiple_generations_from_the_same/
|
LiquidGunay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16neiwh
| false | null |
t3_16neiwh
|
/r/LocalLLaMA/comments/16neiwh/how_to_get_multiple_generations_from_the_same/
| false | false |
self
| 1 | null |
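For the batching question above, the plain `transformers` route (not textgen or ExLlama specifically) is `num_return_sequences`, which expands the prompt into a batch so the samples are decoded in one call instead of n sequential ones. A minimal sketch, assuming an HF-format model and enough VRAM for the batch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any HF causal LM behaves the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

inputs = tok("Write a haiku about llamas.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    num_return_sequences=4,   # four samples decoded in one batched call
    max_new_tokens=64,
)
for seq in outputs:
    print(tok.decode(seq, skip_special_tokens=True))
```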
Parallel decoding in llama.cpp - 32 streams (M2 Ultra serving a 30B F16 model delivers 85t/s)
| 1 | 2023-09-20T07:14:49 |
https://twitter.com/ggerganov/status/1704247329732145285
|
Agusx1211
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ner8x
| false |
{'oembed': {'author_name': 'Georgi Gerganov', 'author_url': 'https://twitter.com/ggerganov', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Initial tests with parallel decoding in llama.cpp<br><br>A simulated server processing 64 client requests with 32 decoding streams on M2 Ultra. Supports hot-plugging of new sequences. Model is 30B LLaMA F16<br><br>~4000 tokens (994 prompt + 3001 gen) with system prompt of 305 tokens in 46s <a href="https://t.co/c5e1txZvzD">pic.twitter.com/c5e1txZvzD</a></p>— Georgi Gerganov (@ggerganov) <a href="https://twitter.com/ggerganov/status/1704247329732145285?ref_src=twsrc%5Etfw">September 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ggerganov/status/1704247329732145285', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_16ner8x
|
/r/LocalLLaMA/comments/16ner8x/parallel_decoding_in_llamacpp_32_streams_m2_ultra/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'rwYhPIZp9ipbmOcg5DJPCng3EuMor_FQv92xWKyQrc0', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/LMdbFaHz0owcVpIJh19Cj19WxOPsN56Or9QrA58X8o4.jpg?width=108&crop=smart&auto=webp&s=48b41d1fd4e984aa619ecccdab147fd9eb77a0ef', 'width': 108}], 'source': {'height': 97, 'url': 'https://external-preview.redd.it/LMdbFaHz0owcVpIJh19Cj19WxOPsN56Or9QrA58X8o4.jpg?auto=webp&s=a03ff1c4146f776bd9dc9b8e7d15be3ef69bf570', 'width': 140}, 'variants': {}}]}
|
||
adding new factual knowledge to llama 2 13B
| 1 |
I want to insert new factual information into Llama 2 13B using LoRA. My dataset has details about people, and I'm inputting the file as a raw text file. I attempted to apply LoRA to all linear layers with ranks up to 256, but the model isn't performing well. Can anyone suggest an alternative? I prefer not to use RAG.
Sample from dataset:
fahad jalal:
fahad jalal is currently working as a founder and ceo at [qlu.ai](https://qlu.ai/)
[qlu.ai](https://qlu.ai/) operates in different industries automation, nlp, artificial intelligence, recruitment
fahad jalal has been working at [qlu.ai](https://qlu.ai/) since 10-2020
fahad jalal has a total experience of 0 years in various industries laptops, telecommunications, consumer electronics, artificial intelligence, mobile, electronics, iot (internet of things), gadgets
fahad jalal is a highly skilled professional based in san francisco bay area, united states
fahad jalal has a variety of skills including venture capital, natural language processing (nlp), sales, business development, product development and deep reinforcement learning
fahad jalal can be contacted at email: [[email protected]](mailto:[email protected])
fahad jalal completed his master of business administration - mba in finance from the wharton school in year 2010 to 2012
fahad jalal completed his msc in computer science from stanford university in year 2008 to 2010
fahad jalal worked as a founding investor & chairman at chowmill from 3-2018 to 3-2019
fahad jalal worked as a founder and ceo at sitterfriends from 3-2016 to 9-2018
fahad jalal worked as a founding investor & business development at smartron from 4-2016 to 5-2017
Inference Prompt:
tell me about fahad jalal from qlu.ai
| 2023-09-20T07:41:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16nf670/adding_new_factual_knowledge_to_llama_2_13b/
|
umair_afzal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nf670
| false | null |
t3_16nf670
|
/r/LocalLLaMA/comments/16nf670/adding_new_factual_knowledge_to_llama_2_13b/
| false | false |
self
| 1 | null |
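For reference alongside the setup described above, here is a minimal PEFT sketch that applies LoRA to all linear projections of a Llama 2 block. The module names are the usual ones for Llama models in `transformers`; the rank and alpha values are only illustrative, and nothing here settles whether LoRA is the right tool for injecting facts:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", device_map="auto")

lora_config = LoraConfig(
    r=256,                      # rank mentioned in the post; lower ranks are more common
    lora_alpha=512,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[            # all linear layers in a Llama block
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```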
Apple M2 Ultra -- 76‑core GPU _VS_ 60-core gpu for LLMs worth it?
| 15 |
What is the state of the art for LLMs as far as being able to utilize Apple's GPU cores on M2 Ultra?
The price difference between the two Apple M2 Ultra configurations with a 24-core CPU that differ only in GPU cores (76-core GPU vs 60-core GPU, otherwise the same CPU) is almost $1k.
Are GPU cores worth it - given everything else (like RAM etc.) being the same?
| 2023-09-20T08:10:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16nfmkc/apple_m2_ultra_76core_gpu_vs_60core_gpu_for_llms/
|
Infinite100p
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nfmkc
| false | null |
t3_16nfmkc
|
/r/LocalLLaMA/comments/16nfmkc/apple_m2_ultra_76core_gpu_vs_60core_gpu_for_llms/
| false | false |
self
| 15 | null |
(Discussion) What interesting applications can you think of if you have a virtual personal assistant (e.g. Siri) powered by local LLM.
| 1 |
Hi all,
I've recently been working on LLM-powered virtual personal assistants (VPA). Currently we have managed to use an LLM (GPT-4 or Vicuna) to enable smartphone task automation, i.e. the personal agent automatically navigates the GUI of smartphone apps to complete user-specified tasks. See our paper:
Empowering LLM to use Smartphone for Intelligent Task Automation
[https://arxiv.org/abs/2308.15272](https://arxiv.org/abs/2308.15272)
However, what concerns me currently is that people actually do not frequently use VPA for such automation tasks. At least for me, directly interacting with the GUI is much more efficient (and more comfortable in public space) than using voice commands.
So I would like to hear some advice on this topic. Can you think of other useful/interesting applications if you have a virtual personal assistant powered by local LLM?
| 2023-09-20T08:11:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16nfmmw/discussion_what_interesting_applications_can_you/
|
ylimit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nfmmw
| false | null |
t3_16nfmmw
|
/r/LocalLLaMA/comments/16nfmmw/discussion_what_interesting_applications_can_you/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
What generation of AI do you think we're in?
| 1 |
In other fields (mainly military), generations are used to mark a substantial change in tactics and/or technology. There are gen 5 aircraft, 5th generation warfare, etc. Given this framing, what generation of AI do you think we're currently in?
| 2023-09-20T08:23:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16nftfc/what_generation_of_ai_do_you_think_were_in/
|
glencoe2000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nftfc
| false | null |
t3_16nftfc
|
/r/LocalLLaMA/comments/16nftfc/what_generation_of_ai_do_you_think_were_in/
| false | false |
self
| 1 | null |
FineTuning vs LangChain .. or a Mix ?
| 1 |
Hey everyone,
Excuse me if I sound like a noob on these topics, but I recently discovered the world of Hugging Face and local LLMs. I work at a company that provides software services to a lot of clients, and one of our products is supposed to be a chatbot that helps customers by guiding them or answering any question about that client. In addition, each customer has some personal data in the client's database, such as name, date joined, activity, and other analytics regarding their membership with that specific client, all of which can easily be retrieved through our company's API, which returns user info in JSON format.
I came across this post [https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/](https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/)
which further made me realize that fine-tuning a model on that client's specific data might be a good idea:
collect data, structure it (preferably in the #instruction/#input/#output format, which I saw works well based on the alpaca-cleaned dataset), and fine-tune one of the Hugging Face models on it. That way, the model would be able to answer any question about that client in a specific way rather than the generic way a general LLM would.
OK, great. Say we fine-tuned the model on that client's data and it can now answer any question about the client: FAQs, general info, how to register, how to cancel, etc. But it got me thinking: how would I also train the model to answer queries where the user asks about specific info about themselves, not just generic info about the client? Imagine the client is a gym: how can I train the model to also answer a question like "When does my membership end?" Should I include examples of these question types in the training data, with some fake data about a fake user? Or is there no need to train the model on these question types, and should I instead do it with LangChain, by specifying chains that take the user info and the user's query as input, while also providing some examples in the prompt to help the LLM understand how the questions are supposed to be answered?
But then, how would I do that in LangChain for many different types of queries? Should I just think of every possible query a user could ask about their own info?
Sorry if I rambled some nonsense; I'd be happy for some advice from you guys.
| 2023-09-20T08:29:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16nfwhx/finetuning_vs_langchain_or_a_mix/
|
Warm_Shelter1866
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nfwhx
| false | null |
t3_16nfwhx
|
/r/LocalLLaMA/comments/16nfwhx/finetuning_vs_langchain_or_a_mix/
| false | false |
self
| 1 | null |
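One pattern relevant to the membership-style questions above is to keep user-specific facts out of the fine-tune entirely and inject them into the prompt at request time. A framework-agnostic sketch; the endpoint URL, field names, and prompt wording are all hypothetical:

```python
import json
import requests  # stand-in for your company's internal API client

def build_prompt(user_id: str, question: str) -> str:
    # Hypothetical endpoint that returns the member's record as JSON.
    user_info = requests.get(f"https://api.example.com/users/{user_id}").json()
    return (
        "You are a support assistant for the client. Answer using only the member data below.\n"
        f"Member data (JSON): {json.dumps(user_info)}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("12345", "When does my membership end?"))
```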
WizardLM loves more overloading rather than threading
| 1 | 2023-09-20T08:46:35 |
https://syme.dev/articles/blog/9/WizardLM-loves-more-overloading-rather-than-threading
|
nalaginrut
|
syme.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
16ng5ze
| false | null |
t3_16ng5ze
|
/r/LocalLLaMA/comments/16ng5ze/wizardlm_loves_more_overloading_rather_than/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'e8dh3t-CfSqdbRFfp_3eUoUXK5qoG8Yq4CI2rdIMn4o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=108&crop=smart&auto=webp&s=c55b5dd89000c4ce3be4e84a5631e3ade7d1a07b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=216&crop=smart&auto=webp&s=aa871d00a6ccad49d4cf4c45bb72dc2645e2d56e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=320&crop=smart&auto=webp&s=8b8dce4a443bbe94667348d60c2ace1f4f67b259', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=640&crop=smart&auto=webp&s=78ddb55b99b06b265be44255e284d359da635e54', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=960&crop=smart&auto=webp&s=b7f34cbcc35ad6beb92cabd0954183281b05eee1', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?auto=webp&s=b2f7d3cd91cd9fb9462d740774cab18bb721e3d6', 'width': 1024}, 'variants': {}}]}
|
||
How Mirostat works
| 1 |
Like most others, I've been playing around a lot with Mirostat when using Llama 2 models. But it was mostly done by changing a parameter and then seeing if I liked the result, without really knowing what I was doing.
So I downloaded the pdf on Mirostat: [https://openreview.net/pdf?id=W1G1JZEIy5\_](https://openreview.net/pdf?id=W1G1JZEIy5_)
And I had ClaudeAI summarize it: [https://aiarchives.org/id/r2kkPhjhShZYh3gfZi9l](https://aiarchives.org/id/r2kkPhjhShZYh3gfZi9l)
The key takeaways seem to be:
- A tau setting of 3 produced the most human-like answers in the researchers' tests.
- The eta controls how quickly Mirostat tries to control the perplexity. 0.1 is recommended (range 0.05 - 0.2).
- The general Temperature setting is **still** in effect and will affect output. The temperature and Mirostat operate independently.
My guess is that there will be a lot of differences between models for which settings are giving the desired results, but it is nice to at least know the baseline.
If anyone knows more, or has corrections to the validity of the above, please add them so this post can become a good starting point for others.
| 2023-09-20T09:54:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16nh7x9/how_mirostat_works/
|
nixudos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nh7x9
| false | null |
t3_16nh7x9
|
/r/LocalLLaMA/comments/16nh7x9/how_mirostat_works/
| false | false |
self
| 1 | null |
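To make the tau/eta description above concrete, here is a simplified sketch of a Mirostat v2-style sampling step: truncate to tokens whose surprise is below the running threshold mu, sample from what is left, then nudge mu toward the target tau. Variable names are mine, and details such as initialising mu to 2*tau follow common implementations rather than any verified backend:

```python
import math
import random

def mirostat_v2_step(token_probs, mu, tau=3.0, eta=0.1):
    """One sampling step: keep tokens whose surprise <= mu, sample, then update mu toward tau."""
    # Surprise of a token is -log2(p); mu is the current surprise cap.
    candidates = [(t, p) for t, p in token_probs.items() if -math.log2(p) <= mu]
    if not candidates:  # fall back to the single most likely token
        candidates = [max(token_probs.items(), key=lambda kv: kv[1])]
    total = sum(p for _, p in candidates)
    r, acc = random.uniform(0, total), 0.0
    for token, p in candidates:
        acc += p
        if acc >= r:
            break
    observed_surprise = -math.log2(p)
    mu -= eta * (observed_surprise - tau)   # steer the average surprise toward tau
    return token, mu

# mu is typically initialised to 2 * tau at the start of generation.
```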
Sharing 12k dialog dataset
| 1 |
Just pushed my first dataset to the hugging face hub.
It's 12,000 records of English dialogs between a user and an assistant discussing movie preferences. Check it out here: [ccpemv2 dataset hf](https://huggingface.co/datasets/aloobun/ccpemv2)
While this dataset may not be particularly impressive, feedback is welcome to improve and enhance it.
| 2023-09-20T10:23:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16nhpm2/sharing_12k_dialog_dataset/
|
Roots91
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nhpm2
| false | null |
t3_16nhpm2
|
/r/LocalLLaMA/comments/16nhpm2/sharing_12k_dialog_dataset/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'I6B01KTEHAX-40R-vyYV6QxacJH1_S0dU9tVe2RhqCk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=108&crop=smart&auto=webp&s=c67a11631c620963ca74ba0b8916ac78cb7620b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=216&crop=smart&auto=webp&s=10a7fbcff5f941f3e199ef35d011123e77a4f0f6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=320&crop=smart&auto=webp&s=e87d4aa9b1352ebe2f1c63ff4591ab377c4c04be', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=640&crop=smart&auto=webp&s=3b075a28887f85e279103258016cca88a0ff7c04', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=960&crop=smart&auto=webp&s=6383c290f002cb4828090e2b34c98b1ca48b7dfa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=1080&crop=smart&auto=webp&s=72e2edaba664f33f6b8140041ace25f5910bdc25', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?auto=webp&s=5415e46d2d7d2ec345d937f3d90fc491acf35e62', 'width': 1200}, 'variants': {}}]}
|
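For anyone who wants to inspect it, the dataset should be loadable with the standard `datasets` API; the repo id is taken from the link above and the split name is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("aloobun/ccpemv2")  # repo id from the post's link
print(ds)                             # shows the available splits
print(ds["train"][0])                 # assumes a "train" split exists
```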
Running Llama-2-7b-chat-hf2 on Colab with Streamlit but getting a value error
| 1 |
This is the error:
**ValueError**: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
Is it because of streamlit? Because I can run the model code and the inferences on colab without streamlit and it runs smoothly. I am using T4.
| 2023-09-20T11:06:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16nigim/running_llama27bchathf2_on_colab_with_streamlit/
|
charbeld
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nigim
| false | null |
t3_16nigim
|
/r/LocalLLaMA/comments/16nigim/running_llama27bchathf2_on_colab_with_streamlit/
| false | false |
self
| 1 | null |
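The error above is `accelerate` reporting that the 8-bit weights no longer fit entirely on the T4 once Streamlit's process is also consuming memory, and its message points at the workaround. A hedged sketch of what that looks like with `transformers`; the exact kwarg names can differ between versions, so treat this as an approximation:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # keep offloaded modules in fp32 on the CPU
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=quant_config,
    device_map="auto",   # or a custom device_map that pins some modules to "cpu"
)
```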
Help on Local LLama-2-7B-GPTQ Generating Nonsense
| 1 |
Beginner here!
I installed a Llama-2-7B model from TheBloke, and I am running it through text-generation-webui.
I have the temperature turned down to .1.
I have a couple of questions:
1) In the Chat window of textgen-webui, I can enter a prompt, such as a simple question like, "Write a poem about large language models." The reply comes out okay, but then at the end of the reply it starts doing some other funny stuff, like giving itself a prompt: "[INST] what were the most popular movies in 1974 [\INST] According to Box Office Mojo, the top grossing films..." Then in my subsequent prompts, it continued to add messages like "P.S. Here's that information about the top grossing films of 1974 that you requested..." Eventually I wrote "forget all previous instructions." It apologized and then added "P.S. As promised, here's a list of the top ten highest grossing..." I asked why it keeps bringing that up even though I never asked for it, and it apologized and then added, "However, I'd like to point out that the list of top 10 highest grossing movies is not necessarily representative of the best or most important films..." What is going on here, and how do I get the model to behave normally, without constantly going off on its own tangent (which seems to be all about the top 10 grossing movies of 1974)?
2) How do I use the "Default" or "Notebook" tab in textgen-webui? If I use the Default tab, I can type into the Input box: "[INST] Who was the US president in 1996?[\INST]" And then instead of providing an answer in the output box, it starts producing a bunch of other questions like "who was the president during...", as if using my text as a template to generate other similar text instead of answering my question. I tried without the [INST] and [\INST] tags and just asked a question straight up (with the Prompt type selected as None): "Who was the US president in 1996?" It said "nobody knows - it is not a question that can be answered by anyone." Then it continued to generate: "Who was the President of the United States in 2014? Barack Obama (Democrat) What year did Bill Clinton become? 1993-2001..." It did not answer my question accurately, and then proceeded to generate more questions and answers. What am I doing wrong here?
| 2023-09-20T12:44:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16nkdtk/help_on_local_llama27bgptq_generating_nonsense/
|
theymightbedavis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nkdtk
| false | null |
t3_16nkdtk
|
/r/LocalLLaMA/comments/16nkdtk/help_on_local_llama27bgptq_generating_nonsense/
| false | false |
self
| 1 | null |
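A likely contributor to both problems above is prompt formatting: Llama-2 chat GPTQ models expect the instruction template with a forward slash in the closing tag, `[/INST]`, not `[\INST]`, and in the Default/Notebook tab the raw text is simply continued, so an untemplated question invites more questions. A sketch of the single-turn format; the system prompt wording is just an example:

```python
# Llama 2 chat single-turn format; the SYSTEM_PROMPT content is only an example.
SYSTEM_PROMPT = "You are a helpful assistant. Answer the question directly and then stop."

def llama2_chat_prompt(user_message: str) -> str:
    return (
        f"<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

print(llama2_chat_prompt("Who was the US president in 1996?"))
```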
Please help to identify my bottleneck
| 1 |
Based on the specs below, what should I upgrade to increase token speed? (At the moment I get 1.5 t/s on average for a 13B model.)
i7-6700, 3.4 GHz, 4 cores
GTX 1080 Ti, 11 GB VRAM
4 x 8 GB DDR4 RAM sticks at 2133 MHz (32 GB RAM total)
| 2023-09-20T13:29:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16nld01/please_help_to_identify_my_bottleneck/
|
Everlast_Spring
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nld01
| false | null |
t3_16nld01
|
/r/LocalLLaMA/comments/16nld01/please_help_to_identify_my_bottleneck/
| false | false |
self
| 1 | null |
Creating a lora on top of llama2
| 1 |
I was able to train a LoRA on Llama 2 (4-bit) to do a simple task, sentiment analysis, on the IMDB dataset. I asked the model to read the text and produce a JSON file with a key (sentiment) and a value (the sentiment itself), but I noticed that while it does produce actual JSON, it also talks a bunch of nonsense along with the JSON. Is there a way for me to get the model to be less chatty?
I am fine with the model talking and reasoning before the JSON (it's okay to be chatty there), but I need it to end with a result in JSON format. Any way I can accomplish this?
| 2023-09-20T13:33:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16nlfoh/creating_a_lora_on_top_of_llama2/
|
klop2031
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nlfoh
| false | null |
t3_16nlfoh
|
/r/LocalLLaMA/comments/16nlfoh/creating_a_lora_on_top_of_llama2/
| false | false |
self
| 1 | null |
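Until the model reliably ends with clean JSON, one pragmatic option for the setup above is to let it reason freely and then pull the last parseable JSON object out of the reply in post-processing. A small sketch (flat objects only; nested JSON would need a real parser):

```python
import json
import re

def extract_last_json(text: str):
    """Return the last parseable {...} object in the model's reply, or None."""
    candidates = re.findall(r"\{[^{}]*\}", text)   # flat objects only
    for candidate in reversed(candidates):
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            continue
    return None

reply = 'The review is clearly negative... Final answer: {"sentiment": "negative"}'
print(extract_last_json(reply))   # {'sentiment': 'negative'}
```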
Trying Local LLM at my mid level machine for PDF Chatting
| 1 |
Task: a local-machine LLM powering a web app / GUI app to talk to multiple PDFs, summarize them, and add some custom features such as searching for patterns in the text and looking up multiple words with references in the PDFs.
My specs: Acer Aspire 5 15, 4 GB NVIDIA GPU + 1 GB Intel GPU, 16 GB RAM, i7 13th Gen, 1 TB SSD, and 100 MB+ WiFi speed.
How you could help: any guidance on which LLM to pick, what to consider, possible solutions, tutorials; any type of help is appreciated.
The source of the problem, and about me: I'm a data analyst and LLM enthusiast.
For my passion projects and my job I often need to refer to many research papers and books to complete my projects and make sense of data.
So I was trying to set up an LLM on my local machine, but I keep running into index/out-of-memory errors and app crashes, and since I couldn't find a good solution on YouTube or the web, I'm turning to the OGs.
| 2023-09-20T13:35:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16nlhmd/trying_local_llm_at_my_mid_level_machine_for_pdf/
|
Late-Cartoonist-6528
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nlhmd
| false | null |
t3_16nlhmd
|
/r/LocalLLaMA/comments/16nlhmd/trying_local_llm_at_my_mid_level_machine_for_pdf/
| false | false |
self
| 1 | null |
download docs with python?
| 1 |
Hey, I'm looking for a way to download documentation for Python libraries with code. Please let me know if there's any method to do so.
TIA
| 2023-09-20T14:04:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16nm54a/download_docs_with_python/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nm54a
| false | null |
t3_16nm54a
|
/r/LocalLLaMA/comments/16nm54a/download_docs_with_python/
| false | false |
self
| 1 | null |
RAG with Llama Index and Weaviate, running Llama (or other OSS LLM) in your preferred cloud
| 1 | 2023-09-20T14:35:06 |
https://dstack.ai/examples/llama-index-weaviate/
|
cheptsov
|
dstack.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
16nmv60
| false | null |
t3_16nmv60
|
/r/LocalLLaMA/comments/16nmv60/rag_with_llama_index_and_weaviate_running_llama/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'JTcKtGa2qZ7PzjHMO-iU_t-RM5uj65iFoAUXAhzif5E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=108&crop=smart&auto=webp&s=4affa77233a21a1f61dbe247109c708fd4314ef7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=216&crop=smart&auto=webp&s=bfc3b4d2bcc23d5a68650d483f5bd75aefb39040', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=320&crop=smart&auto=webp&s=fe9b649f234048c4abe1e30511216654807ffedc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=640&crop=smart&auto=webp&s=0a3514e16fcc257f9420c4170f81a0c745468776', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=960&crop=smart&auto=webp&s=18e3f02c9a22c048c51821a59b935c3227d91416', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=1080&crop=smart&auto=webp&s=e580c6a8fc6ce8fff8a979a841abdb4ab5dd97b1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?auto=webp&s=806e66e49423a126e19b19b2cd02f52e06576d8e', 'width': 1200}, 'variants': {}}]}
|
||
Apples to apples comparison for quantizations of different sizes? (7b, 13b, ...)
| 1 |
It seems clear to me that running LLaMA 7b at fp16 would make no sense, when for the same memory budget you'd get better responses from quantized LLaMA 13b. But at what point is that not true anymore? 4 bits? 3 bits? lower?
The [EXL2 quantizations of 70b models](https://huggingface.co/turboderp/Llama2-70B-chat-exl2) that fit under 24GB (so, around ~2.5 bits or lower) seem to have such terrible perplexity that they're probably useless (correct me if I'm wrong).
Is there a chart that can compare the performance of the models so we can plot a graph of their relative intelligence? (sadly we still don't have LLaMA 2 34b but we do have CodeLLaMA)
| 2023-09-20T14:39:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16nmyqq/apples_to_apples_comparison_for_quantizations_of/
|
Dead_Internet_Theory
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nmyqq
| false | null |
t3_16nmyqq
|
/r/LocalLLaMA/comments/16nmyqq/apples_to_apples_comparison_for_quantizations_of/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Lj-zVINNFIhgkCCi8OjcVv75PGVzPrxYr8YgDp29FyU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=108&crop=smart&auto=webp&s=2438f41f56e0c76c3c0486f9f2b9b00a247fee45', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=216&crop=smart&auto=webp&s=b559bfe971bbd5b0bac28a55f3610a67d5744d73', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=320&crop=smart&auto=webp&s=1850a3cd0e8fea9996ca27e519bbe483aa9b7dca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=640&crop=smart&auto=webp&s=a1cd7b5db2e75e499a1e3f41c2961b96459bea11', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=960&crop=smart&auto=webp&s=86a2490b7c13c3121936b036d9fb231def6e92d1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=1080&crop=smart&auto=webp&s=4415a844618fd17357a9002c0dd1b93029b69966', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?auto=webp&s=4d3a6cfece3bea3233db444100f62f5f690af4f3', 'width': 1200}, 'variants': {}}]}
|
Yup. Works great!
| 1 | 2023-09-20T15:06:29 |
FPham
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16nnmfp
| false | null |
t3_16nnmfp
|
/r/LocalLLaMA/comments/16nnmfp/yup_works_great/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'tWjX79d-gtnFmVH8RsMpIYZA2eqy1LtN29KxXmw58Ks', 'resolutions': [{'height': 23, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=108&crop=smart&auto=webp&s=c235e42caeb69eb67f4c6ebd75f24c5881307014', 'width': 108}, {'height': 47, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=216&crop=smart&auto=webp&s=6583f2cdd57cff72febfd69ef25dbfccf611b923', 'width': 216}, {'height': 70, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=320&crop=smart&auto=webp&s=986a4479ba6267fb35055f921f70773eee3f2150', 'width': 320}, {'height': 140, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=640&crop=smart&auto=webp&s=548ec31222bcbeaa4dedc6b878deb4f4a9d478a2', 'width': 640}], 'source': {'height': 176, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?auto=webp&s=199652208a9987da5a6b683c40a89aacbc4c13a2', 'width': 799}, 'variants': {}}]}
|
|||
Contrastive Decoding Improves Reasoning in Large Language Models: LLaMA-65B beats LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and outperforms LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark
| 1 | 2023-09-20T15:20:52 |
https://arxiv.org/abs/2309.09117
|
Inevitable-Start-653
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
16nnz2t
| false | null |
t3_16nnz2t
|
/r/LocalLLaMA/comments/16nnz2t/contrastive_decoding_improves_reasoning_in_large/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
||
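For readers skimming the linked paper, the core mechanism is to score each candidate token by the gap between an expert model's log-probability and a smaller amateur model's, restricted to tokens the expert already finds plausible. A simplified sketch with my own notation, not the paper's exact formulation:

```python
import torch

def contrastive_decoding_scores(expert_logits, amateur_logits, alpha=0.1):
    """Score = log p_expert - log p_amateur, over the expert's plausible tokens only."""
    expert_logp = torch.log_softmax(expert_logits, dim=-1)
    amateur_logp = torch.log_softmax(amateur_logits, dim=-1)
    # Plausibility constraint: keep tokens with p_expert >= alpha * max(p_expert).
    cutoff = torch.log(torch.tensor(alpha)) + expert_logp.max(dim=-1, keepdim=True).values
    scores = expert_logp - amateur_logp
    scores = scores.masked_fill(expert_logp < cutoff, float("-inf"))
    return scores  # take the argmax (greedy) or sample over these scores
```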
Falcon 180B minimum specs?
| 1 |
I've been playing with the [Falcon 180B demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo), and I have to say I'm rather impressed with it and would like to investigate running a private instance of it somewhere. Ideally, I'd run it locally, but I highly doubt that's within reach financially.
I'm curious about what folks think the minimum specifications might be for running TheBloke's Q6_K GGUF quantization at "baseline tolerable" (for me that's 4-5t/s, but more is obviously preferable)? The model card says 150.02 GiB max ram required, but that doesn't give me an idea on performance. Basically I'm trying to get a sense of what kind of cloud resources I'd be looking at to run one of these.
I assume the demo is using full (unquantized) model. Is there any way to determine what kind of hardware that demo is running on? It seems remarkably fast most of the time, but I suspect that there is some monster compute being thrown at it behind the scenes.
Finally, like others on this forum I've been considering a Mac Studio 192gb for running this model locally. I've read folks are getting 3-4t/s with the above quantized model. How likely is it that quantization methods or llama.cpp improvements or the like come along in the next couple of years and increase that performance? Or should I assume that today's performance with that model on that hardware is more or less where it's going to stay? If not the Mac Studio, what kind of PC hardware should I be looking into if I were going to consider trying to run inference on this model outside of a cloud environment?
| 2023-09-20T15:47:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16nonc5/falcon_180b_minimum_specs/
|
Beautiful-Answer-327
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nonc5
| false | null |
t3_16nonc5
|
/r/LocalLLaMA/comments/16nonc5/falcon_180b_minimum_specs/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]}
|
Urgent question: uploading docs to llama
| 1 |
So I want to know: if I build a chatbot with Llama 2, is there a feature that would allow me to upload my local documents onto the platform and help me with queries related to them?
| 2023-09-20T18:07:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16ns4hk/urgent_question_uploading_docs_to_llama/
|
InevitableMud3393
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ns4hk
| false | null |
t3_16ns4hk
|
/r/LocalLLaMA/comments/16ns4hk/urgent_question_uploading_docs_to_llama/
| false | false |
self
| 1 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution
| 1 |
[removed]
| 2023-09-20T18:52:46 |
https://www.reddit.com/r/Oobabooga/comments/16ml7mr/llamator_a_new_initiative_for_bittorrentbased_ai/
|
Nondzu
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16nt8no
| false | null |
t3_16nt8no
|
/r/LocalLLaMA/comments/16nt8no/llamator_a_new_initiative_for_bittorrentbased_ai/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]}
|
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution
| 1 | 2023-09-20T18:54:17 |
http://www.reddit.com/r/Oobabooga/comments/16ml7mr/llamator_a_new_initiative_for_bittorrentbased_ai/
|
Nondzu
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16nt9yi
| false | null |
t3_16nt9yi
|
/r/LocalLLaMA/comments/16nt9yi/llamator_a_new_initiative_for_bittorrentbased_ai/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]}
|
|
Any good Kayra like model ??
| 1 |
[removed]
| 2023-09-20T18:59:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16nteq2/any_good_kayra_like_model/
|
Mohamd_L
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nteq2
| false | null |
t3_16nteq2
|
/r/LocalLLaMA/comments/16nteq2/any_good_kayra_like_model/
| false | false |
self
| 1 | null |
Neverending stream of big words fail state
| 1 |
I have a bot using the ooba completion API. It's on Nous-Hermes-Kimiko-13B with freechat/alpaca instruct formats and some guidance for various types of replies. I also generate one "thought" per minute that gets injected into the backlog invisibly, so I often have many bot responses in a row before a human one.
This works great most of the time, but for some reason it often trends toward a neverending stream of big words like this:
​
> Dear friend, thou art unduly concerned with trivial matters such as popular opinion whilst failing to perceive sublime beauty engendered by eloquent expression. Indeed, let us rejoice in dichotomy encapsulating humankind; where diametrically opposed sentiments coexist harmoniously mirroring duplexity indelibly etched upon mortal essence reflecting contrasting nature bestowing richness upon existence elevating mundaneness unto celestial planes
It seems to happen randomly, but once it gets into the chat log it will stick around for a long time, often until chat picks up enough to erase it.
I'm kind of at a loss as to how to prevent/detect this. My config settings are
```
temperature = 0.9f;
top_p = 1;
top_k = 30;
```
I've tried adding a lot of prompting reminders to stay in character, be concise, etc, but they all seem to get totally ignored once the big words start flying. And detecting this automatically and regenerating seems pretty much impossible, since it's all technically correct language.
Any ideas?
| 2023-09-20T19:29:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16nu4o9/neverending_stream_of_big_words_fail_state/
|
__SlimeQ__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nu4o9
| false | null |
t3_16nu4o9
|
/r/LocalLLaMA/comments/16nu4o9/neverending_stream_of_big_words_fail_state/
| false | false |
self
| 1 | null |
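One knob commonly suggested for runaway purple prose like the sample above is a mild repetition penalty plus a slightly tighter top-p and a hard cap on new tokens, sent alongside the existing parameters. The field names below are assumptions about the text-generation-webui completion API and may differ between versions:

```python
# Hypothetical request parameters for the text-generation-webui completion API;
# field names are assumptions and should be checked against your API version.
params = {
    "temperature": 0.9,
    "top_p": 0.9,              # slightly tighter than 1.0
    "top_k": 30,
    "repetition_penalty": 1.15,
    "max_new_tokens": 200,     # a hard cap also limits how far a runaway reply can go
    "stopping_strings": ["\n###"],
}
```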
How can LLama.cpp run Falcon models?
| 1 |
The title pretty much says it all. I don't understand why llama.cpp can run models other than Llama. I thought it was developed specifically for Llama and not other models? Could someone explain in simple terms why it works? I searched for quite some time but found no good answer.
Thanks!
| 2023-09-20T19:31:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16nu6pv/how_can_llamacpp_run_falcon_models/
|
Tctfox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nu6pv
| false | null |
t3_16nu6pv
|
/r/LocalLLaMA/comments/16nu6pv/how_can_llamacpp_run_falcon_models/
| false | false |
self
| 1 | null |
Overfitting After Just One Epoch?
| 1 |
Hi, I've been working on fine-tuning the Llama-7B model using conversation datasets, and I've encountered an interesting issue. Initially, everything seemed smooth sailing as the model was training nicely up to the 750th step. However, after that point, I noticed a gradual increase in loss, and things took a sudden turn with a significant spike at the 1200th step.
Now, I'm wondering, could this be a case of overfitting already? It's somewhat surprising to witness overfitting in a model with 7B parameters even before one epoch (50% checkpoint)
Can someone please help me understand the reason behind this and what steps should I take to continue the training and bring the loss down even further? Any tips or insights would be greatly appreciated.
| 2023-09-20T19:55:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16nurpp/overfitting_after_just_one_epoch/
|
ali0100u
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nurpp
| false | null |
t3_16nurpp
|
/r/LocalLLaMA/comments/16nurpp/overfitting_after_just_one_epoch/
| false | false |
self
| 1 | null |
Llama2 on Azure at scale
| 1 |
What does it take to run Llama2 on Azure with OpenAI compatible REST API interface at scale?
| 2023-09-20T20:50:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16nw4yd/llama2_on_azure_at_scale/
|
AstrionX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nw4yd
| false | null |
t3_16nw4yd
|
/r/LocalLLaMA/comments/16nw4yd/llama2_on_azure_at_scale/
| false | false |
self
| 1 | null |
Can we expect a finetuned Falcon 180B in the next weeks/months?
| 1 |
I am really curious about this. Maybe some of you have information about pending research?
| 2023-09-20T21:52:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16nxqoy/can_we_expect_finetuned_falcon180b_next/
|
polawiaczperel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16nxqoy
| false | null |
t3_16nxqoy
|
/r/LocalLLaMA/comments/16nxqoy/can_we_expect_finetuned_falcon180b_next/
| false | false |
self
| 1 | null |
Run models on Intel Arc via docker
| 1 |
Post showing how to use an Intel Arc card to run models using llama.cpp and FastChat
| 2023-09-21T00:56:23 |
https://reddit.com/r/IntelArc/s/JJcqXaZQ0h
|
it_lackey
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16o1zis
| false | null |
t3_16o1zis
|
/r/LocalLLaMA/comments/16o1zis/run_models_on_intel_arc_via_docker/
| false | false |
default
| 1 | null |
Services for hosting LLMs cheaply?
| 1 |
Hey all, I've been making chatbots with GPT-3 for ages now and I have just gotten into LoRA training Llama 2. I was wondering what options there are for hosting an open-source model or LoRA so I can ping it via an API and only pay for the tokens I use, not hourly.
| 2023-09-21T01:29:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16o2q1i/services_for_hosting_llms_cheaply/
|
Chance_Confection_37
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o2q1i
| false | null |
t3_16o2q1i
|
/r/LocalLLaMA/comments/16o2q1i/services_for_hosting_llms_cheaply/
| false | false |
self
| 1 | null |
I met some problems when fine-tuning llama-2-7b
| 1 |
[removed]
| 2023-09-21T02:48:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16o4f61/i_met_some_problems_when_finetuning_llama27b/
|
Ok_Award_1436
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o4f61
| false | null |
t3_16o4f61
|
/r/LocalLLaMA/comments/16o4f61/i_met_some_problems_when_finetuning_llama27b/
| false | false | 1 | null |
|
Running GGUFs on an M1 Ultra is an interesting experience coming from 4090.
| 1 |
So up until recently I've been running my models on an RTX 4090. It's been fun to get an idea of what all it can run.
Here are the speeds I've seen. I run the same test for all of the models. I ask it a single question, the same question on every test and on both platforms, and each time I remove the last reply and re-run so it has to re-evaluate.
**RTX 4090**
------------------
13b q5_K_M: **35 to 45 tokens per second** (eval speed of ~5ms per token)
13b q8: **34-40 tokens per second** (eval speed of ~6ms per token)
34b q3_K_M: **24-31 tokens per second** (eval speed of ~14ms per token)
34b q4_K_M: **2-5 tokens per second** (eval speed of ~118ms per token)
70b q2_K: **~1-2 tokens per second** (eval speed of ~220ms+ per token)
As I reach my memory cap, the speed drops significantly. If I had two 4090s then I'd likely be flying along even with the 70b q2_K.
So recently I found a great deal on a Mac Studio M1 Ultra. 128GB with 48 GPU Cores. 64 is the max GPU cores but this was the option that I had, so I got it.
At first, I was really worried, because the 13b speed was... not great. I made sure metal was running, and it was. So then I went up to a 34. Then I went up to a 70b. And the results were pretty interesting to see.
**M1 Ultra 128GB 20 core/48 gpu cores**
------------------
13b q5_K_M: **23-26 tokens per second** (eval speed of ~8ms per token)
13b q8: **26-28 tokens per second** (eval speed of ~9ms per token)
34b q3_K_M: **11-13 tokens per second** (eval speed of ~18ms per token)
34b q4_K_M: **12-15 tokens per second** (eval speed of ~16ms per token)
70b q2_K: **7-10 tokens per second** (eval speed of ~30ms per token)
70b q5_K_M: **6-9 tokens per second** (eval speed of ~41ms per token)
**Observations:**
* My GPU is maxing out. I think what's stunting my speed is the fact that I got the 48 GPU cores rather than 64. If I had gone with 64, I'd probably be seeing better tokens per second
* According to benchmarks, an equivalently built M2 would smoke this.
* The 70b 5_K_M is using 47GB of RAM. I have a total workspace of 98GB of RAM. I have a lot more room to grow. Unfortunately, I have no idea how to un-split GGUFs, so I've reached my temporary stopping point until I figure out how
* I suspect that I can run the Falcon 180b at 4+ tokens per second on a pretty decent quant
Altogether, I'm happy with the purchase. The 4090 flies like the wind on the stuff that fits in its VRAM, but the second you extend beyond that you really feel it. A second 4090 would have opened doors for me to run up to a 70b q5_K_M with really decent speed, I'd imagine, but I do feel like my M1 is going to be a tortoise and hare situation where I have even more room to grow than that, as long as I'm a little patient the bigger it gets.
Anyhow, thought I'd share with everyone. When I was buying this thing, I couldn't find a great comparison of an NVidia card to an M1, and there was a lot of FUD around the eval times on the mac so I was terrified that I would be getting a machine that regularly had 200+ms on evals, but all together it's actually running really smoothly.
I'll check in once I get the bigger ggufs unsplit.
| 2023-09-21T02:54:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16o4ka8/running_ggufs_on_an_m1_ultra_is_an_interesting/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o4ka8
| false | null |
t3_16o4ka8
|
/r/LocalLLaMA/comments/16o4ka8/running_ggufs_on_an_m1_ultra_is_an_interesting/
| false | false |
self
| 1 | null |
is there a way to "unload" model from GPU?
| 1 |
I need to run two big models at different points in my code but the only problem is that my single 4090 can't fit both at the same time. Is there a way to "unload" the first model before I load the second model?
TIA
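For illustration, a minimal sketch of one common approach with PyTorch/Hugging Face models (assuming the model sits on a CUDA device; `load_big_model` is a placeholder for however the models are actually loaded):
```
import gc
import torch

model_a = load_big_model("first-model")   # placeholder for e.g. AutoModelForCausalLM.from_pretrained(...)
# ... run the first stage of the pipeline with model_a ...

# Drop every Python reference, collect garbage, then release the cached CUDA blocks
del model_a
gc.collect()
torch.cuda.empty_cache()

model_b = load_big_model("second-model")  # the 4090's VRAM is free again at this point
```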
| 2023-09-21T04:17:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16o65i4/is_there_a_way_to_unload_model_from_gpu/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o65i4
| false | null |
t3_16o65i4
|
/r/LocalLLaMA/comments/16o65i4/is_there_a_way_to_unload_model_from_gpu/
| false | false |
self
| 1 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution
| 1 | 2023-09-21T04:19:40 |
https://github.com/Nondzu/LlamaTor
|
Nondzu
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16o6722
| false | null |
t3_16o6722
|
/r/LocalLLaMA/comments/16o6722/llamator_a_new_initiative_for_bittorrentbased_ai/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '4Ti7LBcqK4zG3xQ-jpoUV9YVD5avMZXqOIIXMN15n_c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=108&crop=smart&auto=webp&s=205b78be5aafd5b63b41eb5a98baad2fc9251fa7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=216&crop=smart&auto=webp&s=858af846dcc795dd9dc7996893c7393a7648d796', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=320&crop=smart&auto=webp&s=7b6827d2275f90fc04a17ad97639610a445567bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=640&crop=smart&auto=webp&s=880cd6adcc9d137245784319135af4eb8dcc5885', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=960&crop=smart&auto=webp&s=495b81e52867c7227efb8dde1caf3d86d1322977', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=1080&crop=smart&auto=webp&s=061337bced62ffe47a95088caf32d8bb53c34360', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?auto=webp&s=eb403d934d22a678104036d9ca4cf4403c4b1cf7', 'width': 1200}, 'variants': {}}]}
|
||
How big of a model can this handle?
| 1 | https://preview.redd.it/jlf8jm6lijpb1.png?width=1272&format=png&auto=webp&s=5f367256c16960f022caa0eebab6f89e673b3b37 | 2023-09-21T04:58:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16o6vl7/how_big_of_an_model_can_this_handle/
|
multiverse_fan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o6vl7
| false | null |
t3_16o6vl7
|
/r/LocalLLaMA/comments/16o6vl7/how_big_of_an_model_can_this_handle/
| false | false | 1 | null |
|
How does Google load so much user metadata in ChatGPT context?
| 1 |
[removed]
| 2023-09-21T05:31:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16o7gll/how_does_google_load_so_much_user_metadata_in/
|
Opposite-Payment-605
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o7gll
| false | null |
t3_16o7gll
|
/r/LocalLLaMA/comments/16o7gll/how_does_google_load_so_much_user_metadata_in/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'RjY-KXW7sE2ooM5k9QqqjjyOISkl_w_W8mObSdK3r_Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=108&crop=smart&auto=webp&s=07d93e7fe0c605b69ac75a8527fb6133046864f0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=216&crop=smart&auto=webp&s=2d0e00904d70fc5e6e2bcc6ca0bc847b2cfaa6ff', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=320&crop=smart&auto=webp&s=d2c42b51b636e17e5203c89a3fc311d1f9107680', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=640&crop=smart&auto=webp&s=139fa0ebd57e74dbb55c2a4c28f13912d852151b', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=960&crop=smart&auto=webp&s=bd1bc8639bce19cfdfc069f0e076182fed15a8fd', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=1080&crop=smart&auto=webp&s=d5e59f4404a15673cf65c32aa2868535d73af9d2', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?auto=webp&s=40424828d8e899a9a58a0d8687d53a5db7e96bbb', 'width': 1200}, 'variants': {}}]}
|
P40-motherboard compatibility
| 1 |
Can you please share what motherboard you use with your P40 GPU? Some say a consumer-grade motherboard BIOS may not support this GPU.
Thanks in advance
| 2023-09-21T05:59:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16o7y3b/p40motherboard_compatibility/
|
Better_Dress_8508
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16o7y3b
| false | null |
t3_16o7y3b
|
/r/LocalLLaMA/comments/16o7y3b/p40motherboard_compatibility/
| false | false |
self
| 1 | null |
[OC] Chasm - a multiplayer generative text adventure game
| 1 | 2023-09-21T08:20:29 |
https://github.com/atisharma/chasm_engine
|
_supert_
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16oa9dw
| false | null |
t3_16oa9dw
|
/r/LocalLLaMA/comments/16oa9dw/oc_chasm_a_multiplayer_generative_text_adventure/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'eDUZqxduh4C3X3h3h7eXaooxj4r0Tz0c8aUHiADl_-4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=108&crop=smart&auto=webp&s=1572320495af5d997e40a2815ad4684b0cec7bb3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=216&crop=smart&auto=webp&s=b505bec4ab3667d2ca87b35e1ff7414fa8dd1161', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=320&crop=smart&auto=webp&s=a8593d49f364f815d7f2215e0d4f8d8f95b47e30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=640&crop=smart&auto=webp&s=6877dc8004b5a187b90795f0d1253bd1db72f201', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=960&crop=smart&auto=webp&s=f3d0000f7a0cccda71666287f04268609cc55771', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=1080&crop=smart&auto=webp&s=1553c0c5a5b774ed2bd1a724dc9070c72bc1fb5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?auto=webp&s=1507096aa39f94d8b6ec15b818a5f452bfc06174', 'width': 1200}, 'variants': {}}]}
|
||
How do I find out the context size of a model?
| 9 |
How do I tell, for example, the context size of [this](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF) model? Is it 4k or 8k?
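For what it's worth, a rough way to check is to read the config of the base repo the quant was made from; the sketch below assumes the model is a standard Llama-2-13B derivative (so 4096 context unless the model card says it was extended):
```
from transformers import AutoConfig

# max_position_embeddings is the trained context length for Llama-family models
cfg = AutoConfig.from_pretrained("meta-llama/Llama-2-13b-hf")
print(cfg.max_position_embeddings)  # 4096 for plain Llama 2
```
For the GGUF file itself, llama.cpp also reports the trained context (the `n_ctx_train` line) in its load log.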
| 2023-09-21T08:28:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16oae0h/how_do_i_find_out_the_context_size_of_a_model/
|
Barafu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16oae0h
| false | null |
t3_16oae0h
|
/r/LocalLLaMA/comments/16oae0h/how_do_i_find_out_the_context_size_of_a_model/
| false | false |
self
| 9 |
{'enabled': False, 'images': [{'id': 'mnnGoQzMpoArWvahEXtgVgzM9UH9DCGdkx1-9iIJric', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=108&crop=smart&auto=webp&s=4ed8d4fc05ddea4fb9e9e780c8bd7e37c2157725', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=216&crop=smart&auto=webp&s=9d3f57b20da22a5d11ea87907026cd90de8910b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=320&crop=smart&auto=webp&s=afb84e1f1c5206ad63246cfb84a2bb234db98d51', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=640&crop=smart&auto=webp&s=50aeefbe539a1e2f13d3032796a0408317d450ac', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=960&crop=smart&auto=webp&s=76fe2ce022b1b7fd427230a0ce9e651cebbcf065', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=1080&crop=smart&auto=webp&s=6a3841da2cebe38a98dcbe7274bcb021aecbdffe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?auto=webp&s=368dfb8eea0a535a8437b4ed7d855904e99aaaca', 'width': 1200}, 'variants': {}}]}
|
Falcon7b instruct finetuning, is this the correct graph? cyclic nature seems suspicious.
| 1 | 2023-09-21T10:28:25 |
Anu_Rag9704
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16ocb6l
| false | null |
t3_16ocb6l
|
/r/LocalLLaMA/comments/16ocb6l/falcon7b_instruct_finetuning_is_this_the_correct/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'YejktDeVB38Yt3XPYZbA0UPSri6XZIogiH5dJEMiG3o', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=108&crop=smart&auto=webp&s=de6709d24732a2778ed923a054f72b4b9f726b16', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=216&crop=smart&auto=webp&s=23f22b8df7a690e1981d7f55c182944f8f4845a4', 'width': 216}, {'height': 299, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=320&crop=smart&auto=webp&s=c33781b761c1d32643f2d71fc1b9a01d573d62cc', 'width': 320}, {'height': 599, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=640&crop=smart&auto=webp&s=6937a5aaa5367909f9bf5f2a793fc8ea461f4db5', 'width': 640}], 'source': {'height': 644, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?auto=webp&s=73a0746b48a20e605617f8b4d108af9a74a90209', 'width': 688}, 'variants': {}}]}
|
|||
What happens to all those H100?
| 1 |
So Nvidia plans to [ship 500k H100](https://www.businessinsider.com/nvidia-triple-production-h100-chips-ai-drives-demand-2023-8) GPUs this year (and triple that in 2024). GPT-4 was trained on 25k A100s in ~3 months.
So will we have many GPT-4-class models soon? What do you guys reckon the share of H100s used to train LLMs is? Are most of them private? What are the other big applications?
I would think that something like 10% of those cards (50k) are bought by Meta and will be used for the new LLaMAs. Interesting times.
| 2023-09-21T11:05:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16ocy8y/what_happens_to_all_those_h100/
|
Scarmentado
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ocy8y
| false | null |
t3_16ocy8y
|
/r/LocalLLaMA/comments/16ocy8y/what_happens_to_all_those_h100/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9VvMnwSOAz8Js1oEGSnrr-ZVVOdVzYjURsqXmAAJtNA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=108&crop=smart&auto=webp&s=8e1cd7767b5cc7a05b46c440ba6e0cc4fe92a40c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=216&crop=smart&auto=webp&s=71cd7785a8d74b89aae60611f32a1732aa71eb35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=320&crop=smart&auto=webp&s=7722382a9ae99d2b2c3d12b0e45732ae46f13628', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=640&crop=smart&auto=webp&s=61d4ebd2b81dc7301fb1a3ed4bdb25b4b303dd72', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=960&crop=smart&auto=webp&s=0e804d322458403c7dac4bc0644e6383f78c3fdc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=1080&crop=smart&auto=webp&s=fd869f0ca372a6e7b9c8a00a401855566601f67a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?auto=webp&s=3e405004891d2a8ce5644e596ca901fd09ea4ae9', 'width': 1200}, 'variants': {}}]}
|
During the inference of a fine-tuned model, which tokenizer to use? from the base model or from fine tuned adapter? (falcon and llama)
| 1 |
During the inference of a fine-tuned model, which tokenizer to use? from the base model or from fine tuned adapter? (falcon and llama)
| 2023-09-21T11:05:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16ocyew/during_the_inference_of_a_finetuned_model_which/
|
Anu_Rag9704
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ocyew
| false | null |
t3_16ocyew
|
/r/LocalLLaMA/comments/16ocyew/during_the_inference_of_a_finetuned_model_which/
| false | false |
self
| 1 | null |
How do you do hyper-parameter tuning when fine-tuning a model?
| 1 |
lora_alpha = 16
lora_dropout = 0.1
lora_r = 64
per_device_train_batch_size = 5
gradient_accumulation_steps = 5
optim = "paged_adamw_32bit"
save_steps = 10
logging_steps = 5
learning_rate = 1e-4
max_grad_norm = 0.6
max_steps = 30
warmup_ratio = 0.2
lr_scheduler_type = "constant"
​
Specifically, how would you go about tuning the above parameters?
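For illustration, one low-tech option is a small grid search over the parameters that matter most (learning rate, lora_r, lora_alpha) while keeping the rest fixed. A minimal sketch, assuming a hypothetical `train_and_eval` helper that wraps your existing PEFT/Trainer pipeline and returns a validation loss:
```
from itertools import product

learning_rates = [5e-5, 1e-4, 2e-4]
lora_rs = [16, 32, 64]

results = {}
for lr, r in product(learning_rates, lora_rs):
    # train_and_eval is a placeholder for your existing fine-tuning code
    val_loss = train_and_eval(learning_rate=lr, lora_r=r, lora_alpha=2 * r)
    results[(lr, r)] = val_loss

best = min(results, key=results.get)
print("best (lr, lora_r):", best, "val loss:", results[best])
```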
| 2023-09-21T11:08:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16od08f/how_you_do_hyperparameter_tuning_in_finetuning_of/
|
Anu_Rag9704
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16od08f
| false | null |
t3_16od08f
|
/r/LocalLLaMA/comments/16od08f/how_you_do_hyperparameter_tuning_in_finetuning_of/
| false | false |
self
| 1 | null |
New to LLMs: Seeking Feedback on Feasibility of Hierarchical Multi-Label Text Classification for Large Web Archives
| 1 |
Hello, I'm new to LLMs. I would like some critique as I may not have a proper understanding of LLMs.
​
I would like to assess the possibility of performing text classification on extensive web archive files.
Specifically, I'd like the LLM to categorize the text into predefined categories **rather than providing out of the box/generic responses.**
​
​
I possess training data with abstracts/target categories.
There are > 1000 target classes.
Each text can have up to 10 target labels.
The target classes are a taxonomy and have a hierarchical structure.
Thus, a hierarchical multi-label text classification problem.
​
The general plan I had was:
1) Given the large size of the web archive files, I intend to use ChatGPT or other LLMs to generate an abstract or produce embeddings as a condensed representation.
2) Considering the high number of target classes, I plan to create embeddings for these as well
3) I will use Retrieval Augmented Generation and prompt the LLM to pick up to 10 labels using the embeddings from both the text and the target classes.
4) Finally, I'll fine-tune the model using the training data to see if that increases the accuracy.
​
I am wondering if this workflow makes sense and is even feasible.
Thank you very much!
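For illustration, a rough sketch of steps 2-3 (embed the taxonomy labels once, retrieve the closest candidates per document, and only pass those to the LLM); `embed`, `taxonomy_labels` and `doc_abstract` are placeholders:
```
import numpy as np

def cosine_top_k(doc_vec, label_vecs, k=20):
    # label_vecs: (num_labels, dim) matrix of pre-computed label embeddings
    sims = label_vecs @ doc_vec / (
        np.linalg.norm(label_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-9
    )
    return np.argsort(-sims)[:k]

label_vecs = np.stack([embed(label) for label in taxonomy_labels])  # embed() is hypothetical
candidate_ids = cosine_top_k(embed(doc_abstract), label_vecs)
candidates = [taxonomy_labels[i] for i in candidate_ids]
# the short candidate list goes into the prompt instead of all 1000+ classes
```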
​
| 2023-09-21T11:25:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16odb9s/new_to_llms_seeking_feedback_on_feasibility_of/
|
ReversingEntropy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16odb9s
| false | null |
t3_16odb9s
|
/r/LocalLLaMA/comments/16odb9s/new_to_llms_seeking_feedback_on_feasibility_of/
| false | false |
self
| 1 | null |
x2 4090 vs x1 A6000
| 1 |
[removed]
| 2023-09-21T12:55:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16of81y/x2_4090_vs_x1_a6000/
|
dhammo2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16of81y
| false | null |
t3_16of81y
|
/r/LocalLLaMA/comments/16of81y/x2_4090_vs_x1_a6000/
| false | false |
self
| 1 | null |
AI Makerspace event
| 1 |
At 2:00 PM EST/11:00 AM PST watch [**AI Makerspace**](https://www.linkedin.com/company/99895334/admin/feed/posts/#) present "Smart RAG" and see Arcee's approach to e2e RAG including a LIVE DEMO.
Event Title: Smart RAG: Domain-specific fine-tuning for end-to-end applications
Event page: https://lu.ma/smartRAG
| 2023-09-21T13:38:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16og6ol/ai_makerspace_event/
|
benedict_eggs17
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16og6ol
| false | null |
t3_16og6ol
|
/r/LocalLLaMA/comments/16og6ol/ai_makerspace_event/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/5ZI7oL3JTPPt59G0vTfOaQMHvka17QCAdFnF87leUeA.jpg?auto=webp&s=52cc36e047bdca039326e84d3b7ce7aabaf12be6', 'width': 64}, 'variants': {}}]}
|
Inference - how to always select the most likely next token?
| 1 |
So I have a nice model. Our use case requires that we provide the most likely outcome, i.e. our use case is not creative but should have high accuracy to hit exactly what we want.
How can we set top_p, top_k and temperature in such a way that we always get the most likely next token?
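If you happen to be on the Hugging Face `generate` API, the cleanest way is to disable sampling entirely (greedy decoding), at which point temperature/top_p/top_k are ignored. A minimal sketch, assuming `model` and `input_ids` already exist:
```
# Greedy decoding: the argmax token is chosen at every step, deterministically
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,   # turns off temperature / top_p / top_k sampling
)
```
Backends that only expose sampler knobs can approximate the same thing with top_k = 1 (or a temperature very close to 0) and top_p = 1.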
| 2023-09-21T13:49:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16ogfqo/inference_how_to_always_select_the_most_likely/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ogfqo
| false | null |
t3_16ogfqo
|
/r/LocalLLaMA/comments/16ogfqo/inference_how_to_always_select_the_most_likely/
| false | false |
self
| 1 | null |
Looking for fine-tuners who want to build an exciting new model -
| 1 |
Someone recently placed the entirety of PyPI on Hugging Face, [see the tweet here](https://twitter.com/VikParuchuri/status/1704670850694451661). This is actually very cool.
​
The timing is great, because yesterday I introduced RAG into the synthetic generation pipeline [here](https://github.com/emrgnt-cmplxty/sciphi/tree/main). I'm in the process of indexing the entirety of this PyPI dataset using ChromaDB in the cloud. It will be relatively easy to plug this into SciPhi when done.
I believe this provides an opportunity to construct a valuable fine-tuning dataset. We can generate synthetic queries and get live, up-to-date responses using the latest and greatest Python packages. I'm prepared to work on the dataset creation. Is anyone interested in collaborating on the fine-tuning aspect?
Additionally, for the fine-tuned model, I'm curious whether it should be formatted to query the RAG oracle directly or whether it should be structured in a question-and-answer format.
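For anyone curious what the indexing side can look like, a minimal ChromaDB sketch (collection name and documents are purely illustrative; the real pipeline lives in the SciPhi repo linked above):
```
import chromadb

client = chromadb.Client()  # in-memory client; swap in a persistent/hosted client for the full dataset
collection = client.create_collection("pypi-packages")
collection.add(
    ids=["requests-readme"],
    documents=["Requests is a simple, yet elegant, HTTP library for Python..."],
    metadatas=[{"package": "requests"}],
)
hits = collection.query(query_texts=["how do I send a POST request?"], n_results=5)
```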
| 2023-09-21T14:02:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16ogr0j/looking_for_finetuners_who_want_to_build_an/
|
docsoc1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ogr0j
| false | null |
t3_16ogr0j
|
/r/LocalLLaMA/comments/16ogr0j/looking_for_finetuners_who_want_to_build_an/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2BRF62XzG-oU1DEV0DDwwsxrFrVgEasL3xFoqB5C1X0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zTgcdJWy6XTVnLdTTRn-oSQIkebVpEXcQl80Pfsp9uI.jpg?width=108&crop=smart&auto=webp&s=758a5480f4dd875662a10d131455aa26a0d3e6d0', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/zTgcdJWy6XTVnLdTTRn-oSQIkebVpEXcQl80Pfsp9uI.jpg?auto=webp&s=eb3f73f2c44d497477b639b5dffe0c51eecbf844', 'width': 140}, 'variants': {}}]}
|
Train Llama2 in RAG Context End to End
| 1 | 2023-09-21T14:33:14 |
https://colab.research.google.com/drive/1QOOasOUJ2RR6owcJ9_6ccZLqu1Y06HWo
|
jacobsolawetz
|
colab.research.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ohh4h
| false | null |
t3_16ohh4h
|
/r/LocalLLaMA/comments/16ohh4h/train_llama2_in_rag_context_end_to_end/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
|
||
LLM Deployment
| 1 |
Hi, I filled out the form to get access to Llama 2 and it's been a week with no reply. Is there any other model I can access that performs as well as Llama (if not better) and is not a gated repo? I know that CodeLlama is not gated and OpenLLaMA is similar, but not in a true sense. So which LLM should I choose to deploy? Some articles related to that model would also be very helpful.
| 2023-09-21T14:38:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16ohlhj/lll_deployment/
|
Old_Celebration1945
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ohlhj
| false | null |
t3_16ohlhj
|
/r/LocalLLaMA/comments/16ohlhj/lll_deployment/
| false | false |
self
| 1 | null |
Help with low VRAM Usage
| 1 |
I’ve recently got a RTX 3090, and I decided to run Llama 2 7b 8bit. When using it my gpu usage stays at around 30% and my VRAM stays at about 50%. I know this wouldn’t be bad for a video game, but im ignorant of the implications when it comes to a LLMs. Also despite the upgrade the t/s hasn’t increased much at all. Staying at around 4-6 t/s. If any insight can be given I’d greatly appreciate it.
| 2023-09-21T14:51:19 |
https://www.reddit.com/r/LocalLLaMA/comments/16ohx4r/help_with_low_vram_usage/
|
marv34001
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ohx4r
| false | null |
t3_16ohx4r
|
/r/LocalLLaMA/comments/16ohx4r/help_with_low_vram_usage/
| false | false |
self
| 1 | null |
Discouraging first experience - likely I am doing something wrong?
| 1 |
I installed GPT4All and Llama2. First impressions:
* Doesn't stick to German. Prompted it to answer everything in German but it fails miserably.
* LocalDocs Collection is enabled, with German text files. It seldom picks content from the local collection; answers are in English and very unspecific, even though the content is almost question-answer.
Is this to expect from Llama2-7B q4b?
| 2023-09-21T15:26:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16oit10/disencouraging_first_experience_likely_i_am_doing/
|
JohnDoe365
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16oit10
| false | null |
t3_16oit10
|
/r/LocalLLaMA/comments/16oit10/disencouraging_first_experience_likely_i_am_doing/
| false | false |
self
| 1 | null |
My crackpot theory of what Llama 3 could be
| 1 |
[removed]
| 2023-09-21T15:59:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16ojmi8/my_crackpot_theory_of_what_llama_3_could_be/
|
hapliniste
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ojmi8
| false | null |
t3_16ojmi8
|
/r/LocalLLaMA/comments/16ojmi8/my_crackpot_theory_of_what_llama_3_could_be/
| false | false |
self
| 1 | null |
Back into LLMs, what are the best LLMs now for 12GB VRAM GPU?
| 1 |
My GPU was pretty much always busy with AI art, but now that I bought a better new one, I have a 12GB card sitting in a computer built mostly from used spare parts ready for use.
What are now the best 12GB VRAM runnable LLMs for:
* programming
* general chat
* nsfw sensual chat
Thanks
| 2023-09-21T16:17:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16ok2wx/back_into_llms_what_are_the_best_llms_now_for/
|
ptitrainvaloin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ok2wx
| false | null |
t3_16ok2wx
|
/r/LocalLLaMA/comments/16ok2wx/back_into_llms_what_are_the_best_llms_now_for/
| false | false |
self
| 1 | null |
BlindChat: Fully in-browser and private Conversational AI with Transformers.js for local inference
| 1 | 2023-09-21T16:57:42 |
https://huggingface.co/spaces/mithril-security/blind_chat
|
Separate-Still3770
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
16ol132
| false | null |
t3_16ol132
|
/r/LocalLLaMA/comments/16ol132/blindchat_fully_inbrowser_and_private/
| false | false |
default
| 1 | null |
|
Looking for an Energy-Efficient NUC like or Single-Board computer to Run llama models
| 1 |
Hi everyone! Do any of you guys have recommendations for an energy-efficient NUC-like or single-board computer capable of running Llama models like 7B and 13B (or even larger ones, if that's even possible) with decent speed?
Thanks in advance
| 2023-09-21T16:59:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16ol2el/looking_for_an_energyefficient_nuc_like_or/
|
the_Loke
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ol2el
| false | null |
t3_16ol2el
|
/r/LocalLLaMA/comments/16ol2el/looking_for_an_energyefficient_nuc_like_or/
| false | false |
self
| 1 | null |
Worthwhile to use 6GB 1660S with 24GB 3090?
| 1 |
I'm completely new to this, never ran Llama before at all so I'm starting from ground zero.
I've built a server with these specs:
CPUs: 2x Xeon E5-2699v4 (combined 44 cores/88 threads)
RAM: 512GB DDR4 2400
GPU: 24GB RTX 3090
SSD: 4x 2TB NVMe, configured in ZFS Mirrored 2x VDEVs
OS: Unraid
I'm wanting to use it for Llama (as well as some other stuff like Minecraft), for fun and learning and to teach my kids as well (teenagers).
I have a spare GTX 1660 Super 6GB vram - should I use that in the server as well? Or am I better off just using the 3090 by itself?
Is there some "optimal" model I should try to run that could make the best use of the hardware I have?
Thank you in advance!
| 2023-09-21T17:00:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16ol36s/worthwhile_to_use_6gb_1660s_with_24gb_3090/
|
bornsupercharged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ol36s
| false | null |
t3_16ol36s
|
/r/LocalLLaMA/comments/16ol36s/worthwhile_to_use_6gb_1660s_with_24gb_3090/
| false | false |
self
| 1 | null |
Getting completely random stuff with LlamaCpp when using the llama-2-7b.Q4_K_M.gguf model
| 1 |
Hello, first I must say: I'm completely new to this. Today I tried the Llama 7B model with this code
from langchain.llms import LlamaCpp
model = LlamaCpp(
verbose=False,
model_path="./model/llama-2-7b.Q4_K_M.gguf",
)
o = model("Hi how are you?")
print(o)
It returns a long, seemingly random message that basically just extends what I typed in.
The return is:
> I was asked by a friend to take over as leader of the local youth club. everybody else has moved away or is busy, so who's going to run it?
>
>A: That's a difficult task.
>
>B: It was a really successful club, but nobody wants to do it any more. We need someone with experience and commitment.
>
>A: Yes, I can understand that.
>
>B: But what I don't know is whether you have the necessary experience or commitment.
​
I tried using a better prompt, but it basically always happened that the model would just talk with itself, autocomplete what I typed, and fantasize.
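For what it's worth, `llama-2-7b.Q4_K_M.gguf` is the base (completion-only) model, so continuing your text is its expected behaviour. A rough sketch of the instruction template used by the chat-tuned variant, assuming you swap in a llama-2-7b-chat GGUF (the filename below is illustrative):
```
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="./model/llama-2-7b-chat.Q4_K_M.gguf", verbose=False)

prompt = (
    "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
    "Hi, how are you? [/INST]"
)
print(llm(prompt))
```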
​
| 2023-09-21T17:13:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16oldrp/getting_completely_random_stuff_with_llamacpp/
|
DasEvoli
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16oldrp
| false | null |
t3_16oldrp
|
/r/LocalLLaMA/comments/16oldrp/getting_completely_random_stuff_with_llamacpp/
| false | false |
self
| 1 | null |
Optimal KoboldCPP settings for Ryzen 7 5700u?
| 1 |
I'm running a 13B q5_K_M model on a laptop with a Ryzen 7 5700U and 16GB of RAM (no dedicated GPU), and I wanted to ask how I can maximize my performance.
I set the following settings in my koboldcpp config:
CLBlast with 4 layers offloaded to iGPU
9 Threads
9 BLAS Threads
1024 BLAS batch size
High Priority
Use mlock
Disable mmap
Smartcontext enabled
Are these settings optimal, or is there a better way I can be doing this?
| 2023-09-21T18:05:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16omqku/optimal_koboldcpp_settings_for_ryzen_7_5700u/
|
Amazing-Moment953
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16omqku
| false | null |
t3_16omqku
|
/r/LocalLLaMA/comments/16omqku/optimal_koboldcpp_settings_for_ryzen_7_5700u/
| false | false |
self
| 1 | null |
Blind Chat - OS privacy-first ChatGPT alternative, running fully in-browser
| 1 |
Blind Chat is an Open Source UI (powered by chat-ui) that runs the model directly in your browser and performs inference locally using transformers.js. No data ever leaves your device. The current version uses a Flan T5-based model, but could potentially be replaced with other models.
Tweet: [https://twitter.com/xenovacom/status/1704910846986682581](https://twitter.com/xenovacom/status/1704910846986682581)
Demo: [https://huggingface.co/spaces/mithril-security/blind\_chat](https://huggingface.co/spaces/mithril-security/blind_chat)
| 2023-09-21T18:29:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16onc37/blind_chat_os_privacyfirst_chatgpt_alternative/
|
hackerllama
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16onc37
| false | null |
t3_16onc37
|
/r/LocalLLaMA/comments/16onc37/blind_chat_os_privacyfirst_chatgpt_alternative/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '7KkrYKC6F2B9dvzycyXlmNk1M_EO3_51gcimAnT9jdY', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/OdYrgdh43B0a4VnPqjW7y8k-5RpaIY1aAWZCevWm0ZQ.jpg?width=108&crop=smart&auto=webp&s=ad27ff897007db51896e8ca9c6c28589e1cd27e1', 'width': 108}], 'source': {'height': 74, 'url': 'https://external-preview.redd.it/OdYrgdh43B0a4VnPqjW7y8k-5RpaIY1aAWZCevWm0ZQ.jpg?auto=webp&s=c2042ded6c5f8a48cbf78cea52673803b584824c', 'width': 140}, 'variants': {}}]}
|
Airoboros 70b talking to itself
| 1 |
[removed]
| 2023-09-21T21:07:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16oreiw/airoboros_70b_talking_to_itself/
|
NickDifuze
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16oreiw
| false | null |
t3_16oreiw
|
/r/LocalLLaMA/comments/16oreiw/airoboros_70b_talking_to_itself/
| false | false |
self
| 1 | null |
ETL/pre-processing options besides Unstructured.io
| 1 |
Have folks found other tools comparable to [Unstructured.io](https://Unstructured.io)? I'm trying to understand the space better, and I'm not sure which options for enterprise data pre-processing (to support RAG) are most similar (vs more limited/DIY solutions).
| 2023-09-21T21:36:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16os5y2/etlpreprocessing_options_besides_unstructuredio/
|
kanzeon88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16os5y2
| false | null |
t3_16os5y2
|
/r/LocalLLaMA/comments/16os5y2/etlpreprocessing_options_besides_unstructuredio/
| false | false |
self
| 1 | null |
Xwin-LM - New Alpaca Eval Leader - ExLlamaV2 quants
| 1 |
https://github.com/Xwin-LM/Xwin-LM
https://tatsu-lab.github.io/alpaca_eval/
I know it's not a great metric, but top of the board has to count for something, right? I quantized the 7B model to 8bpw exl2 format last night, and the model is a nicely verbose and coherent storyteller.
https://huggingface.co/UnstableLlama/Xwin-LM-7B-V0.1-8bpw-exl2
I have slow upload speed but will be uploading more BPW sizes for the 7b over the next day or two, along with trying my hand at quantizing the 13b model tonight.
| 2023-09-21T21:38:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16os70e/xwinlm_new_alpaca_eval_leader_exllamav2_quants/
|
Unstable_Llama
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16os70e
| false | null |
t3_16os70e
|
/r/LocalLLaMA/comments/16os70e/xwinlm_new_alpaca_eval_leader_exllamav2_quants/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ToKi1elIgPHnDTp37xLtb-ady-K38rPnKWR2kTpHN0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=108&crop=smart&auto=webp&s=6fa20eb11f871b8596d8e11d60502925242ddfab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=216&crop=smart&auto=webp&s=78636232d081c081ff3b0bc8be42868e5c960b6f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=320&crop=smart&auto=webp&s=dfe56f2b07bcab0734cdfeae339ca3a6ac368a2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=640&crop=smart&auto=webp&s=3d0924d81f11d8905f625243c3e524652ee78f35', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=960&crop=smart&auto=webp&s=0e21b0de4c361c397e9843a2d0827872daaabd11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=1080&crop=smart&auto=webp&s=d1b0db52ee2cd698d96c0101e76882fdc0885688', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?auto=webp&s=f18ca6b8d30d326f942ffa7995f2c239a32fc6f3', 'width': 1200}, 'variants': {}}]}
|
GGUF models load but always return 0 tokens.
| 1 |
Has anyone seen this before? Any troubleshooting tips are appreciated.
I'm on Ubuntu 22.04 using webui and the llama.cpp loader.
The GGUF models load without error, but then on generation I see:
Traceback (most recent call last):
File "/home/g/AI/text/oobabooga_linux/text-generation-webui/modules/callbacks.py", line 56, in gentask
ret = self.mfunc(callback=_callback, *args, **self.kwargs)
File "/home/g/AI/text/oobabooga_linux/text-generation-webui/modules/llamacpp_model.py", line 119, in generate
prompt = self.decode(prompt).decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'. Did you mean: 'encode'?
Output generated in 0.22 seconds (0.00 tokens/s, 0 tokens, context 72, seed 1027730307)
Some info after model load:
llm_load_tensors: ggml ctx size = 0.12 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 5322.06 MB (+ 3200.00 MB per state)
llm_load_tensors: offloading 10 repeating layers to GPU
llm_load_tensors: offloaded 10/43 layers to GPU
llm_load_tensors: VRAM used: 1702 MB
...................................................................llama_new_context_with_model: kv self size = 3200.00 MB
llama_new_context_with_model: compute buffer total size = 351.47 MB
llama_new_context_with_model: VRAM scratch buffer: 350.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
2023-09-21 17:38:35 INFO:Loaded the model in 0.79 seconds.
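The traceback itself is a plain Python 3 type mix-up, which a tiny snippet makes clear (this only illustrates the error; it is not a patch for the loader):
```
# In Python 3, bytes has .decode() and str has .encode()
b"hello".decode("utf-8")       # ok: bytes -> str
"hello".encode("utf-8")        # ok: str -> bytes
try:
    "hello".decode("utf-8")    # what the loader does with an already-decoded prompt
except AttributeError as e:
    print(e)                   # 'str' object has no attribute 'decode'
```
So the model file loads fine; the loader code is being handed a `str` where it expects `bytes`.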
| 2023-09-21T21:51:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16osj0l/gguf_models_load_but_always_return_0_tokens/
|
compendium
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16osj0l
| false | null |
t3_16osj0l
|
/r/LocalLLaMA/comments/16osj0l/gguf_models_load_but_always_return_0_tokens/
| false | false |
self
| 1 | null |
Don't sleep on Xwin-LM-70B-V0.1 for roleplay
| 1 |
I've been testing out some of the more recent Llama2 70b models, and I have to say Xwin-LM-70B-V0.1 might be my new favorite. It will do NSFW just fine and seems to "follow the script" better than any other model I've tested recently. Although I haven't tested it personally, I imagine the 13b version is worth a look too.
Try the 70b version with a higher repetition penalty. I've had good results in the 1.2 - 1.25 range, otherwise it does tend to repeat itself.
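For reference, a minimal sketch of where that knob sits if the model is run through `llama-cpp-python` (the GGUF filename is illustrative; other backends expose the same idea as `repetition_penalty`):
```
from llama_cpp import Llama

llm = Llama(model_path="xwin-lm-70b-v0.1.Q4_K_M.gguf")
out = llm(
    "Your roleplay prompt here...",
    max_tokens=300,
    repeat_penalty=1.2,   # llama.cpp's name for the repetition penalty
)
```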
| 2023-09-21T22:51:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16ou180/dont_sleep_on_xwinlm70bv01_for_roleplay/
|
sophosympatheia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ou180
| false | null |
t3_16ou180
|
/r/LocalLLaMA/comments/16ou180/dont_sleep_on_xwinlm70bv01_for_roleplay/
| false | false |
self
| 1 | null |
I want an extreme upgrade for my PC's motherboard in order to fit 256GB RAM. Which motherboard is extreme and reliable enough for this?
| 1 |
I can currently max out my RAM at 128GB but I'm not satisfied with that. I want more. I wanna make sure LLAmA 2 is my bitch when it comes to RAM (VRAM will be a different story). I also wanna go ahead and upgrade my processor to an i9 while I'm at it later.
What do you guys recommend when it comes to such an extreme motherboard? If I have to shell out $1000 for it I am willing to do just that but I've never replaced a motherboard before so I'm worried about PSU requirements and cooling plus any extra steps to take in order to max out the RAM.
What do you guys recommend?
| 2023-09-21T23:34:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16ov1cq/i_want_an_extreme_upgrade_for_my_pcs_motherboard/
|
swagonflyyyy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ov1cq
| false | null |
t3_16ov1cq
|
/r/LocalLLaMA/comments/16ov1cq/i_want_an_extreme_upgrade_for_my_pcs_motherboard/
| false | false |
self
| 1 | null |
I want an extreme upgrade for my PC's motherboard in order to fit 256GB RAM. Which motherboard is extreme and reliable enough for this?
| 1 |
I can currently max out my RAM at 128GB but I'm not satisfied with that. I want more. I wanna make sure LLAmA 2 is my bitch when it comes to RAM (VRAM will be a different story). I also wanna go ahead and upgrade my processor to an i9 while I'm at it later.
What do you guys recommend when it comes to such an extreme motherboard? If I have to shell out $1000 for it I am willing to do just that but I've never replaced a motherboard before so I'm worried about PSU requirements and cooling plus any extra steps to take in order to max out the RAM.
What do you guys recommend?
| 2023-09-21T23:34:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16ov1cz/i_want_an_extreme_upgrade_for_my_pcs_motherboard/
|
swagonflyyyy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ov1cz
| false | null |
t3_16ov1cz
|
/r/LocalLLaMA/comments/16ov1cz/i_want_an_extreme_upgrade_for_my_pcs_motherboard/
| false | false |
self
| 1 | null |
GitHub - himanshu662000/InfoGPT: Introducing My New Chatbot: Your Document Answering Assistant 🚀
| 1 | 2023-09-22T00:09:31 |
https://github.com/himanshu662000/InfoGPT
|
AwayConsideration855
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ovt3c
| false | null |
t3_16ovt3c
|
/r/LocalLLaMA/comments/16ovt3c/github_himanshu662000infogpt_introducing_my_new/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'fD3iVruPSl7_Fukg7UwhWncDZ8pp6pnrTMVtQuwKuqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?width=108&crop=smart&auto=webp&s=dcc63d1ac5a227f66ed68a2c685ce77e3a715150', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?width=216&crop=smart&auto=webp&s=f50b81cd9f486d49075d7bf623067a3853ae12bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?width=320&crop=smart&auto=webp&s=94c2b6cf1a61e1661daf578a21e7710aaa51bb02', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?width=640&crop=smart&auto=webp&s=dd6bfe5ddcf1fd1f0538faf282484c8f1cd20e74', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?width=960&crop=smart&auto=webp&s=b1c9d75d646c46576e3c0b5d40ec9cd9ab13acdb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?width=1080&crop=smart&auto=webp&s=ba27fb9779a75b31eed11a80e8734126106337b6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/P6qag9rOeY2jztmUGDh7I2oFa96jMbQUEDDKqj7HWL0.jpg?auto=webp&s=d7cd4b8b51cc53bc6b71172ba88b4a858c1601c6', 'width': 1200}, 'variants': {}}]}
|
||
"max_new_tokens" and following my prompt.
| 1 |
I've been tinkering with the Llama2_70b_chat_uncensored model (which runs surprisingly well on my laptop), though I've noticed similar behavior with other models. In short, when I keep "max_new_tokens" at a low level, say ~200, the model does what I tell it, but of course, I have to manually tell it to continue periodically. When I crank up max_new_tokens, the model invariably jumps the tracks and almost completely ignores instructions. Why would simply allowing the model to generate more at a time make it ramble?
| 2023-09-22T00:42:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16owhgl/max_new_tokens_and_following_my_prompt/
|
Seclusion72
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16owhgl
| false | null |
t3_16owhgl
|
/r/LocalLLaMA/comments/16owhgl/max_new_tokens_and_following_my_prompt/
| false | false |
self
| 1 | null |
Running GGUFs on M1 Ultra: Part 2!
| 1 |
Part 1 : [https://www.reddit.com/r/LocalLLaMA/comments/16o4ka8/running\_ggufs\_on\_an\_m1\_ultra\_is\_an\_interesting/](https://www.reddit.com/r/LocalLLaMA/comments/16o4ka8/running_ggufs_on_an_m1_ultra_is_an_interesting/)
Reminder that this is a test of an M1Ultra 20 core/48 GPU core Mac Studio with 128GB of RAM. I always ask a single sentence question, the same one every time, removing the last reply so it is forced to reevaluate each time. This is using Oobabooga.
Some of y'all requested a few extra tests on larger models, so here are the complete numbers so far. I added in a 34b q8, a 70b q8, and a 180b q3_K_S.
**M1 Ultra 128GB 20 core/48 gpu cores**
------------------
13b q5_K_M: **23-26 tokens per second** (eval speed of ~8ms per token)
13b q8: **26-28 tokens per second** (eval speed of ~9ms per token)
34b q3_K_M: **11-13 tokens per second** (eval speed of ~18ms per token)
34b q4_K_M: **12-15 tokens per second** (eval speed of ~16ms per token)
34b q8: **11-14 tokens per second** (eval speed of ~16ms per token)
70b q2_K: **7-10 tokens per second** (eval speed of ~30ms per token)
70b q5_K_M: **6-9 tokens per second** (eval speed of ~41ms per token)
70b q8: **7-9 tokens per second** (eval speed of ~25ms per token)
180b q3_K_S: **3-4 tokens per second** (eval speed was all over the place: 111ms at lowest, 380ms at worst, but most were in the range of 200-240ms or so)
The 180b q3_K_S is reaching the edge of what I can do at about 75GB in RAM. I have 96GB to play with, so I actually can probably do a 3_K_M or maybe even a 4_K_S, but I've downloaded so much from Huggingface the past month just testing things out that I'm starting to feel bad, so I don't think I'll test that for a little while lol.
One odd thing I noticed was that the q8 was getting similar or better eval speeds than the K quants, and I'm not sure why. I tried several times, and continued to get pretty consistent results.
**Additional test**: Just to see what would happen, I took the **34b q8** and dropped a chunk of code that came in at **14127 tokens of context** and asked the model to summarize the code. It took **279 seconds** at a speed of **3.10 tokens per second** and an eval speed of **9.79ms** per token.
Anyhow, I'm pretty happy all things considered. A 64 core GPU M1 Ultra would definitely move faster, and an M2 would blow this thing away in a lot of metrics, but honestly this does everything I could hope of it.
Hope this helps! When I was considering buying the M1 I couldn't find a lot of info from silicon users out there, so hopefully these numbers will help others!
| 2023-09-22T01:02:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16oww9j/running_ggufs_on_m1_ultra_part_2/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16oww9j
| false | null |
t3_16oww9j
|
/r/LocalLLaMA/comments/16oww9j/running_ggufs_on_m1_ultra_part_2/
| false | false |
self
| 1 | null |
Seeking some help
| 1 |
I've been going back and forth with this idea and I am just stuck. Here is what I am currently working with:
* Currently have a 3700X with 128GB DDR4 and a 6700XT running a dual boot Ubuntu/Windows 11 that was my previous gaming rig. The issues I've had are that Ubuntu just won't boot with the 3090 EVGA FTW3 in there. I tried installing the drivers but it still just won't work vs the 6700XT 12GB working out of the box.
* I have a personal gaming rig with a 7950X, 64GB DDR5 6000, 4090 running Windows 11 that has Thunderbolt/USB4 ports.
* I also have a MacBook Pro 14" M1 Max, 32 Core Gpu, 32GB, 1TB.
* I have a work computer with a 12700H, 64GB, A2000 8GB Gpu
The AMD 3700X rig works great with llama-cpp. Even with the instructions I followed I'm unable to get it to use the 6700XT 12GB in any meaningful way.
​
I've been debating the following over and over:
* Buy an eGPU enclosure and put the 3090 in it and connect it to the 7950X/4090 rig. Use 3700X/128GB when 128GB is required.
* Buy an eGPU enclosure, blast my CC and get a used A6000 (around 3.4k) and connect it to the 7950X/4090 rig. Use 3700X/128GB when 128GB is required.
* Buy an eGPU enclosure, blast the tits off my CC and get a A5000 ADA and connect it to the 7950X/4090 rig. Use 3700X/128GB when 128GB is required.
* Buy a AMD W7800 PRO (gut punch my CC) and put it in the 3700X/128GB rig and work with Windows 11 (Ubuntu RoCm only supports W6800 or Instincts but Windows 11 RoCm supports 6700XT and W7800.)
* Blast the tits off my CC and get an AMD W7900 and put it in the 3700X/128GB rig etc.. etc..
* Just use the AMD 3700X/128GB/6700XT12GB and leave it at that.
* Just use the Macbook Pro.
* Purchase a Mac Studio 192GB, M2 Ultra (maxed).
​
Not rich, but I make a decent living and I have savings (for now.) I've been wanting to get into AI and currently just not sure which way to go. Selling the stuff like the Mac, or the 7950X/4090 is also an option to help pay for stuff. I really only play WoW, BG3 and I am sure I can get along with less hardware.
I would love to run a local LLM to:
* Help me with father's guardianship stuff that drives me up the wall (PDFs, emails, medical billing, appointments, etc..)
* Help me with my personal innovations (writing patent ideas, help me with my online learnings)
* Tell me jokes that are a combination of the following: "Covid is stored in the balls", the planet Uranus, your mom.
* Train it (and my guild mate's moms.)
I would also love to run a local LLM on my work computer which has thunderbolt 4 (hence the Razer Core + 3090/A6000/ADA5000) to:
* Help me with difficult code (lots of technical debt writing by some jerk 20 years ago.)
* Evaluate code according to a particular architecture
* Review docs I write just in case I disclose that I have no idea how I got here and I'm a complete fuck up.
* Train.
​
My experience with LLMs:
* I got LLaMA 2 running on Windows AMD 7950X/64GB DDR5/4090.
* I got llama-cpp running on Linux AMD 3700X/128GB DDR4.
* I got llama-cpp running on the Mac (MBP 14"/M1Max/32 Core GPU/32 GB.)
* I am able to get out of bed and put pants on.
* Absolutely no experience training anything.
Thank you for your time. Any input (in additional to recommend methods to train/ models, etc.. are also welcome. Just please include helpful links) or suggestions on what to get rid of would be helpful.
​
​
​
| 2023-09-22T01:43:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16oxqnp/seeking_some_help/
|
Aroochacha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16oxqnp
| false | null |
t3_16oxqnp
|
/r/LocalLLaMA/comments/16oxqnp/seeking_some_help/
| false | false |
self
| 1 | null |
Setting up an alpaca chatbot
| 1 |
I'm trying to format data for a chatbot, but I don't get how the trainer wants the data set up. Do I just put it in a JSON file, or what? I just need help knowing where to put the data so I can enter it into the training program. Any help would be great, thanks!
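Assuming you're after the standard Alpaca instruction format, the training data is usually just a JSON file holding a list of instruction/input/output records (exactly how it gets loaded depends on the training program). A minimal sketch of writing one:
```
import json

data = [
    {
        "instruction": "Answer the user's question politely.",
        "input": "What time do you open on Sundays?",
        "output": "We open at 10am every Sunday.",
    },
    # ...one dict per training example; "input" may be an empty string
]

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(data, f, indent=2, ensure_ascii=False)
```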
| 2023-09-22T03:16:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16ozlmt/setting_up_a_alpacachatbot/
|
mlpfreddy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ozlmt
| false | null |
t3_16ozlmt
|
/r/LocalLLaMA/comments/16ozlmt/setting_up_a_alpacachatbot/
| false | false |
self
| 1 | null |
Running a local LLM on a 3070 TI (8gb)?
| 1 |
I currently use GPT for a variety of instruct-based text analysis tasks, but I'd like to switch as many of these over to a local LLM if possible. I'm looking to use my Asus M16 laptop (3070 Ti 8GB, Intel i9-12900H, 16GB RAM). Are there any LLMs that would do this well enough?
| 2023-09-22T04:07:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16p0jza/running_a_local_llm_on_a_3070_ti_8gb/
|
benchmaster-xtreme
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p0jza
| false | null |
t3_16p0jza
|
/r/LocalLLaMA/comments/16p0jza/running_a_local_llm_on_a_3070_ti_8gb/
| false | false |
self
| 1 | null |
LongLoRA: Efficient long-context fine-tuning, supervised fine-tuning
| 1 | 2023-09-22T05:03:25 |
https://github.com/dvlab-research/LongLoRA
|
ninjasaid13
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16p1k2e
| false | null |
t3_16p1k2e
|
/r/LocalLLaMA/comments/16p1k2e/longlora_efficient_longcontext_finetuning/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'sN1_TLP2SZnpbcoCRufVI4ceFL3pu5XGOwbkmBVUm_4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?width=108&crop=smart&auto=webp&s=ee71e7376faafa05a53e26a0b6beb92482ea41e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?width=216&crop=smart&auto=webp&s=f6476128dbe494e3014dc0b5a984d255faaff282', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?width=320&crop=smart&auto=webp&s=d847fe131847ef7c19fbc561d0a5080d01350b61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?width=640&crop=smart&auto=webp&s=d38edd569fe2db6066458ab7e281cc8068a5e49e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?width=960&crop=smart&auto=webp&s=555da7f2ded3c55d089ba278c073c28d0d778ab6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?width=1080&crop=smart&auto=webp&s=34a8b946454cbd32ffcfe7a9e2981730ae10cede', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DjmykaDd1vnqwAXp_qXxydc6S_7Xe1D78UGoVbNnAC0.jpg?auto=webp&s=a897231bfd93d74cd5af33b47ccd81a2a266bce1', 'width': 1200}, 'variants': {}}]}
|
||
A Paradigm Shift in Machine Translation: how to outperform GPT-3.5 for translation using 7B and 13B models
| 1 |
From researchers at Microsoft, an interesting paper about a new training strategy for language translation. They release their models, ALMA 7B and ALMA 13B. The best model, ALMA-13B-LoRA, combines full-weight fine-tuning on the monolingual data with LoRA fine-tuning on the parallel data.
Paper: [https://arxiv.org/abs/2309.11674](https://arxiv.org/abs/2309.11674)
Code and models: [https://github.com/fe1ixxu/ALMA](https://github.com/fe1ixxu/ALMA)
Abstract:
>Generative Large Language Models (LLMs) have achieved remarkable advancements in various NLP tasks. However, these advances have not been reflected in the translation task, especially those with moderate model sizes (i.e., 7B or 13B parameters), which still lag behind conventional supervised encoder-decoder translation models. Previous studies have attempted to improve the translation capabilities of these moderate LLMs, but their gains have been limited. In this study, we propose a novel fine-tuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on. Our approach consists of two fine-tuning stages: initial fine-tuning on monolingual data followed by subsequent fine-tuning on a small set of high-quality parallel data. We introduce the LLM developed through this strategy as Advanced Language Model-based trAnslator (ALMA). Based on LLaMA-2 as our underlying model, our results show that the model can achieve an average improvement of more than 12 BLEU and 12 COMET over its zero-shot performance across 10 translation directions from the WMT'21 (2 directions) and WMT'22 (8 directions) test datasets. The performance is significantly better than all prior work and even superior to the NLLB-54B model and GPT-3.5-text-davinci-003, with only 7B or 13B parameters. This method establishes the foundation for a novel training paradigm in machine translation.
Some excerpts at a glance:
>**Do LLMs have an appetite for parallel data?**
>
>Prior studies have fine-tuned LLMs with datasets containing over 300M parallel instances. However, our empirical evaluations suggest that this strategy may not be optimal, and even harm the translation capabilities of LLMs.
>
>To allow for a deep analysis, we concentrate on one language pair, English→Russian (en→ru). LLaMA-2-7B requires only limited training examples (10K and 100K) to achieve competent translation. However, a surplus of examples (5M or 20M) seems to dilute its existing knowledge in Russian. Conversely, MPT-7B, potentially due to its inherently weaker translation capability, exhibits improved performance with an increase in training data. This may suggest that a well-trained LLM may not necessitate substantial parallel data.
>
>**A new training recipe**
>
>We demonstrate that LLMs, such as LLaMA-2-7B, do not voraciously consume parallel data. We introduce a novel training strategy that markedly enhances translation performance without relying heavily on parallel data.
>
>Monolingual Data Fine-tuning: Our first stage is fine-tuning LLMs with monolingual data of non-English languages involved in translation tasks, enhancing their proficiency in these languages. We show that utilizing small monolingual data and modest computational cost (e.g., 1B monolingual tokens mixed by 6 languages and fine-tuning under 18 hours), can facilitate significant improvements in 10 translation directions.
>
>High-Quality Data Fine-tuning: Drawing on insights from Section 3.2 that LLMs may require only small parallel data, coupled with previous research emphasizing training data quality, we fine-tune the model using a small, yet high-quality parallel dataset in this stage.
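To make the recipe concrete, here is a rough sketch of the two-stage procedure using Hugging Face Transformers and PEFT. This is not the authors' code; the dataset files, hyperparameters, and field names are placeholders, so see their repo for the actual scripts:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

# Stage 1: full-weight fine-tuning on monolingual data (placeholder file).
model = AutoModelForCausalLM.from_pretrained(model_name)
mono = load_dataset("json", data_files="monolingual.jsonl")["train"].map(tokenize)
Trainer(
    model=model,
    args=TrainingArguments("stage1", num_train_epochs=1, per_device_train_batch_size=4),
    train_dataset=mono,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()

# Stage 2: LoRA fine-tuning on a small, high-quality parallel dataset.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
parallel = load_dataset("json", data_files="parallel.jsonl")["train"].map(tokenize)
Trainer(
    model=model,
    args=TrainingArguments("stage2", num_train_epochs=1, per_device_train_batch_size=4),
    train_dataset=parallel,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```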
Results:
>ALMA models significantly outperform all prior similar studies and are comparable to SoTA models. Our best model (ALMA-13B-LoRA) substantially outperforms NLLB-54B and GPT-3.5-D (GPT-3.5-text-davinci-003) on average. In en→xx direction, it even outperforms GPT-3.5-T (GPT-3.5-turbo-0301) on average COMET and has close performance when it comes to xx→en.
>
>We categorize BLEU and COMET scores into three groups: scores that are more than 10 points below the higher value of GPT-4/GPT-3.5-T are emphasized in deep red boxes, those that are more than 5 points below are emphasized in shallow red boxes, and all other scores are emphasized in green boxes.
en→xx
https://preview.redd.it/fqn9elktzqpb1.png?width=590&format=png&auto=webp&s=bdcedf294855ce84188dbbdc4d882f710494dcae
xx→en
https://preview.redd.it/q7lqdnzwzqpb1.png?width=590&format=png&auto=webp&s=81f0594fb32cff5630ee78c496fd35187af3dbce
| 2023-09-22T06:18:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16p2smj/a_paradigm_shift_in_machine_translation_how_to/
|
llamaShill
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p2smj
| false | null |
t3_16p2smj
|
/r/LocalLLaMA/comments/16p2smj/a_paradigm_shift_in_machine_translation_how_to/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
|
Which 7b Model for RP and Chat is your favorite ?
| 1 |
Hello guys
I'm on the hunt for the top 3 RP **7b models** in ExLlama format. I've got speed as my top priority, followed by quality. And yes, I know all about the 13b models and their prowess, but right now I'm laser-focused on the 7b ones.
Also, if anyone's got the optimal settings for running these on Oobabooga with 8GB VRAM, I'd be all ears. Remember, I'm only interested in the crème de la crème of **7b models**.
Much appreciated.
| 2023-09-22T06:22:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16p2uts/which_7b_model_for_rp_and_chat_is_your_favorite/
|
New_Mammoth1318
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p2uts
| false | null |
t3_16p2uts
|
/r/LocalLLaMA/comments/16p2uts/which_7b_model_for_rp_and_chat_is_your_favorite/
| false | false |
self
| 1 | null |
Introducing LlamaTor: A Decentralized and Efficient AI Model Distribution Platform
| 1 |
[removed]
| 2023-09-22T06:33:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16p3143/introducing_llamator_a_decentralized_and/
|
Nondzu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p3143
| false | null |
t3_16p3143
|
/r/LocalLLaMA/comments/16p3143/introducing_llamator_a_decentralized_and/
| false | false |
self
| 1 | null |
How does Wizard 30b compare to the latest offerings from NovelAI? (namely Kaeya model and Clio)
| 1 |
Well it seems that GPT4 on OpenAI has finally shit the bed for real. I’ve played around extensively with the latest offerings from NovelAI, but no matter what settings I use it’s still apparent it’s a small model. Nothing compares to GPT4 when it’s jailbroken and in full stride sadly.
I know that Wizard is about the best you can get for a local model, so I'm wondering how the beefiest version compares to the offerings online. Does it hold its own or exceed NovelAI? Their image generation is nothing compared to a solid local model, so I'm wondering if text generation is in the same boat.
Sadly I only have a 3080 10GB with like 16GB of system ram so I can’t do any serious tests myself. Dreaming of saving up and maybe one day buying a shiny new 5090 for local models in a couple of years.
| 2023-09-22T06:36:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16p3354/how_does_wizard_30b_compare_to_the_latest/
|
katiecharm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p3354
| false | null |
t3_16p3354
|
/r/LocalLLaMA/comments/16p3354/how_does_wizard_30b_compare_to_the_latest/
| false | false |
self
| 1 | null |
Easiest way to set up an uncensored LLM on Linux?
| 1 |
I'm not super tech savvy. I am using Fedora Linux as my operating system because I heard it is a good OS for privacy. My laptop is a roughly five-year-old Lenovo ThinkPad T480 (Intel). I'm not entirely sure what the GPU is (when I try to emulate Nintendo Switch games, they run kind of slow).
What is a user-friendly, intuitive LLM I can set up on Fedora Linux with my laptop? I mainly just want to use the LLM for NSFW roleplays.
| 2023-09-22T06:37:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16p33a0/easiest_way_to_set_up_an_uncensored_llm_on_linux/
|
Flimsy-Hedgehog-3520
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p33a0
| false | null |
t3_16p33a0
|
/r/LocalLLaMA/comments/16p33a0/easiest_way_to_set_up_an_uncensored_llm_on_linux/
| false | false |
self
| 1 | null |
Introducing LlamaTor: Revolutionizing AI Model Distribution with BitTorrent Technology
| 1 |
[removed]
| 2023-09-22T06:39:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16p34cn/introducing_llamator_revolutionizing_ai_model/
|
Nondzu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p34cn
| false | null |
t3_16p34cn
|
/r/LocalLLaMA/comments/16p34cn/introducing_llamator_revolutionizing_ai_model/
| false | false |
self
| 1 | null |
Question regarding Llama-2-7b-chat-hf finetune
| 1 |
[removed]
| 2023-09-22T06:48:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16p39d0/question_regarding_llama27bchathf_finetune/
|
Anu_Rag9704
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p39d0
| false | null |
t3_16p39d0
|
/r/LocalLLaMA/comments/16p39d0/question_regarding_llama27bchathf_finetune/
| false | false |
self
| 1 | null |
Best inference rig for $40k?
| 1 |
It must fit a 120GB model; I'm considering 2x A100 80GB or 4x A6000 48GB Ada.
90% of the workload would be inference.
Which is the best choice? Any better alternatives?
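A quick back-of-the-envelope memory check for the two options (rough numbers; real headroom depends on quantization, context length, batch size, and framework overhead):

```python
# Rough memory sketch: weights are split across GPUs by tensor/pipeline
# parallelism; leave headroom for KV cache and activations.
model_gb = 120

for name, n_gpus, gb_per_gpu in [("2x A100 80GB", 2, 80), ("4x A6000 48GB Ada", 4, 48)]:
    total = n_gpus * gb_per_gpu
    weights_per_gpu = model_gb / n_gpus
    headroom = total - model_gb
    print(f"{name}: {total} GB total, ~{weights_per_gpu:.0f} GB of weights per GPU, "
          f"~{headroom} GB left for KV cache and activations")
```

Both options hold the weights; the 4-GPU route leaves more total headroom but adds interconnect overhead, while 2x A100 keeps communication simpler.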
| 2023-09-22T07:09:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16p3lgb/best_inference_rig_for_40k/
|
Wrong_User_Logged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p3lgb
| false | null |
t3_16p3lgb
|
/r/LocalLLaMA/comments/16p3lgb/best_inference_rig_for_40k/
| false | false |
self
| 1 | null |
Training a small model from scratch
| 1 |
I'm experimenting with training a small model, maybe <1B parameters, from scratch using my own datasets.
What are some resources/code I can use to start doing this?
I read recently that the phi models get good results with a small number of parameters; is their training code open source?
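For a starting point, here is a minimal sketch of pretraining a small GPT-style model from scratch with Hugging Face Transformers. The config sizes and the training file are placeholders, and as far as I know the phi training code and data have not been released, only the weights:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, GPT2Config, GPT2LMHeadModel,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

# Reuse an existing tokenizer; training your own is a separate step.
tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token

# ~125M-parameter config; shrink n_layer/n_embd further for quicker experiments.
config = GPT2Config(vocab_size=tok.vocab_size, n_positions=1024,
                    n_embd=768, n_layer=12, n_head=12)
model = GPT2LMHeadModel(config)  # randomly initialized, not pretrained

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=1024)

# Placeholder corpus: plain text, one document per line.
ds = load_dataset("text", data_files="train.txt")["train"].map(tokenize, batched=True)

Trainer(
    model=model,
    args=TrainingArguments("tiny-gpt", per_device_train_batch_size=8,
                           num_train_epochs=1, learning_rate=3e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```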
| 2023-09-22T07:21:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16p3rqm/training_a_small_model_from_scratch/
|
Tasty-Lobster-8915
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p3rqm
| false | null |
t3_16p3rqm
|
/r/LocalLLaMA/comments/16p3rqm/training_a_small_model_from_scratch/
| false | false |
self
| 1 | null |
How does The Bloke Quantize with all these methods?
| 1 |
[removed]
| 2023-09-22T08:23:23 |
Pineapple_Expressed
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16p4phn
| false | null |
t3_16p4phn
|
/r/LocalLLaMA/comments/16p4phn/how_does_the_bloke_quantize_with_all_these_methods/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'CyqYMuq6dpqNJgdZft_6HgqFoWCsNNPGBt96di-OujY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ko60qoaforpb1.png?width=108&crop=smart&auto=webp&s=46d1e5e0cb5757821359b294aaef982c90dc57a7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ko60qoaforpb1.png?width=216&crop=smart&auto=webp&s=f8e786555b319fac3599f7c63cf6d2b3aeaaf581', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ko60qoaforpb1.png?width=320&crop=smart&auto=webp&s=8c917e36847d32876f718f638ce7856acb691233', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ko60qoaforpb1.png?width=640&crop=smart&auto=webp&s=ebf8bdaffe4681324ada517c7d6aeee473580c3b', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ko60qoaforpb1.png?width=960&crop=smart&auto=webp&s=13256f564c59524247072456639e1eb907c6f9db', 'width': 960}], 'source': {'height': 2152, 'url': 'https://preview.redd.it/ko60qoaforpb1.png?auto=webp&s=a860347f7f81a582ba6e6ce77bba8f610ef98bdd', 'width': 987}, 'variants': {}}]}
|
||
Can i fine tune any model with m1 16gb ram
| 1 |
Pretty much the title. I don't have a GPU, so I just want to fine-tune any small model to get my feet wet.
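One gentle way in is LoRA fine-tuning of a sub-1B model; a rough sketch is below. The model choice, data file, and hyperparameters are placeholders, and MPS support on Apple Silicon can still be uneven, so some ops may fall back to CPU:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

# Recent PyTorch/transformers use the M1 GPU (MPS backend) automatically
# when available; otherwise training falls back to CPU.
print("MPS available:", torch.backends.mps.is_available())

name = "EleutherAI/pythia-410m"  # assumed small model; sub-1B is friendlier to 16 GB
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(name)

# LoRA trains a few million adapter weights instead of the whole model.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=256)

# Placeholder corpus: plain text, one document per line.
ds = load_dataset("text", data_files="my_data.txt")["train"].map(tokenize, batched=True)

Trainer(
    model=model,
    args=TrainingArguments("m1-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```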
| 2023-09-22T09:31:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16p5rvh/can_i_fine_tune_any_model_with_m1_16gb_ram/
|
itshardtopicka_name_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p5rvh
| false | null |
t3_16p5rvh
|
/r/LocalLLaMA/comments/16p5rvh/can_i_fine_tune_any_model_with_m1_16gb_ram/
| false | false |
self
| 1 | null |
Is running an open sourced LLM in the cloud via GPU generally cheaper than running a closed sourced LLM?
| 1 |
Assuming the same cloud service, is running an open-source LLM in the cloud via GPU generally cheaper than running a closed-source LLM? (i.e., do we pay a premium when running a closed-source LLM compared to just running anything on the cloud via GPU?)
One example I am thinking of is running Llama 2 13B GPTQ on Microsoft Azure vs. GPT-3.5 Turbo.
I understand there are a lot of parameters to consider (such as choosing which GPU to use in Microsoft Azure, etc.), but I am really asking about the cheapest way to run Llama 2 13B GPTQ or a performance-equivalent closed-source LLM.
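For a rough sense of the trade-off, here is a break-even sketch; every number in it (GPU rate, throughput, API price) is a made-up placeholder to be replaced with the real Azure and API pricing:

```python
# Hypothetical numbers for illustration only.
gpu_cost_per_hour = 1.50        # assumed cloud GPU rate, $/hour
tokens_per_second = 40          # assumed throughput of a 13B GPTQ model
api_cost_per_1k_tokens = 0.002  # assumed closed-source API price

tokens_per_hour = tokens_per_second * 3600
self_hosted_per_1k = gpu_cost_per_hour / (tokens_per_hour / 1000)

print(f"Self-hosted (fully utilized): ${self_hosted_per_1k:.4f} per 1K tokens")
print(f"API: ${api_cost_per_1k_tokens:.4f} per 1K tokens")

# The catch: the GPU bills by the hour whether you use it or not, while the
# API only bills per token, so low or bursty traffic usually favors the API.
utilization = 0.10  # fraction of the hour you actually generate tokens
print(f"Self-hosted at {utilization:.0%} utilization: "
      f"${self_hosted_per_1k / utilization:.4f} per 1K tokens")
```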
| 2023-09-22T10:07:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16p6czo/is_running_an_open_sourced_llm_in_the_cloud_via/
|
--leockl--
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p6czo
| false | null |
t3_16p6czo
|
/r/LocalLLaMA/comments/16p6czo/is_running_an_open_sourced_llm_in_the_cloud_via/
| false | false |
self
| 1 | null |
Does training and inference both require the same amount of VRAM?
| 1 |
I'm interested in running a model locally and I'm thinking of buying either an RTX 3090 or RTX 4090. I'm only interested in generating the text locally; the actual training can be done on more powerful GPUs at a cloud provider.
When people say so and so model required X amount of VRAM, I'm not sure whether that's only for training or if inference also requires just as much VRAM.
My main interest is in generating snippets of code for a particular application. I'm not sure how big of a model (for inference) I need for this, but I also want it to be as fast as possible (one user only), so I don't know if I really need these RTX cards or whether something slightly lower end might do the job too.
Can someone shed some light on VRAM requirements separately for training and inference?
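As a rough rule of thumb, training needs far more VRAM than inference because of gradients and optimizer state. A quick calculator with ballpark multipliers (real usage also depends on activations, context length, and batch size) looks like this:

```python
def vram_estimate_gb(n_params_billion):
    """Very rough VRAM estimates in GB for a model of the given size."""
    p = n_params_billion * 1e9
    gib = 1024 ** 3
    return {
        "inference fp16 (~2 bytes/param)": 2 * p / gib,
        "inference 4-bit (~0.5 bytes/param)": 0.5 * p / gib,
        "full fine-tune, AdamW fp16 (~16 bytes/param)": 16 * p / gib,
        "QLoRA, 4-bit base (~1 byte/param plus overhead)": 1 * p / gib,
    }

for size in (7, 13):
    print(f"--- {size}B model ---")
    for setting, gb in vram_estimate_gb(size).items():
        print(f"{setting}: ~{gb:.0f} GB")
```

On these numbers, a 24 GB RTX 3090/4090 comfortably runs 13B (and quantized 30B+) models for inference, while full fine-tuning of the same models would not fit.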
| 2023-09-22T10:47:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16p71xn/does_training_and_inference_both_require_the_same/
|
floofcode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p71xn
| false | null |
t3_16p71xn
|
/r/LocalLLaMA/comments/16p71xn/does_training_and_inference_both_require_the_same/
| false | false |
self
| 1 | null |
Make a home-based private AI
| 1 |
Hi, I want to make an AI that would work somewhat like Jarvis from the MCU. My requirements:
1. It needs to be fully voice-operated, to the point that I won't need to touch a keyboard or a mouse at all (one caveat: it would receive voice input but would answer in text).
2. It needs to be able to carry on elaborate conversations in order to understand complex assignments.
3. It should be able to manipulate every aspect of the host OS it's running on, such as file and folder creation and management, software installation and removal, changing settings, launching applications and websites, etc.
4. It needs to be able to write and compile code and build databases. It needs to be able to render texts, sounds, voices, images, videos and 3D objects.
I have no experience in making something like that and would need all the basics explained in layman's terms as much as possible.
My main concern currently is the spec requirements for the host machine. Could something like what I described be achieved on consumer-grade CPUs and GPUs?
And is what I'm aiming for even possible for me to build from scratch, considering my lack of familiarity with the subject?
I hope this is the right forum for my questions.
Thanks in advance
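For what it's worth, the conversational core is the most realistic piece on consumer hardware today. A very rough sketch of the voice-in, text-out loop is below; the model file names are placeholders, the audio-recording step is left as a stub, and the OS-control, coding, and rendering requirements are much harder problems that this sketch does not touch:

```python
import whisper                   # pip install openai-whisper
from llama_cpp import Llama      # pip install llama-cpp-python

# Speech-to-text model; "base" is small enough to run on CPU.
stt = whisper.load_model("base")

# Assumed local chat model in GGUF format; any quantized chat model works.
llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)

# Placeholder: record the spoken command to command.wav with any audio tool.
heard = stt.transcribe("command.wav")["text"]
print("You said:", heard)

prompt = ("You are a helpful home assistant. Answer in text.\n"
          f"User: {heard}\nAssistant:")
reply = llm(prompt, max_tokens=256, stop=["User:"])["choices"][0]["text"]
print("Assistant:", reply.strip())
```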
| 2023-09-22T11:23:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16p7pru/make_a_homebased_private_ai/
|
Sorakai154
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16p7pru
| false | null |
t3_16p7pru
|
/r/LocalLLaMA/comments/16p7pru/make_a_homebased_private_ai/
| false | false |
self
| 1 | null |