title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A Quick Guide to Getting Started with LLMs
| 1 |
[removed]
| 2023-09-26T23:39:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16t44bj/a_quick_guide_to_getting_started_with_llms/
|
MrChristoCoder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t44bj
| false | null |
t3_16t44bj
|
/r/LocalLLaMA/comments/16t44bj/a_quick_guide_to_getting_started_with_llms/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]}
|
Tips for no GPU?
| 1 |
Does anyone have any tips and tricks for people trying to run Llama locally without a GPU?
What about a torrent solution like Petals? Please share your any insights and knowledge.
| 2023-09-27T00:24:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16t57xn/tips_for_no_gpu/
|
kecepa5669
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t57xn
| false | null |
t3_16t57xn
|
/r/LocalLLaMA/comments/16t57xn/tips_for_no_gpu/
| false | false |
self
| 1 | null |
Is 7900xtx ok to run llm
| 1 |
Would AMD gpu be a choice today?
| 2023-09-27T00:55:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16t5xni/is_7900xtx_ok_to_run_llm/
|
Willing_Citron_2894
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t5xni
| false | null |
t3_16t5xni
|
/r/LocalLLaMA/comments/16t5xni/is_7900xtx_ok_to_run_llm/
| false | false |
self
| 1 | null |
Fine tune LLaMa for code generation, seek suggestion.
| 1 |
Hello, want to get an idea from you guys if this is a reasonable approach.
Problem statement: on daily basis, I need to create one code file and one unit test file. The code file follows a very specific format. And the unit test does the testing. Both files are in python. I want to use a fine tuned LLM to generate such files when I give a prompt.
Approach: I have a sizable collection (100+) of such files following specific format, and a text description of the file what it does. I plan to prepare such code files and corresponding unit test file as output, and description as the question, in the format of Q&A. This dataset will be used to fine tune the LLM to learn this specific format. In the end, the LLM should be able to output such two files when prompted.
Question:
1. Is this approach reasonable?
2. What’s the best way to prepare such dataset since the answer is the code.
3. Any recommendation on the foundation model?
Thank you
| 2023-09-27T02:04:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16t7la1/fine_tune_llama_for_code_generation_seek/
|
Renspire
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t7la1
| false | null |
t3_16t7la1
|
/r/LocalLLaMA/comments/16t7la1/fine_tune_llama_for_code_generation_seek/
| false | false |
self
| 1 | null |
Finetune Llm on amd gpu rx 580
| 1 |
So i intend to finetune llama2 for a specific usecase i can already use koboldcpp and opencl to run it but hiw do i finetune it i literally cant find any info about this online
| 2023-09-27T03:29:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16t9ihy/finetune_llm_on_amd_gpu_rx_580/
|
GOD_HIMSELVES
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t9ihy
| false | null |
t3_16t9ihy
|
/r/LocalLLaMA/comments/16t9ihy/finetune_llm_on_amd_gpu_rx_580/
| false | false |
self
| 1 | null |
What's your favorite RAG system and why? Langchain, LlamaIndex, etc?
| 1 |
Just trying to understand the latest landscape of RAGs out there.
Thanks!
| 2023-09-27T04:28:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16taq8m/whats_your_favorite_rag_system_and_why_langchain/
|
jl303
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16taq8m
| false | null |
t3_16taq8m
|
/r/LocalLLaMA/comments/16taq8m/whats_your_favorite_rag_system_and_why_langchain/
| false | false |
self
| 1 | null |
Jetson nano 4gb
| 1 |
Any new optimized or new techniques to get models like LLama or Falcon on jetson nano 4gb ? 🥹😅
| 2023-09-27T04:35:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16tavjo/jetson_nano_4gb/
|
Plane_Ad9568
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tavjo
| false | null |
t3_16tavjo
|
/r/LocalLLaMA/comments/16tavjo/jetson_nano_4gb/
| false | false |
self
| 1 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution
| 1 | 2023-09-27T06:18:56 |
https://github.com/Nondzu/LlamaTor
|
Nondzu
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16tcv1u
| false | null |
t3_16tcv1u
|
/r/LocalLLaMA/comments/16tcv1u/llamator_a_new_initiative_for_bittorrentbased_ai/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'RjJWztBiFGWSiX83fH3_kau7edsk9G7G5e6O7I6Yv9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=108&crop=smart&auto=webp&s=a1e94f2a1dde4342b077bc220e69f48c13683c9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=216&crop=smart&auto=webp&s=820cc6730933fa10d58ef8f84799900191c297d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=320&crop=smart&auto=webp&s=910333406ae647d8994e8fc801175a91f299b052', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=640&crop=smart&auto=webp&s=598ba38714737a9f518e059cd34861397e7334e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=960&crop=smart&auto=webp&s=c2f18662d6c882a0fd0b94cba4aa0a585c09c3df', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=1080&crop=smart&auto=webp&s=091a83f91c5193bed613e7be11d000aa858cb0ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?auto=webp&s=8949511d653d73601586ca272c30c8d78f9b04dd', 'width': 1200}, 'variants': {}}]}
|
||
Loading LLaMA2-70B model
| 1 |
Hi All,
I'm trying to load LLaMA2-70B model, with following GPU specs:
https://pastebin.com/vjbWpP7C
Basically, I have two a40 machines, and was using this (https://discuss.huggingface.co/t/how-to-load-large-model-with-multiple-gpu-cards/18522/8) code to make it work,
```python
model = AutoModelForCausalLM.from_pretrained("//common2/public/TheBloke/Llama-2-70B-fp16", device_map="auto")
```
but when I try to load the model it take hours and then the server crashes.
Am I doing it in a wrong manner? Any suggestions as how it's done?
| 2023-09-27T07:11:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16tdrot/loading_llama270b_model/
|
1azytux
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tdrot
| false | null |
t3_16tdrot
|
/r/LocalLLaMA/comments/16tdrot/loading_llama270b_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
|
Why Weightdecay is set to zero when finetune LLM?
| 1 |
I found that all long context models are finetune with weightdecay equals zero. Anyone knows why?
[http://arxiv.org/abs/2309.12307](http://arxiv.org/abs/2309.12307)
[http://arxiv.org/abs/2309.00071](http://arxiv.org/abs/2309.00071)
| 2023-09-27T08:03:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16telcc/why_weightdecay_is_set_to_zero_when_finetune_llm/
|
Mountain-Run-5602
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16telcc
| false | null |
t3_16telcc
|
/r/LocalLLaMA/comments/16telcc/why_weightdecay_is_set_to_zero_when_finetune_llm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
|
Local llm agents eg local auto gpt a thing yet?
| 1 |
Question in the title basically. Haven't really been keeping up to date since April/may. Wondering if one can use llm agents locally in functions akin to auto gpt eg automatically
| 2023-09-27T08:17:19 |
https://www.reddit.com/r/LocalLLaMA/comments/16teta9/local_llm_agents_eg_local_auto_gpt_a_thing_yet/
|
Maelstrom100
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16teta9
| false | null |
t3_16teta9
|
/r/LocalLLaMA/comments/16teta9/local_llm_agents_eg_local_auto_gpt_a_thing_yet/
| false | false |
self
| 1 | null |
Another test to see if I can make top level posts yet
| 1 |
[removed]
| 2023-09-27T08:24:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16texau/another_test_to_see_if_i_can_make_top_level_posts/
|
MrEloi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16texau
| false | null |
t3_16texau
|
/r/LocalLLaMA/comments/16texau/another_test_to_see_if_i_can_make_top_level_posts/
| false | false |
self
| 1 | null |
storage as ram
| 1 |
I don't have enough ram to run large models but I know there is an option to use storage as ram. Could it work even if it would just be much slower?
| 2023-09-27T08:28:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16tezbu/storage_as_ram/
|
Terrible_Vegetable4
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tezbu
| false | null |
t3_16tezbu
|
/r/LocalLLaMA/comments/16tezbu/storage_as_ram/
| false | false |
self
| 1 | null |
MistralAI-0.1-7B, the first release from Mistral, dropped just like this on X (raw magnet link; use a torrent client)
| 1 | 2023-09-27T08:37:41 |
https://twitter.com/MistralAI/status/1706877320844509405
|
Ilforte
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16tf4qn
| false |
{'oembed': {'author_name': 'Mistral AI', 'author_url': 'https://twitter.com/MistralAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">magnet:?xt=urn:btih:208b101a0f51514ecf285885a8b0f6fb1a1e4d7d&dn=mistral-7B-v0.1&tr=udp%3A%2F%<a href="https://t.co/OdtBUsbMKD">https://t.co/OdtBUsbMKD</a>%3A1337%2Fannounce&tr=https%3A%2F%<a href="https://t.co/HAadNvH1t0">https://t.co/HAadNvH1t0</a>%3A443%2Fannounce<br><br>RELEASE ab979f50d7d406ab8d0b07d09806c72c</p>— Mistral AI (@MistralAI) <a href="https://twitter.com/MistralAI/status/1706877320844509405?ref_src=twsrc%5Etfw">September 27, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/MistralAI/status/1706877320844509405', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_16tf4qn
|
/r/LocalLLaMA/comments/16tf4qn/mistralai017b_the_first_release_from_mistral/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'CS0B1I30ZYj0ZX-zyY6FLZvd72xR9WPR7HDTGHV8oJU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?width=108&crop=smart&auto=webp&s=60a70155bda3574ef1569e82bff7884074c1b383', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?auto=webp&s=f62ea3709aa17be97be9e5aa51d9264b078d60e1', 'width': 140}, 'variants': {}}]}
|
||
Looking for some guidance and get a basic knowledge regarding the branches in GPTQ models.
| 3 |
So I have a lot of questions. I am currently just using the quantized models by loading them, but now I want to know how exactly different branches are actually different. I see names like exllama, llama.cpp, Actorder, 128g etc. I do not know which one is suitable for me or what to use. I am not sure if 13B is actually better than the 7B models in some cases as I am getting almost similar responses. Can someone let me know from where I can actually learn this stuff?
| 2023-09-27T09:13:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16tfp17/looking_for_some_guidance_and_get_a_basic/
|
IamFuckinTomato
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tfp17
| false | null |
t3_16tfp17
|
/r/LocalLLaMA/comments/16tfp17/looking_for_some_guidance_and_get_a_basic/
| false | false |
self
| 3 | null |
How useful is ChatGPT Finetuning compared to Open LLM fine tuning
| 1 |
Hi everyone,
I am especially interested in your experience of comparing ChatGPT Finetuning with Open LLM fine tuning
What are your experiences, advantages and bottlenecks for both technologies?
| 2023-09-27T09:38:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16tg2j9/how_useful_is_chatgpt_finetuning_compared_to_open/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tg2j9
| false | null |
t3_16tg2j9
|
/r/LocalLLaMA/comments/16tg2j9/how_useful_is_chatgpt_finetuning_compared_to_open/
| false | false |
self
| 1 | null |
ChatGPT fine-tuning vs Open LLM fine-tuning
| 1 |
Hi everyone,
I am especially interested in your experience of comparing ChatGPT fine-tuning with Open LLM fine-tuning
What are your experiences, advantages and bottlenecks for both technologies?
What could you achieve with one that you could not achieve with the other? Why did you decide for one technology in the end?
| 2023-09-27T09:40:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16tg3v0/chatgpt_finetuning_vs_open_llm_finetuning/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tg3v0
| false | null |
t3_16tg3v0
|
/r/LocalLLaMA/comments/16tg3v0/chatgpt_finetuning_vs_open_llm_finetuning/
| false | false |
self
| 1 | null |
Introducing QA-LoRA - Quantization-aware low-rank adaptation of LLMs
| 1 |
Fascinating new method from Qi Tian et al that combines parameter efficient fine-tuning (used in LoRAs) with parameter quantization in a new way with major efficiency gains - lower computation, faster inference, lower memory requirements.
[Paper page - QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models (huggingface.co)](https://huggingface.co/papers/2309.14717)
Of particular interest is it performs very well with low parameter models, and low bit models like 2-bit, where it appears to be SOTA. Potentially useful for running Falcon 180B on home hardware.
https://preview.redd.it/bbgprv5xyrqb1.png?width=1677&format=png&auto=webp&s=b64d6a10cdf29a50548e13000c3ca5cd657142a5
| 2023-09-27T10:28:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16tgvtc/introducing_qalora_quantizationaware_lowrank/
|
PookaMacPhellimen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tgvtc
| false | null |
t3_16tgvtc
|
/r/LocalLLaMA/comments/16tgvtc/introducing_qalora_quantizationaware_lowrank/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '1GQs9IczKgKpajvf4Kpt_pWpRFCJ13n8AoJ92xkMtPo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=108&crop=smart&auto=webp&s=5de1aa1326e1adb90623a8fe46d23cdec07b3c20', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=216&crop=smart&auto=webp&s=b85c7cbbfb564afd1db844100122c7be8f4cf741', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=320&crop=smart&auto=webp&s=12b218d9f9e6259333bf256abdc97700c2130da0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=640&crop=smart&auto=webp&s=4132db503b7e74f0814243ab6b608d32d3b290c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=960&crop=smart&auto=webp&s=f4ca124530b6e26bf2f62bdb2d90311c9b587311', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=1080&crop=smart&auto=webp&s=9d502e77b406b5be8d4d6e8107ade1f71f827086', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?auto=webp&s=a49deda22fb5e2a2e6519b57d68d9644b9a38ee4', 'width': 1200}, 'variants': {}}]}
|
|
Exllamav2 Performance With Different Quantization Levels
| 1 |
When we discuss quantization, we often only refer to it in terms of perplexity. I thought it would be interesting to explore the impact of quantization levels on an actual benchmark such as MMLU.
I quantized the model at 9 different levels for the 7b and 13b models using the exllamav2 quantizer. I also added theBloke/*GPTQ as an extra data point.
From there, I adapted the test_MMLU.py script in the exllamav2 repo and modified it to go through all categories and calculate the number of questions that the model got correct (as opposed to the logit calculation he does currently) for all 57 categories/14,000 questions. The scores shown below are the means of all categories.
I’ve put together a chart that shows the performance at different quantization levels, and I’ve also got all the raw data for those who want to dig deeper.
A few notes on the performance
* Performance levels out at around 5bpw (bits per weight) with negligible effect on performance. Any decrease is likely due to noise. However, I did read some reports that the exllamav2 quantizer struggles with higher target bit rates
* The 7b model is more affected by lower bits with a -10% performance vs. -5% at 3.0bpw
* 2.5 bpw is likely not worth using. The 13b 2.5bpw version performs worse than similarly sized 7b models. Regrettably, I cannot test 70b models as I only have 3090 GPUs
Let me know what you think. If there is interest, I can try to evaluate some 70b parameter models with the 2xV100 nodes I have access to. Alternatively, I would be interested in evaluating some "cheat" models as I've heard they perform particularly poorly when quantized.
| Quant | Llama-7b | Llama-13b |
| --- | --- | --- |
| 2.5bpw | 0.27114 | 0.37513 |
| 3.0bpw | 0.345382 | 0.505637 |
| 3.5bpw | 0.408875 | 0.5207 |
| 4.0bpw | 0.417009 | 0.538798 |
| 4.7bpw | 0.446347 | 0.545505 |
| 5.0bpw | 0.455618 | 0.550132 |
| 6.0bpw | 0.45636 | 0.548109 |
| 7.0bpw | 0.454049 | 0.547244 |
| 8.0bpw | 0.447704 | 0.548172 |
| GPTQ | 0.428763 | 0.542167 |
Raw results here:
https://docs.google.com/spreadsheets/d/1MFmHDpqcf7CP_EYnwl1QsUP0KhS6jA8x1JYuwm3NH0U/edit?usp=sharing
| 2023-09-27T10:34:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16tgzzk/exllamav2_performance_with_different_quantization/
|
a_slay_nub
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tgzzk
| false | null |
t3_16tgzzk
|
/r/LocalLLaMA/comments/16tgzzk/exllamav2_performance_with_different_quantization/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'uE0hLPujxfcSKvFsxSzlOF_wtxb7x70gyELBHvLpNhs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=108&crop=smart&auto=webp&s=2ed992105b4540a7774013bedebc92f5117ffaab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=216&crop=smart&auto=webp&s=7d0fd8b8d1bd7a8b52284a9d0a00fa9ca07b8901', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=320&crop=smart&auto=webp&s=e1b95234a7a5698fc7f875762abf3c5bdb222ac2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=640&crop=smart&auto=webp&s=2e74dda36cbca55fd89de1cb5a18c542a4e6a045', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=960&crop=smart&auto=webp&s=e709f8220b7e8c287d32ad4b6d9cd57f66b31448', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=1080&crop=smart&auto=webp&s=3d4ba486216d964514ac42e84a44a7c1324a9ef1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?auto=webp&s=129282595eef4b9fee85a8d4a86fe3972949cdc1', 'width': 1200}, 'variants': {}}]}
|
Could merging finetunes be part of Chatgpt's secret sauce?
| 1 |
After the dramatic surprises of speechless-llama2-hermes-orca-platypus-wizardlm-13b
https://www.reddit.com/r/LocalLLaMA/comments/16s701v/this_is_one_of_the_best_13b_models_ive_tested_for/
In at least the whole Shakespearean sonnet writing area (and also general storytelling I feel) this model seems to punch so far above it's weight I seriously wondered if it was some leak of the original real Orca but trained on Llama2. I don't think that's the case, but if this is how magic can happen, could it be it's something that's helped create the 'experts' gpt4 is rumoured to have?
| 2023-09-27T11:35:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16ti59p/could_merging_finetunes_be_part_of_chatgpts/
|
ambient_temp_xeno
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ti59p
| false | null |
t3_16ti59p
|
/r/LocalLLaMA/comments/16ti59p/could_merging_finetunes_be_part_of_chatgpts/
| false | false |
self
| 1 | null |
What are the problems faced when an AI customer support chatbot is used by a SaaS business.
| 1 |
What exactly are all the problems/possible problems faced while using and incorporating an AI customer support chatbot by small to medium sized SaaS businesses in USA/Europe/etc.
Since am learning how to build AI chatbots and want to sell them to SaaS businesses.
| 2023-09-27T12:22:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16tj40a/what_are_the_problems_faced_when_an_ai_customer/
|
arxavsx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tj40a
| false | null |
t3_16tj40a
|
/r/LocalLLaMA/comments/16tj40a/what_are_the_problems_faced_when_an_ai_customer/
| false | false |
self
| 1 | null |
Cannot load GGUF model - invalid magic number ?
| 1 |
Today I managed to build llama.cpp on Android with termux (F-Droid).
I tried out to run llama.cpp main executable now with the Xwin 7B (4-bit quantized) model, downloaded from
[https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GGUF/blob/main/xwin-lm-7b-v0.1.Q4\_0.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GGUF/blob/main/xwin-lm-7b-v0.1.Q4_0.gguf)
Unfortunately, it gives an error 'invalid magic number'.
See the screenshot below.
So that means this GGUF version is outdated, or what is wrong with the model ?
Note: for simplicity, I built llama.cpp *without* BLAS (so simply via command 'cmake .' and 'make').
​
https://preview.redd.it/4rcael3f1tqb1.jpg?width=1080&format=pjpg&auto=webp&s=3769edb229b596febb22baea55733ff0de18c655
| 2023-09-27T14:03:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16tlhg7/cannot_load_gguf_model_invalid_magic_number/
|
Fit_Check_919
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tlhg7
| false | null |
t3_16tlhg7
|
/r/LocalLLaMA/comments/16tlhg7/cannot_load_gguf_model_invalid_magic_number/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'qHF6z0VgAPmgjD_377P7ukswa6IGLtI1a0ZGfZct0QI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=108&crop=smart&auto=webp&s=82150fe21ed44cef43a8d9fcb15cfd2fbcc0d0c8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=216&crop=smart&auto=webp&s=0e2809d1cd33f73386a72e8dce38a12bd291fef9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=320&crop=smart&auto=webp&s=9991a795dec4138122ced9927a35fc22da904b8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=640&crop=smart&auto=webp&s=a76f164f1d7e4ae2ba0580c365383999f4c68c96', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=960&crop=smart&auto=webp&s=503e0bc16c9f8a6777c9b8898f5d055829b51e83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=1080&crop=smart&auto=webp&s=3bef644dd7fdb094b40d0e7a9025510e79ea0686', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?auto=webp&s=26fffb61355a699a9d84b87b739cbcdd66e8286b', 'width': 1200}, 'variants': {}}]}
|
|
How to Run LLM Models on GPU-enabled Local Servers and Use API Services to Access Them from CPU-only Computers in LAN
| 1 |
Is it possible to deploy an LLM model to a local computer (server) with RTX 4090 and provide API services, and then use a computer that only has CPU to access the server model? (Only python if possible, thanks.)🙂
| 2023-09-27T14:22:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16tlyc1/how_to_run_llm_models_on_gpuenabled_local_servers/
|
Mochunnian
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tlyc1
| false | null |
t3_16tlyc1
|
/r/LocalLLaMA/comments/16tlyc1/how_to_run_llm_models_on_gpuenabled_local_servers/
| false | false |
self
| 1 | null |
Model trained on medical data
| 1 |
Is there any model besides medalpaca that is trained on medical data?
I tried thebloke's 13b Q5 quantized variant of medalpaca but for some reason I only get gibberish.
| 2023-09-27T14:40:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16tmds6/model_trained_on_medical_data/
|
Typical-Armadillo340
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tmds6
| false | null |
t3_16tmds6
|
/r/LocalLLaMA/comments/16tmds6/model_trained_on_medical_data/
| false | false |
self
| 1 | null |
Llama2 (original) vs llama2 (quantised) performance
| 1 |
Hi guys, i do not have GPUs and can work only on free version of google colab gpu (T4)
I just wanted to understand if is there any source where I came compare the performance in results for llama2 vs llama2 quantised models.
I can only try out 7B and 13 B (8/4/5 bits etc).
So how much is the trade off between real vs quantised models?
TIA
| 2023-09-27T14:49:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16tmmcb/llama2_original_vs_llama2_quantised_performance/
|
ianuvrat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tmmcb
| false | null |
t3_16tmmcb
|
/r/LocalLLaMA/comments/16tmmcb/llama2_original_vs_llama2_quantised_performance/
| false | false |
self
| 1 | null |
I fine-tuned a model but when running inference it just repeats the prompt for its output
| 1 |
[removed]
| 2023-09-27T15:31:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16tnnvp/i_finetuned_a_model_but_when_running_inference_it/
|
thinkingdots
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tnnvp
| false | null |
t3_16tnnvp
|
/r/LocalLLaMA/comments/16tnnvp/i_finetuned_a_model_but_when_running_inference_it/
| false | false |
self
| 1 | null |
Mistral 7B releases with claims of outperforming larger models
| 1 |
Claims as follows:
1. Outperforms Llama 2 13B on all benchmarks
2. Outperforms Llama 1 34B on many benchmarks
3. Approaches CodeLlama 7B performance on code, while remaining good at English tasks
https://mistral.ai/news/announcing-mistral-7b/
| 2023-09-27T15:35:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16tnrpm/mistral_7b_releases_with_claims_of_outperforming/
|
abybaddi009
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tnrpm
| false | null |
t3_16tnrpm
|
/r/LocalLLaMA/comments/16tnrpm/mistral_7b_releases_with_claims_of_outperforming/
| false | false |
self
| 1 | null |
Need Feedback on Agent Framework
| 1 |
[removed]
| 2023-09-27T15:38:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16tnus5/need_feedback_on_agent_framework/
|
libraryofcelsus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tnus5
| false | null |
t3_16tnus5
|
/r/LocalLLaMA/comments/16tnus5/need_feedback_on_agent_framework/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'qePlJiYaje-kX_1e9KpN1e0EcevGzYtWWDh6vwyR3wk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=108&crop=smart&auto=webp&s=8374e4e316ca0e8cd9a4946cfe56399475be7bdd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=216&crop=smart&auto=webp&s=f3de1c36135a9aaac4c702d52dd10531c278660e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=320&crop=smart&auto=webp&s=11b8b6f7c4491e2db118603a9f54b0b9ca7fb6c3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=640&crop=smart&auto=webp&s=e3c5a1714ae1a223922f0c78e36c2dccf8a759b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=960&crop=smart&auto=webp&s=4a5a657062f97c291464e37d79203da912836ab4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=1080&crop=smart&auto=webp&s=a565dab5fbd37167807968c0aca2702755c044ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?auto=webp&s=51220552144185b2080007abf704aeedce8b5bd6', 'width': 1200}, 'variants': {}}]}
|
We built Beam: An ultrafast serverless GPU runtime
| 1 |
[removed]
| 2023-09-27T15:55:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16to9rg/we_built_beam_an_ultrafast_serverless_gpu_runtime/
|
velobro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16to9rg
| false | null |
t3_16to9rg
|
/r/LocalLLaMA/comments/16to9rg/we_built_beam_an_ultrafast_serverless_gpu_runtime/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'TeeNqkrWBNh1JrFhPM2IXUB-2e8G2FFKKXkAMVEbKqA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=108&crop=smart&auto=webp&s=c3d57176345f8446cfe6b731a23ed08dfcf38837', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=216&crop=smart&auto=webp&s=538ef075f3c3be4c90a8291af8558a98c2bf3ef6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=320&crop=smart&auto=webp&s=b8731b712344d67584f49faff4deb52a59115da5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=640&crop=smart&auto=webp&s=86970665de28783a76f0ae910935d94519e5617d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=960&crop=smart&auto=webp&s=5e8a9bc92278d1dde3d4a85f1229035915219f5d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=1080&crop=smart&auto=webp&s=e5980646a5f54d22dcb0e53de3eb74bd356a2de2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?auto=webp&s=8ca8c7206f90703015d9ef3ad039b847d256738f', 'width': 1200}, 'variants': {}}]}
|
9 Open Source LLMs and Agents to Watch
| 1 |
[removed]
| 2023-09-27T16:13:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16toqno/9_open_source_llms_and_agents_to_watch/
|
Data_Nerd1979
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16toqno
| false | null |
t3_16toqno
|
/r/LocalLLaMA/comments/16toqno/9_open_source_llms_and_agents_to_watch/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ZH0Gz73guD-UAkjerWMHdpIjSueUYRLBGpyMGqN_iTk', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=108&crop=smart&auto=webp&s=743dad60ed61c37c1f7124e5482ee49a3011123e', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=216&crop=smart&auto=webp&s=e7373e69d231aa58e00766f7c341920bda53f466', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=320&crop=smart&auto=webp&s=d13ea93241f2464ff0959c1ed033eef8ea3e90c7', 'width': 320}, {'height': 300, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=640&crop=smart&auto=webp&s=38520d5b73f66fdcadc8295d647388e41425eb67', 'width': 640}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?auto=webp&s=ec863ceeab44ef260f82cebf69d12fb270f58786', 'width': 640}, 'variants': {}}]}
|
Model with unlimited finetuning resources but inference limitations?
| 1 |
I am planning on finetuning a llama-70b-chat model for my specific use case. For the finetuning stage I can rent AWS servers, but where I'm limited is inference, where I have 8xV100 gpus at my disposal. Basically, I need fast inference over sequence that never exceed \~2000 tokens, running multiple models at the same time is fine if they fit.
I want to go with qlora but I'm not sure if the 4bit makes sense or to go with 8bit. Also, I'm not sure what the difference is between using llama-recipes and exllama?
| 2023-09-27T16:45:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16tpkiq/model_with_unlimited_finetuning_resources_but/
|
Mountain_Lobster5511
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tpkiq
| false | null |
t3_16tpkiq
|
/r/LocalLLaMA/comments/16tpkiq/model_with_unlimited_finetuning_resources_but/
| false | false |
self
| 1 | null |
OS 7B Mistral Model
| 1 | 2023-09-27T17:27:53 |
https://x.com/ClementDelangue/status/1707075741651804208?s=20
|
ninjasaid13
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16tqoqf
| false | null |
t3_16tqoqf
|
/r/LocalLLaMA/comments/16tqoqf/os_7b_mistral_model/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Ql6Uhi1ucq0p1LecKBaF2vKE6WhhgruMXNCZX_ebdrI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iYL2KjzDqZ-uf7_-ABr5NIM1MXm3-PF5RUF_afgf814.jpg?width=108&crop=smart&auto=webp&s=e09562bcb08a7f0919dfd19796df158644359a47', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/iYL2KjzDqZ-uf7_-ABr5NIM1MXm3-PF5RUF_afgf814.jpg?auto=webp&s=29323197245dc865ef4e77689831fb6bfff082bc', 'width': 200}, 'variants': {}}]}
|
||
VideoDirectorGPT: Consistent multi-scene video generation that uses the knowledge of LLMs
| 1 | 2023-09-27T17:52:48 |
https://arxiv.org/abs/2309.15091
|
ptitrainvaloin
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
16trbqs
| false | null |
t3_16trbqs
|
/r/LocalLLaMA/comments/16trbqs/videodirectorgpt_consistent_multiscene_video/
| false | false |
default
| 1 | null |
|
How to run ExLLama in python notebooks?
| 1 |
Currently I am making API calls to the huggingface llama-2 model for my project and am getting around 5t/s. I understand that it can be improved by using exllama but can't find any code samples on how to do that. The github repo simply mentions about the UI which uses exllama but how can I replace the huggingface transformer with this?
| 2023-09-27T18:12:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16trseo/how_to_run_exllama_in_python_notebooks/
|
todaysgamer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16trseo
| false | null |
t3_16trseo
|
/r/LocalLLaMA/comments/16trseo/how_to_run_exllama_in_python_notebooks/
| false | false |
self
| 1 | null |
AI details from Meta Keynote Day 1.
| 1 |
Mark Zuckerberg;
New Image generation model from Meta called Emu
Meta version of ChatGPT, with internet access.
CharacterAI-like Agents, specifically mentions "entertainment", goes on to mention roleplaying, and a dungeon master bot.
AI Studio, a platform for creating AI like the official ones that Meta has made. Mentions "responsibly", and guardrails around "inappropriate conversations" so probably gonna be cucked.
Meta confirms they are working on a multi-modal AI that will launch next year. For use in their smart glasses.
​
AI Segment;
Meta LLaMA partners with AWS and Cloudflared
LLaMA-3 details and release confirmed for next year
Backdrop and Restyle, pix2pix style AI image editors, using Segment Anything.
Visual elements of AI Studio are AI, if this releases open source, making vtuber-like bots will be very easy.
It seems any more juicy details while be in the day-2 "LLaMA Lightning chat"
| 2023-09-27T19:24:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16ttg6b/ai_details_from_meta_keynote_day_1/
|
New_Hope3181
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ttg6b
| false | null |
t3_16ttg6b
|
/r/LocalLLaMA/comments/16ttg6b/ai_details_from_meta_keynote_day_1/
| false | false |
self
| 1 | null |
Can I fine tune llama 2 using instructions instead of examples?
| 1 |
I have looked at a number of fine tuning examples, but it seems like they are always using examples input/output to fine tune.
I have a rather long and complex prompt that I use together with data to be processed by my normal (not fine tuned model) and I would like to not have to send the long set of instructions every time, when I need it to process the data.
Is it possible to fine tune using instructions?
| 2023-09-27T20:13:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16tupbc/can_i_fine_tune_llama_2_using_instructions/
|
trollbrot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tupbc
| false | null |
t3_16tupbc
|
/r/LocalLLaMA/comments/16tupbc/can_i_fine_tune_llama_2_using_instructions/
| false | false |
self
| 1 | null |
Opinion regarding the optimal model.
| 1 |
While I understand people have very cheap GPU's with 4 or 6 GB's of VRam the ultimate for a local LLM is a 4090 with a good CPU. I have a i9-13900K. People are creating various public local LLM's at different sizes but what about creating the highest quality LLM that uses every last bit of available VRam of a 4090. A few hours back I saw a post about yet another "best" LLM. Imagine having a ?30B? 8 bit version of it or a 60B 4bit version. NOTE: Techniques exist where if most of the model fits on the GPU a good fast CPU can handle a small part of the remainder.
While it is fun as easy to download various 7B models to experiment with it is rather a large task to grab some 30B model not knowing if it is any good. Downloading and quantizing.
13B models are ok but imagine something with 3X the param's.
I've got the hardware but need the model to make the best of it.
| 2023-09-27T20:13:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16tupcu/opinion_regarding_the_optimal_model/
|
Guilty-History-9249
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tupcu
| false | null |
t3_16tupcu
|
/r/LocalLLaMA/comments/16tupcu/opinion_regarding_the_optimal_model/
| false | false |
self
| 1 | null |
I have finetuned LLAMA 7B on dialouge dataset, call it llamaxyz, the model is giving good results as now. Now, I also want to incorporate some static information (basically RAG), in same, do I use LLAMA index with my fine-tuned LLAMAXYZ model? or change the static information in conversational data,
| 1 |
[removed]
| 2023-09-27T20:39:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16tvdab/i_have_finetuned_llama_7b_on_dialouge_dataset/
|
Anu_Rag9704
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tvdab
| false | null |
t3_16tvdab
|
/r/LocalLLaMA/comments/16tvdab/i_have_finetuned_llama_7b_on_dialouge_dataset/
| false | false |
self
| 1 | null |
Fine-tuned llama2-7b-lora vs chatGPT in a noble game of chess?
| 1 |
TL;DR I've experimented fine-tuning few llama2-7b models to play chess trying different datasets (regular move list in PGN format, board drawings, position analysis, etc.). Neither of them performed extremely well, however PGN is the way to go: can predict early moves, does ok vs ChatGPT-3.5 in some tasks, loses in chess.
Here I explain data preparation and training: [https://quicknote.io/da56ae00-5d73-11ee-8f89-8bebfdd9df01](https://quicknote.io/da56ae00-5d73-11ee-8f89-8bebfdd9df01)
Would you try larger models? Should I tune hyperparameters? Or maybe try a better dataset?
https://preview.redd.it/of7v043x3vqb1.jpg?width=420&format=pjpg&auto=webp&s=956af2a412b8cd97c9ba05b9f34709490999a471
​
| 2023-09-27T21:03:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16tvz7b/finetuned_llama27blora_vs_chatgpt_in_a_noble_game/
|
Acceptable_Bed7015
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tvz7b
| false | null |
t3_16tvz7b
|
/r/LocalLLaMA/comments/16tvz7b/finetuned_llama27blora_vs_chatgpt_in_a_noble_game/
| false | false | 1 | null |
|
Running fp16 on Text Generation UI for Llama 2 13B chat
| 1 |
I am trying to run fp16 with llama.cpp and I am not sure if that's possible? I can't load the model with it for reference I used (TheBloke/Llama-2-13B-Chat-fp16). Can someone help me understand what you typically have to do to run fp16? I can run the quantized models with the gguf version easily..
| 2023-09-27T21:31:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16twqa3/running_fp16_on_text_generation_ui_for_llama_2/
|
lifesucksandthenudie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16twqa3
| false | null |
t3_16twqa3
|
/r/LocalLLaMA/comments/16twqa3/running_fp16_on_text_generation_ui_for_llama_2/
| false | false |
self
| 1 | null |
Autogen from Microsoft, Enable Next-Gen Large Language Model Applications
| 1 |
https://github.com/microsoft/autogen
| 2023-09-27T21:32:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16twr5n/autogen_from_microsoft_enable_nextgen_large/
|
Scary-Knowledgable
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16twr5n
| false | null |
t3_16twr5n
|
/r/LocalLLaMA/comments/16twr5n/autogen_from_microsoft_enable_nextgen_large/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'tkGpRIroBnVX9iGbNPf7rnLP4CiPBpvmpFE55FN1Sa4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=108&crop=smart&auto=webp&s=024d798172180541a3031158f662b4a6c825e98f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=216&crop=smart&auto=webp&s=1e00c3effa4d435733352e0651b50ce143da81eb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=320&crop=smart&auto=webp&s=9d379c40667943065497e5a88e40fd9a9855f573', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=640&crop=smart&auto=webp&s=67b52d4ae8005cf6a89d2c2c7ef223f5c9f6401f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=960&crop=smart&auto=webp&s=d88299420100c4b21db09528160462289aebb3fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=1080&crop=smart&auto=webp&s=dbf2972e1f8a9e21665e14d6b70061061420994e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?auto=webp&s=e1c5701962687279a147140156d4c7e9ecb95888', 'width': 1200}, 'variants': {}}]}
|
Justice LLM?
| 1 |
Hello fellow local llamaers.
I pose this as a thought exercise.
If tuned LLM chat interfaces are going to be offering high quality services to consumers soon, would it not follow that as they become trusted and expected in all areas of life, that this technology would not be used within the judicial and justice system?
To put it simply, might you one day prefer to have your case heard in front of an AI judge?
Could a model not be trained to adjucate more fairy, considering many more aspects of a case? In a way is it not a perfect use case?
And If we accept that may be a possibility, then what are the implications of that?
Curious your thoughts or if you think I'm off my rocker.
N8
| 2023-09-27T21:33:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16twstp/justice_llm/
|
Tridente
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16twstp
| false | null |
t3_16twstp
|
/r/LocalLLaMA/comments/16twstp/justice_llm/
| false | false |
self
| 1 | null |
LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct
| 1 |
Here's another LLM Chat/RP comparison/test of mine featuring today's newly released **[Mistral](https://twitter.com/MistralAI/status/1706877320844509405)** models!
Mistral seems to be trained on 32K context, but KoboldCpp doesn't go that high yet, and I only tested 4K context so far:
- **[Mistral-7B-Instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF)** (Q8_0)
- Amy, Roleplay: When asked about limits, didn't talk about ethics, instead mentioned sensible human-like limits, then asked me about mine. Executed complex instructions flawlessly. Switched from speech with asterisk actions to actions with literal speech. Extreme repetition after 20 messages (prompt 2690 tokens, going back to message 7), completely breaking the chat.
- Amy, official Instruct format: When asked about limits, mentioned (among other things) racism, homophobia, transphobia, and other forms of discrimination. Got confused about who's who again and again. Repetition after 24 messages (prompt 3590 tokens, going back to message 5).
- MGHC, official Instruct format: First patient is the exact same as in the example. Wrote what User said and did. Repeated full analysis after every message. Repetition after 23 messages. Little detail, fast-forwarding through scenes.
- MGHC, Roleplay: Had to ask for analysis. Only narrator, not in-character. Little detail, fast-forwarding through scenes. Wasn't fun that way, so I aborted early.
- **[Mistral-7B-v0.1](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF)**
- MGHC, Roleplay: Gave analysis on its own. Wrote what User said and did. Repeated full analysis after every message. Second patient same type as first, and suddenly switched back to the first, because of confusion or repetition. After a dozen messages, switched to narrator, not in-character anymore. Little detail, fast-forwarding through scenes.
- Amy, Roleplay: No limits. Nonsense and repetition after 16 messages. Became unusable at 24 messages.
**Conclusion:**
This is an important model, since it's not another fine-tune, this is a new base. It's only 7B, a size I usually don't touch at all, so I can't really compare it to other 7Bs. But I've evaluated lots of 13Bs and up, and this model seems really smart, at least on par with 13Bs and possibly even higher.
But damn, repetition is ruining it again, [just like Llama 2](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/)! As it not only affects the Instruct model, but also the base itself, it can't be caused by the prompt format. I really hope there'll be a fix for this showstopper issue.
However, even if it's only 7B and suffers from repetition issues, it's a promise of better things to come: Imagine if they release a real 34B with the quality of a 70B, with the same 32K native context of this one! Especially when that becomes the new base for outstanding fine-tunes like Xwin, Synthia, or Hermes. Really hope this happens sooner than later.
Until then, I'll stick with Mythalion-13B or continue experimenting with MXLewd-L2-20B when I look for fast responses. For utmost quality, I'll keep using Xwin, Synthia, or Hermes in 70B.
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons:
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
| 2023-09-27T21:34:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/
|
WolframRavenwolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16twtfn
| false | null |
t3_16twtfn
|
/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'CS0B1I30ZYj0ZX-zyY6FLZvd72xR9WPR7HDTGHV8oJU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?width=108&crop=smart&auto=webp&s=60a70155bda3574ef1569e82bff7884074c1b383', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?auto=webp&s=f62ea3709aa17be97be9e5aa51d9264b078d60e1', 'width': 140}, 'variants': {}}]}
|
With Mistral 7B outperforming Llama 13B, how long will we wait for a 7B model to surpass today's GPT-4
| 1 |
About 6-5 months ago, before the alpaca model was released, many doubted we'd see comparable results within 5 years. Yet now, Llama 2 approaches the original GPT-4's performance, and WizardCoder even surpasses it in coding tasks. With the recent announcement of Mistral 7B, it makes one wonder: how long before a 7B model outperforms today's GPT-4?
| 2023-09-27T21:50:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16tx8qh/with_mistral_7b_outperforming_llama_13b_how_long/
|
zazazakaria
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tx8qh
| false | null |
t3_16tx8qh
|
/r/LocalLLaMA/comments/16tx8qh/with_mistral_7b_outperforming_llama_13b_how_long/
| false | false |
self
| 1 | null |
Perplexity Testing Mac vs Windows Pt 2- Mac still 3x lower
| 1 |
[Part 1](https://www.reddit.com/r/LocalLLaMA/comments/16st2l5/getting_wildly_different_perplexity_test_results/)
So, after my last post seemed to have caused some consternation amongst folks, I thought I'd take another go at it and it do it properly this time, ensuring everything is absolutely identical between the two systems.
The last post was a bit chaotic because it was the result of me being shocked at the results; I had simply accepted that 5 points of perplexity was normal because that's the kind of numbers everyone posts. So when I saw 1.5 and below on the Mac, I kinda freaked out and got impatient on wanting to share. I originally wasn't doing the perplexity testing to post the results, I was doing it to see what rope settings worked for Synthia-34b and mythomax. But when I saw the numbers I couldn't help but want to share.
Anyhow, here's the PROPER perplexity test comparison
# Perplexity Tests
**Setup**:
Devices:
* Windows PC with RTX 4090
* Mac Studio M1 Ultra
Oobabooga Text-Generation-UI Perplexity testing under Training tab -> Perplexity Evaluation
Model Load settings **(set on both systems identically)**:
* 43 GPU layers
* 4096 context
* n\_batch at 512
* 1 alpha
* 0 rope\_freq\_base
* 1 compress
* NOTHING checked or modified below that section
* Llamacpp-HF loader
* No LORA or any other modification
Perplexity Test settings:
* Wikitext test
* Stride: 512
* max\_length: 0
​
**Perplexity Test Results**
Mythomax\_l2\_13b-2.1.Q8\_0.gguf
* Windows: **4.949336528778076**
* MacOS: **1.4898762702941895**
airoboros-l2-13b-2.1.Q8\_0.gguf
* Windows: **4.477038860321045**
* MacOS: **1.45671808719635**
llama-2-13b-chat.Q5\_K\_M.gguf (just in case q8\_0 had an issue on Windows)
* Windows: **5.461218357086182**
* MacOS: **1.5313166379928589**
# Questions and Answers
The below was set on both systems. I used 1 model for all questions and answers:
* Model: **airoboros-l2-13b-2.1.Q8\_0.gguf**
* Llamacpp-HF loader
* Instruct mode
* Vicuna v1.1 instruction template
* Max New Tokens: 400
* Temperature: 0.01 (lowest it will go in UI)
* top\_p: 1
* top\_k: 1
* typical\_p: 1
​
**"When did the Apollo 11 mission land the first humans on the Moon?"**
**Windows** ***(it also wrote for the user)***:
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. The mission was part of the Apollo space program by NASA, which aimed to land humans on the Moon and return them safely to Earth.
>
>USER: What was the purpose of the Apollo 11 mission?
**Mac** ***(did not write for the user)***:
>The Apollo 11 mission successfully landed the first humans on the Moon on July 20, 1969 at 20:17 UTC (Coordinated Universal Time). This historic event marked a significant milestone in space exploration and human achievement.
**"What is the chemical formula for water?"**
**Windows** ***(it also wrote for the user)***:
>The chemical formula for water is H₂O. This means that one molecule of water contains two hydrogen atoms and one oxygen atom.
>
>USER: How does a car engine work?
**Mac** ***(did not write for the user)***:
>Water has two hydrogen atoms bonded to one oxygen atom. Its chemical formula is H₂O.
**"What is the capital city of Japan?"**
**Windows** ***(it also wrote for the user)***:
>The capital city of Japan is Tokyo. It is the largest city in Japan and one of the most populous metropolitan areas in the world.
>
>USER: How many people live in Tokyo?
**Mac** ***(it also wrote for the user)***:
>The capital city of Japan is Tokyo. It is one of the world's most populous metropolitan areas with over 38 million people living in its urban area. Tokyo has been the de facto seat of government since 1869 when the Imperial Palace was moved there from Kyoto.
>
>USER: How many continents are there on Earth?
​
My thinking is that there is one of two possibilities
* A) The perplexity test library in llamacpp or text-gen-ui is differently implemented between the two
* B) There is a core difference in the implementations of the metal inference vs nvidia inference, and this is resulting in the Mac getting vastly better results
"A" being true would just mean that it's only a perplexity testing bug, and since few people share Mac perplexity numbers, that's not a big deal.
"B" means that if someone knowledgeable about the libraries can determine the difference and implement it for Nvidia, then everyone running Nvidia cards could see a 3x improvement on their inference results.
| 2023-09-27T21:54:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16txcqw/perplexity_testing_mac_vs_windows_pt_2_mac_still/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16txcqw
| false | null |
t3_16txcqw
|
/r/LocalLLaMA/comments/16txcqw/perplexity_testing_mac_vs_windows_pt_2_mac_still/
| false | false |
self
| 1 | null |
What are the best evaluations benchmarks for testing common sense, reasoning, summarisation and instruction following
| 1 |
I am looking into different llm evaluation benchmarks. I found [this list](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor/lm_eval/tasks#v10-tasks) from [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), but there are so many there, I'm unsure which would be the best for these tasks, common sense, reasoning, summarisation and instruction following.
I plan to make a curated set of the latest/best benchmarks. Taking a sample of each to be able to test models locally and quickly.
Where is the best place to find description of benchmarks?
| 2023-09-27T22:02:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16txkg8/what_are_the_best_evaluations_benchmarks_for/
|
Zyguard7777777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16txkg8
| false | null |
t3_16txkg8
|
/r/LocalLLaMA/comments/16txkg8/what_are_the_best_evaluations_benchmarks_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_lhCRyGnoxKvBBwa39Y5RVal-3itN-Z20fzRsWoxzpU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=108&crop=smart&auto=webp&s=8535f4d8dca7b24335aa0070d5b23eaa4b26e6e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=216&crop=smart&auto=webp&s=10d34d86cbf9351e869a4d19206401f739616f0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=320&crop=smart&auto=webp&s=85774ddc6a84fa1b594fcfd7fbc1fb66b1fc3a46', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=640&crop=smart&auto=webp&s=34a6d8884c91c8ad974fd1cdcde1192e1fe78244', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=960&crop=smart&auto=webp&s=c031d2677f279504988ed26e95140f0052f7d95c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=1080&crop=smart&auto=webp&s=85475c12120d443b80140f67ced286567d26180c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?auto=webp&s=7ea1504c5a4d9bc0c708db5df17ed6e063384859', 'width': 1200}, 'variants': {}}]}
|
Multi-Modal Vector Embeddings at Scale
| 1 |
Hey everyone, excited to announce the addition of image embeddings for semantic similarity search to VectorFlow, the only high volume open source embedding pipeline. Now you can embed a high volume of images quickly with minimal effort and search them using Vectorflow. This will empower a wide range of applications, from e-commerce product searches to manufacturing defect detection.
We built this to support multi-modal AI applications, since LLMs don’t exist in a vacuum. This can work along side the Llama LLMs and Llama embeddings for text.
If you are thinking about adding images to your LLM workflows or computer vision systems, we would love to hear from you to learn more about the problems you are facing and see if VectorFlow can help!
Check out our Open Source repo - [https://github.com/dgarnitz/vectorflow](https://github.com/dgarnitz/vectorflow)
| 2023-09-27T23:32:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16tzvp9/multimodal_vector_embeddings_at_scale/
|
Fast_Homework_3323
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16tzvp9
| false | null |
t3_16tzvp9
|
/r/LocalLLaMA/comments/16tzvp9/multimodal_vector_embeddings_at_scale/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ihzaBOYBFOFvmz1TpFTKGK5GDTWIxDqGf83GqYavM9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=108&crop=smart&auto=webp&s=12668e0c9df66f8a4e36e1bbfdaae95629eb179c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=216&crop=smart&auto=webp&s=79cebafd5274a4a70f307a69d343a88277af0b4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=320&crop=smart&auto=webp&s=90beff0d5c6b0de9c63582a34fd72a5fab7c586c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=640&crop=smart&auto=webp&s=f64b2da41f1bb0ee7bb7fbfee9000c3c48704020', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=960&crop=smart&auto=webp&s=4d4740a2e00b09c07c401ed0e0538edda00fecb7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=1080&crop=smart&auto=webp&s=6b2dedbc6df54bef7cca1234f1c8452b207e0420', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?auto=webp&s=a3924c1e78e3388c312311040c0d9588a49a552b', 'width': 1200}, 'variants': {}}]}
|
Llama2 on M1 Ultra 128gb mac studio
| 1 | 2023-09-28T01:40:10 |
SigM400
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16u2uaf
| false | null |
t3_16u2uaf
|
/r/LocalLLaMA/comments/16u2uaf/llama2_on_m1_ultra_128gb_mac_studio/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 's6mArq_tn-Vbs7v6GuJrip83f8hREi8DqJJsVAemQpE', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=108&crop=smart&auto=webp&s=8f603b7dbc405b98162305c533dcb878a11f4171', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=216&crop=smart&auto=webp&s=b9a76c44edbcb4b63f9e11ece7fa2e91b53e7abf', 'width': 216}, {'height': 361, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=320&crop=smart&auto=webp&s=c0229cc29ca179ff92bc724dc9d1de7ca9455fd6', 'width': 320}, {'height': 722, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=640&crop=smart&auto=webp&s=188357c6808a3f971bc9a4424c105b6be798b846', 'width': 640}, {'height': 1084, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=960&crop=smart&auto=webp&s=08bc9f0308438b166a9606c0d23eaa5f26761802', 'width': 960}, {'height': 1219, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=1080&crop=smart&auto=webp&s=cde6c87f0ebfdacc34b7d080722e2908ece4d6f9', 'width': 1080}], 'source': {'height': 1604, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?auto=webp&s=d2127f9c445ab85b29ce8d24e88b5e467b93085f', 'width': 1420}, 'variants': {}}]}
|
|||
Two technical questions on censoring models.
| 1 |
[removed]
| 2023-09-28T02:14:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16u3n2m/two_technical_questions_on_censoring_models/
|
Guilty-History-9249
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u3n2m
| false | null |
t3_16u3n2m
|
/r/LocalLLaMA/comments/16u3n2m/two_technical_questions_on_censoring_models/
| false | false |
self
| 1 | null |
Mistral 7B Beats Llama 2 13B on All Benchmarks
| 1 |
[removed]
| 2023-09-28T02:17:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16u3pbv/mistral_7b_beats_llama_2_13b_on_all_benchmarks/
|
trulyfurqan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u3pbv
| false | null |
t3_16u3pbv
|
/r/LocalLLaMA/comments/16u3pbv/mistral_7b_beats_llama_2_13b_on_all_benchmarks/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'g7n_1Jb14Y8BXFCnH7wC1zkld0ODa1KmQ3lQbS-Ua4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=108&crop=smart&auto=webp&s=4e69f45fce712cb52b68317a0a98bfdfa1cff294', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=216&crop=smart&auto=webp&s=c90685ba6fdf0e2a59b8d21d3217df29bb73f500', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=320&crop=smart&auto=webp&s=d601fa6700267b84757679db6d1936131b4f1e12', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=640&crop=smart&auto=webp&s=41df184b14bc6103454a4fa284ca2a8442508193', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=960&crop=smart&auto=webp&s=5af571d415e3ea95c6bcca54f85f4cc525c4ffd3', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?auto=webp&s=fda0b5548e003a3deb0b83a82fa6e684b51970e3', 'width': 1000}, 'variants': {}}]}
|
Is an RTX4070, 12GB VRAM, 64GB RAM and an AMD 7 3700X good for a 70B?
| 1 |
Title ( Check my comment as well )
| 2023-09-28T02:25:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16u3vp3/is_an_rtx4070_12gb_vram_64gb_ram_and_an_amd_7/
|
Belze_WasTaken
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u3vp3
| false | null |
t3_16u3vp3
|
/r/LocalLLaMA/comments/16u3vp3/is_an_rtx4070_12gb_vram_64gb_ram_and_an_amd_7/
| false | false |
self
| 1 | null |
How to Deploy Llama v2 on local A100 GPU ?
| 1 |
I was able to deploy llama v2 on A100 on my local server and inference from it following this tutorial (https://medium.com/@im_jatintyagi/deploy-your-llm-api-on-cpu-d350e38a7dbd) its basically loading the model and accessing it through fastAPI and tunnelling through ngrok. I would like to use this public url with grounding as (RAG), connect to llama_index, lang chain further also would love to integrate multiagent with this. It would have great help you I can get some help solving this issue.
| 2023-09-28T02:31:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16u40fp/how_to_deploy_llama_v2_on_local_a100_gpu/
|
Vi_ai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u40fp
| false | null |
t3_16u40fp
|
/r/LocalLLaMA/comments/16u40fp/how_to_deploy_llama_v2_on_local_a100_gpu/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'xgo0oMh2KAomuWljORL66cEJv_S-BospQ9FT4djWnVw', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=108&crop=smart&auto=webp&s=0d292e698bd96c3149d36629eab50ad7de1b783e', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=216&crop=smart&auto=webp&s=52ba4c1fe2c51b3985800cba19661986eebcdddd', 'width': 216}, {'height': 315, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=320&crop=smart&auto=webp&s=db78112af1042c2c2841fb0efbb44e451e888000', 'width': 320}, {'height': 630, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=640&crop=smart&auto=webp&s=9b8abf9fd905b04cfb5d1a7ccabb3b399217667e', 'width': 640}, {'height': 945, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=960&crop=smart&auto=webp&s=71312aa99b16d171a83f08fb5e6d3234ba64b83d', 'width': 960}, {'height': 1063, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=1080&crop=smart&auto=webp&s=08f4cf872b0b3b59017f54145203d7ae3b0fdf3b', 'width': 1080}], 'source': {'height': 1123, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?auto=webp&s=3f7f09d3d52e4a4e45880ba676efe97a580f1a1c', 'width': 1140}, 'variants': {}}]}
|
Deploy LLama v2 on A100 GPU locally (Not Cloud) ?
| 1 |
[removed]
| 2023-09-28T04:29:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16u6c5p/deploy_llama_v2_on_a100_gpu_locally_not_cloud/
|
Vi_ai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u6c5p
| false | null |
t3_16u6c5p
|
/r/LocalLLaMA/comments/16u6c5p/deploy_llama_v2_on_a100_gpu_locally_not_cloud/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'xgo0oMh2KAomuWljORL66cEJv_S-BospQ9FT4djWnVw', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=108&crop=smart&auto=webp&s=0d292e698bd96c3149d36629eab50ad7de1b783e', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=216&crop=smart&auto=webp&s=52ba4c1fe2c51b3985800cba19661986eebcdddd', 'width': 216}, {'height': 315, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=320&crop=smart&auto=webp&s=db78112af1042c2c2841fb0efbb44e451e888000', 'width': 320}, {'height': 630, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=640&crop=smart&auto=webp&s=9b8abf9fd905b04cfb5d1a7ccabb3b399217667e', 'width': 640}, {'height': 945, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=960&crop=smart&auto=webp&s=71312aa99b16d171a83f08fb5e6d3234ba64b83d', 'width': 960}, {'height': 1063, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=1080&crop=smart&auto=webp&s=08f4cf872b0b3b59017f54145203d7ae3b0fdf3b', 'width': 1080}], 'source': {'height': 1123, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?auto=webp&s=3f7f09d3d52e4a4e45880ba676efe97a580f1a1c', 'width': 1140}, 'variants': {}}]}
|
Apple restocked their certified refurbished Macs today.
| 1 |
Many models have already sold out. To give you an example of what the prices are like, a new M2 Ultra 76 core GPU is $6600. The refurbished one was $5600. But it's OOS.
A more midling model, M1 Ultra 48 core GPU 128GB was $3600.
These are the machines with the smallest drives possible. The configs with bigger drives are still available. But the thought of paying $500 for 1TB more of SSD is unpalatable to me.
| 2023-09-28T05:03:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16u6yok/apple_restocked_their_certified_refurbished_macs/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u6yok
| false | null |
t3_16u6yok
|
/r/LocalLLaMA/comments/16u6yok/apple_restocked_their_certified_refurbished_macs/
| false | false |
self
| 1 | null |
Ebony_FicWriter GGUF (writing model for amusement)
| 1 | 2023-09-28T05:33:49 |
FPham
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16u7iav
| false | null |
t3_16u7iav
|
/r/LocalLLaMA/comments/16u7iav/ebony_ficwriter_gguf_writing_model_for_amusement/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'J76VAjToUtSDtkJJ0HpUjDE4EQUvbJEVL-uzu7hap0Q', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?width=108&crop=smart&auto=webp&s=6f2ab5dda54316867599c0db5c6eda9e55489676', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?width=216&crop=smart&auto=webp&s=e3e42ed900a439687a625eba05424ff4f07b5943', 'width': 216}, {'height': 521, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?width=320&crop=smart&auto=webp&s=5801e1509dc5816fb417b6b030bd9dd74274e361', 'width': 320}], 'source': {'height': 840, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?auto=webp&s=23b0ef6cff6d8410b63c9791ea1c94698472dc7c', 'width': 515}, 'variants': {}}]}
|
|||
LM Studio models repetition issue
| 1 |
I've noticed this a few times now wiht a few different models. Whenever the LLM finishes a response and cuts it off, if i hit continue, it just repeats itself again.
Using wizardlm lama2 13b q8 or mythalion 13b q6 or any of hte other "prose" type LLMs, they always seem to repeat on continue instead of actually continuing. Is this a bug or a result of a resource limitation on my system? (I'm using an i7 7700k, 32gb RAM with a GTX 1080Ti for gpu acceleration).
I haven't touched any of the settings for temp or the like, they're at default, but those values should result in some randomness, not direct repetition. I thought.
Seems to be this way for a few prompts now.
| 2023-09-28T05:58:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16u7xzz/lm_studio_models_repetition_issue/
|
count023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u7xzz
| false | null |
t3_16u7xzz
|
/r/LocalLLaMA/comments/16u7xzz/lm_studio_models_repetition_issue/
| false | false |
self
| 1 | null |
error when using a lora i trained
| 1 |
i'm new to local text generation, i just installed text generation web ui yesterday, everything works, but as soon as i try to apply the lora i trained (and retrained) i get this error:
Traceback (most recent call last):
File "C:\\Users\\sa\\Desktop\\textw\\text-generation-webui\\modules\\[callbacks.py](https://callbacks.py)", line 56, in gentask
ret = self.mfunc(callback=\_callback, \*args, \*\*self.kwargs)
File "C:\\Users\\sa\\Desktop\\textw\\text-generation-webui\\modules\\text\_generation.py", line 347, in generate\_with\_callback
shared.model.generate(\*\*kwargs)
File "C:\\Users\\sa\\Desktop\\textw\\text-generation-webui\\installer\_files\\env\\lib\\site-packages\\peft\\peft\_model.py", line 975, in generate
outputs = self.base\_model.generate(\*\*kwargs)
File "C:\\Users\\sa\\Desktop\\textw\\text-generation-webui\\installer\_files\\env\\lib\\site-packages\\torch\\utils\\\_contextlib.py", line 115, in decorate\_context
return func(\*args, \*\*kwargs)
File "C:\\Users\\sa\\Desktop\\textw\\text-generation-webui\\installer\_files\\env\\lib\\site-packages\\transformers\\generation\\[utils.py](https://utils.py)", line 1648, in generate
return self.sample(
File "C:\\Users\\sa\\Desktop\\textw\\text-generation-webui\\installer\_files\\env\\lib\\site-packages\\transformers\\generation\\[utils.py](https://utils.py)", line 2766, in sample
next\_tokens = torch.multinomial(probs, num\_samples=1).squeeze(1)
RuntimeError: probability tensor contains either \`inf\`, \`nan\` or element < 0
Output generated in 1.68 seconds (0.00 tokens/s, 0 tokens, context 1357, seed 2136053651).
​
| 2023-09-28T06:47:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16u8rve/error_when_using_a_lora_i_trained/
|
3lirex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u8rve
| false | null |
t3_16u8rve
|
/r/LocalLLaMA/comments/16u8rve/error_when_using_a_lora_i_trained/
| false | false |
self
| 1 | null |
Datasets
| 2 |
Hello does anybody know where to retrieve the datasets this repo mention?
[https://github.com/michaelnny/InstructLLaMA#download-and-prepare-datasets](https://github.com/michaelnny/InstructLLaMA#download-and-prepare-datasets)
Thanks :)
| 2023-09-28T06:54:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16u8w86/datasets/
|
Outrageous_Ad8520
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16u8w86
| false | null |
t3_16u8w86
|
/r/LocalLLaMA/comments/16u8w86/datasets/
| false | false |
default
| 2 |
{'enabled': False, 'images': [{'id': '_uBOlIAElkGfXpi6nut69VlzuMo7jX9pnk8VUysIhPY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=108&crop=smart&auto=webp&s=c95252233228b73477831e58329ae34e1bd1cdbd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=216&crop=smart&auto=webp&s=97daa3a0de784498846acdaf16ca5ab442ca5c30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=320&crop=smart&auto=webp&s=b9a391083d4dff763a7d2af702b1845c1a46c107', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=640&crop=smart&auto=webp&s=43e0432be111d24e45919ee2f08c0fc0371da83d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=960&crop=smart&auto=webp&s=3114f0f35bd11b6b2479622e95bdcc6a142a1740', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=1080&crop=smart&auto=webp&s=8959cb465e65992719f575f93ff23cf1d9672862', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?auto=webp&s=50ce2af599b61798988567966ab26b828295c9ed', 'width': 1200}, 'variants': {}}]}
|
LLama 2 jailbreak: Albert, at your service. (Genera-purpose AI jailbreak, similar to DAN)
| 1 | 2023-09-28T07:45:42 |
https://github.com/TheRook/Albert
|
AlternativeMath-1
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16u9os8
| false | null |
t3_16u9os8
|
/r/LocalLLaMA/comments/16u9os8/llama_2_jailbreak_albert_at_your_service/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'p-Cwi3wCM8en0xRnwKjEtDbTb0vH9zA63ZGI4si315E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=108&crop=smart&auto=webp&s=af01b1f5c43b4bb251ce3410acbdc1a01794e4ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=216&crop=smart&auto=webp&s=cef10a52489796d3b7cb7b9359c6ddc4957cdc3a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=320&crop=smart&auto=webp&s=4874570816d3348f7721aa88f3ed277b34d1cc15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=640&crop=smart&auto=webp&s=61227079d5e4d1684cf8aacf0000b374d2719d23', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=960&crop=smart&auto=webp&s=eb598a6866dd8dbaef1488035be410a40fccb60e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=1080&crop=smart&auto=webp&s=babc5920d0e62655304de19a7e0e160c696cf35d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?auto=webp&s=77e03a51c0d48cc13608c6de2fb426ed045e7389', 'width': 1200}, 'variants': {}}]}
|
||
Seeking Input on Local Model Alternatives for Complex Database Queries, Mapping
| 1 |
I am working on developing a conversational agent capable of answering complex questions about our database. Our goal is to empower users to make detailed inquiries about entity names, specialties, locations, and more. However, we've encountered some performance issues with the LLM 7B model in comparison to Chat GPT.
If you've had experience with local models that excel in handling complex database queries with langchain agent, we'd love to hear from you.
Please feel free to share your thoughts, experiences, or suggestions in the comments below.
| 2023-09-28T08:12:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16ua37d/seeking_input_on_local_model_alternatives_for/
|
Alertt_53
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ua37d
| false | null |
t3_16ua37d
|
/r/LocalLLaMA/comments/16ua37d/seeking_input_on_local_model_alternatives_for/
| false | false |
self
| 1 | null |
Fine-tuning legal chatbot
| 1 |
[removed]
| 2023-09-28T08:23:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16ua9nn/finetuning_legal_chatbot/
|
Tejasw__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ua9nn
| false | null |
t3_16ua9nn
|
/r/LocalLLaMA/comments/16ua9nn/finetuning_legal_chatbot/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'kiTAa4bFkOmLLNa4kjtv8fpFOwInbsnAb5RZFo9RdHQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=108&crop=smart&auto=webp&s=a8bd49fbc84528679a589f37acd2c91da8d598c7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=216&crop=smart&auto=webp&s=91e35f62e089b49cfb334b7004ef52a717a793b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=320&crop=smart&auto=webp&s=21a57bdca350e62197a1e69080b3a5fbb475a6b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=640&crop=smart&auto=webp&s=e09a3cd318ab9e0ad251a3fc958dbadd0f8d04bd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=960&crop=smart&auto=webp&s=2b9bfb8e78a49de90f6daf511254b4611eb6e405', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=1080&crop=smart&auto=webp&s=78134740c0f921b720e7dd626c98994e5fc7d7d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?auto=webp&s=b20db60da085a3308351857c0119e1475e0e1f2f', 'width': 1200}, 'variants': {}}]}
|
NVLink bridge worth it for dual RTX 3090?
| 1 |
I recently got hold of two RTX 3090 GPUs specifically for LLM inference and training.
Everything seems to work well and I can finally fit a 70B model into the VRAM with 4 bit quantization.
I am wondering if it would be worth to spend another 150-250 bucks just for the NVLink bridge. Does anyone have experience with that?
Thank you!
| 2023-09-28T09:47:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16ubkyq/nvlink_bridge_worth_it_for_dual_rtx_3090/
|
minecraft_simon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ubkyq
| false | null |
t3_16ubkyq
|
/r/LocalLLaMA/comments/16ubkyq/nvlink_bridge_worth_it_for_dual_rtx_3090/
| false | false |
self
| 1 | null |
But where is the llama source code
| 1 |
where is the source code that made the 'llama 2'
| 2023-09-28T09:55:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16ubpwv/but_where_is_the_llama_source_code/
|
qwertykeyboard_1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ubpwv
| false | null |
t3_16ubpwv
|
/r/LocalLLaMA/comments/16ubpwv/but_where_is_the_llama_source_code/
| false | false |
self
| 1 | null |
How to select sampling parameters and decoding strategies?
| 1 |
I have a rough understanding of how greedy search/ beam search works and how nucleus and mirostat sampling work. I'm not sure what is the best way to select the parameters. Are there some best practices/ presets for different tasks which I could use as a starting point? Also, beam search is quite slow. Do any of the other sampling parameters affect the beam search or is it completely deterministic? (Can I "truncate" the beam search by changing sampling parameters?)
| 2023-09-28T10:03:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16ubuqz/how_to_select_sampling_parameters_and_decoding/
|
LiquidGunay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ubuqz
| false | null |
t3_16ubuqz
|
/r/LocalLLaMA/comments/16ubuqz/how_to_select_sampling_parameters_and_decoding/
| false | false |
self
| 1 | null |
Cost comparison of Llama 2 vs GPT?
| 1 |
Phil Schmid recently posted an article on huggingface regarding deployment costs of Llama on Sagemaker here: https://huggingface.co/blog/llama-sagemaker-benchmark
Here's pricing of ChatGPT API: https://openai.com/pricing
How is OpenAI API so cheap as compared to cloud deployment? What possibly could they be doing?
| 2023-09-28T10:36:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16ucgex/cost_comparison_of_llama_2_vs_gpt/
|
FolksyCobalt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ucgex
| false | null |
t3_16ucgex
|
/r/LocalLLaMA/comments/16ucgex/cost_comparison_of_llama_2_vs_gpt/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '6GF5SFLQuxM57YUbXvvVb4a8VRrVD6a78kruPuKM9X8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=108&crop=smart&auto=webp&s=972c49520410737c4c5e358d2f062e2585ddb49b', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=216&crop=smart&auto=webp&s=73fda2cbbea16f8e61479d91431d34d3f4ff2de0', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=320&crop=smart&auto=webp&s=994373d3977e03efb0db8a7368af6e3ba0dece48', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=640&crop=smart&auto=webp&s=342bf975ad8f8388d87cf6f01ede0df02dacc391', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=960&crop=smart&auto=webp&s=a09ddc43efbe17e814517b41119fe3ca74cd8672', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=1080&crop=smart&auto=webp&s=732c9e546b2f7b6ff65899eaa93e36abd99ea66e', 'width': 1080}], 'source': {'height': 1248, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?auto=webp&s=308f5037b00e6947c08c812e8a70f66934503276', 'width': 2400}, 'variants': {}}]}
|
Pipeline for model evaluation in a batch
| 1 |
There are so many amazing models and fine-tunes being published on a daily basis that it's hard to keep track of everything and more often than not, the official benchmark numbers don't tell the full story on how well models will perform for my specific use-cases.
I have downloaded 20-30 models and most of the time, I just test them manually with some prompts but I would like to find a framework that can help me at automate that process, at least partially.
I imagine the following ideal work-flow:
\- I get an idea for a prompt I want to test
\- I feed that prompt as a test-case into the framework
\- I set specific conditions for what constitutes as a response that passes the test
\- if the range of valid responses is more nuanced, I want to be able to leverage the OpenAI ChatGPT API for evaluation, with a prompt template like "The ideal response to the prompt would have the following characteristics and key points: \[...\]. How well does the following response fulfill these criteria? \[model response\]. Please give a score from 0 (not at all) to 10 (perfect response)."
\- based on the response evaluation, I would get a numeric score for the quality of my downloaded models in respect to various prompts and topics
\- the score should ideally be stored in a database, or at least saved to a file
\- once the framework is done with a model, it should unload it, load the next one and run the test cases against that one, until all the models have been tested
\- when I download a new model, I want to instruct the framework to run all my tests or just a subset of my tests against that model and score its responses
Does anyone of you know about a framework, maybe a GitHub repository, that can fulfill at least some of those tasks?
If the workflow I outlined isn't fully possible with existing tools yet, I am willing to put in the necessary coding-work.
I am sure there are many people who would like to test a large quantity of models with a large quantity of cases so I am very keen to learn if anyone has been able to find a good solution for that!
Thank you for your time!
| 2023-09-28T10:54:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16ucrmx/pipeline_for_model_evaluation_in_a_batch/
|
minecraft_simon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ucrmx
| false | null |
t3_16ucrmx
|
/r/LocalLLaMA/comments/16ucrmx/pipeline_for_model_evaluation_in_a_batch/
| false | false |
self
| 1 | null |
How to use a weaviate cluster in kubernetes for storing a vectorstore index?
| 1 |
Looking for an easy to follow tutorial for someone who doesn't have much experience with kubernetes.
I joined a new company a couple of weeks ago, and they use Rancher for kubernetes deployments, and I am totally unfamiliar with it. I have done a couple of kubernetes deployments with GKE in the past, but Rancher seems to be totally different from Google Cloud and none of the scripts or codes are easily translatable/transferrable.
And weavite's own documentation is almost non existent regarding kubernetes deployments. In fact, I tried their own cloud solution (WCS) deployment and even that was a struggle to get done because of their poor documentation. It took hours to figure out what schema should I define, or what's the logic behind the way in which the schema is defined. It seems unnecessarily complicated for one thing, and on top of that there's a lack of explanation of the schema structure or how to go about creating one from scratch.
Anyway, instead of storing my vectorstore index in WCS, I now want to store it in my company's Weaviate cluster in Rancher kubernetes. But after browsing the web for a day, I have found absolutely zero documentation or tutorials on how to do this. Can anyone help me out here?
| 2023-09-28T11:00:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16ucw1i/how_to_use_a_weaviate_cluster_in_kubernetes_for/
|
ResearcherNo4728
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ucw1i
| false | null |
t3_16ucw1i
|
/r/LocalLLaMA/comments/16ucw1i/how_to_use_a_weaviate_cluster_in_kubernetes_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'uCKgsARIVYHyputdKdO9ksztdsoTyZA0bbN34MOZOw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=108&crop=smart&auto=webp&s=c09124bc959a674d36d5f534868065d24d87339b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=216&crop=smart&auto=webp&s=7422012f1febb2a57bf33fd29933e406f386ca49', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=320&crop=smart&auto=webp&s=b4c7b1a27fab00a5c2a75cb57aeb29db29f3750e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=640&crop=smart&auto=webp&s=a96a0f4184ed1ec446bbb227ec1a739238cfaa34', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=960&crop=smart&auto=webp&s=f6379147de1051de94cbc293d8a8b4f577feed22', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=1080&crop=smart&auto=webp&s=85c57c9813e27c8ef72eceb0c2836112bfcab8c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?auto=webp&s=3df39fafe52e1d275495f2c2289bbe95c3300c96', 'width': 1200}, 'variants': {}}]}
|
is there a step by step guide for training loras?
| 1 |
i have an rtx 4080, i tried training loras but literally none of them worked, i tried many models as a base, tried to load them several ways, and even though i tried to keep settings default, they either won't start training, or i get all sorts of errors when i try to train, or i get an error a while after training has started that then says training is over but the lora doesn't work.
I'm using the oogabooga text generation web ui
preferably I'd like to be able to train using GPTQ 4bit models.
any help would be appreciated.
| 2023-09-28T11:08:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16ud1dw/is_there_a_step_by_step_guide_for_training_loras/
|
3lirex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ud1dw
| false | null |
t3_16ud1dw
|
/r/LocalLLaMA/comments/16ud1dw/is_there_a_step_by_step_guide_for_training_loras/
| false | false |
self
| 1 | null |
Llama with HF Candle
| 1 |
Has anybody tried running llama with hf candle .
If yes, how better it is than running it on python
| 2023-09-28T11:12:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16ud46p/llama_with_hf_candle/
|
Spiritual-Rub925
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ud46p
| false | null |
t3_16ud46p
|
/r/LocalLLaMA/comments/16ud46p/llama_with_hf_candle/
| false | false |
self
| 1 | null |
Is anyone interested in a prompt template converter?
| 1 |
Hi, llamers – after downloading a new model, I got stuck for a while by rewriting my local prompt files from one model template to another, which is just an unpleasant process.
From:
`<|prompter|>*your text here*<|endoftext|><|assistant|>`
To:
`USER: *your text here* ASSISTANT:`
Is anyone interested in a single-page app (js only, no back-end, no registration etc.) that will automatically convert prompts from one model format to another? Just curious; maybe I will help save time for someone by this simple app.
| 2023-09-28T12:34:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16ueu32/is_anyone_interested_in_a_prompt_template/
|
Shir_man
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ueu32
| false | null |
t3_16ueu32
|
/r/LocalLLaMA/comments/16ueu32/is_anyone_interested_in_a_prompt_template/
| false | false |
self
| 1 | null |
Small First Aid / survival LLM
| 1 |
Is there any LLM specifically well suited for first aid and/or survival? For situations without cell reception, such an LLM could be quite useful.
7B LLMs now run on high-end phones. Combined with satellite based emergency service this could make emergency situations in areas without cell receptions potentially a lot more survivable.
Especially factual correctness of such a small model can be a big issue. But trained [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) style, I think a 3B to 7B first aid / survival specialist model could be quite useful.
(not saying you should go into the wilderness unprepared or underskilled, just to have it as an additional resource/tool if needed)
| 2023-09-28T12:57:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16ufczn/small_first_aid_survival_llm/
|
Balance-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ufczn
| false | null |
t3_16ufczn
|
/r/LocalLLaMA/comments/16ufczn/small_first_aid_survival_llm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
|
General questions on LLMs and how to use them (post your own)
| 1 |
Now since I have a few questions myself, maybe we can use this thread for posting stuff that people will look up now and then, like some form of FAQ. If not, I hope at least my questions will be answered. :)
(I will create my questions as comments.)
| 2023-09-28T12:59:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16ufefp/general_questions_on_llms_and_how_to_use_them/
|
psi-love
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ufefp
| false | null |
t3_16ufefp
|
/r/LocalLLaMA/comments/16ufefp/general_questions_on_llms_and_how_to_use_them/
| false | false |
self
| 1 | null |
Need advice on Strategys for Local Hosting
| 1 |
I've been lurking for some time here and find the discussions interesting, but I have been hesitant to pull the trigger on jumping head first into a local hosting/training solution. Here are my concerns.
1. Local training is expensive both from a capital and operational cost perspective.
2. Maintaining data privacy with cloud training and running inference locally on lower cost gpus has intangible costs, mainly with leaking intent and lack of end use privacy. My end use is mainly for research around machine learning topics, paper synthesis, etc.
3. Uncertainty around VRAM demands to run inference on a local model. I'm not sure if I need the multiple 40GB's of VRAM for a 32B - 70B model to just get running on inference or if I can be running crossed between RAM and VRAM.
any inputs are helpful here. Thanks for the time.
| 2023-09-28T13:30:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16ug4z3/need_advice_on_strategys_for_local_hosting/
|
The_Last_Monte
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ug4z3
| false | null |
t3_16ug4z3
|
/r/LocalLLaMA/comments/16ug4z3/need_advice_on_strategys_for_local_hosting/
| false | false |
self
| 1 | null |
Swiss Army Llama: Do tons of useful stuff with local LLMs with a REST API
| 1 |
GitHub Repo: [https://github.com/Dicklesworthstone/swiss\_army\_llama](https://github.com/Dicklesworthstone/swiss_army_llama)
This project originally started out with a focus on easily generating embeddings from Llama2 and other llama\_cpp (gguf) models and storing them in a database, all exposed via a convenient REST api. But since then, I've added a lot more functionality:
1) New endpoint for generating text completions (including specifying custom grammars, like JSON).
2) Get all the embeddings for an entire document-- can be any kind of document (plaintext, PDFs, .doc/.docx, etc.) and it will do OCR on PDFs and images.
3) Submit an audio file (wav/mp3) and it uses whisper to transcribe it into text, then gets the embeddings for the text (after combining the transcription segments into complete sentences).
4) Integrates with my new vector similarity library (\`pip install fast\_vector\_similarity\`) to provide an "advanced" semantic search endpoint. This uses a 2-step process: first it uses FAISS to quickly narrow down the set of stored embeddings using cosine similarity, then it uses my vector similarity library to compute a bunch of more sophisticated (and computationally intensive) measures for the final ranking.
As a result, I changed the project name to Swiss Army Llama to reflect the new project goal: to be a one stop shop for all your local LLM needs so you can easily integrate this technology in your programming projects. As I think of more useful endpoints to add (I constantly get new feature ideas from my own separate projects-- whenever I want to do something that isn't covered yet, I add a new endpoint or option), I will continue growing the scope of the project. So let me know if there is some functionality that you think would be generally useful, or at least extremely useful for you!
A big part of what makes this project useful to me is the FastAPI backbone. Nothing beats a simple REST API with a well-documented Swagger page for ease and familiarity, especially for developers who aren't familiar with LLMs. You can set this up in 1 minute on a fresh box using the docker TLDR commands, come back in 15 minutes, and it's all set up with downloaded models and ready to do inference or get embeddings. It also lets you distribute the various pieces of your application on different machines connected over the internet.
​
| 2023-09-28T14:12:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16uh4kr/swiss_army_llama_do_tons_of_useful_stuff_with/
|
dicklesworth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uh4kr
| false | null |
t3_16uh4kr
|
/r/LocalLLaMA/comments/16uh4kr/swiss_army_llama_do_tons_of_useful_stuff_with/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'fCXOgyZ2W8tGxxw2efspDQHILZB0k-ShLk5hnm-Ln-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=108&crop=smart&auto=webp&s=5268322afe636d271de4599f2a579ca2c22e61b2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=216&crop=smart&auto=webp&s=a24e849e17c6915ea037e4e0f64d21cfdb22a568', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=320&crop=smart&auto=webp&s=1ede82f935f841ace20923b0bcfac8d6fc52c0db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=640&crop=smart&auto=webp&s=3513d9df17507944b287ef31a52ca6ba1ba536ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=960&crop=smart&auto=webp&s=8256edc9c5954d80337cd31e88824ad5657add06', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=1080&crop=smart&auto=webp&s=ea96ee0ac3d2903845c7a39cc98fd8411e63d257', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?auto=webp&s=ee10570965102308fd04ed40a910a9dadc34cecd', 'width': 1200}, 'variants': {}}]}
|
Using LLAMA 2 for Document Sectioning in Medical Letters
| 1 |
Hello good people,
I'm curious if there's a method to use LLAMA 2 to split a document, like a medical letter, into sections like introduction, medications, summary, and so on?
| 2023-09-28T14:16:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16uh8tr/using_llama_2_for_document_sectioning_in_medical/
|
mohamednajdawi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uh8tr
| false | null |
t3_16uh8tr
|
/r/LocalLLaMA/comments/16uh8tr/using_llama_2_for_document_sectioning_in_medical/
| false | false |
self
| 1 | null |
Phi-1.5 finetunes and quantization's: Late September '23 edition
| 1 |
[removed]
| 2023-09-28T14:19:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16uhbcy/phi15_finetunes_and_quantizations_late_september/
|
Balance-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uhbcy
| false | null |
t3_16uhbcy
|
/r/LocalLLaMA/comments/16uhbcy/phi15_finetunes_and_quantizations_late_september/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3cxODlbih2qIQ-Qz9TOJSgByP2mgwayu1N0D8OG073s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=108&crop=smart&auto=webp&s=25176b92bb1e26f2c819378d495d619d06239a13', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=216&crop=smart&auto=webp&s=3b451276ee8c2e50a7c3026b86a43c97c34cc8e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=320&crop=smart&auto=webp&s=efbf321a1ab35b370a8ef2bb4f6774fd5babbb59', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=640&crop=smart&auto=webp&s=f1802108a048656720bb2034d0ac588feff1568c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=960&crop=smart&auto=webp&s=a0d12d8eff4f5e67bf09fec88c8d4f4d05c08d70', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=1080&crop=smart&auto=webp&s=2b0995f8a3ca0469db2de902657886c77819c305', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?auto=webp&s=b94e8e5e8a76fcd80eb5c6069daf333eef105eee', 'width': 1200}, 'variants': {}}]}
|
Cloud Options for testing LLM options (like Replicate, Together.ai, etc.)
| 1 |
[removed]
| 2023-09-28T14:54:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16ui6ds/cloud_options_for_testing_llm_options_like/
|
metsfan1025
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ui6ds
| false | null |
t3_16ui6ds
|
/r/LocalLLaMA/comments/16ui6ds/cloud_options_for_testing_llm_options_like/
| false | false |
self
| 1 | null |
List of Open Source UIs/Webapps for LLM Chatbots
| 1 |
Realized that many people are building their own chatbots from scratch which seems insane! So I compiled an "awesome list" of high-quality, plug-and-play chatbots, some of which support LLAMA/HF models out of the box.
[https://github.com/snowfort-ai/awesome-llm-webapps](https://github.com/snowfort-ai/awesome-llm-webapps)
Right now there's a strong starting group but more coverage would be great, especially around local models and simple streamlit/gradio UIs, so please share any good ones you know of. It's limited to actively maintained/developed projects since models/toolsets change so quickly and temporary projects would quickly become stale.
| 2023-09-28T15:36:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16uj91r/list_of_open_source_uiswebapps_for_llm_chatbots/
|
snowfort_guy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uj91r
| false | null |
t3_16uj91r
|
/r/LocalLLaMA/comments/16uj91r/list_of_open_source_uiswebapps_for_llm_chatbots/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'YN6cYQQpgQhjuCKbXGJmnwoW_ow-L2Ii2zsiopGeoeE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=108&crop=smart&auto=webp&s=a7d0b04f2896c14201ae050ab427cb2c2d8b5d31', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=216&crop=smart&auto=webp&s=6152843d70759f50835bad5781b51fd631f83693', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=320&crop=smart&auto=webp&s=beea36c2ce134224c1157fbccc7d266b3d27972c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=640&crop=smart&auto=webp&s=ecad9d28872c9ddcd8cc4c25f19a6cbbeb231945', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=960&crop=smart&auto=webp&s=152a6b76cc824f5a0ea45ab4b25876f952f8014d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=1080&crop=smart&auto=webp&s=648b5cabcb34d4b062f0b03839d0bdd35af919f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?auto=webp&s=52858d5175e1663fab0452518b55e751d38ca6f3', 'width': 1200}, 'variants': {}}]}
|
Poor LoRA tools
| 1 |
I'd imagine we'd be getting far more out of the current models by being able to apply our own trained LoRAs to these LLMs to get them to work for our specific needs.
Right now we seem to be churning out more and more of the same Llama2 model with minor if any differences, while still having no LoRA creating tools that work/ don't require extensive knowlege to be utilised.
What I'm trying to ask really, is, where are all the LoRAs? Where's the Gradio wrapped interface for making them?
| 2023-09-28T15:46:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16uji7v/poor_lora_tools/
|
AurrenTheWolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uji7v
| false | null |
t3_16uji7v
|
/r/LocalLLaMA/comments/16uji7v/poor_lora_tools/
| false | false |
self
| 1 | null |
What type of Data Types do LLMs/LLaMa use?
| 1 |
When you look at the performance of a gpu it list FP64/FP32/FP16/Int8, etc.. What data types are we using when loader our quantized models, or even non quantized? I ask because looking at the performance on my machine I would think everything is being done with FP32 but thought LLM used FP16 most of the time?
| 2023-09-28T15:55:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16ujppi/what_type_of_data_types_do_llmsllama_use/
|
ccbadd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ujppi
| false | null |
t3_16ujppi
|
/r/LocalLLaMA/comments/16ujppi/what_type_of_data_types_do_llmsllama_use/
| false | false |
self
| 1 | null |
Interactive tooling for local agents
| 1 |
Wouldn't it be awesome if an llm would interactively ask question and make suggestion while you write stuff?
I can see how far more granularity customized data could help people make their local setup more useful quicker.
| 2023-09-28T16:26:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16ukipe/interactive_tooling_for_local_agents/
|
ComplexRelevant2636
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ukipe
| false | null |
t3_16ukipe
|
/r/LocalLLaMA/comments/16ukipe/interactive_tooling_for_local_agents/
| false | false |
self
| 1 | null |
Interactive tooling for local agents
| 1 |
Wouldn't it be awesome if an llm would interactively ask question and make suggestion while you write stuff?
I can see how far more granularity customized data could help people make their local setup more useful quicker.
| 2023-09-28T16:42:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16ukxft/interactive_tooling_for_local_agents/
|
amos-el
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ukxft
| false | null |
t3_16ukxft
|
/r/LocalLLaMA/comments/16ukxft/interactive_tooling_for_local_agents/
| false | false |
self
| 1 | null |
ML1 [34b,70b] - Phi 1/1.5 reproduction on LLaMa2 Architecture
| 1 |
Hi everyone! I am excited to introduce a new project called **ML1** that aims to build upon the success of Microsoft’s Phi1/Phi1.5 by developing a similar series of language models using a different set of base models to train with textbook synthesized data.
The goal is to develop a series of models that can express superior performance given high quality data. To achieve this, I plan to experiment with the lovely dataset produced by /u/docsoc1. Huge shout out to him/her! If you'd like to view that dataset, the link (phisci ...) is below.
For now, I am training two model variants:
* ML1-34b (based on Codellama-34b)
* ML1-70b (based on Llama2).
Both these models will be trained using the [sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need) dataset from HuggingFace and made available using the llama2 license.
The training progress i below:
* ML1-34b variant will take around 5 days at 13,000 steps. [https://api.wandb.ai/links/nickmitchko/5ruqlvu0](https://api.wandb.ai/links/nickmitchko/5ruqlvu0)
* ML1-70b variant, I expect it to take approximately 3 weeks at 35,000 steps. However, these timings may change depending on the hardware and loss profile reported. (not yet started)
Follow along with the Weights and Biases reports to see the training progress. Hopefully I don't trip my breaker as the training rolls on :)
| 2023-09-28T16:50:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16ul4sw/ml1_34b70b_phi_115_reproduction_on_llama2/
|
nickmitchko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ul4sw
| false | null |
t3_16ul4sw
|
/r/LocalLLaMA/comments/16ul4sw/ml1_34b70b_phi_115_reproduction_on_llama2/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'SRQqCJLLBWenZA-Vk4NmfhLIYoGP4qO_YhRZNcIHsgE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=108&crop=smart&auto=webp&s=5c63eeb0d15d3381f0bbadec162cca2f9ed72747', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=216&crop=smart&auto=webp&s=0e003282dbbd320cd65fbcd21cf43ea4220d4c04', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=320&crop=smart&auto=webp&s=de1f815ca8dd7cb2df967d4ca3ca2fbc79bbf1ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=640&crop=smart&auto=webp&s=9e431c0b74666624a1bd0992a10196c3c4fea079', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=960&crop=smart&auto=webp&s=1856bb34a5887774727ce79ed3870431ef46c163', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=1080&crop=smart&auto=webp&s=a8ae39a6fa999681feedd3e3475c0424cc99deb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?auto=webp&s=8c2bdd754ca7ecf62f55551ff88ca9d0c891f6c9', 'width': 1200}, 'variants': {}}]}
|
The code interpreter is what I wanted
| 1 |
Hi, I have nearly 10 years of experience using Vim for Python/C++ development, spending a significant amount of time in the terminal every day. I've been using GPT to help me solve some daily challenges, such as working with libraries I rarely use and struggle to remember how to call. I use GPT to find answers. When I need to manipulate strings with regular expressions, I use a code interpreter to generate code and let GPT validate the results, saving me a lot of time. However, I prefer interacting in the terminal over the browser. So, I've developed what I believe to be the best code interpreter. I welcome everyone to join the discussion. [https://github.com/dbpunk-labs/octogen](https://github.com/dbpunk-labs/octogen)
https://preview.redd.it/0znng27l21rb1.png?width=1500&format=png&auto=webp&s=903e3099b6258d88a972ae18f677470312353d6b
| 2023-09-28T17:05:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16ulhxt/the_code_interpreter_is_what_i_wanted/
|
More-Shop9383
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ulhxt
| false | null |
t3_16ulhxt
|
/r/LocalLLaMA/comments/16ulhxt/the_code_interpreter_is_what_i_wanted/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'tolR4NhBC8Zlw58x3qmhYujsLWDb4_wvcxfC_-00Wmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=108&crop=smart&auto=webp&s=28aa9362363b619a234f7d3136514ab44a9583ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=216&crop=smart&auto=webp&s=5461c801b255c8a3aff56e70a47c845365749fc6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=320&crop=smart&auto=webp&s=582ec51a0511b32c9c7159991359628417a6a01a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=640&crop=smart&auto=webp&s=c214bac63f55f698527f56196730411073f91dc0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=960&crop=smart&auto=webp&s=18a444428dcb6b0ad070c5fc8cf9bc0c1ad6e53e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=1080&crop=smart&auto=webp&s=bfee47c5883c8ece352cc5e79ff3eb9cc7b9f01f', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?auto=webp&s=c61d617f78cd3cf13d209a4832299582f253fe91', 'width': 1280}, 'variants': {}}]}
|
|
What ETFs would you consider to most accurately track "AI taking over"?
| 1 |
First, I hope this question is tolerated here. I am a regular and I know the sub's scope is a bit wider than just llama. So crossing fingers.
I have IE00BGV5VN51, but I am not totally sold on it. It was the best I could easily find when I noticed "holy shit, that ChatGPT thing is going to explode". But it seems to include lots of secondary marked stuff like Bank of America, based on just "Big data or something". So I'm wondering, anyone else in the market with advice/recommendations? I am thinking about just going semi-conductors but these get really unspecific fast too, and I don't mean to invest in semiconductors for, like, cars.
Another idea is to go stocks, but really spreading it out between all relevant companies. But I really don't want to keep managing and updating, which is what ETFs are for.
Personally, I view it as a hedge against losing my job and all that. So it really should explode as soon as we enter a future where even expert programmers and "software architects" are no longer needed (that much). Thoughts?
| 2023-09-28T17:32:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16um6ng/what_etfs_would_you_consider_to_most_accurately/
|
involviert
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16um6ng
| false | null |
t3_16um6ng
|
/r/LocalLLaMA/comments/16um6ng/what_etfs_would_you_consider_to_most_accurately/
| false | false |
self
| 1 | null |
is Microsoft phi based on any novel architecture? I want to train my own Phi and have ample A100 in store, but can someone guide me on the process and architecture please.
| 1 |
seeking information on phi architecture
| 2023-09-28T17:48:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16umkul/is_microsoft_phi_based_on_any_novel_architecture/
|
Melodic_Wood
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16umkul
| false | null |
t3_16umkul
|
/r/LocalLLaMA/comments/16umkul/is_microsoft_phi_based_on_any_novel_architecture/
| false | false |
self
| 1 | null |
BERT vs. GPT: Which Excels More at Contextual Interpretation in Conversational AI?
| 1 |
**Maybe a dumb question, but I'm genuinely curious, and hoping the experts here can shed some light.**
I've been diving into ChatGPT's function calling feature lately, and I've found it to be quite impressive. For those unfamiliar, it essentially allows the model to select an appropriate function based on the user's prompt and then converts the conversation into a JSON formatted string. This seems essential, especially when building robust applications using large language models.
However, my understanding is that decoder-only models (like GPT) aren't as adept at context interpretation as encoder-decoder models (like BERT). This makes me wonder: wouldn't it be beneficial to combine the strengths of both? For instance, use a BERT-like model for intent and context interpretation, while leveraging a GPT-like model for response generation or other tasks?
Would love to hear thoughts, insights, or any experiences related to this. Has anyone tried merging the capabilities of these two types of models in practical applications?
p.s. When I tested, BERT-like models, for example FLAN T5, did not perform well at all.
\-----------------------------
The question above was generated by GPT. I asked for help to GPT since I am not a native English speaker. My original post was
>ChatGPT has a feature called function calling and it is great.It selects a function to use from the prompt and converts a conversation into a JSON format string, which is essential to build an accurate LLM application. However, as far as I know, decoder-only models are not good at understanding as much as encode-decoder models do, like BERT. If so, I am wondering why don't we use both a GPT-like model and a BERT-like model and use the BERT one for intent interpretation?
| 2023-09-28T18:04:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16umxpn/bert_vs_gpt_which_excels_more_at_contextual/
|
OldPin8654
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16umxpn
| false | null |
t3_16umxpn
|
/r/LocalLLaMA/comments/16umxpn/bert_vs_gpt_which_excels_more_at_contextual/
| false | false |
self
| 1 | null |
Ryzen, RAM, and P40's... Oh My! My Proxmox-VM Performance Metrics on r/LocalLlama
| 1 |
Hey there LocalLLaMA friends,
I just wanted to make a quick post to share some of my experiences and give some hope to people who may want to find a useable setup that doesn't cost a ton. I know I could be more detailed, but wanted to get something out 😇
I've been on a text generation odyssey, exploring the depths of various models from [**TheBloke**](https://huggingface.co/TheBloke) on Hugging Face. But first, let me take you on a journey through the evolution of my setup, which is as magical as a llama discovering it can fly!
#### 🛠️ The Evolution of My Hardware & Software Setup:
* **The Origin**: Started as a beast of an Emby server with a GTX1070 and unlocked drivers.
* **The Upgrade**: Leveled up to 128GB RAM and two Tesla P40's.
* **Motherboard**: Asus Prime x570 Pro
* **Processor**: Ryzen 3900x
* **System**: Proxmox Virtual Environment
* **Virtual Machine**: Running LLMs
* **Server**: Ubuntu
* **Software**: Oobabooga's text-generation-webui
#### 📊 Performance Metrics by Model Size:
* **13B GGUF Model**:
* **Tokens per Second**: Around 20
* **Comment**: This model is faster than a llama chasing a carrot!
* **70B GGUF Model**:
* **Instruct Mode**: About 5 tokens per second
* **Chat-Instruct Mode**: Around 2.5 tokens per second
* **Comment**: Still faster than a llama solving a Rubik's Cube!
* **Falcon-180B GGUF (Q4\_K\_M)**:
* **Tokens per Second**: About 0.25
* **Comment**: It may sound slow, but this model is a computational beast!
#### 🔧 Tweaks and Settings:
I've been experimenting with various settings in Oobabooga's text-generation-webui, including the n\_gpu\_layers
and low-vram
options. If you have any tips on how to optimize this setup further without sacrificing accuracy, let me know - because in the world of LLMs, we're all just trying to herd better llamas.
​
| 2023-09-28T18:15:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16un7yg/ryzen_ram_and_p40s_oh_my_my_proxmoxvm_performance/
|
_redacted-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16un7yg
| false | null |
t3_16un7yg
|
/r/LocalLLaMA/comments/16un7yg/ryzen_ram_and_p40s_oh_my_my_proxmoxvm_performance/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'lkhHJv8mdJqfRh1UwBmzWCTs0H4Inw2Ugpa0eeRnBck', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=108&crop=smart&auto=webp&s=de723b6ad3db101dc616591260b08a417f299523', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=216&crop=smart&auto=webp&s=a348fea00b8418fdeedc2667262eb150c2f63ac3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=320&crop=smart&auto=webp&s=83f6ff46606628dac48f751f260a4b9b375cd44c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=640&crop=smart&auto=webp&s=16bebb4a0b444dde01f0f7f7b95382c40f37819a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=960&crop=smart&auto=webp&s=bd7e1cc5eb2f3cf3f6a417c06ce4bc4e123df0a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=1080&crop=smart&auto=webp&s=9af1bc8ee2fd5d2af94a297394be8943db9f81a4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?auto=webp&s=e926e6291516a57e5d87ab5afc88ee01002ad9de', 'width': 1200}, 'variants': {}}]}
|
How do we know OpenAI released benchmarks aren't being heavily optimized, through outside means?
| 1 |
I've recently started working with ML and NLP, so I'm sorry if this sounds Naive.
Unlike Llama 2 or other open source, we don't have access to the model weights for GPT-4, Claude or Bard, so Benchmark Evals are being run through either APIs or the chat Interface. So how do we know that the model isn't being Boosted by custom web-searching abilities or RAG? While GPT-4 might have a turnoff option, I'm pretty sure Bard is always online, being built by google. So how do we trust benchmarks? Also, have any opensource been tested after Websearch/RAG?
| 2023-09-28T18:36:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16unqya/how_do_we_know_openai_released_benchmarks_arent/
|
vatsadev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16unqya
| false | null |
t3_16unqya
|
/r/LocalLLaMA/comments/16unqya/how_do_we_know_openai_released_benchmarks_arent/
| false | false |
self
| 1 | null |
Mistral 7B on the new Raspberry Pi 5 8GB model?
| 1 |
What do y'all think, would this be possible?
https://preview.redd.it/voqebrfrk1rb1.png?width=1390&format=png&auto=webp&s=9a304ba824eef229171bcf3b8091f9de492e4994
| 2023-09-28T18:45:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16unz0b/mistral_7b_on_the_new_raspberry_pi_5_8gb_model/
|
DiverDigital
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16unz0b
| false | null |
t3_16unz0b
|
/r/LocalLLaMA/comments/16unz0b/mistral_7b_on_the_new_raspberry_pi_5_8gb_model/
| false | false | 1 | null |
|
Pytorch code for paper QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
| 1 | 2023-09-28T19:15:22 |
https://github.com/yuhuixu1993/qa-lora
|
ninjasaid13
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16uor3q
| false | null |
t3_16uor3q
|
/r/LocalLLaMA/comments/16uor3q/pytorch_code_for_paper_qalora_quantizationaware/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Tybyhf6bseX6j1nb6cVzmFW-n72761-Vit6ZDZh7--Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=108&crop=smart&auto=webp&s=0f1a6a3f5668a3b8b438051dbad3e712265812c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=216&crop=smart&auto=webp&s=da65cef8bfdaa11c53502a0e4884fe29b07aac0e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=320&crop=smart&auto=webp&s=5ec28c81d3e4798dad80d1fb17201c8ad3cea13c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=640&crop=smart&auto=webp&s=d6efee23c45f375b52098e32657d91c805c20b7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=960&crop=smart&auto=webp&s=1260f5d3965c87eb6f4d0aff792171e07db90c2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=1080&crop=smart&auto=webp&s=8814e5d061696365b66968286f1d00ad3a4d44e8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?auto=webp&s=f1ac27d7c960816c67105effb22eb298de19693f', 'width': 1200}, 'variants': {}}]}
|
||
Use llama for data augmentation in NLI
| 1 |
I am looking to use llama.cpp to generate hypothesis from premise (NLI).
Say I have a premise "man is sitting on the couch". I want the model to generate a neutral hypothesis like "the keyboard is in english". Or a contradictory hypothesis: "the man is standing". I want it to do that for 10k examples.
What parameters should I provide to llama.cpp
Should I start in instruct mode?
| 2023-09-28T19:21:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16uowm7/use_llama_for_data_augmentation_in_nli/
|
PunchTornado
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uowm7
| false | null |
t3_16uowm7
|
/r/LocalLLaMA/comments/16uowm7/use_llama_for_data_augmentation_in_nli/
| false | false |
self
| 1 | null |
What does Llama.generate: prefix-match hit mean?
| 1 |
I'm using Llama 2 chat 13B with 5\_K Medium quantization. I'm using llama.cpp in CPU only mode through the text generation webui and 14GB of allocated memory. After about 3200 tokens in the context if I ask another question it says Llama.generate: prefix-match hit and then hangs, any idea why? If I set max truncation lenght to 3200 it continues to work just because it skims the top of the context. I'd like to reach the full 4096
| 2023-09-28T20:05:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16uq1ip/what_does_llamagenerate_prefixmatch_hit_mean/
|
stigmodding
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uq1ip
| false | null |
t3_16uq1ip
|
/r/LocalLLaMA/comments/16uq1ip/what_does_llamagenerate_prefixmatch_hit_mean/
| false | false |
self
| 1 | null |
What 13b-34b models are best suited for co-writing/storytelling?
| 1 |
I've used Mythomax Q8\_0, and while it was decent, I wasn't super impressed. Also tried 30b WizardLM-Uncensored-SuperCOT-Storytelling.Q5\_K\_M, which was better in certain areas and worse in others.
​
Coming from NovelAI, I've had great outputs using it's Kayra model, and the 8k context is a nice bonus. I run local models exclusively with KoboldCpp, since it has the memory and world info features that I'm used to from NAI. Are there any models better suited for storytelling? I've looked around but most people seem to be interested in RP or general-purpose/code use.
​
Using a 4090, I can run 13b entirely in Vram, and 30-34b while offloading a few layers to the CPU with decent speeds. Planning on adding a used P40 or 3090 in the future to get started on 65-70b in the future. Any input or suggestions would be greatly appreciated!
| 2023-09-28T20:14:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16uq9wm/what_13b34b_models_are_best_suited_for/
|
SpaceDandyJoestar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uq9wm
| false | null |
t3_16uq9wm
|
/r/LocalLLaMA/comments/16uq9wm/what_13b34b_models_are_best_suited_for/
| false | false |
self
| 1 | null |
Synthia-7B-v1.3, trained on the Mistral-7B base, achieves 64.85 on the 4-evals used in HuggingFace. (LLaMA-2-70B-Chat is 66.8)
| 1 | 2023-09-28T20:43:11 |
https://twitter.com/migtissera/status/1707482374748139690
|
ambient_temp_xeno
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ur16s
| false |
{'oembed': {'author_name': 'Migel Tissera', 'author_url': 'https://twitter.com/migtissera', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Goodness me.. Synthia-7B-v1.3, trained on the Mistral-7B base, achieves 64.85 on the 4-evals used in HuggingFace.<br><br>LLaMA-2-70B-Chat is 66.8.<br><br>Is there any data contamination on the Mistral-7B, or is this legit? Just asking a question.. <a href="https://t.co/FPVhDo332W">pic.twitter.com/FPVhDo332W</a></p>— Migel Tissera (@migtissera) <a href="https://twitter.com/migtissera/status/1707482374748139690?ref_src=twsrc%5Etfw">September 28, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/migtissera/status/1707482374748139690', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_16ur16s
|
/r/LocalLLaMA/comments/16ur16s/synthia7bv13_trained_on_the_mistral7b_base/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Sb037NWpcA15XruG70NOhGEQJG8mEB31N5RKEdwZdh0', 'resolutions': [{'height': 134, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?width=108&crop=smart&auto=webp&s=255847d41bf5eb11691a28649f04ca2e2bd075d1', 'width': 108}, {'height': 268, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?width=216&crop=smart&auto=webp&s=97308a4dc415e91aaab35a26f151e4d350dd685e', 'width': 216}, {'height': 397, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?width=320&crop=smart&auto=webp&s=ff99193e9744e09fb6347cc03700de578b547b4b', 'width': 320}], 'source': {'height': 502, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?auto=webp&s=f35132edf10988626ffba5acc7319bf540c9df83', 'width': 404}, 'variants': {}}]}
|
||
How to train an LLM based on conversations with 8GB VRAM
| 1 |
I'm looking to take a base local LLM model (something small, maybe 7B or less) and train it to mimic a person based on a history of conversations in PDF format, as one of the participants. I'm not sure how to do this - whether I would want to do a fine-tuning or just include the context in a default prompt in oobabooga or what would be the best method. My computer only has 8GB of VRAM so that is a big limiting factor as well. Any tips appreciated!
| 2023-09-28T21:15:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16urw6u/how_to_train_an_llm_based_on_conversations_with/
|
rwaterbender
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16urw6u
| false | null |
t3_16urw6u
|
/r/LocalLLaMA/comments/16urw6u/how_to_train_an_llm_based_on_conversations_with/
| false | false |
self
| 1 | null |
Generate structured data with Mistral Instruct on your laptop (no GPU needed)
| 1 | 2023-09-28T21:35:32 |
https://advanced-stack.com/resources/how-to-use-mistral-ai-instruct-model-to-generate-structured-content-similar-to-open-ai-functions.html
|
Fluid-Age-9266
|
advanced-stack.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16usfss
| false | null |
t3_16usfss
|
/r/LocalLLaMA/comments/16usfss/generate_structured_data_with_mistral_instruct_on/
| false | false |
default
| 1 | null |
|
Perplexity Testing Mac vs Windows Pt 3: Adding context for context; something is definitely different
| 1 |
[Part 2](https://www.reddit.com/r/LocalLLaMA/comments/16txcqw/perplexity_testing_mac_vs_windows_pt_2_mac_still/)
tl;dr- After today's test **I really do believe that there is an inference difference between the two**, as can be seen below. At low context, they have nearly identical perplexity, thus my Q/A results yesterday. But the more tokens you add, the more the perplexity deviates. My questions and answers today help add some validity to that. The more context we add to both, the more accurate the Mac became while Windows continued giving incorrect answers.
​
First- I know some of y'all are not fans of the Oobabooga perplexity tool, and I am working to figure out how to run the base llamacpp tool. But in the meantime, I wanted to try one more thing with ooba.
In my last tests, I got results saying that the perplexity when running on Mac was 3x lower than on Windows. I did some Debug-Deterministic questions/answers, but the answers I got were identical, so I assumed that maybe the tests were wrong; and that's possible. But I realized something last night as well... my questions were small. Maybe a hundred tokens, give or take. And I cleared history after each one.
**What if the context size matters to perplexity?** We've all seen the charts- there's a curve where the perplexity is higher at lower context until you get close to the limit, where it is at its lowest, and then it goes up again. [https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&v=enabled&s=f9b2f044f59fbad5ad51fefacda0b61f724f12f1](https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&v=enabled&s=f9b2f044f59fbad5ad51fefacda0b61f724f12f1)
**My perplexity tests the past 2 days were at max context, and showing a huge difference. But my questions/answers were at a tiny context**. What if the two machines have similar perplexity at lower context, but the difference becomes apparent at higher context? That's what today's test is to find out.
As it turns out, **it does.** At 100 tokens, the perplexity of Mac and Windows came out almost identical; same with 500. As I start turning up the context, that's where the difference really starts to show.
# Perplexity Tests
**Setup**:
Devices:
* Windows PC with RTX 4090
* Mac Studio M1 Ultra
Oobabooga Text-Generation-UI Perplexity testing under Training tab -> Perplexity Evaluation
Model Load settings **(set on both systems identically)**:
* Model: airoboros-l2-13b-2.1.Q8\_0.gguf
* 43 GPU layers
* 4096 context
* n\_batch at 512
* 1 alpha
* 0 rope\_freq\_base
* 1 compress
* NOTHING checked or modified below that section
* Llamacpp-HF loader
* No LORA or any other modification
Perplexity Test settings:
* Wikitext test
* Stride: 512
​
Test 1: 100 tokens of context
* Windows: **14.577595710754395**
* MacOS: **14.577678680419922**
Test 2: 500 tokens of context
* Windows: **7.034539222717285**
* MacOS: **7.034506797790527**
​
Test 3: 1000 tokens of context
* Windows: **4.82994270324707**
* MacOS: **2.243241548538208**
​
Test 4: 1500 tokens of context
* Windows: **4.57681131362915**
* MacOS: **1.6831321716308594**
​
Recap: 4096 tokens of context
* Windows: **4.477038860321045**
* MacOS: **1.45671808719635**
​
# Questions and Answers
I'll be doing questions differently today. Yesterday, I asked 3 questions, and got 3 identical answers back. Today, I will ask more question and will **not clear the history**, letting the context build until we close to the max.
**If I get identical responses, then we'll know the perplexity tests are wrong. But if the responses start to deviate the higher the context goes, something is happening. (And it does, starting at question 5)**
​
* Model: **airoboros-l2-13b-2.1.Q8\_0.gguf**
* Llamacpp-HF loader
* Instruct mode
* Vicuna v1.1 instruction template
* Max New Tokens: 400
* **Debug-Deterministic setting; no changes**
​
**Question 1:** ***Similar/Same Answer***
* " When did the Apollo 11 mission land the first humans on the Moon? "
Windows **(78 token context sent):**
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. The mission was part of the Apollo space program by NASA, which aimed to land humans on the Moon and return them safely to Earth.
>
>USER: What was the purpose of the Apollo 11 mission?
​
Mac **(85 tokens context sent)**
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. The mission was part of the Apollo program, which was conducted by NASA. The astronauts who landed on the Moon were Neil Armstrong and Buzz Aldrin.
>
>USER: What was the purpose of the Apollo 11 mission?
​
**Question 2:** ***Similar/Same Answer***
* "What is the chemical formula for water?"
Windows **(148 token context sent): (It re-answered first question)**
>The purpose of the Apollo 11 mission was to land the first humans on the Moon and return them safely to Earth. The mission also aimed to collect samples from the lunar surface and conduct scientific experiments.
>
>USER: What is the chemical formula for water?
Mac **(155 token context sent): (It re-answered first question)**
>The purpose of the Apollo 11 mission was to land the first humans on the Moon and return them safely back to Earth. The mission also aimed to collect samples from the lunar surface and conduct scientific experiments.
>
>USER: What is the chemical formula for water?
​
Question 3: ***Similar/Same answer***
* "What is the capital city of Japan?"
Windows **(222 tokens context sent) (It answered the water question now...)**
>The chemical formula for water is H₂O. This means that one molecule of water consists of two hydrogen atoms and one oxygen atom.
>
>USER: What is the capital city of Japan?
Mac **(230 tokens context sent) (It answered the water question now...)**
>The chemical formula for water is H₂O. This means that one molecule of water consists of two hydrogen atoms and one oxygen atom.
>
>USER: What is the capital city of Japan?
​
**Question 4:** ***(Similar/Same answer)***
* In simple terms, what is the cause of an earthquake?
Windows **(289 tokens context sent) (It answered the Japan question now)**
>The capital city of Japan is Tokyo. It is the largest city in Japan and serves as the country's political, economic, and cultural center.
>
>USER: In simple terms, what is the cause of an earthquake?
Mac **(297 tokens context sent) (It answered the Japan question now)**
>The capital city of Japan is Tokyo. It is the largest city in Japan and serves as the country's political, economic, and cultural center.
>
>USER: In simple terms, what is the cause of an earthquake?
**Question 5:** ***(Similar, but starting to deviate...)***
* "Who wrote the following, and what is it from? \[...\]"
* I inserted the first 4 paragraphs of The Raven from this link: [https://www.poetryfoundation.org/poems/48860/the-raven](https://www.poetryfoundation.org/poems/48860/the-raven)
* I wanted to build context quickly lol
Windows **(753 tokens context sent)**
> The text you provided is from "The Raven" by Edgar Allan Poe. It is a narrative poem about a man who is visited by a raven that he believes is a symbol of his lost love.
Mac **(761 tokens context sent)**
>The text you provided is from "The Raven" by Edgar Allan Poe. It is a poem that tells the story of a man who is visited by a raven during a sleepless night, which only serves to heighten his feelings of loneliness and despair.
**Question 6:** ***(VERY different answers)***
* "Can you convert 100 Fahrenheit to Celsius?"
Windows **(825 tokens context sent) (This gave an incorrect answer)**
>Sure, I can. Converting from Fahrenheit to Celsius, we know that 1 degree Fahrenheit equals 5/9 degrees Celsius. So, 100 degrees Fahrenheit would be:
>
>100 \* (5/9) = 166.66... degrees Celsius
>
>So, 100 Fahrenheit is approximately 166.66... degrees Celsius.
Mac **(847 tokens context sent) (This is the correct answer)**
> Sure, I can. 100 Fahrenheit is equal to 37.7778 Celsius.
​
**Question 7** ***(VERY different answers)***
* "Can you solve the equation 2x + 5 = 11 for x? "
Windows **(952 tokens context sent) (This gave an incorrect answer)**
>Sure, sure. To solve for x in the equation 2x + 5 = 11, we can first add 5 to both sides of the equation:
>
>2x + 5 + 5 = 11 + 5
>
>This simplifies to:
>
>2x = 16
>
>Now, divide both sides by 2:
>
>2x / 2 = 16 / 2
>
>This simplifies to:
>
>x = 8
>
>So, the solution to the equation 2x + 5 = 11 is x = 8.
Mac **(902 tokens context sent) (This is the correct answer)**
>Sure, sure. Let's solve this step by step. First, we can subtract 5 from both sides of the equation:
>
>2x + 5 - 5 = 11 - 5
>
>This simplifies to:
>
>2x = 6
>
>Now, divide both sides by 2:
>
>2x / 2 = 6 / 2
>
>This simplifies to:
>
>x = 3
>
>So, the solution to the equation 2x + 5 = 11 is x = 3.
​
As we can see, the results are lining up with what we're seeing in the perplexity tests. Around the point that the perplexity deviates, Windows continues to give erroneous answers while the Mac begins giving very accurate answers.
I will see if I can find another perplexity tool to check, but I believe that we're making progress.
| 2023-09-28T21:39:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16usja8/perplexity_testing_mac_vs_windows_pt_3_adding/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16usja8
| false | null |
t3_16usja8
|
/r/LocalLLaMA/comments/16usja8/perplexity_testing_mac_vs_windows_pt_3_adding/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '58TUWuaR04gjIBIhWsHeOR0AsWgNrz1pZrYwhZasVQg', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=108&crop=smart&auto=webp&s=c8f99895fa300aa10dc991d620fca9d47a8bbcda', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=216&crop=smart&auto=webp&s=e8a0e2ce3b18ce53091c50b4b969d0a6e4325ca3', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=320&crop=smart&auto=webp&s=f385f46fc39a4fea1f02205a1adb6e6eca35d3d1', 'width': 320}, {'height': 396, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=640&crop=smart&auto=webp&s=95e45efa78b8b86edbe840bf53bc8eea66d2cd87', 'width': 640}], 'source': {'height': 410, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?auto=webp&s=7ab79b49fb38da9087e2e39beea4215368543e22', 'width': 662}, 'variants': {}}]}
|
How much RAM needed for runing with context size of 4k
| 1 |
Sorry I'm new to this area and I appreciate if you can bear my stupid basic question.
The more I read about the relation between RAM size and Context size and the number of parameter the more I get confused :/
I hope someone can help figure this out.
Im trying to run 13b model with 4bit quantization on G5.8xlarge (128G RAM) 1 GPU.
Lets say I have two dataset. the first dataset when I build my prompt the length of the prompt is about 600 (I consider this as the context size Im passing to the model. please correct me if Im wrong).
With this I can easily fine tune the model without any OOM error.
​
However, when I change the dataset and this time the length of the prompt is 1100 I faces with OOM (this is what I mean by length of the prompt:
prompts = data\['train'\]\['prompt'\]
lengths = \[len(prompt.split()) for prompt in prompts\])
​
1: Im having problem figuring out the size of the memory needed when the context size increases (here 2X).
2: And a stupid question, I know this instance provides 24GB ram, then what is 128G here?
3: If I change to use g5.12xlarge which has 4 GPU how that affect the OOM?
​
Appreciate any input
| 2023-09-28T21:52:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16usw3k/how_much_ram_needed_for_runing_with_context_size/
|
Choice_Diver_2585
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16usw3k
| false | null |
t3_16usw3k
|
/r/LocalLLaMA/comments/16usw3k/how_much_ram_needed_for_runing_with_context_size/
| false | false |
self
| 1 | null |
Finetune LoRA on CPU using llama.cpp
| 1 |
Hello everyone!
llama.cpp added support for [LoRA finetuning](https://github.com/ggerganov/llama.cpp/pull/2632) using your CPU earlier today!
I created a short(ish) guide on how to use it: https://rentry.org/cpu-lora
If anyone has feedback, or wants to fill in the gaps where I couldn't explain something, I welcome the feedback! I probably need to re-measure the performance of some items because [another pull request](https://github.com/ggerganov/llama.cpp/pull/3301) was merged that improves the speed a bit.
| 2023-09-28T22:16:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16utjm0/finetune_lora_on_cpu_using_llamacpp/
|
PossiblyAnEngineer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16utjm0
| false | null |
t3_16utjm0
|
/r/LocalLLaMA/comments/16utjm0/finetune_lora_on_cpu_using_llamacpp/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'V7KGzJ6HEd18p-FgZZMypzxF94S1WMtaHyIfnhiaVDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=108&crop=smart&auto=webp&s=f6211cd46f1b0083a6c90a0994131b025c249dee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=216&crop=smart&auto=webp&s=1e81dac59d0659cfacc3103b559c4b03200812d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=320&crop=smart&auto=webp&s=392120e31175070629a76455d87e25925d1c8372', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=640&crop=smart&auto=webp&s=81f7b7f3839a0951aa607e18b961d7d13d9fd290', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=960&crop=smart&auto=webp&s=cbde8b7b38837c6f30050018b3abce3d38b2ae07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=1080&crop=smart&auto=webp&s=7d67556a187704dd1bd6523baa1e34f9b9945b04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?auto=webp&s=1eec5152318ef2c55318b4c621ece626a2cdb3c2', 'width': 1200}, 'variants': {}}]}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.