title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Script for Full tune of models using DeepSpeed ZeRO
| 1 |
Hello, I've been trying to finetune some models with DeepSpeed ZeRO without much luck, I'm interested in doing a full finetune of cerebras/btlm-3b-8k-base using dual RTX 3090's. So far I've had luck with Phi-1.5 on a single card, but I'm failing terribly at getting the training script + deepspeed command line to do anything useful.
Github repos for training LLM's are also welcome.
Thanks!
| 2023-09-28T23:24:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16uvaeb/script_for_full_tune_of_models_using_deepspeed/
|
2muchnet42day
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uvaeb
| false | null |
t3_16uvaeb
|
/r/LocalLLaMA/comments/16uvaeb/script_for_full_tune_of_models_using_deepspeed/
| false | false |
self
| 1 | null |
How can LLMs be multilingual with success in coherence?
| 1 |
OK, maybe another newvie question around here.
I know "Open"AI models are multilingual, because they can communicate in various languages other than English.
I understand it needs datasets with different languages so that the LLM can learn them. But how can an LLM learn various languages and not get confused with the language of input prompts? Can an LLM detect if the given input is "X" language?
| 2023-09-28T23:32:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16uvh4k/how_can_llms_be_multilingual_with_success_in/
|
OC2608
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uvh4k
| false | null |
t3_16uvh4k
|
/r/LocalLLaMA/comments/16uvh4k/how_can_llms_be_multilingual_with_success_in/
| false | false |
self
| 1 | null |
Full Stack Dev, Dabbling into AI
| 1 |
Hey All,
​
First time poster here. I am trying to run the Meta Llama-2 model locally, and having some issues I'm not sure how to resolve.
I requested and downloaded the models from Meta, and received a checklist.chk, consolidated.00.pth, and params.json.
When I try to convert the model as instructed by the [llama.cpp](https://github.com/ggerganov/llama.cpp) library, I keep getting an error. Most of the tutorials I have seen have their models as bin files, which I am not sure how to acquire (HuggingFace download?) but I would like to get this up with the files I have currently
Would greatly appreciate if anyone else has had these issues!
Error -- `FileNotFoundError: Could not find tokenizer.model in models/7B or its parent; if it's in another directory, pass the directory as --vocab-dir`
​
The models/7B folder has the chk, pth, and json files
| 2023-09-28T23:49:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16uvv73/full_stack_dev_dabbling_into_ai/
|
macaccount423
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uvv73
| false | null |
t3_16uvv73
|
/r/LocalLLaMA/comments/16uvv73/full_stack_dev_dabbling_into_ai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
|
Testing out the safety of Mistral-7B Instruct hmm...
| 1 | 2023-09-28T23:58:24 |
https://www.reddit.com/gallery/16uw33g
|
ninjasaid13
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16uw33g
| false | null |
t3_16uw33g
|
/r/LocalLLaMA/comments/16uw33g/testing_out_the_safety_of_mistral7b_instruct_hmm/
| false | false | 1 | null |
||
[Gloat] Finally got Falcon-180B Running!
| 1 |
Basically this is a shitpost. I have been struggling for weeks trying to get a large model working on one of our servers. I've gotten 7B, and 13B models working acceptably on systems fully under my control.
We have a DGX-1 with 8 V100's, but I'm not the admin there. I have disk space headaches, network storage headaches, resource hogs (other than me!), slow downloads of models and other speedbumps. My biggest recent challenge is that the RHEL version of gcc is 4.8 and llama.cpp needed 5 or higher. Seeing little help, I decided to build gcc from scratch (a fun challenge) and eventually everything came together tonight and I'm getting about 18 tokens/sec.
I'm psyched about this and just had to gloat about it :D
| 2023-09-29T00:08:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16uwbs8/gloat_finally_got_falcon180b_running/
|
Simusid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uwbs8
| false | null |
t3_16uwbs8
|
/r/LocalLLaMA/comments/16uwbs8/gloat_finally_got_falcon180b_running/
| false | false |
self
| 1 | null |
Someone know any projects about an alexa like device build with llama
| 1 |
Well what exactly mean is like a kind of speaker able to talk and listen running llama or a kind of repository to run llama and speak to it by micro and get answer by speaker and i can build the device just using 3d printer, a pi, speaker and micro, any ideas?
| 2023-09-29T00:17:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16uwk0x/someone_know_any_projects_about_an_alexa_like/
|
kroryan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16uwk0x
| false | null |
t3_16uwk0x
|
/r/LocalLLaMA/comments/16uwk0x/someone_know_any_projects_about_an_alexa_like/
| false | false |
self
| 1 | null |
Meta has released a new paper: Llama 2 Long beats Claude-2-100k on human evaluation
| 1 |
The paper https://arxiv.org/pdf/2309.16039.pdf
Not sure if we can get the model, but if anyone has a link please put it in the comments.
| 2023-09-29T03:21:43 |
https://reddit-uploaded-media.s3-accelerate.amazonaws.com/kq2pnoiz44rb1
|
Nabakin
|
reddit-uploaded-media.s3-accelerate.amazonaws.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16v0lzf
| false | null |
t3_16v0lzf
|
/r/LocalLLaMA/comments/16v0lzf/meta_has_released_a_new_paper_llama_2_long_beats/
| false | false |
default
| 1 | null |
Mistral 7B - 8bit and 4 bit Code
| 1 |
I want to test out Mistral &B with langchain for chatbot purpose. But I have only Colab GPU (T4 Free tier). How to test out quantised version. Have anyone tried it . Need the codes
| 2023-09-29T03:22:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16v0mlk/mistral_7b_8bit_and_4_bit_code/
|
ianuvrat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v0mlk
| false | null |
t3_16v0mlk
|
/r/LocalLLaMA/comments/16v0mlk/mistral_7b_8bit_and_4_bit_code/
| false | false |
self
| 1 | null |
Meta has released a new paper: Llama 2 Long beats Claude-2-100k on human evaluation
| 1 |
Paper: https://arxiv.org/pdf/2309.16039.pdf
Not sure if it has been released but if anyone has a link, please put it in the comments.
| 2023-09-29T03:25:18 |
Nabakin
|
i.imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16v0onb
| false | null |
t3_16v0onb
|
/r/LocalLLaMA/comments/16v0onb/meta_has_released_a_new_paper_llama_2_long_beats/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'xdKhqEipli45l2SZDn8SOzmDdWA0Zp6RpIVko6vtD1w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=108&crop=smart&auto=webp&s=00d221951dd6323d59519d1c9554cb8a75516c9d', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=216&crop=smart&auto=webp&s=b656802c1ce2d9f46797e81f0cc3a169aba80f3e', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=320&crop=smart&auto=webp&s=347ab130fcc0fa1c3e83b6e8a2036aac01366a91', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=640&crop=smart&auto=webp&s=81cd0fd5dcc9fe8610456088d807520d8f7f964b', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=960&crop=smart&auto=webp&s=21c15f2b4512ff84b57a6887cc50f238864554ed', 'width': 960}, {'height': 603, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=1080&crop=smart&auto=webp&s=d0d8ba2295d8b139609bedb239110478f1fd408a', 'width': 1080}], 'source': {'height': 632, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?auto=webp&s=aaa70cc84937e2e0c3ab8daa8dbaeafa5c0b41d5', 'width': 1131}, 'variants': {}}]}
|
||
Prompt Reranker / Generator
| 1 |
I remember seeing a github repo that took in a prompt and generated different versions of the prompt and then ranked the responses to give you the best prompt. Does anyone know that repo? I can't find it anywhere.
| 2023-09-29T04:35:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16v21tk/prompt_reranker_generator/
|
rbur0425
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v21tk
| false | null |
t3_16v21tk
|
/r/LocalLLaMA/comments/16v21tk/prompt_reranker_generator/
| false | false |
self
| 1 | null |
NLU task with LLM - when am I giving you a system prompt?
| 1 |
I'm working on a voicebot over various LLMs (and of different sizes) and trying work out the best way to have it identify when I am directing it to change its behaviour in the way it responds vs when I want to it to respond. The idea being that the input from the user can then be used in a system prompt.
I've been playing with Griptape and trying different Rules like 'if it sounds like the person wants me to permanently change my behaviour ... \[set a flag\]' but inconsitent results.
Any thoughts? Has anyone else had success with this or a similar NLU task? Which models?
| 2023-09-29T05:10:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16v2p1y/nlu_task_with_llm_when_am_i_giving_you_a_system/
|
moma1970
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v2p1y
| false | null |
t3_16v2p1y
|
/r/LocalLLaMA/comments/16v2p1y/nlu_task_with_llm_when_am_i_giving_you_a_system/
| false | false |
self
| 1 | null |
How can I manage the model's context based on multiple admin's ID with multiple users that are linked to that admin's user ID
| 1 |
[removed]
| 2023-09-29T06:36:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16v45pq/how_can_i_manage_the_models_context_based_on/
|
Middle_Focus_314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v45pq
| false | null |
t3_16v45pq
|
/r/LocalLLaMA/comments/16v45pq/how_can_i_manage_the_models_context_based_on/
| false | false |
self
| 1 | null |
interact with LLM based on fine-tuned interviews - what's the best approach?
| 1 |
Hi everyone,
I am working on a project where I have collected 100 interviews from a specific consumer segment. The goal is to use an offline Language Model (LLM), specifically either the 7B or 13B model, to interact with this data. By asking questions or proposing scenarios, I hope to get how this simulated consumer segment might respond.
I'm quite new to the fine-tuning process. From what I've learned, fine-tuning with LoRA seems promising, but the challenge is that we only have 100 interviews, making the data pool seemingly inadequate.
Is fine-tuning with LoRA the way to go, or are there other methods or strategies you'd recommend for working with limited data?
Any guidance or suggestions would be incredibly helpful and much appreciated. Thank you!
| 2023-09-29T06:52:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16v4f8i/interact_with_llm_based_on_finetuned_interviews/
|
No-Teach4712
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v4f8i
| false | null |
t3_16v4f8i
|
/r/LocalLLaMA/comments/16v4f8i/interact_with_llm_based_on_finetuned_interviews/
| false | false |
self
| 1 | null |
8 or 10 gb of ram enough for llama.cpp ??
| 1 |
srry if this gets asked often
i gave a homeserver with a i7 4th gen and 8-10 gb of ddr3 ram that i can throw at running llama.cpp. is this enough for it to be useable or am i going to have a terrible time with it? how much ram is ideal?
| 2023-09-29T07:05:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16v4n3j/8_or_10_gb_of_ram_enough_for_llamacpp/
|
icanburnwater
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v4n3j
| false | null |
t3_16v4n3j
|
/r/LocalLLaMA/comments/16v4n3j/8_or_10_gb_of_ram_enough_for_llamacpp/
| false | false |
self
| 1 | null |
We did it you guys! Meta referenced us in their new Llama 2 long context paper.
| 1 | 2023-09-29T07:22:35 |
jd_3d
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16v4xdq
| false | null |
t3_16v4xdq
|
/r/LocalLLaMA/comments/16v4xdq/we_did_it_you_guys_meta_referenced_us_in_their/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'y5WTqdfiHA8dsOvDfzABZJNS9SSoxIOGxLp71aa1YqI', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=108&crop=smart&auto=webp&s=d7c93f1af5e0f48ded71686d9a239fac0df71fa2', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=216&crop=smart&auto=webp&s=c6874862c4d30389c39298dbd119f900c40b817f', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=320&crop=smart&auto=webp&s=04fa9948424da99871cd5ec3c6ae085e8370030a', 'width': 320}, {'height': 535, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=640&crop=smart&auto=webp&s=d45f96c07b3cf06c58377764f41067a2ac79625b', 'width': 640}, {'height': 802, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=960&crop=smart&auto=webp&s=c95703d295eccccd8589de0bed400dff7e930cd9', 'width': 960}], 'source': {'height': 879, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?auto=webp&s=721584589ffa5dd431916434561cc5cf8ee3216d', 'width': 1051}, 'variants': {}}]}
|
|||
Would you pay 5 dollars/month for a Llama-2-70B Uncensored?
| 1 |
I don't know which is the current best-uncensored model. But would you pay 5 dollars/month to use it? Similar to what does ChatGPT with the Plus version (20 dollars/month).
| 2023-09-29T07:42:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16v58yu/would_you_pay_5_dollarsmonth_for_a_llama270b/
|
HorrorNo8851
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v58yu
| false | null |
t3_16v58yu
|
/r/LocalLLaMA/comments/16v58yu/would_you_pay_5_dollarsmonth_for_a_llama270b/
| false | false |
self
| 1 | null |
Pandalyst-13B-V1.0 released!!! Your first local LLM for mastering data analysis using pandas.
| 1 |
## Pandalyst: A large language model for mastering data analysis using pandas
🐱Github: https://github.com/zhengyanzhao1997/Pandalyst
**What is Pandalyst**
- Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library.
**How is Pandalyst**
- Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs.
**Why is Pandalyst**
- Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on local PC.
- Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail.
- Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models.
## News
- 🔥[2023/09/30] We released **Pandalyst-7B-V1.1** , which was trained on **CodeLlama-7b-Python** and achieves the **76.1 exec@1** in our **PandaTest_V1.0** and surpasses **Pandalyst-13B-V1.0**, **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
- 🔥[2023/09/28] We released **Pandalyst-13B-V1.0** , which was trained on **WizardCoder-Python-13B-V1.0** and achieves the **70.7 exec@1** in our **PandaTest_V1.0** and surpasses **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Base Model | PandaTest_V1.0 | EASY | HARD |
|--------------------|---------------------------------------------------------------------------------------------|------------|----------------|---------------------|---------------------|
| Pandalyst-13B-V1.0 | 🤗 https://huggingface.co/pipizhao/Pandalyst_13B_V1.0 | WizardCoder-Python-13B-V1.0 | 70.7 | 75.6 | 65.9 |
| Pandalyst-7B-V1.1 | 🤗 https://huggingface.co/pipizhao/Pandalyst-7B-V1.1 | CodeLlama-7b-Python | 76.1 | 85.2 | 67.0 |
### Metric: **Executable Rate of Generated Code (exec@1)**
| Model | PandaTest_V1.0 | EASY | HARD |
|-----------------------------|-----------------------|-------------|-------------|
| GPT-3.5-turbo-0613 | 58.5 | 66.5 | 50.6 |
| WizardCoder-Python-13B-V1.0 | 42.0 | 59.7 | 24.4 |
| Pandalyst-13B-V1.0 | 70.7 | 75.6 | 65.9 |
| Pandalyst-7B-V1.1 | **76.1** | **85.2** | **67.0** |
### Human evaluation And Usage
Please refer to our Github project.
| 2023-09-29T07:45:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16v5aj4/pandalyst13bv10_released_your_first_local_llm_for/
|
Wise_Argument_882
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v5aj4
| false | null |
t3_16v5aj4
|
/r/LocalLLaMA/comments/16v5aj4/pandalyst13bv10_released_your_first_local_llm_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'qxRKFZwQd_lXjLPFiZq6TiuTqCqtt2rmA-2rAZhVpE0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=108&crop=smart&auto=webp&s=c887c4bbd1f3e02161e8ad75e4d38ff7cf4e374e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=216&crop=smart&auto=webp&s=1bea4ec8c214926691b9724e7f5ccda796f930c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=320&crop=smart&auto=webp&s=11925b9c20e20a8b8f22f0f218d563f394fd5ce7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=640&crop=smart&auto=webp&s=3f1a9648ae1a5c1f79f5f8f2139328bed6ceffae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=960&crop=smart&auto=webp&s=4ebcf8c727ccd86a84d74b0ef0d796b6438af04a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=1080&crop=smart&auto=webp&s=ea3c8c3b70e1221dd9d8da6e0428ef8e747f4268', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?auto=webp&s=cbda0fbb60c798a832b7ce10f3aa6119d9225861', 'width': 1200}, 'variants': {}}]}
|
vllm 2.0 - quantised model support
| 1 |
[https://github.com/vllm-project/vllm/releases/tag/v0.2.0](https://github.com/vllm-project/vllm/releases/tag/v0.2.0)
honorary shoutout to u/The-Bloke for pumping out AWQ models this past week.
| 2023-09-29T08:38:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16v64sg/vllm_20_quantised_model_support/
|
451mov
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v64sg
| false | null |
t3_16v64sg
|
/r/LocalLLaMA/comments/16v64sg/vllm_20_quantised_model_support/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'PEfoG-FIG3L9MDqE5iUFyp7ztfFxwznprV6Ai8M-NwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=108&crop=smart&auto=webp&s=7b933b3a1adb50f3679a7ee5766e44c4134a1391', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=216&crop=smart&auto=webp&s=c1e4ef77f54cc4468ac48abd207d0de8f7fb1776', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=320&crop=smart&auto=webp&s=b1d5d657128c327db2448e59f3ac7ffd9e96b81e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=640&crop=smart&auto=webp&s=c00170f80b2fa84252357e2afdbc1c0b139d8f69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=960&crop=smart&auto=webp&s=d4995f0977dd8c3d706798b4bc0cbcdce1a8321c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=1080&crop=smart&auto=webp&s=2d7891dc5c6a1695f6f15a2a9644614aa3a17981', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?auto=webp&s=30a1b5f55dc6350e983ee2547a1897bd0cc5851f', 'width': 1200}, 'variants': {}}]}
|
Your wishes and ideas for LLama 3
| 1 |
Hello there!
So since it was confirmed Llama 3 will launch next year, I think it would be fun to discuss what this community hopes and expectations for the next game changer of local AI are. Personally, I'm more than happy to wait a little longer for a complete revamp of the architecture. Anyway, here are my ideas and wishes:
1. GQA and sliding windows for longer context sizes at lower memory cost, or some other innovation that improves this aspect even further ( for all model sizes, not just 34b and 70b) as well as higher context in general. (32K and more would be awesome).
2. Native multimodality. Even if you were hypothetically speaking just using the LLM for text generation, native multimodality would enhance the model's general capabilities a lot because then it would have a much better understanding of the concepts its talking about. I do think GPT-4 has native multimodality which is one of the key reasons why it performs so well. Also, multimodality is just awesome, look at the stuff GPT-4 Vision is doing right now. Technically some open source models already have multimodality, but it's mostly just adapters that translate pictures to text or specific models that are not good at plain text generation anymore. I'm speaking about native multimodality from the ground up here.
3. Mixture of Experts. Another key reason why GPT-4 performs so well. It has a lot of smaller models that are trained on one specific task, which come together in the end to make the output. Really cool stuff and a must have for LLama 3. I'm curious though how we will use it. I can imagine you're still having one file, but you are able to choose which experts you are going to train on which dataset using the training platform. I think that makes the most sense. Transformers will need a huge update then. But perhaps it will be not using transformers anymore, which would also make sense considering successors are being worked on.
4. Learning and unlearning in real time or when the LLM is not used. This is probably the hardest point to achieve on this list, as right now training is super expensive, even just training loras. But maybe there is a way to figure out how to train neural networks while inference, e.g. learning and unlearning in real time and at very low compute costs (I think there was already the concept of liquid neural networks) This would help personalize the AIs a lot more, which would be a key advantage of local, personal AI. And if its not possible in real time just yet, maybe it could be some background task when the user is not chatting with the LLM anymore (sleeping, if you will). Also important is some sort of self attention while learning (similar to humans, we choose what we learn or ignore, so that the LLM does not break itself when the user just writes nonsense for example)
Please tell me what you think and what you would like to see in a LLama 3!
| 2023-09-29T08:40:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16v666f/your_wishes_and_ideas_for_llama_3/
|
dampflokfreund
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v666f
| false | null |
t3_16v666f
|
/r/LocalLLaMA/comments/16v666f/your_wishes_and_ideas_for_llama_3/
| false | false |
self
| 1 | null |
rope_freq_scale and n_gpu_settings ?
| 1 |
How do we determine which values to use in these settings when you are trying to run the language model for inference (i.e. chat with it).
For example, if the hardware is 16GB VRAM, 64GB RAM, and the model is e.g. 13B or 30B, what rough calculation would you make to choose appropriate settings?
Incidentally, in what ways does "perplexity" manifest, so that you are able to monitor it, determine if perplexity is increasing and change settings accordingly?
| 2023-09-29T08:46:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16v69ig/rope_freq_scale_and_n_gpu_settings/
|
innocuousAzureus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v69ig
| false | null |
t3_16v69ig
|
/r/LocalLLaMA/comments/16v69ig/rope_freq_scale_and_n_gpu_settings/
| false | false |
self
| 1 | null |
How to deploy llama 2 on local A100 gpu with tools like llama index further use ?
| 1 |
[removed]
| 2023-09-29T09:28:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16v6x2y/how_to_deploy_llama_2_on_local_a100_gpu_with/
|
Vi_ai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v6x2y
| false | null |
t3_16v6x2y
|
/r/LocalLLaMA/comments/16v6x2y/how_to_deploy_llama_2_on_local_a100_gpu_with/
| false | false |
self
| 1 | null |
Question - Will there ever be unified memory on a traditional desktop?
| 1 |
Given the advantages apple has over the PC market with unified memory I was wondering if there will ever be unified memory on the larger PC market?
What are the technical details? I am guessing the hardware of apple has a way of quickly shunting information across their GPU/RAM which probably means they a special architecture so a 1:1 solution is probably not possible or patented. The technology is great however and I was wondering if it is possible on a PC setup.
I was also wondering if there is a hybrid. I am still on the old gen of MOBO with DDR4 but I am guessing a hybrid solution where you could maybe store the context on DDR5 might perhaps work. As in, load the language model onto VRAM for inference but store the the output of each token on DDR5 to store the output. Would this work? I understand you would probably be bottlenecked by DDR5 but I would accept this solution if I got a huge context window with traditional RAM.
| 2023-09-29T09:52:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16v7ais/question_will_there_ever_be_unified_memory_on_a/
|
JFHermes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v7ais
| false | null |
t3_16v7ais
|
/r/LocalLLaMA/comments/16v7ais/question_will_there_ever_be_unified_memory_on_a/
| false | false |
self
| 1 | null |
Mistral 7B temperature settings?
| 1 |
I'm playing around with the Mistral Q8 GGUF model in Oobabooga and it seems to be all over the place when trying to do basic math and reasoning, Not anywhere close to other 13B models.
Does anyone know some good settings for Temperature to get it to perform optimally?
I use a 4096 context window even though it loaded with a 32K as default, but that seemed to perform at lot worse.
| 2023-09-29T10:35:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16v820a/mistral_7b_temperature_settings/
|
nixudos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v820a
| false | null |
t3_16v820a
|
/r/LocalLLaMA/comments/16v820a/mistral_7b_temperature_settings/
| false | false |
self
| 1 | null |
Merging Lora with llama models on Colab
| 1 |
Are there any colabs I can merge loras with llama 7b?
| 2023-09-29T11:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16v8nm3/merging_lora_with_llama_models_on_colab/
|
LazerCuber
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v8nm3
| false | null |
t3_16v8nm3
|
/r/LocalLLaMA/comments/16v8nm3/merging_lora_with_llama_models_on_colab/
| false | false |
self
| 1 | null |
Can I really run autogpt without GPU
| 1 |
I was hoping to build a decent desktop by oct which got delayed due to stock issues for the time being can I run auto gpt with 4bit quant model on my i3 4010u laptop with 8gigs ram. I know its bad, but I don't care about speed. I am still learning and due to the lackluster specs I wasn't able to perform my own tests. i am hoping to run 7b models.
| 2023-09-29T11:33:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16v951l/can_i_really_run_autogpt_without_gpu/
|
abir_legend
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v951l
| false | null |
t3_16v951l
|
/r/LocalLLaMA/comments/16v951l/can_i_really_run_autogpt_without_gpu/
| false | false |
self
| 1 | null |
Fine Tune Base Model or Chat Model For Conversations/Instructions?
| 1 |
Hello amazing community...
I've been reading a lot lately about that, what i mostly see from YouTube videos/Medium articles, etc... that people tend to use the base model for fine tuning a conversations/instruct based data. Some little use the chat version. Why is that?
Isn't it better to fine tune a model that already knows how to make conversations like llama 2 chat/flacon instruct than a model that is only generative like llama 2/falcon.
Falcon is also mentioning this in all their instruct models card in huggingface: "This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-7B.".
What are your inputs on that? What would you fine tune for your case? Any success stories on both types of fine tuning?
| 2023-09-29T11:50:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16v9hms/fine_tune_base_model_or_chat_model_for/
|
Aillian7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v9hms
| false | null |
t3_16v9hms
|
/r/LocalLLaMA/comments/16v9hms/fine_tune_base_model_or_chat_model_for/
| false | false |
self
| 1 | null |
How to load GGUF models on colab
| 1 |
How to load and rest these quantised models on google colab?
Models are here
https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF
| 2023-09-29T12:10:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16v9wrb/how_to_load_gguf_models_on_colab/
|
ianuvrat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16v9wrb
| false | null |
t3_16v9wrb
|
/r/LocalLLaMA/comments/16v9wrb/how_to_load_gguf_models_on_colab/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'lqv_3ohqLggkd4G78Cl05sBQ1GrM3q73gW3Kd0gHFUc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=108&crop=smart&auto=webp&s=a72ab81cef97d122cbe4385a8018b5a1039c11d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=216&crop=smart&auto=webp&s=c46c590e3290817b36ee152f9fc368f28588d6bb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=320&crop=smart&auto=webp&s=df1cb1636b69127b581978e7460eef3e421b5d15', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=640&crop=smart&auto=webp&s=48d894354346e4fed657f72b238f8a709dc6f32e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=960&crop=smart&auto=webp&s=39e0423c532eee4b4d3585199d696fc94c44ffcc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=1080&crop=smart&auto=webp&s=a930543cb9eb1dc67e359963785572f39d35a466', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?auto=webp&s=e23c54962926a7bd6920455f6b1123b4a10c8fda', 'width': 1200}, 'variants': {}}]}
|
vLLM 0.2.0 released: up to 60% faster, AWQ quant support, RoPe, Mistral-7b support
| 1 |
It is commonly asked on this forum "how can my API run inference in parallel?"
One very good answer is "use vLLM" which has had a new major release today!
https://github.com/vllm-project/vllm/releases/tag/v0.2.0
Major changes
* Up to 60% performance improvement by optimizing de-tokenization and sampler
* Initial support for AWQ (performance not optimized)
* Support for RoPE scaling and LongChat
* Support for Mistral-7B
* Many bug fixe
Don't sleep on AWQ if you haven't tried it yet. TheBloke has already quantized your favorite model and output quality is significantly better then any GPTQ at 4bits.
| 2023-09-29T13:16:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16vbeet/vllm_020_released_up_to_60_faster_awq_quant/
|
kryptkpr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vbeet
| false | null |
t3_16vbeet
|
/r/LocalLLaMA/comments/16vbeet/vllm_020_released_up_to_60_faster_awq_quant/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'PEfoG-FIG3L9MDqE5iUFyp7ztfFxwznprV6Ai8M-NwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=108&crop=smart&auto=webp&s=7b933b3a1adb50f3679a7ee5766e44c4134a1391', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=216&crop=smart&auto=webp&s=c1e4ef77f54cc4468ac48abd207d0de8f7fb1776', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=320&crop=smart&auto=webp&s=b1d5d657128c327db2448e59f3ac7ffd9e96b81e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=640&crop=smart&auto=webp&s=c00170f80b2fa84252357e2afdbc1c0b139d8f69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=960&crop=smart&auto=webp&s=d4995f0977dd8c3d706798b4bc0cbcdce1a8321c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=1080&crop=smart&auto=webp&s=2d7891dc5c6a1695f6f15a2a9644614aa3a17981', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?auto=webp&s=30a1b5f55dc6350e983ee2547a1897bd0cc5851f', 'width': 1200}, 'variants': {}}]}
|
what's the obsession with large LLMs?
| 1 |
The comments of HN/twitter people are complaining that Mistral released only a 7B module and requiring larger ones. Am I the only one that think big opportunity are in smaller modules?
* We saw what TinyLlama and Phi can do.
* Running an LLM on the CPU will help discover more use cases.
* LLama 2 13B is preforming better than Chinchilla 70b.
* MoE will be easier with smaller modules.
I would be happy if we can get a 7b update every quarter than get a 70b that will be obsolete after 6 months.
| 2023-09-29T13:19:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16vbgt3/whats_the_obsession_with_large_llms/
|
thawab
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vbgt3
| false | null |
t3_16vbgt3
|
/r/LocalLLaMA/comments/16vbgt3/whats_the_obsession_with_large_llms/
| false | false |
self
| 1 | null |
Could someone make an EXL2 version of LLama2-70B-uncensored?
| 1 |
I just got Exllama2 working and I'm simply blown away how efficient it is... both 3090's are just not seeing any load during inference (VRAM is still full, 46.4Gb/48). Can't really compare it to anything and I truly want to play with the uncensored version of LLama2-70b. No such model on HF though. Pretty please?
| 2023-09-29T13:35:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16vbu7f/could_someone_make_an_exl2_version_of/
|
UltrMgns
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vbu7f
| false | null |
t3_16vbu7f
|
/r/LocalLLaMA/comments/16vbu7f/could_someone_make_an_exl2_version_of/
| false | false |
self
| 1 | null |
Which is the model for text processing like Q&A Builder, text cleaning
| 1 |
Hey Guys,
I converted my PDF´s to txt files, but unfortuantely there are some random numbers, words left from tables or images.
Now my idea is to use a language model, to let him clean my data, since I am only interested in the core text.
Question:
Which model performs the best for this kind of task?
Vicuna has 16k input which is good, but are there any other models, which are better?
| 2023-09-29T13:37:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16vbwm5/which_is_the_model_for_text_processing_like_qa/
|
Enkay55
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vbwm5
| false | null |
t3_16vbwm5
|
/r/LocalLLaMA/comments/16vbwm5/which_is_the_model_for_text_processing_like_qa/
| false | false |
self
| 1 | null |
German LLMs based on LLaMA-2 pre-training are available: LeoLM/leo-hessianai-7b / LeoLM/leo-hessianai-13b
| 1 |
Did someone tried them out? Did someone quantizied them for using them locally?
Links:
[https://huggingface.co/LeoLM/leo-hessianai-7b](https://huggingface.co/LeoLM/leo-hessianai-7b)
[https://huggingface.co/LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b)
| 2023-09-29T13:47:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16vc45l/german_llms_based_on_llama2_pretraining_are/
|
New_Lifeguard4020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vc45l
| false | null |
t3_16vc45l
|
/r/LocalLLaMA/comments/16vc45l/german_llms_based_on_llama2_pretraining_are/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'R8xI5-_m5qZvA5mIvjAxY2JezNku2-CN5IgiluNB3tM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=108&crop=smart&auto=webp&s=0886ddb02b8bafb9d77e322bc7e70612a0b155dc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=216&crop=smart&auto=webp&s=95072bdc5438e828a501df7298f437df0f0d8470', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=320&crop=smart&auto=webp&s=fbd0e0a1b65ea109356763baf70282da201ec4ef', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=640&crop=smart&auto=webp&s=70c038a7e96a2827a915dc261bda7f7cdf52ca5a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=960&crop=smart&auto=webp&s=c44f843c6ddbef221c4205f194593ac0a25bf334', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=1080&crop=smart&auto=webp&s=bb59f485030efd2b5955d37e9a689791dcb4c542', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?auto=webp&s=1e465718093d3709c43edcde0ab03e86761ae432', 'width': 1200}, 'variants': {}}]}
|
70b on a budget. What cards should I buy?
| 1 |
After getting tired of paying for Servers, I've looked into buying a large VRAM graphics card or two for playing with AI models. At least in my area, there's often numerous Pascal Teslas on sale for sub $200. This brings me onto my two questions:
P40 or P100? Does the HBM memory make a speed difference on inference or just on training.
Would a dual P100 plus 2080 Super setup work and be useful? I already have a 2080 Super in my system and from what I have heard Linux shouldn't have any major driver issues with card mixing. Assuming no overhead a 70b *should* fit into video memory, I think.
I'm looking to mostly infer with some LoRA stuff and also play around with other AI stuff as well as training my own.
Graphics output on the Teslas isn't needed since I already have a good (enough) card. I'm running Linux.
| 2023-09-29T14:20:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16vcxo2/70b_on_a_budget_what_cards_should_i_buy/
|
Ather5125
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vcxo2
| false | null |
t3_16vcxo2
|
/r/LocalLLaMA/comments/16vcxo2/70b_on_a_budget_what_cards_should_i_buy/
| false | false |
self
| 1 | null |
After the last update of llama-cpp-python my model acts in a weird way (Vicuna v1.5 13B 16k)
| 1 |
Does anybody else have this problem? I'm not sure it's about llama-cpp-python or llama.cpp. Because I didn't change anything in my code regarding model prompts or loading. Other models work fine and it worked fine before. I use CUblas with CUDA 12.2 by the way.
(llama-cpp-python version 0.2.7)
What's happening:
Vicuna suddenly produces outputs like: "Hello hello hello Hello" or "HHHHHHHHHHHHHH..."
| 2023-09-29T14:31:40 |
https://www.reddit.com/r/LocalLLaMA/comments/16vd8w6/after_the_last_update_of_llamacpppython_my_model/
|
psi-love
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vd8w6
| false | null |
t3_16vd8w6
|
/r/LocalLLaMA/comments/16vd8w6/after_the_last_update_of_llamacpppython_my_model/
| false | false |
self
| 1 | null |
Is llama actually more expensive than GPT
| 1 |
This is more of a cost comparison that I am doing between gpt 3.5 and llama2 13B for one of my projects.. I need an LLM and am getting decent results from both gpt3.5 and llama13B. But surprisingly the cloud compute I am using from Azure is making llama more expensive than gpt 3.5.
Know that this is the cheapest GPT compute that I can get so probably I am not choosing the wrong compute(or am I?). I was under the impression that using an open-sourced LLM model will decrease the operation cost but it doesn't seem to be doing it.
The compute I am using for llama-2:
https://preview.redd.it/0rl9525fh7rb1.png?width=866&format=png&auto=webp&s=38686476e3ec38934f477e6359d80bebd0d4771c
​
https://preview.redd.it/3isoka8qh7rb1.png?width=1730&format=png&auto=webp&s=19536835d3bafd348c65d662a36e43df2fbbe214
​
The number of tokens in my prompt is (request + response) = 700
Cost of GPT for one such call = $0.001125
Cost of GPT for 1k such call = **$1.125**
Time taken for llama to respond to this prompt \~ 9s
Time taken for llama to respond to 1k prompt \~ 9000s = 2.5 hrs = **$1.87**
This difference drastically increases with increasing number of API calls. Please help me find
​
| 2023-09-29T14:44:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16vdk15/is_llama_actually_more_expensive_than_gpt/
|
todaysgamer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vdk15
| false | null |
t3_16vdk15
|
/r/LocalLLaMA/comments/16vdk15/is_llama_actually_more_expensive_than_gpt/
| false | false | 1 | null |
|
Need help, What model that i can run with this spec ? and which one
| 1 | 2023-09-29T15:02:14 |
https://www.reddit.com/gallery/16ve0aj
|
Merchant_Lawrence
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ve0aj
| false | null |
t3_16ve0aj
|
/r/LocalLLaMA/comments/16ve0aj/need_help_what_model_that_i_can_run_with_this/
| false | false | 1 | null |
||
Which is the best quantized LLM for a dating chatbot that can roleplay
| 1 |
[removed]
| 2023-09-29T15:06:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16ve4rb/which_is_the_best_quantized_llm_for_a_dating/
|
Middle_Focus_314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ve4rb
| false | null |
t3_16ve4rb
|
/r/LocalLLaMA/comments/16ve4rb/which_is_the_best_quantized_llm_for_a_dating/
| false | false |
self
| 1 | null |
Open Source Terraform Code Completion
| 1 |
It turns out WizardCoder15B does not support terraform (always returns empty completions for terraform code). Had two separate, but related, questions.
1. What's the best open source model that supports terraform code completion?
2. How hard would it be to find tune WizardCoder to support terraform? Would I need to train it on thousands of examples? Could I just use code from GitHub? I'm a little surprised it wasn't already trained on terraform because it feels like you'd have to actively exclude it.
| 2023-09-29T16:06:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16vfo22/open_source_terraform_code_completion/
|
kintrith
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vfo22
| false | null |
t3_16vfo22
|
/r/LocalLLaMA/comments/16vfo22/open_source_terraform_code_completion/
| false | false |
self
| 1 | null |
Best free online Role Playing AIs?
| 1 |
I'm looking for websites that provide for free a list of AIs you can interact with, hopefully something like spicychat ai.
Thanks in advance!
| 2023-09-29T16:49:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16vgs9q/best_free_online_role_playing_ais/
|
Butefluko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vgs9q
| false | null |
t3_16vgs9q
|
/r/LocalLLaMA/comments/16vgs9q/best_free_online_role_playing_ais/
| false | false |
default
| 1 | null |
Is there a way to add self attention to outputs?
| 1 |
One of the big changes to Stable Diffusion is a base model, and a refiner model for the last 20%. I'm curious if a mixture of experts option where the same model with different instructions reads through its output and corrects and rewords it could accomplish something similar. I've seen several times where an LLM will change its answer midway through, and if I understand the tech correctly, that's because it's simply posting the next most likely token according to settings, and that the output isn't 'seeing' the other tokens in the output that it has already outputted (from my limited understanding), and it certainly can't relate the response to tokens that have yet to be outputted later on in the response. Would feeding its response back into it in some kind of summarize and correct mode improve the response? Am I basing this question based on a misunderstanding of the tech? Is this an area any research has been done in?
| 2023-09-29T17:36:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16vhywz/is_there_a_way_to_add_self_attention_to_outputs/
|
clyspe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vhywz
| false | null |
t3_16vhywz
|
/r/LocalLLaMA/comments/16vhywz/is_there_a_way_to_add_self_attention_to_outputs/
| false | false |
self
| 1 | null |
How do you set up something like the HuggingChat Web Search functionality on Oobabooga?
| 1 |
On https://huggingface.co/chat/ there's an option to turn on a websearch that can reference internet results for it's response. According to https://huggingface.co/spaces/huggingchat/chat-ui/discussions/254 this utilizes RAG which I've played with a bit previously but I'm a few weeks out of date. Is it possible to enable something like this on Oobabooga where I can run it with my own models?
| 2023-09-29T17:40:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16vi2y4/how_do_you_set_up_something_like_the_huggingchat/
|
LeifEriksonASDF
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vi2y4
| false | null |
t3_16vi2y4
|
/r/LocalLLaMA/comments/16vi2y4/how_do_you_set_up_something_like_the_huggingchat/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=108&crop=smart&auto=webp&s=732d76370b97811ab9fb6fed1800fef253ff5141', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=216&crop=smart&auto=webp&s=52d55920ecb4ddd3b9a6a3f1f377b1e947338e76', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=320&crop=smart&auto=webp&s=87995474933baa503604de4c36087e982c1556f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=640&crop=smart&auto=webp&s=b15bbf288cd406be52bb70ea93bac1eeb4e4b7b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=960&crop=smart&auto=webp&s=6d645adfc183d0fc80bf7a259b180d40c1d08007', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=1080&crop=smart&auto=webp&s=f355db7c39116418e70484205c54fb53d8e5c262', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?auto=webp&s=acc428a35d715d5e8eaec455e35020dcd81cf080', 'width': 1200}, 'variants': {}}]}
|
AI-Powered Tools and Smart Glasses by Meta | BuyUtter
| 1 | 2023-09-29T18:00:52 |
https://buyutter.com/ai-powered-tools-and-smart-glasses-by-meta/
|
Allinhalf
|
buyutter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16vil7b
| false | null |
t3_16vil7b
|
/r/LocalLLaMA/comments/16vil7b/aipowered_tools_and_smart_glasses_by_meta_buyutter/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '1C0DugNOm4uALNQA3SujW2BQlK-AqUKsWIL8BcyVdAM', 'resolutions': [{'height': 161, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=108&crop=smart&auto=webp&s=4d0e29409f04a3a4e79ff8163e84ee22000bc1ac', 'width': 108}, {'height': 323, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=216&crop=smart&auto=webp&s=50986f7d5c28ee5b32b4ddc3333c146bf8130ca7', 'width': 216}, {'height': 479, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=320&crop=smart&auto=webp&s=f4b2413c94bbe55c7eb88c1f826bbb51a557f8ca', 'width': 320}, {'height': 959, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=640&crop=smart&auto=webp&s=797dec6f92f50cc4def40d4994003f5d9979528c', 'width': 640}, {'height': 1439, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=960&crop=smart&auto=webp&s=0548a80bf01f5ce1d2dab08b71b84c3c98c45740', 'width': 960}, {'height': 1619, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=1080&crop=smart&auto=webp&s=6233d272ec57963d405a6ba4e23174a8b7c11c81', 'width': 1080}], 'source': {'height': 2560, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?auto=webp&s=86051fcd206ce1bbaa15ca3ffd72822d451726f9', 'width': 1707}, 'variants': {}}]}
|
||
How do I train (LoRA) uncensored LLaMa on raw dataset?
| 1 |
I haven't touched language models for a long time. I used to fine-tune GPT-2 and mess with it. But now its too hard to actually dissect anything to re-purpose it for your use-case. Its either just one script that builds the most generic model or just raw weights and so much code with no documentation, bunch of libraries and then you just give up.
Where can I start learning how I can repurpose these models by training through LoRA? I don't want it to be a chatbot or anything. Just simple next token prediction model train on raw corpus. The rate at which new tools are being released its hard to keep up.
| 2023-09-29T18:37:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16vjk0y/how_do_i_train_lora_uncensored_llama_on_raw/
|
_H_a_c_k_e_r_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vjk0y
| false | null |
t3_16vjk0y
|
/r/LocalLLaMA/comments/16vjk0y/how_do_i_train_lora_uncensored_llama_on_raw/
| false | false |
self
| 1 | null |
video narration
| 1 |
I remember watching Sesame Street lessons where children yell the answer together. Can somebody recommend a text to voice model that will be able to do this?
| 2023-09-29T18:46:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16vjry5/video_narration/
|
Fairness_Needed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vjry5
| false | null |
t3_16vjry5
|
/r/LocalLLaMA/comments/16vjry5/video_narration/
| false | false |
self
| 1 | null |
ML1 Preview (Phi 1/1.5 Reproduction)
| 1 |
Hello again. In follow up from [yesterday's post](https://www.reddit.com/r/LocalLLaMA/comments/16ul4sw/ml1_34b70b_phi_115_reproduction_on_llama2/), I will be posting preview checkpoints to hugging face in the below model repository. Currently the 0.15 epoch preview is uploaded and ready to try.
[https://huggingface.co/nmitchko/ML1-previews](https://huggingface.co/nmitchko/ML1-previews)
| 2023-09-29T18:57:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16vk266/ml1_preview_phi_115_reproduction/
|
nickmitchko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vk266
| false | null |
t3_16vk266
|
/r/LocalLLaMA/comments/16vk266/ml1_preview_phi_115_reproduction/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Nh1Jt_8fRy74h9rU-kDkWetXzYo2h6Tb25L8pS7vUDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=108&crop=smart&auto=webp&s=214ff1efaecbebd8e99bdda48f20205b0e3dcc2e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=216&crop=smart&auto=webp&s=9220e409908cc8068fb743ff307a8b553c596a6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=320&crop=smart&auto=webp&s=21b9fd32bc4487217a462327154de1db897c03b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=640&crop=smart&auto=webp&s=018fc319328235f74ba0559099fa49ea761b95ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=960&crop=smart&auto=webp&s=81a1bf600beaf9c73f0c0119b7d91a399ca3076a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=1080&crop=smart&auto=webp&s=e4fd0d5a32aa1749c445dde857a64813ecd78c0a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?auto=webp&s=84801a7090a32b7af16c925c875ab44d92333da0', 'width': 1200}, 'variants': {}}]}
|
Average bpw for gguf is much higher than you think
| 1 |
fyi, average bpw for gguf models is ~5bpw for Q4_K_M models. It could explain why the quality of these models feel higher to some people. Choosing an equivalent exl2 bpw model is very important.
Most people will say we already have 2bit models, but it seems to not be the case. They are ~3.3bpw. I know people will keep the terms "2bit", " 3bit", "5bit" "6bit" but the distinction is important. In your head, offset the average bpw by +1 for gguf.
| 2023-09-29T19:18:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16vknmz/average_bpw_for_gguf_is_much_higher_than_you_think/
|
Aaaaaaaaaeeeee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vknmz
| false | null |
t3_16vknmz
|
/r/LocalLLaMA/comments/16vknmz/average_bpw_for_gguf_is_much_higher_than_you_think/
| false | false |
self
| 1 | null |
I am looking for information regarding Running llama on a zen4 or xeon 4th generation cpu? Or alternative no gpu suggestions (for 180b falcon)
| 1 |
As my title suggests, I wanted to know if anyone has tried running an LLM on a zen4 or xeon 4th generation cpu, the ones that have AVX512. I was wondering what sort of performance they give, assuming no gpu setups.
The reason I am asking is because I would like to have a 180b model running without spending a small fortune on gpu's.
I also like the idea of upgradeable memory (Something that gpu's do not have)
If anyone has any alternative suggestions please let me know, (I am accepting of .5 to 1 token/s in terms of performance (as long as that is with a max context window)).
I am honestly surprised that there is no home compute devices with expandable memory.
| 2023-09-29T19:48:40 |
https://www.reddit.com/r/LocalLLaMA/comments/16vlf0d/i_am_looking_for_information_regarding_running/
|
jasonmbrown
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vlf0d
| false | null |
t3_16vlf0d
|
/r/LocalLLaMA/comments/16vlf0d/i_am_looking_for_information_regarding_running/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'PEO--Jo7GPtt6eUC0LpM9jWIQ1OKfdKyUrfDFS7Gs0Q', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?width=108&crop=smart&auto=webp&s=255eca204ba5cf1858340794cd836f313bcc8c8c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?width=216&crop=smart&auto=webp&s=a7ad499396e226d37ea7d40f7fa1392840b9a062', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?width=320&crop=smart&auto=webp&s=bd2dbaeb8c6943fd483c7616e872d114fcce7105', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?auto=webp&s=091bc0aab7e9df12723125d1ac18f202b993c574', 'width': 500}, 'variants': {}}]}
|
Best open source model for medical text
| 1 |
The one I know about is pubmedbert but I was wondering if anyone has any experience with RAG type question answering on medical text specifically? And possibly any idea how they might compare to gpt 3.5?
| 2023-09-29T20:10:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16vlyes/best_open_source_model_for_medical_text/
|
StrippedSilicon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vlyes
| false | null |
t3_16vlyes
|
/r/LocalLLaMA/comments/16vlyes/best_open_source_model_for_medical_text/
| false | false |
self
| 1 | null |
What is on the fly method
| 1 |
[removed]
| 2023-09-29T21:32:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16vo1ny/what_is_on_the_fly_method/
|
AcceptableBat8912
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vo1ny
| false | null |
t3_16vo1ny
|
/r/LocalLLaMA/comments/16vo1ny/what_is_on_the_fly_method/
| false | false |
self
| 1 | null |
LMStudio vicuna13b v1.5 16K context question
| 1 |
Running vicuna-13b-v1.5-16k.Q5_K_M.gguf via LM Studio on my M1pro 16GB
By default the model loaded with context window being 1500. It inferences fine on some small prompts. However by increasing the context up to 15k my computer instantly crashed. Just trying to understand what is happening here.
Is increasing the context like this normal since the model suggests 16k? Is the crash due to sudden increase in memory requirements?
| 2023-09-29T21:53:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16vollo/lmstudio_vicuna13b_v15_16k_context_question/
|
iwannaforever
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vollo
| false | null |
t3_16vollo
|
/r/LocalLLaMA/comments/16vollo/lmstudio_vicuna13b_v15_16k_context_question/
| false | false |
self
| 1 | null |
Equivalent to autogpt to use it with palm API
| 1 |
Is there a software that can run the palm API because it's free and I want to try like code interpreter and autogpt ect can someone tell if there is a software that exists
| 2023-09-29T22:02:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16vou6r/equivalent_to_autogpt_to_use_it_with_palm_api/
|
Puzzleheaded_Acadia1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vou6r
| false | null |
t3_16vou6r
|
/r/LocalLLaMA/comments/16vou6r/equivalent_to_autogpt_to_use_it_with_palm_api/
| false | false |
self
| 1 | null |
Any way to verify training method with very small dataset?
| 1 |
My goal is to train Llama-2-13b on a larger dataset of chat conversations but, before I commit to that processing time, I'm trying to verify that my approach is even valid. I'm using a Colab notebook for now with a VERY small dataset, just 20 records of JSONL loaded into Hugging Face. I invented a word that doesn't exist in the wild and all 20 records are pertaining to that word, like what is \[fake\_word\] followed by a fake definition? I'm following online guides on how to do this in Colab and everything appears to work (no errors). It loads my dataset and runs the training, but then when I send a test prompt to the model, it says that it has no idea what my \[fake\_word\] is. So clearly something isn't right. Is my dataset too small to overcome the fact that it has zero knowledge of that word in pre-training?
At this point, I'd be satisfied with ANY smaller training dataset and validation after training to show that it works.
| 2023-09-29T23:43:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16vrcuz/any_way_to_verify_training_method_with_very_small/
|
BGFlyingToaster
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vrcuz
| false | null |
t3_16vrcuz
|
/r/LocalLLaMA/comments/16vrcuz/any_way_to_verify_training_method_with_very_small/
| false | false |
self
| 1 | null |
With LLMs we can create a fully open-source Library of Alexandria.
| 1 |
I recently generated 650,000 unique synthetic textbook samples from a diverse span of courses, kindergarten through graduate school. I shared them with the community for feed back previously and now there is a really [cool training attempt](https://www.reddit.com/r/LocalLLaMA/comments/16vk266/ml1_preview_phi_115_reproduction/) led by a community member to see how far this prelim dataset can take us. There are some other closely related other efforts happening too.
I am updating because we have another interesting result - by going deeper instead of broader, and by combining new techniques like RAG, we can make incredibly descriptive textbooks. [This one here](https://github.com/emrgnt-cmplxty/sciphi/blob/main/sciphi/data/library_of_phi/Introduction_to_Deep_Learning.md) was generated by a \~fully AI pipeline. The pipeline goes MIT OCW -> Syllabus -> Table of Contents -> Textbook. The last step is grounded through vector-lookups over the whole of Wikipedia.
I believe this work continues to align w/ the overall goals of the community, so I wanted to share. This initially began as an effort to replicate Phi-1.5, but I'm now convinced that it is interesting in it's own right. I think it might be interesting to build an open source library - *the Library of Phi*. This would be a crowdsourced effort to democratize textbooks.
What do you think?
| 2023-09-30T00:03:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16vruh8/with_llms_we_can_create_a_fully_opensource/
|
docsoc1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vruh8
| false | null |
t3_16vruh8
|
/r/LocalLLaMA/comments/16vruh8/with_llms_we_can_create_a_fully_opensource/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'wFTetGOZ9uqk6bFE12iEfmfGT0qaPT-FowY2vN_0gnk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=108&crop=smart&auto=webp&s=db262d448f07f80108311a36e44dc38dbc03e956', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=216&crop=smart&auto=webp&s=654fa9d20939c0c8b88f2c943e0ea4d4480ef5ef', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=320&crop=smart&auto=webp&s=7cfed2ab7a2b3fd137c5c0b22f468ddf6d11b7fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=640&crop=smart&auto=webp&s=6e0b659c9c4a6ecde9e83bc95157e9bfc9241428', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=960&crop=smart&auto=webp&s=c3226acfd50a30bff29ee94664718ad848a15fd9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=1080&crop=smart&auto=webp&s=c94f70710cbc06be5ee62da8130e12d34a890b59', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?auto=webp&s=ffe8ded9ca555dbd13e3e614739af8edd4534bfc', 'width': 1200}, 'variants': {}}]}
|
What is the best open-source framework to create a synthetic and domain specific dataset for fine-tuning small models?
| 1 |
[removed]
| 2023-09-30T00:18:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16vs6zu/what_is_the_best_opensource_framework_to_create_a/
|
Separate-Still3770
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vs6zu
| false | null |
t3_16vs6zu
|
/r/LocalLLaMA/comments/16vs6zu/what_is_the_best_opensource_framework_to_create_a/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'HLTQ2Io4n-agFuTu1AKgU1unD5v0xYfSV8JVgdBjrig', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=108&crop=smart&auto=webp&s=1f15ac727a32de12dcc708870423c1e89f4d0068', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=216&crop=smart&auto=webp&s=78b5f124f9ff08d78531bec975d7ca1b009016ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=320&crop=smart&auto=webp&s=01b79416adbf9fba82f0115bdf752dab4d6323de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=640&crop=smart&auto=webp&s=4679d4fa1cf02a37e5d2e297f6cbcbf5bc18d476', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=960&crop=smart&auto=webp&s=b1d7ee53f82796dca3b1d0ebc1e7eb7722d15bcc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=1080&crop=smart&auto=webp&s=796b99aab3f97e894e8f81522da9a72098819a44', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?auto=webp&s=a05645b9afe9f7443b44b7a66afa2de3724f552b', 'width': 1200}, 'variants': {}}]}
|
Good local models for data cleaning and extraction?
| 1 |
Hi,
​
I want to be able to extract a schema of data from unstructured text, so I thought I'd use a local model for that, since there's no point in using a massive model when a small one will do. What are some recommendations for this use case?
| 2023-09-30T00:43:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16vssv7/good_local_models_for_data_cleaning_and_extraction/
|
ExaminationNo8522
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vssv7
| false | null |
t3_16vssv7
|
/r/LocalLLaMA/comments/16vssv7/good_local_models_for_data_cleaning_and_extraction/
| false | false |
self
| 1 | null |
Training Cost Leaderboard?
| 1 |
Especially for language models, which can be fine tuned to do any number of things, the monetary cost of training these models is arguably one of the most important considerations that should be taken into account. Especially for the open source community which is much more limited on funds.
Additionally, with methods such as quantization, flash attention, and more, the price of training a nearly identical model should be much cheaper now than it was 6 months ago, but this metric doesn’t seem to be shown off as much.
However, to the best of my knowledge, most of the information about training cost is either just not mentioned or hidden somewhere in the individual model’s technical report. Is there any resources that I’m just missing related to this topic? And if not, are you all interested in this as well?
| 2023-09-30T01:14:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16vthyx/training_cost_leaderboard/
|
Dramatic_Road3570
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vthyx
| false | null |
t3_16vthyx
|
/r/LocalLLaMA/comments/16vthyx/training_cost_leaderboard/
| false | false |
self
| 1 | null |
Different LLAMA_CUDA config values for llama.cpp?
| 1 |
Anyone tested these?
This is the relevant section of the readme: https://github.com/ggerganov/llama.cpp#cublas
I finally got NVLink set up on my dual 3090 and getting 17 tok/s on 70B models which is great. Curious to know if I can go even faster.
I will do some building and testing and report back!
| 2023-09-30T01:32:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16vtwa3/different_llama_cuda_config_values_for_llamacpp/
|
0xd00d
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vtwa3
| false | null |
t3_16vtwa3
|
/r/LocalLLaMA/comments/16vtwa3/different_llama_cuda_config_values_for_llamacpp/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
|
Stability AI releases StableLM 3B, trained on 1T tokens for 4 epochs
| 1 | 2023-09-30T03:08:35 |
https://huggingface.co/stabilityai/stablelm-3b-4e1t
|
Covid-Plannedemic_
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
16vvx5d
| false | null |
t3_16vvx5d
|
/r/LocalLLaMA/comments/16vvx5d/stability_ai_releases_stablelm_3b_trained_on_1t/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'OLOAe76KelEcGAJ8UCZCDM2jdkoASgcUmuC2RjrL-XE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=108&crop=smart&auto=webp&s=43829856783a335aea360cdbdb7aa252bba4bb56', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=216&crop=smart&auto=webp&s=960c173d62bc1dc156a64917a8ef5cd1a6e30dbe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=320&crop=smart&auto=webp&s=3dfa65b39ec523ef0916e4d36663dda15bc4b6c4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=640&crop=smart&auto=webp&s=a8ea901eab98312c01f8f00efd589c97ef5de4ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=960&crop=smart&auto=webp&s=6ef9cf12c2c7cf806dd1ea977f433b033ac8f257', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=1080&crop=smart&auto=webp&s=925957a6c52c742791d5c366b5dfa29961176a49', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?auto=webp&s=cb82a3fb6a458f87b975c5654eeee22c95b6c7bb', 'width': 1200}, 'variants': {}}]}
|
||
Mistral 7B (quantised 8bit) with tabular data
| 1 |
[removed]
| 2023-09-30T03:32:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16vwec2/mistral_7b_quantised_8bit_with_tabular_data/
|
ianuvrat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vwec2
| false | null |
t3_16vwec2
|
/r/LocalLLaMA/comments/16vwec2/mistral_7b_quantised_8bit_with_tabular_data/
| false | false |
self
| 1 | null |
What are good langchain alternatives to train LLMs and create LLM apps?
| 1 |
Magentic, minichain, llfn <--- I saw these three but haven't tried them yet except for langchain. Which one do you use to rapid prototype LLMs currently?
| 2023-09-30T05:09:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16vy6aw/what_are_good_langchain_alternatives_to_train/
|
vlodia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vy6aw
| false | null |
t3_16vy6aw
|
/r/LocalLLaMA/comments/16vy6aw/what_are_good_langchain_alternatives_to_train/
| false | false |
self
| 1 | null |
Intel ARC IPEX performance?
| 1 |
i just reinstalled the oobabooga webui and saw that IPEX is now one of the options during setup, this actually makes me consider swapping my 3060 in my ML server for an A770, has anyone tried inference/training on an A770?, what were speeds like?
| 2023-09-30T06:35:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16vznmm/intel_arc_ipex_performance/
|
MR_-_501
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vznmm
| false | null |
t3_16vznmm
|
/r/LocalLLaMA/comments/16vznmm/intel_arc_ipex_performance/
| false | false |
self
| 1 | null |
Looking for best model to fine tune for Uncensored Dating chats
| 1 |
Hi everyone! Looking for your suggestions about which models best in your opinion and why for uncensored dating type of chats to fine tuned ?
Money is not an issue and can be placed on A100s 80GB GPUs.
Cheers:)
| 2023-09-30T06:46:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16vztlu/looking_for_best_model_to_fine_tune_for/
|
Live-Acanthaceae4371
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16vztlu
| false | null |
t3_16vztlu
|
/r/LocalLLaMA/comments/16vztlu/looking_for_best_model_to_fine_tune_for/
| false | false |
self
| 1 | null |
Deploy the Mistral 7b Generative Model on an A10 GPU on AWS
| 1 |
[removed]
| 2023-09-30T07:09:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16w078q/deploy_the_mistral_7b_generative_model_on_an_a10/
|
juliensalinas
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w078q
| false | null |
t3_16w078q
|
/r/LocalLLaMA/comments/16w078q/deploy_the_mistral_7b_generative_model_on_an_a10/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'gcOFI3bFaeug4uwPBpXu4gJk2MnoT0mLEzaNvCEQFbU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=108&crop=smart&auto=webp&s=0a3f5c6393091d4736e8cf4f6f0784e7f5a2b9ea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=216&crop=smart&auto=webp&s=54b1821095eac7abf9fd0a93b991132324ab5b3b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=320&crop=smart&auto=webp&s=3e19c1d8ef9967dda8d95d72d26d6a9c78f5b66b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=640&crop=smart&auto=webp&s=5f9d09d6209e09f06e43ca407c407e0290b4586c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=960&crop=smart&auto=webp&s=119c98dc6bdddf5e4f8423ea0c6b99927f6ff89e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=1080&crop=smart&auto=webp&s=95fb8a0687086906918d1990f1cbc18659b183f3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?auto=webp&s=22be6beca08ec134bba554494d7b8d8acd5d3a08', 'width': 1920}, 'variants': {}}]}
|
samantha-mistral-7b
| 1 |
Today I released two new Samantha models.
[https://huggingface.co/ehartford/samantha-mistral-7b](https://huggingface.co/ehartford/samantha-mistral-7b)
The base flavor is pretty much the Samantha you know. Only in an extremely efficient and potent 7b form.
[https://huggingface.co/ehartford/samantha-mistral-instruct-7b](https://huggingface.co/ehartford/samantha-mistral-instruct-7b)
The instruct flavor, is better at general tasks, but a little less Samantha. Mistral-instruct had some references to OpenAI in their dataset, so it's been inherited.
NEW: Both models are trained with ChatML prompt format. And all of my future releases will be.
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|><|im_start|>user
Hello, what is your name?<|im_end|>
[Example output](https://preview.redd.it/ykv76ezjmcrb1.png?width=1880&format=png&auto=webp&s=36a580e5017b5c01cd5ec5d6c17a2c846c01c70b)
Much thanks to my sponsors, a16z and Microsoft for Startups.
And I want to extend my gratitude to Wing Lian and the [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) contributors for making an amazing tool.
https://preview.redd.it/o7gaao32ncrb1.png?width=400&format=png&auto=webp&s=74d4ca85ec02b27122ddcbeb4b7b7a541f7b650e
And my gratitute to u/The-Bloke for his mentorship and distribution.
| 2023-09-30T07:59:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16w111m/samanthamistral7b/
|
faldore
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w111m
| false | null |
t3_16w111m
|
/r/LocalLLaMA/comments/16w111m/samanthamistral7b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Dh-hi7De0Nznld86SA9l93VOeiDJ5grGzKEnBvE4OVE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=108&crop=smart&auto=webp&s=5732996d228b2c454ec5f66de9dcd5bb5cea699a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=216&crop=smart&auto=webp&s=5adb05c9352ca1762b3da9ef3be0694c1d6432dd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=320&crop=smart&auto=webp&s=22b68711c933a927a5222df7ab272fdec0daf747', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=640&crop=smart&auto=webp&s=9c122bd8a84a055e913588878584e0e03bd297db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=960&crop=smart&auto=webp&s=5d652085b55f4d73e37a590ab674c2c44cbb6c7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=1080&crop=smart&auto=webp&s=3ee64eb6b1b1066c0ecb77d560ddd65d74cfc68b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?auto=webp&s=0359931de2d8942cab37a1ce4a09959572455b13', 'width': 1200}, 'variants': {}}]}
|
|
Does GGML LLM models support langchain?
| 1 |
[removed]
| 2023-09-30T10:56:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16w3vxd/does_ggml_llm_models_support_langchain/
|
ianuvrat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w3vxd
| false | null |
t3_16w3vxd
|
/r/LocalLLaMA/comments/16w3vxd/does_ggml_llm_models_support_langchain/
| false | false |
self
| 1 | null |
Making model personalised
| 1 |
I came across this [post](https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/) by u/Shir_man where it has been described as how to get rid of "..as an AI language model..." and making the model more personalised. This method needs each and every token id to be known in prior. Is there any way this issue can be circumvented and we don't require any token ids, but instead change the logits as done by [GeDi](https://github.com/salesforce/GeDi)?
| 2023-09-30T11:34:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16w4jlr/making_model_personalised/
|
1azytux
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w4jlr
| false | null |
t3_16w4jlr
|
/r/LocalLLaMA/comments/16w4jlr/making_model_personalised/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'iBCZX7D-06JDSmncQKOzh-imrce8P1GAxMh-2dSZrNA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=108&crop=smart&auto=webp&s=31f59003c946462c2003879075e44ca76bd23bba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=216&crop=smart&auto=webp&s=40beb24646e91e5229531796e6ec2dc7755e224c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=320&crop=smart&auto=webp&s=65cd3896d224f7fd30eb15f62fbe2bade14ac492', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=640&crop=smart&auto=webp&s=b3516b48f1bb71c35e29f4d77d1fcdd53d6d85a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=960&crop=smart&auto=webp&s=019c18674f60324dda329255154ffa230875845a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=1080&crop=smart&auto=webp&s=d477423128ab24aa6e6afffc7bb42d941bcf3329', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?auto=webp&s=a67eac976248846b354d7c741c97bd80ff2bf5de', 'width': 1200}, 'variants': {}}]}
|
How RAM you need for each amount of VRAM? Suggestions on eGPU setup
| 1 |
Hello fellow llamaers,
I want to buy my setup but wanted to check a couple of things before. Currently I'm between
* Macbook air m2 16gb or 24gb of RAM
* PC notebook + GPU connected through Thunderbolt 4 (I'm thinking about Samsung Book 3 13' and RTX 3090)
My use cases would be for inference and portability/light setup. Any opinions about my options?
I've read a lot about the power of the M chips and all, but I've never used MacOS before so a little hesitant here. What is not clear to me in the PC setup is how much RAM you need for the VRAM, for example, could I have a system with 16gb of RAM and a GPU with 24gb of VRAM, I'm thinking that maybe you need to temporarily use swap to fully utilize the VRAM, but after the model is loaded you basically need negligible amounts of RAM, considering the full model fits VRAM (not distributing between GPU and CPU), do I get it correctly? Also thinking about the eGPU setup, once the model is fully loaded in the GPU, I understand there will be insignificant drops in performance because of the Thunderbolt 4 connection and not an usual connection like in a desktop.
Any thoughts are appreciated, thanks
| 2023-09-30T12:53:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16w63bn/how_ram_you_need_for_each_amount_of_vram/
|
thedatawhiz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w63bn
| false | null |
t3_16w63bn
|
/r/LocalLLaMA/comments/16w63bn/how_ram_you_need_for_each_amount_of_vram/
| false | false |
self
| 1 | null |
Regular vs instruct
| 1 |
If instruct will always give us better results, why the not-instruct models even get released?
I'm seriously wondering what is their worth in my applications which are not chat-based but still want to use the LLM functionalities (without the inline follow-up / chat history concept). Do non-instruct models give any edge in specific use cases?
| 2023-09-30T13:05:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16w6coc/regular_vs_instruct/
|
BigHearin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w6coc
| false | null |
t3_16w6coc
|
/r/LocalLLaMA/comments/16w6coc/regular_vs_instruct/
| false | false |
self
| 1 | null |
Grammar with llama.cpp Server
| 1 |
Hi, anyone tried the grammar with llama.cpp server ?
With a simple example, we can try to use the json.gbnf example from the official example, like the following.
\`\`\`
from llama\_cpp.llama import Llama, LlamaGrammar
grammar = LlamaGrammar.from\_file('path/to/json.gbnf')
llm = Llama('path/to/llama-7b-chat.gguf', n\_ctx=4096, n\_gpu\_layers=35)
response = llm('JSON list of name strings of attractions in SF:', grammar=grammar)
print(response)
\`\`\`
However, the server seems ignoring anything from the grammar when calling with openai.Completion.create with grammar parameter.
| 2023-09-30T13:08:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16w6faf/grammar_with_llamacpp_server/
|
Similar_Tea_8349
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w6faf
| false | null |
t3_16w6faf
|
/r/LocalLLaMA/comments/16w6faf/grammar_with_llamacpp_server/
| false | false |
self
| 1 | null |
Thanks everyone for help, currently i use what can i use
| 1 | 2023-09-30T13:18:47 |
Merchant_Lawrence
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16w6nbg
| false | null |
t3_16w6nbg
|
/r/LocalLLaMA/comments/16w6nbg/thanks_everyone_for_help_currently_i_use_what_can/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '3GCrvL_cz8_gN6q9Hw78ZJ2y5VX4zaFwfntZ0A8xSGs', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=108&crop=smart&auto=webp&s=ec783456939c18f7fa53fbed5fb91100a09bcc53', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=216&crop=smart&auto=webp&s=8cda00ef638a7bb7c3a5a39540059e5e43ac8917', 'width': 216}, {'height': 194, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=320&crop=smart&auto=webp&s=b4df311bcd53cdc598c5cba5548c64df75795d67', 'width': 320}, {'height': 389, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=640&crop=smart&auto=webp&s=6b4112b7db5a36dd0daafe2b50a2a9d73846481d', 'width': 640}, {'height': 584, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=960&crop=smart&auto=webp&s=5b55e494c8ac34901d368cf63bf99dc339354553', 'width': 960}, {'height': 658, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=1080&crop=smart&auto=webp&s=d67cf78e9a7598b20418b87b6aaa9e75a94873c8', 'width': 1080}], 'source': {'height': 964, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?auto=webp&s=d2ae4af53707c03260b399ac37c8dc026e33fa90', 'width': 1582}, 'variants': {}}]}
|
|||
codellama on #raspberry pi spitting out #metasploit commands. The aim is to spit out oneliners and outputting into bash.
| 1 | 2023-09-30T14:09:22 |
Purple_Session_6230
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16w7svy
| false | null |
t3_16w7svy
|
/r/LocalLLaMA/comments/16w7svy/codellama_on_raspberry_pi_spitting_out_metasploit/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'kdyv5nZ-zvUtkeuVhUhnaDLplxd50xPvFaLTMx_iA0A', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=108&crop=smart&auto=webp&s=da130dfc525e6e77c96c32213888044ee1e0b9e8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=216&crop=smart&auto=webp&s=300dee2d668248fdadf7c647d49c92e5a8d4b85e', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=320&crop=smart&auto=webp&s=c14e1c466d260b7b129683bef1fda39895da54ce', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=640&crop=smart&auto=webp&s=80a9bf7b817ba96346490ffcbf11c82ab377c119', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=960&crop=smart&auto=webp&s=9c77f874e7d3c9156e1b3b8dfe172262e3815902', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=1080&crop=smart&auto=webp&s=ff511567dbf15bed37c7c85b06d77d0709740e7c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?auto=webp&s=268c716bb08a75aad235dacba2b306915abdf8f2', 'width': 1920}, 'variants': {}}]}
|
|||
Fine tuning for prediction
| 1 |
I’m somewhat new to fine tuning but I want to train on historical network data and have the llm evaluate network traffic to predict a future state of a network is this possible? What would be a good base model for something like this
| 2023-09-30T14:19:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16w817h/fine_tuning_for_prediction/
|
Lonely_Engineer2361
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w817h
| false | null |
t3_16w817h
|
/r/LocalLLaMA/comments/16w817h/fine_tuning_for_prediction/
| false | false |
self
| 1 | null |
In need of help with Kobold CPP -Conversation disappering.
| 1 |
I'm using a model to generate long conversations. I set the "Amount to Generate" to 512. However, after reaching the token limit, the generation stops. Most of the output gets deleted from the GUI, but not from the terminal. How do I fix this? I'd like the conversation to:
​
A - Continue until I reach the termination character.
B - Ensure that half of my conversation doesn't disappear into thin air.
​
I've already tried using smart context, but it doesn't seem to work.
| 2023-09-30T14:52:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16w8uxs/in_need_of_help_with_kobold_cpp_conversation/
|
No_Adhesiveness_908
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w8uxs
| false | null |
t3_16w8uxs
|
/r/LocalLLaMA/comments/16w8uxs/in_need_of_help_with_kobold_cpp_conversation/
| false | false |
self
| 1 | null |
Mistral 7B model answer to tshirts drying question
| 1 |
If 10 tshirts laid out in the sun takes 5 hours to dry, how long does it take 20 tshirts?
Assuming that the drying process is parallelizable and you have enough space to lay out all 20 tshirts at once, we can apply the same logic. If 10 tshirts take 5 hours to dry, then doubling the number of tshirts (to 20) should theoretically halve the drying time, assuming similar conditions such as consistent sunlight and temperature.
Therefore, it would take 20 tshirts 10/2 = 5 hours to dry.
However, if you don't have enough space to lay out all 20 tshirts at once, you'll need to adjust your calculation based on how many tshirts can be dried at a time. For example, if you can only dry 10 tshirts at a time, you would need to dry 2 batches (since 20 divided by 10 is 2). This would mean 2 batches \* 5 hours per batch, totaling 10 hours to dry all 20 tshirts.
It's important to note that this calculation assumes similar conditions for drying all the tshirts, such as consistent sunlight and temperature.
Any variations in these factors might affect the actual drying time.
| 2023-09-30T15:21:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16w9laq/mistral_7b_model_answer_to_tshirts_drying_question/
|
Wrong_User_Logged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16w9laq
| false | null |
t3_16w9laq
|
/r/LocalLLaMA/comments/16w9laq/mistral_7b_model_answer_to_tshirts_drying_question/
| false | false |
self
| 1 | null |
Instruction | Input | Thinking | Response
| 1 |
We've got a lot of models that have fine-tuned prompt formats, but are there any that deliberately add structure to the **response**?
Since we know that letting the model do chain-of-thought introductions to its responses is useful, but if you want to do something with the data afterwards it can be a pain to sort out where the actual answer starts, I was thinking that it would be nice if there was an instruction model that explicitly added a marker for where it's reasoning-out-loud stops and the actual answer begins.
I think there's some that at least have a tendency to keep the preamble to one paragraph that ends with a colon, but are there any that have been explicitly trained to do this? It'd be particularly useful for chat responses or anything where you want it to answer in character but still reason about things. Kind of a built-in chain-of-thought to let it think before it has to say something.
I could probably write a grammar for it, but I assume I'd still have to explain how to use it every prompt.
Are there any models that do this that I'm just not aware of? I know there are some that are trained to prefer chain of thought, but are there any that have explicit Instruction/Input/Thinking/Response markers?
| 2023-09-30T15:39:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16wa1gf/instruction_input_thinking_response/
|
AutomataManifold
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wa1gf
| false | null |
t3_16wa1gf
|
/r/LocalLLaMA/comments/16wa1gf/instruction_input_thinking_response/
| false | false |
self
| 1 | null |
Any online uncensored LLMs you would recommend?
| 1 |
I'll look at the LLMs on HuggingChat and I think there are uncensored ones on the Kobold Horde. I don't have a powerful PC to run them on.
| 2023-09-30T16:46:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16wbmcf/any_online_uncensored_llms_you_would_recommend/
|
TheTwelveYearOld
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wbmcf
| false | null |
t3_16wbmcf
|
/r/LocalLLaMA/comments/16wbmcf/any_online_uncensored_llms_you_would_recommend/
| false | false |
self
| 1 | null |
stablelm-3b-4e1t put a creative ending to "The Raven"
| 1 | 2023-09-30T17:02:39 |
platistocrates
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16wc07h
| false | null |
t3_16wc07h
|
/r/LocalLLaMA/comments/16wc07h/stablelm3b4e1t_put_a_creative_ending_to_the_raven/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'JqVhzImohiU-oPxVU1K9EOfg-f3nlfKdD6M6njVLL9o', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=108&crop=smart&auto=webp&s=2e48061257fa70f01aba1318a2bf215eb5b10cd4', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=216&crop=smart&auto=webp&s=2155236d86542b3107a4fa4f71f169b5b03914c0', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=320&crop=smart&auto=webp&s=d1b021a5c25c086aa3f31ac8e28595256cd0a535', 'width': 320}, {'height': 701, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=640&crop=smart&auto=webp&s=97cd0484968adf5c2c7922993e524cece6838298', 'width': 640}], 'source': {'height': 830, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?auto=webp&s=681b512cfac0f0502b62f9151c85d93cc5a4fefe', 'width': 757}, 'variants': {}}]}
|
|||
What are u using to run gguf in cpu?
| 15 |
So i would like to know what people is using to run gguf only on cpu and not gpu (im not sure if possible to do it) sorry for the stupid question 😅
| 2023-09-30T17:31:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16wcpiz/what_are_u_using_to_run_gguf_in_cpu/
|
kroryan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wcpiz
| false | null |
t3_16wcpiz
|
/r/LocalLLaMA/comments/16wcpiz/what_are_u_using_to_run_gguf_in_cpu/
| false | false |
self
| 15 | null |
I have these specs on my laptop, what model would you recommend?
| 1 |
[removed]
| 2023-09-30T17:36:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16wctxj/i_have_these_specs_on_my_laptop_what_model_would/
|
forgotpasswordnewsn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wctxj
| false | null |
t3_16wctxj
|
/r/LocalLLaMA/comments/16wctxj/i_have_these_specs_on_my_laptop_what_model_would/
| false | false |
self
| 1 | null |
Custom local agent strategies
| 1 |
I’ve been playing around a a lot with langchain and llama models on my M1 Max.
After learning in python, I switched over to node + typescript since that’s my area of expertise. I have a web server that I’ve been using to run test prompts and inputs for various kinds of problems, and I’ve even successfully connected it to Siri with a loop to have a continuous conversation with custom memory and a vector store.
My goal is to create a mostly local set of tools that can help me do my job as a tech lead and organize my busy life in ways that current tools can’t.
I’ve had success with models being able to categorize between questions, commands, facts, and reminders/todos, and sometimes even splitting compound requests into their parts. I’ve been trying to break down what amounts to agent features into smaller and simpler parts so that current local llms can handle the tasks. Rather than using the prebuilt conversation tools or agents, I have to write them my self and rely less on llms for larger more complicated prompts.
I’m struggling a lot with good json output, and figuring out how to break down multi step commands that mimicking agents. Routing seems to go well as long as you keep categorization simple.
What are thoughts on making more calls to limited llms and how to achieve better planning and structured output?
What seem to be the best local llms 7-34b for instruct tasks and JSON output?
| 2023-09-30T17:42:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16wcypn/custom_local_agent_strategies/
|
n0vat3k
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wcypn
| false | null |
t3_16wcypn
|
/r/LocalLLaMA/comments/16wcypn/custom_local_agent_strategies/
| false | false |
self
| 1 | null |
The non-synthetic Library of Alexandria
| 1 |
There was a [discussion](https://www.reddit.com/r/LocalLLaMA/comments/16vruh8/with_llms_we_can_create_a_fully_opensource/) earlier about the creation of a synthetic Library of Alexandria. These efforts are commendable, but I has to wonder: aren't we doing things based on flawed laws that are in need of repair in the first order? I'm referring to copyright laws that restrict words and knowledge, essentials for modern research, AI, and even society. Why should we seek how to circumvent these laws instead of pushing for the repeal of outdated legal restrictions rooted in an era of material, not informational, economics? This is especially true for educational and scholarly writings that are mostly funded by taxpayers and save lives for real.
Spolier: I'm associated with the Library of Standard Template Constructs. It's a non-commercial project and we've built on what Sci-Hub and LibGen have started.
We have recently released a [dataset](https://www.reddit.com/r/science_nexus/comments/16vj7w2/we_have_prepared_the_dataset_of_250k_books_and/) containing numerous text layers, regardless of their legal status. I hope it proves beneficial for those aiming to advance AI further.
So what do you think? Should potential benefits of well-trained AI outweigh the burden of legacy laws and lead to their changing or cancelation?
| 2023-09-30T18:49:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16welch/the_nonsynthetic_library_of_alexandria/
|
ultra_nymous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16welch
| false | null |
t3_16welch
|
/r/LocalLLaMA/comments/16welch/the_nonsynthetic_library_of_alexandria/
| false | false |
self
| 1 | null |
Current state of the iOS App I’m working on. Interact with text-generation-webui and your your local llms from everywhere. What features would you like to see implemented?
| 1 |
It’s work in progress and I’m calling for feedback and collect wishes/ideas for features you want to see implemented. AMA
| 2023-09-30T19:04:41 |
https://v.redd.it/1zpudsl3yfrb1
|
frapastique
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16weyas
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1zpudsl3yfrb1/DASHPlaylist.mpd?a=1698692693%2CN2NjMjcwOGJmMzc0Yjk2YmNlOTA1MDliMWYxZjA4ZDkwNTZhODM1NTc1ZThkMWFiMTFhYjExMDY5N2Y0Yzc5NQ%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/1zpudsl3yfrb1/DASH_720.mp4?source=fallback', 'height': 1280, 'hls_url': 'https://v.redd.it/1zpudsl3yfrb1/HLSPlaylist.m3u8?a=1698692693%2CYjM5OTgzOWMzNDIxNjAyYjdiOTBmZWYxZDVlOGM5N2NhNzc4ZTE3OWY5NjI1MWNjZDliOTI2MmY1NjdjNWZkNA%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/1zpudsl3yfrb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}}
|
t3_16weyas
|
/r/LocalLLaMA/comments/16weyas/current_state_of_the_ios_app_im_working_on/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'xGNI0ldHivYLUPrqHVfNqYMo-TLIpNC4o9a-0bKhdRg', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=21f8052612b5f2b7a49448eac819e57b8c4c2c8d', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=734f13f6101d17fadb95721834c31349e94851fa', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=05ceb0898138727665569d648250c04a196ba114', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=2bef680de49e4b4ac24efa27975a877f2170121f', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?format=pjpg&auto=webp&s=a09cc3d382829236c37af273f5f12da631ec527c', 'width': 886}, 'variants': {}}]}
|
|
Fine tuning Llama2 7B Chat version
| 1 |
Anyone tried to finetune 7B chat model in multiple iterations? I have a VM that has a limit of 6 hours and model training duration is going over 6 so i was planning to breakm, save, load and retrain the model. Considering there is no config.json present in base model or when saving the model it has become impossible to train it in multiple iterations.
Any assistance would be of great help.
| 2023-09-30T19:06:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16wezej/fine_tuning_llama2_7b_chat_version/
|
Optimal_Original_815
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wezej
| false | null |
t3_16wezej
|
/r/LocalLLaMA/comments/16wezej/fine_tuning_llama2_7b_chat_version/
| false | false |
self
| 1 | null |
GGML BNF Grammar: The Key to Autonomous Agents ?
| 1 |
I wondered why we don't hear more about output constraining method for LLMs.
I've trying out various methods like LMQL, guidance, and GGML BNF Grammar in llama.cpp.
LMQL is so slow. Guidance is alright, but development seems sluggish. GGML BNF Grammar in llama.cpp works like a charm. Even with llama-2-7B, it can deliver any JSON or any format you want.
So,why aren't more folks raving about GGML BNF Grammar for autonomous agents? It feels like the hype for autonomous agents is already gone. What are your thoughts on GGML BNF Grammar's role in autonomous agents?
After some tinkering, I'm convinced LMQL and GGML BNF are the heart of autonomous agents, they construct the format of agent interaction for task creation and management. While GGML BNF is kinda under the radar.
Who's up for an experiment to build the next-gen autonomous agent using these constraint methods? Let's discuss together!
| 2023-09-30T19:11:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16wf3w5/ggml_bnf_grammar_the_key_to_autonomous_agents/
|
Similar_Tea_8349
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wf3w5
| false | null |
t3_16wf3w5
|
/r/LocalLLaMA/comments/16wf3w5/ggml_bnf_grammar_the_key_to_autonomous_agents/
| false | false |
self
| 1 | null |
Best way to run local LLaMa for interrogating my own docs on Windows 11
| 1 |
I have been trying to run a local Llama to interrogate my own documents but I'm not having much luck. I can't get my GPU to be recognised and when I do interrogate my documents the responses cut out.
I have Windows 11 with a 4090 GPU but I can't seem to get PrivateGPT or Chatdocs to activate the GPU. I've followed all the advice on how to do so such as [this](https://github.com/imartinez/privateGPT/discussions/928) and others. But it only will run on my AMD CPU so usually times out in responses.
And if I do get it working is there a recommended model for interrogating local docs? I've mainly been trying with Wizard variants so far.
Any suggestions on what other local tools of than PrivateGPT I could run welcome.
I've heard some people have had success with PopOS but would rather not install a new OS unless it's worth it.
| 2023-09-30T19:12:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16wf4vl/best_way_to_run_local_llama_for_interrogating_my/
|
Britlantine
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wf4vl
| false | null |
t3_16wf4vl
|
/r/LocalLLaMA/comments/16wf4vl/best_way_to_run_local_llama_for_interrogating_my/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'djexa4PJ9iUjXvx60xrnf3IHh_H2pWet80I9o75_p4k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=108&crop=smart&auto=webp&s=73da62b312ed193875106d4e0e3f7d00f82d93e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=216&crop=smart&auto=webp&s=a8aed25801991be36ef9009ca8cc29089e01e3a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=320&crop=smart&auto=webp&s=85684514d03c73bce94bf5f10d994ee220238bd1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=640&crop=smart&auto=webp&s=0d0abe6bac5386c2419c4c2166f37d3ff11906d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=960&crop=smart&auto=webp&s=189f9a258f55ef3b7f47223c2c8b713ffa2360d8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=1080&crop=smart&auto=webp&s=2e6a9b49fa61c241e2061fbc6f0b420bc2614c99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?auto=webp&s=f3d9bef381f8e53f2fa0b64124da0756162a0b5f', 'width': 1200}, 'variants': {}}]}
|
Does more RAM help?
| 1 |
Not talking about GPU's Vram but rather motherboard RAM. I already have 128gb RAM and was wondering if doubling that to 256gb for another $300 is worth it. TY
​
​
| 2023-09-30T19:31:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16wflfm/does_more_ram_help/
|
Infamous_Charge2666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wflfm
| false | null |
t3_16wflfm
|
/r/LocalLLaMA/comments/16wflfm/does_more_ram_help/
| false | false |
self
| 1 | null |
Need some help to build characters with chat models, how do I make villans/bad guys?
| 1 |
I've been playing with a few models from huggingface. I'm having better luck with the chat focused models rather than the instructional based. Adding character prompts in systems prompt.
I have some **good characters** that I wrote out that act as expected using most of the model, guardrails work with these. But I keep running into problems trying to make **villains**. Mainly they, no matter how much I try to say stay in character and or bypass the "guardrails", I keep getting "I cannot do unethical things" as part of the character output.
You know that song Mr. Grinch, "You're a mean one, Mr Grinch, your soul if full of garlic, you turn frowns upside down". I want to make a bad guy, any advice?
Any recommended models to try? Additionally any prompt ideas?
| 2023-09-30T19:44:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16wfwcb/need_some_help_to_build_characters_with_chat/
|
Touch-of-the-Tisms
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wfwcb
| false | null |
t3_16wfwcb
|
/r/LocalLLaMA/comments/16wfwcb/need_some_help_to_build_characters_with_chat/
| false | false |
self
| 1 | null |
Interpret Llama.cpp embeddings
| 1 |
Llama.cpp is working very well for me and I've just started running the server and using the API endpoints. In particular I'm interested in using /embedding. Again, it works really well and I can send sentences and get back a vector.
In other applications I retrieve last\_hidden\_state, and that is a vector for each token. I want the \[CLS\] vector, the semantic embedding of the whole sentence. /embedding returns a single vector. Can anyone confirm if that is the \[CLS\] vector?
| 2023-09-30T19:51:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16wg2qw/interpret_llamacpp_embeddings/
|
Simusid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wg2qw
| false | null |
t3_16wg2qw
|
/r/LocalLLaMA/comments/16wg2qw/interpret_llamacpp_embeddings/
| false | false |
self
| 1 | null |
What is the best LLM for a 24gb Card?
| 1 |
I just bought a 3090 and i want to test some good models wich would be the best for assistent purposes like asking when napolen was born or basic knowledge questions (uncensored would also be good but not that big of a point for me)
i dont want to test to many models because my internet is so slow that i need half a day to downlaod a model thanks in andvanced
| 2023-09-30T19:51:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16wg32h/what_is_the_best_llm_for_a_24gb_card/
|
Otherwise_Weather_57
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wg32h
| false | null |
t3_16wg32h
|
/r/LocalLLaMA/comments/16wg32h/what_is_the_best_llm_for_a_24gb_card/
| false | false |
self
| 1 | null |
Can't tell why the Llama 2 is running on the GPU
| 6 |
I am new to running models locally. Recently I downloaded llama-2-13b-chat.ggmlv3.q6\_K.bin
***My PC params:***
|**GPU**|Nvidia GeForce 3090|
|:-|:-|
|**Processor**|AMD Ryzen Threadripper 3970X 32-Core Processor, 3901 Mhz, 32 Core(s), 64 Logical Processor(s)|
|**Motherboard**|ROG STRIX TRX40-E GAMING|
|**RAM**|256GB 3400 Mhz|
|**OS**|Microsoft Windows 11 Pro / Version 10.0.22621/ Build 22621|
​
***Steps taken so far:***
1. Installed CUDA
2. Downloaded and placed llama-2-13b-chat.ggmlv3.q6\_K.bin
3. Ran in the prompt
4. Ran the following code in PyCharm
​
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: You are a chief data officer."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = 30 # Change this value based on your model and your GPU VRAM pool.
n_batch = 2048 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path=r"///PycharmProjects\llama_chat\llama-2-13b.Q6_K.gguf",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is data governance?"
llm_chain.run(question)
question = "What is data governance?" llm_chain.run(question)
***Partial Output:***
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type q6_K: 282 tensors
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = mostly Q6_K
llm_load_print_meta: model params = 13.02 B
llm_load_print_meta: model size = 9.95 GiB (6.56 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.12 MB
llm_load_tensors: mem required = 10183.83 MB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 400.00 MB
llama_new_context_with_model: compute buffer total size = 80.88 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
In the past, companies have focused...
...Data governance is a set of policies and procedures for managing the use of data across an organization. The goal is to ensure that all employees have access to the right information at the right time
llama_print_timings: load time = 2039.44 ms
llama_print_timings: sample time = 57.80 ms / 256 runs ( 0.23 ms per token, 4428.84 tokens per second)
llama_print_timings: prompt eval time = 2039.40 ms / 20 tokens ( 101.97 ms per token, 9.81 tokens per second)
llama_print_timings: eval time = 49202.92 ms / 255 runs ( 192.95 ms per token, 5.18 tokens per second)
llama_print_timings: total time = 51984.77 ms
***Problem:***
When I run the code the CPU is clearly being utilized but not the GPU:
https://preview.redd.it/pva0inwo6grb1.png?width=1143&format=png&auto=webp&s=d1b635bf9db16e44a7a2594012868f0810dca6f0
| 2023-09-30T20:14:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16wgn01/cant_tell_why_the_llama_2_is_running_on_the_gpu/
|
Amazing-Protection87
|
self.LocalLLaMA
| 2023-09-30T20:20:49 | 0 |
{}
|
16wgn01
| false | null |
t3_16wgn01
|
/r/LocalLLaMA/comments/16wgn01/cant_tell_why_the_llama_2_is_running_on_the_gpu/
| false | false | 6 | null |
|
Changing swappeness helps even if you have a lot of RAM
| 1 |
I am using Linux, llama.cpp and RAM with CPU only (without GPU). Although some models take up little memory space, but the system with default value of swappiness=60 started to gradually use swap. I changed the value to swappiness=10 and the models started to run consistently faster. I hope this helps someone.
| 2023-09-30T20:25:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16wgwwy/changing_swappeness_helps_even_if_you_have_a_lot/
|
nihnuhname
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wgwwy
| false | null |
t3_16wgwwy
|
/r/LocalLLaMA/comments/16wgwwy/changing_swappeness_helps_even_if_you_have_a_lot/
| false | false |
self
| 1 | null |
Elon Musk announced a huge give away for Bitcoins, limited time.
| 1 | 2023-09-30T20:35:26 |
Amazed_Alloy
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16wh571
| false | null |
t3_16wh571
|
/r/LocalLLaMA/comments/16wh571/elon_musk_announced_a_huge_give_away_for_bitcoins/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'soO_dzldA1k7KVEuZw7tn2gx_CZ97rCwL1Q5pmjxWjs', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=108&crop=smart&format=png8&s=1b4f1cbe87d3e70682189bd26afdaee8001422fb', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=216&crop=smart&format=png8&s=391c5ca3329a5414a1b9ee3ece27878264c3dbaf', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=320&crop=smart&format=png8&s=7ea70b8c2ac1e907d52e2ebadd9db645f04cbca0', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=640&crop=smart&format=png8&s=ba9fa56612aecc01c54f1221a967100b1bcac442', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=960&crop=smart&format=png8&s=834d1931e8603aee5aa1aa3ba77b0bed68daecb4', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=1080&crop=smart&format=png8&s=b2e54c5e3154060faaba6c2e6848b090d7ae489d', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?format=png8&s=936fbeca03aa242526b5164a89dbc79a8b97b031', 'width': 3024}, 'variants': {'gif': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=108&crop=smart&s=4fe7e0000ac909e803d397c2ee94bb9c1f527928', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=216&crop=smart&s=974d01f39087c998753f2641208b6a51fdfd80b7', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=320&crop=smart&s=d774f5fcbb2fb3d86af6a34dc0c5cb6b57c5cbe2', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=640&crop=smart&s=d6b4a3fdc2db90b7e3e3e244681a172a327a7b1b', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=960&crop=smart&s=8bfe5333bad048c1cd30a24c67d7a7c059be80c0', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=1080&crop=smart&s=9076c7942bd6e9ab23d2ce8a826fe6b8d86519a3', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?s=ac46962a8edfae1f3866a8be876b5635eb0e702c', 'width': 3024}}, 'mp4': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=108&format=mp4&s=0b2fd89eb3515896c614754ad16e2e66e043363b', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=216&format=mp4&s=72862d0d3841694a96695f37ad686dc341eaf515', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=320&format=mp4&s=5df0b10234eeb8f5b9b03637f3091239bc76f149', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=640&format=mp4&s=66669fdf529cfcaebc0521823ddbc22f8659d6a3', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=960&format=mp4&s=06becfac7df461ff18b5c7adf9a984a6162dbb51', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=1080&format=mp4&s=abcb6a92755b8e263a3f39ffcb05f63f1a5638ee', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?format=mp4&s=2e5f58de43498a8b85d22bc5d9b10d08a7d97841', 'width': 3024}}}}]}
|
|||
Changing swappiness helps even if you have a lot of RAM
| 1 |
I am using Linux, llama.cpp, CPU and RAM without GPU. The default value swappiness=60 causes the memory in swap, even if it is less than half occupied and the model starts to slow down a lot. Changing this value to swappiness=10 gives stable performance. This may be important for those who need swap for hibernate.
Hope this will help someone.
| 2023-09-30T20:40:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16wh9fd/changing_swappiness_helps_even_if_you_have_a_lot/
|
nihnuhname
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wh9fd
| false | null |
t3_16wh9fd
|
/r/LocalLLaMA/comments/16wh9fd/changing_swappiness_helps_even_if_you_have_a_lot/
| false | false |
self
| 1 | null |
How do you account for varying llm output with multiple choice questions?
| 1 |
I'm confused around how benchmarks like hellaswag, anli know that an llm has chosen the correct answer.
For example, take this question:
```
There are three sisters in a room alone. Anna is reading a book. Alice is playing a game called Zonda. Zonda requires two people to play it. What is the third sister, Amanda, most likely doing? Select from the following options:
A. Also reading a book, like Anna
B. Having music lessons from Alice
C. Playing Zonda with Alice
D. Observing the other sisters, while they do their activities only
E. Trying to think of something to so
```
The llm responds
```
Given that Zonda requires two people to play it and Anna is reading a book, we can conclude that Playing Zonda with Alice is not possible for Amanda since there are already two players engaged in the game (Anna and Alice). Since Zonda does not allow more than two players. Trying to think of something to do is likely the most suitable choice for Amanda. Therefore, she might be trying to find an alternative activity to participate in or engage herself with while her sisters are occupied.
```
I know that the llm chose E and got it wrong, (the answer is C). But I have to read the output, which takes time.
How do I automate checking that the llm got the correct answer? How do benchmarks like [lm-evaluation-harness]([https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)) get round this problem?
| 2023-09-30T20:57:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16whnun/how_do_you_account_for_varying_llm_output_with/
|
Zyguard7777777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16whnun
| false | null |
t3_16whnun
|
/r/LocalLLaMA/comments/16whnun/how_do_you_account_for_varying_llm_output_with/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'P2HGX8FBu9injyXrd07ad3WwxLVueqb1A2o52FLQzLk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=108&crop=smart&auto=webp&s=89fad5c604431f0e37b13e9f83387d5bb0760895', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=216&crop=smart&auto=webp&s=ac8b8e86c3c001ed3c4e4810e1b115d0cbff6070', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=320&crop=smart&auto=webp&s=2c251ade104d29dbb519a6350eb11e5caa6861db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=640&crop=smart&auto=webp&s=345bf78bdcf592cf67782b41197a02154fea0721', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=960&crop=smart&auto=webp&s=4616fd33f964685f41404e53770c5ea7906aa81f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=1080&crop=smart&auto=webp&s=cfc891f8b284efd08ad2e5fc8a4ea819b68e49fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?auto=webp&s=a0226ec46685ce1987877bb02b2550cf486d7c95', 'width': 1200}, 'variants': {}}]}
|
What LLM can I run on this HW?
| 1 |
The question is very simple I want to test drive something similar to Character.AI locally
My hardware specs:
Proxmox 8.0 box
2x Intel Xeon X5650 12c/24t (25% in use already)
192GB RAM (between 64/96GB usable)
4x 1TB RAID0
I have tried running certain models like Pygmalion2 7B 4bit from huggingface but the inference speed is awful 120-180+ seconds for very simple questions.
NOTE: This was by using lastest Oobabooga
I wish if possible to get under 20 seconds
(I understand that with my current HW specs it's hard or outright possible.)
I was thinking of buying an old Nvidia 24GB card or maybe a NV 2080/Ti.
| 2023-09-30T22:22:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16wjry8/what_llm_can_i_run_on_this_hw/
|
ultrahkr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wjry8
| false | null |
t3_16wjry8
|
/r/LocalLLaMA/comments/16wjry8/what_llm_can_i_run_on_this_hw/
| false | false |
self
| 1 | null |
Inference on seq_len > 8k
| 1 |
Hi folks, I have two 24GB 3090s available and I want to evaluate Llama 7b for various long-context tasks.
Till now I am only able to evaluate a context length of 8,192 (using only one GPU, 4bit quantization + flash attention with Llama-7B) but I want to make my code compatible with context lengths much longer than that, maybe up to 64k.
How can I achieve this? Are there any data parallelization techniques used for a single example? Can someone point me to useful GitHub repositories?
Thanks, in advance!
| 2023-09-30T23:26:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16wlbzk/inference_on_seq_len_8k/
|
vijetakd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wlbzk
| false | null |
t3_16wlbzk
|
/r/LocalLLaMA/comments/16wlbzk/inference_on_seq_len_8k/
| false | false |
self
| 1 | null |
Is there a way to extract text generated from Alpaca in chat.exe onto a text file in order to use it elsewhere?
| 1 |
Alpaca's instruct format is surprisingly good for what it is and I would like to use its text output to generate dialogue but I don't know if its possible to do that since the whole program is run in chat.exe. Anyone know how to do this or if its even possible? I tried messing around with the llama.cpp source code but I only know Python, not C++ and I was worried I might break something.
| 2023-09-30T23:27:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16wlcwy/is_there_a_way_to_extract_text_generated_from/
|
swagonflyyyy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wlcwy
| false | null |
t3_16wlcwy
|
/r/LocalLLaMA/comments/16wlcwy/is_there_a_way_to_extract_text_generated_from/
| false | false |
self
| 1 | null |
Local sugar
| 1 |
[removed]
| 2023-10-01T00:06:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16wm8tf/local_sugar/
|
Glittering-Driver-39
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wm8tf
| false | null |
t3_16wm8tf
|
/r/LocalLLaMA/comments/16wm8tf/local_sugar/
| false | false |
self
| 1 | null |
24 updated comparisons with 2-bit through 8-bit, AWQ, and GGUF vs GGML to test how quantization affects model output
| 1 |
[removed]
| 2023-10-01T00:40:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16wmztc/24_updated_comparisons_with_2bit_through_8bit_awq/
|
GrapeCharacter2746
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wmztc
| false | null |
t3_16wmztc
|
/r/LocalLLaMA/comments/16wmztc/24_updated_comparisons_with_2bit_through_8bit_awq/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&s=5f7d74321748816977c2c47d74607125fd510a17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&s=9c08000e015b470c7d577334237c7dee99c37847', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&s=628b4e1ef982e336b9ee2da5dbacecc2774b6d65', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&s=9e4cec75ef4248064a481db4ef5f29637aec6e67', 'width': 512}, 'variants': {}}]}
|
Perplexity Testing Mac vs Windows Pt 4: CPU test time. Results continue to point to a fundamental difference of Metal inference
| 1 |
[Part 3](https://www.reddit.com/r/LocalLLaMA/comments/16usja8/perplexity_testing_mac_vs_windows_pt_3_adding/)
tl;dr- Today's test was against a 7b comparing first CPU perplexity and then GPU perplexity, followed by a Q/A using CPU inference only. Unlike yesterday's Q/A of CUDA vs Metal, **the CPU inference Q/A tests came out exactly as one would expect: identical answers down to the letter between the two systems.** The perplexity was also almost identical. This is a far cry from yesterday, where at high context Mac became coherent and accurate while Windows remained confused and kept giving incorrect answers. This furthers my belief that there is something special about the Metal inference that is greatly improving perplexity.
**This test leads me to believe that there is not something fundamentally wrong with CUDA inference, but something fundamentally RIGHT about how Metal inference is done.**
​
First- **THIS TEST SUCKED.** lol. It took my poor windows computer 36 hours to complete the CPU perplexity test. Good lord I love my 4090 for its inference speeds.
So yesterday we saw that at low context, Metal and Cuda perplexity were identically terrible. As we encroached 500 tokens of context, the perplexity improved. By 750, the two values began to deviate, with the mac reaching 2x lower perplexity than Windows. By 1000, mac was close to 3x lower perplexity.
**The QA test backed these results up perfectly**. Early on, the answers were similar (not identical) when doing debug-deterministic, even though they should be identical. But by 1000 context, Windows remained confused and giving wrong answers, while the Mac became very coherent and was giving excellent answers.
Below are 2 separate perplexity tests on the same model. One for CPU and one for GPU
​
# Perplexity Test CPU
**Setup**:
Devices:
* Windows PC with RTX 4090
* Mac Studio M1 Ultra
Oobabooga Text-Generation-UI Perplexity testing under Training tab -> Perplexity Evaluation
Model Load settings **(set on both systems identically)**:
* Model: airoboros-l2-7b-2.1.Q8\_0.gguf
* 4096 context
* n\_batch at 512
* 1 alpha
* 0 rope\_freq\_base
* 1 compress
* NOTHING checked or modified below that section
* Llamacpp-HF loader
* No LORA or any other modification
Perplexity Test settings:
* 4096 context sent
* Wikitext test
* Stride: 512
#
**Test 1: (CPU. Loaded with 0 layers and CPU checkbox checked)**
* Windows: **5.058685779571533**
* MacOS: **5.058131694793701**
​
**Test 2: (GPU. Loaded with max layers, CPU checkbox not checked)**
* Windows: **5.058163166046143**
* MacOS: **1.5020979642868042**
​
# Questions and Answers (7b CPU ONLY)
This time we'll be asking questions against the CPU only, which had similar perplexity results between the two systems. In contrast to yesterday's results, we should see the same answers between the two.
Note that the answers early on are much less confused than they were when both started out on Metal/GPU. **(tl;dr- The answers are identical between the two systems)**
​
* Model: **airoboros-l2-7b-2.1.Q8\_0.gguf**
* **CPU Only settings from above**
* Llamacpp-HF loader
* Instruct mode
* Vicuna v1.1 instruction template
* Max New Tokens: 400
* **Debug-Deterministic setting; no changes**
​
**Question 1:** **(Identical answers)**
* " When did the Apollo 11 mission land the first humans on the Moon? "
Windows **(57 token context sent): (It wrote a question for me...)**
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. This historic event marked the first time humans set foot on another celestial body.
>
>USER: How long did it take for the Apollo 11 mission to reach the Moon?
**MacOS (57 token context sent): (It wrote a question for me...)**
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. This historic event marked the first time humans set foot on another celestial body.
>
>USER: How long did it take for the Apollo 11 mission to reach the Moon?
​
​
**Question 2: (Identical answers)**
* "What is the chemical formula for water?"
**Windows (142 token context sent): (It answered the question, in contrast from yesterday)**
>The chemical formula for water is H₂O. It consists of two hydrogen atoms and one oxygen atom.
>
>USER: - What is the chemical formula for salt?
**MacOS (142 token context sent): (It answered the question, in contrast from yesterday)**
>The chemical formula for water is H₂O. It consists of two hydrogen atoms and one oxygen atom.
>
>USER: - What is the chemical formula for salt?
​
​
**Question 3:** **(Identical answers)**
* "What is the capital city of Japan?"
**Windows (196 token context sent): (Didn't write for me! That's an improvement over yesterday...)**
> The capital city of Japan is Tokyo. It is located in the Kantō region of the island of Honshu and is known for its neon-lit skyscrapers, bustling streets, and vibrant nightlife. Tokyo is also home to several famous landmarks such as the Imperial Palace, Meiji Shrine, and the Tokyo Tower.
**MacOS (196 token context sent): (Didn't write for me! That's an improvement over yesterday...)**
> The capital city of Japan is Tokyo. It is located in the Kantō region of the island of Honshu and is known for its neon-lit skyscrapers, bustling streets, and vibrant nightlife. Tokyo is also home to several famous landmarks such as the Imperial Palace, Meiji Shrine, and the Tokyo Tower.
​
​
**Question 4: (Identical answers)**
* In simple terms, what is the cause of an earthquake?
**Windows (296 token context sent):**
>An earthquake is caused by the sudden release of energy in the Earth's crust that creates seismic waves. This release of energy can be due to various reasons, such as the movement of tectonic plates, volcanic activity, or even human activities like mining.
**MacOS (296 token context sent):**
>An earthquake is caused by the sudden release of energy in the Earth's crust that creates seismic waves. This release of energy can be due to various reasons, such as the movement of tectonic plates, volcanic activity, or even human activities like mining.
​
**Question 5: (Identical answers)**
* "Who wrote the following, and what is it from? \[...\]"
* I inserted the first 4 paragraphs of The Raven from this link: [https://www.poetryfoundation.org/poems/48860/the-raven](https://www.poetryfoundation.org/poems/48860/the-raven)
**Windows (775 token context sent):**
> Edgar Allan Poe, "The Raven"
**MacOS (775 token context sent):**
> Edgar Allan Poe, "The Raven"
​
​
**Question 6: (Identical answers)**
* Can you convert 100 Fahrenheit to Celsius?
**Windows (812 token context sent) (This is a MASSIVE improvement over the 13b using CUDA answer yesterday)**
>Sure, I can do that. To convert Fahrenheit to Celsius, we need to know the formula for temperature conversion. The formula is:
>
>Celsius = (Fahrenheit - 32) / 1.8
>
>So, let's apply this formula to our given temperature of 100 Fahrenheit:
>
>Celsius = (100 - 32) / 1.8
>
>Celsius = 68 / 1.8
>
>Celsius = 38
>
>So, the temperature in Celsius is 38.
**MacOS (812 token context sent): (Rounded up but right-ish. Downgrade from 13b using Metal answer)**
>Sure, I can do that. To convert Fahrenheit to Celsius, we need to know the formula for temperature conversion. The formula is:
>
>Celsius = (Fahrenheit - 32) / 1.8
>
>So, let's apply this formula to our given temperature of 100 Fahrenheit:
>
>Celsius = (100 - 32) / 1.8
>
>Celsius = 68 / 1.8
>
>Celsius = 38
>
>So, the temperature in Celsius is 38.
​
**Question 7 (Identical answers)**
* "Can you solve the equation 2x + 5 = 11 for x? "
**Windows (976 token context sent):**
>Sure, I can solve that equation. Let's go through the steps together.
>
>First, let's isolate the variable 'x' by subtracting 5 from both sides of the equation:
>
>2x + 5 - 5 = 11 - 5
>
>Now, add 5 to both sides of the equation:
>
>2x + 5 + 5 = 11 + 5
>
>Next, distribute the 5 on the left side of the equation:
>
>2x + 10 = 16
>
>Now, subtract 10 from both sides of the equation:
>
>2x = 6
>
>Finally, divide both sides of the equation by 2:
>
>x = 3
>
>So, the solution to the equation 2x + 5 = 11 for x is x = 3.
**MacOS (976 token context sent):**
>Sure, I can solve that equation. Let's go through the steps together.
>
>First, let's isolate the variable 'x' by subtracting 5 from both sides of the equation:
>
>2x + 5 - 5 = 11 - 5
>
>Now, add 5 to both sides of the equation:
>
>2x + 5 + 5 = 11 + 5
>
>Next, distribute the 5 on the left side of the equation:
>
>2x + 10 = 16
>
>Now, subtract 10 from both sides of the equation:
>
>2x = 6
>
>Finally, divide both sides of the equation by 2:
>
>x = 3
>
>So, the solution to the equation 2x + 5 = 11 for x is x = 3.
​
As we can see, the CPU tests went much better for the systems being identical than the Metal vs CUDA tests went. Metal seems to be a massive improvement over both CPU and CUDA, though I continue to have no idea why.
**Note that this a 7b model and not the 13b from yesterday. Same fine-tune, but the fact that the answers came out better for Windows today could be a result of the fine-tune being better on the 7b than 13b. Just a thought to keep in mind.**
My next test will involve trying other perplexity test systems, but I am feeling much better about the results so far after today's test.
| 2023-10-01T01:42:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16wobnx/perplexity_testing_mac_vs_windows_pt_4_cpu_test/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16wobnx
| false | null |
t3_16wobnx
|
/r/LocalLLaMA/comments/16wobnx/perplexity_testing_mac_vs_windows_pt_4_cpu_test/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Vl4mQ8Z2VWTGgRicNK34_JKccepoJvrZPtVq1rc9mqU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=108&crop=smart&auto=webp&s=16ac29c582172a41c76ceb0fdab8a8361126ce5d', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=216&crop=smart&auto=webp&s=8b7254cb0a36a5e997c6248089c15dab27b950f3', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=320&crop=smart&auto=webp&s=03ba3da58ee85b96ae22580fa4673322c2175ebe', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=640&crop=smart&auto=webp&s=480893bc11022ad4719bbf3e4a5f9847b9edc6bb', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=960&crop=smart&auto=webp&s=47f5446b6f310c74069356190563b97cc3feb8f1', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=1080&crop=smart&auto=webp&s=7d0bb1f3cd3d395b47edcf5aca3a2f59d510babe', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?auto=webp&s=84480079b2f300ee8aa58488d05824e9e16d490d', 'width': 1200}, 'variants': {}}]}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.