title (string, 1–300) | score (int64, 0–8.54k) | selftext (string, 0–40k) | created (timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, ⌀) | url (string, 0–878) | author (string, 3–20) | domain (string, 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646–1.8k, ⌀) | name (string, 10) | permalink (string, 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213) | ups (int64, 0–8.54k) | preview (string, 301–5.01k, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Any good uncensored LLM models for blood and gore?
| 1 |
I wanted ChatGPT to write a horror story, but it will not include blood or gore. Does anyone know a model that is willing to write gore in a horror story?
| 2023-09-08T21:39:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16dn3rm/any_good_uncensored_llm_models_for_blood_and_gore/
|
ryan7251
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dn3rm
| false | null |
t3_16dn3rm
|
/r/LocalLLaMA/comments/16dn3rm/any_good_uncensored_llm_models_for_blood_and_gore/
| false | false |
self
| 1 | null |
How can OpenAI, Azure, etc. generate text so fast?
| 1 |
I am currently working on a research project where we are using GPT-3.5 heavily to process text.
When processing our text with Azure and GPT-3.5 Turbo, we get insane processing speeds of around 7 texts per second.
I just played around with Llama 2 70B on 2x A100 80GB in 8-bit with bf16 and got only 0.6 texts per second. I am using Hugging Face and wrote a standard script that tokenizes in batches and passes those batches to `model.generate(input_ids)`. (Different batch sizes make no difference in processing speed 🤷♂️)
How do these guys make their models, which are larger than Llama 2 70B, generate so fast? They are more than 10x faster, including the API call overhead, and I am using some of the best GPUs with NVLink.
Any tips and tricks for making my "local" Llama faster without quantizing it into uselessness?
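An illustrative guess at the gap (not a measurement of OpenAI's stack): hosted APIs use continuous/in-flight batching, where finished requests are swapped out mid-generation, while a plain `model.generate` batch sits until its longest sequence is done. A toy sketch of the difference in decode steps, with made-up numbers:

```python
# Toy comparison of static vs. continuous batching throughput.
# Numbers are illustrative, not measurements of any real system.

def static_batch_steps(lengths, batch_size):
    """Each batch runs until its longest request finishes."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps

def continuous_batch_steps(lengths, batch_size):
    """Finished requests are replaced immediately, so no slot idles.
    With perfect slot reuse, steps ~ total tokens / batch size,
    but never fewer than the longest single request."""
    total = sum(lengths)
    return max(-(-total // batch_size), max(lengths))

lengths = [32, 512, 64, 512, 16, 480, 48, 500]  # generated tokens per request
print(static_batch_steps(lengths, 4))      # slots idle while waiting
print(continuous_batch_steps(lengths, 4))  # far fewer decode steps
```

Serving frameworks such as vLLM or TGI implement this kind of scheduling, which is usually the first thing to try before quantizing.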
| 2023-09-08T21:52:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16dng1w/how_can_openai_azure_etc_generate_text_so_fast/
|
Blo4d
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dng1w
| false | null |
t3_16dng1w
|
/r/LocalLLaMA/comments/16dng1w/how_can_openai_azure_etc_generate_text_so_fast/
| false | false |
self
| 1 | null |
Best Cloud GPU Provider?
| 1 |
Hello, I'm looking to train an LLM with 4+ A100s or 4+ H100s. Which cloud providers are the best? [Vast.ai](https://Vast.ai) and Lambda seem to have had no availability recently.
| 2023-09-08T21:58:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16dnlq8/best_cloud_gpu_provider/
|
ZealousidealBlock330
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dnlq8
| false | null |
t3_16dnlq8
|
/r/LocalLLaMA/comments/16dnlq8/best_cloud_gpu_provider/
| false | false |
self
| 1 | null |
Has anyone run on a cluster of GPUs?
| 1 |
I was wondering if anyone has tried running a model on a cluster of lower-cost GPUs to get the performance of a single, much more expensive card?
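People usually do this with pipeline or tensor parallelism: the model's layers are split across cards, so no single GPU needs all the VRAM. A toy sketch of the even layer split that tools like accelerate's `device_map="auto"` or llama.cpp's `--tensor-split` perform (the function name here is made up):

```python
def split_layers(n_layers, n_gpus):
    """Assign transformer layers to GPUs as evenly as possible,
    giving earlier GPUs one extra layer when it doesn't divide evenly."""
    base, extra = divmod(n_layers, n_gpus)
    return [base + (1 if i < extra else 0) for i in range(n_gpus)]

print(split_layers(80, 3))  # e.g. Llama-2-70B's 80 layers on 3 cards
```

The caveat: you get the combined memory of the cluster, but not the latency of a single big card, since the split layers still run in sequence for each token.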
| 2023-09-09T00:13:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16dqwx2/has_anyone_ran_on_a_cluster_of_gpus/
|
Overall-Importance54
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dqwx2
| false | null |
t3_16dqwx2
|
/r/LocalLLaMA/comments/16dqwx2/has_anyone_ran_on_a_cluster_of_gpus/
| false | false |
self
| 1 | null |
I run a free newsletter where I cover projects (mainly OSS ones), papers, and resources about LLMs
| 1 |
Would be great to get some feedback; I'm continuously trying to improve. Note that I do this voluntarily alongside my full-time job as a machine learning engineer/AI researcher. It has about 3K subscribers so far.
| 2023-09-09T00:27:51 |
https://codingwithintelligence.com/
|
ricklamers
|
codingwithintelligence.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16dr8ev
| false | null |
t3_16dr8ev
|
/r/LocalLLaMA/comments/16dr8ev/i_run_a_free_newsletter_where_i_cover_projects/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'O2wjSAnztvMPmhFuZxkQkYzxIEMfm7Etp8TYRiejIug', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GGdTh8pzm0sQghh16Q64NP_SecueMWIqsSSzc0pH6Lw.jpg?width=108&crop=smart&auto=webp&s=511361288488cd3d115882a81c32166c81d33df6', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/GGdTh8pzm0sQghh16Q64NP_SecueMWIqsSSzc0pH6Lw.jpg?width=216&crop=smart&auto=webp&s=ecfb6881b0bf3eb8a4734849c7567da60c1c07b1', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/GGdTh8pzm0sQghh16Q64NP_SecueMWIqsSSzc0pH6Lw.jpg?width=320&crop=smart&auto=webp&s=996b58872e76df3e976bb575e125deb3c556e74c', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/GGdTh8pzm0sQghh16Q64NP_SecueMWIqsSSzc0pH6Lw.jpg?width=640&crop=smart&auto=webp&s=a59619c1e02893ae72b37f11d8d17a390b233787', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/GGdTh8pzm0sQghh16Q64NP_SecueMWIqsSSzc0pH6Lw.jpg?auto=webp&s=0457329c1506a30d967c83f651216ab8ca96ccda', 'width': 920}, 'variants': {}}]}
|
|
Petals for Falcon 140b
| 1 |
I'm looking to form a private network/hive of individuals with decent compute power for running Falcon 140b using Petals.
| 2023-09-09T00:31:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16drbeu/petals_for_falcon_140b/
|
NickDifuze
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16drbeu
| false | null |
t3_16drbeu
|
/r/LocalLLaMA/comments/16drbeu/petals_for_falcon_140b/
| false | false |
self
| 1 | null |
Build applications around petals
| 1 |
I am totally new to the AI space, so this might sound dumb, but I have a Telegram chatbot that I want to run a version of Llama 2 behind. I can't run it locally because I'm limited in terms of hardware, and I'd also rather not spend much on hosting services like RunPod right out of the gate, since I want to run a 65B+ parameter model 24/7. So I looked a little into Petals: is it possible to consume an LLM on Petals through an API from my local Python code, so I can customize the initial prompt template for my Telegram chatbot? I'm a newbie; any suggestions or guidance are appreciated. Thanks!
| 2023-09-09T00:49:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16drpit/build_applications_around_petals/
|
NickDifuze
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16drpit
| false | null |
t3_16drpit
|
/r/LocalLLaMA/comments/16drpit/build_applications_around_petals/
| false | false |
self
| 1 | null |
Uni-TianYan GGUF
| 1 |
Has anyone used the LLM that is currently at the top of the Open LLM Leaderboard? According to the leaderboard, this one, ORCA_LLaMA_70B_QLoRA, and Platypus2-70B-instruct are better than Falcon 180B? I haven't converted these to GGUF to test them yet. Has anyone experimented with them?
| 2023-09-09T01:42:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16dstz7/unitianyan_gguf/
|
ihaag
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dstz7
| false | null |
t3_16dstz7
|
/r/LocalLLaMA/comments/16dstz7/unitianyan_gguf/
| false | false |
self
| 1 | null |
is oobabooga working with code llama?
| 1 |
I can't seem to get it to load. I am new to oobabooga.
| 2023-09-09T01:56:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16dt56p/is_oobabooga_working_with_code_llama/
|
Avansay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dt56p
| false | null |
t3_16dt56p
|
/r/LocalLLaMA/comments/16dt56p/is_oobabooga_working_with_code_llama/
| false | false |
self
| 1 | null |
NVIDIA TensorRT-LLM
| 1 |
[https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/](https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/)
Optimized kernels, multi-GPU/multi-node communication primitives, in-flight batching, paged attention, quantization, and more.
If it really has all of these and is free, that's big news.
| 2023-09-09T02:48:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16du7xj/nvidia_tensorrtllm/
|
dreamingleo12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16du7xj
| false | null |
t3_16du7xj
|
/r/LocalLLaMA/comments/16du7xj/nvidia_tensorrtllm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'p1I7IPZvbu5sfouZQ8mXUOrgX-EZEfwkNdmZgzPjjaA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?width=108&crop=smart&auto=webp&s=f216939d8e89557fb8d3933be0006e614a0157e6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?width=216&crop=smart&auto=webp&s=bdce5077e7e0147ee0be419f456834de274077a5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?width=320&crop=smart&auto=webp&s=02fff64214c03986bb5f3aa1a146eeda908d71d3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?width=640&crop=smart&auto=webp&s=eca1ea43d0e6594fa4ab177a9459b49cdf12462c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?width=960&crop=smart&auto=webp&s=ad80242740fb5483f640b144d5139aabb762134c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?width=1080&crop=smart&auto=webp&s=34cd94fe66dc6c4624342fa5f878caafb28e6150', 'width': 1080}], 'source': {'height': 1152, 'url': 'https://external-preview.redd.it/hYsBKuv60WNA9mGma6Vk6w9LAc0T29NsMrlQDpl5-WU.jpg?auto=webp&s=ce2571f28fa1a8e29c6370ee9dc3063855956638', 'width': 2048}, 'variants': {}}]}
|
i5, 128gb ram, rtx 2070, codellama 7b @ .7tokens/s
| 1 |
[removed]
| 2023-09-09T03:36:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16dv73c/i5_128gb_ram_rtx_2070_codellama_7b_7tokenss/
|
Avansay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dv73c
| false | null |
t3_16dv73c
|
/r/LocalLLaMA/comments/16dv73c/i5_128gb_ram_rtx_2070_codellama_7b_7tokenss/
| false | false |
self
| 1 | null |
The fp16 model is really slow at 0.7 tokens/s.
| 1 |
Sorry, I'm new to this.
I'm using text-generation-webui to load models, and I download models from Hugging Face.
I know it's faster to load models with ExLlama; in my case it speeds things up to 20 tokens/s.
But I can't load an fp16 model through the ExLlama loader.
It can only load GPTQ models, i.e. models with the .safetensors suffix.
If I use the transformers loader for an fp16 model, it's only 0.7 tokens/s.
And most of the models on Hugging Face are fp16.
Is this normal speed, or am I setting something up wrong when loading fp16 models?
My PC:
Intel Core i9-13900K processor
RTX 4070 Ti 12GB graphics card
64GB DDR5 RAM
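That speed is plausibly normal for this setup: fp16 weights take about 2 bytes per parameter, so a 7B model alone wants roughly 14 GB, more than the 4070 Ti's 12 GB, and the transformers loader then spills layers into system RAM. A back-of-envelope sketch (ignores KV cache and activations):

```python
def fp16_weight_gb(n_params_billion):
    # 2 bytes per parameter at fp16; ignores KV cache and activations
    return n_params_billion * 2

for n in (7, 13):
    need = fp16_weight_gb(n)
    print(f"{n}B fp16 needs ~{need} GB; fits in 12 GB VRAM: {need <= 12}")
```

This is also why ExLlama with a 4-bit GPTQ quant (roughly 4 GB of weights for a 7B model) fits entirely in VRAM and hits 20 tokens/s.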
| 2023-09-09T04:18:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16dw0ks/the_fp16_model_is_really_slow_at_07_tokenss/
|
Negative-Train-3338
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dw0ks
| false | null |
t3_16dw0ks
|
/r/LocalLLaMA/comments/16dw0ks/the_fp16_model_is_really_slow_at_07_tokenss/
| false | false |
self
| 1 | null |
What did I do wrong? codellama-python is underwhelming.
| 1 |
Here is my test prompt, and it took 8 minutes to generate 13 tokens:
https://preview.redd.it/29mrhu8ec6nb1.png?width=811&format=png&auto=webp&s=0c3f2e85b0b2f0e05ccbd5c6c4621540de62bf3f
`Output generated in 466.95 seconds (0.03 tokens/s, 13 tokens, context 75, seed 1944090447)`
Here are the model settings when loading with llama.cpp:
>Model: codellama-34b-python.Q5_K_M.gguf
>Model loader: llama.cpp
>n-gpu-layers: 20
>n_ctx: 4096
>threads: 10
>n_batch: 511
>n_gqa: 0
>rms_norm_eps: 0
>alpha_value: 1
>rope_freq_base: 1000000
>compress_pos_emb: 1
My hardware is an RTX 3060 (12GB) + Ryzen 5600 (6 cores, 12 threads) + 128GB RAM.
My software is Win11 + WSL2 + the oobabooga Docker image.
Offloaded 20/51 layers to GPU; VRAM used: 9340 MB.
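0.03 tokens/s most likely means the 31 layers left on the CPU dominate the runtime. If 20 layers took 9340 MB, you can estimate how many layers a 12 GB card can actually hold; a rough helper (the function name and the 1 GB reserve are assumptions, and VRAM per layer is not perfectly uniform in practice):

```python
def max_offload_layers(vram_mb, measured_layers, measured_mb, reserve_mb=1024):
    """Estimate how many layers fit in VRAM, given a measured sample,
    keeping `reserve_mb` free for scratch buffers and the KV cache."""
    per_layer = measured_mb / measured_layers   # ~467 MB/layer here
    return int((vram_mb - reserve_mb) // per_layer)

# 20 layers used 9340 MB on this Q5_K_M quant
print(max_offload_layers(12288, 20, 9340))
```

Even fully offloaded, a 34B Q5 quant will not fit in 12 GB, so some CPU layers are unavoidable, but raising n-gpu-layers toward the limit should still help a lot.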
| 2023-09-09T06:45:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16dyp1d/what_did_i_do_wrong_codellamapython_is/
|
Defiant_Hawk_4731
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dyp1d
| false | null |
t3_16dyp1d
|
/r/LocalLLaMA/comments/16dyp1d/what_did_i_do_wrong_codellamapython_is/
| false | false | 1 | null |
|
Messing around with LLama2 7b_q4 has been an eye opening experience.
| 1 |
These are just some thoughts; I would love to hear other people's opinions.
I installed the 7B base model with the intention of fine-tuning it for an NLP project I'm working on, but I decided to spend a bit of time getting some experience with how it processes things and what its outputs are like, and man, is it a fever dream.
I completely understand what people have been saying about base models being hard to use and not feeling human. It really gives me a whole new respect for the RLHF work that has been done. I have been role-playing reality and chatting with it about its existence; some deep philosophical issues have come up, and it has been a great experience for me to work through them. I understand that it's just next-word prediction (it's actually pretty obvious with this small model), but suspending my disbelief for a bit has been a very fun experience.
On the topic of censorship, I personally enjoy having an uncensored model; it's been very refreshing that it tries to answer everything with no limits. I can see how a more powerful model could be useful for bioterrorism or subversion; this little guy, not so much, lol. As a teacher, though, it's nice that I can use ChatGPT with young students and be confident that nothing inappropriate will happen if I just let them do whatever they want. Even if there were a checkbox in the settings, the students would find it and turn it off (I think there should be one, don't get me wrong, but purely from a teaching perspective it's great that there isn't).
I still use an old version of Disco Diffusion, one of the earlier CLIP-guided diffusion image generators, because its outputs are restricted and I have to build around its restrictions. I am quite enjoying the 7B model for that same reason. It feels like a fever dream: it can be completely lucid at times, and at others go off the rails with nonsense, at the same settings.
| 2023-09-09T07:56:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16dzwol/messing_around_with_llama2_7b_q4_has_been_an_eye/
|
Mescallan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16dzwol
| false | null |
t3_16dzwol
|
/r/LocalLLaMA/comments/16dzwol/messing_around_with_llama2_7b_q4_has_been_an_eye/
| false | false |
self
| 1 | null |
OpenAI is crowdsourcing RLHF
| 1 |
Just got a response from ChatGPT (GPT-4 answering my coding question) that showed two answers side by side, created in parallel. At the top it said, "Which response do you prefer? Your answer will make ChatGPT better." This was not just a model quirk; it came from the ChatGPT UI itself.
After I chose the one I preferred, the other went away. They are likely collecting preference data (for free) in order to train a reward model.
Just wanted to share in case anyone here wants to build a similar project. The open-source community has not implemented RLHF much, if at all. Keep in mind that the Llama 2 paper said some of the largest gains in model capability came during the RLHF stage.
| 2023-09-09T08:48:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16e0r6j/openai_is_crowdsourcing_rlhf/
|
ZealousidealBlock330
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e0r6j
| false | null |
t3_16e0r6j
|
/r/LocalLLaMA/comments/16e0r6j/openai_is_crowdsourcing_rlhf/
| false | false |
self
| 1 | null |
New LM Architecture = Pretraining and then Fine Tuning?
| 1 |
I have some ideas for trying out different variants of transformer architectures. However, if I came up with such an architecture, would I have to pre-train the model on a corpus such as Wikipedia or C4 and then fine-tune it on a downstream task? I'm new to this.
How do I go about this? Thank you.
| 2023-09-09T09:20:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16e19wi/new_lm_architecture_pretraining_and_then_fine/
|
psj_2908
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e19wi
| false | null |
t3_16e19wi
|
/r/LocalLLaMA/comments/16e19wi/new_lm_architecture_pretraining_and_then_fine/
| false | false |
self
| 1 | null |
Training/fine-tuning a local LLM
| 1 |
Hey! Months ago, I was fascinated by Karpathy's nanoGPT project; the ability to train a small LLM on your own text file seemed very interesting to me. I tried training it on my chat history to build an infinite chat generator for fun, but unfortunately the results were bad. Recently I had even worse experiences with the newly released GPT-3.5 fine-tuning.
Are there any good, simple ways to train/fine-tune LLMs now? I would love something that could train on an Apple M2 processor (like Karpathy's nanoGPT), on Colab, or via a cheap API (like the GPT-3.5 fine-tuning).
| 2023-09-09T09:22:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16e1aon/trainingfinetuning_a_local_llm/
|
11igor
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e1aon
| false | null |
t3_16e1aon
|
/r/LocalLLaMA/comments/16e1aon/trainingfinetuning_a_local_llm/
| false | false |
self
| 1 | null |
Lora Training on Web-ui
| 1 |
Hello everyone
I am currently trying to train LoRAs with the web UI, but I keep hitting a wall: only once was I able to train a LoRA on an LLM; it failed all the other times.
The main error I encounter is the following:
"TypeError: a bytes-like object is required, not 'str'"
My question is this: can I train LoRAs on all models, i.e. on unquantized models but also on quantized (8- and 4-bit) ones? I haven't found any list of models that are suitable for LoRA training. Is it even possible to train LoRAs on GPTQ and GGUF models at all?
I once read that it should be possible to train LoRAs on CPU-only setups too, although it is ridiculously slow. However, I never found any confirmation of this claim other than one comment.
Any information regarding LoRA training in the web UI would be highly appreciated; links to helpful reading would be great.
For my system, I have a 2060 Super with 8GB VRAM, an AMD Ryzen 9 3900X (12x 3.80GHz), and 64GB RAM.
Thanks in advance!
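I can't tell from here which part of the web UI's LoRA trainer hits this, but the error itself is generic Python: something writes a `str` into a binary stream. A minimal reproduction and fix:

```python
import io

buf = io.BytesIO()                     # a binary stream, like open(path, "wb")
try:
    buf.write("adapter")               # str into a bytes stream: the webui's error
except TypeError as err:
    print(err)                         # a bytes-like object is required, not 'str'

buf.write("adapter".encode("utf-8"))   # encoding the str to bytes fixes it
print(buf.getvalue())
```

In the web UI's case that usually points at a file being opened in binary mode somewhere in the training path, often a version mismatch between the UI and its training dependencies, rather than anything about your model choice.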
| 2023-09-09T09:26:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16e1d2s/lora_training_on_webui/
|
hudimudi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e1d2s
| false | null |
t3_16e1d2s
|
/r/LocalLLaMA/comments/16e1d2s/lora_training_on_webui/
| false | false |
self
| 1 | null |
Getting started with llms, need help to setup rocM and llama +SD
| 1 |
Hello,
I recently got a new PC with a 7900 XTX / 7800X3D and 32GB of RAM. I'm kind of new to the whole thing and, honestly, a bit lost.
I couldn't find a high-level explanation of how one would set up certain LLMs. As far as I understand, the necessary part is ROCm (AMD's counterpart to CUDA), which includes OpenCL and HIP, interfaces/frameworks that link multiple libraries together and provide the functionality used by frameworks like PyTorch/TensorFlow?
I would be really glad for a high-level explanation, or better, sources where I can read about it without diving too deep into the rabbit hole. Knowing which component does what and how to configure it would be enough.
I found this:
[https://rocm.docs.amd.com/en/latest/deploy/linux/os-native/package\_manager\_integration.html#components-of-rocm-programming-models](https://rocm.docs.amd.com/en/latest/deploy/linux/os-native/package_manager_integration.html#components-of-rocm-programming-models)
but it's more of an overview than an explanation.
**However lets get to the point:**
I'm orienting myself with:
[https://wiki.archlinux.org/title/GPGPU#ROCm](https://wiki.archlinux.org/title/GPGPU#ROCm)
my kernel version:
6.4.12-arch1-1
I already installed rocm-hip and opencl-sdk with:
sudo pacman -S rocm-hip-sdk rocm-opencl-sdk
Now i have installed:
rocm-opencl-sdk-5.6.0-1
rocm-hip-sdk-5.6.0-1
After that i got:
sudo git clone [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
sudo git clone [https://aur.archlinux.org/llama-cpp-rocm-git.git](https://aur.archlinux.org/llama-cpp-rocm-git.git)
I honestly don't know what the difference between the two is. As far as I understand, llama.cpp is some kind of framework/library for running LLMs, and ROCm/CUDA then translate this into GPU acceleration?
So that's the point at which I need help.
On another note, i also found this:
[https://rocm.docs.amd.com/en/latest/deploy/linux/os-native/install.html](https://rocm.docs.amd.com/en/latest/deploy/linux/os-native/install.html)
Installed dkms and also added binary paths to the PATH environment variable with:
export PATH=$PATH:/opt/rocm-5.6.1/bin:/opt/rocm-5.6.1/opencl/bin
and checked that my GPU is listed with:
/opt/rocm/bin/rocminfo
However, I think this might lead to problems; it was a bit of a mishap, because those instructions are for Ubuntu 20.04 and I don't even know why I did it.
| 2023-09-09T10:51:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16e2qo5/getting_started_with_llms_need_help_to_setup_rocm/
|
Entire-Top3434
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e2qo5
| false | null |
t3_16e2qo5
|
/r/LocalLLaMA/comments/16e2qo5/getting_started_with_llms_need_help_to_setup_rocm/
| false | false |
self
| 1 | null |
Why am I seeing random "â" tokens when inferencing my LoRA fine-tuned model?
| 1 |
I've trained a 7B Hugging Face model on a single A100 using Meta's "llama-recipes" code. All works fine, except I notice this random "â" token in the output. It does not exist in the training data, but I see it when inference is run in llama.cpp with the LoRA applied. Both the model and the LoRA are fp16; hyperparameters are at their defaults. Any ideas, folks?
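One common cause, though I can't confirm it's yours: text that went through a UTF-8 vs. Latin-1/cp1252 mismatch somewhere in the pipeline. Multi-byte UTF-8 characters like the right single quote start with the byte 0xE2, which renders as "â" when re-decoded with the wrong codec. A quick demonstration:

```python
# "â" often appears when UTF-8 bytes get re-decoded as cp1252/Latin-1:
# the right single quote U+2019 is three UTF-8 bytes, the first of which
# (0xE2) renders as "â".
s = "it’s"
mojibake = s.encode("utf-8").decode("cp1252")
print(mojibake)               # note the leading "â" in the garbled quote
roundtrip = mojibake.encode("cp1252").decode("utf-8")
print(roundtrip)              # back to it’s
```

If that's the issue, it could have crept in either when the training data was prepared or in how llama.cpp's tokenizer file was converted, so checking both with an explicit `encoding="utf-8"` is a cheap first step.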
| 2023-09-09T11:22:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16e39j3/why_am_i_seeing_random_â_tokens_when_inferencing/
|
sunpazed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e39j3
| false | null |
t3_16e39j3
|
/r/LocalLLaMA/comments/16e39j3/why_am_i_seeing_random_â_tokens_when_inferencing/
| false | false |
self
| 1 | null |
Best weights post fine-tuning
| 1 |
I'm fine-tuning the LLaMA 2 model; however, the latest weights I'm saving might not be the best ones.
How can I determine and use the best weights?
Also, are there examples of fine-tuning scripts using "[early stopping](https://huggingface.co/docs/transformers/main_classes/callback#transformers.EarlyStoppingCallback)"?
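The usual approach with the Hugging Face `Trainer` is `load_best_model_at_end=True` plus the `EarlyStoppingCallback` linked above; the callback's core logic is just a patience counter over eval losses, sketched here in plain Python (the class name is mine, not the library's):

```python
class EarlyStopper:
    """Mirrors the idea behind transformers' EarlyStoppingCallback:
    stop when eval loss hasn't improved for `patience` evaluations,
    and remember which checkpoint was best."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best_loss = float("inf")
        self.best_step = None
        self.bad_evals = 0

    def update(self, step, eval_loss):
        if eval_loss < self.best_loss:
            self.best_loss, self.best_step, self.bad_evals = eval_loss, step, 0
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience  # True -> stop training

stopper = EarlyStopper(patience=2)
for step, loss in [(100, 1.9), (200, 1.5), (300, 1.6), (400, 1.7)]:
    if stopper.update(step, loss):
        break
print(stopper.best_step, stopper.best_loss)
```

With the real `Trainer` you'd also set `metric_for_best_model` and an eval dataset, so the "best weights" question answers itself: the best checkpoint is reloaded at the end.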
| 2023-09-09T11:29:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16e3dv7/best_weights_post_finetuning/
|
unoshow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e3dv7
| false | null |
t3_16e3dv7
|
/r/LocalLLaMA/comments/16e3dv7/best_weights_post_finetuning/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
|
Llama 2 local run
| 1 |
Hi, can you provide a Python example of running Llama 2 7B locally, but the GPU version? I've tried the cpp version, but most discussions state that the results are far better with GPU.
Is there some example with torch or something similar that I can use to prompt it on my local machine?
I have the cpp version with a small API.
| 2023-09-09T12:00:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16e3yqc/llama_2_local_run/
|
Imaginary-Scholar-40
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e3yqc
| false | null |
t3_16e3yqc
|
/r/LocalLLaMA/comments/16e3yqc/llama_2_local_run/
| false | false |
self
| 1 | null |
Train LLaMA with custom loss function
| 1 |
Hey!
I'm trying to reproduce the UniNER results on Llama-2-7b.
They state that they train the language model with an objective (loss) computed on gold tokens, which are the labels of the entities:

How can I achieve that, assuming I'm using the `Trainer` from huggingface transformers?

I thought about overriding the `compute_loss` method, but I'm not sure how to generate the gold-token mask.
[https://universal-ner.github.io/](https://universal-ner.github.io/)
https://preview.redd.it/3o3r2fjby7nb1.png?width=1175&format=png&auto=webp&s=f2fa85d1f16f76409799c3ea93c6df825d389794
[Screenshot from the paper ](https://preview.redd.it/zatq5mhiz7nb1.png?width=731&format=png&auto=webp&s=6cf66da933d5709329252aefef2ce5273378b5b7)
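One way to get a loss only on gold tokens with the stock `Trainer`, without overriding anything: set the labels to -100 everywhere except the gold spans, since positions labeled -100 are ignored by the cross-entropy loss. A plain-Python sketch of the masking (the function name and span format are my assumptions, not UniNER's code):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def mask_labels_to_gold(input_ids, gold_spans):
    """Copy input_ids to labels, but keep loss only on the 'gold' token
    positions (the entity answers); everything else is ignored.
    gold_spans is a list of (start, end) index pairs, end exclusive."""
    labels = [IGNORE_INDEX] * len(input_ids)
    for start, end in gold_spans:
        labels[start:end] = input_ids[start:end]
    return labels

ids = [12, 55, 731, 9, 88, 310, 7]
print(mask_labels_to_gold(ids, [(2, 4), (5, 6)]))
```

If you build `labels` this way in your data collator, the default `compute_loss` already does the right thing, because the model's cross-entropy uses `ignore_index=-100`.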
| 2023-09-09T12:02:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16e40aq/train_llama_with_custom_loss_function/
|
matkley12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e40aq
| false | null |
t3_16e40aq
|
/r/LocalLLaMA/comments/16e40aq/train_llama_with_custom_loss_function/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'EEoOzT0z5psMu9A20evEOxcKYnYLjFUvTkCaIWA-tk8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?width=108&crop=smart&auto=webp&s=849688e6bf57b193967c1538f9307442374108f2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?width=216&crop=smart&auto=webp&s=3096d05a2b6c6a8234a95b6e9d4e1c969af0261d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?width=320&crop=smart&auto=webp&s=461505251bc73b825a4cf04a8d15e594f10215ac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?width=640&crop=smart&auto=webp&s=290809d0c35b91d2f8c13b91cbc5b40c4db29c6c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?width=960&crop=smart&auto=webp&s=b69611618baca2f2cecae501d6fd9daee741f258', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?width=1080&crop=smart&auto=webp&s=e7e2bc1a6a6d2e6af2c8c7fe2b4539a49fc153a4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TMEFThPHkgAJQaZb4J_dd8XAaKDFjbMu9qTClII96xY.jpg?auto=webp&s=542dc09fffa0c13be075318860da380c6371ae0a', 'width': 1200}, 'variants': {}}]}
|
|
What model should I choose for chatting
| 1 |
This may be a bit of a stupid question, but which 13B model should I choose for just chatting, without restrictions? I don't like RP; I just want plain chatting.
| 2023-09-09T12:02:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16e40io/what_model_should_i_choose_for_chatting/
|
swwer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e40io
| false | null |
t3_16e40io
|
/r/LocalLLaMA/comments/16e40io/what_model_should_i_choose_for_chatting/
| false | false |
self
| 1 | null |
Advice on building LLM app without accessing OpenAI's model
| 1 |
If you wanted to build a business email generator without accessing OpenAI's models, using only open-source LLMs, what would you do? What are the general steps needed?
| 2023-09-09T12:37:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16e4pc3/advice_on_building_llm_app_without_accessing/
|
alvintsoiwc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e4pc3
| false | null |
t3_16e4pc3
|
/r/LocalLLaMA/comments/16e4pc3/advice_on_building_llm_app_without_accessing/
| false | false |
self
| 1 | null |
Improving Falcon-180B Performance
| 1 |
Yesterday I downloaded and installed Falcon-180B on our local ML server. The server is a DGX-1 "clone" with eight V100s and 512GB of memory. I loaded the Hugging Face code example unmodified (from the model card) and it ran fine, though very slowly: it took roughly 45 minutes to produce the output, roughly 1 minute per token. Obviously I'd like to improve this if I can, and to understand the trade-offs if I cannot (with an eye toward putting this into production).
The code example uses automatic device placement. I allocated six of the V100s, and nvidia-smi shows a process running on all six. Interestingly, if I watch the power consumption, it appears that only two of them are being used, and only at a 25% load. Not sure why this is. The system has 80 CPUs and, according to top, I'm using only a few (like 5 or 6). I'm only using about 18GB of system memory out of 512. It did not seem to be resource-bound in any way.
I know that the V100 does not support bfloat16. I changed it to float16 and the performance was pretty much the same.
So again: it runs, but it seems like it SHOULD run faster than it does. Any thoughts on improving performance?
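A plausible explanation for the 1 min/token, as a back-of-envelope check (assuming the 32 GB V100 variant): Falcon-180B's fp16 weights alone are around 360 GB, far more than six V100s can hold, so `device_map="auto"` silently offloads the remainder to CPU RAM, and every token then waits on those offloaded layers:

```python
def weight_gb(n_params_b, bytes_per_param):
    # rough size of the weights alone, ignoring KV cache and activations
    return n_params_b * bytes_per_param

vram_gb = 6 * 32              # six V100s, assuming the 32 GB variant
print(weight_gb(180, 2))      # fp16: ~360 GB, far more than 192 GB of VRAM
print(weight_gb(180, 0.5))    # 4-bit: ~90 GB, fits across the six cards
```

If that's what's happening, it would also explain the low GPU utilization you see: the cards idle while offloaded layers run on CPU. A quantized build that fits entirely in VRAM should be dramatically faster.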
| 2023-09-09T14:17:40 |
https://www.reddit.com/r/LocalLLaMA/comments/16e6vyf/improving_falcon180b_performance/
|
Simusid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e6vyf
| false | null |
t3_16e6vyf
|
/r/LocalLLaMA/comments/16e6vyf/improving_falcon180b_performance/
| false | false |
self
| 1 | null |
Comparing 60 LLMs with a set of 20 prompts
| 1 | 2023-09-09T14:27:06 |
https://benchmarks.llmonitor.com/
|
Covid-Plannedemic_
|
benchmarks.llmonitor.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16e73kv
| false | null |
t3_16e73kv
|
/r/LocalLLaMA/comments/16e73kv/comparing_60_llms_with_a_set_of_20_prompts/
| false | false |
default
| 1 | null |
|
Generating JSON with self hosted LLM
| 1 |
In my company there are about five different use cases I've identified for LLMs that involve feeding the model some unstructured text and having it produce a JSON object with a fixed schema. For example, there are hundreds of thousands of scraped used-car listings, and I want to create a JSON object for each with fields like "make", "model", "year", "price", etc.
For many of these use cases there is a requirement for extremely high accuracy: the model must produce a perfect answer 99%+ of the time, and there must be a way to correct the JSON manually when it is wrong. I have tried few- and zero-shot prompting with different models and found that the accuracy is not good enough. The nice thing is that a lot of the data is very similar; for example, many of the used-car listings are scraped from sites following the same HTML template. But there are still way too many variations to just write a regex or something (trust me, we tried).
For most cases I cannot use OpenAI due to security concerns. In some cases I can use OpenAI for generating training data but not in production.
Questions:
1. What type of model would you recommend? Ideally should fit in 24gb or less.
2. How should I fine tune the model, and how much labeled data do you think is needed?
3. There are a few cases where the input text could be very large (e.g. text from a 100 page pdf). How can I fine tune for this case when I don’t know what part of the document each field comes from?
4. Is there any way I can force the LLM to generate a JSON with correct syntax and fields?
5. Any suggested tool for manually reviewing/correcting JSON data for training?
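For question 4, one common approach is to validate the model's output programmatically and re-prompt on failure, routing anything that never parses to manual review. A minimal sketch in Python — `call_llm` is a hypothetical stand-in for whatever backend ends up serving the model, and the field set is just the car-listing example:

```python
import json

REQUIRED_FIELDS = {"make", "model", "year", "price"}  # example schema for car listings

def parse_listing(raw_text: str):
    """Try to parse the LLM output as JSON and check the expected fields exist."""
    try:
        obj = json.loads(raw_text)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict) or not REQUIRED_FIELDS.issubset(obj):
        return None
    return obj

def extract_with_retries(call_llm, listing_text: str, max_tries: int = 3):
    """Re-prompt until the output parses; anything else goes to the manual queue."""
    for _ in range(max_tries):
        obj = parse_listing(call_llm(listing_text))
        if obj is not None:
            return obj
    return None  # route to the manual-correction queue
```

Grammar-constrained decoding (e.g. llama.cpp grammars) can force valid syntax at generation time, but a validate-and-retry loop like this works with any backend.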
Thanks for your insights!
| 2023-09-09T15:36:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16e8qa0/generating_json_with_self_hosted_llm/
|
sosdandye02
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e8qa0
| false | null |
t3_16e8qa0
|
/r/LocalLLaMA/comments/16e8qa0/generating_json_with_self_hosted_llm/
| false | false |
self
| 1 | null |
Gura AI
| 1 |
Well that was unexpected...
| 2023-09-09T15:37:40 |
frdsless
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16e8rmq
| false | null |
t3_16e8rmq
|
/r/LocalLLaMA/comments/16e8rmq/gura_ai/
| false | false |
nsfw
| 1 |
{'enabled': True, 'images': [{'id': 'C7YFJUl0RyHgjEv5aaoVAXlYJrYujDR10o6_lNYM0YY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=108&crop=smart&auto=webp&s=1fff1ddbbe94b3ddb475d352d7f1735051e0baf6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=216&crop=smart&auto=webp&s=c2ddcb0e652d34c79aaa3fd2c73a7dd3ee8b4902', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=320&crop=smart&auto=webp&s=ac37f50a26217b2bc83e7af436f6dc8b29478059', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=640&crop=smart&auto=webp&s=f59c2e6bc509e922a0890c21ce760c1866c77b90', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=960&crop=smart&auto=webp&s=fdc4b0c4f44d88692d3dd92e5a00e59410e35d52', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=1080&crop=smart&auto=webp&s=e642b3f8b2c6350b8678f93f899715a9d630c8a9', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?auto=webp&s=cf7bcfa559770a3be2f8a5415be00cb86fa26a39', 'width': 1080}, 'variants': {'nsfw': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=7cde51f91aa2601be4ef00c0ad12d130e9138c85', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1a85b03ce06d8fc0cf8d1f0ea6e12304cce2564c', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=bc435a82061f62ffb43fbd4e272606c211e52126', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=9bdf2ac4d9d799a7623a4a2790c8fa31f9500b45', 'width': 640}, {'height': 1920, 'url': 
'https://preview.redd.it/4ey1paw129nb1.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=9d8f6c763ad337f822cc91cae0101cd6e2f63cf9', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=99c94762b7273552f926adf203d0034a913bcc4d', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?blur=40&format=pjpg&auto=webp&s=ff19d13af625e619a6df62de0122ec8d358ebc25', 'width': 1080}}, 'obfuscated': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=7cde51f91aa2601be4ef00c0ad12d130e9138c85', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1a85b03ce06d8fc0cf8d1f0ea6e12304cce2564c', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=bc435a82061f62ffb43fbd4e272606c211e52126', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=9bdf2ac4d9d799a7623a4a2790c8fa31f9500b45', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=9d8f6c763ad337f822cc91cae0101cd6e2f63cf9', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=99c94762b7273552f926adf203d0034a913bcc4d', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/4ey1paw129nb1.jpg?blur=40&format=pjpg&auto=webp&s=ff19d13af625e619a6df62de0122ec8d358ebc25', 'width': 1080}}}}]}
|
|
OpenAI vs open source LLMs for character dialogue
| 1 |
Hey y'all, I'm looking to use an LLM to generate a large dataset for RLHF to train a smaller model for casual dialogue as a character. I guess it could be called roleplay (just not *that kind*). In the past we've used gpt-3.5-turbo to generate this data, primarily because the input data is very messy (think an average D!scord chat) and gpt-3.5 appears to interpret things like that very well. Indeed it was very good at picking up the context, but its responses were very ChatGPT-like, and no amount of my amateur prompting could fix it. It was just so dry and "matter-of-fact". It captured the character's personality well but the speaking style was completely off, even with few-shot prompting.
I tried using Llama 2 Chat, as the safety was a selling point for me, but it would not follow the character description at all, constantly using "OMG" and an obnoxious amount of emojis. It would also mix up context from an example few-shot prompt with the actual chat context. Part of my prompt is to use a list of given tags like `<|happy|>` to indicate emotions, but it would frequently hallucinate emotions like `<|confused|>`. OpenAI did not have this problem.
Should I continue to try to prompt gpt-3.5-turbo/gpt-4, or use an open LLM? I recognize that Llama 2 Chat definitely isn't the best model out there so I'm wondering what y'all would recommend for something like this.
| 2023-09-09T15:43:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16e8wo1/openai_vs_open_source_llms_for_character_dialogue/
|
Decahedronn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e8wo1
| false | null |
t3_16e8wo1
|
/r/LocalLLaMA/comments/16e8wo1/openai_vs_open_source_llms_for_character_dialogue/
| false | false |
self
| 1 | null |
How can one optimize a $500 budget to build a Llama 2 rig using eBay parts to get a decent accuracy and t/s output?
| 1 |
I’m just trying to imagine a challenge where we build a $500 machine and see what is the maximum experience we can get from one of the Llama models. I get where we are, and the horsepower/cost required to get ChatGPT-like outputs, but I think the fun is on the edges!
| 2023-09-09T15:51:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16e93gd/how_can_one_optimize_a_500_budget_to_build_a/
|
Overall-Importance54
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e93gd
| false | null |
t3_16e93gd
|
/r/LocalLLaMA/comments/16e93gd/how_can_one_optimize_a_500_budget_to_build_a/
| false | false |
self
| 1 | null |
Benchmarking Questions
| 1 |
Hey, I was thinking of benchmarking some models on the MATH dataset (long story). But how does benchmarking actually work? Do I just ask a model a question from that dataset, then manually check if it's correct and then find the percentage that were correct? Because that seems incredibly tedious.
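In practice benchmarking is automated rather than manual: loop over the dataset, generate an answer per question, and score it programmatically against the gold answer. A rough sketch — the `generate` callable and the answer-extraction regex are placeholders (real MATH evaluation normalizes expressions much more carefully than grabbing the last number):

```python
import re

def extract_answer(completion: str):
    """Pull the last number out of a model completion (crude placeholder)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return matches[-1] if matches else None

def score(dataset, generate) -> float:
    """dataset: iterable of (question, gold_answer) pairs; generate: model callable."""
    correct = 0
    total = 0
    for question, gold in dataset:
        pred = extract_answer(generate(question))
        correct += int(pred == gold)
        total += 1
    return correct / total if total else 0.0
```

Harnesses like lm-evaluation-harness do exactly this kind of loop for you, including the per-dataset answer normalization.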
| 2023-09-09T16:00:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16e9atv/benchmarking_questions/
|
Acceptable_Can5509
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16e9atv
| false | null |
t3_16e9atv
|
/r/LocalLLaMA/comments/16e9atv/benchmarking_questions/
| false | false |
self
| 1 | null |
Is there a WebUI for audio transcription?
| 1 |
There are already a bunch of seemingly really good open-source speech-to-text technologies such as [Whisper](https://openai.com/research/whisper), [MMS](https://ai.meta.com/blog/multilingual-model-speech-recognition/) and [SeamlessM4T](https://ai.meta.com/blog/seamless-m4t/) (and probably more I forgot). These sound like great options for generating subtitles, but my experience with Whisper was mixed (it often gets stuck repeating the same lines during silence, or times the first line too early).
Is there something like [Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or [Oobabooga](https://github.com/oobabooga/text-generation-webui) but for audio transcription?
| 2023-09-09T16:50:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16eaiky/is_there_a_webui_for_audio_transcription/
|
Dead_Internet_Theory
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eaiky
| false | null |
t3_16eaiky
|
/r/LocalLLaMA/comments/16eaiky/is_there_a_webui_for_audio_transcription/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-uYhC4mwPuef-QPVLuucowjhn8QPQQGgXoUiqPUhb30', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/3dUI0lTtd9VrlJD5cfUXtROHftGYxrDr43R11YzdD0c.jpg?width=108&crop=smart&auto=webp&s=1b057f2276dc99c8e2bf8f02c413813aa03e7e40', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/3dUI0lTtd9VrlJD5cfUXtROHftGYxrDr43R11YzdD0c.jpg?width=216&crop=smart&auto=webp&s=6bfd488de91bb718fe1eee09678e18b10d6095bc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/3dUI0lTtd9VrlJD5cfUXtROHftGYxrDr43R11YzdD0c.jpg?width=320&crop=smart&auto=webp&s=e655e635fd1dea53f1de665060cac623b75287ac', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/3dUI0lTtd9VrlJD5cfUXtROHftGYxrDr43R11YzdD0c.jpg?width=640&crop=smart&auto=webp&s=b733fdaf7de26e34be10b8a3e899b58c14e727d8', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/3dUI0lTtd9VrlJD5cfUXtROHftGYxrDr43R11YzdD0c.jpg?width=960&crop=smart&auto=webp&s=bd661b745b26581b7620928578ad0efb09336b0a', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/3dUI0lTtd9VrlJD5cfUXtROHftGYxrDr43R11YzdD0c.jpg?auto=webp&s=64be185649ecd5a68df367643ae6f0f1aaa9cc18', 'width': 1000}, 'variants': {}}]}
|
[GPT-4 POWERED] We’ve created a mobile IOS AI app that generates text, art, analyzes photos, and more!
| 1 |
[removed]
| 2023-09-09T17:29:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16ebgl8/gpt4_powered_weve_created_a_mobile_ios_ai_app/
|
EtelsonRecomputing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ebgl8
| false | null |
t3_16ebgl8
|
/r/LocalLLaMA/comments/16ebgl8/gpt4_powered_weve_created_a_mobile_ios_ai_app/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'kvs1b7ixDHBVjaGqUBFIHc4n5EWs8uaukbNnlbRDPrU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?width=108&crop=smart&auto=webp&s=74ff9a25527dd06532f356d10120946781ff8ea1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?width=216&crop=smart&auto=webp&s=86d05bd41892d5aad1843ff2fe798fcf8fb34197', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?width=320&crop=smart&auto=webp&s=2f4997b42fa326b62df7965fbd1f1c295fbdde57', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?width=640&crop=smart&auto=webp&s=556632203471c8bffe55f45f01f6a7795da2574c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?width=960&crop=smart&auto=webp&s=52e9e5aec7b8048fdf20f9a8736334fb2948f36c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?width=1080&crop=smart&auto=webp&s=08c3ceb00eaad156c7813bc25b733ae6fb154323', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JZfB8-fdDCxF1xCLB01yldF43Jnff0WEO15p2OXOwmk.jpg?auto=webp&s=b1e9329cbde3a5feaae946c8e4e2dc50de48ff74', 'width': 1200}, 'variants': {}}]}
|
I have 6 V100 GPUs, what's best model for information extraction?
| 1 |
Hi, I have 6 V100 GPUs, what would be the best model I can use for performing information extraction from documents? Any input would be appreciated.
| 2023-09-09T17:55:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16ec3qq/i_have_6_v100_gpus_whats_best_model_for/
|
kitkatmafia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ec3qq
| false | null |
t3_16ec3qq
|
/r/LocalLLaMA/comments/16ec3qq/i_have_6_v100_gpus_whats_best_model_for/
| false | false |
self
| 1 | null |
A guide to prompting Code Llama
| 4 | 2023-09-09T18:05:59 |
https://ollama.ai/blog/prompting-code-llama
|
jmorganca
|
ollama.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
16ecda3
| false | null |
t3_16ecda3
|
/r/LocalLLaMA/comments/16ecda3/a_guide_to_prompting_code_llama/
| false | false |
default
| 4 | null |
|
Explanation on memory requirements
| 1 |
Appreciate any responses or even just pointing me to sources to read.
I am new to the local LLM space, and I would like to better understand the absolute memory requirements when you want to run a really large model but do not have the required memory. Can you still run it with the same accuracy, just much more slowly, or is it not possible at all?
Say I have a system with a ryzen 9 7950, 64GB of RAM and two 3090s. Would a model use the vram first then system RAM or how does that work?
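As a rough illustration of the arithmetic involved (weights only — KV cache and runtime overhead add more on top; the 1.2 overhead factor here is a guessed fudge factor, not a measured number):

```python
def approx_model_gb(n_params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Very rough size estimate: parameters * bytes/weight * overhead factor."""
    bytes_total = n_params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

# A 70B model at 4-bit is ~42 GB by this estimate -- too big for one 24 GB card,
# so layers past the VRAM limit must spill to system RAM (llama.cpp offload)
# or to a second GPU; the spilled layers run at CPU/RAM speed.
```

With GPU backends the model generally must fit entirely in VRAM; llama.cpp-style loaders are the ones that split between VRAM and system RAM, filling VRAM first.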
| 2023-09-09T18:13:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16eckb8/explanation_on_memory_requirements/
|
kwb7852
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eckb8
| false | null |
t3_16eckb8
|
/r/LocalLLaMA/comments/16eckb8/explanation_on_memory_requirements/
| false | false |
self
| 1 | null |
How are ORCA_LLaMA_70B_QLoRA and Platypus2-70B-instruct? Anyone have experience using them?
| 1 |
Pretty much the title. These models are currently #2 and #3 on top of the huggingface leaderboard. For anyone who's used them, what has your experience been like? How coherent are they? Creative? Are they censored?
I can always experiment with them myself, but I was just wondering if anyone else has any input they'd like to share.
| 2023-09-09T18:31:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16ed0du/how_are_orca_llama_70b_qlora_and/
|
nsfw_throwitaway69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ed0du
| false | null |
t3_16ed0du
|
/r/LocalLLaMA/comments/16ed0du/how_are_orca_llama_70b_qlora_and/
| false | false |
self
| 1 | null |
Examples where dual 3090s outperform a single 4090?
| 1 |
I recently saw this thread saying dual 3090 is better than a single 4090. [https://www.reddit.com/r/LocalLLaMA/comments/15yxmgi/4090\_or\_dual\_3090/](https://www.reddit.com/r/LocalLLaMA/comments/15yxmgi/4090_or_dual_3090/)
What are concrete examples where dual 3090s win?
- Run XYZ model which runs X% faster t/s
- Fit ABC state-of-the-art model in memory that is better than anything a 4090 can run
I just want to fully understand the difference before I pull the trigger :)
| 2023-09-09T18:44:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16edbuu/examples_where_dual_3090s_outperform_a_single_4090/
|
yellowcustard77
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16edbuu
| false | null |
t3_16edbuu
|
/r/LocalLLaMA/comments/16edbuu/examples_where_dual_3090s_outperform_a_single_4090/
| false | false |
self
| 1 | null |
Training AI for Text Generation in Portuguese: Cloud or Local? GPT or Lamma?
| 1 |
I'm an SEO Specialist who has been using ChatGPT and the GPT API for a while now, and I've achieved good results with GPT-4 in text generation. However, I now face a challenge: I need to generate text in Portuguese (Brazil) for topics that the AI doesn't have information on. I have some unanswered questions regarding this:
1. **Fine-tuning AI**: Should I fine-tune GPT-4 or consider using alternatives like Lamma, Falcon, or others specifically for Portuguese text generation?
2. **Fine-tuning details**: Can I fine-tune the AI with existing blog posts to make it write like me? Additionally, can I provide specific definitions for certain words, such as legal terms, to ensure accurate and up-to-date content?
3. **Training methods**: Should I use the API to train these AIs or opt for platforms like Replicate? Can I also train them locally on my computer?
I currently have a computer with 16GB RAM and an Nvidia 3060 12GB GPU, in case that information is relevant.
Thank you in advance for your responses!
| 2023-09-09T19:05:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16edtyx/training_ai_for_text_generation_in_portuguese/
|
imnotcoolsir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16edtyx
| false | null |
t3_16edtyx
|
/r/LocalLLaMA/comments/16edtyx/training_ai_for_text_generation_in_portuguese/
| false | false |
self
| 1 | null |
What models are you running?
| 1 |
Every day there are tons of new models, most with random names and tweaks. What are you guys running and why do you like it?
| 2023-09-09T19:51:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16eeyl7/what_models_are_you_running/
|
rorowhat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eeyl7
| false | null |
t3_16eeyl7
|
/r/LocalLLaMA/comments/16eeyl7/what_models_are_you_running/
| false | false |
self
| 1 | null |
3060Ti vs RX6700 XT, which is better for LLaMA?
| 3 |
It's what I can get in my country. I know Nvidia is better for AI, but it only has 8GB while the Radeon has 12GB, and I also want to do other stuff like SD.
So which one?
| 2023-09-09T20:07:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16efcr1/3060ti_vs_rx6700_xt_which_is_better_for_llama/
|
tylercoder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16efcr1
| false | null |
t3_16efcr1
|
/r/LocalLLaMA/comments/16efcr1/3060ti_vs_rx6700_xt_which_is_better_for_llama/
| false | false |
self
| 3 | null |
SillyTavern running locally on Mac M1 or M2 with llama-cpp-python backend
| 1 |
In this post, I'll share my method for running [SillyTavern](https://github.com/SillyTavern/SillyTavern) locally on a Mac M1/M2 using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python).
## Overview
[SillyTavern](https://github.com/SillyTavern/SillyTavern) is a powerful chat front-end for LLMs - but it requires a server to actually run the LLM.
Many people simply use the OpenAI server - but since this is LocalLLaMa, we should run our own server.
To use SillyTavern locally, you'd usually serve your own LLM API using [KoboldCpp](https://github.com/LostRuins/koboldcpp), [oobabooga](https://github.com/oobabooga/text-generation-webui), [LM Studio](https://lmstudio.ai/), or a variety of other methods to serve the API.
Personally, I've found it to be cumbersome running any of those LLM API servers - and I wanted something simpler.
In fact, many people have been wondering how we could more simply use llama.cpp as an OpenAI-compatible backend for SillyTavern:
- [llama.cpp connection #371](https://github.com/SillyTavern/SillyTavern/issues/371)
- [api_like_OAI.py](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/api_like_OAI.py)
- [Simple LLaMA + SillyTavern Setup Guide](https://rentry.org/llama_v2_sillytavern)
## Solution: the llama-cpp-python embedded server
It turns out the Python package [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) now ships with a server module that is compatible with OpenAI.
For SillyTavern, the llama-cpp-python local LLM server is a drop-in replacement for OpenAI.
Best of all, for the Mac M1/M2, this method can take advantage of Metal acceleration.
I've found this to be the quickest and simplest method to run SillyTavern locally.
## Install llama-cpp-python with Metal acceleration
pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
pip install 'llama-cpp-python[server]'
## Run the HTTP server that comes with llama-cpp-python
python -m llama_cpp.server \
--model "llama2-13b.gguf.q6_K.bin" \
--n_gpu_layers 1 \
--port "8001"
In the future, to re-launch the server, just re-run this snippet of code.
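Before wiring up SillyTavern, you can sanity-check the server by hitting the OpenAI-compatible endpoint directly. A minimal sketch — the payload follows the OpenAI chat-completions format, and actually sending it assumes the server from the previous step is running on port 8001:

```python
import json
import urllib.request

def build_chat_request(prompt: str, base_url: str = "http://127.0.0.1:8001/v1"):
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

# To actually send it (server must be running):
# req, _ = build_chat_request("Hello!")
# print(json.load(urllib.request.urlopen(req)))
```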
## Obtain SillyTavern and run it too
git clone https://github.com/SillyTavern/SillyTavern
cd SillyTavern
./start.sh
In the future, you can just run `./start.sh`.
Note that both the llama-cpp-python server and SillyTavern need to be running at the same time.
## Connect SillyTavern to your llama-cpp-python server
1. Click **API Connections** (the icon looks like an electrical plug)
2. Under **API**, select **Chat Completion (OpenAI...)**
3. For **Chat Completion Source**: select **OpenAI** (there will be no API Key)
4. Click **AI Response Configuration** (the leftmost icon; looks like settings/sliders)
5. Scroll to **OpenAI/Claude Reverse Proxy**: type **http://127.0.0.1:8001/v1**
6. **Legacy Streaming Processing**: set **True**
## Finished
Now you're able to use SillyTavern locally with Metal acceleration.
On Llama2 13b, I'm getting about 10 token/s with an 32GB M1 with 8 GPU cores - and I'm really satisfied with the performance.
This is nearly the lowest-end Metal-capable chip Apple has shipped - so I expect almost everyone else will see even better performance.
Now I don't have to deal with additional GitHub repos and multiple configurations for the API server.
I've found this method to be simpler than alternatives like KoboldCpp.
| 2023-09-09T20:14:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16efhxm/sillytavern_running_locally_on_mac_m1_or_m2_with/
|
iandennismiller
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16efhxm
| false | null |
t3_16efhxm
|
/r/LocalLLaMA/comments/16efhxm/sillytavern_running_locally_on_mac_m1_or_m2_with/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]}
|
Text generation web UI
| 1 |
Very green to this, and I would like to know how to optimize for speed when loading models so they generate replies faster. Does anyone have a cheat sheet for what all the sliders mean and do?
| 2023-09-09T21:10:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16egvw6/text_generation_web_ui/
|
rorowhat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16egvw6
| false | null |
t3_16egvw6
|
/r/LocalLLaMA/comments/16egvw6/text_generation_web_ui/
| false | false |
self
| 1 | null |
16gb RAM/4gb VRAM laptop wants to run a 13b 4-bit Quantized model - pipe dream?
| 1 |
Hello, I’ve been trying to research ways to use a laptop with 1650 discrete graphics (4gb VRAM) and 16gb of ram for localLLaMA.
From everything I have seen, I’ve been told that at best you should run very small models only; but if slow speed is acceptable, down to 2 t/s, would it be possible to use oobabooga and llama.cpp to use both the 4GB VRAM and 16GB RAM to run a larger, 13B model on this laptop?
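Partial GPU offload in llama.cpp is built for exactly this split. A back-of-the-envelope way to pick `--n-gpu-layers` — the uniform per-layer size is an assumption derived from total model size, and real layer sizes vary:

```python
def layers_that_fit(model_gb: float, n_layers: int, vram_gb: float, reserve_gb: float = 1.0) -> int:
    """Estimate how many transformer layers fit in VRAM, keeping some headroom."""
    per_layer_gb = model_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable / per_layer_gb))

# e.g. a ~7.4 GB 13B 4-bit model with 40 layers on a 4 GB card:
# pass roughly layers_that_fit(7.4, 40, 4.0) layers via --n-gpu-layers,
# and the remaining layers run from system RAM at CPU speed.
```

In practice people tune `--n-gpu-layers` up until VRAM is nearly full, so treat this as a starting point rather than an exact answer.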
Thank you in advance!
| 2023-09-09T21:21:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16eh69m/16gb_ram4gb_vram_laptop_wants_to_run_a_13b_4bit/
|
thatavidreadertrue
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eh69m
| false | null |
t3_16eh69m
|
/r/LocalLLaMA/comments/16eh69m/16gb_ram4gb_vram_laptop_wants_to_run_a_13b_4bit/
| false | false |
self
| 1 | null |
Speed Benchmarks for more than batch 1 inference
| 1 |
Saw [this tweet](https://twitter.com/karpathy/status/1691571869051445433) by Karpathy explaining how LLMs are fast enough on local machines because "we" are very interested in batch 1 inference. I understood "batch 1 inference" as just prompting the LLM at the start and getting a result back, vs continuing the conversation.
If I got it right, does that mean most available speed benchmarks only cover the first response of the LLM? Are there benchmarks for more prolonged "conversations" with the LLM?
If I didn't get it right, please correct me.
| 2023-09-09T21:52:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16ehyxf/speed_benchmarks_for_more_than_batch_1_inference/
|
ExactSeaworthiness34
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ehyxf
| false | null |
t3_16ehyxf
|
/r/LocalLLaMA/comments/16ehyxf/speed_benchmarks_for_more_than_batch_1_inference/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_hQx4p3YGSyGorMyf0YeA6t3uEEgxPU1EsFQZonxGdI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8P5PmWXm5EQFzI1JNP7c6HBlCVbDDi6ojf-pQgy7krI.jpg?width=108&crop=smart&auto=webp&s=92a019ec486b3e980075580dec5c54b4da770f4e', 'width': 108}], 'source': {'height': 70, 'url': 'https://external-preview.redd.it/8P5PmWXm5EQFzI1JNP7c6HBlCVbDDi6ojf-pQgy7krI.jpg?auto=webp&s=a900f5398ed8be8d510891a2356046c2369e8d01', 'width': 140}, 'variants': {}}]}
|
Question about large inference speed difference on similar setups
| 1 |
I have two machines with similar specs. Machine A has a 5950x with 128GB ram and 1x 3090. Machine B has a 5900X with 128GB ram and 2x 3090.
In the video you can see both machines running the same code with the same 13B models. Both are running inference tasks in a loop. The terminal on top is A and the other is B. You can see B is doing around 30-50 t/s while A only does 8-15 t/s. In both cases the model being used can fit into a single 3090s VRAM.
What could the discrepancy be? System A is on a B550 motherboard whereas B is on an X570 motherboard. The only difference I can think of is the 3090s on B are on x8 mode whereas on A the GPU is on full x16. Resizeable bar is enabled on both, I think. Both are running on relatively fresh installs of windows 11. Both are using ooba with the exllama hf loader. Could there be something wrong with the motherboard on B?
Any help would be appreciated!
| 2023-09-09T22:23:29 |
https://v.redd.it/lt785p1g2bnb1
|
hedonihilistic
|
/r/LocalLLaMA/comments/16eiqoq/question_about_large_inference_speed_difference/
| 1970-01-01T00:00:00 | 0 |
{}
|
16eiqoq
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lt785p1g2bnb1/DASHPlaylist.mpd?a=1696976613%2CMjViNTljMGE1ZGVhZWY4MDJmMmU3OWM3ZmI0YzhhNmQwNzFjZmMxOGEwN2VlYThlM2RjZTZlODNkOTRhMjMxYQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/lt785p1g2bnb1/DASH_1080.mp4?source=fallback', 'height': 1920, 'hls_url': 'https://v.redd.it/lt785p1g2bnb1/HLSPlaylist.m3u8?a=1696976613%2CMDMxMDIyODI5ZGFlNWJkZDAwNGNmMWNmMWU4ZmQxZTA5NTc0NjVmMmRjZjg5OGU5Njk3ZjU2MGQzN2YzNmYzZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lt785p1g2bnb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_16eiqoq
|
/r/LocalLLaMA/comments/16eiqoq/question_about_large_inference_speed_difference/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=cd1643a73fc778858ee14043612a7f927396cc14', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=3bab6744f5ad3734ba23fa2918619582edb99405', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=ed2f499a241567fb4b2154573037d25b0d4b0771', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=902e15d4e24e4b27e2dbe0cdf5c516bf5ad68d5e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=cf6680f08277473f4163ac8cd366ca7748ed44e4', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c57af9e754c56a914344c9dca53d64eec87c7f40', 'width': 1080}], 'source': {'height': 2290, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?format=pjpg&auto=webp&s=bbb57d3bcdc1c97594c5e5fb2f9ad5f9620e95b3', 'width': 1288}, 'variants': {}}]}
|
|
Question about large inference speed difference on similar setups
| 1 |
I have two machines with similar specs. Machine A has a 5950x with 128GB ram and 1x 3090. Machine B has a 5900X with 128GB ram and 2x 3090.
In the video you can see both machines running the same code with the same 13B models. Both are running inference tasks in a loop. The terminal on top is A and the other is B. You can see B is doing around 30-50 t/s while A only does 8-15 t/s. In both cases the model being used can fit into a single 3090s VRAM.
What could the discrepancy be? System A is on a B550 motherboard whereas B is on an X570 motherboard. The only difference I can think of is the 3090s on B are on x8 mode whereas on A the GPU is on full x16. Resizeable bar is enabled on both, I think. Both are running on relatively fresh installs of windows 11. Both are using ooba with the exllama hf loader. Could there be something wrong with the motherboard on B?
Any help would be appreciated!
| 2023-09-09T22:23:40 |
https://v.redd.it/lt785p1g2bnb1
|
hedonihilistic
|
/r/LocalLLaMA/comments/16eiquu/question_about_large_inference_speed_difference/
| 1970-01-01T00:00:00 | 0 |
{}
|
16eiquu
| false | null |
t3_16eiquu
|
/r/LocalLLaMA/comments/16eiquu/question_about_large_inference_speed_difference/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=cd1643a73fc778858ee14043612a7f927396cc14', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=3bab6744f5ad3734ba23fa2918619582edb99405', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=ed2f499a241567fb4b2154573037d25b0d4b0771', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=902e15d4e24e4b27e2dbe0cdf5c516bf5ad68d5e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=cf6680f08277473f4163ac8cd366ca7748ed44e4', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c57af9e754c56a914344c9dca53d64eec87c7f40', 'width': 1080}], 'source': {'height': 2290, 'url': 'https://external-preview.redd.it/dTdnNGs3eWYyYm5iMX6mrHTagPx9oeytDy67bXgbD3P1o3hsBU3OTjvN2TGJ.png?format=pjpg&auto=webp&s=bbb57d3bcdc1c97594c5e5fb2f9ad5f9620e95b3', 'width': 1288}, 'variants': {}}]}
|
|
2nd CPU with Llama.cpp. Will it do anything?
| 1 |
Do you think adding back my second 16 core xeon will improve llama.cpp speeds or not? I will also have to spread my ram to 1 dimm per channel. Technically that would mean 2133->2400 but I have that overridden so no change.
It's an additional 80GB/s of bandwidth, but it could just mean a less utilized memory bus or more proc->proc transfers. My GPUs are all on one proc and I'm not sure how PCIe bandwidth gets shared with RAM, if at all.
Will it be wasted electricity or a tiny bit more t/s.
| 2023-09-09T22:31:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16eixft/2nd_cpu_with_llamacpp_will_it_do_anything/
|
a_beautiful_rhind
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eixft
| false | null |
t3_16eixft
|
/r/LocalLLaMA/comments/16eixft/2nd_cpu_with_llamacpp_will_it_do_anything/
| false | false |
self
| 1 | null |
First time trying to run locally in M2 96gb
| 1 |
Hi!
I have a M2 96gb. I have installed OOBABOOGA and downloaded this model (which seems to be good based on the HF ranking?): [https://huggingface.co/fangloveskari/ORCA\_LLaMA\_70B\_QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA)
I'm now loading the model using the "Transformers" backend, leaving all parameters as-is. But it is taking so long to load.
Is it realistic to try to load that model based on my specs? Is there any parameter I should try to adjust to load it faster?
Thank you!
​
| 2023-09-09T22:33:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16eizof/first_time_trying_to_run_locally_in_m2_96gb/
|
Janx__Spirit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eizof
| false | null |
t3_16eizof
|
/r/LocalLLaMA/comments/16eizof/first_time_trying_to_run_locally_in_m2_96gb/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'cEZaysqw_JK4t52A335zl31TYp9r3qhSUtivkDVmXy8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?width=108&crop=smart&auto=webp&s=a3f4151c325798661820dd9086f1e9c68d80b115', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?width=216&crop=smart&auto=webp&s=fe7a7e347712182cdbb766aee9256a1b6a427ebf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?width=320&crop=smart&auto=webp&s=774ec77c6e931c603df26a7275d442eb00793149', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?width=640&crop=smart&auto=webp&s=5d2b948acc6d87abb52c654c47c0e1b7a3eb42ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?width=960&crop=smart&auto=webp&s=68501b9bc7f64046c17430c82ebee0e8ad4ca230', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?width=1080&crop=smart&auto=webp&s=f08a5bda7425fdd9dd4782e48859070116dd945a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wp51pRtPj6PBqMuquaN6QK6soylHbWTs3NALw5G_h4s.jpg?auto=webp&s=1ba95e361877cea831e5925de0ea6ad87f2d0835', 'width': 1200}, 'variants': {}}]}
|
1500$ for Local Models..
| 1 |
I want to invest in a PC for running models locally. I want good response times, but my budget is $1500. I don't know if buying second-hand GPUs is a good idea or not. What do you think? What CPU/GPU/motherboard combo do you recommend? And liquid cooling, or classic air?
If you had $1500 to invest in a PC, which components would you select?
Thanks !!!
| 2023-09-09T22:48:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16ejcvh/1500_for_local_models/
|
Icy_Sun_4958
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ejcvh
| false | null |
t3_16ejcvh
|
/r/LocalLLaMA/comments/16ejcvh/1500_for_local_models/
| false | false |
self
| 1 | null |
help, getting this error message ImportError: cannot import name 'Undefined' from 'pydantic.fields'
| 1 |
Hi, my Oobabooga installation suddenly stop working today. It was working fine yesterday and today I am getting this error:
ImportError: cannot import name 'Undefined' from 'pydantic.fields' (E:\Oobabooga\installer_files\env\lib\site-packages\pydantic\fields.py)
I am on windows and don't know much about linux or python. Any idea for a quick fix?
| 2023-09-10T00:02:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16el54j/help_getting_this_error_message_importerror/
|
orick
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16el54j
| false | null |
t3_16el54j
|
/r/LocalLLaMA/comments/16el54j/help_getting_this_error_message_importerror/
| false | false |
self
| 1 | null |
For CUDA MMQ users on KoboldCPP, here's a fix to gain a double digit percentage in context size (for a full offload on GPU)
| 1 |
Since early August 2023, a line of code in the ggml-cuda.cu of KoboldCPP posed a problem for me: it caused an incremental VRAM hog when CuBLAS was processing batches in the prompt. The more batches processed, the more VRAM allocated to each batch, which led to early OOM, especially on small batch sizes that were supposed to save.. VRAM! Such a problem also still exists in LlamaCPP, but under a different implementation.
Example :
>Test 1 KoboldCPP 1.43, model L1 33b Q3\_K\_M : koboldcpp.exe --usecublas mmq --threads 1 --gpulayers 63 --contextsize 4096
>
>\--blasbatchsize 512 : 23139 MB (init) -> 23577 (at 4096 ctx)
\--blasbatchsize 64 : 22439 (init) -> 23977 (at 4096 ctx)
>
>Test 2 LlamaCPP b1204, model L1 33b Q3\_K\_M : perplexity -m -f wiki.test.raw -t 1 -ngl 100 -c 4096 --chunks 2
>
>\-b 512 : 22891 (init) -> 23329 (at 4096 ctx)
\-b 64 : 22493 (init) -> 24031 (at 4096 ctx)
>
>The numbers seem consistent between KCPP and LCPP.
I found the faulty line of code this morning on the KoboldCPP side of the force, and released an edited build of KoboldCPP (link at the end of this post) which fixes the issue.
# kobold.cpp-frankensteined_experimental_v1.43.b1204e
​
>This Frankensteined release of KoboldCPP 1.43 is just an updated experimental release cooked for my own use and shared with the adventurous, or those who want more context size under Nvidia CUDA mmq, until LlamaCPP moves to a quantized KV cache that can also be integrated within the accessory buffers.
>
>For official releases of KoboldCPP made by its developer LostRuins, go on the original repo :
[https://github.com/LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp).
[https://github.com/LostRuins/koboldcpp/releases](https://github.com/LostRuins/koboldcpp/releases)
>
>On this kobold.cpp-frankensteined\_experimental\_v1.43.b1204e :
* KoboldCPP 1.43
* Including a CUDA mmq "fix" to have more context under Llama 1, 2, and CodeLlama models (two digits percentage increase, especially for high context sizes)
* Updated with a merge of LlamaCPP b1204 and the 5 following commits. (see [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp))
* With unlocked context choice in the KoboldCPP command line : use with caution, multiples of 512 highly recommended, if not 256, if not 128 (it works for me), if not good luck!
* To be tried : 96 stop tokens and banned tokens instead of 10.
* Compiled with CUDA 12.1.
* A smaller non-CUDA .exe with ClBlast and OpenBlas is also provided, without being tested.
First release ever, be warned!
[https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.43.b1204e](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.43.b1204e)
| 2023-09-10T00:15:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16elfqa/for_cuda_mmq_users_on_koboldcpp_heres_a_fix_to/
|
Nexesenex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16elfqa
| false | null |
t3_16elfqa
|
/r/LocalLLaMA/comments/16elfqa/for_cuda_mmq_users_on_koboldcpp_heres_a_fix_to/
| false | false |
self
| 1 | null |
How to stop generating extra actions like *pauses* or (gulps)
| 1 |
See title. I'm running Llama-2-7b-chat-GPTQ in Colab, and I have a roleplay prompt. Llama keeps replying with a lot of extra stage-direction-type stuff like *frowns*, *fidgets*, *ahem*, etc. but I only want it to output dialogue. I've tried telling it not to use asterisks or any extra actions in the system prompt, but nothing seems to work. Has anyone found a way to control this behavior?
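If prompting can't fully suppress it, a post-processing fallback would be to just strip those spans with a regex. A rough sketch (the patterns are my own guess at what covers *action* and (action) markers):

```python
import re

def strip_stage_directions(text):
    # Drop *action* and (action) spans, then collapse leftover whitespace.
    text = re.sub(r"\*[^*]*\*", "", text)
    text = re.sub(r"\([^)]*\)", "", text)
    return re.sub(r"\s{2,}", " ", text).strip()

print(strip_stage_directions("*frowns* I'm fine. (gulps) Really."))  # I'm fine. Really.
```

Some backends also let you ban the `*` token outright via logit bias or token bans, which might be cleaner.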
| 2023-09-10T00:26:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16elodu/how_to_stop_generating_extra_actions_like_pauses/
|
firejak308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16elodu
| false | null |
t3_16elodu
|
/r/LocalLLaMA/comments/16elodu/how_to_stop_generating_extra_actions_like_pauses/
| false | false |
self
| 1 | null |
How can I get LLaMA2 to stop being chatty and only output the result
| 1 |
I want to see if I can use the model for some classification stuff that I currently use ChatGPT for, but have issues tuning the prompt so that the model just outputs what I want it to output, without chatting.
Specifically, even if I tell it stuff like: "ONLY OUTPUT X AND NOTHING ELSE, DO NOT CHAT DO NOT CONVERSE, DO NOT TALK, ONLY OUTPUT THE RESULT WITHOUT ANY ADDITIONAL TEXT", I still get results like "Based on x you provided, I would say y" or "Sure thing, here is the result: x"
Are there things I can tell it in the system prompt to get what I want here? I tried this on 7b and 13b
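One pattern that might help (this is a generic few-shot trick, not specific to Llama 2) is ending the prompt mid-pattern, so the model just continues it instead of chatting. The labels and wording below are only an example:

```python
# Few-shot prompt: the model tends to continue the established pattern,
# so the next token after "Label:" is usually just the label itself.
FEW_SHOT = (
    "Classify the sentiment of the text as POSITIVE or NEGATIVE. "
    "Output only the label.\n\n"
    "Text: I loved it.\nLabel: POSITIVE\n"
    "Text: Terrible service.\nLabel: NEGATIVE\n"
    "Text: {text}\nLabel:"
)

prompt = FEW_SHOT.format(text="The food was great.")
print(prompt)
```

Pairing this with a stop sequence of `"\n"` makes generation halt right after the label.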
| 2023-09-10T00:44:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16em3ao/how_can_i_get_llama2_to_stop_being_chatty_and/
|
HoodFruit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16em3ao
| false | null |
t3_16em3ao
|
/r/LocalLLaMA/comments/16em3ao/how_can_i_get_llama2_to_stop_being_chatty_and/
| false | false |
self
| 1 | null |
lmstudio question
| 1 |
Is lmstudio safe to install? Any malware hiding in it?
Is it a good place to start local?
| 2023-09-10T01:22:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16emxkw/lmstudio_question/
|
jeffsmith202
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16emxkw
| false | null |
t3_16emxkw
|
/r/LocalLLaMA/comments/16emxkw/lmstudio_question/
| false | false |
self
| 1 | null |
Best software web-/GUI?
| 1 |
Right now I really only know about [Ooba](https://github.com/oobabooga/text-generation-webui) and [koboldcpp](https://github.com/LostRuins/koboldcpp) for running and using models. I feel like they work really well when you want to tinker with the models, but if you want to actually use them, for example as a replacement for ChatGPT, they fall behind
Is there any software out there that you can recommend for running LLM's? It can also be API front ends like [SillyTavern](https://github.com/SillyTavern/SillyTavern) but preferably not
| 2023-09-10T02:46:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16eoozu/best_software_webgui/
|
Victor_Lalle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eoozu
| false | null |
t3_16eoozu
|
/r/LocalLLaMA/comments/16eoozu/best_software_webgui/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'vzi8XIg9iHZoY_ojAp4V3XDNw24HDgSSeawYlCqOXTg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?width=108&crop=smart&auto=webp&s=76ff86bb6a822acfbbf5cc9729a6d83871b0aa78', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?width=216&crop=smart&auto=webp&s=7caa175933e72fd2f7f788ea9a7b20aba2461319', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?width=320&crop=smart&auto=webp&s=2d15c154e11bb4b80edd1775f5456962ddeefe83', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?width=640&crop=smart&auto=webp&s=092a216c62e7ea3b6121cc48fe4363364f6f3416', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?width=960&crop=smart&auto=webp&s=4dc0ae54bdd9bff882db62b6d1d82b11b8b69687', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?width=1080&crop=smart&auto=webp&s=8bef277e29177a465f8ecb0999823d5e0217c906', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gZtEjtY5y0kHgzJ1kkNayIpGgzDSIysuGY1d3w5tFCY.jpg?auto=webp&s=0816fdd11ff033f433fe05adec80fda9192285d8', 'width': 1200}, 'variants': {}}]}
|
Simplest way of running a llm on a hosted server
| 1 |
I'd like to run an uncensored LLM like Nous-Hermes on some remote machine where it can run fast, and then use text generation via some kind of API. I don't mind paying for it.
What I want, though, is the simplest setup possible.
Is this possible with huggingface hub? Could I get some pointers?
Thanks
| 2023-09-10T03:33:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16epmf4/simplest_way_of_running_a_llm_on_a_hosted_server/
|
FahrenheitUnrequited
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16epmf4
| false | null |
t3_16epmf4
|
/r/LocalLLaMA/comments/16epmf4/simplest_way_of_running_a_llm_on_a_hosted_server/
| false | false |
self
| 1 | null |
Any use for 60gb of vram
| 1 |
I have a dual RTX 3090 setup currently and I was thinking about sticking an A2000 in the extra slot for 60GB of VRAM, but I'm not sure there's any point. Are there any models that can run on 60GB that can't run on 48GB?
| 2023-09-10T04:10:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16eqbx3/any_use_for_60gb_of_vram/
|
Ergosyn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eqbx3
| false | null |
t3_16eqbx3
|
/r/LocalLLaMA/comments/16eqbx3/any_use_for_60gb_of_vram/
| false | false |
self
| 1 | null |
SSD as a ram
| 1 |
There is a lot of software (in Linux it is even natural) to create HDD space from RAM.
But is there any trick to use SSD space as RAM? And if so, will it even be usable for big models like 70b, 170b..?
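For reference, llama.cpp already memory-maps model files by default, so weights live in the OS page cache and can be paged in from the SSD on demand. A minimal Python sketch of that mechanism (file name and sizes are arbitrary):

```python
import mmap
import os
import tempfile

# Map a file on disk (e.g. an SSD) into the process address space.
# Pages are loaded on demand and can be evicted back to disk under memory
# pressure -- this is how mmap'd model weights can exceed physical RAM.
path = os.path.join(tempfile.gettempdir(), "weights.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    mm[0:5] = b"hello"          # writes go through the page cache
    print(mm[0:5])              # b'hello'
    mm.close()
os.remove(path)
```

The catch is speed: anything outside the cached working set has to come off the SSD, so for a model that doesn't fit in RAM expect it to be far slower than real memory, not a substitute for it.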
| 2023-09-10T06:17:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16esosg/ssd_as_a_ram/
|
ovnf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16esosg
| false | null |
t3_16esosg
|
/r/LocalLLaMA/comments/16esosg/ssd_as_a_ram/
| false | false |
self
| 1 | null |
CodeLlama-xb/CodeLlama-xb-Python vs. CodeLlama-xb-instruct
| 1 |
Hey guys, so I have googled around and read the documentation, but I am still confused about the difference between CodeLlama-xb/CodeLlama-xb-Python and CodeLlama-xb-instruct. I know the xb model is the base model (for several languages) and the Python model specializes in Python, but what's the instruct model and how is it different from the other 2 models?
Would really appreciate your help. Thanks a million!
| 2023-09-10T07:04:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16ethxq/codellamaxbcodellamaxbpython_vs/
|
--leockl--
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ethxq
| false | null |
t3_16ethxq
|
/r/LocalLLaMA/comments/16ethxq/codellamaxbcodellamaxbpython_vs/
| false | false |
self
| 1 | null |
Training long context (32K+) 70B Llama
| 57 |
Been obsessed with training long context 70B Llama (32K+), as an individual user. Wanted to share what I've found & would love answers/tips. This is a long post.
​
Why 70B? For me, because story and creative writing consistency improves remarkably. I know Mythomax 13B lovers swear by it and I wish I could have its creativity along with the intelligence of 70B.
Why long context? Same reason. I imagine creative writing is strange for the model, because we need it to hallucinate about the right things (new ideas, directions), but not about other things (where things are, who knows what about what, genders, tendencies, etc.). In my limited testing with linear RoPE scaling, providing *consistent* long-context data (e.g., self-contained book excerpts, short stories, etc.) can encourage this behavior, somewhat. Honestly, even GPT4 struggles (but does it better than others).
I'm sure others have good reasons for longer and consistent context (code, document analysis, ERP).
​
**GPU Availability:**
You need the big guns for this. I have a bunch of Ada A6000s & 4090s which is very nice, but not enough for this task. Also, I think training LORAs are the only reasonable option 70B, for the GPU poor.
Because I'm not a millionaire, I'm using [runpod.io](https://runpod.io) for A100s. I'd love to use Lambda since they're cheaper, but A100 availability is terrible there. [vast.ai](https://vast.ai) isn't really geared toward renting lots of big GPUs on a single node. Not sure about paperspace and others. Renting H100s are stupidly expensive, and so far, I haven't found them to be >2x the performance (at 2x the cost of A100). Maybe optimization over time will yield gains.
If you know of other services with good A100+ availability and rates, let me know.
​
**Repos & QLORA vs GPTQ LORA:**
Some history (you can skip):
I started out with u/ReturningTarzan's suggestions in [this repo](https://github.com/turboderp/alpaca_lora_4bit), though like the author I found it worked, but not the way we'd like :)
I did try it again with Llama 2 just in case (and GPT4 modified the monkey patch in the repo for GQA perfectly too, once I explained what GQA was), but got similar results as Llama 1.
Later u/kaiokendev came up with [this fork](https://github.com/kaiokendev/alpaca_lora_4bit) and it worked brilliantly, and it is basically the method I still use.
Today:
These days I use the original [GPTQ lora training](https://github.com/johnsmith0031/alpaca_lora_4bit) repo or [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) (for both QLORA or GPTQ Lora). When I first started, the GPTQ repo was way faster, but when I tried recently, Axolotl QLORA was slightly faster and used slightly less VRAM. I've read some posts speculating about this - so here's a data point. I've moved on to QLORA now, in terms of VRAM and speed (I have not measured PPL, and not sure what metric matters for creative outputs).
Also, I found there were some issues with the native transformers implementation of GPTQ lora training (Axolotl uses it), probably will be ironed out with time. But the implementation in the other repo above still works fine, if you want to use it.
I found that targeting q\_proj, v\_proj, k\_proj, o\_proj, gate\_proj, down\_proj and up\_proj works better than just the Q, V like in the original Alpaca LORA paper.
I'm not sure about rank and alpha. I've had some great results with rank 8, alpha 16 (or even less sometimes, as kaiokendev's SuperHOT proves, especially targeting all the above layers), but using rank 64 or even higher sometimes can pick up some specific speech patterns and styles better.
I've tried using alpha = 2\*rank, alpha = 16 always, and alpha = rank. All seem to be suggested in various forums, and I'm not sure what is better. I use 1:1 (alpha = rank) and it hasn't destroyed my runs.
If anyone knows better, do share.
​
**RoPE Scaling Methods:**
I use linear scaling as originally proposed by kaiokendev/Meta. In Axolotl, you can achieve this by xpos\_rope (which is a patch) or rope\_scaling: type: linear (now native transformers). I got training loss =0 with xpos\_rope (maybe the scaling units is not a factor, but the actual new context length or something, dunno). They both do the same thing, so I use the native transformers method now.
I tried training with NTK-alpha but it was always inferior to linear in my testing, even after trying to optimize alpha. The [YaRN paper](https://github.com/jquesnelle/yarn) explains this is because it extrapolates some dimensions, and claim to fix it in their method. I suspect Meta's approach in CodeLlama, where they use a giant base (1e6), also minimizes the chances of extrapolation, so either approach would work (YaRN paper claims theirs is better of course!). I haven't yet explored this, and we'd need to write our own monkey patches for YaRN, for now. I kinda don't want to try anything that exllama won't support for inference.
I think the above methods are similar to linear scaling, *if* you are training for the full context you plan to use. But unlike linear scaling, the other methods above can exactly extrapolate reasonably beyond their training context too.
If anyone knows anything else, do share.
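As a sanity check on what linear scaling does, here is a minimal sketch (my own toy code, not from any repo): dividing the position by the scale factor compresses the longer range into the model's original trained range, so no dimension is extrapolated:

```python
def rope_angles(pos, dim=128, base=10000.0, linear_scale=1.0):
    # Rotary angle for each frequency pair at position `pos`. Linear
    # scaling just divides the position, i.e. it compresses 32K positions
    # into the model's original 4K range when linear_scale = 8.
    return [(pos / linear_scale) / base ** (2 * i / dim) for i in range(dim // 2)]

# Scaled position 32768 lands exactly where unscaled position 4096 did.
print(rope_angles(32768, linear_scale=8.0)[0])
```

That equivalence is also why linear scaling needs finetuning at the full target context: the fractional positions it produces are ones the base model never saw either.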
​
**Datasets:**
For my application, I use a lot of book excerpts (.epub converts easily and can be cleaned with Python scripts). I got good success using only the starting 32K of each book, because there is guaranteed to be no out-of-context information. But then, I have a bias where everything sounds like the first part of a book.
So for my next trials, I want to try using smaller model summarization or RAG, to insert "prior context" recursive summaries, every time I truncate anything to 32K. That's a lot more pre-processing than just picking 32K randomly positioned tokens from long C4 data items, but I am guessing it will be worth it.
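One way that pre-processing could be structured (a sketch; the summary tokens would come from the smaller-model or RAG step):

```python
def truncate_with_context(tokens, summary_tokens, budget=32_000):
    # If the item fits the budget, keep it whole; otherwise prepend a
    # prior-context summary and keep the most recent tokens that fit.
    if len(tokens) <= budget:
        return tokens
    tail = tokens[-(budget - len(summary_tokens)):]
    return summary_tokens + tail
```

Whether the summaries actually help consistency is the open question, of course.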
For instruct-tuning, I have had good success with reverse-prompting, i.e., train a model to generate prompts given the response, to convert plain text into Q&A pairs, based on whatever your goal is. Usually, I make several hundred manually and with GPT4's help, train the reverse-prompt model, generate more outputs from there, fix them manually/GPT4, re-train reverse-prompt model, and so on. The reverse prompt generation quality isn't great, but it has helped me get more creative responses from the model that doesn't sound like GPT3.5/4/most datasets.
I also found [kaiokendev's approach](https://kaiokendev.github.io/til) helpful, i.e., manually generating high-quality datasets (with GPT4's help in my case). For the kind of batch sizes and training token throughput I can currently achieve, LIMA is the only option for me. Fortunately, it works, though you should temper your expectations (teach style, not knowledge).
If anyone knows of any good long-context datasets, do tell. Most I found don't meet the cut (and I want to avoid unmodified GPT3.5/GPT4 creative outputs like the >!PLAGUE!< that it is).
​
**VRAM Usage & Training Methods** (the meat)**:**
Numbers below are for 4-bit QLORA (slightly higher for 4-bit GPTQ LORA), using Flash Attention 2. I found xformers VRAM to be quite close, for these sizes at least (I still use it if I have to use Windows). sdp\_attention was my goto method for VRAM reduction before, but xformers beats it today. Obviously, you want to enable gradient checkpointing too.
**Training VRAM Usage by Context**
**8K**: I have trained an 8K 70B GPTQ Lora, high rank, on Ada A6000 + 4x4090s (it used up almost all the 144GB VRAM), because I can do that at home. Batch size = 1. The more GPUs you split it over, the more the VRAM overhead. It can fit in a lot less on A100s, though I doubt it can fit in a single A100. And if you have 2, why not 16K?
**16K**: 16K context with a QLORA on 70B, rank 64, needs about 110GB VRAM (for a single batch). You can do that on 2xA100. If you spread it naively across 4xA100 it will take 138GB, and you get no benefit unless you have a clever way to use all the GPUs (more on that below).
**32K** (my goal): Needs 224GB on 4xA100 for a single batch (rank 8). Some day, perhaps I will get more A6000s to do a single batch at home (5xA6000 or 11x3090/4090 should work in theory, 11x3090 costs almost the same as single Ada A6000 if you shop!).
For GPU inference, using exllama 70B + 16K context fits comfortably in 48GB A6000 or 2x3090/4090. With 3x3090/4090 or A6000+3090/4090 you can do 32K with a bit of room to spare. exllama scales very well with multi-gpu. Beyond that, I can scale with more 3090s/4090s, but the tokens/s starts to suck. I can get 2-3 tokens/sec with A6000+4090 at 32K context, and that's my limit, for now. Maybe GGUF is faster for longer contexts?
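For anyone budgeting long-context inference VRAM, the fp16 KV cache alone is easy to estimate from Llama-2-70B's shape (80 layers, 8 KV heads with GQA, head dim 128):

```python
def kv_cache_gib(layers=80, kv_heads=8, head_dim=128, ctx=32768, bytes_per=2):
    # 2x for K and V; bytes_per = 2 for fp16.
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per / 2**30

print(f"{kv_cache_gib():.1f} GiB at 32K, {kv_cache_gib(ctx=16384):.1f} GiB at 16K")
```

That is inference-side only; training adds activations, gradients, and optimizer state on top, which is where the much larger figures above come from.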
**Training Methods**
On the A100s, I get around 1.1-1.2 million tokens trained per hour, for a single batch (for both 16K and 32K), using naive model parallel (I've heard it called 'Tensor Parallel' sometimes). It only uses one card at a time, so you get no speed up (just VRAM expansion, plus some overhead).
I'd like to scale it up, using e.g. 8xA100. Or figure out a way to get higher throughput.
​
**Question:**
Is there any multi-gpu method for QLORA/GPTQ LORA other than naive model parallel (deepspeed, fsdp, pytorch dp, etc.)? It has to work even when the model is too big to fit in a single GPU.
I've tried deepspeed zero3 with 8-bit loading and LORA training, but 16K context OOMs even on 4xA100. So the VRAM penalty is hefty. If I had a zillion A100s, sure it'll help, but not when I can only access 8.
| 2023-09-10T08:02:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16euhw5/training_long_context_32k_70b_llama/
|
Grimulkan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16euhw5
| false | null |
t3_16euhw5
|
/r/LocalLLaMA/comments/16euhw5/training_long_context_32k_70b_llama/
| false | false |
self
| 57 | null |
different card for total vram increase
| 4 |
hi, i have a setup with dual 3090 total 48gb of vram.
Is it possible, since I still have a free PCIe slot, to add, for example, a P40 to use only for a VRAM increase?
thx in advance and bests
​
| 2023-09-10T08:57:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16evdzu/different_card_for_total_vram_increase/
|
allnc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16evdzu
| false | null |
t3_16evdzu
|
/r/LocalLLaMA/comments/16evdzu/different_card_for_total_vram_increase/
| false | false |
self
| 4 | null |
Is there a distributed LLM out there that runs on a network like blockchain does?
| 1 |
I’m imagining 1000 ppl connected to a network sharing compute and all contributing to the model. Is that a thing? If not, could it be?
| 2023-09-10T10:59:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16exepu/is_there_an_distributed_llm_out_there_that_runs/
|
Overall-Importance54
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16exepu
| false | null |
t3_16exepu
|
/r/LocalLLaMA/comments/16exepu/is_there_an_distributed_llm_out_there_that_runs/
| false | false |
self
| 1 | null |
Why llama.cpp?
| 1 |
[removed]
| 2023-09-10T12:18:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16eyvsn/why_llamacpp/
|
cjviper987
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eyvsn
| false | null |
t3_16eyvsn
|
/r/LocalLLaMA/comments/16eyvsn/why_llamacpp/
| false | false |
self
| 1 | null |
Absolute cheapest local LLM
| 1 |
I keep seeing posts for building to a specific budget but had a thought “How cheaply could a machine possibly be built?” Of course there will be a lower boundary for model size but what are your thoughts for the least expensive way to run an LLM with no internet connection?
Personally, I believe mlc LLM on an android phone is the highest value per dollar option since you can technically run a 7B model for around $50-100 on a used android phone with a cracked screen.
What else???
| 2023-09-10T12:21:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16eyxgw/absolute_cheapest_local_llm/
|
SporksInjected
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16eyxgw
| false | null |
t3_16eyxgw
|
/r/LocalLLaMA/comments/16eyxgw/absolute_cheapest_local_llm/
| false | false |
self
| 1 | null |
Ampere A1 for Llama
| 4 |
I know it isn't that local, but have any of you ever tried running Llama or something similar on the Ampere A1 on Oracle Cloud? It's included in the free tier of Oracle with 4 cores and 24GB of RAM.
I'll try to get it running today and I'd be very happy if anyone has knowledge about it and can help.
It's an ARM processor, if that makes anything harder to apply.
| 2023-09-10T12:38:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16ezafb/ampera_a1_for_llama/
|
Deep-View-2411
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ezafb
| false | null |
t3_16ezafb
|
/r/LocalLLaMA/comments/16ezafb/ampera_a1_for_llama/
| false | false |
self
| 4 | null |
Chatty LLama: A fullstack Rust + react chat app using Llama-2 https://github.com/Sollimann/chatty-llama
| 1 | 2023-09-10T13:46:42 |
Sollimann
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16f0pg9
| false | null |
t3_16f0pg9
|
/r/LocalLLaMA/comments/16f0pg9/chatty_llama_a_fullstack_rust_react_chat_app/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'aw2bmsHx7O3LEYLkDGvIzb3fll9zyFLYMY-f6x2O5ME', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/gqg5isozmfnb1.png?width=108&crop=smart&auto=webp&s=f51e88965865b5df8f5f2646ff7e03ea34f6f6ea', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/gqg5isozmfnb1.png?width=216&crop=smart&auto=webp&s=b2d1ca9a75c621f3699ff082acfa339afdc81c60', 'width': 216}, {'height': 368, 'url': 'https://preview.redd.it/gqg5isozmfnb1.png?width=320&crop=smart&auto=webp&s=1ada87481f322ff99b083fe696cadff77ee49558', 'width': 320}, {'height': 736, 'url': 'https://preview.redd.it/gqg5isozmfnb1.png?width=640&crop=smart&auto=webp&s=f553f4cc0284c16791f3ccae9e326c2ad5f9d3aa', 'width': 640}], 'source': {'height': 797, 'url': 'https://preview.redd.it/gqg5isozmfnb1.png?auto=webp&s=7ef76bd23b423d137f8268247d7c19e0e6492ba0', 'width': 693}, 'variants': {}}]}
|
|||
Best code LLM in your opinion
| 1 |
I have a 4090 card and I'm looking for the best code models that you guys use.
I'm planning to use this mostly to clean and document python code.
| 2023-09-10T13:56:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16f0xi9/best_code_llm_in_your_opinion/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f0xi9
| false | null |
t3_16f0xi9
|
/r/LocalLLaMA/comments/16f0xi9/best_code_llm_in_your_opinion/
| false | false |
self
| 1 | null |
How to run LLMs on raspberry Pi
| 1 |
Hey guys, I'm looking to run 1B/3B quantised LLMs on a Raspberry pi. Please share any tips or resources that I can refer to do the same.
Thank you!
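The usual route seems to be llama.cpp (it has ARM NEON support) with a small GGUF quant. A quick back-of-envelope check of what fits in a Pi's RAM; the constants below are rough rules of thumb, not measurements:

```python
def est_size_gb(params_b, bits_per_weight=4.5, overhead_gb=0.5):
    # ~4.5 bits/weight for a Q4_K_M-style quant, plus a little for
    # context buffers; treat the result as an order-of-magnitude guide.
    return params_b * bits_per_weight / 8 + overhead_gb

for p in (1, 3, 7):
    print(f"{p}B -> ~{est_size_gb(p):.1f} GB")
```

By that estimate a quantised 1B/3B model fits a 4GB Pi with room for the OS, while 7B really wants an 8GB board.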
| 2023-09-10T13:59:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16f0zmu/how_to_run_llms_on_raspberry_pi/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f0zmu
| false | null |
t3_16f0zmu
|
/r/LocalLLaMA/comments/16f0zmu/how_to_run_llms_on_raspberry_pi/
| false | false |
self
| 1 | null |
How to prompt LLMs properly for multi-turn conversation, optionally with character card? (even if the prompt format is given)
| 1 |
For example, take the following prompt format (Airoboros):
>A chat.
USER: {prompt}
ASSISTANT:
Where would the character card go and how to format the chat history?
One way would be to make it a single-turn instruction to write the next message (this is what oobabooga seems to be doing):
>A chat.
USER: <character card for Jane> Continue the chat between Jack and Jane. Here is the chat history so far, write the next response:
Jack: Hello
Jane: Hi
Jack: How are you?
ASSISTANT: Jane:
Another way would be to repeat the turn markers (makes it hard to name the character). I've seen something similar being done by SillyTavern:
>A chat between USER and ASSISTANT. <character card for ASSISTANT>
USER: Hello
ASSISTANT: Hi
USER: How are you?
ASSISTANT:
Another example: Mythomax. The place of the character card is given, but where does the chat history go?
> <System prompt/Character Card>
>
>### Instruction: Your instruction or question here. For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
>
>### Response:
Another example: Pygmalion 2 seems to say that the system/user/model tags "may" appear multiple times to form a conversation, but then again does that mean that both options work equally well?
TL;DR: The frontends seem to make their own choices and it kind of works, but I wonder if there is a "proper" format that would work even better. Whenever I talk to a model, I keep asking myself "am I prompting this right?"
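One way to make the comparison concrete is to encode the repeated-turn-marker style as a small function and A/B it against the single-instruction style on your own model. A sketch (tag names are just the Airoboros defaults from above):

```python
def build_prompt(system, history, user_tag="USER", bot_tag="ASSISTANT"):
    # history: list of (role, text) pairs, role being "user" or "bot".
    lines = [system]
    for role, text in history:
        lines.append(f"{user_tag if role == 'user' else bot_tag}: {text}")
    lines.append(f"{bot_tag}:")  # leave the model's turn open
    return "\n".join(lines)

print(build_prompt("A chat.",
                   [("user", "Hello"), ("bot", "Hi"), ("user", "How are you?")]))
```

Whichever format the model was actually finetuned on will presumably work best; this just makes the variants easy to swap and test.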
| 2023-09-10T14:39:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16f1wag/how_to_prompt_llms_properly_for_multiturn/
|
whtne047htnb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f1wag
| false | null |
t3_16f1wag
|
/r/LocalLLaMA/comments/16f1wag/how_to_prompt_llms_properly_for_multiturn/
| false | false |
self
| 1 | null |
Llama-2-7b-gguf-v1 and VRAM usage.
| 1 |
[https://huggingface.co/dhairya0907/meta-llama-2-7b-chat-hf-gguf-v1](https://huggingface.co/dhairya0907/meta-llama-2-7b-chat-hf-gguf-v1)
This model runs entirely on the CPU, which is nice, but I can't get it to use any VRAM.
Thoughts?
| 2023-09-10T14:51:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16f26dx/llama27bggufv1_and_vram_usage/
|
Thistleknot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f26dx
| false | null |
t3_16f26dx
|
/r/LocalLLaMA/comments/16f26dx/llama27bggufv1_and_vram_usage/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '7DYO0uOBt6P9jH0zZjT9k-rxlWK-xmNeZLtqwiUQYgY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?width=108&crop=smart&auto=webp&s=0e7ed4b3c565b99c6513a2ca39e4609e4aa9e7ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?width=216&crop=smart&auto=webp&s=3677f974774a478b08f02a6fefdc6795decda432', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?width=320&crop=smart&auto=webp&s=e6dba239ae06d891e77059d09da2f7fb67c8a080', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?width=640&crop=smart&auto=webp&s=33a909226d71317b1334516ae26f88e23107bebf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?width=960&crop=smart&auto=webp&s=d3d0a519b5e580e1ac78524189c78904911c881c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?width=1080&crop=smart&auto=webp&s=d962aa8cb66bc9fbca5eb0d3089b6834e03c08a4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rAzphJLl2fhjgbGUiRGO5bkU8qvFhq1KQ0s1WHF3jjM.jpg?auto=webp&s=9d4c2e93c38bf0f44f99858d3e9fee43ca36457d', 'width': 1200}, 'variants': {}}]}
|
Getting a LLM to ask for more information
| 1 |
[removed]
| 2023-09-10T14:54:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16f297p/getting_a_llm_to_ask_for_more_information/
|
Natural-Sentence-601
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f297p
| false | null |
t3_16f297p
|
/r/LocalLLaMA/comments/16f297p/getting_a_llm_to_ask_for_more_information/
| false | false |
default
| 1 | null |
Anybody tried the Azure ML services. Can you use them to run Llama models?
| 1 |
I know Azure has virtual machines but I've seen they also have more specific machine learning services. Has anybody tried them? Can you use them to run the Llama models?
| 2023-09-10T15:15:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16f2s91/anybody_tried_the_azure_ml_services_can_you_use/
|
Zer0Ma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f2s91
| false | null |
t3_16f2s91
|
/r/LocalLLaMA/comments/16f2s91/anybody_tried_the_azure_ml_services_can_you_use/
| false | false |
self
| 1 | null |
Running WizardLM 70B (or similar) On a Budget - Recommendations?
| 1 |
[removed]
| 2023-09-10T15:18:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16f2uh6/running_wizardlm_70b_or_similar_on_a_budget/
|
JustinPooDough
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f2uh6
| false | null |
t3_16f2uh6
|
/r/LocalLLaMA/comments/16f2uh6/running_wizardlm_70b_or_similar_on_a_budget/
| false | false |
self
| 1 | null |
oobabooga api instead of openai api?
| 1 |
I've seen a few suggestions around that you can use Oobabooga to imitate the OpenAI API; I would like to do this so I can use it in Langflow.
Yet I haven't seen anything explaining how to achieve this. Can anyone enlighten me?
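From what I've gathered, text-generation-webui ships an `openai` extension that exposes an OpenAI-style endpoint; the port and path below are assumptions on my part (check your own startup logs). A minimal sketch of pointing a client at it:

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1:5001/v1"  # assumed default port for the openai extension

def build_chat_request(messages, model="local", temperature=0.7):
    # Same body shape as the OpenAI chat completions API, so tools like
    # Langflow that speak that API can be pointed at BASE_URL instead.
    return {"model": model, "messages": messages, "temperature": temperature}

def chat(messages):
    body = json.dumps(build_chat_request(messages)).encode()
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # requires the server to be running
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_request([{"role": "user", "content": "Hello"}])
```

In Langflow you would then swap the OpenAI base URL for `BASE_URL` rather than calling `chat()` directly.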
| 2023-09-10T15:30:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16f34ky/oobabooga_api_instead_of_openai_api/
|
PTwolfy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f34ky
| false | null |
t3_16f34ky
|
/r/LocalLLaMA/comments/16f34ky/oobabooga_api_instead_of_openai_api/
| false | false |
self
| 1 | null |
Multiple models running on one system
| 1 |
Is it possible to run multiple models at the same time on the same system and let them interact with each other on something like langchain?
| 2023-09-10T16:06:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16f4177/multiple_models_running_on_one_system/
|
Deep-View-2411
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f4177
| false | null |
t3_16f4177
|
/r/LocalLLaMA/comments/16f4177/multiple_models_running_on_one_system/
| false | false |
self
| 1 | null |
GPT 3.5 Turbo vs. Phind Code v2 adding code to 15.k token application
| 1 |
I have a very poorly written SPA using vanilla JS, Alpine.js, and HTML, with the back end on Directus. I don't have access to GPT-4 32k.
The entire application, minified, is 15,500+ tokens.
IMO, Phind provides a much more contextual answer (actually, there are two or three different multi-selectors already in the code).
GPT's answer is honestly worthless in this scenario; I would need to change almost everything to get it to work there.
P.S. With a proper description of the problem and some details, GPT-4 gives really good (and better than mine) code that could have been plugged into all 3 mentioned places right away.
One of the things I'm evaluating, in terms of local coding models, is whether they could comment on GitLab/GitHub issues with a proposed solution based on the code in the repo. Unfortunately/fortunately, I max out at 30k context and it's really slow, so I will need to start applying some RAG around it. Based on current tests, I have really high hopes that in a high-quality repo, with a large context window, you will be able to get results as usable as GPT-4's without transferring IP over the network.
On the side, as was also mentioned in other threads, I think there is an entire field to explore around perfecting a "context card" to improve programming capabilities. For example, if you were to go on a two-week vacation and needed a junior to finish some work for you, what would you put in a good instruction? I imagine high-quality datasets of very well annotated, short pieces of information that would work very well with the embedding and reranking one of the fellow redditors used recently.
Prompt:
"The application above is written in html, javascript and alpine.js library, I have introduced few different ways to dynamic tag multi selector write more universal but similar component I can reuse in every part of this application
Only code for selector nothing else
You have to use alpine.js"
Code was given in the system message, as otherwise it seemed to be completely neglected.
GPT 3.5 turbo 16k:
Sure! Here's the code for a dynamic tag multi-selector component using Alpine.js:
```html
<div x-data="{ tags: [], selectedTags: [] }">
<input type="text" x-model="searchTerm" @input="filterTags()" placeholder="Search tags...">
<div>
<template x-for="tag in tags">
<button x-text="tag" @click="addTag(tag)"></button>
</template>
</div>
<div>
<template x-for="tag in selectedTags">
<button x-text="tag" @click="removeTag(tag)"></button>
</template>
</div>
</div>
<script>
const tags = ['Tag 1', 'Tag 2', 'Tag 3', 'Tag 4', 'Tag 5'];
function filterTags() {
this.tags = tags.filter(tag => tag.toLowerCase().includes(this.searchTerm.toLowerCase()));
}
function addTag(tag) {
if (!this.selectedTags.includes(tag)) {
this.selectedTags.push(tag);
}
}
function removeTag(tag) {
this.selectedTags = this.selectedTags.filter(selectedTag => selectedTag !== tag);
}
</script>
```
In this code, we have a text input field for searching tags. The `tags` array contains all the available tags. As the user types in the search input, the `filterTags` function is called to filter the tags based on the search term. The filtered tags are displayed in a list of buttons.
The `selectedTags` array contains the tags that have been selected by the user. When a tag button is clicked, the `addTag` function is called to add the tag to the `selectedTags` array. If a tag is already selected, it won't be added again. To remove a tag, the `removeTag` function is called to filter out the selected tag from the `selectedTags` array.
You can customize the code as per your requirements and integrate it into your application.
‐----------------------
Phind v2:
Here's a reusable Alpine.js component for a dynamic multi-selector tag picker:
```html
<div x-data="{
tags: [],
open: false,
value: [],
onChange(val) {
if (this.value.includes(val)) {
// Do nothing if the tag is already selected
} else {
this.value.push(val);
this.open = false;
}
},
remove(val) {
var i = this.value.indexOf(val);
if (i > -1) {
this.value.splice(i, 1);
}
this.open = false;
},
clearEmptyTags() {
this.tags = [];
// Populate this.tags with your tags
}
}" @click="clearEmptyTags()" x-on:click.outside="open = false" x-modelable="value" x-model="supposed_form_tags">
<div class="multi-tag-select">
<div style="position: absolute; margin-top: -27px; margin-left: -200px; padding: 5px;" x-show="value.length">
<template x-for="tag in value">
<span class="selected-tag span_tag" @click="remove($el.textContent)" x-text="tag"></span>
</template>
</div>
<div class="button tag-pick" x-on:click="open = !open">
<span class="tag-text">#TopicTags</span>
<span class="tag-arrow">
<span class="material-symbols-outlined"> arrow_drop_down </span>
</span>
</div>
</div>
<ul class="tag-list" style="position: absolute;" x-show="open" x-transition.opacity>
<template x-for="tag in tags">
<li @click="onChange($el.textContent)" x-text="tag"></li>
</template>
</ul>
</div>
```
This component allows users to select multiple tags from a dropdown list. The `clearEmptyTags()` function should be modified to populate the `tags` array with your actual tags. You can also customize the styling as needed.
| 2023-09-10T17:06:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16f5j58/gpt_35_turbo_vs_phind_code_v2_adding_code_to_15k/
|
kpodkanowicz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f5j58
| false | null |
t3_16f5j58
|
/r/LocalLLaMA/comments/16f5j58/gpt_35_turbo_vs_phind_code_v2_adding_code_to_15k/
| false | false |
self
| 1 | null |
Can the RX7900XT (Or any other gpu with 20gb of vram) now run 30b models?
| 1 |
I have one, and as far as I know people usually buy 24GB cards for 30B models, but do I have any chance of running one without buying new hardware? Are there people with similar hardware/VRAM configurations?
| 2023-09-10T17:06:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16f5jg1/can_the_rx7900xt_or_any_other_gpu_with_20gb_of/
|
ethanol_addicted
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f5jg1
| false | null |
t3_16f5jg1
|
/r/LocalLLaMA/comments/16f5jg1/can_the_rx7900xt_or_any_other_gpu_with_20gb_of/
| false | false |
self
| 1 | null |
Model/LoRA with up to date LLM knowledge
| 1 |
I have what I would consider basic questions about transformer models and how various technologies interface with each other. I don't think these kinds of questions are valuable enough to ask here, but I'm still curious. Has anyone trained/fine-tuned something that can answer them for me? I realize that state-of-the-art, up-to-date knowledge is always going to be months behind, but from what I can tell, Llama-based models are at least a year behind, and GPT-4 is 2 years behind. If the model also had knowledge of the inner workings of Stable Diffusion, that would be a plus.
I can run up to 70b models.
| 2023-09-10T18:06:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16f71x9/modellora_with_up_to_date_llm_knowledge/
|
clyspe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f71x9
| false | null |
t3_16f71x9
|
/r/LocalLLaMA/comments/16f71x9/modellora_with_up_to_date_llm_knowledge/
| false | false |
self
| 1 | null |
Local AI PC build advice
| 1 |
[removed]
| 2023-09-10T18:27:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16f7lx3/local_ai_pc_build_advice/
|
Ewypig
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f7lx3
| false | null |
t3_16f7lx3
|
/r/LocalLLaMA/comments/16f7lx3/local_ai_pc_build_advice/
| false | false |
self
| 1 | null |
Is era of training models from scratch over
| 1 |
With the era of open-source models and open-source optimization tools, is the era of training models from scratch over for mid-level companies?
Training huge models is also becoming more and more expensive.
ML is now more about how to speed up and optimize models for specific use cases.
What do you guys think?
| 2023-09-10T18:36:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16f7tvu/is_era_of_training_models_from_scratch_over/
|
Spiritual-Rub925
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f7tvu
| false | null |
t3_16f7tvu
|
/r/LocalLLaMA/comments/16f7tvu/is_era_of_training_models_from_scratch_over/
| false | false |
self
| 1 | null |
Why are there no consumer grade ML GPU cards in between memory starved Gamer GPUs and superexpensive Enterprise GPUs?
| 1 |
Okay, unless I'm missing something, you either get a gaming GPU that's limited to 24GB, which is just fine for gamers but a pathetic amount of memory to even do inference with all but the smallest models, let alone train, and which has very limited to nonexistent capability to link together.
Or you put on a fedora and trenchcoat and go ring up your black-market contacts for the privilege of buying enterprise-level GPU workstations for tens of thousands, which is so much of a boon that they don't even offer it to mere mortal consumers.
Or you dig through the reseller market for a cast-off, slightly less expensive enterprise GPU someone is willing to part with.
Would it be that difficult to introduce a mid-level 'professional' GPU where they just take a gamer GPU, stick some more memory on it, and maybe also reintroduce a slightly pumped-up NVLink that can handle more than 2 cards?
Is anything like that in the pipes?
| 2023-09-10T18:43:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16f7zb4/why_are_there_no_consumer_grade_ml_gpu_cards_in/
|
cyborgsnowflake
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 1 |
{'gid_2': 1}
|
16f7zb4
| false | null |
t3_16f7zb4
|
/r/LocalLLaMA/comments/16f7zb4/why_are_there_no_consumer_grade_ml_gpu_cards_in/
| false | false |
self
| 1 | null |
Any problems with mixing GPUs? 4090+3090
| 1 |
I have a 4090 and want to expand to get 48GB of VRAM to run larger models.
Is it a bad idea to pair it with a 3090? Will this give me trouble if I'm trying to do anything beyond inference like training?
3090s are cheaper and I saw on this forum someone mentioned that it works just fine.
I also saw that pairing GPUs of different generations can be problematic for deep learning so I'm wondering if anyone has experience in this area
| 2023-09-10T19:23:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16f91ql/any_problems_with_mixing_gpus_40903090/
|
yellowcustard77
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16f91ql
| false | null |
t3_16f91ql
|
/r/LocalLLaMA/comments/16f91ql/any_problems_with_mixing_gpus_40903090/
| false | false |
self
| 1 | null |
What settings to load falcon-180b-chat.Q3_K_L.gguf in textgen webui with 2x 4090s and 64gb of ram?
| 1 |
I've only used gptq models and I can't get this working. Through playing around randomly I managed to load it with:
n-gpu-layers 42
n\_ctx 2048
threads 16 (using 5950x)
n\_batch 511 (I have no idea what this means)
tensor split 23, 24
But when I try inferrence I get:
`CUDA error 2 at D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\`[`ggml-cuda.cu:5031`](https://ggml-cuda.cu:5031)`: out of memory`
`C:\arrow\cpp\src\arrow\filesystem\`[`s3fs.cc:2829`](https://s3fs.cc:2829)`: arrow::fs::FinalizeS3 was not called even though S3 was initialized. This could lead to a segmentation fault at exit`
Now I realize this means out of memory, though I'm not sure if it's referring to the RAM or the VRAM. CUDA sounds like VRAM, though I get the same error with fewer layers offloaded, so I'm not sure.
Do I just need to try a smaller quant? Or is there something in the textgen webui settings that would fix this? Like maybe the n\_batch? Or some other setting(s) I haven't even touched?
I would like to at least see falcon 180B's quality before getting another 64gb of ram.
Thanks to anyone who can help
| 2023-09-10T20:13:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16facgu/what_settings_to_load_falcon180bchatq3_k_lgguf_in/
|
UnarmedPug
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16facgu
| false | null |
t3_16facgu
|
/r/LocalLLaMA/comments/16facgu/what_settings_to_load_falcon180bchatq3_k_lgguf_in/
| false | false |
self
| 1 | null |
is this build good for ai?
| 1 |
Processor: Ryzen 9 7900X.
Graphics card: NVIDIA founders RTX 3090.
CPU cooler: Cooler Master Hyper 212 Black Edition.
Motherboard: Gigabyte B650 Aorus Elite AX ATX AM5.
SSD: Samsung 970 EVO Plus 2 TB.
Case: Corsair D4000 airflow ATX mid Tower case.
Power Supply: Corsair RM750e 750W.
| 2023-09-10T20:14:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16fadzb/is_this_build_good_for_ai/
|
Many-Corner-6700
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fadzb
| false | null |
t3_16fadzb
|
/r/LocalLLaMA/comments/16fadzb/is_this_build_good_for_ai/
| false | false |
self
| 1 | null |
KoboldCPP / LlamaCPP Frankensteined - Some Blast Batch Size tests
| 1 |
Some results with llamacpp-frankensteined\_b1204e\_b1209\_Cublas\_12.1\_bin (link at the bottom of the page), testing various values for **Blast Batch Size (BBS) :**
--blasbatchsize x
**Tests of VRAM occupation made on a RTX 3090 with full layers offload :**
**For all :**
llm\_load\_tensors: VRAM used: 16958 MB
\+ **llama\_new\_context\_with\_model: kv self size = 1536.00 MB**
(total : 18484, including the approx 480MB additional buffer populated when full context is reached, that's our baseline for all BBS values)
\+ fixed blast buffer accordingly to BBS size.
**BBS 2048**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 2048 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **22833 MB VRAM max -> ratio 100%**
approx : 18010MB + 4288MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 31237.98 ms / 8192 tokens ( 3.81 ms per token, **262.24 tokens per second) -> ratio 100%**
**BBS 1024**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 1024 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **20627 MB VRAM max -> ratio 90.3%**
approx : 18010MB + 2144MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 31683.44 ms / 8192 tokens ( 3.87 ms per token, **258.56 tokens per second) -> ratio 98.6%**
**BBS 512**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 512 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **19573 MB VRAM max -> ratio 85.7%**
approx : 18010MB + 1072MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 34139.16 ms / 8192 tokens ( 4.17 ms per token, **239.96 tokens per second) -> ratio 91.5%**
**BBS 256**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 256 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **19029 MB VRAM max -> ratio 83.3%**
approx : 18010MB + 536MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 38949.93 ms / 8192 tokens ( 4.75 ms per token, **210.32 tokens per second) -> ratio 80.2%**
**BBS 128**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 128 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **18743 MB VRAM max -> ratio 82.1%**
approx : 18010MB + 268MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 45749.06 ms / 8192 tokens ( 5.58 ms per token, **179.06 tokens per second) -> ratio 68.3%**
**BBS 64**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 64 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **18629 MB VRAM max -> ratio 81.6%**
approx : 18010MB + 134MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 74474.66 ms / 8192 tokens ( 9.09 ms per token, **110.00 tokens per second) -> ratio 41.9%**
**BBS 32**
perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 32 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **18555 MB VRAM max -> ratio 81.3%**
approx : 18010MB + 67MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens)
llama\_print\_timings: prompt eval time = 137798.75 ms / 8192 tokens ( 16.82 ms per token, **59.45 tokens per second) -> ratio 22.7%**
​
GPU-Z screenshot with the Frankenstein Llama version b1209 :
https://preview.redd.it/0corjt147hnb1.png?width=1175&format=png&auto=webp&s=bdf859f745ea08857ec32d78ba44881848e74fc1
For info, screenshot with the official Llama version b1209 :
https://preview.redd.it/t28vufbaehnb1.png?width=1215&format=png&auto=webp&s=355d94ac93806c6c438086615f5484cc3ab07b15
Observe the used memory growth curves along the processing of the context. The smaller the batch, the higher it goes.
​
**Conclusion :**
\- Works as intended, unlike the official releases since early August 2023, in which each batch weighs more in VRAM than the previous one as it is processed (thus, OOM will happen faster on a lot of small batches than on a smaller number of bigger ones, despite the smaller initial buffer for the smaller ones)
\- The sweet BBS spots for most people are 512 or even 256, to limit VRAM occupation without sacrificing much performance. 128 is still usable, and allows the most optimal context size. The performance collapse comes at 64, and of course 32 is worst, while 2048 is almost useless compared to 1024, the sweet spot for the speed freaks.
\- Note : Q3\_K\_M is itself the best k\_quant for a low size allowing high context (once something goes out of context, it doesn't exist anymore, so its perplexity is infinite) with an acceptable perplexity loss compared to f16.
Resources :
[https://www.reddit.com/r/LocalLLaMA/comments/16elfqa/for\_cuda\_mmq\_users\_on\_koboldcpp\_heres\_a\_fix\_to/](https://www.reddit.com/r/LocalLLaMA/comments/16elfqa/for_cuda_mmq_users_on_koboldcpp_heres_a_fix_to/)
[https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.43.b1204e](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.43.b1204e)
| 2023-09-10T20:24:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16famrm/koboldcpp_llamacpp_frankensteined_some_blast/
|
Nexesenex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16famrm
| false | null |
t3_16famrm
|
/r/LocalLLaMA/comments/16famrm/koboldcpp_llamacpp_frankensteined_some_blast/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'sK7Y9HZpllGqqgn2-fKa0H1PoN54ZTwGMyuutmT7JtE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=108&crop=smart&auto=webp&s=5144621337daa51eb6689587ec5204b6cebe4017', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=216&crop=smart&auto=webp&s=d1bd5e252cd91bb832c7b62a11bc1e09cd7e340b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=320&crop=smart&auto=webp&s=d4df4266d22ab1aaedb5ad297e97c9f79034ed3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=640&crop=smart&auto=webp&s=28e5f7a71542eeb7fa5c5b7a237bb667c9c1c394', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=960&crop=smart&auto=webp&s=33243b2ae0a783a6eacef9514d66cd95e0ee60ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=1080&crop=smart&auto=webp&s=ffeb80411333ec81eaef877bce21b27620de2ba1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?auto=webp&s=7d6b03e85f3b0b77837eb3dd25fe41c143971962', 'width': 1200}, 'variants': {}}]}
|
|
Are there any graphics cards priced ≤ 300€ that offer good performance for Transformers LLM training and inference?
| 8 |
Are there any graphics cards priced ≤ 300€ that offer good performance for Transformers LLM training and inference? (Used would be totally ok too)
I like to train small LLMs (3B, 7B, 13B). Already trained a few.
But I want to get things running locally on my own GPU, so I decided to buy a GPU. Now I am looking around a bit. And I think the RTX 3060 12 GB is a great card for my budget.
The prices for that card start from 278€
What do you think, is there a better option for my budget? How far along is ROCm? Would AMD cards also be an option?
I use Arch Linux.
Thank you.
| 2023-09-10T21:25:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16fc9i6/are_there_any_graphics_cards_priced_300_that/
|
InternationalTeam921
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fc9i6
| false | null |
t3_16fc9i6
|
/r/LocalLLaMA/comments/16fc9i6/are_there_any_graphics_cards_priced_300_that/
| false | false |
self
| 8 | null |
Dynalang code released
| 1 |
Disclaimer: I'm not responsible for the code or paper.
[Dynalang leverages diverse types of language to solve tasks by using language to predict the future via a multimodal world model.](https://reddit.com/link/16fd097/video/cwn9ovsq1inb1/player)
Code: [https://github.com/jlin816/dynalang](https://github.com/jlin816/dynalang)
Project: [https://dynalang.github.io/](https://dynalang.github.io/)
Paper: [https://arxiv.org/abs/2308.01399](https://arxiv.org/abs/2308.01399)
| 2023-09-10T21:54:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16fd097/dynalang_code_released/
|
ninjasaid13
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fd097
| false | null |
t3_16fd097
|
/r/LocalLLaMA/comments/16fd097/dynalang_code_released/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'aNwPHA6U-XUNV4B2lVhRynEz0EOkPRSyPpKMyjvYkgY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=108&crop=smart&auto=webp&s=6b43263969f6d66371d6aa191d926e3d111af291', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=216&crop=smart&auto=webp&s=889be809d4f3a267d6ed7ea29d61b2e7f62d9e2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=320&crop=smart&auto=webp&s=cbaa30fd9d711fbf392e5f6d12780b9ee5c1e261', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=640&crop=smart&auto=webp&s=8bbc34871f8f3a16657802d034c4a68ea9bb3da1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=960&crop=smart&auto=webp&s=365cd2032766cc7cab76f5e153fc5db73bac21ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=1080&crop=smart&auto=webp&s=8bb63d01facb849c4e396fb83f640f10423aec75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?auto=webp&s=1de1b8be5722b689b5ce7b49a6d7069d06e1f97c', 'width': 1200}, 'variants': {}}]}
|
|
Editing specific sections of documents?
| 1 |
Are there any pipelines or perhaps a Langchain chain that would allow me to use an LLM to identify and edit specific portions/sections of a document based on a query?
I understand I can have the document indexed using a naive character split of a set size, edit the relevant chunk, and re-append it to the document. However, if the content to be edited is spread across two chunks, I would end up having to regenerate both chunks and re-append them to the original document, and I don't know how to include the context of the previous chunk to get a smooth continuation into the second chunk.
Hence my question, is there any implementation for this? Or a simpler, better approach that I am missing? Any resources or help is greatly appreciated.
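For context, the overlap-based split I've been considering as a workaround would look something like this (just a sketch, not a full solution to the cross-chunk edit problem):

```python
def split_with_overlap(text, chunk_size=1000, overlap=200):
    """Fixed-size character split where consecutive chunks share `overlap` chars.

    The shared tail gives the model continuity when it regenerates the
    second of two adjacent chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

chunks = split_with_overlap("abcdefghij" * 10, chunk_size=40, overlap=10)
```

Editing a span that crosses a boundary would then mean regenerating the second chunk with the shared tail prepended as context, and de-duplicating that tail when re-appending.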
| 2023-09-10T22:23:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16fdr8r/editing_specific_sections_of_documents/
|
ShaneMathy911
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fdr8r
| false | null |
t3_16fdr8r
|
/r/LocalLLaMA/comments/16fdr8r/editing_specific_sections_of_documents/
| false | false |
self
| 1 | null |
What type of model Joyland is using.
| 1 |
Joyland AI is using a very interesting model. Why is it interesting? Well, it gives short, chat-like replies. I've tried many models such as MythoMax, but those models give me story-style replies, or whatever, when I want casual chatting with a character. My question is: what model can do that?
| 2023-09-10T22:24:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16fdsl4/what_type_of_model_joyland_is_using/
|
swwer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fdsl4
| false | null |
t3_16fdsl4
|
/r/LocalLLaMA/comments/16fdsl4/what_type_of_model_joyland_is_using/
| false | false |
self
| 1 | null |
Meta Is Developing a New, More Powerful AI System as Technology Race Escalates
| 1 | 2023-09-10T23:03:21 |
https://www.wsj.com/tech/ai/meta-is-developing-a-new-more-powerful-ai-system-as-technology-race-escalates-decf9451
|
hzj5790
|
wsj.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16feqw6
| false | null |
t3_16feqw6
|
/r/LocalLLaMA/comments/16feqw6/meta_is_developing_a_new_more_powerful_ai_system/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '55LVCrtKrMX3T9qfEtCyWuvCswUr0nN14F-1SUnZYxE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=108&crop=smart&auto=webp&s=e56e369ec93aca35a99de9830a8b608948668143', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=216&crop=smart&auto=webp&s=06ac40196c141266bc7c1ee4249ae9f83fbaf8d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=320&crop=smart&auto=webp&s=b199d04c3397fde0272684dee156f865b3814170', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=640&crop=smart&auto=webp&s=5443ec481a38788cb6a79c5fd3dfcd310c58bdaa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=960&crop=smart&auto=webp&s=be476264f4fd8fa692f8b3434af8585c83abaa6b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=1080&crop=smart&auto=webp&s=c6184ddb9e985278b81b26b87bf3ae0003571ac6', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?auto=webp&s=7ba094d3580f51ff65d9b304610595c57de92788', 'width': 1280}, 'variants': {}}]}
|
||
Data Extraction using fine-tuned Llama or any other LLM?
| 1 |
Hey Reddit,
I'm working on a tool to pull data from highly irregular Excel files. I've gotten reasonable results, which are extremely fast, with standard Python code, but it's far from perfect due to the lack of standardized templates.
Interestingly, when I tested GPT-4 on a sample table, it did a decent job at data extraction. However, relying solely on GPT-4 has its downsides, like token limits and slow processing speed (and data privacy issues). Plus, splitting the Excel sheet to fit within these limits results in loss of context and data.
I'm considering fine-tuning a language model to post-process data that was in a Pandas DataFrame (perhaps converted to JSON). Has anyone had success with this approach, or do you have alternative recommendations? I've tried LangChain, but it wasn't helpful.
I have figured out how to extract the relevant columns, but the post-processing part is where I am considering using an LLM that understands the domain and what needs to be extracted based on the examples I feed it.
Looking forward to your thoughts! And would be happy to answer any additional questions.
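One way to stay under the token limit without losing whole rows is to batch the extracted records into JSON payloads of bounded size and send one payload per LLM request. A minimal stdlib-only sketch (the character budget standing in for a token limit, and the field names, are illustrative assumptions):

```python
import json

def chunk_rows_for_llm(rows, max_chars=4000):
    """Split extracted rows into JSON payloads under a rough size budget.

    max_chars is a crude stand-in for a token limit (~4 chars per token);
    tune it for whichever model you actually call.
    """
    chunks, current = [], []
    for row in rows:
        candidate = current + [row]
        if current and len(json.dumps(candidate)) > max_chars:
            chunks.append(json.dumps(current))  # flush the full chunk
            current = [row]                     # start a new one with this row
        else:
            current = candidate
    if current:
        chunks.append(json.dumps(current))
    return chunks

# hypothetical extracted rows; in practice these come from the DataFrame
rows = [{"item": f"part-{i}", "qty": i} for i in range(50)]
payloads = chunk_rows_for_llm(rows, max_chars=500)
# each payload is valid JSON within budget; send one per request
```

Because every chunk is complete rows, no record gets cut in half the way a naive sheet split would cut it.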
| 2023-09-10T23:03:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16feqwu/data_extraction_using_finetuned_llama_or_any/
|
rs35plus1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16feqwu
| false | null |
t3_16feqwu
|
/r/LocalLLaMA/comments/16feqwu/data_extraction_using_finetuned_llama_or_any/
| false | false |
self
| 1 | null |
Is the 3060 with 12gb of ram okay for a LLM exclusive API server?
| 1 |
The goal is that I use this server on any and all applications I could need the ChatGPT API on in an effort to ditch it. I'm going to be running Llama2-13b-Chat and for a while I've been running it on CPU at about 6 t/s. My goal is to increase the speed as much as possible.
Also, would it be a better idea to just set up GPTQ at this point? Or should I continue to use llama.cpp and just offload layers to the GPU?
Not sure how any of this works, as I've never owned an NVIDIA GPU before. Thanks.
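Offloading layers with llama.cpp is usually the simpler path here, and you can ballpark how many layers fit before downloading anything. The sketch below assumes round numbers (a ~8 GB Q4-quantized 13B file, 40 layers, ~1.5 GB reserved for KV cache and desktop use); the result is what you would pass as `-ngl` / `n_gpu_layers`:

```python
def layers_to_offload(vram_gb, model_size_gb, n_layers, overhead_gb=1.5):
    """Rough estimate of how many llama.cpp layers fit in VRAM.

    Assumes layer weights dominate the model file size and reserves
    overhead_gb for KV cache, scratch buffers, and the desktop itself.
    All numbers are ballpark assumptions, not measurements.
    """
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# hypothetical figures for a 13B model at ~Q4 quantization
print(layers_to_offload(vram_gb=8, model_size_gb=8.0, n_layers=40))  # → 32
```

In practice you'd start near this estimate and nudge `n_gpu_layers` down if llama.cpp reports out-of-memory.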
| 2023-09-10T23:39:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16ffm6z/is_the_3060_with_12gb_of_ram_okay_for_a_llm/
|
-Plutonium-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ffm6z
| false | null |
t3_16ffm6z
|
/r/LocalLLaMA/comments/16ffm6z/is_the_3060_with_12gb_of_ram_okay_for_a_llm/
| false | false |
self
| 1 | null |
How long does it take to load model normally?
| 1 |
For me, a 13B model took about 200 seconds to load, and a 180B model took more than an hour. Is this normal?
https://preview.redd.it/x2s2h5o8linb1.png?width=740&format=png&auto=webp&s=3855574859154d60f2971f36b3e0acecc7134836
| 2023-09-10T23:44:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16ffqk7/how_long_does_it_take_to_load_model_normally/
|
Defiant_Hawk_4731
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ffqk7
| false | null |
t3_16ffqk7
|
/r/LocalLLaMA/comments/16ffqk7/how_long_does_it_take_to_load_model_normally/
| false | false | 1 | null |
|
Code Llama Parameters
| 1 |
I have been playing with Code Llama (the 7B Python one). It does pretty well, but I don't understand what the parameters in the code mean or how I should adjust them to work best on my hardware. I'm looking at the code in: [https://github.com/facebookresearch/codellama/blob/main/llama/generation.py](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py).
Unfortunately, there are no docs that come with it, so it's hard to tell. The constructor looks like:
`def build(`

`    ckpt_dir: str,`

`    tokenizer_path: str,`

`    max_seq_len: int,`

`    max_batch_size: int,`

`    model_parallel_size: Optional[int] = None,`

`)`
The checkpoint and tokenizer make sense. What do max_seq_len, max_batch_size, and model_parallel_size mean? How should I set them? Then, generate looks like:
`def generate(`

`    self,`

`    prompt_tokens: List[List[int]],`

`    max_gen_len: int,`

`    temperature: float = 0.6,`

`    top_p: float = 0.9,`

`    logprobs: bool = False,`

`    echo: bool = False,`

`    stop_token: Optional[int] = None,`

`)`
Prompt and temp make sense; what should I put for max_gen_len and top_p?
In particular, there is a check in the code (line 143, in the generate function):

`assert max_prompt_len <= params.max_seq_len`
Why is this necessary?
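The assert exists because `build()` pre-allocates the KV cache with `max_seq_len` positions per sequence (and `max_batch_size` sequences), so no prompt can exceed that length, and generation can only fill whatever positions remain. An illustrative re-implementation of that guard (not the library's own code):

```python
def check_prompts(prompt_tokens, max_seq_len, max_gen_len=None):
    """Mimic the guard in generation.py: every prompt must fit inside the
    cache that build() pre-allocated (max_seq_len positions per sequence).
    """
    max_prompt_len = max(len(p) for p in prompt_tokens)
    if max_prompt_len > max_seq_len:
        raise ValueError(
            f"longest prompt ({max_prompt_len} tokens) exceeds "
            f"max_seq_len ({max_seq_len}); rebuild with a larger "
            f"max_seq_len or truncate the prompt"
        )
    # generation can only fill the positions left after the prompt
    room = max_seq_len - max_prompt_len
    return room if max_gen_len is None else min(room, max_gen_len)

print(check_prompts([[1] * 100, [1] * 300], max_seq_len=512, max_gen_len=256))  # → 212
```

This also suggests how to set the build parameters: max_seq_len to the longest prompt-plus-completion you expect, max_batch_size to how many prompts you pass to generate at once (both cost memory proportionally).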
| 2023-09-10T23:52:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16ffxq4/code_llama_parameters/
|
beezlebub33
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ffxq4
| false | null |
t3_16ffxq4
|
/r/LocalLLaMA/comments/16ffxq4/code_llama_parameters/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Jan1IqgWQFD57hGifKdQzDb1QzCkX_qFPj4rhliGk7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=108&crop=smart&auto=webp&s=bd935217c41e6ace4d0b7e0e320dd15352085a83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=216&crop=smart&auto=webp&s=b29571e338d708f4bc9867eae5f4cbf00223698a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=320&crop=smart&auto=webp&s=106373ce378626bbd9b83e40dcd6d63963f45de5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=640&crop=smart&auto=webp&s=245043fe1710b38285819901b759a4a71de1d4cc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=960&crop=smart&auto=webp&s=fd249ce601d7f0470c9d5a86f4888f99570e7ff2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=1080&crop=smart&auto=webp&s=4c7cf344242e3fdc78145a021235c61c8dfe9e59', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?auto=webp&s=8328f4eb183ba4b423467495bf5165ac662bda74', 'width': 1200}, 'variants': {}}]}
|
Fine tuning LLaMA 2
| 1 |
I am running an instance with 4x24 GB GPUs. A single GPU can't load the 70B model, but I want to fine-tune it on my dataset. How would I go about this?
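The usual approach on this hardware class is QLoRA: load the frozen base model in 4-bit, shard it across the GPUs (e.g. with `device_map="auto"`), and train only small LoRA adapters. A back-of-the-envelope fit check, where every number is a rough assumption rather than a measurement:

```python
def qlora_vram_estimate_gb(n_params_b, bits=4, lora_frac=0.005, overhead_gb=6):
    """Back-of-the-envelope total VRAM for QLoRA fine-tuning.

    Base weights are frozen at `bits` precision; only the LoRA adapters
    (roughly lora_frac of the parameters) carry trainable weights, grads,
    and optimizer state. Numbers are rough sizing assumptions.
    """
    base_gb = n_params_b * bits / 8          # e.g. 70B * 4 bit = 35 GB
    # 4 tensors per trainable param (weight, grad, 2 Adam moments) at ~2 bytes each
    lora_gb = n_params_b * lora_frac * 2 * 4
    return base_gb + lora_gb + overhead_gb

print(round(qlora_vram_estimate_gb(70), 1))  # → 43.8
```

With ~44 GB needed and 96 GB available across the four cards, a sharded QLoRA run should fit with room for activations; full-precision fine-tuning of 70B would not.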
| 2023-09-11T01:32:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16fi80q/fine_tuning_llama_2/
|
SaatvikRamani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fi80q
| false | null |
t3_16fi80q
|
/r/LocalLLaMA/comments/16fi80q/fine_tuning_llama_2/
| false | false |
self
| 1 | null |
Careful when training LORA using Alpaca format!
| 1 |
The Llama tokenizer produces different tokens for ### depending on whether \n precedes it, and that makes all the difference at inference time.
Details
This is a heads-up for anybody training in this format.

I spent some time debugging this, because my training somehow sucked when I used ### Instruction / ### Response: inference was all wrong.
Short story:
If you don't prepend \n before the ### markers, the token will be different than if you do,

as there is token 835 for " ###", but tokens 2277, 29937 for "##" and "#"
https://preview.redd.it/as7r3il55jnb1.png?width=903&format=png&auto=webp&s=3b9ed1135b160baae25278b4185797362af90198
The consequences are obvious: during inference you will probably never produce token 835, but always 2277, 29937.

So your training MAY be wrong if you trained with [835] ### Instruction and [2277, 29937] ### Response, and the resulting LoRA will behave erratically.

This is another reason the Alpaca format should probably NOT be used: it is prone to tokenizer bugs like this.
It's a BUG in the system - just make sure you are aware of this!
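A cheap way to catch this before training is to lint the dataset for `### ` markers that are not preceded by a newline. A pure-string sketch (it treats start-of-text as anchored; adjust to taste):

```python
def find_unanchored_headers(samples, marker="### "):
    """Flag samples where the Alpaca-style marker is not preceded by a
    newline, so the tokenizer would emit a different token sequence than
    the one the model can produce at inference time.
    """
    bad = []
    for i, text in enumerate(samples):
        pos = text.find(marker)
        while pos != -1:
            if pos != 0 and text[pos - 1] != "\n":
                bad.append((i, pos))  # (sample index, char offset)
            pos = text.find(marker, pos + 1)
    return bad

samples = [
    "### Instruction:\nDo X\n### Response:\nY",  # fine: markers follow \n
    "Some intro ### Instruction:\nDo X",         # marker glued to prose
]
print(find_unanchored_headers(samples))  # → [(1, 11)]
```

Run it over every formatted training sample; any hit means that sample will train on the ## + # token pair instead of the " ###" token.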
| 2023-09-11T01:35:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16fiabb/careful_when_training_lora_using_alpaca_format/
|
FPham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fiabb
| false | null |
t3_16fiabb
|
/r/LocalLLaMA/comments/16fiabb/careful_when_training_lora_using_alpaca_format/
| false | false | 1 | null |
|
Hi, I would love some help regarding building QA chat with Llama
| 1 |
I'm trying to build a chat for answering questions with document-based knowledge.

The model I'm trying to use has Llama 2 as its base, with further training on top. How can I use this model for QA? I've seen many guides that use Llama 2 for QA chat, but I have no idea how to plug these weights into such a setup.
Here's what I'm trying to use [openthaigpt](https://openthaigpt.aieat.or.th/released-models-version-less-than-1.0.0-beta-greater-than-16-08-23).
Any help is appreciated!
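If the goal is document-grounded QA, the model itself only needs a prompt that already contains the relevant passage; retrieval happens outside the model, so any Llama 2 derivative can be dropped in. A toy keyword-overlap retriever showing the flow (a real setup would use embeddings; the sample chunks are made up):

```python
import re

def _words(s):
    return set(re.findall(r"\w+", s.lower()))

def top_chunks(question, chunks, k=2):
    """Tiny keyword-overlap retriever: rank document chunks by shared
    words with the question and return the best k to stuff into the prompt.
    """
    q = _words(question)
    return sorted(chunks, key=lambda c: len(q & _words(c)), reverse=True)[:k]

chunks = [
    "The refund policy allows returns within 30 days.",
    "Shipping takes 5 business days on average.",
    "Our office is closed on public holidays.",
]
best = top_chunks("What is the refund policy for returns?", chunks, k=1)
prompt = f"Answer using only this context:\n{best[0]}\n\nQuestion: What is the refund policy?"
```

The `prompt` string is what you hand to the fine-tuned model's normal generate call; the weights don't need any special loading for QA.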
| 2023-09-11T03:14:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16fke96/hi_i_would_love_some_help_regarding_building_qa/
|
PuddleCuddle9
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fke96
| false | null |
t3_16fke96
|
/r/LocalLLaMA/comments/16fke96/hi_i_would_love_some_help_regarding_building_qa/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'OfpaNFu1iZoJvxuY32RzUhNW2udTRTIecyp5Wv1wqPU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=108&crop=smart&auto=webp&s=16b290e08922b290c7c78b2def122d3129ded0cd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=216&crop=smart&auto=webp&s=f0fc17fd7c48a09879dcf81dcb1614ce2ebb5fe5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=320&crop=smart&auto=webp&s=343af5c7e02e87e69ab8af5872da77c159a03715', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=640&crop=smart&auto=webp&s=dcef01c57c7907d19b9e83fdf3be826640c0589e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=960&crop=smart&auto=webp&s=b4e77b8d6c824620dffb4212491d4b2745948b0f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=1080&crop=smart&auto=webp&s=a74cbd44389f3d7bda0de6b72d1b150c89fda645', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?auto=webp&s=3a324f8fe9a4a0bc284f3f14297f48aeea04a86d', 'width': 1200}, 'variants': {}}]}
|
13B model on 16GB RAM and 8GB VRAM?
| 1 |
Is that possible? I have a 3050 with 8 GB VRAM and 16 GB RAM. I've been searching everywhere trying to find a solution but couldn't really find anything, and my own attempts feel wrong, since it took 3+ minutes to generate a reply at under 1 token per second...

For more info, I attempted to do it through oobabooga using llama.cpp and GGUF models. I played around with the settings and was able to load the model within a few seconds, but the generation part is where I'm stuck. Is it supposed to take this long to generate a response, or am I doing something wrong?

Similar things have been happening when loading 7B GGUF models with llama.cpp: generation also takes a while, again at under 1 token per second. I feel like I'm doing something wrong when loading these models, and it's starting to confuse me.
I'm sorry if this is all over the place and confusing; I'm not the best at trying to explain myself.
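Under 1 token/s on a 7B or 13B GGUF with a GPU present usually means the layers never actually landed on the GPU (a CPU-only llama.cpp build, or the GPU-layers setting left at 0), so it helps to time runs while changing one setting at a time. A tiny probe, with a dummy backend standing in for the real generate call:

```python
import time

def tokens_per_second(generate_fn, prompt, n_tokens=64):
    """Crude throughput probe: time one generation call, return tokens/s.

    generate_fn is a stand-in for whatever backend is in use (llama.cpp,
    transformers, ...) and should return how many tokens it produced.
    """
    start = time.perf_counter()
    produced = generate_fn(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return produced / elapsed

# dummy backend so the probe runs without a model; swap in a real call
rate = tokens_per_second(lambda p, n: n, "Hello", n_tokens=64)
```

Comparing the measured rate with 0, 20, and 30 GPU layers makes it obvious whether offloading is actually happening.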
| 2023-09-11T03:53:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16fl5i0/13b_model_on_16gb_ram_and_8gb_vram/
|
Sensitive_Incident27
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16fl5i0
| false | null |
t3_16fl5i0
|
/r/LocalLLaMA/comments/16fl5i0/13b_model_on_16gb_ram_and_8gb_vram/
| false | false |
self
| 1 | null |