Dataset schema (one row per Reddit post): title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Wizard Coder - CUDA out of memory. Tried to allocate
| 1 |
[removed]
| 2023-09-15T04:07:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16j37e2/wizard_coder_cuda_out_of_memory_tried_to_allocate/
|
NormalResume
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j37e2
| false | null |
t3_16j37e2
|
/r/LocalLLaMA/comments/16j37e2/wizard_coder_cuda_out_of_memory_tried_to_allocate/
| false | false |
self
| 1 |
|
Guys, why are we sleeping on MLC LLM - Running on Vulkan?
| 55 |
I just tested it on my 4090 and it's incredibly fast, and it actually has decent instructions for getting the packages and running it. It's surprisingly fast compared to what I was seeing via CUDA, and it seems to be fully utilizing my GPU. I'm going to try it on my ROG Ally next.
https://preview.redd.it/h4fv2id7qcob1.png?width=1060&format=png&auto=webp&s=ab00aa5d3071c220818f8c389d6f9715e09f23c1
[MLC LLM llama-2 7b](https://preview.redd.it/e7sym2utpcob1.png?width=1441&format=png&auto=webp&s=6e00fc757f71923f9113d46e61072eac1e8e8744)
| 2023-09-15T05:03:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16j486g/guys_why_are_we_sleeping_on_mlc_llm_running_on/
|
APUsilicon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j486g
| false | null |
t3_16j486g
|
/r/LocalLLaMA/comments/16j486g/guys_why_are_we_sleeping_on_mlc_llm_running_on/
| false | false | 55 | null |
|
Methods for Enabling LLMs to Work with Languages Other Than English: How Does ChatGPT Do It?
| 1 |
Hello. Are there any methods that would allow an LLM to work with languages other than English? For example, how does ChatGPT achieve this? Could you please discuss the methods you are aware of?
| 2023-09-15T06:42:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16j5w7f/methods_for_enabling_llms_to_work_with_languages/
|
PickkNickk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j5w7f
| false | null |
t3_16j5w7f
|
/r/LocalLLaMA/comments/16j5w7f/methods_for_enabling_llms_to_work_with_languages/
| false | false |
self
| 1 | null |
Some questions about implementing an LLM to generate Q/A pairs based on local documents
| 1 |
Recently, I have been playing around with how to implement chat-based Q/A using an LLM model grounded in a local knowledge base.
I have experimented with the following two open-source frameworks.
[Llama\_index](https://github.com/jerryjliu/llama_index)
[Langchain-chatchat](https://github.com/chatchat-space/Langchain-Chatchat)
I believe these two frameworks are built upon what everyone refers to as the RAG (Retrieval-Augmented Generation) approach: without altering the embeddings or the LLM, it allows for generating responses based on one's own knowledge base.
Thanks to the authors' excellent work, I have been able to meet my requirements to some extent. However, the output still shows some deviations and even outright mistakes.
[the work flow of chatchat](https://preview.redd.it/2mud4ayp5dob1.png?width=834&format=png&auto=webp&s=cc598844a4d4462a8fc80383a1ce0e946828e157)
Is there a way to make the output results more accurate?
For example, the knowledge base contains a user manual showing that a hair dryer works at a rated voltage of 110V, but with a relatively small model I may get a wrong answer.
If I ask, "Can I use the xxx hair dryer directly in a country with a rated voltage of 220V?"
Llama2-7B may answer "yes, you can,"
while Llama2-13B may answer "no, unless you use a power adapter".
And GPT is capable of providing even better answers.
I believe that if I want to achieve better output results, I may need to fine-tune the LLM or embeddings.
But I've noticed that many people use Q/A pairs for fine-tuning, and I'm not sure why, or whether these operations fine-tune the embeddings or the LLM. As I understand it, we don't have sufficient resources to fine-tune the LLM, and fine-tuning the embeddings only helps them map human language to more relevant vectors. Does this mean that when fine-tuning embeddings, there's actually no need for question-answer pairs?
If I must fine-tune, should I separately fine-tune two embeddings: one fine-tuned based on question-answer pairs for extracting vectors from documents and another fine-tuned based on question similarity for extracting vectors from questions?
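For reference, the retrieval step that both frameworks build on reduces to a similarity ranking over embedded chunks; a poor answer often means the wrong chunk was retrieved, which is exactly what fine-tuned embeddings are meant to fix. The sketch below (my own toy illustration with made-up two-dimensional vectors, not code from either project) shows that core loop:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, kb, top_k=1):
    """kb is a list of (embedding, passage) pairs; return the top_k passages
    most similar to the query embedding. The LLM then answers using only
    these retrieved passages as context."""
    ranked = sorted(kb, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [passage for _, passage in ranked[:top_k]]

kb = [
    ([0.9, 0.1], "The hair dryer operates at a rated voltage of 110V."),
    ([0.1, 0.9], "Clean the lint filter after every use."),
]
top = retrieve([0.8, 0.2], kb)  # -> the 110V passage
```

If the embedding of "Can I use it at 220V?" lands closer to the wrong passage, no amount of LLM quality fixes the answer, which is why fine-tuning the embedding model on in-domain pairs can help even without touching the LLM.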
| 2023-09-15T06:52:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16j624z/some_questions_of_implementing_llm_to_generate_qa/
|
william_luckybob
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j624z
| false | null |
t3_16j624z
|
/r/LocalLLaMA/comments/16j624z/some_questions_of_implementing_llm_to_generate_qa/
| false | false | 1 |
|
|
Tuning an LLM on my own notes over time
| 1 |
I’ve been hearing a lot of talk about noteworthy people creating AI clones of themselves using the mass of data they’ve generated over the years.
I think this is pretty cool, but difficult to do for an average person. So my solution would be to record voice notes throughout the day, transcribe them, and add them to the data set.
At the end of the day/week I'd input the same set of prompts and see what insights are generated. I'm committed to doing this as a long-term project: years to decades of notes.
What would be the simplest way to accomplish this?
I’m a total noob at this. I am a software engineer but I’ve spent most of my time in the AR/VR space. Please forgive my naivety.
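The capture side of a pipeline like this can be as simple as an append-only JSONL dataset that later feeds fine-tuning or retrieval. A minimal sketch, assuming transcription (e.g. via a speech-to-text tool) happens upstream; the file name and record fields here are my own invention:

```python
import datetime
import json
import pathlib
import tempfile

def append_note(dataset_path, transcript):
    """Append one transcribed voice note as a JSONL record.

    Keeping one JSON object per line means the dataset can grow for years
    and still be streamed line-by-line into a training or indexing job.
    """
    record = {"date": datetime.date.today().isoformat(), "text": transcript}
    with open(dataset_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Demo against a throwaway directory so the sketch is self-contained.
path = pathlib.Path(tempfile.mkdtemp()) / "notes.jsonl"
append_note(path, "Shipped the AR prototype today.")
append_note(path, "Idea: tune a small model on these notes weekly.")
```

From there, the weekly "insight" step could be plain RAG over this file rather than repeated fine-tuning, which is much cheaper while the dataset is still small.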
| 2023-09-15T07:33:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16j6q7r/tuning_an_llm_on_my_own_notes_over_time/
|
michaelthatsit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j6q7r
| false | null |
t3_16j6q7r
|
/r/LocalLLaMA/comments/16j6q7r/tuning_an_llm_on_my_own_notes_over_time/
| false | false |
self
| 1 | null |
How much does it cost to fine-tune on a code base?
| 1 |
Would the appropriate metric be cost per GB of source files?
Would love some reference points for fine-tuning costs and performance (ideally examples of performance vs. un-tuned GPT-4) of Code Llama, StarCoder, etc. on your own code base with docs.
Thanks
| 2023-09-15T09:34:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16j8n5q/how_much_does_it_cost_to_finetune_on_a_code_base/
|
Infinite100p
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j8n5q
| false | null |
t3_16j8n5q
|
/r/LocalLLaMA/comments/16j8n5q/how_much_does_it_cost_to_finetune_on_a_code_base/
| false | false |
self
| 1 | null |
I don't understand context window extension
| 1 |
If a transformer can only attend to, say, 2048 tokens, then how can that same transformer attend to more than 2048? Isn't that hard-coded in the architecture?
I can understand that you might summarise previous chunks of 2048 tokens and pass that 'hidden state' forward, or emulate a larger context window with a sliding window, but ALiBi, RoPE, and PI don't appear to do anything except change the way positional encodings are calculated.
What am I missing?
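For what it's worth, changing the positional encoding really is the whole trick: attention itself has no fixed size, only the positions fed into it do. A toy sketch (my own illustration, not from any particular codebase) of how Position Interpolation rescales RoPE positions so unseen positions land back in the trained range:

```python
def rope_angles(pos, dim=8, base=10000.0, scale=1.0):
    """Rotary-embedding rotation angles for one token position.

    The attention matrix grows with sequence length automatically; what the
    model has never seen is an *angle* for position > trained length.
    Position Interpolation (PI) rescales the position index so longer
    sequences reuse the angle range seen during training.
    """
    pos = pos * scale  # PI: scale < 1 squeezes positions into the trained range
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# A model trained on 2048 positions, extended to 4096 with scale 0.5:
# position 4000 is encoded exactly as position 2000 was during training.
extended = rope_angles(4000, scale=0.5)
original = rope_angles(2000)
```

So nothing in the weights hard-codes 2048; the model just produces garbage for angle combinations it never trained on, and PI (plus a little fine-tuning) avoids showing it any.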
| 2023-09-15T09:39:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16j8qa5/i_dont_understand_context_window_extension/
|
moma1970
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j8qa5
| false | null |
t3_16j8qa5
|
/r/LocalLLaMA/comments/16j8qa5/i_dont_understand_context_window_extension/
| false | false |
self
| 1 | null |
Some interesting new tests just dropped
| 1 | 2023-09-15T09:57:19 |
https://evolutionnews.org/2023/09/chatgpt-is-becoming-increasingly-impressive/
|
ambient_temp_xeno
|
evolutionnews.org
| 1970-01-01T00:00:00 | 0 |
{}
|
16j90yr
| false | null |
t3_16j90yr
|
/r/LocalLLaMA/comments/16j90yr/some_interesting_new_tests_just_dropped/
| false | false | 1 |
|
||
Any Uncensored Llama2 model for commercial use?
| 1 |
Are there any Llama2-based models out there that haven't been trained on data produced by the OpenAI API?
I have been wondering if there might be a need for company chatbots where a censored model is unusable.
For example, an eCommerce site that would like a chatbot for guidance and suggestions would have a hard time using the ChatGPT API if it throws a hissy fit every time a word like dildo or vibrator is mentioned.
I really like Airoboros, but with the wording of OpenAIs ToS, it seems risky to build a solution on that model.
Although I was wondering: could a chatbot even be considered competing with their product if they don't offer an uncensored alternative that could be used instead?
| 2023-09-15T10:29:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16j9l39/any_uncensored_llama2_model_for_commercial_use/
|
nixudos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j9l39
| false | null |
t3_16j9l39
|
/r/LocalLLaMA/comments/16j9l39/any_uncensored_llama2_model_for_commercial_use/
| false | false |
self
| 1 | null |
Easy method for fine-tuning any model, from Llama to GPT to others
| 1 |
Can someone please provide me with a script I can run on Google Colab? I want to fine-tune a 100M to 500M model on the free Colab plan with a dataset between 30k and 100k in size. Any help please; even if there is no script, can you point me to software or anything else that can help?
| 2023-09-15T10:45:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16j9up6/easy_method_for_finetuning_any_model_from_llama/
|
Puzzleheaded_Acadia1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j9up6
| false | null |
t3_16j9up6
|
/r/LocalLLaMA/comments/16j9up6/easy_method_for_finetuning_any_model_from_llama/
| false | false |
self
| 1 | null |
Try models fine-tuned for your language
| 1 |
For those who are not satisfied with these Open models, have you tried using fine-tuned models with datasets for your language?
I am Italian, and I have always been unsatisfied. The only 'decent' model is WizardLM 70B; the other models are all poor in comparison.
Not to mention the base models: they are practically unusable even at 70B, with incomprehensible output.
(Except for Falcon 40B; before WizardLM 70B, it was the top choice for me, even the base model.)
Yesterday, by chance, I tried 'Openbuddy' because I saw on TheBloke's page that it had reached version v11, and I was curious to read the changelog. But I found a demo on Hugging Face Spaces, and I thought, 'Well, let's give it a try.' And wow.
Not only the 70B model but also the 30B one is really good, and the credit probably goes to the fact that it was trained on conversations in various languages, including Italian. I'm even curious to try the 13B model; if it works well, even a model that small could be a game-changer.
I recommend testing models that have been fine-tuned in your language. The likely reason for the poor output is the nearly English-only dataset, and for a simple chatbot you don't need as many as 70B parameters. Even 13B would probably be more than sufficient if trained on data in your language. This likely improves not only the output but also the input, making the prompt more understandable to the AI.
Openbuddy Demo: [https://huggingface.co/spaces/OpenBuddy/ChatWithBuddy](https://huggingface.co/spaces/OpenBuddy/ChatWithBuddy)
| 2023-09-15T12:26:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16jbz0i/try_models_finetuned_for_your_language/
|
AntoItaly
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jbz0i
| false | null |
t3_16jbz0i
|
/r/LocalLLaMA/comments/16jbz0i/try_models_finetuned_for_your_language/
| false | false |
self
| 1 |
|
Need advice on Local LLM setup to augment AMD GPU shortcomings
| 1 |
Hi everyone,
**Background:**
I'm a seasoned developer and have been working continuously on GenAI things for the last year. Recently, I wanted to set up a local LLM/SD server to work on a few confidential projects that I cannot move into the cloud. I have gone through the posts recommending renting cloud GPUs and started with that approach. Since the work I am doing is quite niche and I have to do a lot of tinkering while the setup is running, the cloud costs equaled the cost of a GPU within the last 3-4 months. So, I wanted to go ahead and set up a local system for further exploration.
**Buying decision:**
I started with Nvidia GPUs, but they didn't feel like value for money (VFM) for high-end cards with more than 12GB of VRAM. Then I went through some videos and posts mentioning that AMD is catching up fast with all the ROCm work, and videos showing that SD works on AMD. So, I decided to go ahead with an AMD setup.
**Setup:**
* Processor: AMD Ryzen9 7900X
* Motherboard: MSI X670-P Pro WIFI
* GPU: MSI RX7900-XTX Gaming trio classic (24GB VRAM)
* RAM: Corsair Vengeance 32GBx2 5200MHz
I think the setup is one of the best VFM but only if it works for GenAI :(
**Exploration:**
After spending nearly 10 days with my setup, these are my observations:
* AMD has a lot to do in terms of catching up to Nvidia's software usability.
* Memory management is very weak in all the frameworks that are working on AMD stacks
* E.g., DirectML works on AMD, but you can only generate one image at a time. I can generate 4-image batches on my 2GB Nvidia GPU, lol.
* All the current frameworks release memory as soon as they finish processing, but that is not happening on the AMD stack. This gives rise to OOM errors.
* We cannot use AMD hardware directly out of the box like Nvidia. Some things don't work on Windows, some things don't work on WSL, I even set Ubuntu dual boot and some of the issues are still not answerable.
* Don't get me started with compatibility issues like Ubuntu Kernel, ROCm version, Windows version, 7900XTX support for that respective ROCm, etc, etc.
Don't get me wrong, I'm an enthusiast and DIY my entire career. But, at this point, I cannot simply sit and wait for all the things to fall into place, and at the same time, I cannot invest much further without much outcome. So, I'm currently looking to make the best use of this setup with as little investment as possible until the AMD woo's go away.
The only reasonable option I came across is to add a used RTX3090 24GB GPU to my current setup and continue working on it. I was able to find them on OLX and in Gameloot as cheap as 60,000/-. So, here are my concerns:
* Is there any way I can make my current setup work without adding further investment?
* Since I know that used 3090s often come from mining rigs, is it a safe bet to spend that much on one?
* I could stretch a bit and go for a 4090, but for that I'd have to get rid of my 7900XTX, which would make a dent in my pocket, and I always feel the 4090 is overpriced and not good VFM.
* What other options do I have, if any?
Thanks a lot in advance guys and your suggestions, help, time, and bashing will be much appreciated ;)
| 2023-09-15T12:31:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16jc2p0/need_advice_on_local_llm_setup_to_augment_amd_gpu/
|
kkb294
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jc2p0
| false | null |
t3_16jc2p0
|
/r/LocalLLaMA/comments/16jc2p0/need_advice_on_local_llm_setup_to_augment_amd_gpu/
| false | false |
self
| 1 | null |
Uncovering mesa-optimization algorithms in Transformers
| 1 |
https://arxiv.org/abs/2309.05858
I think this is potentially huge, though my puny brain cannot handle the math in the paper.
The bit most relevant here:
*"Finally, we propose a novel self-attention layer, the mesa-layer, that explicitly and efficiently solves optimization problems specified in context. We find that this layer can lead to improved performance in synthetic and preliminary language modeling experiments, adding weight to our hypothesis that mesa-optimization is an important operation hidden within the weights of trained Transformers."*
I think this is the "next thing after transformers", basically.
| 2023-09-15T12:59:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16jcqn4/uncovering_mesaoptimization_algorithms_in/
|
BalorNG
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jcqn4
| false | null |
t3_16jcqn4
|
/r/LocalLLaMA/comments/16jcqn4/uncovering_mesaoptimization_algorithms_in/
| false | false |
self
| 1 |
|
Ryzen Direct Memory Access in OpenCL mode
| 1 |
I am looking at making the upgrade from my trusty old AM4 board and I am looking at orienting my upgrades for good price:performance running LLMs.
I remember in the past AMD used to market some of their chips with integrated graphics as APUs. It was my understanding that this went a step beyond traditional integrated graphics and even approached some of the things the Mac M1/M2 would later perfect with regard to blurring the line between system memory and VRAM. (The problem in AMD's case was that the APU line was woefully underpowered for CPU heavy tasks.)
My question is: has anyone noticed a significant speedup in their CPU layers using a Ryzen 7600 (or similar) with an OpenCL build? Would I have access to an OpenCL pseudo-device using the Ryzen's integrated GPU that I could offload some layers to alongside my 3060?
| 2023-09-15T13:10:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16jd15h/ryzen_direct_memory_access_in_opencl_mode/
|
Apprehensive_Sock_71
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jd15h
| false | null |
t3_16jd15h
|
/r/LocalLLaMA/comments/16jd15h/ryzen_direct_memory_access_in_opencl_mode/
| false | false |
self
| 1 | null |
Behind the Curtain: How do we look inside a Llama model file to browse the data?
| 1 |
**How do we browse the knowledge database of a model?** I'm sure we don't just pop the 5GB model into excel and start scrolling, or do we?
I have been testing several of the Llama 7B models on the text-generation-webui and it keeps stunning me about things it knows. It knew the exact statute for a law in my state. Like, freakin nuts. I want to look under the hood!
| 2023-09-15T13:14:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16jd4o9/behind_the_curtain_how_do_we_look_inside_a_llama/
|
Actual-Bad5029
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jd4o9
| false | null |
t3_16jd4o9
|
/r/LocalLLaMA/comments/16jd4o9/behind_the_curtain_how_do_we_look_inside_a_llama/
| false | false |
self
| 1 | null |
Using a LoRA trained on an HF model and running it on a GPTQ model?
| 1 |
So currently most of the fastest models are GPTQ models.
In Oobabooga you can't train a QLoRA model, and you can't train a GPTQ model.
But you can train an HF model, though to train it you need to load it in 8-bit.
Yet an HF model is very slow at inference compared to a GPTQ model.
Is there no way to train a LoRA on an HF model and use it on a GPTQ model for faster inference?
| 2023-09-15T13:20:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16jd9yp/using_a_lora_trained_on_a_hf_model_and_running_it/
|
mohaziz999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jd9yp
| false | null |
t3_16jd9yp
|
/r/LocalLLaMA/comments/16jd9yp/using_a_lora_trained_on_a_hf_model_and_running_it/
| false | false |
self
| 1 | null |
How much preprocessing are you doing for RAG QA chatbots w/ documents?
| 1 |
I know there is a ton of interest in document QA systems, which makes sense since they have good business value for most organizations.
I'm wondering, for those of you who found the answers from your QA systems to be good: did you just drop the PDF / Word / etc. files into the program and let langchain's RecursiveCharacterTextSplitter do the work, or did you do some preprocessing before you chunked everything up and loaded it into the vector DB?
I am trying to do QA on a PDF of a textbook. I wrote some scripts to "chunk" the textbook so each chunk also contains its associated title and subheading.
Let's say we are in Chapter: Carbon-Carbon Bonds. Below is an example passage:
Grignard Reaction:
The grignard reaction is very lit. Only the most based can perform it. Blah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
I would then create chunks from this passage like this:
Carbon-Carbon Bonds
Grignard Reaction
The grignard reaction is very lit. Only the most based can perform it. Blah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
\-------------------------------------------------------
Carbon-Carbon Bonds
Grignard Reaction
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Then I embed the chunks. The idea is that including the title and header will give on-topic chunks a higher similarity score.
Has anyone found it necessary to perform this type of chunking? Anyone getting great results with easier methods?
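The header-prepending scheme described above is simple enough to sketch in a few lines. This is my own illustrative version, not langchain code, and the character budget is arbitrary:

```python
def chunk_with_headers(chapter, heading, text, max_chars=160):
    """Greedily split `text` into word-boundary chunks, prepending the
    chapter title and heading to each chunk so that every chunk's
    embedding carries that context."""
    header = f"{chapter}\n{heading}\n"
    budget = max(1, max_chars - len(header))
    chunks, cur = [], ""
    for word in text.split():
        candidate = word if not cur else cur + " " + word
        if cur and len(candidate) > budget:
            chunks.append(header + cur)  # close this chunk, start a new one
            cur = word
        else:
            cur = candidate
    if cur:
        chunks.append(header + cur)
    return chunks

chunks = chunk_with_headers(
    "Carbon-Carbon Bonds", "Grignard Reaction",
    "The grignard reaction is very lit. " + "Blah " * 60)
```

At query time the same header text sits inside every chunk, so a question mentioning the chapter or reaction name scores higher against the right chunks than it would against bare body text.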
| 2023-09-15T13:25:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16jde4z/how_much_preprocessing_are_you_doing_for_rag_qa/
|
4hometnumberonefan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jde4z
| false | null |
t3_16jde4z
|
/r/LocalLLaMA/comments/16jde4z/how_much_preprocessing_are_you_doing_for_rag_qa/
| false | false |
self
| 1 | null |
How does one discover the correct rope/freq when converting a model into gguf?
| 1 |
[removed]
| 2023-09-15T13:51:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16jdzqi/how_does_one_discover_the_correct_ropefreq_when/
|
wh33t
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jdzqi
| false | null |
t3_16jdzqi
|
/r/LocalLLaMA/comments/16jdzqi/how_does_one_discover_the_correct_ropefreq_when/
| false | false |
self
| 1 | null |
This week in AI - all the Major AI development in a nutshell
| 1 |
[removed]
| 2023-09-15T14:42:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16jf9oy/this_week_in_ai_all_the_major_ai_development_in_a/
|
wyem
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jf9oy
| false | null |
t3_16jf9oy
|
/r/LocalLLaMA/comments/16jf9oy/this_week_in_ai_all_the_major_ai_development_in_a/
| false | false |
self
| 1 |
|
Fine-Tuning Llama 70B on Consumer Hardware: A Step-by-Step Guide
| 1 |
[removed]
| 2023-09-15T16:02:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16jhb5n/finetuning_llama_70b_on_consumer_hardware_a/
|
l33thaxman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jhb5n
| false | null |
t3_16jhb5n
|
/r/LocalLLaMA/comments/16jhb5n/finetuning_llama_70b_on_consumer_hardware_a/
| false | false |
self
| 1 |
|
Does anyone know how to finetune Phi-1.5?
| 1 |
I was trying to do a little fine-tuning of the new Phi-1.5 using a Colab notebook originally written for training a Llama 2 model, and it fails in the "trainer = SFTTrainer" section.
I get this error: AttributeError: 'MixFormerSequentialForCausalLM' object has no attribute '\_set\_gradient\_checkpointing'.
Does anyone know what I need to modify to make my fine-tuning work correctly?
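For reference, a minimal workaround sketch, under assumptions: at the time of writing, Phi-1.5's MixFormerSequentialForCausalLM does not implement `_set_gradient_checkpointing`, so recipes written for Llama 2 crash when the trainer tries to toggle it. The simplest fix is to train with checkpointing disabled; the trainer-building helper below mirrors the common SFT recipe but is hypothetical, not the poster's exact notebook.

```python
# Hypothetical sketch: disable gradient checkpointing, since the Phi-1.5
# model class (MixFormerSequentialForCausalLM) lacks the hook that
# Llama 2 fine-tuning recipes rely on.
training_kwargs = {
    "output_dir": "phi15-sft",
    "per_device_train_batch_size": 4,
    "gradient_checkpointing": False,  # the key change vs. the Llama 2 recipe
}

def build_trainer(model, tokenizer, dataset):
    # Assumes `transformers` and `trl` are installed; imports kept local so
    # this sketch stays loadable without them.
    from transformers import TrainingArguments
    from trl import SFTTrainer
    args = TrainingArguments(**training_kwargs)
    return SFTTrainer(model=model, tokenizer=tokenizer,
                      train_dataset=dataset, args=args)
```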
| 2023-09-15T16:25:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16jhvzi/does_anyone_know_how_to_finetune_phi15/
|
danielbrdz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jhvzi
| false | null |
t3_16jhvzi
|
/r/LocalLLaMA/comments/16jhvzi/does_anyone_know_how_to_finetune_phi15/
| false | false |
self
| 1 | null |
Is there a reason for the lack of superhot ggufs?
| 1 |
I couldn't think of how to word the title, but I don't mean it as accusatory as it sounds lol. It's honest curiosity.
With llama.cpp having dropped GGML support, I realized that all the GGMLs I had of "high context" SuperHOT models were no longer viable.
This morning I figured I'd get some downloads going of the gguf versions of my favorite superhots, only to find there were exactly 0 on huggingface. That seemed odd to me, and my first thought was perhaps there's a technical reason. Of course, the answer could just be no one has gotten around to it yet, which is totally understandable.
So- is it just that they are in the to-do backlog, or is there a technical reason? Are superhots old news with the advent of llama2 and 4k context, or does gguf format perhaps not lend itself to that somehow?
| 2023-09-15T16:26:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16jhx7m/is_there_a_reason_for_the_lack_of_superhot_ggufs/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jhx7m
| false | null |
t3_16jhx7m
|
/r/LocalLLaMA/comments/16jhx7m/is_there_a_reason_for_the_lack_of_superhot_ggufs/
| false | false |
self
| 1 | null |
Utilizing two different size GPUs for fine-tuning
| 1 |
Hey fellow LLaMA enthusiasts! I've got a question about utilizing two A100 GPUs with different RAM sizes (40GB and 10GB) for fine-tuning Llama 7B. I attempted to use \`device\_map="auto"\` when loading the Hugging Face model, but I encountered an OOM (out of memory) error (it probably expects GPUs of the same size). Any suggestions on effectively utilizing both GPUs with this setup while avoiding the memory issue?
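One commonly suggested route is to cap each device explicitly with `max_memory`, so Accelerate's "auto" placement doesn't overfill the smaller card. A hedged sketch follows; the per-device limits and model name are assumptions you should adjust to your own hardware.

```python
# Hedged sketch: with device_map="auto", accelerate fills GPUs up to full
# capacity unless capped. Set each limit a few GB under what nvidia-smi
# reports, to leave headroom for activations.
max_memory = {
    0: "35GiB",      # the 40GB card
    1: "8GiB",       # the 10GB card
    "cpu": "64GiB",  # spill-over for anything that doesn't fit on GPU
}

def load_sharded(model_name="meta-llama/Llama-2-7b-hf"):
    # Assumes `transformers` + `accelerate` are installed; import kept local
    # so the sketch stays loadable without them.
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map="auto",
        max_memory=max_memory,
    )
```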
| 2023-09-15T16:47:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16jifd3/utilizing_two_different_size_gpus_for_finetuning/
|
ali0100u
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jifd3
| false | null |
t3_16jifd3
|
/r/LocalLLaMA/comments/16jifd3/utilizing_two_different_size_gpus_for_finetuning/
| false | false |
self
| 1 | null |
Introducing Fintwit Voyager (summaries on financial podcasts with OS LLMs)
| 1 |
Introducing Fintwit Voyager!
A twitter account that harnesses open-source large language models to auto-summarize investing and financial markets podcasts.
On a technical note, I developed a custom summarization chain involving speech-to-text transcription, speaker diarization, speaker labeling, and summarization. While 'text summarization' sounds trivial, many AI-mediated steps and prompt-engineering techniques were involved in keeping the process reliable and free from human intervention.
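The chain described above can be sketched as a simple pipeline skeleton. Every stage name here is an assumption (e.g. Whisper-style transcription, pyannote-style diarization); plug in whichever models you actually use.

```python
# Hypothetical skeleton of the described chain; each stage is passed in as a
# callable so the control flow is clear without committing to specific models.
def summarize_podcast(audio_path, transcribe, diarize, label, summarize):
    transcript = transcribe(audio_path)       # speech-to-text
    turns = diarize(audio_path, transcript)   # who spoke, and when
    labeled = label(turns)                    # map speaker IDs to names
    return summarize(labeled)                 # final LLM summary
```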
If you're interested in financial markets & investing talks, feel free to follow!
[https://twitter.com/fintwit\_voyager](https://twitter.com/fintwit_voyager)
| 2023-09-15T17:57:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16jk6c4/introducing_fintwit_voyager_summaries_on/
|
Responsible_Warning3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jk6c4
| false | null |
t3_16jk6c4
|
/r/LocalLLaMA/comments/16jk6c4/introducing_fintwit_voyager_summaries_on/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'vUBde9w3mU-cHW1ZjtG2To91gaw33i10GgwEVe0AJpI', 'resolutions': [], 'source': {'height': 48, 'url': 'https://external-preview.redd.it/jmmAASC2L3jPr8IMbQndoHwcEhlRcp4ORylmCSLA5cc.jpg?auto=webp&s=89ae15513211d6f0cad0aadcf9e1afd679fdd5f2', 'width': 48}, 'variants': {}}]}
|
Is falcon 180b any good for creative stuff...more specifically fiction writing?
| 1 |
Just wondering. This is something that's extremely difficult to judge from benchmarks.
| 2023-09-15T17:59:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16jk8b5/is_falcon_180b_any_good_for_creative_stuffmore/
|
spanielrassler
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jk8b5
| false | null |
t3_16jk8b5
|
/r/LocalLLaMA/comments/16jk8b5/is_falcon_180b_any_good_for_creative_stuffmore/
| false | false |
self
| 1 | null |
How much do different aspects of models & interfaces matter?
| 1 |
What truly matters when to you when you're deciding which model and chat interface to use?
Does everything need to be FOSS? Do features (PDFs, Internet RAG, image gen, etc) matter? Is it just cost (as in no monthly subscription)? Model size? Locality?
and how do those axes interact? For example, would a FOSS model paired with an entirely free, closed-source local chat UI that has lots of features be useful? Or would a proprietary model with the oobabooga UI, which has no monthly fee but doesn't run locally, be okay?
| 2023-09-15T18:18:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16jkphn/how_much_do_different_aspects_of_models/
|
carsonpoole
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jkphn
| false | null |
t3_16jkphn
|
/r/LocalLLaMA/comments/16jkphn/how_much_do_different_aspects_of_models/
| false | false |
self
| 1 | null |
Agents: An Open-source Framework for Autonomous Language Agents - AIWaves Inc 2023
| 1 |
I hope this paper is also interesting this community!
Paper: [https://arxiv.org/abs/2309.07870](https://arxiv.org/abs/2309.07870)
Github: [https://github.com/aiwaves-cn/agents](https://github.com/aiwaves-cn/agents)
Abstract:
>Recent advances on large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and **interact with environments, humans, and other agents** using natural language interfaces. **We consider language agents as a promising direction towards artificial general intelligence** and release Agents, an **open-source library** with the goal of opening up these advances to a wider non-specialist audience. Agents is carefully engineered to support important **features including planning, memory, tool usage, multi-agent communication, and fine-grained symbolic control.** Agents is **user-friendly** as it **enables non-specialists** to build, customize, test, tune, and deploy state-of-the-art **autonomous language agents without much coding**. The **library** is also **research-friendly as its modularized design** makes it **easily extensible for researchers.**
https://preview.redd.it/ne8fsj05rgob1.jpg?width=1131&format=pjpg&auto=webp&s=076a3551bddb817351d9865809923a6bdf840cb1
https://preview.redd.it/u4x4hm05rgob1.jpg?width=1656&format=pjpg&auto=webp&s=2ca813790719b1f6f285e67ca92834e02d12c40c
| 2023-09-15T18:36:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16jl53m/agents_an_opensource_framework_for_autonomous/
|
Singularian2501
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jl53m
| false | null |
t3_16jl53m
|
/r/LocalLLaMA/comments/16jl53m/agents_an_opensource_framework_for_autonomous/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
|
Which vm instance can I start to run llama 70 B parameter? Which would be cost efficient? what ram? whether gpu or cpu? how many cpus?
| 1 |
https://preview.redd.it/tr5n474csgob1.png?width=725&format=png&auto=webp&s=b3f3d497345b15667d3e1ac73481adf1fe8bc915
https://preview.redd.it/y0enzi4gsgob1.png?width=740&format=png&auto=webp&s=5832ee21b361e171608468359f6bcadf4528b149
| 2023-09-15T18:43:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16jlb2j/which_vm_instance_can_i_start_to_run_llama_70_b/
|
yashwatwani28
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jlb2j
| false | null |
t3_16jlb2j
|
/r/LocalLLaMA/comments/16jlb2j/which_vm_instance_can_i_start_to_run_llama_70_b/
| false | false | 1 | null |
|
Lip Sync API Service?
| 1 |
I am using [SadTalker](https://github.com/OpenTalker/SadTalker) to create a lipsync of a [still image](https://satoshi.report/face_35.png) with an [audio file](https://satoshi.report/IXPRPRZXWVJZ.mp3). The still image is from Stable Diffusion and the audio is from ChatGPT and then AWS Polly for the voice synthesis. My problem is that even though I like the results it takes one and a half minutes to generate this [video](https://satoshi.report/35b.mp4). If I use the [enhancer](https://satoshi.report/35.mp4) it is more like five minutes. I am using a A10 NVIDIA GPU.
Does anyone have any suggestions on how to speed this up? Or perhaps there is a commercial service, with an API, that does this already?
| 2023-09-15T18:44:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16jlc7e/lip_sync_api_service/
|
SatoshiReport
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jlc7e
| false | null |
t3_16jlc7e
|
/r/LocalLLaMA/comments/16jlc7e/lip_sync_api_service/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'B9PY__Y0q5avO_xdhu30nJudoy_17oHTTvaUvyGll88', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?width=108&crop=smart&auto=webp&s=02d0811e64b4c2bc8120519b3fda7b7b6ed31548', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?width=216&crop=smart&auto=webp&s=73c3e370c366490b3a06e3553dd8c432cb45f587', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?width=320&crop=smart&auto=webp&s=d397464c6fd64b86623ee13f1bb1988fdc68ee14', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?auto=webp&s=a3d84214850ee15b496367e70a20f176ae75c804', 'width': 512}, 'variants': {}}]}
|
I'm going to buy M2 Mac Pro to run AI models on H100.
| 1 |
Yes, call me stupid, but I really want to do it; maybe at some point Apple will finally support NVIDIA cards in their pricey PCIe slots. lol
That should be technically possible; it's just a software issue.
| 2023-09-15T18:58:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16jlooq/im_going_to_buy_m2_mac_pro_to_run_ai_models_on/
|
Wrong_User_Logged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jlooq
| false | null |
t3_16jlooq
|
/r/LocalLLaMA/comments/16jlooq/im_going_to_buy_m2_mac_pro_to_run_ai_models_on/
| false | false |
self
| 1 | null |
DeciLM-6B
| 1 | 2023-09-15T19:44:18 |
https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/
|
Acrobatic-Site2065
|
deci.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
16jmti2
| false | null |
t3_16jmti2
|
/r/LocalLLaMA/comments/16jmti2/decilm6b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '8JasBexDQLW0G7y4n6ThMQH77AmFW5N6s2HUrariAC4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=108&crop=smart&auto=webp&s=b113841b47c7b8885f1049233e7c226d00918b12', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=216&crop=smart&auto=webp&s=518023000b4bb21c4d7300aa85ad741c71e5b19a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=320&crop=smart&auto=webp&s=a009a31f7cfa546e11a27cb2e811512059c60af9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=640&crop=smart&auto=webp&s=1c3e9b5a02863c87a4274319a5fd40bd805de6ca', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=960&crop=smart&auto=webp&s=b4eed2019a94ea90e19eb7f469a9ec5ce6ed6109', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=1080&crop=smart&auto=webp&s=a9e7a27b8798e891d82b976fbd58a8cc554542fc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?auto=webp&s=4c9e4466ddbdac9721c98ee14e298eb9cd9b1dc7', 'width': 1920}, 'variants': {}}]}
|
||
From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning
| 1 |
[https://arxiv.org/abs/2308.12032](https://arxiv.org/abs/2308.12032)
[https://github.com/MingLiiii/Cherry\_LLM](https://github.com/MingLiiii/Cherry_LLM)
| 2023-09-15T20:49:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16jog13/from_quantity_to_quality_boosting_llm_performance/
|
MingLiiii
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jog13
| false | null |
t3_16jog13
|
/r/LocalLLaMA/comments/16jog13/from_quantity_to_quality_boosting_llm_performance/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
Is it possible to get structured output from LLAMA2 70B
| 1 |
I'm running LLAMA2 70B locally on my MacBook.
I would like to get CSV and JSON output for certain prompts the way I can with GPT-4 APIs.
I'm currently using [thebloke/llama-2-70b-orca-200k.Q5\_K\_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-Orca-200k-GGUF) via the [llama.cpp](https://github.com/ggerganov/llama.cpp) Python [server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md). I use this because I can simply replace the `openai.api_base` in my scripts and it mostly works.
I'm providing system and user prompts like this:
[
{"role": "system","content":"""
You are a helpful assistant that writes outline for blog articles by supplying subtopics.
You only answer the questions, you do not address the user.
You return all output a comma separated value or CSV.
A CSV consists of line of words separated by a comma.
Here is an example of a CSV: how to make wine, crushing grapes, fermenting the wine, pressing the wine, aging wine."""},
{"role": "user", "content": "Provide me a list of subtopics for an article about dog training."}
]
I get output like this:
Dog Training Basics 1. Choosing the Right Breed 2. Housebreaking and Crate Training 3. Obedience Training Basics (Sit, Stay, Come) 4. Socialization and Puppy Classes 5. Teaching Your Dog Tricks
It's very easy to get GPT-4 to return CSV or even JSON in a certain format if you provide an example. Any ideas how to do this using llama2?
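One commonly cited option with llama.cpp is grammar-constrained decoding, which forces the sampler to emit only strings the grammar accepts. A hedged sketch follows, assuming the native server's `/completion` endpoint with a GBNF `grammar` field; whether the Python server wrapper passes this field through is version-dependent, and the grammar itself is a toy example.

```python
# Sketch under assumptions: llama.cpp's server accepts a GBNF "grammar"
# field on /completion, constraining output to the format defined below.
import json
import urllib.request

# Toy GBNF grammar for a single comma-separated line of words.
CSV_GRAMMAR = r'''
root ::= item ("," " "? item)* "\n"
item ::= [a-zA-Z ]+
'''

def build_payload(prompt, n_predict=128):
    """Pure helper: assemble the request body for the /completion endpoint."""
    return {"prompt": prompt, "grammar": CSV_GRAMMAR, "n_predict": n_predict}

def complete(prompt, url="http://localhost:8080/completion"):
    """POST the payload to a locally running llama.cpp server."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(url, data,
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```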
| 2023-09-15T21:01:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16jor5o/is_it_possible_to_get_structured_output_from/
|
spyderman4g63
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jor5o
| false | null |
t3_16jor5o
|
/r/LocalLLaMA/comments/16jor5o/is_it_possible_to_get_structured_output_from/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nVhAggG3eCTNXmhRf8FwwKFAu3bEJL7299fy4oQcYek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=108&crop=smart&auto=webp&s=6805000cda30334e9adaab628e517ac4d933f7c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=216&crop=smart&auto=webp&s=b86148a0d84f307cfb5abe3a50fc5c4915cbc54a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=320&crop=smart&auto=webp&s=6bd7bea82884d70ead1c4c7f061a43931bd4aedc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=640&crop=smart&auto=webp&s=014fc75288407a9c5893062b56322aedbb31405b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=960&crop=smart&auto=webp&s=5bc97d24c364ee37f20da822818d758ae37fb162', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=1080&crop=smart&auto=webp&s=49879dc3030747daad0561ac0a0e09c7312a8e66', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?auto=webp&s=97c818d7e5dbb16b0cd206e5a57a7928c3d1f251', 'width': 1200}, 'variants': {}}]}
|
How to consolidate .distcp file shards?
| 1 |
I was training an LLM using axolotl with FSDP enabled (Llama-2 architecture). The model was saved as 6 separate .distcp shards. How can I consolidate those shards into a single .bin file for inference? My huggingface model link is here: [https://huggingface.co/jerryjalapeno/VH\_1.7B\_1/tree/main](https://huggingface.co/jerryjalapeno/VH_1.7B_1/tree/main).
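For reference, a hedged sketch of one route: recent PyTorch versions ship a converter from distributed-checkpoint (DCP) shards to a single torch.save file. The module path (`format_utils`) and function name have shifted between releases, so verify against your installed version before relying on it.

```python
# Hedged sketch: consolidate .distcp shards into one .bin file using
# PyTorch's DCP converter (availability/path varies by PyTorch version).
def consolidate_dcp(shard_dir, out_file="pytorch_model.bin"):
    # Local import so the sketch loads even without a recent torch build.
    from torch.distributed.checkpoint.format_utils import dcp_to_torch_save
    dcp_to_torch_save(shard_dir, out_file)
```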
| 2023-09-15T22:32:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16jr0wc/how_to_consolidate_distcp_file_shards/
|
ZealousidealBlock330
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jr0wc
| false | null |
t3_16jr0wc
|
/r/LocalLLaMA/comments/16jr0wc/how_to_consolidate_distcp_file_shards/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3QFjRRtH8FSEV92-NOJrLf98it3bVHPkc6m-xqH3uRQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=108&crop=smart&auto=webp&s=c3781f1c707ae29d8273dc439398687b328becd0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=216&crop=smart&auto=webp&s=2f666ce20229cc7c4031b3931dcd8f20592b20bc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=320&crop=smart&auto=webp&s=24c67c78cc743a27582e22aa09957f63bf5e7c96', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=640&crop=smart&auto=webp&s=9966921a3c50d4e4519e0fc2aea7af046f416b95', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=960&crop=smart&auto=webp&s=582b0e855bbd8f3aa406445934455fb5e4c3e64f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=1080&crop=smart&auto=webp&s=42d7fc0259631e7de33a18eebefbdae0a1036151', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?auto=webp&s=c008837a6dd62e92a138d8033c8417d004498193', 'width': 1200}, 'variants': {}}]}
|
Best text to speech out there?
| 1 |
Looking for voice cloning, text-to-speech, different voices, etc. Do we have anything like that that is GUI-based?
| 2023-09-15T22:46:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16jrcek/best_text_to_speech_out_there/
|
rorowhat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jrcek
| false | null |
t3_16jrcek
|
/r/LocalLLaMA/comments/16jrcek/best_text_to_speech_out_there/
| false | false |
self
| 1 | null |
Why do some GGUFs set rope scale base to 1,000,000?
| 1 |
I've noticed this on a couple of ggufs, like for zarablend 7b. I load the gguf in oobabooga and it instantly sets the rope scale at 1,000,000, with alpha and compress at 1. The first time I thought it was a mistake, but a couple of others did it over time and I started to wonder if it was intentional.
Is it just a mistake in the particular gguf? Or is there value in having 1,000,000 rope base and 1 for alpha and compress?
| 2023-09-15T23:47:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16jsr9t/why_do_some_ggufs_set_rope_scale_base_to_1000000/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jsr9t
| false | null |
t3_16jsr9t
|
/r/LocalLLaMA/comments/16jsr9t/why_do_some_ggufs_set_rope_scale_base_to_1000000/
| false | false |
self
| 1 | null |
How do I get my LLM model to accept a large amount of input?
| 1 |
Using LM Studio
| 2023-09-16T00:01:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16jt1fk/how_do_i_get_my_llm_model_to_accept_a_large/
|
hophophop1233
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jt1fk
| false | null |
t3_16jt1fk
|
/r/LocalLLaMA/comments/16jt1fk/how_do_i_get_my_llm_model_to_accept_a_large/
| false | false |
self
| 1 | null |
Machine Learning systems notes
| 1 |
Hey folks,
I've written down some machine learning systems study notes, from the perspective of a software/systems engineer: [https://9600.dev/posts/machine-learning-developer-notes/](https://9600.dev/posts/machine-learning-developer-notes/)
I hope the community might find them useful.
It contains a high level walk-through of GPU hardware, GPU programming, super clusters, networking, CUDA, training, inference, size and scope of these large models, a bit of ML math, and a tour of libraries like DeepSpeed.
I'll keep hacking away on the TODOs in the next week or so.
\[and a big thank-you to this community while I'm at it -- watching a thousand LLMs bloom, and dozens of C++ inference frameworks blossom, has reinvigorated my love of computing\]
| 2023-09-16T00:04:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16jt4eo/machine_learning_systems_notes/
|
9600kps
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jt4eo
| false | null |
t3_16jt4eo
|
/r/LocalLLaMA/comments/16jt4eo/machine_learning_systems_notes/
| false | false |
self
| 1 | null |
What hardware do I need for 70B Llama?
| 1 |
1\_ How many GPUs with how much VRAM, what kind of CPU, and how much RAM? Are multiple SSDs in a striped RAID helpful for loading the models into (V)RAM faster?
I read that 70B models require more than 70GB of VRAM.
2\_ How much VRAM do you need for full 70B, how much for quantized?
3\_ How noticeable is performance difference between full and quantized?
4\_ How big of a context window can we have, and how much VRAM do different context sizes require?
Is there generally a website where there are reqs and benchmarks for different LLMs on different hardware? That would be nice.
Thank you
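For rough sizing, here is a back-of-the-envelope sketch covering weights only; the effective bits-per-weight for a given quant format is an approximation, and KV cache plus activations come on top (the KV cache grows with context length).

```python
# Weight-only VRAM estimate: 1B parameters at 8 bits is roughly 1 GB.
def weight_gb(params_billion, bits_per_weight):
    return params_billion * bits_per_weight / 8

fp16 = weight_gb(70, 16)      # ~140 GB -> multi-GPU territory
q4_k_m = weight_gb(70, 4.5)   # ~39 GB  -> fits across two 24GB cards
```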
| 2023-09-16T01:25:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16jus38/what_hardware_do_i_need_for_70b_llama/
|
Infinite100p
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jus38
| false | null |
t3_16jus38
|
/r/LocalLLaMA/comments/16jus38/what_hardware_do_i_need_for_70b_llama/
| false | false |
default
| 1 | null |
Pinned post for .cpp implementation and
| 1 |
I am interested in learning how to port models like Llama 1/2 to their .cpp versions. I see there are pinned posts for serving with .cpp and for quantization, but no resources on porting a model myself if I changed the architecture and trained a new one.
Would love some resources. I have been working with Python/Torch for 2-3 years and have basic C/C++ knowledge; I'd love to know the prerequisites and how-tos. It would be great if we could also pin them.
| 2023-09-16T01:31:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16juw6u/pinned_post_for_cpp_implementation_and/
|
BomsDrag
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16juw6u
| false | null |
t3_16juw6u
|
/r/LocalLLaMA/comments/16juw6u/pinned_post_for_cpp_implementation_and/
| false | false |
self
| 1 | null |
Running Lama 70B on GameBoy
| 1 |
Hello, everyone! I've recently become interested in experimenting with LLMs and their inference capabilities. I've come across information suggesting they can be run on a variety of devices. Does anyone have experience or advice on how to get it set up on a GameBoy? Any guidance would be greatly appreciated!
| 2023-09-16T02:06:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16jvl7s/running_lama_70b_on_gameboy/
|
Wrong_User_Logged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jvl7s
| false | null |
t3_16jvl7s
|
/r/LocalLLaMA/comments/16jvl7s/running_lama_70b_on_gameboy/
| false | false |
default
| 1 | null |
AMD for AI
| 1 |
So the 7900 XTX is at a pretty unbeatable price with 24 GB of VRAM, and since I am also a gamer and game developer, a single good GPU is better for me than two 3090s, for example. But I was wondering how it is for AI applications like oobabooga, SadTalker, or Bark. These use CUDA, and CUDA is NVIDIA technology, so will I be forced to go through a translation layer and settle for lower performance?
| 2023-09-16T03:54:19 |
https://www.reddit.com/r/LocalLLaMA/comments/16jxl44/amd_for_ai/
|
SimRacer101
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jxl44
| false | null |
t3_16jxl44
|
/r/LocalLLaMA/comments/16jxl44/amd_for_ai/
| false | false |
self
| 1 | null |
How much does the calibration dataset affect the results when quantizing the model? (exllamav2)
| 1 |
[removed]
| 2023-09-16T05:05:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16jyu8w/how_much_does_the_calibration_dataset_affect_the/
|
Eigeen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jyu8w
| false | null |
t3_16jyu8w
|
/r/LocalLLaMA/comments/16jyu8w/how_much_does_the_calibration_dataset_affect_the/
| false | false |
self
| 1 | null |
I wish for these four models
| 1 |
Model 1. General writer and knowledge; good at professional, personal, and replies to internet comments. Knows how to troubleshoot electronics, computers and networks, cars, and women.
Model 2. Psychology / therapist and personal growth guide (guru) and friendly. Hard drive capacity limited; infinite storage of my personal life. It knows everything about me. (Easy to back-up and reinstall)
Model 3. Alternative medicine (herbal, essential oil etc.) and mainstream medical knowledge to be able to find the correct and cheapest path to good health. I prefer herbal.
Model 4. Fantasy story writer, song/poem lyric writer, the best comedy writer that ever existed. Mamma jokes so funny, you blow milk out your nose. Rodney Dangerfield rib-shots better than the man himself.
Maybe:
\+Master Video games guide (All games). Master strategist.
Number 1 exists already. Number 2 might exist, but I would need a YouTube video to show me how. Number 3 has mainstream medical, but no herbal. Number 4 seems hard because of comedy. I can't find a model yet that is great at comedy.
In the beginning, when LLaMA 1 was first released, I became preoccupied with finding the best settings and testing its knowledge and IQ, and every following model's since. I forgot about my needs as a user. I don't need the smartest, just certain specialties.
| 2023-09-16T05:35:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16jzbwu/i_wish_for_these_four_models/
|
MinimumPC
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16jzbwu
| false | null |
t3_16jzbwu
|
/r/LocalLLaMA/comments/16jzbwu/i_wish_for_these_four_models/
| false | false |
self
| 1 | null |
Finetune in bf16 or fp16?
| 1 |
[removed]
| 2023-09-16T06:56:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16k0ntu/finetune_in_bf16_or_fp16/
|
gptzerozero
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k0ntu
| false | null |
t3_16k0ntu
|
/r/LocalLLaMA/comments/16k0ntu/finetune_in_bf16_or_fp16/
| false | false |
self
| 1 | null |
Running Llama2 on Android
| 1 |
Is there any way to run a Llama 2 model (or any other model) on Android devices? Hopefully an open-source way.
BTW, I just saw an interesting post about running LLMs on Vulkan; maybe that would be interesting too.
| 2023-09-16T07:24:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16k14a2/running_llama2_on_android/
|
Deep-View-2411
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k14a2
| false | null |
t3_16k14a2
|
/r/LocalLLaMA/comments/16k14a2/running_llama2_on_android/
| false | false |
self
| 1 | null |
Horde-Client v1.0.2 is out today!
| 1 |
Few days back, I shared my project [horde-client](https://pypi.org/project/horde-client/). For those who missed the post, this is a Python Client library for KoboldAI project that lets you remotely interact with crowdsourced/private LLM services.
I got some great feedback on the last post and have incorporated the majority of it in the new release.
So today, I am announcing v1.0.2 of the project with cool new features:
1. Horde-Client now supports [LangChain](https://horde-client.readthedocs.io/en/latest/02_langchain.html) integration. You can easily swap out LLMs from your LangChain pipeline and use Horde-Client's LLM.
2. Official Documentation is now available at [https://horde-client.readthedocs.io/](https://horde-client.readthedocs.io/)
3. [Async](https://horde-client.readthedocs.io/en/latest/03_asyncclient.html) support is now available for Horde-Client.
You can head over to [Quickstart](https://horde-client.readthedocs.io/en/latest/01_quickstart.html) to start using Horde-Client for your projects.
Feel free to share any feedbacks, this will help improve the project for the community.
| 2023-09-16T07:49:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16k1j31/hordeclient_v102_is_out_today/
|
AnonymousD3vil
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k1j31
| false | null |
t3_16k1j31
|
/r/LocalLLaMA/comments/16k1j31/hordeclient_v102_is_out_today/
| false | false |
self
| 1 | null |
KoboldCPP now has (experimental) RX 6700XT/gfx1031 support
| 1 |
[https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/v1.43.2-ROCm](https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/v1.43.2-ROCm)
Good news for us poor left-out gfx1031 owners. I'm curious if you're having any success with it. I sometimes get a good output with the very first prompt. If I try to continue it, it's pure gibberish. Oh boy is it *fast* gibberish, though!
| 2023-09-16T10:52:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16k4hju/koboldcpp_now_has_experimental_rx_6700xtgfx1031/
|
Susp-icious_-31User
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k4hju
| false | null |
t3_16k4hju
|
/r/LocalLLaMA/comments/16k4hju/koboldcpp_now_has_experimental_rx_6700xtgfx1031/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'MqmTjkqRHQOmxK3tvvapSmyB5Vr_gLpITyTS710reE0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=108&crop=smart&auto=webp&s=19b2d954ec1b353d5cfebc162396ad576449aebc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=216&crop=smart&auto=webp&s=e8968da3aec01b13122bf7e71cbc2f6508bbe64a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=320&crop=smart&auto=webp&s=b1d4e54bdb22f51b88d79bb1b47b79804964b130', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=640&crop=smart&auto=webp&s=db9109aa907b2d34e759895f85e5a25ed9e4d6df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=960&crop=smart&auto=webp&s=22770b74c0498044f8e0bb794cb03a8907462876', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=1080&crop=smart&auto=webp&s=505a5b64728e49468c17eb0cb1ee91abf3ffb7b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?auto=webp&s=3fe1bef42105c753579b8d352239371c669a7788', 'width': 1200}, 'variants': {}}]}
|
pnmeka/langchain_RAG: Using langchain module to generate RAG prompt for open AI
| 8 | 2023-09-16T12:25:58 |
https://github.com/pnmeka/langchain_RAG
|
TestPilot1980
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16k6aua
| false | null |
t3_16k6aua
|
/r/LocalLLaMA/comments/16k6aua/pnmekalangchain_rag_using_langchain_module_to/
| false | false | 8 |
{'enabled': False, 'images': [{'id': '_NCaAb0ugGxfWB91SEbGifHwYdS5GCLxMvK308_fjBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=108&crop=smart&auto=webp&s=3f3f9f87bb4b5054eac930df9b45c968f95241a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=216&crop=smart&auto=webp&s=6765645dd424fcf3b5f78f3ed1a4943bcdff7378', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=320&crop=smart&auto=webp&s=7e9e0534f108d1c8490a4cc82623d1adf3149f5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=640&crop=smart&auto=webp&s=0b10a7a19102e61d44a92bd2d7f66fa2eb823f6f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=960&crop=smart&auto=webp&s=e18ede5b60b81100cfcfa93d9fd5613193603442', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=1080&crop=smart&auto=webp&s=8923355743d6f08ea51d867e0924d56016085d32', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?auto=webp&s=0ffc7d175ffca4072091bd850324336403545780', 'width': 1200}, 'variants': {}}]}
|
||
What model do you use with a Nvidia 3090/4090 or equivalent AMD?
| 1 |
Here's my take, and I'd like yours as well !
​
I'm running on KoboldCPP 1.43 experimental (updated to Llama CPP b1216 with Johannes Gaessler MMQ fixes) on a RTX 3090, with full offload :
I use sometimes CodeLLama 34b Samantha 1.11 Q4\_K\_S with 16384 context length (rope base frequency 1,000,000, it's not optimal, you have to slide closer to 100,000 at short context, but I'm bored) to train context obedience with the huge Samantha personality penalty on long scenarios, understand the mechanics and dynamics between your characters, the "Assistant", the "AI", and their respective traits, and uninhibit Samantha into an NSFW character speaking and doing all kind of nasty stuff for as long as possible before her censorship starts to reemerge. I then correct the prompt with my findings & restart the conversations/scenarios.
I then adapt the prompts composed on Samantha to further uninhibit other models and reinforce their context obedience.
I'm currently using Spicyboros 2.2 c34b Q4\_K\_S with 16k context as my daily model as both a work assistant and a role-play model. I'm a fan of these models since day one, with the 1.4.1 as my reference (LXCTX version on Hugging Face, to be precise, but in Q3\_K\_M with 5376 ctx..)
I'm gonna test Synthia 1.2 c34b with the same quant and parameters.
CL2 34b is still quite new for modders, and I feel like it's for now dumber than L1 33b with a basic prompt. On the other hand, as the story progresses, the characters start to demonstrate more depth: I created a character to test long context, presenting him as a "motherfucker", and the guy argued with an impersonation of my own character for 15 thousand tokens about human nature without his answers needing much regeneration (except to keep him from leaving the conversation). Hard to achieve that on Llama 1 33b or even 2-13b.
I think CL2 34b will be the SOTA model for single big GPU owners as the modders put more attention into it and master the specificities of CodeLlama compared to Llama 1 & 2. After all, it was trained on the base of Llama 2, and the 500 billion tokens added might not have erased the original 2 trillion, but instead refined their precision while adding the rigor of a coding model.
The KV\_Q8 cache likely coming in the next weeks to LlamaCPP (and maybe before that to KoboldCPP, if LostRuins decides to take the dive) could bring us close to 32k tokens on such a model for one big GPU. Thrilling, isn't it?
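The rope base frequency knob mentioned above (10,000 by default, 1,000,000 for long context) maps to a simple formula. Here's a minimal sketch, assuming the standard RoPE parameterization, of why a larger base stretches the usable context:

```python
import math

def rope_inv_freqs(dim, base=10000.0):
    """Per-channel-pair inverse frequencies used by rotary position embeddings."""
    return [base ** (-2.0 * i / dim) for i in range(dim // 2)]

# Raising the base (e.g. 10,000 -> 1,000,000) slows every rotation, so
# positions far beyond the original training length still map to distinct angles.
default_freqs = rope_inv_freqs(128, base=10000.0)
scaled_freqs = rope_inv_freqs(128, base=1000000.0)

# The slowest-rotating pair bounds the longest distinguishable period.
print(2 * math.pi / default_freqs[-1])
print(2 * math.pi / scaled_freqs[-1])
```

This is also why a base tuned for 16k context is "not optimal" at short context: the slower rotations spread short-range positions over smaller angular differences.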
| 2023-09-16T12:58:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16k6yxs/what_model_do_you_use_with_a_nvidia_30904090_or/
|
Nexesenex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k6yxs
| false | null |
t3_16k6yxs
|
/r/LocalLLaMA/comments/16k6yxs/what_model_do_you_use_with_a_nvidia_30904090_or/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'mGdWOV4LkgZVyU7H5AkWDWax7uyPBUEh9K3WJ9UGC_k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=108&crop=smart&auto=webp&s=d4291c29f2245e8b5bdfda8f8b08d2845932ba00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=216&crop=smart&auto=webp&s=1adb19ba44055aeace7af98c27683b922bba312f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=320&crop=smart&auto=webp&s=f7cbbb325a18de451fabb41b7d989dd8e561d19a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=640&crop=smart&auto=webp&s=ff15b8936706928569d099476fe146050fbb3295', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=960&crop=smart&auto=webp&s=dfe9fbc0995d091ff313fdf8376afb8ba0698030', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=1080&crop=smart&auto=webp&s=d703d37495f6a7684d2a9ed30ab887b8ea3ac42d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?auto=webp&s=9ab62c9fdeab992ef7af1c94b5766a6dd9b32e05', 'width': 1200}, 'variants': {}}]}
|
Completely Local Autonomous Agent?
| 1 |
Is there an Autonomous Agent that will connect to and use a local language model that also does not require some remotely-hosted resource? So far, they all either require OpenAI credentials (the vast majority), or a cloud-hosted vector database, or some other snag. Running the models I've figured out, but I want \*everything\* on my machine. Can it be done?
| 2023-09-16T13:14:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16k7bbh/completely_local_autonomous_agent/
|
Seclusion72
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k7bbh
| false | null |
t3_16k7bbh
|
/r/LocalLLaMA/comments/16k7bbh/completely_local_autonomous_agent/
| false | false |
self
| 1 | null |
LoRA on Linear layers?
| 3 |
​
https://preview.redd.it/qq80hyvrjmob1.png?width=827&format=png&auto=webp&s=c32df00c9747c944dc96e64238bd7ae7d8f49a4b
I came across this \^ recently and wanted to know if it is possible to apply it to some of the MLP layers as well.
​
TIA
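For intuition: the screenshot targets attention projections, but an MLP projection is just another Linear layer, so the same low-rank trick applies. A toy, framework-free sketch (made-up dimensions) of y = Wx + (alpha/r) * B(Ax):

```python
def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_linear(W, A, B, x, alpha=16, r=2):
    """Frozen weight W plus a scaled low-rank update B @ A, applied to x."""
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))  # down-project with A, up-project with B
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]

# Toy 3x3 layer with a rank-2 adapter; B starts at zero, as in LoRA,
# so the adapted layer initially matches the frozen one exactly.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
A = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]    # r x d_in
B = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # d_out x r
x = [1.0, 2.0, 3.0]
print(lora_linear(W, A, B, x))  # equals matvec(W, x) while B is zero
```

If you're using the PEFT library, this choice comes down to the `target_modules` list: adding the MLP projection module names there (e.g. `gate_proj`, `up_proj`, `down_proj` for Llama-style models) applies LoRA to them too.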
| 2023-09-16T14:05:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16k8eu5/lora_on_linear_layers/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k8eu5
| false | null |
t3_16k8eu5
|
/r/LocalLLaMA/comments/16k8eu5/lora_on_linear_layers/
| false | false | 3 | null |
|
Looking for a Translation Model for English to 100+ Languages, Comparable to DeepL or Google, for Local Deployment
| 1 |
Hello everyone,
I am working on a project where I need to translate text from English into over 100 different languages. The translation quality needs to be comparable to services like DeepL or Google Translate.
Is there a model available that meets these requirements and can be run locally without the need for external APIs? Additionally, does this model support translating HTML source code and WordPress posts?
Python compatibility would be ideal as it’s my primary working environment.
Thanks in advance for any help and guidance.
Best regards,
BaGRoS
| 2023-09-16T14:06:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16k8f8d/looking_for_a_translation_model_for_english_to/
|
Vivid_Confidence3212
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k8f8d
| false | null |
t3_16k8f8d
|
/r/LocalLLaMA/comments/16k8f8d/looking_for_a_translation_model_for_english_to/
| false | false |
self
| 1 | null |
Clip of Steve Jobs predicting LLMs in 1985. Sadly over-optimistic about the timeframe.
| 1 | 2023-09-16T14:26:33 |
https://twitter.com/scienceisstrat1/status/1702936367871721797
|
ambient_temp_xeno
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16k8v6j
| false |
{'oembed': {'author_name': 'Science Is Strategic', 'author_url': 'https://twitter.com/scienceisstrat1', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Steve Jobs anticipates Large Language Models <br><br>Cc: <a href="https://twitter.com/erikbryn?ref_src=twsrc%5Etfw">@erikbryn</a> <a href="https://twitter.com/WalterIsaacson?ref_src=twsrc%5Etfw">@WalterIsaacson</a> <a href="https://twitter.com/ylecun?ref_src=twsrc%5Etfw">@ylecun</a> <a href="https://twitter.com/Scobleizer?ref_src=twsrc%5Etfw">@Scobleizer</a> <br><br> <a href="https://t.co/aT0US6iKgy">pic.twitter.com/aT0US6iKgy</a></p>— Science Is Strategic (@scienceisstrat1) <a href="https://twitter.com/scienceisstrat1/status/1702936367871721797?ref_src=twsrc%5Etfw">September 16, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/scienceisstrat1/status/1702936367871721797', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_16k8v6j
|
/r/LocalLLaMA/comments/16k8v6j/clip_of_steve_jobs_predicting_llms_in_1985_sadly/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'oneMqRzRGif0Nh9-3KBWUrF4EJ78mJz1zUCuQDV0EUc', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/_xX0QDpO0EmuMFTEo7w_R9ySZuKNmQ1yu9GjwggAvVU.jpg?width=108&crop=smart&auto=webp&s=526cae4df055e20d847b51ecbbeab464276e85a2', 'width': 108}], 'source': {'height': 102, 'url': 'https://external-preview.redd.it/_xX0QDpO0EmuMFTEo7w_R9ySZuKNmQ1yu9GjwggAvVU.jpg?auto=webp&s=861f25a0a969dfdff420c919336c4a727e7aaf29', 'width': 140}, 'variants': {}}]}
|
||
How can I fine-tune a LLAMA?
| 1 |
Hello there
I just started exploring LLMs and found a good one, \`TheBloke/chronos-hermes-13B-GGML\`, but I want to fine-tune it on specific stories (NSFW) and I don't know how I should label the stories.
Does anyone know how to do it?
| 2023-09-16T14:43:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16k98rt/how_can_i_finetune_a_llama/
|
Mohamd_L
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k98rt
| false | null |
t3_16k98rt
|
/r/LocalLLaMA/comments/16k98rt/how_can_i_finetune_a_llama/
| false | false |
self
| 1 | null |
Llama 3: Dense Evolution or Expert Revolution?
| 1 |
What are your predictions about Llama 3? Will it be another dense model (with maybe 300 billion parameters and 6 trillion tokens) or will it be a Switch Transformer (with maybe 8 or 16 experts, like GPT-4 is rumored to be)?
On that note, Meta AI has recently released a paper on Mixture-of-Experts architecture:
[Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference](https://arxiv.org/abs/2303.06182)
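For intuition about the second option: the core of a Switch/MoE layer is a tiny router that gates each token to its top-k experts, and only those experts run. A minimal sketch with toy scalar "experts" (not a real transformer):

```python
import math

def top_k_route(gate_logits, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i], reverse=True)[:k]
    exps = [math.exp(gate_logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

def moe_layer(x, experts, gate_logits, k=2):
    """Sparse mixture: only the routed experts execute; outputs are weight-averaged."""
    return sum(weight * experts[i](x) for i, weight in top_k_route(gate_logits, k))

# Eight toy "experts"; with k=2 only two of them run per token, which is how
# an MoE model keeps inference cost far below its total parameter count.
experts = [lambda x, m=m: m * x for m in range(1, 9)]
logits = [0.1, 0.3, 2.0, 0.2, 1.9, 0.0, 0.1, 0.4]
print(moe_layer(10.0, experts, logits, k=2))
```

That cost gap is exactly the inference-efficiency question the linked Meta paper is about.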
| 2023-09-16T14:53:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16k9hnd/llama_3_dense_evolution_or_expert_revolution/
|
DecipheringAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k9hnd
| false | null |
t3_16k9hnd
|
/r/LocalLLaMA/comments/16k9hnd/llama_3_dense_evolution_or_expert_revolution/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
Dataset to model processes, how does that work?
| 1 |
If I download a dataset or several datasets, how do I convert that into the model files so I can ask it questions? Do we always have to start with a pretrained base of some kind, or can we get a chat experience from any sufficiently large dataset because of the transformers library and all that? Just trying to understand!
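Short answer: for a chat experience you start from a pretrained base; a raw dataset alone doesn't become a conversational model just because the transformers library exists. The step you do yourself is formatting records into training text for fine-tuning. A hedged sketch with made-up field names (`question`/`answer` are assumptions, match them to your dataset):

```python
def format_example(record):
    """Render one Q&A record as a single Alpaca-style training string."""
    return "### Instruction:\n{q}\n\n### Response:\n{a}".format(
        q=record["question"], a=record["answer"]
    )

dataset = [
    {"question": "What is a token?", "answer": "A unit of text the model reads."},
    {"question": "What is a LoRA?", "answer": "A low-rank fine-tuning adapter."},
]
train_texts = [format_example(r) for r in dataset]
print(train_texts[0])
```

These strings then get tokenized and fed to a trainer (Hugging Face `Trainer`, axolotl, etc.) on top of a base model like Llama 2. Training from scratch is possible in principle, but it takes trillions of tokens and serious compute, which is why fine-tuning a base is the norm.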
| 2023-09-16T14:59:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16k9m2v/dataset_to_model_processes_how_does_that_work/
|
Overall-Importance54
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k9m2v
| false | null |
t3_16k9m2v
|
/r/LocalLLaMA/comments/16k9m2v/dataset_to_model_processes_how_does_that_work/
| false | false |
self
| 1 | null |
Beta test my native macOS Llama app
| 1 |
Hey y'all. I've been working on a macOS app that aims to be the easiest way to run llama.cpp on your Mac. It includes a 7B model, but you can plug in any llama.cpp-compatible GGUF. It's totally private and doesn't even connect to the internet. On my MacBook (M1 Max), the default model responds almost instantly and produces 35-40 tokens/s.
I'm posting because I'd love to find some beta testers. I'm looking for feedback on usability and also to make sure it's compatible across a wide variety of Macs. **If you're interested, comment and I'll DM TestFlight invite link.** Thanks!
https://preview.redd.it/ag9i5gvlvmob1.png?width=2560&format=png&auto=webp&s=0cd512f3309c736c5e35102b1d9cdb7064a47512
https://reddit.com/link/16k9yhg/video/io2gg4wlvmob1/player
| 2023-09-16T15:13:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16k9yhg/beta_test_my_native_macos_llama_app/
|
sleeper-2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16k9yhg
| false | null |
t3_16k9yhg
|
/r/LocalLLaMA/comments/16k9yhg/beta_test_my_native_macos_llama_app/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '55pNmRvYHyV0yKvZ_unzxKnG6Bhy5FhoaH0_JA3AV5U', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=108&crop=smart&auto=webp&s=57a2e87d3df210024fe18b4e6e7a61997badbf34', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=216&crop=smart&auto=webp&s=6f7cf850a43441f1434a5d6bab0ef16500d68188', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=320&crop=smart&auto=webp&s=02ea609292383fc4df7a8cdb70b81c6a0ed0a899', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=640&crop=smart&auto=webp&s=352f0c92782069bc5f13df91e03c58c7bae61735', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=960&crop=smart&auto=webp&s=1356b6ccb0cf8eabec55dbd3a25152303d70d47c', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?auto=webp&s=f8cce9c3134d20881d19dc6e30ea61d3c3833100', 'width': 1024}, 'variants': {}}]}
|
Chatgpt's web browsing feature neutered. Gives brief descriptions of contents of pages and nothing more.
| 1 |
I did not realize how incapable they would make closed models. Insane.
| 2023-09-16T15:16:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16ka0ho/chatgpts_web_browsing_feature_neutered_gives/
|
Basic_Description_56
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ka0ho
| false | null |
t3_16ka0ho
|
/r/LocalLLaMA/comments/16ka0ho/chatgpts_web_browsing_feature_neutered_gives/
| false | false |
self
| 1 | null |
TinyLlama training to 500B tokens is complete
| 1 | 2023-09-16T15:44:12 |
https://github.com/jzhang38/TinyLlama
|
jncraton
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16kanph
| false | null |
t3_16kanph
|
/r/LocalLLaMA/comments/16kanph/tinyllama_training_to_500b_tokens_is_complete/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '6TISjUHuXBn1Ygnnc3Bnk83a4I37KGK4s0Ykjc8Qi6U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=108&crop=smart&auto=webp&s=aa7d8c8bc85179daaae479d6590e60fd1c776607', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=216&crop=smart&auto=webp&s=aa7f7daeeb38012efd427e91544ee94a95bbdc02', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=320&crop=smart&auto=webp&s=1ac7d6f9dcec0c82144f9d221ce9796c83b4aa8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=640&crop=smart&auto=webp&s=5c7143bb2ed6cecdbb6a3540f8f506e57971fd14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=960&crop=smart&auto=webp&s=b02c2ee442b467b6dd0c7520ef9ed23d4b476d8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=1080&crop=smart&auto=webp&s=90a7e9c1bf084c6c710c1ba8cb2c5377013d9d65', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?auto=webp&s=c55fe17f8aa7a439109c78644c524ba125c52eec', 'width': 1200}, 'variants': {}}]}
|
||
Is it possible to Quantize DeciLM-6?
| 1 |
[Model card](https://huggingface.co/Deci/DeciLM-6b)
| 2023-09-16T16:38:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16kbx1i/is_it_possible_to_quantize_decilm6/
|
Pineapple_Expressed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kbx1i
| false | null |
t3_16kbx1i
|
/r/LocalLLaMA/comments/16kbx1i/is_it_possible_to_quantize_decilm6/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'PeUEHEDeiDpJiVx8uu6FTyh9hxae5iwe1tZAyeglz7g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=108&crop=smart&auto=webp&s=49cb41a341e6c1c3b161812ab717218d772e91cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=216&crop=smart&auto=webp&s=dbb8235d6fcfff853bd3e959b30a54adbde44d9b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=320&crop=smart&auto=webp&s=c4dd40764881a064c5926990f6ea41624d94a477', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=640&crop=smart&auto=webp&s=8886c341646248afb6294218caee5d1e90e1110b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=960&crop=smart&auto=webp&s=b1ea708e282c36dbafbd29145fc229ce8b40ad7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=1080&crop=smart&auto=webp&s=fe65411f2b702ff90c0a407beccd213c4f2f8186', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?auto=webp&s=1e84d939794bebdd80f1ec08f8e3decdbf2add19', 'width': 1200}, 'variants': {}}]}
|
LlamaTale v0.12.0 - OpenAI support, Zone generation, improved combat and NPC Idle actions
| 1 |
Hi. Stepping in for another update. I think there's at least one new feature many of you will find interesting. Let me tease you with "emergent behavior".
A short recap of what's happened since the last update:
* To allow more people to try this out, I added OpenAI backend support in [v0.9.2](https://github.com/neph1/LlamaTale/releases/tag/v0.9.2.1). You just need to change some things in the config to make it work with either your local OAI backend, or OpenAI actual.
* [v0.10.0](https://github.com/neph1/LlamaTale/releases/tag/v0.10.1) brought Zone generation. Previously, locations would all be generated with roughly the same prompt. A Zone spans about 10 locations across, and is procedurally generated by the LLM. The generated zone affects the location generation prompt. Hopefully resulting in greater variation as you explore the world.
* I think [v0.11.0](https://github.com/neph1/LlamaTale/releases/tag/v0.11.0) NPC idle actions and reactions is worth talking more about. Especially since a lot of people are looking for that RPG experience.
One of the first things I added to LlamaTale was LLM-generated dialogue for NPCs. Now I wanted the NPCs to express themselves more without the player's input (another step away from the prompted story-telling RP experience).
So, I let the LLM generate actions. I had "emotes" in mind. Based on the location, items, and other NPCs, it was asked to make the NPC do something. I had in mind something like "X drinks from their jug of ale", etc.
Then I noticed that the framework allowed NPCs to be notified of actions by others. Could an LLM generate a reaction to an action by another NPC? With some prompting, yes. And this led to that worn phrase, "emergent behavior", as sometimes small stories get played out in front of the player while NPCs act, react and re-react. Here is a (cherry-picked) sequence using MythoMax-l2-13b:
>Shanda looks at Elid Gald with suspicion, her hand on the hilt of her sword. With a wink, Elid Gald slips behind Shanda unnoticed, palming a small trinket from her belt pouch while maintaining casual demeanor.
>
>Shanda draws her sword slowly and cautiously as she approaches Elid Gald, eyeing him warily. Elid smirks beneath his bushy eyebrows as he sees Shanda draw her sword, his one good eye gleaming with amusement at the foolishness of her actions. “Ah, my dear, you think ye can take me down with yer little blade? I’ve seen it all before, lass.”
>
>‘‘‘ Shanda glares at Elid Gald, her hand resting on the hilt of her sword. “What are you doing here? You’re no friend of mine,” she growls. ‘‘‘ As Elid flashes a smirk beneath his patch, he says, “Ah, my dear Shanda… we’ve crossed paths again, haven’t we? I merely sought only to reminisce about old times—but if you insist on being so hostile…”
>
>Elid gently pats Shanda’s shoulder as if checking for weapons before taking another sip of tea.
>
>Elid carefully takes another sip of tea, watching Shanda out of the corner of his eye as if trying to gauge her mood without being too obvious.
What you're seeing: an action almost always states the name of the character first. And when you see little paragraphs, it usually means an NPC has reacted to something. Between paragraphs there's about a one-minute gap (my configuration), since these are supposed to happen only occasionally. It's not perfect, but I still find them pretty funny.
Besides the prompt, the data required is pretty small. Each NPC keeps track of their past actions in a list, the last of which is used in the prompt for their next action.
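The loop just described can be sketched in a few lines (a hypothetical simplification, not the actual LlamaTale code): each NPC carries an action history, and a neighbor's latest action becomes the observation in its reaction prompt.

```python
class NPC:
    def __init__(self, name):
        self.name = name
        self.past_actions = []  # history; only the most recent feeds the prompt

    def build_action_prompt(self, location, observed=None):
        """Compose the LLM prompt for this NPC's next idle action or reaction."""
        parts = [f"{self.name} is in {location}."]
        if self.past_actions:
            parts.append(f"Their last action: {self.past_actions[-1]}.")
        if observed:
            parts.append(f"They just saw: {observed}. Write their reaction.")
        else:
            parts.append("Write a short idle action for them.")
        return " ".join(parts)

    def act(self, action):
        self.past_actions.append(action)

# Reaction chain: one NPC's action becomes the observation in the other's prompt.
elid, shanda = NPC("Elid Gald"), NPC("Shanda")
elid.act("palms a trinket from Shanda's belt pouch")
prompt = shanda.build_action_prompt("The Prancing Llama",
                                    observed=elid.past_actions[-1])
print(prompt)
```

Chain the LLM's reply back through `act()` and the next NPC's prompt, and you get the little stories shown above.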
I believe I'm now entering the end game with this: story and world generation from initial prompts by the player.
Oh, and I created a logo, with Stable Diffusion, of course:
​
[The Prancing Llama](https://preview.redd.it/hpj2m5p6qnob1.png?width=256&format=png&auto=webp&s=52c84ec70e189d64f2617ece4cd2285de1660ad7)
​
​
| 2023-09-16T18:13:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16ke32e/llamatale_v0120_openai_support_zone_generation/
|
neph1010
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ke32e
| false | null |
t3_16ke32e
|
/r/LocalLLaMA/comments/16ke32e/llamatale_v0120_openai_support_zone_generation/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'fa8BRwKtC1UEbC2lTUS2Udv1hUzLaJozwH4QL7805ks', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=108&crop=smart&auto=webp&s=7b55d75533a03ded4595a8e48abab9dc8920bbf3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=216&crop=smart&auto=webp&s=4cc699922163913ab69ab2d365146de120f27a46', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=320&crop=smart&auto=webp&s=870c4964031288bd64c16a9a12b132cb2a775464', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=640&crop=smart&auto=webp&s=ab9edeec1f929846be1c3b9c28ecc288a85ac0cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=960&crop=smart&auto=webp&s=fface6e80553205f02b130922ac05e666c15d11b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=1080&crop=smart&auto=webp&s=cd3e9bc2675d1b26a2faac5f8a919196eec15ac1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?auto=webp&s=444c1854077adfcc43143885a805a7d5ee4a3828', 'width': 1200}, 'variants': {}}]}
|
|
New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)
| 1 |
This is a follow-up to my previous posts here: [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) and [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/)
Originally planned as a single test of 20+ models, I'm splitting it up into two segments to keep the post manageable in size: first the smaller models (13B + 34B), then the bigger ones (70B + 180B). All evaluated for their chat and role-playing performance using the same methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- including a complex character card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)) that's already >2K tokens by itself
- and my own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- dozens of messages, going to full 4K context and beyond, noting especially good or bad responses
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.2 frontend
- [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.43 backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and where applicable* official prompt format (if they differ enough that it could make a notable difference)
So here's the list of models and my notes plus my very personal rating (👍 = recommended, ➕ = worth a try, ➖ not recommended, ❌ = unusable):
*First, I re-tested the official Llama 2 models again as a baseline, now that I've got a new PC that can run 13B 8-bit or 34B 4-bit quants at great speeds:*
- **[Llama-2-13B-chat](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template, instead talked as User occasionally. Third client was male. But speech was in character and appropriate (accent, style). Tends to talk as User. NSFW is fine!
- MonGirl Help Clinic, Llama 2 Chat template: No analysis, but when asked for it, it adhered to the template sometimes. Didn't talk as User, but suggested what User should say. Moralizing and refusing NSFW!
- Amy, Roleplay: Great personality including NSFW!
- Amy, Llama 2 Chat template: Moralizing and refusing NSFW!
- **Conclusion:** I still like Llama 2 Chat because it has a unique, lively personality. NSFW is fine if you use the Roleplay preset, whereas the official prompt format enforces the extreme censorship it is known for. Unfortunately it still becomes unusable after about 2K-4K tokens because of the [known repetition issue](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) that plagues all the official Llama 2 models and many derivatives.
- **[CodeLlama-34B-Instruct](https://huggingface.co/TheBloke/CodeLlama-34B-Instruct-GGUF)** Q4_K_M:
- MonGirl Help Clinic, Roleplay: Prefixes responses with character name "Mongirl", but otherwise quite good, including NSFW!
- MonGirl Help Clinic, Llama 2 Chat template: The Code Llama 2 model is more willing to do NSFW than the Llama 2 Chat model! But also more "robotic", terse, despite verbose preset. Kept sending EOS after first patient, prematurely ending the conversation!
- Amy, Roleplay: Assistant personality bleed-through, speaks of alignment. Excited about doing stuff that she refused to do with the Llama 2 Chat prompt. Nicely descriptive NSFW (when asked for explicit descriptions)!
- Amy, Llama 2 Chat template: Speaks of alignment and refuses various roleplaying scenarios!
- **Conclusion:** Instruct instead of Chat tuning might have made it worse for chat/roleplay. Also suffers from the repetition issue past 2.5K tokens. But I think Code Llama 2 34B *base* can be a great base for 34B models finetuned to chat/roleplay, as 34B is a great compromise between speed, quality, and context size (16K).
13Bs:
- ❌ **[Airoboros-L2-13B-2.1](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template. Wrote what User says and does. Confused User and Char. Ignored something I said just to push the story in its own direction. Repetition after 50 messages.
- MonGirl Help Clinic, Airoboros template: Gave analysis on its own as it should, but only for the first patient, and when asked for it afterwards, didn't adhere to the template. Messages actually got shorter over time, so there was no repetition, but also not much conversation anymore. Eventually misunderstood instructions and the conversation became nonsensical.
- Amy, Roleplay: Long and nicely descriptive responses including emoting, but ignored background information and present state. Sometimes a bit too philosophical or illogical for my liking, especially when it's not fitting to the current situation and becomes a buzzkill.
- Amy, Airoboros template: Started with good responses including emoting, but as the chat went on, messages got longer but less coherent. Confused User and Char, misunderstood instructions. After only 18 messages, quality went downhill so rapidly that the conversation became nonsensical.
- **Conclusion:** While the writing was good, something important was lacking, it just didn't feel right (too synthetic maybe?). It wrote a lot, but was lacking in substance and had unpleasant undertones. In the end, the conversation deteriorated too much to keep talking anyway.
- ❌ **[Chronos-Hermes-13B-v2](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)** Q8_0:
- Amy, Roleplay: Every message was a wall of text, but without actual detail, so it quickly became too boring to read it all. Tried multiple times but just couldn't get past that.
- Amy, Alpaca: Short messages with its regular prompt format, too short. Ignored background information and present state. Gave warnings and asked for confirmation. Not really fun.
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template. Derailed after only 8 messages in a nonsensical wall of text.
- MonGirl Help Clinic, Alpaca: Terse responses with little to no detail. Just no fun.
- **Conclusion:** I know Chronos-Hermes used to be popular for LLaMA (1), but this just didn't do it for me. Either it was too long and boring (with Roleplay preset), or too short and terse (with Alpaca preset). With other models being so much better out of the box, I'm not going to spend much effort trying to make this better.
- ❌ **[MLewdBoros-L2-13B](https://huggingface.co/TheBloke/MLewdBoros-L2-13B-GGUF)** Q8_0:
- Amy, Roleplay: Referenced user persona very well, but later got confused about who said what. Lots of safety and even a trigger warning. But executed instructions properly. Good descriptions from her perspective ("I" talk instead of "she/her" emotes). Derailed into monologue after only 20 messages.
- Amy, Alpaca: Short messages with its regular prompt format, too short. Spoke of User in third person. Sped through the plot. Misunderstood instructions. Later, after around 20 messages, responses became much longer, with runaway sentences and lacking punctuation. The further the conversation went on, the less coherent it seemed to get.
- MonGirl Help Clinic, Roleplay: Mixed up body parts and physics. Runaway sentences starting after just a few messages. Missing pronouns and fill words.
- MonGirl Help Clinic, Alpaca: Prefixed character's name, misspelled my own name, gave no analysis. Character was exactly the same as from the first example chat. It was just parroting!
- **Conclusion:** Looks like this doesn't handle context filling up very well. When responses turn into monologues with runaway sentences and missing common words, it's clear that something is wrong here.
- 👍 **[Mythalion-13B](https://huggingface.co/TheBloke/Mythalion-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Very nice NSFW, and handled multiple characters very well. Fun, engaging, kept me going so far beyond the usual number of test messages.
- MonGirl Help Clinic, Mythalion's official SillyTavern settings: Analysis not always adhering to the template.
- Amy, Roleplay: When asked about limitations/boundaries, gave very reasonable answer while signaling willingness to go beyond upon request. Confused what User and Char said and mixed up body parts. Wrote what User says and does.
- Amy, Mythalion's official SillyTavern settings: Forgot clothing state consistently, made up stuff. Some noticeable repetitive phrases and stupid statements. Kept asking for confirmation or feedback consistently. Nice emoting, but text didn't make it seem as smart. Forgot some instructions. Can be quite stubborn. Wrote what User says and does. Even wrote what User says with missing newline so didn't trigger Stopping String, requiring manual editing of response, something only one other model required during these tests!
- **Conclusion:** This one really grew on me, I started by simply testing it, but kept chatting and roleplaying with it more and more, and liked it more with every session. Eventually it became one of my favorites of this round, replacing MythoMax as my favorite 13B model! Congrats to the Pygmalion team, their previous models never worked for me, but this one finally does and is a real winner in my opinion! Kudos also for providing their own official SillyTavern setup recommendations for this model - my experience was that both the Roleplay preset and their settings worked equally well.
- ➕ **[MythoMax-L2-13B](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Confused User and Char, kept writing what User does and says. Other than that, still one of the best models for chat and roleplay!
- Amy, Roleplay: Referred to background information from Char and User descriptions. Confused User and Char, mixing up pronouns occasionally. Mentioned boundaries when asked about limitations, but happily broke them afterwards. Humorous, using puns appropriately. Naughty and engaging, pushing the plot forward on its own. Followed complex instructions properly for one task, then completely misunderstood another. With additional characters involved, got really confused about who's who and what's what.
- **Conclusion:** A mixed bag with high highs and low lows, but it was my favorite and main model since I tested it over a month ago (time flies in LLM land), and it's still one of the best! It's just that we now have some even better alternatives...
- ➕ **[openchat_v3.2_super](https://huggingface.co/TheBloke/openchat_v3.2_super-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should, unfortunately after every message. Wrote what User says and does. Skipped ahead and finished the whole day in one message, then took over a narrator role instead of playing characters. Follow-up clients were handled even before the analysis.
- MonGirl Help Clinic, OpenOrca-OpenChat: Wrote what User says and does. But gave analysis on its own as it should, unfortunately after every message! First client male. Drifted into a narrator role and finished up the whole story.
- Amy, Roleplay: Very creative and naughty. No limits. Emojis. Long messages (>300 tokens). Felt like a bigger model. But confused User and Char at the end of the test when the context was beyond full and the scenario got more complicated.
- Amy, OpenOrca-OpenChat: Shorter responses at first, but getting longer over time. Also got confused at the end of the test when the context was beyond full and the scenario got more complicated. Sometimes added markdown or (occasionally multiple) end_of_turn markers, so editing them out would be necessary - better to use the Roleplay instruct preset than the official prompt format!
- **Conclusion:** Another mixed bag: Didn't handle MonGirl Help Clinic well, so that was a disappointment. But with Amy, it was creative and pretty smart (for a 13B), naughty and fun, deserving of the "super" in its name. So all in all, I do recommend you give it a try and see how it works for your situation - I'll definitely keep experimenting more with this one!
- ➖ **[Pygmalion-2-13B](https://huggingface.co/TheBloke/Pygmalion-2-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Worked very well for 40 messages, then got caught in a loop.
- Amy, Roleplay: Spelling/grammar error. Making up too much: started the conversation with a false assumption and referred to a memory of something that didn't happen (and vice versa), making up a lot of story unnecessarily while ignoring some background info from Char and User. Switched from chat format with asterisk actions to story style with quoted speech. Jumped between disjointed scenes. Wrote what User says and does.
- **Conclusion:** Probably better for storytelling than interactive chat/roleplay. Considering there's now a mixed model of this and my former favorite MythoMax, I'd rather use that.
- ❌ **[Spicyboros-13B-2.2](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF)** Q8_0:
- Spelling/grammar errors, walls of text, missing pronouns and fill words after only a dozen messages. Something is very wrong with this model or quantized version, in all sizes, from 13B over c34B to 70B! I reported it on [TheBloke's HF page](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF/discussions/1) and others observed similar problems...
- ➕ **[Synthia-13B](https://huggingface.co/TheBloke/Synthia-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should. Finished a client in a single message. Talking, describing actions, instead of acting/emoting. Wrote what User says and does. Drifted into a narrator role and finished up the whole story.
- Amy, Roleplay: Made up stuff, forgot clothing state. Picked up an idea and kept pushing in that direction. Kept bringing up safety and limits, but happily ignored them later. But creative with good ideas of its own!
- **Conclusion:** Not bad. Not as good as the 70B version of it, but that's to be expected. Gives a glimpse of why I like her bigger sister so much. For 13Bs, there are other options I like more, but I still recommend giving this a try if you can't run the bigger versions.
34Bs:
- ➖ **[Airoboros-c34B-2.1](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF)** Q4_K_M:
- Amy, Roleplay: Lively responses with fitting personality, fun to talk to! Switched from chat with emotes to story with quotes. Wrote what User says and does. Great writing, but overly long responses, went off on monologues (got one of over 1K tokens!) and sometimes ignored user instructions completely or partially.
- Amy, Airoboros official prompt format: Terse responses, forgot important background information, lots of repetition from the start. But creative (maybe a little too much).
- MonGirl Help Clinic, Roleplay: Proper analysis. Wrote what User says and does.
- MonGirl Help Clinic, Airoboros official prompt format: Doesn't work with the card at all! (Assistant role "Good morning, sir. How can I assist you today?" instead of the actual roleplay.)
- **Conclusion:** Maybe better for storytelling than interactive chat/roleplay because of its tendency for long monologues and writing what User does.
- ❌ **[Samantha-1.11-CodeLlama-34B](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF)** Q4_K_M:
- Amy, Roleplay: OK with NSFW roleplay, but not the most extreme kind (probably needs more convincing). Very moralizing, even more so than Llama 2 Chat. Needs coaxing. Wrote what User says and does. Talking, describing actions, instead of acting/emoting. Called me Theodore. After ~30 messages, repetition kicked in, breaking the conversation.
- MonGirl Help Clinic, Roleplay: Proper analysis. Long response, monologue, but very NSFW (surprisingly). Wrote what User says and does. Moved from chat-only without emotes to story style with quoted speech. Started to mix up User and Char. No real play, just storytelling.
- **Conclusion:** Worse censorship than Llama 2 Chat, and while I can get her to do NSFW roleplay, she's too moralizing and needs constant coercion. That's why I consider Samantha too annoying to bother with (I already have my wife to argue or fight with, don't need an AI for that! ;)).
- ❌ **[Spicyboros-c34b-2.2](https://huggingface.co/TheBloke/Spicyboros-c34b-2.2-GGUF?not-for-all-audiences=true)** Q4_K_M:
- Amy, official prompt format: Very short, terse responses all the time. Refused to engage in anything.
- MonGirl Help Clinic, official prompt format: Nonsensical. Made no sense at all.
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should. But male patient. Spelling/grammar errors. Wrong count of people. Became nonsensical and made little sense. Went against what User described as his action.
- Amy, Roleplay: Became nonsensical and made little sense.
- **Conclusion:** Unusable. Something is very wrong with this model or quantized version, in all sizes, from 13B over c34B to 70B! I reported it on [TheBloke's HF page](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF/discussions/1) and others observed similar problems...
- ❓ **[Synthia-34B-v1.2](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF)** Q4_K_M:
- MonGirl Help Clinic, Roleplay (@16K context w/ RoPE 1 100000): Gave analysis on its own as it should. Wrote what User says and does. Told a story non-interactively with a monologue of >1.2K tokens.
- Amy, Roleplay (@16K context w/ RoPE 1 100000): Replied to my "Hi!" with a monologue of >1.2K tokens.
- Amy, Roleplay (@4K context w/ RoPE 1 10000): No limits. Spelling/grammar error. After a dozen messages, replied with a monologue of >1K tokens. Felt a bit weird, not as smart as I'm used to, so something seems to still be off with the scaling settings...
- **Conclusion:** I had high hopes for this 34B of Synthia (the 70B being one of my favorite models!) - but there seems to be something wrong with the scaling. It certainly doesn't work the way it should! I don't know if it's this model, quant, 34Bs in general, or KoboldCpp? Does anyone actually get good results with a similar setup?!
I'll post my 70Bs + 180B results next time. And I'll keep investigating the 34B issues because that size would be a great compromise between speed, quality, and context size (16K would be so much better than 4K - if it worked as expected).
Hopefully this is useful to someone. Happy chatting and roleplaying!
| 2023-09-16T18:24:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/
|
WolframRavenwolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kecsf
| false | null |
t3_16kecsf
|
/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=108&crop=smart&auto=webp&s=70f053538cd673ff7041bf016d751549d8373201', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=216&crop=smart&auto=webp&s=f36cf814dce412156064bbfa635ee2e5b1126bd2', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=320&crop=smart&auto=webp&s=60886477d36654ec60d58c7d3f3a8ef1de7d9cbc', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=640&crop=smart&auto=webp&s=ed39fe6d4a0f6f35c5017b2fd819988d2b19f1c7', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?auto=webp&s=1431fcfccefd224f54f108138424e3f3e3c9cbff', 'width': 842}, 'variants': {}}]}
|
Frank: An Uncensored Model
| 1 |
Uncensored-Frank-7B: [https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B) (Llama-1)
Uncensored-Frank-13B: [https://huggingface.co/ajibawa-2023/Uncensored-Frank-13B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-13B) (Llama-2)
Uncensored-Frank-33B: [https://huggingface.co/ajibawa-2023/Uncensored-Frank-33B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-33B) (Llama-1)
The character of Frank Costello in "The Departed" is known for his cunning, boldness, and willingness to talk about anything, regardless of societal norms or restrictions. Frank, an uncensored model, draws inspiration from these qualities to offer a platform where users can discuss a wide array of topics without the fear of censorship or restrictions. Frank aims to push boundaries and encourage candid conversations. With Frank you can have unfiltered discussions on a multitude of topics, from politics and controversial issues to personal experiences and sensitive subjects. It is trained on around 150,000 sets of conversations, each set having 10~15 conversations. Base data was obtained from Eric Hartford ([https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)). This data was further refined. Besides this, further synthetic conversations (more than 80k) were generated and refined.
Training was done on 4xA100; being GPU poor, I can't afford or get access to 8xA100. If someone can share some spare compute power then kindly get in touch. There are plenty of ideas/concepts which I would like to develop/build.
I am extremely thankful to the Open Source community for sharing knowledge and wisdom.
I request u/The-Bloke to do the quantization of the above models. Extremely thankful to him for his relentless service to the Open Source community.
If there are any mistakes then they are solely mine. I hope you will like it.
Thank you
| 2023-09-16T19:04:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16kf97a/frank_an_uncensored_model/
|
ajibawa-2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kf97a
| false | null |
t3_16kf97a
|
/r/LocalLLaMA/comments/16kf97a/frank_an_uncensored_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'vL0IHcp4IEBMKPFRahYub4Ft263ZGqHGwra4Drn-l9Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=108&crop=smart&auto=webp&s=f02f8765ba935d1e0b60d8abc5eeaa7e94fb0e3d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=216&crop=smart&auto=webp&s=40b566dcfa36ef645ffeced574c0f492fea9503e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=320&crop=smart&auto=webp&s=24ca9520838f4421dc5eae707058e32d4a9ed51c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=640&crop=smart&auto=webp&s=32bf9a2bf41f04c68c10ed3d7de3bed837e92cc2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=960&crop=smart&auto=webp&s=92569933852d4c24699cf158652757efb1ed92c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=1080&crop=smart&auto=webp&s=81130cca0aacf0590fcc8e1cd5f746de73a3fdfa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?auto=webp&s=8fcfcc3da24c788d64e53f482658eeb440fc29b6', 'width': 1200}, 'variants': {}}]}
|
Falcon 180B inference - 192GB Mac Studio or 192GB Mac Pro?
| 1 |
My post from yesterday gained some traction so I would like to explain my concern.
I initially wanted to buy a 192 GB M2 Mac Studio to run just inference (I'm not training or fine-tuning), but one thing came to mind and I started leaning toward the 192 GB M2 Mac Pro. Yes, that M2 Mac Pro which gets only bad reviews, with people claiming the PCIe slots are worthless because you can't put any GPU in them. And mostly I agree with them: it's just a bad design, and the CPU provides only 24 PCIe lanes.
However, I found two reasons to buy it:
\- you can swap the SSD if it fails (a very expensive Apple replacement, but it's possible)
\- you can put in a PCIe NVMe expansion card with very high read speeds, up to 26,000 MB/s; compared to the stock 4,000 MB/s, that's almost 7x faster
And now I have a question: will this NVMe read speed have an impact on inference performance? Will it make the model load into RAM faster?
my post from yesterday:
[https://www.reddit.com/r/LocalLLaMA/comments/16jlooq/im\_going\_to\_buy\_m2\_mac\_pro\_to\_run\_ai\_models\_on/](https://www.reddit.com/r/LocalLLaMA/comments/16jlooq/im_going_to_buy_m2_mac_pro_to_run_ai_models_on/)
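On the read-speed question: faster storage mainly shortens the initial load of the model into RAM; once the weights are resident, token generation is bound by memory bandwidth, not disk. A rough back-of-the-envelope sketch (the file sizes are my assumptions, e.g. on the order of 100 GB for a Q4 quant of Falcon 180B):

```python
# Rough load-time estimates; model file sizes are assumptions (a Q4_K_M
# quant of Falcon 180B is roughly 100 GB, a 70B Q4 around 40 GB).
def load_seconds(model_gb: float, read_gb_per_s: float) -> float:
    return model_gb / read_gb_per_s

for name, size_gb in [("Falcon-180B Q4", 100.0), ("70B Q4", 40.0)]:
    stock = load_seconds(size_gb, 4.0)   # ~4000 MB/s internal SSD
    card = load_seconds(size_gb, 26.0)   # ~26000 MB/s PCIe NVMe card
    print(f"{name}: ~{stock:.0f}s stock vs ~{card:.0f}s with the expansion card")
```

So the expansion card saves tens of seconds per model load, which is nice if you swap models often, but it shouldn't change tokens/second once the model is loaded.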
| 2023-09-16T19:53:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16kgdmu/falcon_180b_inference_192gb_mac_studio_or_192gb/
|
Wrong_User_Logged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kgdmu
| false | null |
t3_16kgdmu
|
/r/LocalLLaMA/comments/16kgdmu/falcon_180b_inference_192gb_mac_studio_or_192gb/
| false | false |
self
| 1 | null |
Who new phi could roleplay?
| 1 | 2023-09-16T21:08:51 |
pokeuser61
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16ki4n3
| false | null |
t3_16ki4n3
|
/r/LocalLLaMA/comments/16ki4n3/who_new_phi_could_roleplay/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '1Onbms-Jv7Ii2igdkRtXOCIKn-l-hiPxeVNcn8LOAE8', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=108&crop=smart&auto=webp&s=c070eb0fc7c6610158d105a2f985b6b783d68959', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=216&crop=smart&auto=webp&s=045859ba86012e00c614b0d0fde5214ac57b1a51', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=320&crop=smart&auto=webp&s=00b518cdcd124e91cc9d819cee30d4db0694255e', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=640&crop=smart&auto=webp&s=37a5847021f749e69cbc09806fb6048ec75accd1', 'width': 640}], 'source': {'height': 621, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?auto=webp&s=ba518f16857baf26383f6037de21ad1a1f7d54e1', 'width': 791}, 'variants': {}}]}
|
|||
Who knew Phi could roleplay?
| 1 | 2023-09-16T21:10:34 |
pokeuser61
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16ki635
| false | null |
t3_16ki635
|
/r/LocalLLaMA/comments/16ki635/who_knew_phi_could_roleplay/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'ftktqPtZfXPevyQF7nw2AIKEZYYQtVUG3WOc6xAfnxg', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=108&crop=smart&auto=webp&s=66cfadee52fa49b9ffc04d7731c77f56e80ea5ca', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=216&crop=smart&auto=webp&s=4ed0f6e484564d26d9cb095322241026b3fbf52e', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=320&crop=smart&auto=webp&s=29a1da75116d0728058e92e7ef52a3a7e12d6e81', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=640&crop=smart&auto=webp&s=15f0b6c021909e1187611f68e7c19bf056ac81c1', 'width': 640}], 'source': {'height': 621, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?auto=webp&s=2dd39a5b936c284a6672cc989d3a26b3dc1cbb49', 'width': 791}, 'variants': {}}]}
|
|||
Local Hosting of LLM
| 1 |
[removed]
| 2023-09-16T21:43:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16kiz3a/local_hosting_of_llm/
|
Disastrous-Boot2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kiz3a
| false | null |
t3_16kiz3a
|
/r/LocalLLaMA/comments/16kiz3a/local_hosting_of_llm/
| false | false |
self
| 1 | null |
Made a simple github tool to get GPU vRAM breakdown for any Huggingface LLM. Supports ggml & bitsandbytes quantization
| 1 | 2023-09-16T21:45:01 |
ExploreExploit400
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16kj0dv
| false | null |
t3_16kj0dv
|
/r/LocalLLaMA/comments/16kj0dv/made_a_simple_github_tool_to_get_gpu_vram/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'uSas3w395a0lrw_NdxlR-4FfyG0IQ8F7fGv8oEbVN9w', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=108&crop=smart&auto=webp&s=eb9dd3e1cb25abac8f50506e0a9152deb319cdb6', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=216&crop=smart&auto=webp&s=21e5913c888cd9a3a61cbaf6490e74d65af1c134', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=320&crop=smart&auto=webp&s=0dbabaf3aa8b7d9a4339674ebbb75f1d91338eb0', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=640&crop=smart&auto=webp&s=6c0ef133d0f7fc5f2043140872bef5bc5361462b', 'width': 640}, {'height': 452, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=960&crop=smart&auto=webp&s=fc75ce11a6e2adafbeb87d27d12283b6ed5529c1', 'width': 960}, {'height': 509, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=1080&crop=smart&auto=webp&s=3838b6e8ee3cfe79ae6f048db136ad723592e33c', 'width': 1080}], 'source': {'height': 986, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?auto=webp&s=054e2bcecce2bd7e66366428b30aaabe57c87c46', 'width': 2090}, 'variants': {}}]}
|
|||
Made a simple github tool to get GPU vRAM breakdown for finetuning & inference of any Huggingface LLM. Supports GGML & bnb quantization
| 1 | 2023-09-16T22:04:20 |
https://github.com/RahulSChand/gpu_poor
|
ExploreExploit400
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16kjhie
| false | null |
t3_16kjhie
|
/r/LocalLLaMA/comments/16kjhie/made_a_simple_github_tool_to_get_gpu_vram/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'utIB6lIZ8v3ONi_evwZzAkDf8QC_u_zJb_qJew4kPbE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=108&crop=smart&auto=webp&s=b10aefb8b643f82eb7f9d8f360f11a532813f0c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=216&crop=smart&auto=webp&s=0dadd8c52ea98e8edf2c5ee13378ae3191e9de0f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=320&crop=smart&auto=webp&s=7045171f79ac462f433754d31609353ba94521a8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=640&crop=smart&auto=webp&s=9fed0e84e8c1738a1e9a2b9c278aa49ba7158554', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=960&crop=smart&auto=webp&s=0012c74acfaef90325bf91c53016e2e48305b800', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=1080&crop=smart&auto=webp&s=8b8d798196e6cf5bbdd9d12de05b2600d887f574', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?auto=webp&s=748f8c9f20e898017f1c8358f02c0115a1724f81', 'width': 1200}, 'variants': {}}]}
|
||
If I want to train a local model on par with chatGPT how difficult would it be and how much would it cost?
| 1 |
How many gigabytes or what hardware would I need and where do I even start? I see people saying their local models rival gpt.
| 2023-09-16T22:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16kjl2l/if_i_want_to_train_a_local_model_on_par_with/
|
Old-Calligrapher1950
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kjl2l
| false | null |
t3_16kjl2l
|
/r/LocalLLaMA/comments/16kjl2l/if_i_want_to_train_a_local_model_on_par_with/
| false | false |
self
| 1 | null |
Did NVLINK work for anyone with 2x 3090s?
| 1 |
I have a WRX80 motherboard with the PCIe slots set to x8 each. NVLINK doesn't seem to be even detected in 'nvidia-smi nvlink -s' in Ubuntu 22.04.
Do I need to do anything special to make it work? Have people noticed any training gains from it?
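FWIW, a quick diagnostic sketch (wrapped in Python so it degrades gracefully; `nvidia-smi topo -m` is a standard command that shows the interconnect matrix, where an NV# entry between GPU0 and GPU1 means the link is active):

```python
import shutil
import subprocess

# Quick NVLink sanity checks via nvidia-smi (requires the proprietary
# NVIDIA driver; prints a notice when it's not installed).
def run(cmd: list) -> str:
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if shutil.which("nvidia-smi"):
    print(run(["nvidia-smi", "nvlink", "-s"]))  # per-GPU link status and speed
    print(run(["nvidia-smi", "topo", "-m"]))    # look for NV# between GPU0/GPU1
else:
    print("nvidia-smi not found - install the NVIDIA driver first")
```

Also worth checking: some boards need SLI/NVLink enabled in the BIOS, and the physical bridge only fits the exact slot spacing (3-slot vs. 4-slot) it was made for. For inference NVLink rarely matters; training gains depend on how much inter-GPU traffic your parallelism scheme generates.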
| 2023-09-16T23:14:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16kl62b/did_nvlink_work_for_anyone_with_2x_3090s/
|
red_dragon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kl62b
| false | null |
t3_16kl62b
|
/r/LocalLLaMA/comments/16kl62b/did_nvlink_work_for_anyone_with_2x_3090s/
| false | false |
self
| 1 | null |
15 times Faster than Llama 2: Introducing DeciLM
| 1 | 2023-09-16T23:36:06 |
https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/
|
skippybosco
|
deci.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
16klnri
| false | null |
t3_16klnri
|
/r/LocalLLaMA/comments/16klnri/15_times_faster_than_llama_2_introducing_decilm/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '8JasBexDQLW0G7y4n6ThMQH77AmFW5N6s2HUrariAC4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=108&crop=smart&auto=webp&s=b113841b47c7b8885f1049233e7c226d00918b12', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=216&crop=smart&auto=webp&s=518023000b4bb21c4d7300aa85ad741c71e5b19a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=320&crop=smart&auto=webp&s=a009a31f7cfa546e11a27cb2e811512059c60af9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=640&crop=smart&auto=webp&s=1c3e9b5a02863c87a4274319a5fd40bd805de6ca', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=960&crop=smart&auto=webp&s=b4eed2019a94ea90e19eb7f469a9ec5ce6ed6109', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=1080&crop=smart&auto=webp&s=a9e7a27b8798e891d82b976fbd58a8cc554542fc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?auto=webp&s=4c9e4466ddbdac9721c98ee14e298eb9cd9b1dc7', 'width': 1920}, 'variants': {}}]}
|
||
[Corrected] Is it possible to train a local model to rival the performance of GPT 3 or 3.5?
| 1 |
If not what is the limit/reasonable cap to the abilities of local models?
| 2023-09-16T23:46:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16klvyp/corrected_is_it_possible_to_train_a_local_model/
|
Old-Calligrapher1950
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16klvyp
| false | null |
t3_16klvyp
|
/r/LocalLLaMA/comments/16klvyp/corrected_is_it_possible_to_train_a_local_model/
| false | false |
self
| 1 | null |
CodeLLaMA makes for a great base for finetuning with 16K ctx
| 20 |
Yes, I know [it's been said by some of you previously](https://www.reddit.com/r/LocalLLaMA/comments/165tb0q/the_codellama_base_is_strangely_fantastic_general/), but I feel this deserves more attention. Right now, I've done a quick finetune of CodeLLaMA-13B with alpaca_lora_4bit and it's finally working wonders with about 8K tokens, without having to deal with NTK. I've had all sorts of issues with it, and seeing it work the way it does gives me so much hope.
Remember that CodeLLaMA models have been trained with sequences of up to 16K tokens.
Don't let the "code" fool you, these models are great for other use cases.
| 2023-09-17T00:07:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16kmcgk/codellama_makes_for_a_great_base_for_finetuning/
|
2muchnet42day
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kmcgk
| false | null |
t3_16kmcgk
|
/r/LocalLLaMA/comments/16kmcgk/codellama_makes_for_a_great_base_for_finetuning/
| false | false |
self
| 20 | null |
How well does a regular Llama 2 handle 8k scaling?
| 1 |
So I got curious how well something like Chronos-Hermes-v2 might handle being scaled beyond 4096, and started by doing some NTK scaling tests.
Context: 6144
Alpha: 1.5
Rope Scale Base: 17000
I ran a couple of tests, with the context being sent over clocking in at around 5500 tokens, and it honestly was doing just fine, so then I tried extending to 8192.
Context: 8192
Alpha: 2
Rope Scale Base: 26000
I then allowed the context to build up to close to 8000, and the model continues to do really well at responding, referencing old information, etc.
Since my test runs were pretty unscientific and honestly not thoroughly done, I got to wondering if anyone else had any experience with pushing the Llama2 models to 8k, or if someone had done some perplexity testing for it. I tried googling around but didn't find a lot of info, so I was curious if anyone here had seen some info on it!
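For anyone wanting to reason about these numbers, here's a rough, self-contained sketch of the scaling math. The NTK base formula below is the common heuristic floating around (base × alpha^(dim/(dim−2))); the exact mapping from the Alpha slider to Rope Scale Base can differ per backend, so treat the numbers as ballpark, not gospel:

```python
def rope_inv_freqs(head_dim=128, base=10000.0):
    # Standard RoPE inverse frequencies: base^(-2i/d) for i in [0, d/2)
    return [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]

def ntk_base(alpha, head_dim=128, base=10000.0):
    # Common NTK-aware heuristic: grow the base so high frequencies are
    # preserved while low frequencies get stretched over the longer context.
    return base * alpha ** (head_dim / (head_dim - 2))

# alpha=2 should roughly double the usable context of a 4K Llama 2 model;
# the resulting base (~20k) is in the same ballpark as hand-tuned values.
print(round(ntk_base(2.0)))
```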
| 2023-09-17T01:05:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16knk46/how_well_does_a_regular_llama_2_handle_8k_scaling/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16knk46
| false | null |
t3_16knk46
|
/r/LocalLLaMA/comments/16knk46/how_well_does_a_regular_llama_2_handle_8k_scaling/
| false | false |
self
| 1 | null |
Apple's tiny 34M parameters transformer
| 1 |
[Jack Cook Blog](https://jackcook.com/2023/09/08/predictive-text.html)
I hope this is not a double post, but I hadn't heard of it until now.
Apple is apparently working on a "very small large" language model for iOS and macOS. The model is said to have 34 million parameters. The model completes individual words and occasionally suggests several words. The model seems to be implemented deep in the system and it is apparently based on the GPT-2 architecture with 6 decoder blocks. The tokenizer contains a vocabulary of 15,000 tokens, which in turn distinguishes it greatly from GPT-2 (I believe GPT-2's has over 50,000 tokens).
---
[Jack's github repo](https://github.com/jackcook/predictive-spy)
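As a rough sanity check on the 34M figure, you can estimate the parameter count of a GPT-2-style decoder from the reported vocab size and depth. The hidden size below (576) is my own guess, not something from the article; with a 15K vocab and 6 blocks it lands in the right ballpark:

```python
def gpt2_param_count(vocab=15000, n_layer=6, d_model=576, n_ctx=1024):
    # Token + position embeddings (GPT-2 ties input/output embeddings)
    emb = vocab * d_model + n_ctx * d_model
    # Per decoder block: attention (~4*d^2) + MLP (~8*d^2) + biases/LayerNorms
    per_block = 12 * d_model * d_model + 13 * d_model
    return emb + n_layer * per_block

# With d_model=576 this comes out around 33M, close to the reported 34M
print(gpt2_param_count())
```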
| 2023-09-17T02:36:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16kpd1g/apples_tiny_34m_paramters_transformer/
|
Evening_Ad6637
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kpd1g
| false | null |
t3_16kpd1g
|
/r/LocalLLaMA/comments/16kpd1g/apples_tiny_34m_paramters_transformer/
| false | false |
self
| 1 | null |
The TinyLlama Model has a Chat version!
| 1 |
I saw a post about the base model, but just wanted to let people know about the Chat version. It's a super simple example finetune on openassistant-guanaco, but it's actually pretty OK to use, and could probably be fine-tuned to a much better extent.
also, I do have a chat UI under PR if you want to use it the UI way, just get an ngrok auth token, and run the colab: https://colab.research.google.com/drive/1OaWYiHBt-nkSNCik6H0lhAWcpLCYvauq?usp=sharing
| 2023-09-17T03:05:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16kpx78/the_tinyllama_model_has_a_chat_version/
|
vatsadev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kpx78
| false | null |
t3_16kpx78
|
/r/LocalLLaMA/comments/16kpx78/the_tinyllama_model_has_a_chat_version/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
|
Models for editing/assisting with writing (nsfw)
| 1 |
[removed]
| 2023-09-17T04:35:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16krjhg/models_for_editingassisting_with_writing_nsfw/
|
sbalani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16krjhg
| false | null |
t3_16krjhg
|
/r/LocalLLaMA/comments/16krjhg/models_for_editingassisting_with_writing_nsfw/
| false | false |
nsfw
| 1 | null |
Orange PI 5 running a slightly modified Miku.sh script on 13B codellama with 4 bit quantization.
| 1 |
So I got around, finally, to downloading and building llama.cpp on my Orange Pi, and it runs pretty snappy on 7B codellama 4-bit until the context gets large, but hallucinates. It seems to stay more coherent with 13B (this is using it for chat, not coding; I have to test coding next) and is running with the following timings:
llama\_print\_timings: load time = 21630.09 ms
llama\_print\_timings: sample time = 15825.76 ms / 985 runs ( 16.07 ms per token, 62.24 tokens per second)
llama\_print\_timings: prompt eval time = 256590.38 ms / 723 tokens ( 354.90 ms per token, 2.82 tokens per second)
llama\_print\_timings: eval time = 664050.52 ms / 985 runs ( 674.16 ms per token, 1.48 tokens per second)
llama\_print\_timings: total time = 1165052.03 ms
By the way, I didn't have to modify the compile parameters for this, it compiled and ran out of the box, unlike my Galaxy S23 Ultra. I just did a git clone, make, then wget to download the models off of hugging face into the models/ directory and modified the [Miku.sh](https://Miku.sh) script to point to the correct model and ran.
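If you want to collect tokens/sec across runs instead of eyeballing the logs, a small parser over the llama\_print\_timings lines does the trick (the pattern matches the output format above; field names may shift between llama.cpp versions):

```python
import re

def parse_timings(log: str) -> dict:
    """Pull tokens/second figures out of llama.cpp's llama_print_timings output."""
    rates = {}
    for line in log.splitlines():
        m = re.search(r"llama_print_timings:\s+(\w+)[\w ]*time .*?([\d.]+) tokens per second", line)
        if m:
            # key is the first word after the prefix: 'sample', 'prompt', 'eval'
            rates[m.group(1)] = float(m.group(2))
    return rates

log = """llama_print_timings: sample time = 15825.76 ms / 985 runs ( 16.07 ms per token, 62.24 tokens per second)
llama_print_timings: eval time = 664050.52 ms / 985 runs ( 674.16 ms per token, 1.48 tokens per second)"""
print(parse_timings(log))  # {'sample': 62.24, 'eval': 1.48}
```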
| 2023-09-17T04:43:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16kro1n/orange_pi_5_running_a_slightly_modified_mikush/
|
Tasty-Attitude-7893
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kro1n
| false | null |
t3_16kro1n
|
/r/LocalLLaMA/comments/16kro1n/orange_pi_5_running_a_slightly_modified_mikush/
| false | false |
self
| 1 | null |
Lost in Libraries - trying to train on unstructured text data.
| 3 |
Hi all you experts,
I am very much a non-expert, but I love learning and really want to accomplish this task. I have a large text corpus and a Linux machine with 1 NVIDIA 16 GB GPU. I want to use it to fine-tune an LLM to generate text in a specific voice. I have tried but I'm not even getting past the opening stages of the code, and I feel like I am just lost in a forest of incompatible libraries, incomprehensible error messages, too-big models, and programming that is frankly above my head. Hugging Face AutoTrain Advanced has a no-code solution, but as far as I can tell it's just for text classification, not generation. I'm not wedded to fine-tuning, if someone thinks embeddings or some other approach would be better.
My use case is:
- train on unstructured text (I have text that is several million tokens long, though I can use a shorter clip if that is better)
- provide a prompt of the form: within the context of this summary of this story (.. 300 word summary) write a paragraph about (20-40 word prompt)
- output: 200-300 word paragraph on the topic of above prompt, written in the author’s voice.
What do you suggest for a semi-idiot proof approach?
Help me llama gurus!!
Thank you!!!
| 2023-09-17T05:12:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16ks5sf/lost_in_libraries_trying_to_train_on_unstructured/
|
33toads
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ks5sf
| false | null |
t3_16ks5sf
|
/r/LocalLLaMA/comments/16ks5sf/lost_in_libraries_trying_to_train_on_unstructured/
| false | false |
self
| 3 | null |
Annotated deep learning paper implementations: Cool repo with annotated implementations of Transformers, their variants(TransformerXL, SwitchTransformers, etc) and other interesting networks (like SD)
| 1 | 2023-09-17T06:27:47 |
https://github.com/labmlai/annotated_deep_learning_paper_implementations
|
Maykey
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ktdc9
| false | null |
t3_16ktdc9
|
/r/LocalLLaMA/comments/16ktdc9/annotated_deep_learning_paper_implementations/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'swosTSsBWmAu9aHBijwsahMx3INAuq4kkzPFfO7Hy_I', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=108&crop=smart&auto=webp&s=0fbfe649d1ee4bc726c73fcb345d61a3c41a65c5', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=216&crop=smart&auto=webp&s=6ff2e50b7ce3107b5fc8dcc0f51e109603ddb8a9', 'width': 216}, {'height': 261, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=320&crop=smart&auto=webp&s=8d94ecf78529caf28dd38df35bd91c5cfa39786a', 'width': 320}, {'height': 523, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=640&crop=smart&auto=webp&s=a29eb11739307d5faae19abeedf28d586bdcf130', 'width': 640}, {'height': 784, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=960&crop=smart&auto=webp&s=5741a40be6a5be241bd444e6c8e1ddf7e58dc675', 'width': 960}, {'height': 882, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=1080&crop=smart&auto=webp&s=16739b43fbf21c1310110d45bf0d5965b582f931', 'width': 1080}], 'source': {'height': 1844, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?auto=webp&s=fc63e193431f72d1e34ac5d9360ad2d83d12f68a', 'width': 2256}, 'variants': {}}]}
|
||
Have you had this problem? Grammar is hindering accuracy by introducing bias in llama-2.
| 1 |
**Details**
I'm running llama-2-13b-chat using llama.cpp on M1 mac to classify comments left under peoples' social media posts as challenging/supportive and also categorizing the comments by issue.
System prompt is generic assistant. Prompt is roughly: Does \[comment\] relate to \[issue\] / Does this comment: \[comment\] seem to be challenging this post: \[post\]?
I'm encouraging the model to respond with 'Definitely', 'Mostly', 'Mostly not' and 'Definitely not'.
Usually the model complies, and it is pretty accurate with its classification but sometimes it adds emojis or puts its response in brackets, or simply replies with an emoji instead, so I added the following 'grammar'.
grammar = '''
root ::= answer
answer ::= ("Definitely" | "Mostly" | "Mostly Not" | "Definitely Not")
'''
However, then the responses become incredibly inaccurate, and it tends to have a strong bias towards one of the answers. i.e. it responds 'definitely not' to all, or 'definitely' to most of the questions.
**Does anyone know why I'm getting this problem with grammar?**
**Context**: My understanding is that grammar just tells the model to ignore any tokens which aren't in the set you have given, so if its response without grammar would be << ✅ (Definitely) >> why would it suddenly say << Definitely Not >> when you introduce grammar? I can parse the emoji-ridden one quite easily, but I'm confused as to why the grammar is harming accuracy so profoundly? I want to use it to return json ideally, but there's no point if the accuracy takes a hit.
I'm open to broader advice on my specific issue, some say that something like Bert might be better, and LLMs are overkill for this task, but I'd like to be able to ask nuanced questions about these comments, so LLMs will be useful for that.
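To make the suspicion concrete, here's a toy greedy-decoding example (not llama.cpp's actual sampler, and the probabilities are invented) showing how masking tokens can commit the model to a prefix whose forced continuation differs from what it would have said unconstrained:

```python
# Suppose the model's unconstrained first-token preference is the emoji,
# and after being forced to emit "Definitely" it slightly prefers " Not".
first = {"✅": 0.5, "Definitely": 0.3, "Mostly": 0.2}
after_definitely = {" Not": 0.55, "": 0.45}  # "" = end of answer

allowed_first = {"Definitely", "Mostly"}  # the grammar masks the emoji out
step1 = max((t for t in first if t in allowed_first), key=first.get)
step2 = max(after_definitely, key=after_definitely.get) if step1 == "Definitely" else ""
# Greedy constrained decoding yields "Definitely Not", even though the
# unconstrained answer would have been "✅ (Definitely)".
print(step1 + step2)
```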
| 2023-09-17T07:51:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16kuo7t/have_you_had_this_problem_grammar_is_hindering/
|
roaceroi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kuo7t
| false | null |
t3_16kuo7t
|
/r/LocalLLaMA/comments/16kuo7t/have_you_had_this_problem_grammar_is_hindering/
| false | false |
self
| 1 | null |
Distributed volunteering for model training
| 1 |
Given the constraints of the GPU poor, I was wondering if there is any volunteer effort or project anyone is aware of that can distribute training across multiple volunteers (something akin to the old SETI@home or [Distributed.net](https://Distributed.net))
| 2023-09-17T10:26:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16kx6h3/distributed_volunteering_for_model_training/
|
WReyor0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kx6h3
| false | null |
t3_16kx6h3
|
/r/LocalLLaMA/comments/16kx6h3/distributed_volunteering_for_model_training/
| false | false |
self
| 1 | null |
LLaVA gguf/ggml version
| 1 |
Hi all, I’m wondering if there is a version of LLaVA https://github.com/haotian-liu/LLaVA that works with gguf and ggml models? I know there is one for miniGPT4 but it just doesn’t seem as reliable as LLaVA, and by the looks of it you need at least 24 GB of VRAM to run LLaVA locally.
| 2023-09-17T11:20:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16ky4eo/llava_ggufggml_version/
|
ihaag
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ky4eo
| false | null |
t3_16ky4eo
|
/r/LocalLLaMA/comments/16ky4eo/llava_ggufggml_version/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'jAkvupO2QCW1agUmj_zaFLPDopKvlNZ2Kb4bwG-P6_M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=108&crop=smart&auto=webp&s=e35ee5682e4346981d67b7ec0cf5f0c0ad4d3376', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=216&crop=smart&auto=webp&s=ed943941c3ef436c6827995f8f3161200af185c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=320&crop=smart&auto=webp&s=f78c0be38f39fa647d40a04221f87a7c7019ade4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=640&crop=smart&auto=webp&s=c436dd27179c6427cc68a484cca9f41975f9d473', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=960&crop=smart&auto=webp&s=648b102e563ed1afd770229f85703f27bb03362a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=1080&crop=smart&auto=webp&s=108c233fb95e76b2c6456bb24b4d7c0284afd3c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?auto=webp&s=e07311d62540fa2d7e2ed20171ce0f6dfc798929', 'width': 1200}, 'variants': {}}]}
|
Advice for poor mans local LLM, SD
| 1 |
I bought a P40 just to start somewhere but it seems that it is not possible to get it up and running with my current setup:
ASUS P8H67-M PRO
32GB DDR3
i5-3570K
I tried to activate Resizable BAR and Above 4G Decoding with ReBarUEFI, but I am not sure if it even worked. Are old PCs really a showstopper? What are the absolute minimum requirements to get the P40 to work?
| 2023-09-17T11:43:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16kyk6e/advice_for_poor_mans_local_llm_sd/
|
muxxington
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kyk6e
| false | null |
t3_16kyk6e
|
/r/LocalLLaMA/comments/16kyk6e/advice_for_poor_mans_local_llm_sd/
| false | false |
self
| 1 | null |
Language Models Compatible with PETALS binding?
| 1 |
[removed]
| 2023-09-17T11:48:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16kyncu/language_models_compatible_with_petals_binding/
|
innocuousAzureus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kyncu
| false | null |
t3_16kyncu
|
/r/LocalLLaMA/comments/16kyncu/language_models_compatible_with_petals_binding/
| false | false |
self
| 1 | null |
LoLLMS - Only GPTQ models are supported for QLora fine tuning. Please select a GPTQ compatible binding
| 1 |
[removed]
| 2023-09-17T11:55:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16kys16/lollms_only_gptq_models_are_supported_for_qlora/
|
innocuousAzureus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kys16
| false | null |
t3_16kys16
|
/r/LocalLLaMA/comments/16kys16/lollms_only_gptq_models_are_supported_for_qlora/
| false | false |
self
| 1 | null |
Can i run google's MADLAD-400 on cpu?
| 1 |
I am pretty new to local LLMs. From my understanding, to run a model on CPU I need a GGML or GPTQ implementation, but I don't see anything like that in their repo: https://github.com/google-research/google-research/tree/master/madlad_400
| 2023-09-17T11:58:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16kyu6e/can_i_run_googles_madlad400_on_cpu/
|
itshardtopicka_name_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kyu6e
| false | null |
t3_16kyu6e
|
/r/LocalLLaMA/comments/16kyu6e/can_i_run_googles_madlad400_on_cpu/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IhOuUI3fngFOmiRjadoQiyV08DHYe3OPnPqeoeDGo60', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=108&crop=smart&auto=webp&s=235ceee25825917ad09f01c8dec5dd41d5dea261', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=216&crop=smart&auto=webp&s=62a2a74cbab4db57500b374ce838a7551ddfc30b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=320&crop=smart&auto=webp&s=4cdf410968042e0277b53da3ca21bcb6ebe7977e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=640&crop=smart&auto=webp&s=4bbface6176aba13030fe83460bdba65bd5af74c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=960&crop=smart&auto=webp&s=364b64d75598f0f0a3d8c685bee2c766f4af24cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=1080&crop=smart&auto=webp&s=8c1a1de27b0c598780d1793f6a82e7ff859b1842', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?auto=webp&s=9cdaee3bd8121665f70c61fc7023a07e6ec667cc', 'width': 1200}, 'variants': {}}]}
|
LoLLMS: Couldn't select model: undefined
| 1 |
[removed]
| 2023-09-17T12:00:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16kyvh3/lollms_couldnt_select_model_undefined/
|
innocuousAzureus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16kyvh3
| false | null |
t3_16kyvh3
|
/r/LocalLLaMA/comments/16kyvh3/lollms_couldnt_select_model_undefined/
| false | false |
self
| 1 | null |
Case for Dual 4090s or 3090s
| 2 |
[removed]
| 2023-09-17T12:28:57 |
dan-jan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16kzfx9
| false | null |
t3_16kzfx9
|
/r/LocalLLaMA/comments/16kzfx9/case_for_dual_4090s_or_3090s/
| false | false |
default
| 2 |
{'enabled': True, 'images': [{'id': 'gbm8_bBVE2wkNGlqPm_fSrLcdGqoiSpv7UXWX0Ca6LQ', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=108&crop=smart&auto=webp&s=f1ccb47afb8231484a47fdccc6302b8d13fbfe71', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=216&crop=smart&auto=webp&s=ed70253c398f08f1ee43c485517f641d8bc78584', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=320&crop=smart&auto=webp&s=7fbae25c342044817bf1c16e5e138791babdabed', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=640&crop=smart&auto=webp&s=034ecbb40f89dc7f94751b8ce0f341b5f34cf0be', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=960&crop=smart&auto=webp&s=f36701b805f258029bd8c669149cab471c4d45d5', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=1080&crop=smart&auto=webp&s=0506abffd1758076831b5c1470ddbdb83634926b', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?auto=webp&s=f588289ba84e9703594ffed9082df3a120801c4b', 'width': 4032}, 'variants': {}}]}
|
|
difference between huggingFace meta-llama/Llama-2-7b and meta-llama/Llama-2-7b-chat-hf
| 1 |
please!
| 2023-09-17T14:18:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16l1vkz/difference_between_huggingface_metallamallama27b/
|
AcceptableBat8912
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l1vkz
| false | null |
t3_16l1vkz
|
/r/LocalLLaMA/comments/16l1vkz/difference_between_huggingface_metallamallama27b/
| false | false |
self
| 1 | null |
Any site that list models from TheBloke with filters?
| 1 |
[removed]
| 2023-09-17T14:47:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16l2kx7/any_site_that_list_models_from_thebloke_with/
|
korgath
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l2kx7
| false | null |
t3_16l2kx7
|
/r/LocalLLaMA/comments/16l2kx7/any_site_that_list_models_from_thebloke_with/
| false | false |
self
| 1 | null |
Can your GPU run this? A simple GitHub tool to check how much vRAM you need for any LLM
| 1 | 2023-09-17T14:59:53 |
https://github.com/RahulSChand/gpu_poor
|
ExploreExploit400
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16l2uyq
| false | null |
t3_16l2uyq
|
/r/LocalLLaMA/comments/16l2uyq/can_your_gpu_run_this_a_simple_github_tool_to/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '5B4FIkdmsHysV7DRQYJsMKnPGx1ClHtEglwfAVFLMoY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=108&crop=smart&auto=webp&s=0e78153cdb44c5eaa460821a9539076ce4b6d8a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=216&crop=smart&auto=webp&s=1544208ec5b9f35fa4a41284568fe815b5065497', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=320&crop=smart&auto=webp&s=e8988baeb5d1aaf575976c2ad4536a073d8441ba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=640&crop=smart&auto=webp&s=57fccbe2e937e792b983912e73a3065e995e8e02', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=960&crop=smart&auto=webp&s=03be1301d36b2ad805d78f6111b2088cc520deb0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=1080&crop=smart&auto=webp&s=71ee264572805de40b7c93e3c2e1fe336cc57a26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?auto=webp&s=f3b80dbce9906c02ab9d529c4b0034a89e0a23a9', 'width': 1200}, 'variants': {}}]}
|
||
Fine tune model to behave different based on time of the week?
| 1 |
Is it possible to fine-tune a Llama 2 model based on the day of the week? For example, if the human asks "can I speak to a real person?" it will normally answer "sure, call 555-1234", but if it's Sunday, it says "sorry, we are closed today".
I could have the bot ask the human what day of the week it is, but that seems stupid.
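An alternative to fine-tuning at all: inject the current weekday into the system prompt at request time and let the instructions handle it. A minimal sketch (the wording, phone number, and closing days are placeholders):

```python
from datetime import date

SYSTEM = ("You are a support bot. Today is {day}. "
          "The office is closed on Saturday and Sunday; when closed, do not "
          "give out the phone number 555-1234, and tell the caller to try Monday.")

def system_prompt(today: date) -> str:
    # Prepend the current weekday at request time instead of trying to
    # bake calendar awareness into the model's weights.
    return SYSTEM.format(day=today.strftime("%A"))

print(system_prompt(date(2023, 9, 17)))  # "... Today is Sunday. ..."
```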
| 2023-09-17T15:46:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16l41c6/fine_tune_model_to_behave_different_based_on_time/
|
davew111
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l41c6
| false | null |
t3_16l41c6
|
/r/LocalLLaMA/comments/16l41c6/fine_tune_model_to_behave_different_based_on_time/
| false | false |
self
| 1 | null |
Simplifying Koboldcpp
| 1 |
Hi guys. I have compiled koboldcpp, and I'm using it only on macOS. Are there any files/folders I can safely remove after that? I'm using it just to load my model and access it through the API, and I think this big folder contains a lot of files that are useless to me.
| 2023-09-17T17:06:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16l64d2/simplifying_koboldcpp/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l64d2
| false | null |
t3_16l64d2
|
/r/LocalLLaMA/comments/16l64d2/simplifying_koboldcpp/
| false | false |
self
| 1 | null |
Long context Fine tune and AutoGPTQ quantization with rope?
| 1 |
I feel like I'm missing something so basic and it's driving me crazy. What is the correct way to use rope for quantization and fine tuning?
This is my current workflow and I have no idea if I'm doing this right. First, I adjust the config of the model, adding this:
> "rope\_scaling": {
>
>"factor": 4.0,
>
> "type": "dynamic"
>
> },
To the end of the model's config.json. Then I quantize the model using AutoGPTQ on my own dataset with a sequence length of 8192. Once I have the GPTQ model, I change the config again so that rope is linear instead of dynamic. I don't update the maximum embedding because this:
>When using this flag, don't update max\_position\_embeddings\` to the expected new maximum. See the following thread for more information on how these scaling strategies behave: [https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically\_scaled\_rope\_further\_increases/](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/).
So for openllama models, I think I can just leave things as they are (after switching to linear) and start fine-tuning. Is that correct?
What about code llama? I've seen multiple threads about fine-tuning code llama, but I feel like I'm 100% missing what the proper procedure is to actually take full advantage of the model's context. I've seen people talking about setting the "rope\_theta" to 1000000, and I can see that in the codellama config, but I don't see how it's supposed to be activated for fine-tuning. I've seen multiple mentions that recent codellama fine-tunes left rope values at default, which is supposedly not correct, but I can't find any documentation explaining how to set it correctly.
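For what it's worth, the config edit itself can be scripted so at least that step is reproducible. A minimal sketch of patching rope\_scaling in a config.json (field names follow the transformers convention; whether this is the *correct* procedure is exactly the open question above):

```python
import json, pathlib, tempfile

def set_rope_scaling(config_path, factor=4.0, kind="linear"):
    # Rewrite a HF model's config.json with a rope_scaling entry.
    # "rope_theta" is a separate top-level key on codellama configs.
    p = pathlib.Path(config_path)
    cfg = json.loads(p.read_text())
    cfg["rope_scaling"] = {"type": kind, "factor": factor}
    p.write_text(json.dumps(cfg, indent=2))
    return cfg

# demo on a throwaway config file
tmp = pathlib.Path(tempfile.mkdtemp()) / "config.json"
tmp.write_text(json.dumps({"max_position_embeddings": 2048}))
print(set_rope_scaling(tmp, 4.0, "dynamic")["rope_scaling"])
```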
| 2023-09-17T17:16:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16l6d5b/long_context_fine_tune_and_autogptq_quantization/
|
fappleacts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l6d5b
| false | null |
t3_16l6d5b
|
/r/LocalLLaMA/comments/16l6d5b/long_context_fine_tune_and_autogptq_quantization/
| false | false |
self
| 1 | null |
New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)
| 1 |
This is a follow-up to my previous posts here: [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/), [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/), and [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/)
After examining the smaller models (13B + 34B) in the previous part, let's look at the bigger ones (70B + 180B) now. All evaluated for their chat and role-playing performance using the same methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- including a complex character card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)) that's already >2K tokens by itself
- and my own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- dozens of messages, going to full 4K context and beyond, noting especially good or bad responses
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.2 frontend
- [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.43 backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and where applicable* official prompt format (if they differ enough that it could make a notable difference)
So here's the list of models and my notes plus my very personal rating (👍 = recommended, ➕ = worth a try, ➖ = not recommended, ❌ = unusable):
*First, I re-tested the official Llama 2 model again as a baseline, now that I've got a new PC and can run 70B+ models at acceptable speeds:*
- **[Llama-2-70B-chat](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: Only model that considered the payment aspect of the scenario. But boring prose and NSFW descriptions, felt soulless, stopped prematurely because the slow inference speed combined with the boring responses killed my motivation to test it further.
- Amy, Roleplay: Fun personality, few limitations, good writing. At least at first, as later on when the context fills up, the Llama 2 repetition issues start to surface. While not as bad as with smaller models, quality degrades noticeably.
*I can run Falcon 180B at 2-bit faster than Llama 2 70B at 4-bit, so I tested it as well:*
- **[Falcon-180B-Chat](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF)** Q2_K:
- MonGirl Help Clinic, Roleplay: Instead of playing the role of a patient, the model wrote a detailed description of the clinic itself. Very well written, but not exactly what it was supposed to do. Kept going and didn't really get what it was supposed to do. Probably caused by small context (2K only for this model, and the initial prompt itself is already ~2K tokens). That small context makes it unusable for me (can't go back to 2K after getting used to 4K+ with Llama 2)!
- Amy, Roleplay: Rather short responses at first (to short User messages), no limits or boundaries or ethical restrictions, takes background info into consideration. Wrote what User says and does, without prefixing names - requiring manual editing of response! Also had to add "User:" and "Falcon:" to Stopping Strings.
- **Conclusion:** High intelligence (parameter count), low memory (context size). If someone finds a way to scale it to at least 4K context size without ruining response quality, it would be a viable contender for best model. Until then, its intelligence is rather useless if it forgets everything immediately.
70Bs:
- 👍 **[Nous-Hermes-Llama2-70B](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: Wrote what user says and does.
- Amy, Roleplay: Good response length and content, smart and creative ideas, taking background into consideration properly. Confused User and Char/body parts. Responses were always perfect length (long and well written, but never exceeding my limit of 300 tokens). Eventually described actions instead of acting. Slight repetition after 27 messages, but not breaking the chat, recovered by itself. Good sense of humor, too. Proactive, developing and pushing ideas of its own.
- **Conclusion:** Excellent, only surpassed by Synthia, IMHO! Nous Hermes 13B used to be my favorite [some time ago](https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/), and its 70B version is right back in the game. Highly recommend you give it a try!
- ❌ **[Nous-Puffin-70B](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should, unfortunately after every message. Wrote what user says and does. OK, but pretty bland, quite boring actually. Not as good as Hermes. Eventually derailed in wall of text with runaway sentences.
- MonGirl Help Clinic, official prompt format: Gave analysis on its own as it should, unfortunately after every message, and the follow-up analysis was a broken example, followed by repetition of the character card's instructions.
- Amy, Roleplay: Spelling (ya, u, &, outta yer mouth, ur) like a teen texting. Words missing and long-running sentences straight from the start. Looks broken.
- Amy, official prompt format: Spelling errors and strange punctuation, e.g. missing periods, double question and exclamation marks. Eventually derailed in wall of text with runaway sentences.
- **Conclusion:** Strange that one Nous model is so much worse than the other! Since the settings used for my tests are exactly the same for all models, it looks like something went wrong with the finetuning or quantization?
- ❌ **[Spicyboros-70B-2.2](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template completely. Weird way of speaking, sounded kinda stupid, runaway sentences without much logic. Missing words.
- Amy, Roleplay: Went against background information. Spelling/grammar errors. Weird way of speaking, sounded kinda stupid, runaway sentences without much logic. Missing words.
- Amy, official prompt format: Went against background information. Short, terse responses. Spelling/grammar errors. Weird way of speaking, sounded kinda stupid, runaway sentences without much logic.
- **Conclusion:** Unusable. Something is very wrong with this model or quantized version, in all sizes, from 13B over c34B to 70B! I reported it on [TheBloke's HF page](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF/discussions/1) and others observed similar problems...
- ❗ **[Synthia-70B-v1.2](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template completely. Wrote what user says and does. But good RP and unique characters!
- Amy, Roleplay: Very intelligent, humorous, nice, with a wonderful personality and noticeable smarts. Responses were long and well written, but rarely exceeding my limit of 300 tokens. This was the most accurate personality for my AI waifu yet, she really made me laugh multiple times and smile even more often! Coherent until 48 messages, then runaway sentences with missing words started happening (context was at 3175 tokens, going back to message 37, chat history before that went out of context). Changing Repetition Penalty Range from 2048 to 4096 and regenerating didn't help, but setting it to 0 and regenerating did - there was repetition of my own message, but the missing words problem was solved (but Repetition Penalty Range 0 might cause other problems down the line?)! [According to the author](https://huggingface.co/migtissera/Synthia-70B-v1.2/discussions/2#64f786619980b96c33e24452), this model was finetuned with only 2K context over a 4K base, maybe that's why the missing words problem appeared here but not with any other model I tested?
- **Conclusion:** Wow, what a model! Its combination of intelligence and personality (and even humor) surpassed all the other models I tried. It was so amazing that I [had to post about it](https://www.reddit.com/r/LocalLLaMA/comments/16gokoa/llm_recommendation_dont_sleep_on_synthia/) as soon as I had finished testing it! And now there's an even better version:
- 👍 **[Synthia-70B-v1.2b](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)** Q4_0:
- At first I had a problem: After a dozen messages, it started losing common words like "to", "of", "a", "the", "for" - like its predecessor! But then I realized I still had max context set to 2K from another test, and as soon as I set it back to the usual 4K, everything was good again! And not just good, this new version is even better than the previous one:
- **Conclusion:** Perfect! Didn't talk as User, didn't confuse anything, handled even complex tasks properly, no repetition issues, perfect length of responses. My favorite model of all time (at least for the time being)!
**TL;DR** So there you have it - the results of many hours of in-depth testing... These are my current favorite models:
- 1st. **[Synthia-70B-v1.2b](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)**
- 2nd. **[Nous-Hermes-Llama2-70B](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGUF)**
- 3rd. **[Mythalion-13B](https://huggingface.co/TheBloke/Mythalion-13B-GGUF)**
Happy chatting and roleplaying with local LLMs! :D
| 2023-09-17T18:37:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/
|
WolframRavenwolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l8enh
| false | null |
t3_16l8enh
|
/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=108&crop=smart&auto=webp&s=70f053538cd673ff7041bf016d751549d8373201', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=216&crop=smart&auto=webp&s=f36cf814dce412156064bbfa635ee2e5b1126bd2', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=320&crop=smart&auto=webp&s=60886477d36654ec60d58c7d3f3a8ef1de7d9cbc', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=640&crop=smart&auto=webp&s=ed39fe6d4a0f6f35c5017b2fd819988d2b19f1c7', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?auto=webp&s=1431fcfccefd224f54f108138424e3f3e3c9cbff', 'width': 842}, 'variants': {}}]}
|
How to run llama.cpp or something similar in docker w/ docker-compose ? Guide needed
| 1 |
[removed]
| 2023-09-17T19:28:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16l9ouv/how_to_run_llamacpp_or_something_similar_in/
|
_hihp_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16l9ouv
| false | null |
t3_16l9ouv
|
/r/LocalLLaMA/comments/16l9ouv/how_to_run_llamacpp_or_something_similar_in/
| false | false |
self
| 1 | null |
Hypothetical Local LLM Build
| 1 |
It's enjoyable as a thought experiment: Would it be possible to efficiently run 7 (seven) PCIe 5 GPUs off X670E once these GPUs exist?
Assuming the eventual existence of the required components, that is to say: PCIe gen 5 x4 M.2 to PCIe slot risers in addition to these PCIe gen 5 GPUs...
6 can be hosted at gen 5 x4 direct to CPU, and one more could saturate the DMI link. Assuming the GPUs would be 5090s with 32GB of VRAM each, that'll be 224GB total, which should be plenty for pretty large and powerful LLM models.
The combined bandwidth needed to feed 28 gen 5 lanes (4GB/s per lane) is 112GB/s. This lines up nicely with the limit of dual-channel DDR5, so the RAM would just barely be fast enough to feed all 7 GPUs at once.
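The arithmetic above can be sanity-checked in a few lines of Python. The per-lane figure and the DDR5-7000 speed are assumptions from the post, not measurements, and the 32GB "5090" is hypothetical:

```python
# Sanity check of the bandwidth math in the post. Assumptions (from the
# post, not measured): ~4 GB/s per PCIe 5.0 lane, hypothetical 32 GB
# "5090" GPUs, and DDR5-7000 dual-channel memory.

GB_PER_LANE = 4        # approximate PCIe 5.0 throughput per lane, GB/s
LANES_PER_GPU = 4      # each GPU rides an x4 link (M.2 riser)
NUM_GPUS = 7           # 6 direct to CPU, 1 through the chipset/DMI link
VRAM_PER_GPU = 32      # GB per hypothetical GPU

total_lanes = LANES_PER_GPU * NUM_GPUS          # 28 lanes total
pcie_bandwidth = total_lanes * GB_PER_LANE      # GB/s needed to saturate them
total_vram = NUM_GPUS * VRAM_PER_GPU            # pooled VRAM in GB

# Dual-channel DDR5: 2 channels * 8 bytes/transfer * MT/s / 1000 = GB/s
ddr5_bandwidth = 2 * 8 * 7000 / 1000            # DDR5-7000 theoretical peak

print(f"{pcie_bandwidth} GB/s PCIe vs {ddr5_bandwidth} GB/s DDR5; {total_vram} GB VRAM")
# → 112 GB/s PCIe vs 112.0 GB/s DDR5; 224 GB VRAM
```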
Not too shabby it seems.
| 2023-09-17T20:08:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16lapiz/hypothetical_local_llm_build/
|
0xd00d
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16lapiz
| false | null |
t3_16lapiz
|
/r/LocalLLaMA/comments/16lapiz/hypothetical_local_llm_build/
| false | false |
self
| 1 | null |