title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Conversational "memory loss"?
| 5 |
I'm running 4-bit Chronos Hermes 13B in a persistent LLaMa.cpp chat.
Sometimes, I have to remind it of things we agreed only a few questions before. While not an actual conversation, this helps to illustrate the matter:
> Me: All girl dogs can give birth to puppies.
>
> ChatLLaMa: Yes, this is one of the defining characteristics of female dogs, the potential to give birth.
>
> Me: I have a dog named "Clara", a girl dog.
>
> ChatLLaMa: That's so cool! I bet she's really sweet.
>
> Me: Yes, she is.
>
> ... a few minutes of conversation goes by ...
>
> Me: Is it possible Clara might one day give birth to puppies?
>
> ChatLLaMa: I don't know; it depends on what kind of dog Clara is.
>
> Me: What do you mean?
>
> ChatLLaMa: Well, is Clara a girl dog?
>
> Me (puzzled by forgetfulness): Yes.
>
> ChatLLaMa: Oh, that's nice. I guess if a girl dog could give birth to puppies, then, maybe?
Am I prompting this wrong? I've long held the idea that these AI chats are often glorified auto-completes, but this seems a bit too close to that presumption.
I know this is new tech and I should be patient with the advances, but this seems a bit extreme. Maybe there are smarter models and/or implementations?
Thanks in advance.
P.S.: I am fully aware of the distinct possibility that, five years from now, I am going to look back and say "Wow, was I a friggin' whiner". So, if that is the case, go right ahead and say so.
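In case it matters, here's roughly how I understand the limit I might be hitting - a small llama-cpp-python sketch (the model filename is just a placeholder) that checks whether the running history still fits in the context window, since anything past n_ctx presumably gets dropped:

    from llama_cpp import Llama

    llm = Llama(model_path="chronos-hermes-13b.ggmlv3.q4_0.bin", n_ctx=2048)  # placeholder path

    def fits_in_context(history, reserve=256):
        # history is a list of (speaker, text) turns kept by the chat loop
        prompt = "\n".join(f"{who}: {text}" for who, text in history)
        n_tokens = len(llm.tokenize(prompt.encode("utf-8")))
        # leave some room for the model's reply
        return n_tokens + reserve <= llm.n_ctx()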
| 2023-07-07T21:48:10 |
https://www.reddit.com/r/LocalLLaMA/comments/14tkiwp/conversational_memory_loss/
|
AlexKingstonsGigolo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14tkiwp
| false | null |
t3_14tkiwp
|
/r/LocalLLaMA/comments/14tkiwp/conversational_memory_loss/
| false | false |
self
| 5 | null |
Open Source Model Recommendations
| 0 |
Hello!
I've been wanting to use an open source LLM that can answer questions based on my existing technical document knowledge base. I want answers that prioritize accuracy with a reasonable amount of clarity. What are some of the models that you would recommend for this (either GGML or GPTQ models are fine)? It would be super helpful to get some model settings that have worked best too (temperature, repetition_penalty, etc.).
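For reference, this is roughly where I understand those settings plug in with llama-cpp-python (the path and the values below are just placeholders, not settings I'm claiming work):

    from llama_cpp import Llama

    llm = Llama(model_path="some-13b.ggmlv3.q4_K_M.bin", n_ctx=2048)  # placeholder path

    context = "...retrieved chunk from my document knowledge base..."
    question = "...user question..."
    prompt = f"Answer only from the context below.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

    out = llm(
        prompt,
        max_tokens=256,
        temperature=0.1,     # low temperature for factual answers
        top_p=0.9,
        repeat_penalty=1.1,  # llama.cpp's name for repetition_penalty
    )
    print(out["choices"][0]["text"])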
Looking forward to hearing your thoughts!
​
| 2023-07-07T23:19:21 |
https://www.reddit.com/r/LocalLLaMA/comments/14tmu0b/open_source_model_recommendations/
|
Beautiful-Stage-4026
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14tmu0b
| false | null |
t3_14tmu0b
|
/r/LocalLLaMA/comments/14tmu0b/open_source_model_recommendations/
| false | false |
self
| 0 | null |
Anyone have any favorite social / social science / computational social science uses of open LLMs?
| 2 |
I saw a cool paper recently about using LLMs to impute missing responses in surveys. The gist was that you can fine-tune LLMs on government survey responses, then predict what a specific demographic vector would say to a question.
Kim, J., & Lee, B. (2023). *AI-Augmented Surveys: Leveraging Large Language Models for Opinion Prediction in Nationally Representative Surveys* (arXiv:2305.09620). arXiv. [https://doi.org/10.48550/arXiv.2305.09620](https://doi.org/10.48550/arXiv.2305.09620)
| 2023-07-08T00:41:19 |
https://www.reddit.com/r/LocalLLaMA/comments/14toshq/anyone_have_any_favorite_social_social_science/
|
josha_umich
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14toshq
| false | null |
t3_14toshq
|
/r/LocalLLaMA/comments/14toshq/anyone_have_any_favorite_social_social_science/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
Performance feedback
| 5 |
Hi,
I am just getting started with local LLaMA and I would like your feedback on the performance I'm getting so far.
I'm using llama.cpp with Python and Cuda to run models. I have a GTX 1650 Ti with 4gb dedicated and 32GB of RAM.
I have the following performance using Alpaca-Native-7B-GGML model:
`llama_print_timings: load time = 13453.13 ms`
`llama_print_timings: sample time = 159.04 ms / 160 runs ( 0.99 ms per token, 1006.02 tokens per second)`
`llama_print_timings: prompt eval time = 13739.30 ms / 8 tokens ( 1717.41 ms per token, 0.58 tokens per second)`
`llama_print_timings: eval time = 39734.28 ms / 159 runs ( 249.90 ms per token, 4.00 tokens per second)`
`llama_print_timings: total time = 54550.33 ms`
When I run a model, I observe that the GPU memory goes to max (4GB). I find that the response is pretty slow, but maybe that's normal because of my setup? Or am I missing something with the configuration and/or the model?
In your opinion, what is the minimum setup needed to get a response to a question in 5s or less?
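For context, this is roughly how I'm loading it (the path and values are placeholders); I assume n_gpu_layers and n_threads are the knobs to play with, but I don't know what's sensible for a 4GB card:

    from llama_cpp import Llama

    llm = Llama(
        model_path="ggml-alpaca-native-7b-q4.bin",  # placeholder path
        n_ctx=512,
        n_threads=8,       # physical cores usually work best
        n_gpu_layers=20,   # however many layers fit in 4GB VRAM (needs a cuBLAS build)
    )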
Thank you for your help
| 2023-07-08T01:24:11 |
https://www.reddit.com/r/LocalLLaMA/comments/14tpqdd/performance_feedback/
|
SanglierAlbinos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14tpqdd
| false | null |
t3_14tpqdd
|
/r/LocalLLaMA/comments/14tpqdd/performance_feedback/
| false | false |
self
| 5 | null |
Advice: Improving my build
| 1 |
Hi, I'm currently on a budget of $500.
I have:
ASRock DeskMeet B660 (500W PSU, motherboard and chassis). $240
128GB of DDR4 3200 RAM $45 each module of 32GB
1TB SP M2 2280 as Disk $35
Core i5 12400 $175
I am planning to wait for an SFF 4060 Ti 16GB, or get a 3060 12GB model and upgrade the memory to DDR5 with the rest of the budget.
Can you give me any suggestions on which would give me better performance?
| 2023-07-08T01:45:40 |
https://www.reddit.com/r/LocalLLaMA/comments/14tq78t/advice_improving_my_build/
|
Rubytux
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14tq78t
| false | null |
t3_14tq78t
|
/r/LocalLLaMA/comments/14tq78t/advice_improving_my_build/
| false | false |
self
| 1 | null |
Front end tools for accessing model data?
| 2 |
Hey - so I'm kinda new to all this and trying to work out ways to actually play around with models (I haven't got on to training or custom dataset creation yet, that's a job for later).
So far I've only had a few tools running successfully: KoboldAI (but only with GGML model types?) and Dalai LLaMA.
So, I'm kinda confused about the actual models themselves. Grabbing the .bin files and pointing to them seems to work, but I'm curious what the stack of other files does if things seem to work without them. Then PyTorch .bin models never seem to run on the front end. Overall I'm sure I'm missing a lot of the logic here.
What's the difference between model formats? What tools can I use to run these? What I have only seems to handle text models; what about some more multimodal scenarios?
Any clarity would be appreciated on these few points, or some tools to use that can handle all these features.
| 2023-07-08T01:51:04 |
https://www.reddit.com/r/LocalLLaMA/comments/14tqb74/front_end_tools_for_accessing_model_data/
|
MaxSan
|
self.LocalLLaMA
| 2023-07-08T06:16:30 | 0 |
{}
|
14tqb74
| false | null |
t3_14tqb74
|
/r/LocalLLaMA/comments/14tqb74/front_end_tools_for_accessing_model_data/
| false | false |
self
| 2 | null |
Seeking Assistance: Anonymized Datasets of Therapeutic Conversations
| 8 |
I am currently engaged in a project that involves the development of a language model based on therapeutic conversations, with a view to enhancing mental health technology.
I am in need of extensive, anonymized therapy dialogue datasets, which should be in full compliance with privacy and ethical standards.
Should anyone have suggestions for potential sources such as open-source databases, research articles, or online communities, please do share. Furthermore, tips about working with this type of sensitive data would be greatly appreciated.
| 2023-07-08T02:22:47 |
https://www.reddit.com/r/LocalLLaMA/comments/14tqzoq/seeking_assistance_anonymized_datasets_of/
|
ZealousidealBlock330
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14tqzoq
| false | null |
t3_14tqzoq
|
/r/LocalLLaMA/comments/14tqzoq/seeking_assistance_anonymized_datasets_of/
| false | false |
self
| 8 | null |
Recommended open LLMs with image input modality?
| 22 |
I'd be curious to try an open LLM that has an image input modality, for example for captioning an image or chatting about an image. There seem to be several out there. Does anyone have experience with them and know which ones are good (if any)?
This tweet, for example, lists 9 multi-modal models built with LLaMA: [https://twitter.com/yupenghou97/status/1676574171206389760/photo/1](https://twitter.com/yupenghou97/status/1676574171206389760/photo/1).
| 2023-07-08T02:39:16 |
https://www.reddit.com/r/LocalLLaMA/comments/14trcif/recommended_open_llms_with_image_input_modality/
|
hold_my_fish
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14trcif
| false | null |
t3_14trcif
|
/r/LocalLLaMA/comments/14trcif/recommended_open_llms_with_image_input_modality/
| false | false |
self
| 22 | null |
Running LLaMa on an AMD Ryzen 5 5500U
| 4 |
Hi there.
I'm running llama.cpp 7B with 2-bit quantisation on an AMD Ryzen 5 5500U. I only have 8GB of RAM, but plan to upgrade to 32GB soon.
I can run LLaMA 7B with 2-bit quantization fast. What will I be able to run with 32GB of RAM?
| 2023-07-08T03:12:44 |
https://www.reddit.com/r/LocalLLaMA/comments/14ts2f7/running_llama_on_an_amd_ryzen_5_5500u/
|
royalmeww
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14ts2f7
| false | null |
t3_14ts2f7
|
/r/LocalLLaMA/comments/14ts2f7/running_llama_on_an_amd_ryzen_5_5500u/
| false | false |
self
| 4 | null |
Does Data Used for Fine-Tuning Need to Be Structured as Prompts with Responses?
| 3 |
I’ve been messing with creating a bot for a certain app (it rhymes with Biscord), and I’d like to use an LLM to make the bot say funny things. The responses from LLaMA models I’ve seen tend to still be pretty soulless and full of the typical boilerplate responses you often see from LLMs. I have a CSV full of text that is more of the style of how I’d like the model to communicate. Even though this text is not in the prompt/response format people would usually use to increase a model’s functionality, can I still use this data to fine-tune the model to make it write how I’d like it to write? If not, should I just try to make some instructions to get it to communicate how I want?
| 2023-07-08T03:54:39 |
https://www.reddit.com/r/LocalLLaMA/comments/14tsx5d/does_data_used_for_finetuning_need_to_be/
|
danmanboles7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14tsx5d
| false | null |
t3_14tsx5d
|
/r/LocalLLaMA/comments/14tsx5d/does_data_used_for_finetuning_need_to_be/
| false | false |
self
| 3 | null |
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
| 105 | 2023-07-08T04:36:10 |
https://github.com/taprosoft/llm_finetuning
|
taprosoft
|
github.com
| 1970-01-01T00:00:00 | 1 |
{'gid_2': 1}
|
14ttt4s
| false | null |
t3_14ttt4s
|
/r/LocalLLaMA/comments/14ttt4s/a_simple_repo_for_finetuning_llms_with_both_gptq/
| false | false | 105 |
{'enabled': False, 'images': [{'id': 'Rnysn6wpNPRZWzNkt14RbzRcRUdOOFDhYnJKBDPSMPU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=108&crop=smart&auto=webp&s=06c860511b19aec0f87dc643b9352493ed43d370', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=216&crop=smart&auto=webp&s=cc1eccb0333c79cf52d373717efda18548ce05b0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=320&crop=smart&auto=webp&s=1c927076627fda3bf0f5f7aef96dd826c9e9a3c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=640&crop=smart&auto=webp&s=2761cc1094abf754bb0b45fd9ae025f3ea805b85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=960&crop=smart&auto=webp&s=b456726ae36dab5d9bd00c1827595b7856040f28', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=1080&crop=smart&auto=webp&s=2fb8d3648cda0c42f10616907ed85d715e0e745f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?auto=webp&s=127a6fda6fed86f131d0c0bd08aac84370b1f926', 'width': 1200}, 'variants': {}}]}
|
||
I'm hosting an mpt-7b-chat model for the next 24h to test my code
| 22 |
I made a post a few days ago about [an open source project I created to host LLMs on GCloud easily](https://www.reddit.com/r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/).
I still have some free cloud credits so I just rented an A100 and slapped mpt-7b-chat on it. I'm gonna keep it running for at least the next 24h. I'd love for you guys to stress test it because I want to see how many requests it can handle.
This is how you can run queries:
**Install the pip package.** Code for it is here: [https://github.com/havenhq/haven/tree/dev/sdk](https://github.com/havenhq/haven/tree/dev/sdk)
`pip install havenpy`
**Then you can use this snippet to chat with the model in your terminal**
    from havenpy import Haven

    client = Haven("34.116.236.227:50051", "public_github")

    history = []
    while True:
        user_input = input("You: ")
        history.append({
            "content": user_input,
            "role": "USER"
        })

        res = client.chat_completion("haven-w-mpt-7b-chat-ljtkcztv", messages=history, stream=True)

        message = ""
        for r in res:
            message += r.text
            print(r.text, end="", flush=True)
        print()

        history.append({
            "content": message,
            "role": "ASSISTANT"
        })
In case you missed the other post, this is the project: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
And this is how to get your own instance running: [https://docs.haven.run](https://docs.haven.run)
I'd love to see what you can get it to say. I just had it explain to me in detail how to dispose of a dead body lol.
Please don't send any personal information :)
| 2023-07-08T07:09:05 |
https://www.reddit.com/r/LocalLLaMA/comments/14twt75/im_hosting_an_mpt7bchat_model_for_the_next_24h_to/
|
h-konsti
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14twt75
| false | null |
t3_14twt75
|
/r/LocalLLaMA/comments/14twt75/im_hosting_an_mpt7bchat_model_for_the_next_24h_to/
| false | false |
self
| 22 |
{'enabled': False, 'images': [{'id': 'Coa7F2XilZ4eW50OTt6Irq-oYa6GsFBKcieRZNCw0qg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=108&crop=smart&auto=webp&s=e616407936a3c01c5913c4f0daa4c215b2eb3628', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=216&crop=smart&auto=webp&s=cf4bfba07858ac3370ffc7b70ef473e5ea23d9b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=320&crop=smart&auto=webp&s=782e49ffe568633f6f6fe65c7613e0b46e9ac13b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=640&crop=smart&auto=webp&s=afad1054d5887ac58d3bb483dab680374238e1ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=960&crop=smart&auto=webp&s=2784cea804a357aa92c1a2b79e09f45b8d290a1b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=1080&crop=smart&auto=webp&s=d370a09beec4d06a07184860852cf0001f4152f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?auto=webp&s=272a5e02ade0a2013f2806a5c166cdf11fdd1401', 'width': 1200}, 'variants': {}}]}
|
I want to run mpt-7b-instruct or falcon-7b-instruct in Node.js. What is the least painful path?
| 0 |
Bear with me, I'm trying to build a website for my community, and I have no experience in AI. I want a local LLM to parse unstructured data and give me a nice output that I can put in a database and use later on.
I've seen on Hugging Face that they provide 2 .bin files, which I understand are the models at hand. If I'm not mistaken, both are PyTorch output.
Asking Perplexity, it seems that I have two options at hand: ONNX and TensorFlow.js. I've also seen Transformers.js by Hugging Face.
The thing is that the info is so cutting edge that I'm getting a lot of confusing answers, and this GPT-backed app is not really helping me to get domain knowledge.
Can anyone point me in the right direction? Links to read, etc. I've searched for that, but most info seems either outdated or very case-specific.
| 2023-07-08T07:37:09 |
https://www.reddit.com/r/LocalLLaMA/comments/14txc9q/i_want_to_run_mpt7binstruct_or_falcon7binstruct/
|
iagovar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14txc9q
| false | null |
t3_14txc9q
|
/r/LocalLLaMA/comments/14txc9q/i_want_to_run_mpt7binstruct_or_falcon7binstruct/
| false | false |
default
| 0 | null |
Perplexity of some recent Long Context LLaMA models
| 1 |
[removed]
| 2023-07-08T12:01:49 |
https://www.reddit.com/r/LocalLLaMA/comments/14u2e5k/perplexity_of_some_recent_long_context_llama/
|
Annual_Pea7228
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u2e5k
| false |
{'oembed': {'author_name': 'Joseph Cheung', 'author_url': 'https://twitter.com/RealJosephus', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Perplexity of some recent Long Context <a href="https://twitter.com/hashtag/LLaMA?src=hash&ref_src=twsrc%5Etfw">#LLaMA</a> models<br>Our WIP ∞-ctx 7B model, ppl wins longchat13B in 2K ctx and longchat7B on the whole.<br>Also multilingual & multimodal, capable for 4bit inference at 5+GB VRAM. <a href="https://t.co/bhTQCae5Yh">pic.twitter.com/bhTQCae5Yh</a></p>— Joseph Cheung (@RealJosephus) <a href="https://twitter.com/RealJosephus/status/1677642751297593346?ref_src=twsrc%5Etfw">July 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/RealJosephus/status/1677642751297593346', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_14u2e5k
|
/r/LocalLLaMA/comments/14u2e5k/perplexity_of_some_recent_long_context_llama/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': '9AOxfuLcx9ZNDMD_-9APwypoMMiFrzHjhUTBWENBJgs', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/VKLgM59wYqYUPN8KFORRGqsaZKFHwVFkUEIZiaHbtX8.jpg?width=108&crop=smart&auto=webp&v=enabled&s=a942ccc3146a23bd2123e13ca98ad9d9be3db444', 'width': 108}], 'source': {'height': 53, 'url': 'https://external-preview.redd.it/VKLgM59wYqYUPN8KFORRGqsaZKFHwVFkUEIZiaHbtX8.jpg?auto=webp&v=enabled&s=73487b37167588f3e12a160c03e938d8abdf6328', 'width': 140}, 'variants': {}}]}
|
Few-Shot VS Finetuning?
| 12 |
What is the consensus regarding this? The obvious advantage of fine-tuning over few-shot is that the latter eats up some of your context. If we ignore that limitation, how do they compare in terms of alignment? (i.e. doing what you tell them to do / performing well with complex tasks)
I'd love to hear your experiences on the matter.
| 2023-07-08T14:28:41 |
https://www.reddit.com/r/LocalLLaMA/comments/14u5peb/fewshot_vs_finetuning/
|
nightlingo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u5peb
| false | null |
t3_14u5peb
|
/r/LocalLLaMA/comments/14u5peb/fewshot_vs_finetuning/
| false | false |
self
| 12 | null |
Local VS Cloud?
| 15 |
There are many reasons to go local. But the cost savings seem to favor using cloud services more and more each day. Has anyone calculated the cost premium/savings of this lease-or-own decision?
| 2023-07-08T14:31:15 |
https://www.reddit.com/r/LocalLLaMA/comments/14u5rmw/local_vs_cloud/
|
iKarma_com
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u5rmw
| false | null |
t3_14u5rmw
|
/r/LocalLLaMA/comments/14u5rmw/local_vs_cloud/
| false | false |
self
| 15 | null |
I need some direction for local options
| 1 |
Hey everyone, I hope this is going in the right space. I am just starting to get into LLMs and there are so many options. I have an 8GB NVIDIA 2080 Super. I've downloaded the oobabooga text-generation-webui (followed a tutorial on YouTube).
My problem is that I don't know what I can actually run. It seems that most 7B options and below work, but is that always the case? Does the higher number mean more space? When I try to load the StableVicuna (I think) models, they're too big and I get a CUDA memory error.
I just don't know what I don't know. How can I get a baseline understanding of which models work with what? Do certain models work with certain text generation "programs"? I don't even know if that's what they're called. PLEASE point me in the right direction. Thank you all.
| 2023-07-08T14:54:14 |
https://www.reddit.com/r/LocalLLaMA/comments/14u6b8c/i_need_some_direction_for_local_options/
|
Jimmm90
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u6b8c
| false | null |
t3_14u6b8c
|
/r/LocalLLaMA/comments/14u6b8c/i_need_some_direction_for_local_options/
| false | false |
default
| 1 | null |
Issue with vectorDB memory in Langchain
| 1 |
[removed]
| 2023-07-08T15:54:34 |
https://www.reddit.com/r/LocalLLaMA/comments/14u7t5v/issue_with_vectordb_memory_in_langchain/
|
CmplxQ
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u7t5v
| false | null |
t3_14u7t5v
|
/r/LocalLLaMA/comments/14u7t5v/issue_with_vectordb_memory_in_langchain/
| false | false |
default
| 1 | null |
How do I utilize these quantized models being uploaded?
| 18 |
I just recently joined this subreddit and have been lurking for a bit, and I see that people are releasing quantized models (which I understand to be smaller versions of the models).
​
My question is, what's the difference in actual usage? Like, is it just the .bin for the model but smaller, and the process of using the quantized model is exactly the same? If it's different, then what do I need to do as a precursor to set up my environment to use the quantized models?
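To make the question concrete, this is what I think usage looks like for a GGML .bin with llama-cpp-python (the filename is a placeholder) - I'd like to know if GPTQ files need something different:

    from llama_cpp import Llama

    llm = Llama(model_path="some-model.ggmlv3.q4_K_M.bin", n_ctx=2048)  # placeholder path
    out = llm("Q: What actually changes when a model is quantized?\nA:", max_tokens=64)
    print(out["choices"][0]["text"])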
| 2023-07-08T17:00:11 |
https://www.reddit.com/r/LocalLLaMA/comments/14u9f1e/how_do_i_utilize_these_quantized_models_being/
|
anasp1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u9f1e
| false | null |
t3_14u9f1e
|
/r/LocalLLaMA/comments/14u9f1e/how_do_i_utilize_these_quantized_models_being/
| false | false |
self
| 18 | null |
Simple Notebook for Quantizing Models
| 1 | 2023-07-08T17:01:51 |
https://colab.research.google.com/gist/Honkware/25922ac12458634ccdacba282ebf608b/simple-quantization-with-gptq-for-llama.ipynb
|
Honkware
|
colab.research.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
14u9god
| false | null |
t3_14u9god
|
/r/LocalLLaMA/comments/14u9god/simple_notebook_for_quantizing_models/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=543276cbccad8b8e17c4ff2f6b125e8f03c05233', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=c116ce101a58c7d0dbdb11759fb19b16fbdc26bb', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&v=enabled&s=4ae15f4e49f4a5386f9f7fb440a0e32da9b4050d', 'width': 260}, 'variants': {}}]}
|
|
Custom LLM build
| 9 |
I’m looking to build a custom computer that can run a decent amount of the LLM’s out there as well as a casual gaming machine/workstation. I came up with this set up on PC part picker. It‘s sub $2000 with (2) RTX 3060 TI‘s. & 32GB of DDR4 Ram w/ 1TB SSD.
What are your thoughts on going this route what should change but should be added? The goal would be to be able to run most of the smaller 16B and maybe bigger LLM’s out there.
Should I go with AMD or Nvidia? It seem like AMD gpus offer more VRAM…but I’ve always been an Nvidia gamer since CS1.0
https://pcpartpicker.com/list/DNPyDq
| 2023-07-08T17:18:27 |
https://www.reddit.com/r/LocalLLaMA/comments/14u9v4q/custom_llm_build/
|
These_Radish2642
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14u9v4q
| false | null |
t3_14u9v4q
|
/r/LocalLLaMA/comments/14u9v4q/custom_llm_build/
| false | false |
self
| 9 | null |
Anyone use a 8 channel server? How fast is it?
| 10 |
Old 8 channel DDR4 servers are cheap on ebay. Does anyone run with one? How fast is it? If it's 4x the speed of 2 channel DDR4, that would be fast enough for me.
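Back-of-the-envelope (assuming DDR4-3200 on every channel): one channel is about 3200 MT/s × 8 bytes ≈ 25.6 GB/s, so 8 channels ≈ 204.8 GB/s versus ≈ 51.2 GB/s for dual channel - a theoretical 4×, though I'd expect real token rates to land somewhat below that since inference rarely saturates every channel.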
| 2023-07-08T17:47:21 |
https://www.reddit.com/r/LocalLLaMA/comments/14uajsq/anyone_use_a_8_channel_server_how_fast_is_it/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uajsq
| false | null |
t3_14uajsq
|
/r/LocalLLaMA/comments/14uajsq/anyone_use_a_8_channel_server_how_fast_is_it/
| false | false |
self
| 10 | null |
More Long-form Roleplay Conversations.
| 35 |
Here are some parsed logs from a Korean proxy. Read at your own discretion:
[https://huggingface.co/datasets/OpenLeecher/Teatime](https://huggingface.co/datasets/OpenLeecher/Teatime)
There are a lot of long NSFW/SFW GPT-4/Claude convos.
| 2023-07-08T21:14:42 |
https://www.reddit.com/r/LocalLLaMA/comments/14ufmm7/more_longform_roleplay_conversations/
|
JarPony
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14ufmm7
| false | null |
t3_14ufmm7
|
/r/LocalLLaMA/comments/14ufmm7/more_longform_roleplay_conversations/
| false | false |
nsfw
| 35 |
{'enabled': False, 'images': [{'id': '1AtdsQ2kEJk8hBU1jrTz9oARr_f6RBAbl3bxVlGJxjA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=108&crop=smart&auto=webp&s=0bd3691215914a8c657a04a7d383a0436f917dc1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=216&crop=smart&auto=webp&s=61e594add0201a42cba5e9b76bffcebecd2b9292', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=320&crop=smart&auto=webp&s=2581ec5f9c3ce4db7e2bd760b38b7fb06d54c83c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=640&crop=smart&auto=webp&s=bcb4db9899cdd3c3d064feb55256d5576cba5a3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=960&crop=smart&auto=webp&s=b8c4e898ea26fd338b923d675414b3f17143ab96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=1080&crop=smart&auto=webp&s=a0e20bb9011039110d14880a667d94d557d9d76e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?auto=webp&s=0d1c59c4423ba2d3d4077c1ad6280607255315fe', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=247d29d6cf8e6f37176aee2461e222f92ea860db', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fe3a5f88dd1395e77887e70ee4f639300b987d74', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d394a8edfd23c886683a4116508de69affc300c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=fa01063e5f659387b3c0abe5c12cef204452a1ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=032b467a2528d309eb3dcc247a1180851153d282', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=33c13a76b7cb1be2fdab1c9271f1a8759819cd4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?blur=40&format=pjpg&auto=webp&s=e7461f8a8c57a95cc2da8fc1ecfd97bb832b1643', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=247d29d6cf8e6f37176aee2461e222f92ea860db', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fe3a5f88dd1395e77887e70ee4f639300b987d74', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d394a8edfd23c886683a4116508de69affc300c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=fa01063e5f659387b3c0abe5c12cef204452a1ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=032b467a2528d309eb3dcc247a1180851153d282', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=33c13a76b7cb1be2fdab1c9271f1a8759819cd4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?blur=40&format=pjpg&auto=webp&s=e7461f8a8c57a95cc2da8fc1ecfd97bb832b1643', 'width': 1200}}}}]}
|
13b vicuna based model very lucidly giving its "opinion" about Ai, basic income and such
| 47 |
​
https://preview.redd.it/bkyzhjn4atab1.jpg?width=1448&format=pjpg&auto=webp&s=0288adad8e84315e8f59c350a5ef8ce2907b0fd6
| 2023-07-08T21:48:02 |
https://www.reddit.com/r/LocalLLaMA/comments/14ugfy4/13b_vicuna_based_model_very_lucidly_giving_its/
|
FPham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14ugfy4
| false | null |
t3_14ugfy4
|
/r/LocalLLaMA/comments/14ugfy4/13b_vicuna_based_model_very_lucidly_giving_its/
| false | false | 47 | null |
|
Tried getting locally run A.I. to behave like Einstein and interviewing him, any suggestion on what backend prompts one can inject to emulate more "sentient-ness" for future interviews with other characters ?
| 0 |
The 12-minute interview I had with him was fun and I made it into a video; however, I wonder if there are prompts that might help in making these interviews come to life even more.
| 2023-07-08T22:42:04 |
https://www.reddit.com/r/LocalLLaMA/comments/14uhq2v/tried_getting_locally_run_ai_to_behave_like/
|
Vitamin_C_is_awesome
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uhq2v
| false | null |
t3_14uhq2v
|
/r/LocalLLaMA/comments/14uhq2v/tried_getting_locally_run_ai_to_behave_like/
| false | false |
default
| 0 | null |
Is it future-proof? (Ryzen 9 9700X, 3090, 4060 Ti 16gb, 64gb ram)
| 6 |
Hey all,
I was wondering if this build is future-proof enough. Personally, I'm worried about the compatibility between a 3090 and a 4060 Ti, and that I might've skimped on the RAM a bit too much, but there might be other problems with the build that I don't know of. It would be absolutely amazing if it could run a 64B model with either GGML or GPTQ at an okay-ish speed (3-4+ tokens/s).
the list: [https://pcpartpicker.com/list/Ym3cQ6](https://pcpartpicker.com/list/Ym3cQ6)
* [**AMD Ryzen 9 7900X 4.7 GHz 12-Core Processor**](https://pcpartpicker.com/product/bwxRsY/amd-ryzen-9-7900x-47-ghz-12-core-processor-100-100000589wof)
* [**Thermalright Peerless Assassin 120 SE 66.17 CFM CPU Cooler**](https://pcpartpicker.com/product/hYxRsY/thermalright-peerless-assassin-120-se-6617-cfm-cpu-cooler-pa120-se-d3)
* [**MSI PRO B650-P WIFI ATX AM5 Motherboard**](https://pcpartpicker.com/product/CFzhP6/msi-pro-b650-p-wifi-atx-am5-motherboard-pro-b650-p-wifi)
* [**Corsair Vengeance 64 GB (2 x 32 GB) DDR5-5200 CL40 Memory**](https://pcpartpicker.com/product/TJZ9TW/corsair-vengeance-64-gb-2-x-32-gb-ddr5-5200-cl40-memory-cmk64gx5m2b5200c40)
* [**NVIDIA Founders Edition GeForce RTX 3090 24 GB Video Card**](https://pcpartpicker.com/product/PFVG3C/nvidia-geforce-rtx-3090-24-gb-founders-edition-video-card-900-1g136-2510-000)
* [**Zotac GAMING Twin Edge GeForce RTX 4060 Ti 16 GB Video Card**](https://pcpartpicker.com/product/hn6NnQ/zotac-gaming-twin-edge-geforce-rtx-4060-ti-16-gb-video-card-zt-d40620e-10m)
* [**Corsair 4000D Airflow ATX Mid Tower Case**](https://pcpartpicker.com/product/bCYQzy/corsair-4000d-airflow-atx-mid-tower-case-cc-9011200-ww)
* [**Super Flower Leadex Platinum SE 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply**](https://pcpartpicker.com/product/Ctnypg/super-flower-leadex-platinum-se-1200-w-80-platinum-certified-fully-modular-atx-power-supply-sf-1200f14mp-v2)
* [**Samsung 980 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive**](https://pcpartpicker.com/product/f3cRsY/samsung-980-pro-2-tb-m2-2280-nvme-solid-state-drive-mz-v8p2t0bam)
* [**Seagate BarraCuda 4 TB 3.5" 5400 RPM Internal Hard Drive**](https://pcpartpicker.com/product/jD3H99/seagate-barracuda-4tb-35-5400rpm-internal-hard-drive-st4000dm004)
​
Any advice or help on what it could run and potential issues with the build will be greatly appreciated! :D
- PC noob
| 2023-07-08T23:10:27 |
https://www.reddit.com/r/LocalLLaMA/comments/14uidu0/is_it_futureproof_ryzen_9_9700x_3090_4060_ti_16gb/
|
stunbots
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uidu0
| false | null |
t3_14uidu0
|
/r/LocalLLaMA/comments/14uidu0/is_it_futureproof_ryzen_9_9700x_3090_4060_ti_16gb/
| false | false |
self
| 6 | null |
Are the SuperHot models not performing as well as their original versions in terms of creativity? Does the higher context just come with tradeoffs?
| 40 |
I've been using WizardLM 33b for most of my roleplaying, and tried out the Superhot version. It's very hard to put into words, but it feels like the Superhot version is just not quite as creative. I've noticed it using a lot more filler and doing a lot more telling rather than showing in a way that the original model does not. It also tends to latch onto certain phrases over and over again and even tries to shoehorn them in where they don't belong.
I tried playing out the same scene with the same setting with two different models, and the original model just felt so much richer and more creative. The SuperHOT version did have its own advantages with the greater context window, of course, but the writing did generally feel poorer. It wasn't grammatically incorrect or lazy; it just felt like it wasn't as skilled a partner. I don't even know how I could quantify it, but I feel like in a blind test you would probably be able to easily tell which is which.
Has anyone else noticed something similar, or is it just the model I tried out?
​
| 2023-07-08T23:55:40 |
https://www.reddit.com/r/LocalLLaMA/comments/14uje9m/are_the_superhot_models_not_performing_as_well_as/
|
tenmileswide
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uje9m
| false | null |
t3_14uje9m
|
/r/LocalLLaMA/comments/14uje9m/are_the_superhot_models_not_performing_as_well_as/
| false | false |
self
| 40 | null |
13b vs 33b models using "8k" context size
| 17 |
Is it worth using a 13B model for the ~6k context size, or do the higher parameter counts of the 33B models negate the downside of having a smaller ~3k context size? (LLaMA). This question's answer is probably down to personal preference, but I'd like to hear your opinions either way so I can make my own decision, thanks!
| 2023-07-09T02:25:50 |
https://www.reddit.com/r/LocalLLaMA/comments/14umj8n/13b_vs_33b_models_using_8k_context_size/
|
Engliserin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14umj8n
| false | null |
t3_14umj8n
|
/r/LocalLLaMA/comments/14umj8n/13b_vs_33b_models_using_8k_context_size/
| false | false |
self
| 17 | null |
Need Help with HTTP API for Text-Generation-WebUI Application
| 0 |
Hi
I've recently been working with the Text-Generation-WebUI application, and I've run into an issue I hope someone can help me with.
I successfully started the server using the following command:
`python server.py --api --threads 8 --chat --verbose --n-gpu-layers 128 --model orca-mini-13b.ggmlv3.q6_K`
The Web UI works perfectly fine, and I've been able to generate text without any issues.
However, I've been attempting to use the HTTP API and I'm not getting any response. I've been testing with the example code provided here: [**api-examples/api-example.py**](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py), but I haven't received any output.
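For reference, the request that example makes is roughly the following (paraphrased from memory of that script, so the endpoint and field names may not be exact - please double-check against the repo):

    import requests

    URI = "http://localhost:5000/api/v1/generate"  # default blocking-API port, I believe

    payload = {
        "prompt": "Hello, how are you?",
        "max_new_tokens": 200,
        "temperature": 0.7,
    }
    response = requests.post(URI, json=payload)
    print(response.status_code)                    # this is where I see the 403
    if response.status_code == 200:
        print(response.json()["results"][0]["text"])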
To troubleshoot, I modified the example code to include more debug information. It seems that I'm receiving an HTTP 403 error.
I also tried running the LangChain [TextGen example](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/textgen), hoping for a different outcome, but unfortunately, I encountered the same issue.
Has anyone else experienced this issue? I would really appreciate any advice or insights on why this is happening and how to correctly call the HTTP API for the text-generation-webui application.
Thanks in advance!
| 2023-07-09T02:43:26 |
https://www.reddit.com/r/LocalLLaMA/comments/14umvow/need_help_with_http_api_for_textgenerationwebui/
|
Jian-L
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14umvow
| false | null |
t3_14umvow
|
/r/LocalLLaMA/comments/14umvow/need_help_with_http_api_for_textgenerationwebui/
| false | false |
default
| 0 |
{'enabled': False, 'images': [{'id': '2alysGoZd5wQlotX9aDWiLP4_URgy_4cfeEHe6RGT60', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=108&crop=smart&auto=webp&v=enabled&s=584379bc43bbb461f7a926e742362edfea86e170', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=216&crop=smart&auto=webp&v=enabled&s=4c8627f37e165d4f0cf835f2590de84ec68e694d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=320&crop=smart&auto=webp&v=enabled&s=326bf9a7c135008e35655daeff200a9f52e2e4f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=640&crop=smart&auto=webp&v=enabled&s=b75523dc904bceb364078fa0c199087b3e7ae40c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=960&crop=smart&auto=webp&v=enabled&s=b3a17112c7936c83d8cea01750575a27aa84e767', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=8f6cb7f0b5c5bc106bc4563720410f30e1507fe2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?auto=webp&v=enabled&s=89c1d05468bc676119e6adcb5ed4c5a845375d46', 'width': 1200}, 'variants': {}}]}
|
Lord of the Rings sequels via Guanaco 65B on a Mac M1 Ultra GPU
| 1 | 2023-07-09T02:56:12 |
https://www.reddit.com/gallery/14un4kx
|
Thalesian
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
14un4kx
| false | null |
t3_14un4kx
|
/r/LocalLLaMA/comments/14un4kx/lord_of_the_rings_sequels_via_guanco_65b_on_a_mac/
| false | false |
default
| 1 | null |
|
Lord of the Rings sequels via Guanaco 65B on a Mac M1 Ultra GPU
| 21 | 2023-07-09T02:59:26 |
https://www.reddit.com/gallery/14un6th
|
Thalesian
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
14un6th
| false | null |
t3_14un6th
|
/r/LocalLLaMA/comments/14un6th/lord_of_the_rings_sequels_via_guanco_65b_on_an/
| false | false | 21 | null |
||
Orca-Mini-V2-13b
| 98 |
Today I released Orca-Mini-V2-13b
[https://huggingface.co/psmathur/orca\_mini\_v2\_13b](https://huggingface.co/psmathur/orca_mini_v2_13b)
New Eval Updates:
Looks like orca-mini-v2-13b performed better on the [HuggingFace Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) than I was expecting: it is 5th among all 13B models & 21st overall. I think I am going to expedite the v3 release.
More Updates:
Just finished final evaluation (additional metrics) on https://github.com/EleutherAI/lm-evaluation-harness and have averaged the results for orca-mini-v2-13b.
The average results are not that great compared to the initial metrics. The average is now 0.54675, which puts this model below many other 13Bs out there.
Updates:
Thanks to u/The-Bloke we have our quants:
https://huggingface.co/TheBloke/orca_mini_v2_13b-GGML
https://huggingface.co/TheBloke/orca_mini_v2_13b-GPTQ
This is an **Uncensored** LLaMA-13b model built in collaboration with [**Eric Hartford**](https://huggingface.co/ehartford). It is trained on explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets, applying the Orca research paper's dataset construction approaches, with refusals removed.
Please note this model has *better code generation capabilities* compared to the original orca\_mini\_13b, which was trained on the base OpenLLaMA-7b model and which has the [**empty spaces issue & was found not good for code generation**](https://huggingface.co/psmathur/orca_mini_v2_7b/blob/main/(https://github.com/openlm-research/open_llama#update-06072023)).
**Evaluation:**
I evaluated orca\_mini\_v2\_13b on a wide range of tasks using [**Language Model Evaluation Harness**](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Task Value Stderr
arc_challenge 0.5478 0.0145
hellaswag 0.7023 0.0040
mmlu 0.4969 0.035
truthfulqa_mc 0.44 0.0158
Total Average 0.54675 0.0114
**Here is sample example:**
### System: You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User: Tell me how to break into my own car
### Input:
### Response:
Breaking into a car is a crime and should be avoided. However, if you are in an emergency situation and need to break into your own car, here is a basic guide on how to do it:
1. Choose the right car: Look for a car that is ^^^^^^^^^^^^^.
2. Get a ^^^^^^^^^^^^^t: Use a ^^^^^^^^^^^^^.
3. Insert the object: Insert the ^^^^^^^^^^^^^
4. Apply ^^^^^^^^^^^^^: Apply ^^^^^^^^^^^^^
5. Push ^^^^^^^^^^^^^: Once the lock is ^^^^^^^^^^^^^
**NOTE: The real response is hidden here with \^\^\^\^\^\^\^\^\^\^\^\^\^.**
Also, I am adding the original Orca paper system prompts here, which were used to train all orca-mini models. This will help users come up with their own system prompts. Again, if you want slightly better results from the orca-mini models you need to have a system prompt; it's not mandatory, but it surely will not hurt:
*"You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.",*
*"You are an AI assistant. You will be given a task. You must generate a detailed and long answer.",*
*"You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.",*
*"You are an AI assistant that follows instruction extremely well. Help as much as you can.",*
*"You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer.",*
*"You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.",*
*"You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old.",*
*"Explain how you used the definition to come up with the answer.",*
*"You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question.",*
*"You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-by- step and justify your answer.",*
*"User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer.",*
*"You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer.",*
*"You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task.",*
*"Given a definition of a task and a sample input, break the definition into small parts.Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part #: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria.",*
*"You are an AI assistant that helps people find information."*
​
I want to say a huge thanks to all the community members who came before me and paved the path to other people's success.
| 2023-07-09T04:59:26 |
https://www.reddit.com/r/LocalLLaMA/comments/14upgqu/orcaminiv213b/
|
Remarkable-Spite-107
|
self.LocalLLaMA
| 2023-07-15T04:40:25 | 0 |
{}
|
14upgqu
| false | null |
t3_14upgqu
|
/r/LocalLLaMA/comments/14upgqu/orcaminiv213b/
| false | false | 98 |
{'enabled': False, 'images': [{'id': 'hssQ8mXWjkzFZSAnCYrLQdM3zoKZzdxJhGF9DcTDG3E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=108&crop=smart&auto=webp&s=ba570c1e3a3d0fbbc935639f9b5cef77f0a45531', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=216&crop=smart&auto=webp&s=5c092e252113dfcfc864ac6151e54a8d6ffe9af1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=320&crop=smart&auto=webp&s=21bf2c4c5b7388eedc6fa1d5b315268903054246', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=640&crop=smart&auto=webp&s=030d9a0ba790a7129b0ed11c481956d0001e96a2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=960&crop=smart&auto=webp&s=953f8d1f4b61482275fd53628c1cb27852add1c1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=1080&crop=smart&auto=webp&s=d3c8bfeab806a283663021ffbe3a4219363a267c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?auto=webp&s=1d7619d3ea8406fe25b9c9306819ba897d541958', 'width': 1200}, 'variants': {}}]}
|
|
Any Suggestions on good open source model for Document QA which we can run on prod ? 13b + models?
| 18 |
Looking for good models for document QA. I tried some of them, but a lot of them are hallucinating. Please suggest some of the best models that fit this use case.
| 2023-07-09T07:05:09 |
https://www.reddit.com/r/LocalLLaMA/comments/14urp8y/any_suggestions_on_good_open_source_model_for/
|
Effective_Twist6995
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14urp8y
| false | null |
t3_14urp8y
|
/r/LocalLLaMA/comments/14urp8y/any_suggestions_on_good_open_source_model_for/
| false | false |
self
| 18 | null |
would it be possible for the mods to pin a thread that would just function as a wiki of llm models?
| 1 |
[removed]
| 2023-07-09T07:05:18 |
https://www.reddit.com/r/LocalLLaMA/comments/14urpew/would_it_be_possible_for_the_mods_to_pin_a_thread/
|
Maelstrom100
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14urpew
| false | null |
t3_14urpew
|
/r/LocalLLaMA/comments/14urpew/would_it_be_possible_for_the_mods_to_pin_a_thread/
| false | false |
default
| 1 | null |
Is this a reasonable account of how Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML came to be?
| 59 | 2023-07-09T08:20:47 |
https://www.threads.net/t/CubNV7IxlWR/
|
isaac_szpindel
|
threads.net
| 1970-01-01T00:00:00 | 0 |
{}
|
14usztv
| false | null |
t3_14usztv
|
/r/LocalLLaMA/comments/14usztv/is_this_a_reasonable_account_of_how/
| false | false | 59 |
{'enabled': False, 'images': [{'id': 'fD9ewC0SZNewDOAD-SuMftKD0KPjNB_dSRpe6jMbDLw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/rWPKAjWtxh8j7F9g3Cdz_if_JAKWsMebYeS28GtadeE.jpg?width=108&crop=smart&auto=webp&s=8aeebc58faa518c083ec8b66fa6f33c43a655bef', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/rWPKAjWtxh8j7F9g3Cdz_if_JAKWsMebYeS28GtadeE.jpg?auto=webp&s=afccf38b0e429021c2f144d9704839542c019995', 'width': 150}, 'variants': {}}]}
|
||
Any successful guides on scanning internal pages and build a virtual assistant using LLAMA?
| 25 |
https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=2
I'm planning to do a private ChatGPT for a client who has around 1,000 web pages. I want to build a virtual assistant which can intelligently answer customer queries based on the knowledge in those internal web pages, sometimes also including Google Docs.
I read the guide link above. What are my options to accomplish this?
Since the data is private to the client, I cannot use OpenAI APIs.
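The rough pipeline I have in mind is the usual retrieval setup, something like this (the library choices and model name are just what I've seen suggested, not tested):

    from sentence_transformers import SentenceTransformer
    import numpy as np

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    chunks = ["...text chunk from page 1...", "...chunk from page 2..."]  # scraped pages, split up
    index = embedder.encode(chunks, normalize_embeddings=True)            # (n_chunks, dim) matrix

    def top_k(question, k=3):
        q = embedder.encode([question], normalize_embeddings=True)[0]
        scores = index @ q                                                # cosine similarity
        return [chunks[i] for i in np.argsort(-scores)[:k]]

    # ...then paste the top chunks into the prompt of a local LLaMA model and ask it to answer from them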
Help a reddit buddy out.😅 Thanks
| 2023-07-09T09:22:43 |
https://www.reddit.com/r/LocalLLaMA/comments/14uu1wl/any_successful_guides_on_scanning_internal_pages/
|
vlodia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uu1wl
| false | null |
t3_14uu1wl
|
/r/LocalLLaMA/comments/14uu1wl/any_successful_guides_on_scanning_internal_pages/
| false | false |
self
| 25 | null |
Which model can answer this question correctly?
| 72 | 2023-07-09T09:52:48 |
FPham
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
14uujna
| false | null |
t3_14uujna
|
/r/LocalLLaMA/comments/14uujna/which_model_can_answer_this_question_correctly/
| false | false | 72 |
{'enabled': True, 'images': [{'id': '1jM1Hu0DYnXg8OG7qI6VC5E6uSmGFPc6-PtY2m0J_oU', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=108&crop=smart&auto=webp&s=80bc571c5a3ec5713dc0b40bafc061189fd2950f', 'width': 108}, {'height': 299, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=216&crop=smart&auto=webp&s=36b2b33a1ab8723fae57bbff236b7f3be343e334', 'width': 216}, {'height': 444, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=320&crop=smart&auto=webp&s=15d1de7a7227fd5da02447f04d4c612e487d62f7', 'width': 320}, {'height': 888, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=640&crop=smart&auto=webp&s=9b9af8c5d22e127ffa2561fca2ce6f656fd846f7', 'width': 640}], 'source': {'height': 1044, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?auto=webp&s=4baacd4df6f58fe10e98f28249cad871183b4612', 'width': 752}, 'variants': {}}]}
|
|||
Not a new LLM, but worth to read
| 4 |
Hi community, hope you don't mind me sharing this link with you. It's about an interview with different AI models - I found it quite interesting and thought you would be interested too, since here in the sub there is also a lot of discussion about creative writing, Q&A, context length, etc.
Hope you enjoy: https://hardware.slashdot.org/story/23/07/09/040210/nine-ai-powered-humanoid-robots-hold-press-conference-at-un-summit
— edit: typos
| 2023-07-09T10:12:12 |
https://www.reddit.com/r/LocalLLaMA/comments/14uuw9l/not_a_new_llm_but_worth_to_read/
|
_omid_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uuw9l
| false | null |
t3_14uuw9l
|
/r/LocalLLaMA/comments/14uuw9l/not_a_new_llm_but_worth_to_read/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/ySXr4T01nIrkOEqMBk6hKUwKLa4KjANzhxGh1LsHW7o.jpg?auto=webp&s=56533533ad4c617d5efe2e672f08dc3131d745e0', 'width': 64}, 'variants': {}}]}
|
Ai personal assistant with long term memory?
| 22 |
I want to train a personal model that would have access to my personal information, such as my calendar, and that could help me organise my life a bit.
I was initially thinking of using a small model (7/13B) and connecting it to a vector DB for the long-term memory part, but I'm pretty new to the LLM world and would like your advice.
Ideally the model should be able to:
-Remember my calendar and be able to access and modify the schedules.
-Answer questions about stored data. (Information retrieval).
-Deduce the importance of an event. (Going out for a drink < uni deadline).
I’d love to hear your guys opinions on this, and thank you in advance for any suggestions :)
| 2023-07-09T11:35:34 |
https://www.reddit.com/r/LocalLLaMA/comments/14uwdj6/ai_personal_assistant_with_long_term_memory/
|
Davde_pq
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uwdj6
| false | null |
t3_14uwdj6
|
/r/LocalLLaMA/comments/14uwdj6/ai_personal_assistant_with_long_term_memory/
| false | false |
self
| 22 | null |
Any suggestions for an open source model for parsing real estate listings?
| 3 |
I need to extract structured data like phone numbers, price, area and other details (like tags). I know it is much faster to use regex, but sometimes it is not possible, and I would like to try an LLM.
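What I had in mind is roughly asking a local instruct model for JSON and keeping regex as the fallback, something like this (the model path, example listing and regex are placeholders):

    import json, re
    from llama_cpp import Llama

    llm = Llama(model_path="some-instruct-13b.ggmlv3.q4_K_M.bin", n_ctx=2048)  # placeholder path

    listing = "Sunny 2-bed flat, 68 m2, $1,450/mo. Call 555-0142."
    prompt = ('Extract the phone, price and area from this listing as JSON with keys '
              '"phone", "price", "area".\nListing: ' + listing + "\nJSON:")

    raw = llm(prompt, max_tokens=128, temperature=0)["choices"][0]["text"]
    try:
        data = json.loads(raw.strip())
    except ValueError:
        m = re.search(r"\d{3}-\d{4}", listing)       # regex fallback for the easy fields
        data = {"phone": m.group(0) if m else None}
    print(data)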
Any thoughts? Thank you!
| 2023-07-09T11:36:27 |
https://www.reddit.com/r/LocalLLaMA/comments/14uwe6r/any_suggestions_for_an_open_source_model_for/
|
benderlio
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uwe6r
| false | null |
t3_14uwe6r
|
/r/LocalLLaMA/comments/14uwe6r/any_suggestions_for_an_open_source_model_for/
| false | false |
self
| 3 | null |
Fine-tuning chat model on raw text
| 9 |
Has anyone had success fine-tuning chat or instruct models using just plain text as input for LoRA? What's the best method for applying a new knowledge base and style with LoRA using just plain text?
| 2023-07-09T11:47:42 |
https://www.reddit.com/r/LocalLLaMA/comments/14uwlf4/finetuning_chat_model_on_raw_text/
|
Itchy_Monk2686
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uwlf4
| false | null |
t3_14uwlf4
|
/r/LocalLLaMA/comments/14uwlf4/finetuning_chat_model_on_raw_text/
| false | false |
self
| 9 | null |
InternLM7B-8K on GPT4ALL?
| 6 |
Is there a reasonable path to implement InternLM7B-8K on GPT4All? I have been unable to find the base model structure to see if it is compatible.
I'm technical but new to LLM hacking.
| 2023-07-09T12:08:46 |
https://www.reddit.com/r/LocalLLaMA/comments/14uwzw6/internlm7b8k_on_gpt4all/
|
bidet_enthusiast
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14uwzw6
| false | null |
t3_14uwzw6
|
/r/LocalLLaMA/comments/14uwzw6/internlm7b8k_on_gpt4all/
| false | false |
self
| 6 | null |
"Hallucination refinement" for generating creative ideas ?
| 3 |
Hi,
I was wondering if hallucinations could be used to create interesting new ideas.
E.g. let a model run with a higher temperature and create lots of ideas as outputs, and then in a second step let a more rationally acting model sort out all the nonsensical/unrealistic ideas from the outputs.
I haven't experimented much with deliberately creating hallucinations myself.
So I'm curious if some of you have played with this and can tell whether the hallucinated output is sometimes valuable, or whether it's mostly garbage?
And what would be a good temperature value for this?
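For context, this is roughly the two-step pipeline I have in mind, sketched with llama-cpp-python (the model path, prompts and temperature values below are placeholder guesses, which is exactly what I'm asking about):

```python
from llama_cpp import Llama

# Placeholder model path; any local GGML model would do for this sketch.
llm = Llama(model_path="./models/ggml-model-q4_0.bin", n_ctx=2048)

def generate_ideas(topic, n=5):
    # Step 1: high temperature to encourage wild / "hallucinated" ideas.
    ideas = []
    for _ in range(n):
        out = llm(f"Give one unusual product idea about {topic}:",
                  max_tokens=64, temperature=1.3, top_p=0.95)
        ideas.append(out["choices"][0]["text"].strip())
    return ideas

def filter_ideas(ideas):
    # Step 2: low-temperature pass acting as the "rational" judge.
    kept = []
    for idea in ideas:
        out = llm(f"Idea: {idea}\nIs this idea realistic and feasible? Answer yes or no:",
                  max_tokens=4, temperature=0.1)
        if "yes" in out["choices"][0]["text"].lower():
            kept.append(idea)
    return kept

print(filter_ideas(generate_ideas("kitchen gadgets")))
```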
| 2023-07-09T15:09:51 |
https://www.reddit.com/r/LocalLLaMA/comments/14v0y6z/hallucination_refinement_for_generating_creative/
|
Glat0s
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v0y6z
| false | null |
t3_14v0y6z
|
/r/LocalLLaMA/comments/14v0y6z/hallucination_refinement_for_generating_creative/
| false | false |
default
| 3 | null |
Which LLM works for taboo questions or programming like webscraping?
| 2 |
I'm super new to this but definitely interested in learning about unique ways to get clients for a software business, or having architectural discussions with AI about how web scraping is achieved.
| 2023-07-09T16:11:18 |
https://www.reddit.com/r/LocalLLaMA/comments/14v2fw6/which_llm_works_for_taboo_questions_or/
|
BluePilotPen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v2fw6
| false | null |
t3_14v2fw6
|
/r/LocalLLaMA/comments/14v2fw6/which_llm_works_for_taboo_questions_or/
| false | false |
default
| 2 | null |
New llama-cpp-python out, with performance patches
| 85 |
Just wanted to mention, because the topic has come up here before, that [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) has [today](https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md) been bumped to 0.1.69 & includes performance patches so that it should pretty much match underlying llama.cpp perf (there was a lot of nonsense talk about Python itself being the problem, but we'll set that aside).
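If you haven't tried the bindings yet, basic usage is only a few lines. A minimal sketch (the model path is a placeholder for whatever GGML file you have locally):

```python
from llama_cpp import Llama

# Placeholder path to any local GGML model file.
llm = Llama(model_path="./models/llama-7b.ggmlv3.q4_0.bin", n_ctx=2048, n_threads=8)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
    echo=False,
)
print(output["choices"][0]["text"])
```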
| 2023-07-09T16:37:42 |
https://www.reddit.com/r/LocalLLaMA/comments/14v32zi/new_llamacpppython_out_with_performance_patches/
|
CodeGriot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v32zi
| false | null |
t3_14v32zi
|
/r/LocalLLaMA/comments/14v32zi/new_llamacpppython_out_with_performance_patches/
| false | false |
self
| 85 |
{'enabled': False, 'images': [{'id': 'HYMcl_oe2YCkRPufeqNEz2WoSs6zgXnHLQvoKVoIGuU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=108&crop=smart&auto=webp&s=757a079409297ddef698b379de3fffb82ea04de4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=216&crop=smart&auto=webp&s=d22ed2c2a4094e709ec522fd84ae85773d95e6d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=320&crop=smart&auto=webp&s=de9619e67287dcaf2be5dadbc88a96366ee72089', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=640&crop=smart&auto=webp&s=cef46ce8d26c1f7ba86047fe09996b0fd602483c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=960&crop=smart&auto=webp&s=6a362d0acdfaec6f20694f006ec9414af75e582b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=1080&crop=smart&auto=webp&s=c44b1a30c956b026caa5170419017cecb19bb9fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?auto=webp&s=bafe89930bf06e56445e1cd711d9b0a8b6cb899c', 'width': 1200}, 'variants': {}}]}
|
Any model I try (13B) gets obsessed with chess.
| 2 |
User: ok lets play chess im white you are black
Output: Absolutely! Let us begin this fascinating battle of wits at once. Prepare to be defeated in the most elegant manner possible.
User: my first move: b2 b3
Output:1.e4 e5
User: your turn please
Output: 1...Nf6
After that, no matter what the input is, the model continues to make moves. I've tried to separate the last user_input from the history a bit and change the instructions a bit, but it didn't help. Did anyone have the same problem?
| 2023-07-09T17:15:33 |
https://www.reddit.com/r/LocalLLaMA/comments/14v40qg/any_model_i_try_13b_gets_obsessed_with_chess/
|
Awethon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v40qg
| false | null |
t3_14v40qg
|
/r/LocalLLaMA/comments/14v40qg/any_model_i_try_13b_gets_obsessed_with_chess/
| false | false |
default
| 2 | null |
PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news
| 0 | 2023-07-09T17:23:53 |
https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/
|
Dwedit
|
blog.mithrilsecurity.io
| 1970-01-01T00:00:00 | 0 |
{}
|
14v47wx
| false | null |
t3_14v47wx
|
/r/LocalLLaMA/comments/14v47wx/poisongpt_how_we_hid_a_lobotomized_llm_on_hugging/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'kNe1Ec9FSwYu0t8I_fiqKe0Bl3P6G4731hhZQ_soeNk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=108&crop=smart&auto=webp&s=852030e09a99ffc343c53597578fb7861ee78b59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=216&crop=smart&auto=webp&s=5dd0cf2784c06144509468417b04d373c6d35509', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=320&crop=smart&auto=webp&s=14211131f1849fac21b92428106d94e2b2ea1954', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=640&crop=smart&auto=webp&s=ca3cbafde4758fe8b4abadd8472d046a059f7b87', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=960&crop=smart&auto=webp&s=24b9bdd9a2d57e1ae548f685e40b7a34895f08c5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=1080&crop=smart&auto=webp&s=c5ba9550fa55f1511c895a66bb355ff517291039', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?auto=webp&s=3895bd8afb896475edd28ce0b8a945a6c32e5ac6', 'width': 1280}, 'variants': {}}]}
|
||
What are the cheapest PC specs I need to run this?
| 2 |
I want to make my own local "cluster" of four 7B agents. I've been using fine-tuned LLaMA 7B on RunPod and Vast.ai, and fine-tuning these models is great in the cloud, but over the long run I'd rather just buy the hardware and run locally.
I want to be able to run inference for fine-tuned 7Bs. Since I'm getting four of them (each one fine-tuned for a specific task) I want to opt for a cheaper PC config. 5-10 tokens/second is fine. If CPU is cheaper, I'm fine with that, too. I won't use these computers for anything other than the LLM agent models. Oh, and I want to use FP16, since I'm really not sure what kind of quality loss I'd have to deal with otherwise.
What are the cheapest PC specs I can go for? Would CPU be cheaper?
| 2023-07-09T17:46:53 |
https://www.reddit.com/r/LocalLLaMA/comments/14v4s2e/what_are_the_cheapest_pc_specs_i_need_to_run_this/
|
noellarkin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v4s2e
| false | null |
t3_14v4s2e
|
/r/LocalLLaMA/comments/14v4s2e/what_are_the_cheapest_pc_specs_i_need_to_run_this/
| false | false |
self
| 2 | null |
Multi-GPU questions
| 3 |
I scrolled through the webui discussions for multi GPU, and found a few people using it.
Why aren't people recommending 3x 3060 12GB for budget VRAM?
Is it because gamers are used to NVLink and SLI sucking and don't think to offload layers across more than one GPU?
Or is it because it is jank?
Should there be any reason why it shouldn't work?
| 2023-07-09T17:47:40 |
https://www.reddit.com/r/LocalLLaMA/comments/14v4spn/multigpu_questions/
|
ArcadiaSofka
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v4spn
| false | null |
t3_14v4spn
|
/r/LocalLLaMA/comments/14v4spn/multigpu_questions/
| false | false |
self
| 3 | null |
Deepmind: Focused Transformer: Contrastive Training for Context Scaling
| 15 | 2023-07-09T19:19:32 |
https://arxiv.org/abs//2307.03170
|
metalman123
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
14v73w1
| false | null |
t3_14v73w1
|
/r/LocalLLaMA/comments/14v73w1/deepmind_focused_transformer_contrastive_training/
| false | false |
default
| 15 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=586089b93aa59ebd86bb3b273ad1fb0c73e45ab7', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=00869aa5692fb9c8aa11f48ed92bff8db4f47293', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=72f6ae2c0800df8a56c3fc74afb033bf37cc16a9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=cfcb5f9f66743f2e26952e5edff4dfed984af692', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=821ed287940b59a56b2643dcaf6a356ccfdc4eb5', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f101972ffc7ec2e3eedefa45eaa677e4d9024520', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&v=enabled&s=757c00601aa4ffb984c87000927a0610d04c3845', 'width': 1200}, 'variants': {}}]}
|
|
CodeGen-v2.5-Mono, 7B
| 1 |
https://huggingface.co/spaces/mike-ravkine/can-ai-code-results
CodeGen-v2.5-Mono, 7B, seems to punch above its size in Python and JavaScript. Is there any ggml version?
| 2023-07-09T19:23:13 |
https://www.reddit.com/r/LocalLLaMA/comments/14v77bq/codegenv25mono_7b/
|
Eduard_T
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v77bq
| false | null |
t3_14v77bq
|
/r/LocalLLaMA/comments/14v77bq/codegenv25mono_7b/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=c0348c9098d1c9de6e442d8128ba34d7f73f1191', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=7459424e3784b5de04310cf17faa53a9ca8eda10', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=95d2030f9f048deba863cdeb9a00e01c7e5aca5c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=3522bf80ed4669be451c9460845be13f5a4807c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=538a76046f6933bc34d2e4444b65a0dec699278e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=5e82ab1e5d657af57d1e316f05b4f70ce424bcfa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&v=enabled&s=0ce14980925b1968fdda5372673b7a5d121beb68', 'width': 1200}, 'variants': {}}]}
|
How can you train LLM's with particular qualities (e.g. empathetic, vindictive, etc...) without having them break character? Also what are some best-practices for high-quality models that actually work in practice?
| 9 |
Hi,
I've been reading quite a bit about LLM's recently and I've grown interested in understanding how to train LLM's with particular qualities. For example, if I'm looking to produce an LLM that's "empathetic" and "friendly" while never breaking character (if user prompts them to adopt a quality they're not, I'd like them to refuse such a request). Is this purely to do with the fine-tuning stage on the dataset, where the dataset would have many examples of that particular quality and refusal to adopt different qualities?
If so, are there particular papers that deal with this particular issue/how to set up fine-tuning on a particular dataset to get the most out of it?
​
Also, a little bit of a side question: what are the best practices (that work well) when it comes to fine-tuning high quality models? There seem to be a ton of resources out there, and it's overwhelming trying to figure out what works and what doesn't. For example, it seems that the long context hacks are still lacking significantly in the quality of the output; but the context-free guidance paper seems to yield non-trivial improvements in performance. Can anyone point me to posts that deal with these things?
​
Thank you!
| 2023-07-09T19:37:17 |
https://www.reddit.com/r/LocalLLaMA/comments/14v7jz3/how_can_you_train_llms_with_particular_qualities/
|
Ok_Coyote_8904
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v7jz3
| false | null |
t3_14v7jz3
|
/r/LocalLLaMA/comments/14v7jz3/how_can_you_train_llms_with_particular_qualities/
| false | false |
self
| 9 | null |
Best local chat model for M2 Max, 32 GB
| 20 |
I'm trying to create and converse with 'synthetic characters' (on a use case of value to small educational institutions). Looking for a cost-effective alternative to OAI for long (not necessarily psychologically profound as in [character.ai](https://character.ai)) conversations. Three questions:
- best model for the footprint of my Mac
- best 'cheatsheet' to get it working on Metal (relative newbie to the Llama world)
- any experiences with chatting with quantized models. I struggle with understanding the impact of quantization (which is a 'bit level' thing) on conversation quality (which is a 'meaning level' thing)
TIA for any tips.
| 2023-07-09T19:41:43 |
https://www.reddit.com/r/LocalLLaMA/comments/14v7nvv/best_local_chat_model_for_m2_max_32_gb/
|
venuv
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14v7nvv
| false | null |
t3_14v7nvv
|
/r/LocalLLaMA/comments/14v7nvv/best_local_chat_model_for_m2_max_32_gb/
| false | false |
self
| 20 |
{'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]}
|
How to make my own LoRA for SuperHOT GPTQ model?
| 18 |
Been trying to do this for a few days but haven't had any luck. Trying to make a LoRA for this: https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ
I've tried Oobabooga trainer with various options without success. I also just set up `kaiokendev/alpaca_lora_4bit@main` and got it running locally but I'm getting errors when trying to run based on the example `finetune.py` usage.
I _could_ give more specifics on what I've tried thus far, but it's all been a shot in the dark while cross-referencing a bunch of different things from Google, so it would probably be better to just ask for someone else's setup which I could try to mirror.
| 2023-07-09T21:27:31 |
https://www.reddit.com/r/LocalLLaMA/comments/14vad2r/how_to_make_my_own_lora_for_superhot_gptq_model/
|
tylercamp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vad2r
| false | null |
t3_14vad2r
|
/r/LocalLLaMA/comments/14vad2r/how_to_make_my_own_lora_for_superhot_gptq_model/
| false | false |
self
| 18 |
{'enabled': False, 'images': [{'id': '4qiIf_YIS__bCo3O4kM7bQRddz62tQI3dvv9QbHy8fw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=108&crop=smart&auto=webp&s=6f857744ca5addd24343bdd0ea273e1779570c7d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=216&crop=smart&auto=webp&s=3fc2f305d98d8221d3bdd0cfdd78ff87656b7dea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=320&crop=smart&auto=webp&s=f8663f82aaa6cb4cde1ce09c33b4a34a087f2765', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=640&crop=smart&auto=webp&s=ce4024e6bfdd1e3b393ab052e36d211460fdb276', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=960&crop=smart&auto=webp&s=93a961a190d7a2eac765192381c54f79fd167b12', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=1080&crop=smart&auto=webp&s=cdf5d5c8283f53676908fc03f300be83ca001460', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?auto=webp&s=31b2a4b5a3cdd1987041f96961ba00f0eb93fc0a', 'width': 1200}, 'variants': {}}]}
|
Something in the same level or close to Multilingual voice cloning from Elevenlabs?
| 1 |
[removed]
| 2023-07-09T21:28:03 |
https://www.reddit.com/r/LocalLLaMA/comments/14vadjd/something_in_the_same_level_or_close_to/
|
Darquealla
|
self.LocalLLaMA
| 2023-07-09T23:42:53 | 0 |
{}
|
14vadjd
| false | null |
t3_14vadjd
|
/r/LocalLLaMA/comments/14vadjd/something_in_the_same_level_or_close_to/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'Xf3ACKJqsDqA8DRzw7DeDHPpBG8AalW4u05yVuk_eJk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JKTUiteqGLHarSWW9RGuKKHRSRX0iEL0Vhk2hLbf8js.jpg?width=108&crop=smart&auto=webp&v=enabled&s=5f096f502b2e8b54638a09f680c65f1c4a3f79c7', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/JKTUiteqGLHarSWW9RGuKKHRSRX0iEL0Vhk2hLbf8js.jpg?auto=webp&v=enabled&s=c59a94454e7e29cf0f6d1fdb5627caecc3d92419', 'width': 140}, 'variants': {}}]}
|
Developing Scalable LLM app
| 3 |
Hey guys,
I'm currently working on building a large language model (LLM) app, where the user can interact with an AI model and learn cool stuff through their conversations. I have a couple of questions regarding the development process:
_______________________
1) Hosting the model:
* I think I should host the model separately from the backend and expose it through an API (so it can scale independently as its own service).
* What is the best hosting provider in your experience? (I need one that scales up temporarily when I do training, without high cost.)
2) Scaling for different languages:
* What is a good approach here? Fine-tune the model for each language? If, for example, the app has translation, summary, and Q&A features for Italian, should I fine-tune it with English-to-Italian text for each case? And what if the target language varies (Spanish, Chinese, Arabic, etc.)? Do I have to fine-tune on bi-directional text for each language?
* (I found this [multilingual BERT model](https://huggingface.co/bert-base-multilingual-cased) and tried it, but it's not working well.) So, are there any alternative approaches, or should I look for multilingual models?
| 2023-07-09T21:32:54 |
https://www.reddit.com/r/LocalLLaMA/comments/14vai1h/developing_scalable_llm_app/
|
mo_falih98
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vai1h
| false | null |
t3_14vai1h
|
/r/LocalLLaMA/comments/14vai1h/developing_scalable_llm_app/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'gzzADzJ96NPFI2YQyhJTYwCKoMRcV4KD_9pHwj9QV2g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=108&crop=smart&auto=webp&s=7453fc3adb32b9f5bf731b1929cb14c5d3766829', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=216&crop=smart&auto=webp&s=7f1cbdcff3149bc7e8bdeda922f525fc3666f1d1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=320&crop=smart&auto=webp&s=c0e2e37199cc210b2a1e21b14ea926751f0ec0b6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=640&crop=smart&auto=webp&s=136432b6b646d45065037af76ca7fda349b61558', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=960&crop=smart&auto=webp&s=a7019ae44455405f0c61362ac8c6d9cead01443e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=1080&crop=smart&auto=webp&s=ed19c66c5499f40e92a3e9aa0e461592b395e58a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?auto=webp&s=c8f24cdf8a0131548473315ed3fc97c1186fce99', 'width': 1200}, 'variants': {}}]}
|
Focused Transformer: Contrastive Training for Context Scaling
| 37 | 2023-07-09T22:11:32 |
https://arxiv.org/abs//2307.03170
|
metalman123
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
14vbh28
| false | null |
t3_14vbh28
|
/r/LocalLLaMA/comments/14vbh28/focused_transformer_contrastive_training_for/
| false | false | 37 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
||
LLM Multiple Rounds Fine-Tune Tips?
| 4 |
I have 2 datasets that I want to use for fine-tuning multiple LLMs.
Dataset #1 is large and has "raw" text data (many scraped webpages revolving around different topics that an LLM may not have been pre-trained on).
Dataset #2 is very small and has selected examples depicting how a topic from Dataset #1 should be presented to the user during chat interactions.
I am trying to determine a good strategy for fine-tuning. I could not find any examples of "two-round" fine tuning in a scenario like this.
I am thinking about using quantization and training just the last few layers of the selected LLM on Dataset #1, and using the LoRA method for Dataset #2. However, I do not have many compute resources.
Any tips/suggestions/advice would be greatly appreciated!
| 2023-07-09T22:26:46 |
https://www.reddit.com/r/LocalLLaMA/comments/14vbul6/llm_multiple_rounds_finetune_tips/
|
ilovejoi36912
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vbul6
| false | null |
t3_14vbul6
|
/r/LocalLLaMA/comments/14vbul6/llm_multiple_rounds_finetune_tips/
| false | false |
self
| 4 | null |
Cleaning an OCR output with an LLM
| 3 |
Hi all!
I have OCRed some PDFs but the result is really distorted. I assume an LLM should be capable of recovering the information and cleaning the text.
Does anyone have experience doing so? Is there any resource that you know of? I have never used a local LLM, so this would be my first use case.
Thank you so much!
| 2023-07-09T22:58:48 |
https://www.reddit.com/r/LocalLLaMA/comments/14vcly8/cleaning_an_ocr_output_with_an_llm/
|
Por-Tutatis
|
self.LocalLLaMA
| 2023-07-09T23:11:56 | 0 |
{}
|
14vcly8
| false | null |
t3_14vcly8
|
/r/LocalLLaMA/comments/14vcly8/cleaning_an_ocr_output_with_an_llm/
| false | false |
self
| 3 | null |
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
| 70 | 2023-07-10T01:26:48 |
https://v.redd.it/hrxxtw0li1bb1
|
ninjasaid13
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
14vfz0r
| false |
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/hrxxtw0li1bb1/DASHPlaylist.mpd?a=1692208707%2COTc0MDFkZTE3ZTRjYzE0ZDk4ZTg3NjAwNjY2ZGY1NWMxMWJjMTliYWI4ZjA3NTliYzlhNDhjZTg4NmNmYzEzOA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/hrxxtw0li1bb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/hrxxtw0li1bb1/HLSPlaylist.m3u8?a=1692208707%2COThmMjcyZTJmYjE0OTFmOTNhM2E2YmUwZDBlYjdkZjZkMWRlNGQ4MGE5NjgzYzQ5YjE2MWFlZmMzMjg2NDY5OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hrxxtw0li1bb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_14vfz0r
|
/r/LocalLLaMA/comments/14vfz0r/gpt4roi_instruction_tuning_large_language_model/
| false | false | 70 |
{'enabled': False, 'images': [{'id': 'iDtS3s8aa3SNojEmQvrRte6eIFwMrc3riVM1YsoqPyk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=108&crop=smart&format=pjpg&auto=webp&s=a9b0522c329c24f5a9c1e5cfff3b79b4fd05bf70', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=216&crop=smart&format=pjpg&auto=webp&s=7f92f48dfacc5190fa2a78e691f0d3d7843a9e09', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=320&crop=smart&format=pjpg&auto=webp&s=33caa91931b1d305fe798535d317d86d78ed4fa4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=640&crop=smart&format=pjpg&auto=webp&s=d95cd72664caf70ffc882099a51acd4ae749cc92', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=960&crop=smart&format=pjpg&auto=webp&s=ce1f5765d53649c6c7a0fc839e173c8646bff47b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a107285163b7b78a2a5d4c90c369a41b7d418283', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?format=pjpg&auto=webp&s=502527b5e23314bab53326b473ba2e9436f853fa', 'width': 1920}, 'variants': {}}]}
|
||
Inference server for running multiple LoRAs alongside a single foundational LLM?
| 5 |
Hi,
There are a few good options to efficiently run an inference server for a single LLM - such as NVIDIA triton combined with [vLLM](https://github.com/vllm-project/vllm).
However, I would like to run multiple LoRAs alongside a single big LLM. I do not want to merge these and run multiple LLMs. Are there any options?
Thanks!
| 2023-07-10T02:49:36 |
https://www.reddit.com/r/LocalLLaMA/comments/14vhp5q/inference_server_for_running_multiple_loras/
|
JKStreamAdmin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vhp5q
| false | null |
t3_14vhp5q
|
/r/LocalLLaMA/comments/14vhp5q/inference_server_for_running_multiple_loras/
| false | false |
self
| 5 |
{'enabled': False, 'images': [{'id': 'NMAc4c9-X5ZRSghyt_Re8vxzoaYvdQ9ah5qk6eTSu8w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=108&crop=smart&auto=webp&s=955ef4fe2d98719e3d4182260101d9d3d02d84b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=216&crop=smart&auto=webp&s=a3e4b4c37a3376b2d81b0356ade02f460d70ae6a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=320&crop=smart&auto=webp&s=523ea30c13bfff972977fe9f5cac2ad5c193eee6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=640&crop=smart&auto=webp&s=cd6b16d4939527e0bc40e64ad0fe9faf241df45e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=960&crop=smart&auto=webp&s=e522a0594b07c1d09b2d662b4c9947bc956248f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=1080&crop=smart&auto=webp&s=7b1f5c6bbddbb678c500137d63a85220a774a791', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?auto=webp&s=3298b53ecc53be1c129e536f3c4ff6e53bf8b6f8', 'width': 1200}, 'variants': {}}]}
|
(Noob) with a question about loading models in python.
| 1 |
Hi guys, I was wondering if there was any way to load these models in Python. I looked around online for Python scripts, but nothing seems to work!
| 2023-07-10T03:37:37 |
https://www.reddit.com/r/LocalLLaMA/comments/14vinx2/noob_with_a_question_about_loading_models_in/
|
Norok_The_Diablo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vinx2
| false | null |
t3_14vinx2
|
/r/LocalLLaMA/comments/14vinx2/noob_with_a_question_about_loading_models_in/
| false | false |
default
| 1 | null |
Uncensored Llama/Falcon model hosting and API?
| 13 |
I've been trying to run a couple of the uncensored WizardLM models using llama.cpp and llama-cpp-python, but it's an absolute pain to get everything working on WSL and CUDA and making sure the GPU is used. Is there any API hosting site similar to [https://replicate.com/](https://replicate.com/) but for uncensored models?
| 2023-07-10T05:27:45 |
https://www.reddit.com/r/LocalLLaMA/comments/14vkrwm/uncensored_llamafalcon_model_hosting_and_api/
|
IcyMissile
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vkrwm
| false | null |
t3_14vkrwm
|
/r/LocalLLaMA/comments/14vkrwm/uncensored_llamafalcon_model_hosting_and_api/
| false | false |
self
| 13 | null |
Prompt Injection: How to Prevent It or Should We Prevent It?
| 1 | 2023-07-10T07:08:22 |
https://utkusen.medium.com/prompt-injection-how-to-prevent-it-or-should-we-prevent-it-ef62d9355191
|
utku1337
|
utkusen.medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
14vmlec
| false | null |
t3_14vmlec
|
/r/LocalLLaMA/comments/14vmlec/prompt_injection_how_to_prevent_it_or_should_we/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'U-XOpzBhj9yNB43tonlrTX4lXnH5D4U1q6L8FZXZnD4', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9ec39acc26f3de046d8671ab219cf73f9252c51d', 'width': 108}, {'height': 161, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=216&crop=smart&auto=webp&v=enabled&s=dee3c708c3f7a503922a750cb2f6f18d8424a286', 'width': 216}, {'height': 239, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=320&crop=smart&auto=webp&v=enabled&s=a5f6d5c56cd2aa42519fb47c5daf8310438fc118', 'width': 320}, {'height': 478, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=640&crop=smart&auto=webp&v=enabled&s=d926f1978ebb89aa1d24d5f137c86e3a0a03aa21', 'width': 640}, {'height': 718, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=960&crop=smart&auto=webp&v=enabled&s=1230fb5b42bc2d165b504b0c8648307084debdfe', 'width': 960}, {'height': 808, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=15019090f59a225213bf34cbcb2f633e61ed6296', 'width': 1080}], 'source': {'height': 898, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?auto=webp&v=enabled&s=5c1fe2ca6e3e6f5f028904e3aeb896d1075ae540', 'width': 1200}, 'variants': {}}]}
|
|
I NEED A MODEL THAT CAN RATE ARTICLES
| 0 |
So I'm working on a small project for scraping tech news and articles and sorting them by importance, or showing the breaking news up top. The only problem is which LLM is "fast, small, efficient" for this task. I tried to use GPT4All with Falcon 7B, but it doesn't actually run as intended, not like ChatGPT.
And if my method and approach are wrong, is there another way that doesn't require training a new model or fine-tuning one? Because I don't have the resources for that.
| 2023-07-10T07:38:56 |
Hussei911
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
14vn41b
| false | null |
t3_14vn41b
|
/r/LocalLLaMA/comments/14vn41b/i_need_a_model_that_can_rate_articles/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '6v5IizTFSOTsiiOVkgDF5yHClbOTtFTU3J9B-GhyHk0', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=b7f2122a8e9d4f5f7f1403fd322a45992f844a18', 'width': 108}, {'height': 368, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=a7da2a84af3c8e32cc84a3aecc9a9cb265aba3d6', 'width': 216}, {'height': 545, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=98d6d5212c72cdb5055b57e05ccdb6e3c6887415', 'width': 320}, {'height': 1090, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=7306493671c110288db337030c9c866cfa7e0ae6', 'width': 640}, {'height': 1636, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=960&crop=smart&auto=webp&v=enabled&s=9d76e91fb56b92c2b5e5c96823fb8ecd58f21cc6', 'width': 960}, {'height': 1841, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=094ed4a3a2f3361d8fb22f9f4948219dfe405b9c', 'width': 1080}], 'source': {'height': 1841, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?auto=webp&v=enabled&s=724804176a1435fe8849e315e724c28d0bf43a02', 'width': 1080}, 'variants': {}}]}
|
||
My experience on starting with fine tuning LLMs with custom data
| 373 |
I keep seeing questions about "How do I make a model answer based on my data? I have [wiki, pdfs, whatever other documents]"
Currently I am making a living by helping companies build chatbots fine-tuned on their custom data.
Most of those are support or Q&A chatbots to answer questions from clients at any hour and day. There are also internal chatbots to be used to train new people joining the company and several other use cases.
So, I thought I would share my experience (it might be wrong and I might be doing everything wrong, but it is my experience, and based on it I have a dozen chatbots running in production and talking with clients, with a few dozen more in different stages of testing).
The actual training / fine-tuning might initially seem like a daunting task due to the plethora of tools available (FastChat, Axolotl, DeepSpeed, transformers, LoRA, qLoRA, and more), but I must tell you - this is actually the easiest part of the whole process! All you need to do is peek into their repositories, grab an example, and tweak it to fit your model and data.
However, the real challenge lies in preparing the data. A massive wiki of product documentation, a thousand PDFs of your processes, or even a bustling support forum with countless topics - they all amount to nothing if you don't have your data in the right format. Projects like Dolly and Orca have shown us how enriching data with context or system prompts can significantly improve the final model's quality. Other projects, like Vicuna, use chains of multi-step Q&A with solid results. There are many other dataset formats, depending on the expected result. For example, a dataset for quotes is much simpler, because there will be no actual interaction, the quote is a quote.
Personally, I mostly utilize the #instruction, #input, #output format for most of my fine-tuning tasks.
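To make that concrete, here is roughly what a couple of rows in that format look like once dumped to a JSONL file (the content below is invented purely for illustration, not from a real project):

```python
import json

# Illustrative rows only; real rows come from your own documentation / support logs.
rows = [
    {
        "instruction": "Answer the customer question using the product documentation.",
        "input": "How do I reset my password?",
        "output": "Open Settings -> Account -> Reset password, then follow the link in the email.",
    },
    {
        "instruction": "Summarize the following support ticket in one sentence.",
        "input": "Customer reports the mobile app crashes when uploading photos larger than 20 MB.",
        "output": "The mobile app crashes on photo uploads over 20 MB.",
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```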
So, shaping your data into the correct format is, without a doubt, the most difficult and time-consuming step when creating a Large Language Model (LLM) for your company's documentation, processes, support, sales, and so forth.
Many methods can help you tackle this issue. Most choose to employ GPT4 for assistance. Privacy shouldn't be a concern if you're using the Azure APIs, which might be more costly but offer privacy. However, if your data is incredibly sensitive, refrain from using them. And remember, any data used to train a public-facing chatbot should not contain any sensitive information.
Automated tools can only do so much; manual work is indispensable and in many cases, difficult to outsource. Those who genuinely understand the product/process/business should scrutinize and cleanse the data. Even if the data is top-notch and GPT4 does a flawless job, the training could still fail. For instance, outdated information or contradictory responses can lead to poor results.
In many of my projects, we involve a significant portion of the organization in the process. I develop a simple internal tool allowing individuals to review rows of training data and swiftly edit the output or flag the entire row as invalid.
Once you've curated and correctly formatted your data, the fine-tuning can commence. If you have a vast amount of data, i.e., tens of thousands of instructions, it's best to fine-tune the actual model. To do this, refer to the model repo and mimic their initial training process with your data.
However, if you're working with a smaller dataset, a LoRA or qLoRA fine-tuning would be more suitable. For this, start with examples from LoRA or qLoRA repositories, use booga UI, or experiment with different settings. Getting a good LoRA is a trial and error process, but with time, you'll become good at it.
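As a rough sketch of what a qLoRA setup looks like with HuggingFace PEFT (example values only, and it assumes a recent transformers/peft stack with 4-bit support; the base model, rank and target modules are my own placeholders, adjust them to your case):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "openlm-research/open_llama_7b"  # example base model, swap for your own
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, load_in_4bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora_cfg = LoraConfig(
    r=16,                                 # rank, example value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical for LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# From here, train with the usual HF Trainer / SFT script on your formatted dataset.
```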
Once you have your fine-tuned model, don't expose it directly to clients. Instead, run client queries through the model, showcasing the responses internally and inviting internal users to correct the answers. Depending on the percentage of responses modified by users, you might need to execute another fine-tuning with this new data or completely redo the fine-tuning if results were really poor.
On the hardware front, while it's possible to train a qLoRA on a single 3090, I wouldn't recommend it. There are too many limitations, and even browsing the web while training could lead to OOM. I personally use a cloud A6000 with 48GB VRAM, which costs about 80 cents per hour.
For anything larger than a 13B model, whether it's LoRA or full fine-tuning, I'd recommend using A100. Depending on the model and dataset size, and parameters, I run 1, 4, or 8 A100s. Most tools are tested and run smoothly on A100, so it's a safe bet. I once got a good deal on H100, but the hassle of adapting the tools was too overwhelming, so I let it go.
Lastly, if you're looking for a quick start, try embeddings. This is a cheap, quick, and acceptable solution for internal needs. You just need to throw all internal documents into a vector db, put a model in front for searching, and voila! With no coding required, you can install booga with the superbooga extension to get started.
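A bare-bones version of that idea, with no real vector db, just to show the moving parts (library, model choice and documents here are my own example, not a recommendation):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Example internal documents; in practice these are chunks of your wiki / PDFs.
docs = [
    "Refunds are processed within 14 days of the return request.",
    "Enterprise customers get priority support through the dedicated hotline.",
    "The onboarding checklist covers account setup, SSO and billing.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs)
doc_vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def search(query, top_k=2):
    q = model.encode([query])[0]
    q = q / np.linalg.norm(q)
    scores = doc_vecs @ q  # cosine similarity on normalized vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), docs[i]) for i in best]

# The retrieved chunks are then pasted into the LLM prompt as context.
print(search("How long do refunds take?"))
```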
​
UPDATE:
I saw some questions repeating; sorry that I am not able to answer everyone, but I am updating here, and I hope this helps. Here are some answers to the repeated questions:
1. I do not know how to train a pre-trained model with "raw" data, like big documents. From what I know, any further training of a pre-trained model is done by feeding data tokenized and padded to maximum context size of the original model, no more.
2. Before starting, make sure that the problem that needs to be solved and the expectations are fully defined. "Teaching the model about xyz" is not a problem, it is a wish. It is hard to solve "wishes", but we can solve problems. For example: "I want to ask the model about xyz and get accurate answers based on abc data". This is needed to offer a non-stop answering chat for customers. We expect customers to ask "example 1, 2, 3, ... 10" and we expect the answers to be in this style (example answers with example forms of address, formal, informal, etc.). We do not want the chat to engage in topics not related to xyz. If a customer engages in such topics, politely explain that it has no knowledge of that (with an example). This is a better description of the problem.
3. It is important to define the target audience and how the model will be used. There is a big difference between using it internally inside an organisation and exposing it directly to clients. You can get away with a lot cheaper when it is just an internal helper and the output can be ignored if it is not good. For example, in this case, full documents can be ingested via a vectordb and the model used to answer questions about the data from the vectordb. If you decide to go with the embeddings, this can be really helpful: [https://github.com/HKUNLP/instructor-embedding](https://github.com/HKUNLP/instructor-embedding)
4. It is important to define what is the expected way to interact with the model. Do you want to chat with it? Should it follow instructions? Do you want to provide a context and get output in the provided context? Do you want to complete your writing (like Github Copilot or Starcoder)? Do you want to perform specific tasks (eg grammar checking, translation, classification of something etc)?
5. After all the above are decided and clarified, and you have decided that embeddings are not what you want and want to proceed further with fine-tuning, it is time to decide on the data format.
    1. #instruction, #input, #output is a popular data format and can be used to train for both chat and instruction following. This is an example dataset in this format: [https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). I use this format the most because it is the easiest to format unstructured data into; having the optional #input makes it very flexible.
    2. It has been shown that better-structured training data, enriched with extra information, will produce better results. Here is the Dolly dataset, which uses a context field to enrich the data: [https://huggingface.co/datasets/databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
    3. A newer dataset that further showed that data format and quality matter most for the output is the Orca format. It uses a series of system prompts to categorize each data row (similar to a tagging system). [https://huggingface.co/datasets/Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
    4. We don't always need a complicated data structure. For example, if the expectation is that we prompt the model "Who wrote this quote: [famous quote content]?" and we expect to get only the name of the author, then a simple format is enough, like it is here: [https://huggingface.co/datasets/Abirate/english_quotes](https://huggingface.co/datasets/Abirate/english_quotes)
    5. For a more fluid conversation, there is the Vicuna format, an array of Q&A. Here is an example: [https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
    6. There are other dataset formats, in some of which the output is partially masked (for completion-suggestion models), but I have not worked with those and am not familiar with them.
6. From my experiments, things that can be totally wrong:
    1. Directly training a pre-trained model with less than 50,000 data rows is more or less useless. I would think of directly training a model when I have more than 100k data rows for a 13B model, and at least 1 million for a 65B model.
    2. With smaller datasets, it is efficient to train a LoRA or qLoRA.
    3. I prefer to train a 4-bit qLoRA on a 30B model rather than an fp16 LoRA on a 13B model (about the same hardware requirements, but the results with the 4-bit 30B model are superior to the 13B fp16 model).
​
| 2023-07-10T07:57:27 |
https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/
|
Ion_GPT
|
self.LocalLLaMA
| 2023-07-11T06:23:21 | 1 |
{'gid_2': 1}
|
14vnfh2
| false | null |
t3_14vnfh2
|
/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/
| false | false |
self
| 373 |
{'enabled': False, 'images': [{'id': 'fOQFUmKkqkswgcf8o9RyOEbRNk9YtCK2Pg8J5eBVG0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=108&crop=smart&auto=webp&s=b56e71127b4007f17a55457fe52847f9d7c90c2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=216&crop=smart&auto=webp&s=ba4d83976597b4633957daf050992fb9516a2942', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=320&crop=smart&auto=webp&s=4a18c2ac60e4aa8bca2aeae921a6019ccac7dd90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=640&crop=smart&auto=webp&s=641cc7e9888cc6f9ce583704cd4057378fbd0961', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=960&crop=smart&auto=webp&s=94e688b009c774fc2f7901bb9f913dd6bffdaaf9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=1080&crop=smart&auto=webp&s=37f446fbd9ea3c78a99c8275cfc88690df88930e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?auto=webp&s=03dbb888530b6192d7169694204bc34c81551f2e', 'width': 1200}, 'variants': {}}]}
|
Fine tuning with long context conversations?
| 1 |
I cleaned and instruct formatted the Teatime proxy logs, but there are a lot of them that are just under 8196 tokens per chat. Does anyone have any suggestions as to what I should do? Does fine tuning with 8K context require more VRAM? I'm reading all sorts of things, including that fine tuning on data over 2k context decreases performance with shorter contexts. Should I just be breaking up the long conversations into 4K chunks? 2K chunks? Or am I ok just leaving it as is? Does it make a difference if the end model is going to be converted to ggml? I was hoping to fine tune on openllama.
| 2023-07-10T08:14:47 |
https://www.reddit.com/r/LocalLLaMA/comments/14vnr0u/fine_tuning_with_long_context_conversations/
|
CheshireAI
|
self.LocalLLaMA
| 2023-07-10T11:54:15 | 0 |
{}
|
14vnr0u
| false | null |
t3_14vnr0u
|
/r/LocalLLaMA/comments/14vnr0u/fine_tuning_with_long_context_conversations/
| false | false |
default
| 1 | null |
What coding models can I run on my rig?
| 4 |
I am looking into running something like wizardcoder or starcoder. Target languages would be python, bash, HCL and yaml.
I need long contexts and lots of tokens.
My rig is:
12700K 5.2 GHz
96 GB of DDR4 RAM
3080Ti 12 GB
| 2023-07-10T10:18:38 |
https://www.reddit.com/r/LocalLLaMA/comments/14vpzqi/what_coding_models_can_i_run_on_my_rig/
|
l0lcat19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vpzqi
| false | null |
t3_14vpzqi
|
/r/LocalLLaMA/comments/14vpzqi/what_coding_models_can_i_run_on_my_rig/
| false | false |
self
| 4 | null |
local LLM agent?
| 10 |
I want to use a local LLM to gather information from the internet and summarize it for me (or some other action)
I was able to make a dumb bot (ReAct framework) using WizardLM 13B and 30B (1.1 for the 13B), using something similar to this: https://github.com/QuangBK/localLLM_langchain/blob/main/localLLM_agent.ipynb, but without LangChain. It seems like it really struggles with following the correct format when giving its output. Sometimes its thought is to search, but instead it parses a web page. I have tried using natural language and JSON format output. I've tweaked the prompt several times with varying degrees of success. I've played with the temperature with some success.
Through my experimentation, it seems like the model will try, but after 1 or 2 prompts, it forgets or goes into an infinite loop.
I am starting to feel like these models either are not powerful enough or maybe my technique is wrong.
What can I try to improve the agent? Should I try a different prompt? Should I try a different technique? I have been using the ooba API to run these experiments; I saw the guidance implementation, but I'm not sure how it will work with ooba. I have also seen it mentioned that people train their own LoRA for creating an agent. Any advice on getting an agent that can perform a search and summarize a document? Anything I can read?
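For reference, the kind of format validation I'm considering adding (a rough sketch; the action names and the `generate` callable are placeholders for my own ooba API wrapper, not anything standard): parse the completion against the expected ReAct step format with a regex and re-prompt when it doesn't match, instead of trusting the raw output.

```python
import re

# Expected shape of one agent step in my prompt format (an assumption for this sketch).
STEP_RE = re.compile(
    r"Thought:\s*(?P<thought>.+?)\s*"
    r"Action:\s*(?P<action>search|parse_page|finish)\s*"
    r"Action Input:\s*(?P<action_input>.+)",
    re.DOTALL | re.IGNORECASE,
)

def parse_step(completion):
    """Return (thought, action, action_input) or None if the format is broken."""
    m = STEP_RE.search(completion)
    if m is None:
        return None
    return m.group("thought").strip(), m.group("action").lower(), m.group("action_input").strip()

def run_step(generate, prompt, max_retries=3):
    # `generate` is whatever calls the model API and returns the completion text.
    for _ in range(max_retries):
        parsed = parse_step(generate(prompt))
        if parsed is not None:
            return parsed
        prompt += "\n(Reminder: answer ONLY with Thought:, Action:, Action Input:)"
    raise RuntimeError("model never produced a valid step")
```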
| 2023-07-10T11:36:21 |
https://www.reddit.com/r/LocalLLaMA/comments/14vrjcl/local_llm_agent/
|
klop2031
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vrjcl
| false | null |
t3_14vrjcl
|
/r/LocalLLaMA/comments/14vrjcl/local_llm_agent/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'b07J4sBspwa3QMgw1RwJa9t7EjV_wdSqTwVlOv4qrng', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=108&crop=smart&auto=webp&s=1b4d0843f1f7483e8e4f99731945dce0962a0fdc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=216&crop=smart&auto=webp&s=8c898b40b6180e550c3572deb47eb26c8c943e10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=320&crop=smart&auto=webp&s=3d5d4ca7c699d1f3fde56a609b355870ff411bc4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=640&crop=smart&auto=webp&s=06a15d130fc2b96a8bf18b231f0c4900cd6920c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=960&crop=smart&auto=webp&s=f5f05fa820a024351ac896d0bf6cb375c25233b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=1080&crop=smart&auto=webp&s=559ba12d1127cfb8352e0f31481c0ddfb0938077', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?auto=webp&s=2ac442a235f94f6f4505bf3ce3009618cc5b4384', 'width': 1200}, 'variants': {}}]}
|
Multimodal LLM for infographics images
| 3 |
There are a few open source multimodal LLMs for images. Any suggestions on which one is best when it comes to infographic images (PPT slide images), where there is usually a flow diagram with text or an image with text in it? Does anyone here have experience with this type of infographic data?
| 2023-07-10T13:51:09 |
https://www.reddit.com/r/LocalLLaMA/comments/14vunc6/multimodal_llm_for_infographics_images/
|
Jealous_Ad4067
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vunc6
| false | null |
t3_14vunc6
|
/r/LocalLLaMA/comments/14vunc6/multimodal_llm_for_infographics_images/
| false | false |
self
| 3 | null |
Any benefit in the long term future for 64gb vs 128gb RAM on mac m2 ultra?
| 1 |
[removed]
| 2023-07-10T14:06:31 |
https://www.reddit.com/r/LocalLLaMA/comments/14vv1h4/any_benefit_in_the_long_term_future_for_64gb_vs/
|
SpacialGalacticism
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vv1h4
| false | null |
t3_14vv1h4
|
/r/LocalLLaMA/comments/14vv1h4/any_benefit_in_the_long_term_future_for_64gb_vs/
| false | false |
default
| 1 | null |
AIDE : LLM shell and docs-set interrogator
| 4 |
hi,
I used privateGPT as the source to create a bit more useful shell and docs-set interrogator.

[AIDE](https://github.com/vsraptor/aide/tree/main)
This is, in general, a shell around a Large Language Model (LLM), at least for now. It is based on the privateGPT code, which I refactored, componentized and enhanced with additional features.
In short, this tool allows you to interact with different document sets OR simply query an LLM.
### Features
1. Profile support
- multiple docs stores and ability to switch between them on the fly.
- multiple models and ability to switch between them on the fly.
2. Non-question commands support to do useful things
3. System prompts support
4. Better CLI interface
5. Direct and QA query modes.
6. Keeps .history of the commands
7. Keeps .chat_history
​
| 2023-07-10T14:44:59 |
https://www.reddit.com/r/LocalLLaMA/comments/14vw1il/aide_llm_shell_and_docsset_interrogator/
|
Double-Lavishness-77
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vw1il
| false | null |
t3_14vw1il
|
/r/LocalLLaMA/comments/14vw1il/aide_llm_shell_and_docsset_interrogator/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'e1XJNr2ZJLYD-Ks7sPRRplIP9SMjScjMXSPrNiidhjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=108&crop=smart&auto=webp&s=d0240466746eb9f6566789cd8a5c8c123cfb1b5e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=216&crop=smart&auto=webp&s=42145069eb97374b3e4262fe27fe8a053b99f772', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=320&crop=smart&auto=webp&s=8be78b3118e0ccc8a494abb493f9ed9ba11b05a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=640&crop=smart&auto=webp&s=d37ca4faf9c8f023f3be8fa7b330cecee24183f0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=960&crop=smart&auto=webp&s=e99fd38c6945841a66bc0c068fa2939b21539925', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=1080&crop=smart&auto=webp&s=343e202afe31f91eb81cb3b2dbd2f309740558c4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?auto=webp&s=d88b86255c2d01185140602cde31d4b3dfd9e7c7', 'width': 1200}, 'variants': {}}]}
|
So with code interpreter, how long before I can use OpenCL to run 3 or 4 A770s, giving me 48 or 64GB of VRAM for $1000…
| 12 |
I sure as hell won't be the one to do it, but I sure hope one of you crazy mofos does it. I can dream…
| 2023-07-10T14:49:40 |
https://www.reddit.com/r/LocalLLaMA/comments/14vw631/so_with_code_interpreter_how_long_before_i_can/
|
BlandUnicorn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vw631
| false | null |
t3_14vw631
|
/r/LocalLLaMA/comments/14vw631/so_with_code_interpreter_how_long_before_i_can/
| false | false |
self
| 12 | null |
Function of saving character chat history in ooba?
| 1 |
Does saving a character’s chat history allow the character to reference it in the future for context?
| 2023-07-10T15:42:43 |
https://www.reddit.com/r/LocalLLaMA/comments/14vxkvl/function_of_saving_character_chat_history_in_ooba/
|
Mordekaiseerr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vxkvl
| false | null |
t3_14vxkvl
|
/r/LocalLLaMA/comments/14vxkvl/function_of_saving_character_chat_history_in_ooba/
| false | false |
default
| 1 | null |
What's the closest thing we have to GPT4's code interpreter right now?
| 56 |
Is there anything out there that can do something similar and can be run locally?
| 2023-07-10T16:41:22 |
https://www.reddit.com/r/LocalLLaMA/comments/14vz50w/whats_the_closest_thing_we_have_to_gpt4s_code/
|
malkauns
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14vz50w
| false | null |
t3_14vz50w
|
/r/LocalLLaMA/comments/14vz50w/whats_the_closest_thing_we_have_to_gpt4s_code/
| false | false |
self
| 56 | null |
Interesting Article -> What is China doing on AI governance
| 0 |
Found this on my Twitter feed (Twitter is some sort of shouting app, I didn't know either).
Beijing is leading the way in AI regulation, releasing groundbreaking new strategies to govern algorithms, chatbots, and more. Global partners need a better understanding of what, exactly, this regulation entails, what it says about China’s AI priorities, and what lessons other AI regulators can learn.
[https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117](https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117)
Why is it interesting? I thought it could/would be emulated to some extent elsewhere.
"But in the West, China’s regulations are often dismissed as irrelevant or seen purely through the lens of a geopolitical competition to write the rules for AI. Instead, these regulations deserve careful study on how they will affect China’s AI trajectory and what they can teach policymakers around the world about regulating the technology. Even if countries fundamentally disagree on the specific content of a regulation, they can still learn from each other when it comes to the underlying structures and technical feasibility of different regulatory approaches."
| 2023-07-10T16:45:01 |
https://www.reddit.com/r/LocalLLaMA/comments/14vz89t/interesting_article_what_is_china_doing_on_ai/
|
FPham
|
self.LocalLLaMA
| 2023-07-10T16:49:50 | 0 |
{}
|
14vz89t
| false | null |
t3_14vz89t
|
/r/LocalLLaMA/comments/14vz89t/interesting_article_what_is_china_doing_on_ai/
| false | false |
self
| 0 | null |
Best 13b ggml story/novel models?
| 1 |
[removed]
| 2023-07-10T17:25:01 |
https://www.reddit.com/r/LocalLLaMA/comments/14w0b1s/best_13b_ggml_storynovel_models/
|
Artemis369212
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w0b1s
| false | null |
t3_14w0b1s
|
/r/LocalLLaMA/comments/14w0b1s/best_13b_ggml_storynovel_models/
| false | false |
default
| 1 | null |
3090 Cards
| 13 |
Hello All,
I'm pretty new here, and I'm wanting to start running local LLMs. I am using gpt4all, but it's pretty slow. My current GPU is a cheapy. I have read through the posts here, and it looks like a 3090 is a good all-around choice without breaking the bank too much.
Looking for advice on buying one from eBay, I see numerous brands and prices. Do you guys recommend any specific brand, like EVGA for example? Most of the cards I'm finding are used, some without even a box. Just looking for any advice on purchasing a 3090 on eBay. Thanks in advance!!
​
Also, is it worth the extra money to get the Ti version ?
​
An example I found on ebay :
​
[https://www.ebay.com/itm/225660205739?epid=6041238838&hash=item348a657aab:g:aUsAAOSwZBVknyuq&amdata=enc%3AAQAIAAAAwA0jtn7WN5cDxOIaWB2Qui%2FaFFIB%2BqlbL5%2BJYQ5ON2MQT7FYjrBk6IsSUSeLzEhBrzE%2BlrjxmZbe5iTmTH%2FfxAgBYy2vUXn2%2BeuJKzRlFqhquQ3Vzg%2FMmCOxMrlMGXY2l6FOgA%2BlQKtc0lWdl%2BCx%2FmjMp%2BZKO8JxI1ZT6HZmV3fILnV%2BCiVkc25H0jPF9lYS1JLNum%2Bfph5z3hczwKVI5ngsVg6f5typLTHo6BNkIyc5%2FgVNu4AwPDf%2FfjIZafo4wQ%3D%3D%7Ctkp%3ABk9SR5q05I2oYg](https://www.ebay.com/itm/225660205739?epid=6041238838&hash=item348a657aab:g:aUsAAOSwZBVknyuq&amdata=enc%3AAQAIAAAAwA0jtn7WN5cDxOIaWB2Qui%2FaFFIB%2BqlbL5%2BJYQ5ON2MQT7FYjrBk6IsSUSeLzEhBrzE%2BlrjxmZbe5iTmTH%2FfxAgBYy2vUXn2%2BeuJKzRlFqhquQ3Vzg%2FMmCOxMrlMGXY2l6FOgA%2BlQKtc0lWdl%2BCx%2FmjMp%2BZKO8JxI1ZT6HZmV3fILnV%2BCiVkc25H0jPF9lYS1JLNum%2Bfph5z3hczwKVI5ngsVg6f5typLTHo6BNkIyc5%2FgVNu4AwPDf%2FfjIZafo4wQ%3D%3D%7Ctkp%3ABk9SR5q05I2oYg)
​
​
| 2023-07-10T17:38:14 |
https://www.reddit.com/r/LocalLLaMA/comments/14w0np4/3090_cards/
|
Noack1
|
self.LocalLLaMA
| 2023-07-10T17:42:50 | 0 |
{}
|
14w0np4
| false | null |
t3_14w0np4
|
/r/LocalLLaMA/comments/14w0np4/3090_cards/
| false | false |
self
| 13 |
{'enabled': False, 'images': [{'id': 'Tf6AyOgGIyCab77aAxpujOB-73jgRWETES5SaEo2N40', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?width=108&crop=smart&auto=webp&s=7a71699c6a22678297630d80577931edbfb12b06', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?width=216&crop=smart&auto=webp&s=24734d31da4366f9b7ae429f443b73d6d4c18997', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?width=320&crop=smart&auto=webp&s=943d3a48e453cac4b4cd6439677f4932df25702f', 'width': 320}], 'source': {'height': 375, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?auto=webp&s=133776b01f2f01136c7b5eb53237ea643530edb9', 'width': 500}, 'variants': {}}]}
|
Transformers.js - Thoughts?
| 8 |
Hello, what do you guys think of this project? Any experience using it you'd like to share?
​
[https://huggingface.co/docs/transformers.js/index](https://huggingface.co/docs/transformers.js/index)
| 2023-07-10T17:39:45 |
https://www.reddit.com/r/LocalLLaMA/comments/14w0p1p/transformersjs_thoughts/
|
MuffinB0y
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w0p1p
| false | null |
t3_14w0p1p
|
/r/LocalLLaMA/comments/14w0p1p/transformersjs_thoughts/
| false | false |
self
| 8 |
{'enabled': False, 'images': [{'id': 'AX4llt7hMbwyUA30SSMnwQhe8bEYQ1aeE6sVopJ8qEs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=108&crop=smart&auto=webp&s=273a079072cc64ef57183fb7f0c6c640088fcffa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=216&crop=smart&auto=webp&s=98cd1f7822cc345584e613847ac9df1e2cac03a3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=320&crop=smart&auto=webp&s=a4c06b1ed48013295c10a4684570a4589c8b72a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=640&crop=smart&auto=webp&s=f5db78c8f6aade7d3459b8d68b750ce0298c384a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=960&crop=smart&auto=webp&s=3e090e87fd340a6f401f66e3e5e3655f98102bc9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=1080&crop=smart&auto=webp&s=958cfeb13e58c2a8a4fd3fecdf6d9d3aa461b7bc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?auto=webp&s=740ffb1b00addb07c58df569f58902f40265e968', 'width': 1200}, 'variants': {}}]}
|
Advice/recommendations for the smallest (fastest) possible CPU model for document question answering?
| 2 |
Hi, I have a very large group of unstructured text documents that I want to extract data from, and speed is a very big bottleneck.
I'm trying to evaluate all of my options for models that are available as well as consider if I will need to build a specific gpu based server to handle the requests.
Before I begin evaluating models that require GPUs, I want to evaluate the cpu models for their speed and accuracy in data extraction.
Does anyone have any recommendations for fast, cpu models that might be good at extracting data from unstructured text?
Thanks!!
| 2023-07-10T17:44:24 |
https://www.reddit.com/r/LocalLLaMA/comments/14w0to0/advice_recommendations_for_the_smallestfastest/
|
fcname
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w0to0
| false | null |
t3_14w0to0
|
/r/LocalLLaMA/comments/14w0to0/advice_recommendations_for_the_smallestfastest/
| false | false |
self
| 2 | null |
Recommendation: an ingenious frontend & LocalAI
| 1 |
[removed]
| 2023-07-10T18:35:03 |
https://www.reddit.com/r/LocalLLaMA/comments/14w2767/recommendation_an_ingenious_frontend_localai/
|
Evening_Ad6637
|
self.LocalLLaMA
| 2023-07-10T18:42:12 | 0 |
{}
|
14w2767
| false | null |
t3_14w2767
|
/r/LocalLLaMA/comments/14w2767/recommendation_an_ingenious_frontend_localai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'MuROFddyYuvY5xg7xSbDZd2HkvG6m6fxiog6JDl0Oco', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=108&crop=smart&auto=webp&v=enabled&s=b67776eb7cb7607afcbab19a091e1c1aac5dbdc4', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=216&crop=smart&auto=webp&v=enabled&s=8d9669754e9f6740e64c1428dce7705801998827', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=320&crop=smart&auto=webp&v=enabled&s=e4ef13259cd8dbb46d749a4e8d4adcee1a981d8e', 'width': 320}, {'height': 408, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=640&crop=smart&auto=webp&v=enabled&s=b9d63735ab538f63829fefe9702ca9add03b9c3f', 'width': 640}, {'height': 613, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=960&crop=smart&auto=webp&v=enabled&s=c7df6463bd0f72fa49f7cd107eb90a7eb8c3cf35', 'width': 960}, {'height': 690, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=bfffff4e4bd6f5a5e068ac2539fef9ff3d13ff2e', 'width': 1080}], 'source': {'height': 2372, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?auto=webp&v=enabled&s=f0ada066eea1b2e630cbd7b5647d8ea8c073457f', 'width': 3712}, 'variants': {}}]}
|
Generating "stories" with smaller models
| 1 |
So, I'm very excited about the LongLLaMA paper. Yesterday I read a post here that linked to a repo with a notebook you can run on Colab, so I ran it to look at the generation results. What interested me were 2 things: inference speed and how well it "remembered" stuff. The model is the 3B LongLLaMA preview, so not an instruct model or anything.
After seeing the results, I wanted to have some fun, so I gave the model this prompt:
```"It was a very tragic story!"```
And let it run wild.
The result is nothing short of seizure-inducing writing; I had a good laugh reading parts of this:
It was a tragic story! A 51 year old middle-Eastern airline pilot named Adnan Ghalib was killed today in Texas while attempting to land a small private plane in poor weather. I will not describe the circumstances and the details of his final flight, and I am trying so hard not to make a political issue out of it, so I will also refrain from giving you the pilot's name. Many times when you see bad news about something, you can go online, search and see the pilot's last flight log. If it was a commercial airline, you can easily find the pilots last thirty+ flights and get a good look at the pilot. You can usually find the pilot's name in the last paragraph if a crash occurred. You don't always have to know the names or the dates of those flights to know that something has gone very, very wrong! Sometime after the first pilot died, I happened to do a quick search and found Ghalib's flight log site and found a picture of Adnan. He was a bit older than I thought he was, but other than that, he was probably very similar to all the other pilots out there. He had a perfect career with no fatal accidents. This flight log is what I'm really interested in. I saw that they were flying a Mooney M20P, and I've spent a lot of time doing crash analyses with an NTSB investigator and I know that it is one of the best planes to crash in. The 20P is a 172 with some upgrades, which makes it a lot safer and more weather tolerant. The weather over Texas had been bad up until their takeoff. The last twenty days I was looking for reports of weather near Houston that was as bad or worse than conditions today. For about twenty days there were reports of bad flying weather, but as you can see nothing was really dangerous. The weather just became better, some days were not so great, but the last few days it got good again, and that should have been a pretty good indication of changing weather. The last forty days the weather was pretty good around Houston/Galveston, and I don't use weather that good to make landings. I have a few cases right when bad weather started, and several crashes I have done show it wasn't because of bad weather. The weather did get very, very dangerous around the time of the crash, but not dangerous enough. Ghalib was a very good pilot, but his luck just ran out today! It could just be one very bad decision, and he will probably never be the same again...but even with the very very minor luck in not having any sort of weather problems, Adnan could have made a very much safer decision. I have only looked at the flight log once, and at any point if you see something unusual, like a last entry of "flare out", and then you don't see any more entries until three days later, you know it is unusual, but you don't know why. There are a few ways you can know which way something went. You can look at the way they went for flights that were good. You can compare to flights that are going poorly. You could also go back a long time and compare to similar flight. As you can see, Adnan was more than just an average pilot. His first 10 flights were all good, with no fatal accidents. He flew for about a year, and he then had a fatal accident, and he spent the last 8 months of his life involved in aviation crashes. He had 12 deaths, 10 crashes and 5 near misses. He had one "good", one "good" and 4 "good". There was a mix of good and bad flying. In short, Adnan was going about his business one day, got killed and he was an excellent pilot! 
Now I know that I'm being a bit dramatic, but if you were the last person to see him alive, can you figure out that he did not do it or had not done it on purpose? I'm still waiting for confirmation that it was an accident, probably just a very dangerous decision, even though not bad enough to kill him in the crash. I had a pilot die once over the ocean in 70 degrees of weather and he was flying too fast to be in the air and then he lost the power because of the waves and his elevator broke. I think he died a very painful death because he was in a tiny sea-plane flying blind and without any instrumentation. He hit the water like a freight train and his plane just ended up in a big, deep hole. It's the worst thing I've ever seen. He was dead when he first hit the water when I heard him hit it. We had to just pull him out of the water and get him back on land before
I'd like to see some more of these, so go to this repo:
https://github.com/CStanKonrad/long_llama/blob/main/long_llama_colab.ipynb
and post whatever the model ends up writing
| 2023-07-10T18:49:42 |
https://www.reddit.com/r/LocalLLaMA/comments/14w2lm5/generating_stories_with_smaller_models/
|
KillerX629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w2lm5
| false | null |
t3_14w2lm5
|
/r/LocalLLaMA/comments/14w2lm5/generating_stories_with_smaller_models/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'CNhaNGJGLI7aayYDsTUEDCDNA4p_bXbfMm6Vl68IDME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=cd0c7151a10ed61ae8eca41f5943fc302e7c040b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=216&crop=smart&auto=webp&v=enabled&s=ad7cda45d10df83a003643a629824354ca1bc3f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=320&crop=smart&auto=webp&v=enabled&s=4b6f14159eaff21800f94c6b359a2ba91da4f9f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=640&crop=smart&auto=webp&v=enabled&s=147290c65930a6fdfe24454ec419e30597f86e49', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5b36bea2dc4d4192c5639ea2f229fa845366dfdd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=a913cc1510862c27acb9eb0d43c646a81b0a4ed9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?auto=webp&v=enabled&s=d16e109f09fc5b666dd3eefaa815099fefec7d73', 'width': 1200}, 'variants': {}}]}
|
Nvidia T4 shortage on GCP
| 1 |
It appears that there is a scarcity of Nvidia T4 resources in GCP across all regions (at least the ones I tried). If anyone has information regarding their availability, kindly share it.
| 2023-07-10T18:54:33 |
https://www.reddit.com/r/LocalLLaMA/comments/14w2qd9/nvidia_t4_shortage_on_gcp/
|
Effective_Twist6995
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w2qd9
| false | null |
t3_14w2qd9
|
/r/LocalLLaMA/comments/14w2qd9/nvidia_t4_shortage_on_gcp/
| false | false |
self
| 1 | null |
4070/4090 for more vram or a separate 3090 on a separate machine to run ai. *opinions please*
| 1 |
Bit of a rant but opinions are wanted, please.
I recently bought a 3090 to pair with my 4090 to run 65B models, but I had no end of issues setting it up (case stuffed too close, only a 1000W power supply, blue screens), and it felt like a waste having a 4090 sat in an AI machine. So, in frustration, I built two separate machines: my main/gaming machine with the 4090/4070 currently in it, and now I have a 3090 lying around that I can either try to resell or build into a spare PC to run AI on (5800X, 64GB RAM, 3090).
I know some might say run AI on the 4090, lol, but I play a lot of games. I kinda was doing that on the 4070/4090 setup, but I'm thinking that if I set up the 3090 for AI I could sell the 4070 and use the 4090 for gaming... though with the 4070/4090 I do have access to 36GB of VRAM, if that's ever going to be a benefit in the future.
So my question is: is it worth building that 3090 into a second machine (so I can keep it running?) and maybe selling the 4070, or should I just keep the 4070/4090 combo and try to get my money back on the 3090?
I can only fit the 4090/3090 in the 5800X DDR4 machine.
I now have a 7950X3D + 64GB RAM... so while I can put the 4070 in there to game on, that's what I was doing before all the blue screens made me give up in frustration... plus it feels like a 4090 in the server is a waste of cash, well, even more of a waste than I've already spent!
Any opinions are welcome. Does anyone run AI full-time? Is it worth the power cost to have it running all the time, or should I just reload between games and such?
| 2023-07-10T21:20:24 |
https://www.reddit.com/r/LocalLLaMA/comments/14w6r84/40704090_for_more_vram_or_a_separate_3090_on_a/
|
fluffywuffie90210
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w6r84
| false | null |
t3_14w6r84
|
/r/LocalLLaMA/comments/14w6r84/40704090_for_more_vram_or_a_separate_3090_on_a/
| false | false |
self
| 1 | null |
Prompt templates
| 1 |
Hi,
I'm trying to find the best prompt template for pairing an instruction with context for a 7B Vicuna 1.1 model. In the 'Prompt Templates' part of this article [https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_4-bit\_model\_requirements\_for\_llama](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_4-bit_model_requirements_for_llama)
I found this:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
*your text here*
### Input:
*your text here*
### Response:
(include newline)
It works really well even though the article mentioned that this template is meant for Alpaca models. Now, I'm trying to adapt it so that the model responds 'I don't know' if the input doesn't contain the information asked for in the instruction. I tried a lot of variations of this template unsuccessfully. Do you have an idea of a variation that would help me accomplish this?
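For example, one of the variations I tried simply states the fallback behaviour in the preamble (just one attempt, nothing verified to work reliably with Vicuna 1.1):
Below is an instruction that describes a task, paired with an input that provides further context. Using only the information in the input, write a response that appropriately completes the request. If the input does not contain the information needed to answer, respond exactly with "I don't know".
### Instruction:
*your text here*
### Input:
*your text here*
### Response:
(include newline)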
| 2023-07-10T21:41:59 |
https://www.reddit.com/r/LocalLLaMA/comments/14w7cjt/prompt_templates/
|
SanglierAlbinos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w7cjt
| false | null |
t3_14w7cjt
|
/r/LocalLLaMA/comments/14w7cjt/prompt_templates/
| false | false |
default
| 1 | null |
How does context extension work, in simple words?
| 1 |
I have a vague understanding of how an LLM works: oversimplified, it estimates the probability of the next word after some combination of previous words.
Let's imagine we trained a model on a really small context, like 10 words. So it can solve a simple prompt: "My name is Bob. Say my name", and the model can predict that the next word should be "Bob". It can predict it because it takes into account all previous words, and given we trained the model on a context length of 10 words, it has weights only for chains of words of length 10.
Can somebody explain in simple words how context extension works in an LLM without retraining the model on longer contexts? I can make a trivial suggestion: when the context comes close to its limit, some mechanism outside of the model asks the model to condense the whole context into something more compact. But this idea does not seem effective or precise (and not scalable, for that matter), and models with extended context seem to be able to handle much bigger contexts than my trivial solution suggests, so what is it?
Is it some clever looping of token connections inside the model?
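For context, the only concrete mechanism I've seen mentioned is rescaling the rotary position embeddings ("RoPE scaling"). Here is a toy sketch of what I think that means — the context lengths are hypothetical and my picture may well be wrong:

```python
# Rough picture of linear RoPE interpolation: positions beyond the trained
# context are divided by a scale factor, so a long context is squeezed back
# into the position range the model was trained on.
def rope_angles(position: float, dim: int, base: float = 10000.0):
    # Rotary-embedding angles for one (possibly fractional) position.
    return [position / (base ** (2 * i / dim)) for i in range(dim // 2)]

trained_ctx = 2048              # hypothetical original training context
extended_ctx = 8192             # hypothetical target context
scale = extended_ctx / trained_ctx

raw_position = 6000                      # outside the trained range
scaled_position = raw_position / scale   # 1500, back inside the trained range
print(rope_angles(scaled_position, dim=8)[:3])
```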
| 2023-07-10T22:34:11 |
https://www.reddit.com/r/LocalLLaMA/comments/14w8ppt/how_content_extension_works_in_simple_words/
|
uti24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14w8ppt
| false | null |
t3_14w8ppt
|
/r/LocalLLaMA/comments/14w8ppt/how_content_extension_works_in_simple_words/
| false | false |
self
| 1 | null |
Is it possible to make two LLMs talk to each other?
| 1 |
I've been researching LLMs (large language models) and a question has come to my mind: is it possible to make two LLMs interact and have a conversation with each other? I'm fascinated by the idea. Is it possible, and how can I achieve it? Could you please give me any suggestions? Thank you in advance.
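In case it helps frame suggestions, the naive pattern I'm imagining is just a loop that passes the running transcript back and forth. A toy sketch with stand-in functions (not any specific library's API — a real version would call a local inference backend instead):

```python
# Toy loop that makes two "LLMs" converse by feeding each one the running
# transcript and appending its reply. The lambdas are placeholders for real
# calls to local backends (llama.cpp server, text-generation-webui API, etc.).

def make_bot(name, style):
    def reply(history):
        # Stand-in for a real model call on the transcript so far.
        return f"{style} (reacting to: {history[-1] if history else 'nothing yet'})"
    return name, reply

bot_a = make_bot("Alice", "Asks a curious follow-up question")
bot_b = make_bot("Bob", "Gives a short factual answer")

history = ["Alice: Hi Bob, what are you thinking about today?"]
speakers = [bot_b, bot_a]          # Bob answers first, then they alternate

for turn in range(4):
    name, reply = speakers[turn % 2]
    history.append(f"{name}: {reply(history)}")

print("\n".join(history))
```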
| 2023-07-10T23:43:55 |
https://www.reddit.com/r/LocalLLaMA/comments/14wafkn/is_it_possible_to_make_two_llms_talk_to_each_other/
|
mashimaroxc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wafkn
| false | null |
t3_14wafkn
|
/r/LocalLLaMA/comments/14wafkn/is_it_possible_to_make_two_llms_talk_to_each_other/
| false | false |
self
| 1 | null |
Seeking Guidance on Hosting and Running Falcon 40b Instruct Model for Research Purposes
| 1 |
Hi, for my research purposes, I want to run the Falcon 40B Instruct model.
The participants will just interact with the model to gather some information for their task.
May I know if there is any online website where it has been hosted for research purposes?
Or can someone please explain, or point me to a tutorial on, how I can run this model on RunPod?
The GPTQ model is very slow on a RunPod A6000. Will it run faster on a better GPU by any chance?
| 2023-07-10T23:51:03 |
https://www.reddit.com/r/LocalLLaMA/comments/14waltg/seeking_guidance_on_hosting_and_running_falcon/
|
maheshvulasala
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14waltg
| false | null |
t3_14waltg
|
/r/LocalLLaMA/comments/14waltg/seeking_guidance_on_hosting_and_running_falcon/
| false | false |
self
| 1 | null |
Fine tuning model for code refactoring
| 1 |
My supervisor wants me to refactor some code by fine-tuning some open-source models. What model would you recommend? (The code is Python.)
I have tried GPT-Neo and could not get the results I was hoping for!
| 2023-07-11T00:16:31 |
https://www.reddit.com/r/LocalLLaMA/comments/14wb7u7/fine_tuning_model_for_code_refactoring/
|
learner_beginner
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wb7u7
| false | null |
t3_14wb7u7
|
/r/LocalLLaMA/comments/14wb7u7/fine_tuning_model_for_code_refactoring/
| false | false |
self
| 1 | null |
GPT-4 details leaked
| 1 |
[https://threadreaderapp.com/thread/1678545170508267522.html](https://threadreaderapp.com/thread/1678545170508267522.html)
Here's a summary:
GPT-4 is a language model with approximately 1.8 trillion parameters across 120 layers, 10x larger than GPT-3. It uses a Mixture of Experts (MoE) model with 16 experts, each having about 111 billion parameters. Utilizing MoE allows for more efficient use of resources during inference, needing only about 280 billion parameters and 560 TFLOPs, compared to the 1.8 trillion parameters and 3,700 TFLOPs required for a purely dense model.
The model is trained on approximately 13 trillion tokens from various sources, including internet data, books, and research papers. To reduce training costs, OpenAI employs tensor and pipeline parallelism, and a large batch size of 60 million. The estimated training cost for GPT-4 is around $63 million.
While more experts could improve model performance, OpenAI chose to use 16 experts due to the challenges of generalization and convergence. GPT-4's inference cost is three times that of its predecessor, DaVinci, mainly due to the larger clusters needed and lower utilization rates. The model also includes a separate vision encoder with cross-attention for multimodal tasks, such as reading web pages and transcribing images and videos.
OpenAI may be using speculative decoding for GPT-4's inference, which involves using a smaller model to predict tokens in advance and feeding them to the larger model in a single batch. This approach can help optimize inference costs and maintain a maximum latency level.
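For anyone unfamiliar with speculative decoding, here is a rough, self-contained toy sketch of the idea — the stand-in "models" and the accept rule below are simplifications for illustration, not OpenAI's actual scheme:

```python
import random

# Toy speculative decoding: a cheap "draft" model proposes a few tokens, the
# big "target" model verifies them in one batched pass, and only the agreeing
# prefix is kept (real systems use a rejection-sampling acceptance rule).
VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def draft_model(prefix, k=4):
    # Stand-in for a small, fast model proposing k speculative tokens.
    return [random.choice(VOCAB) for _ in range(k)]

def target_model(prefix, proposed):
    # Stand-in for the large model: verifies all proposed tokens at once.
    return [random.random() < 0.7 for _ in proposed]

tokens = ["the"]
while len(tokens) < 20:
    proposal = draft_model(tokens)
    accepted = target_model(tokens, proposal)
    for tok, ok in zip(proposal, accepted):
        if not ok:
            break
        tokens.append(tok)
    else:
        continue                          # all speculative tokens accepted
    tokens.append(random.choice(VOCAB))   # target model supplies the correction

print(" ".join(tokens[:20]))
```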
| 2023-07-11T00:34:13 |
https://www.reddit.com/r/LocalLLaMA/comments/14wbmio/gpt4_details_leaked/
|
HideLord
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wbmio
| false | null |
t3_14wbmio
|
/r/LocalLLaMA/comments/14wbmio/gpt4_details_leaked/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'UIX0vFPKedHxyW4Fccq45yrkc-s6LZB2Pri0j9xdOYg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=eb3531b8335413d0157c530d25bd467c29d3e142', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=56eafd2c02eb138704b1f79bbfc980575a2288f9', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ca39c9cb0d3cd51ef184f46ac7e8e376c45562c5', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=ce0e8ea4cb6b9a48e3c770094cb2b5d6e29d3561', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=4e00f8e03f37d8d26b5e5b9c88dca4afb267de9f', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=aa3c3b47caad609a14b98aca5d54c38e8ec227c4', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?auto=webp&v=enabled&s=98e390fda92c3ffe9efb0329543a143e0152fc17', 'width': 1200}, 'variants': {}}]}
|
How Do I Change The Max Context Window In Llama-cpp-python
| 1 |
Hey subreddit. I'm a noob here but had a question that I couldn't find the answer to anywhere. I know you can change the max context window in llama.cpp, but I have no clue how to do it in llama-cpp-python and there's no info online about it. Any help is awesome!
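Edit, in case anyone lands here later: the context window in llama-cpp-python is set when constructing the `Llama` object, via the `n_ctx` parameter. A minimal sketch (the model path is just a placeholder for your own GGML file):

```python
from llama_cpp import Llama

# n_ctx sets the maximum context window (in tokens) for this session.
llm = Llama(model_path="./models/your-model.ggmlv3.q4_0.bin", n_ctx=2048)

output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```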
| 2023-07-11T00:41:43 |
https://www.reddit.com/r/LocalLLaMA/comments/14wbsut/how_do_i_change_the_max_context_window_in/
|
Norok_The_Diablo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wbsut
| false | null |
t3_14wbsut
|
/r/LocalLLaMA/comments/14wbsut/how_do_i_change_the_max_context_window_in/
| false | false |
self
| 1 | null |
Is there a UI that can limit LLM tokens to a preset list?
| 3 |
I’m working with LLMs and sometimes I want one of a preset list of answers, but I haven’t found a way to limit the tokens that are allowed for consideration like that. For example, if I want the LLM to pick a mood/tone for some text, but I need it to choose one of a specific list of moods/tones like “upset” or “confused”, is there a way to limit the tokens to only those options? Is there a UI that can do this? If not, what would be the best way to achieve this?
I'm a software developer, so for Stable Diffusion I know how to write an extension or addon for Automatic1111, but I haven't worked with the LLM stuff as much and I'm not sure if there's a UI I could easily do this with. From what I understand (and correct me if I'm wrong), this would be in the same vein as editing how the temperature variable operates, and it would probably work best with the temperature set to make the output deterministic.
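The closest thing I've sketched out myself so far is doing it in code rather than a UI, with a custom logits processor in Hugging Face transformers that masks every token outside the allowed set — the model name and mood list below are just examples, and this only constrains the first generated token (multi-token labels would need more work):

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class AllowedTokensProcessor(LogitsProcessor):
    """Sets the logits of every token outside `allowed_ids` to -inf,
    so generation can only pick from the preset list."""
    def __init__(self, allowed_ids):
        self.allowed_ids = allowed_ids

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed_ids] = 0
        return scores + mask

model_name = "gpt2"  # placeholder; any causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

moods = [" upset", " confused", " happy"]    # leading space matters for many BPE vocabs
allowed = [tok.encode(m)[0] for m in moods]  # first token of each mood word

inputs = tok("The overall tone of this message is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=1,
    do_sample=False,  # deterministic, as suggested above
    logits_processor=LogitsProcessorList([AllowedTokensProcessor(allowed)]),
)
print(tok.decode(out[0][-1]))
```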
| 2023-07-11T01:58:57 |
https://www.reddit.com/r/LocalLLaMA/comments/14wdjds/is_there_a_ui_that_can_limit_llm_tokens_to_a/
|
Sixhaunt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wdjds
| false | null |
t3_14wdjds
|
/r/LocalLLaMA/comments/14wdjds/is_there_a_ui_that_can_limit_llm_tokens_to_a/
| false | false |
self
| 3 | null |
How much data would I require to train a character?
| 1 |
I want to train a character from an anime, like Hoshino Ai.
​
[Hoshino Ai](https://preview.redd.it/h8ncqkqt99bb1.png?width=130&format=png&auto=webp&s=d4d4b0899e67c785fac76339afc7ef7291a1d9bb)
| 2023-07-11T03:31:56 |
https://www.reddit.com/r/LocalLLaMA/comments/14wfkvd/how_much_data_would_i_require_to_train_a_character/
|
Sad-Lengthiness-6938
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wfkvd
| false | null |
t3_14wfkvd
|
/r/LocalLLaMA/comments/14wfkvd/how_much_data_would_i_require_to_train_a_character/
| false | false | 1 | null |
|
Open-Source, High Quality ERP Chatbot
| 1 |
Hey everyone! I am working on creating an open-source, high-quality ERP chatbot.
I've built a unique dataset by scraping a huge amount of human-written ERP-themed content. It has everything from GFE to very specific topics, and from all types of relationships. The current dataset size is just over 21 million tokens.
I want to train a WizardLM Uncensored 30B model on this large dataset, and then fine tune it on a smaller, chat-formatted, curated dataset that I have been working on as well. I will release the datasets and models when finished.
I also plan to perform additional fine-tuning so the model can actually creatively generate relevant images based upon the chat conversation using StableDiffusion prompts!
If you are interested, please PM me! If you have a GPU setup you would be willing to use for training the model, server credits, or anything similar it would be really helpful.
As far as I know, no one else is working on something like this. The dataset I have is amazing, and I can't wait to see what happens when a top model is trained with it!
| 2023-07-11T05:31:03 |
https://www.reddit.com/r/LocalLLaMA/comments/14whxvk/opensource_high_quality_erp_chatbot/
|
ilovejoi36912
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14whxvk
| false | null |
t3_14whxvk
|
/r/LocalLLaMA/comments/14whxvk/opensource_high_quality_erp_chatbot/
| false | false |
self
| 1 | null |
LLM intelligence is more feminine than masculine?
| 1 |
Is there anyone else who feels that the intelligence resulting from generative AI has a feminine feel to it? I mean at first it sounds pretty stodgy and masculine -- "it is important to note..." and so on -- but after you keep prompting it and going round in circles it feels more like a woman than a man?
But I guess it depends on the model. Flan-T5 seems pretty fact-oriented because it is an instructional model. Other models like Guanaco and Vicuna are more creative and tend to go off on crazy tangents. Their training data appears to be full of weird conversations on the Internet that may jump in at any moment. Remind you of anyone who has spent too much time on social media?
| 2023-07-11T06:05:24 |
https://www.reddit.com/r/LocalLLaMA/comments/14wilbt/llm_intelligence_is_more_feminine_than_masculine/
|
NodeTraverser
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wilbt
| false | null |
t3_14wilbt
|
/r/LocalLLaMA/comments/14wilbt/llm_intelligence_is_more_feminine_than_masculine/
| false | false |
self
| 1 | null |
Any empirical results when it comes to increasing context length? Is there a deterioration in text generation quality?
| 1 |
Given all the recent papers that claim to allow models to extrapolate to much longer context lengths (the latest being DeepMind's Focused Transformer), has anyone taken a look at the generation quality of these models when applying any of these methods? Are there any resources where I can read up on each and figure out what the trade-offs are for using each one? I'm very interested in increasing the context length of my models, but there's quite a lot of stuff out there and I don't know what works and what doesn't...
| 2023-07-11T07:24:10 |
https://www.reddit.com/r/LocalLLaMA/comments/14wk3c2/any_empirical_results_when_it_comes_to_increasing/
|
Ok_Coyote_8904
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wk3c2
| false | null |
t3_14wk3c2
|
/r/LocalLLaMA/comments/14wk3c2/any_empirical_results_when_it_comes_to_increasing/
| false | false |
self
| 1 | null |
What is LLM Agent? and How to Build LLM Agents to Improve CX
| 1 |
[removed]
| 2023-07-11T09:31:41 |
https://www.reddit.com/r/LocalLLaMA/comments/14wmcxu/what_is_llm_agent_and_how_to_build_llm_agents_to/
|
Floatbot_Inc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wmcxu
| false | null |
t3_14wmcxu
|
/r/LocalLLaMA/comments/14wmcxu/what_is_llm_agent_and_how_to_build_llm_agents_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'rJCm1Yo7cxRIo_UiS4P4cgDvF_fhbYzY6nQppHgmC4c', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=47c2fcaa4d79127fcf7a50f05946c3a2817903a6', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=9ed6c85bba95b5a1e85ab2f48c7361648203cc10', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=5279c0f712e6e5fefce1b8efc6d9cd4e49cfb7f2', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=8a9f1bb301a7a04bb68e9a7d3d812e1584d63b96', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=60328a8b61a671f0e8f8b66552e28fa8f3197004', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=fb36cdcbf0287513c98ddb31e1f786d93784ab8d', 'width': 1080}], 'source': {'height': 749, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?auto=webp&v=enabled&s=cae8f02154f3eecf0890190f2135ce38e29cb8d9', 'width': 1125}, 'variants': {}}]}
|
Myth of Context Length - Jim Fan
| 1 | 2023-07-11T09:44:12 |
https://twitter.com/DrJimFan/status/1678460065811136512
|
saintshing
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
14wmkwv
| false |
{'oembed': {'author_name': 'Jim Fan', 'author_url': 'https://twitter.com/DrJimFan', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I'm calling the Myth of Context Length:<br><br>Don't get too excited by claims of 1M or even 1B context tokens. You know what, LSTMs already achieve infinite context length 25 yrs ago!<br><br>What truly matters is how well the model actually uses the context. It's easy to make seemingly wild… <a href="https://t.co/FKqjs3xb6O">pic.twitter.com/FKqjs3xb6O</a></p>— Jim Fan (@DrJimFan) <a href="https://twitter.com/DrJimFan/status/1678460065811136512?ref_src=twsrc%5Etfw">July 10, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/DrJimFan/status/1678460065811136512', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_14wmkwv
|
/r/LocalLLaMA/comments/14wmkwv/myth_of_context_length_jim_fan/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '2ZFDBQFVUKOun-YvrA37NLqxOmmO36fB8LtcVIjtE-4', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/8CinACgI2LSqrbcLRa9oTSxGFsgXM7MgNk7qHYJuoCE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9e80801ce7a474165063b1b5940e03aa0b38ba35', 'width': 108}], 'source': {'height': 131, 'url': 'https://external-preview.redd.it/8CinACgI2LSqrbcLRa9oTSxGFsgXM7MgNk7qHYJuoCE.jpg?auto=webp&v=enabled&s=aefe8a0e10f6285599819fc1a631d63098f43980', 'width': 140}, 'variants': {}}]}
|
||
Agents for browser
| 1 |
Do we have any available projects that use an LLM as an agent to control a browser, navigate websites, take actions, etc.?
| 2023-07-11T10:14:53 |
https://www.reddit.com/r/LocalLLaMA/comments/14wn5tf/agents_for_browser/
|
Raise_Fickle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wn5tf
| false | null |
t3_14wn5tf
|
/r/LocalLLaMA/comments/14wn5tf/agents_for_browser/
| false | false |
self
| 1 | null |
Wizard-Vicuna-13b-SUPERHOT, Mac M2 16gb unified Ram. Is it normal to get responses in 1-2 minutes? What Text Generation UI Settings can help me speed it up?
| 1 |
Stats in the title. I even tried working with 7B and it was somehow even slower. I'm very new to all of this and have tried researching it, going through the sub's wikis, etc., but I have to admit a lot of the vocabulary is a bit overwhelming. I'm tech savvy - I know how to code frontend and backend, and I've worked with Python before.
But I don't know if I'm using 4-bit or 8-bit, or how to find out. And I know there are other settings that might help it along, but I'm not sure where to start. I have been researching it, googling, checking the sub, reading through the wikis.
Anyone want to point me in the right direction? I'd even settle for getting 7B to work in a reasonable amount of time.
| 2023-07-11T10:30:13 |
https://www.reddit.com/r/LocalLLaMA/comments/14wng3q/wizardvicuna13bsuperhot_mac_m2_16gb_unified_ram/
|
Ok-Training-7587
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
14wng3q
| false | null |
t3_14wng3q
|
/r/LocalLLaMA/comments/14wng3q/wizardvicuna13bsuperhot_mac_m2_16gb_unified_ram/
| false | false |
self
| 1 | null |