Dataset schema (one column per field):

| column | type | range / values |
|---|---|---|
| title | string | length 1 to 300 |
| score | int64 | 0 to 8.54k |
| selftext | string | length 0 to 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 (nullable) |
| url | string | length 0 to 878 |
| author | string | length 3 to 20 |
| domain | string | length 0 to 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18 |
| gilded | int64 | 0 to 2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646 to 1.8k (nullable) |
| name | string | length 10 |
| permalink | string | length 33 to 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4 to 213 |
| ups | int64 | 0 to 8.54k |
| preview | string | length 301 to 5.01k (nullable) |
What is the best way to allow a client to upload and embed a prompt from a file dynamically?
| 1 |
.
| 2023-09-25T00:43:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16rdw4y/how_is_the_best_way_to_allow_a_client_upload_and/
|
YoungAstronautPast
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rdw4y
| false | null |
t3_16rdw4y
|
/r/LocalLLaMA/comments/16rdw4y/how_is_the_best_way_to_allow_a_client_upload_and/
| false | false |
self
| 1 | null |
Do you guys notice that certain A.I. models have an attitude?
| 1 |
There are countless models released by TheBloke, but recently a particular model, supposedly well versed in Python, was so full of attitude during conversations that it genuinely made me mad. I didn't add an avatar character to it; I am using it with the defaults.
I have given it many chances across multiple interactions, but it is the only model that actually pisses me off again and again, across different sessions. I can't believe I get angry at a file!
Do you guys experience the same thing?
| 2023-09-25T00:45:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16rdxjs/do_you_guys_notice_that_certain_ai_models_have_an/
|
Vitamin_C_is_awesome
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rdxjs
| false | null |
t3_16rdxjs
|
/r/LocalLLaMA/comments/16rdxjs/do_you_guys_notice_that_certain_ai_models_have_an/
| false | false |
self
| 1 | null |
Text-to-Speech server that uses Tortoise. 7-8 second generation times on an RTX 3090
| 1 | 2023-09-25T01:18:21 |
https://github.com/Pandaily591/OnlySpeakTTS
|
FromFutures
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16relk0
| false | null |
t3_16relk0
|
/r/LocalLLaMA/comments/16relk0/texttospeech_server_that_uses_tortoise_78_second/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '1aGtAr3thZnLQPU_aPYsSQ4UgS1-9-4LOqKiH_IEB98', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?width=108&crop=smart&auto=webp&s=441aa5457b93d898619882fcb6ed6157833b0bae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?width=216&crop=smart&auto=webp&s=3c56bf19c03873c42501f74081508b9c9d38b9fa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?width=320&crop=smart&auto=webp&s=4e92a0f404bb3550cedbca638ee1f5760a79ad85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?width=640&crop=smart&auto=webp&s=a81a5edb23aa53571fcdf9d0db32a2ebe1137578', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?width=960&crop=smart&auto=webp&s=a85e3e944f34a26fd1c50eae962bdf5584be5194', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?width=1080&crop=smart&auto=webp&s=2a6e57472f37806fc55f801d693f5a3f20fdcbdd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HjnbrobMKkHKWGJ7DW0DHvuMOaI7GXmQTQouhh9lWt4.jpg?auto=webp&s=2aa3bb455be8c024a3471f7981b715e99bdd7a4f', 'width': 1200}, 'variants': {}}]}
|
||
How do I create a GGUF model file?
| 1 | 2023-09-25T01:39:18 |
https://www.secondstate.io/articles/convert-pytorch-to-gguf/
|
smileymileycoin
|
secondstate.io
| 1970-01-01T00:00:00 | 0 |
{}
|
16rf0vd
| false | null |
t3_16rf0vd
|
/r/LocalLLaMA/comments/16rf0vd/how_do_i_create_a_gguf_model_file/
| false | false |
default
| 1 | null |
|
Share your model collection :-)
| 1 |
Do you hoard models? Share your list here!
I definitely hoard models, and this is what's in my collection so far:
Models for using:
airoboros-13B-gpt4-1.3-GGML.git
Airoboros-33B-2.1-GGUF.git
airoboros-33B-gpt4-1.4-GGML.git
Airoboros-L2-13B-2.1-GGUF.git
airoboros-l2-70B-GPT4-2.0-GGUF.git
Alpaca-Lora-30B-GGML.git
AlpacaCielo-13B-GGML.git
Asclepius-13B-GGUF.git
BLOOMChat-176B-v1-GPTQ.git
bloomz-176B-GPTQ.git
Carl-13B-GGML.git
Carl-33B-GGML.git
Carl-Llama-2-13B-GGUF.git
Chronorctypus-Limarobormes-13b-GGML.git
chronos-13B-GGML.git
chronos-hermes-13B-GGML.git
CodeLlama-7B-GGUF.git
CodeLlama-7B-Instruct-GGUF.git
CodeLlama-13B-GGUF.git
CodeLlama-13B-Python-GGUF.git
CodeLlama-34B-GGUF.git
CodeLlama-34B-Instruct-GGUF.git
CodeLlama-34B-Python-GGUF.git
CodeUp-Llama-2-13B-Chat-HF-GGML.git
EverythingLM-13B-16K-GGML.git
falcon-40b-instruct-GPTQ.git
Falcon-180B-GGUF.git
galpaca-30B-GPTQ-4bit-128g.git
Genz-70b-GGUF.git
GPlatty-30B-GGML.git
gpt4-x-vicuna-13B-GGML.git
GPT4All-13B-snoozy-GGML.git
Griffin-3B-GGML.git
guanaco-7B-GGML.git
guanaco-7B-GPTQ.git
Guanaco-7B-SuperHOT-8K-GGML.git
guanaco-13B-GGML.git
Guanaco-13B-Uncensored-GGUF.git
guanaco-33B-GGUF.git
hippogriff-30b-chat-GGML.git
Karen_theEditor_13B-GGML.git
koala-7B-GGML.git
koala-13B-GGML.git
LLaMa-7B-GGML.git
LLaMa-13B-GGML.git
LLaMa-30B-GGML.git
Llama-2-7B-32K-Instruct-GGML.git
Llama-2-7B-Chat-ggml.git
Llama-2-7B-ggml.git
Llama-2-7B-GGUF.git
Llama-2-7B-LoRA-Assemble-GGUF.git
Llama-2-13B-Chat-ggml.git
Llama-2-13B-ggml.git
Llama-2-70B-GGUF.git
Llama-2-70B-Orca-200k-GGUF.git
Llama2-70B-OASST-SFT-v10-GGUF.git
LlongOrca-7B-16K-GGML.git
LlongOrca-13B-16K-GGML.git
LongChat-7B-GGML.git
LongChat-13B-GGML.git
Luban-13B-GGUF.git
MAmmoTH-70B-GGUF.git
Manticore-13B-Chat-Pyg-Guanaco-GGML.git
Manticore-13B-GGUF.git
Marx-3b-GGML.git
medalpaca-13B-GGML.git
medalpaca-13B-GGUF.git
medalpaca-13B-GPTQ-4bit.git
minotaur-13B-fixed-GGUF.git
Minotaur-13B-fixed-SuperHOT-8K-GGML.git
minotaur-13B-GGML.git
minotaur-15B-GGML.git
mpt-7b-chat-GGML.git
MPT-7B-GGML.git
MPT-7B-Instruct-GGML.git
MPT-7B-Storywriter-GGML.git
MPT-30B-Dolphin-v2-GGML.git
Mythical-Destroyer-V2-L2-13B-GGUF.git
NewHope-GGML.git
Nous-Hermes-Llama2-GGML.git
open-llama-13b-open-instruct-GGML.git
OpenAssistant-Llama2-13B-Orca-8K-3319-GGML.git
orca_mini_3B-GGML.git
orca_mini_7B-GGML.git
orca_mini_13B-GGML.git
orca_mini_v2_7B-GGML.git
orca_mini_v3_7B-GGML.git
orca_mini_v3_7B-GGUF.git
orca_mini_v3_13B-GGML.git
orca_mini_v3_70B-GGML.git
PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML.git
Project-Baize-v2-7B-GGML.git
Project-Baize-v2-13B-GGML.git
PuddleJumper-13B-GGUF.git
PuddleJumper-13B-V2-GGUF.git
Puma-3b-GGML.git
qCammel-70-x-GGUF.git
Samantha-7B-GGML.git
samantha-13B-GGML.git
Samantha-13B-SuperHOT-8K-GGML.git
Samantha-33B-SuperHOT-8K-GGML.git
Samantha-1-1-Llama-7B-SuperHOT-8K-GGML.git
Samantha-1.11-13B-GGUF.git
Samantha-1.11-70B-GGUF.git
Scarlett-7B-GGML.git
Scarlett-13B-GGML.git
scarlett-33B-GGML.git
selfee-7B-GGML.git
Selfee-7B-SuperHOT-8K-GGML.git
Selfee-13B-GGML.git
Selfee-13B-SuperHOT-8K-GGML.git
Speechless-Llama2-13B-GGUF.git
stable-vicuna-13B-GPTQ.git
starchat-beta-GGML.git
starcoderplus-GGML.git
Stheno-L2-13B-GGUF.git
SuperPlatty-30B-GGML.git
Synthia-13B-GGUF.git
Synthia-34B-v1.2-GGUF.git
Synthia-70B-v1.2-GGUF.git
Vicuna-7B-CoT-SuperHOT-8K-GGML.git
Vicuna-7B-v1-3-SuperHOT-8K-GGML.git
vicuna-7B-v1.5-GGML.git
vicuna-13B-v1.5-16K-GGML.git
vicuna-13B-v1.5-GGML.git
Vicuna-33B-1-3-SuperHOT-8K-GGML.git
vicuna-33B-GGML.git
VicUnlocked-30B-LoRA-GGML.git
Wizard-Vicuna-7B-Uncensored-GGML.git
Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GGML.git
wizard-vicuna-13B-SuperHOT-8K-GGML.git
WizardCoder-15B-1.0-GGML.git
WizardLM-7B-uncensored-GGML.git
WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML.git
WizardLM-13B-V1.0-Uncensored-GGML.git
WizardLM-33B-V1.0-Uncensored-GGML.git
WizardLM-70B-V1.0-GGML.git
WizardLM-Uncensored-Falcon-40B-3bit-GPTQ.git
WizardMath-70B-V1.0-GGML.git
Yarn-Llama-2-13B-128K-GGUF.git
Zarablend-L2-7B-GGML.git
Zarablend-MX-L2-7B-GGML.git
Models for possible fine-tuning:
airoboros-13b-gpt4-1.3.git
airoboros-13b.git
bloomz.git
Chronorctypus-Limarobormes-13b.git
chronos-13b.git
EverythingLM-13b-16k.git
flan-t5-base.git
flan-t5-large.git
flan-t5-xl.3B.git
flan-t5-xxl.11B.git
galpaca-30b.git
gpt4-x-vicuna-13b.git
gpt4all-13b-snoozy.git
Griffin-3B.git
hippogriff-30b-chat.git
Llama-2-7B-32K-Instruct.git
LlongOrca-13B-16k.git
LlongOrca-7B-16k.git
manticore-13b.git
Marx-3B.git
Minotaur-13B-fixed-SuperHOT-8K-fp16.git
minotaur-13b.git
mpt-7b-storywriter.git
mpt-30b-dolphin-v2.git
open_llama_3b.git
open_llama_3b_easylm.git
open_llama_3b_v2.git
open_llama_7b_easylm.git
Small models for misc purposes:
bloomz-560m-ggml.git
cerebras-111M-ggml.git
dante_1.3b_3_ggml.git
flan-t5-small.git
ggml-eliai_256m.git
ggml-polyglot-ko-5.8B-q4.git
GPT2-Medium-Alpaca-355m-ggml.git
RedPajama-INCITE-Base-v1-3B-ggml-q8_0.git
RedPajama-INCITE-Chat-3B-v1-GGML.git
StableLM-Base-Alpha-3B-ggml-q5_1.git
| 2023-09-25T02:22:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16rfwt2/share_your_model_collection/
|
ttkciar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rfwt2
| false | null |
t3_16rfwt2
|
/r/LocalLLaMA/comments/16rfwt2/share_your_model_collection/
| false | false |
self
| 1 | null |
Deploying Gradio Web UI for LLMs on a Shared Linux Workstation Without Admin Access
| 1 |
[removed]
| 2023-09-25T02:54:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16rgkmu/deploying_gradio_web_ui_for_llms_on_a_shared/
|
peace-of-me
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rgkmu
| false | null |
t3_16rgkmu
|
/r/LocalLLaMA/comments/16rgkmu/deploying_gradio_web_ui_for_llms_on_a_shared/
| false | false |
self
| 1 | null |
LlamaTor: Decentralized AI Model Distribution
| 1 |
[removed]
| 2023-09-25T03:33:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16rhd5i/llamator_decentralized_ai_model_distribution/
|
Nondzu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rhd5i
| false | null |
t3_16rhd5i
|
/r/LocalLLaMA/comments/16rhd5i/llamator_decentralized_ai_model_distribution/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'RjJWztBiFGWSiX83fH3_kau7edsk9G7G5e6O7I6Yv9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?width=108&crop=smart&auto=webp&s=9bb80945862dec4f4bd89dbb5b4860d6bb853aea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?width=216&crop=smart&auto=webp&s=c17ec67853683152705be451ecf69cfdf8a5c630', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?width=320&crop=smart&auto=webp&s=dd9a9553be2d7a114061ba820761ac4b16b96641', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?width=640&crop=smart&auto=webp&s=b352abc16ebed863f5a08edb1299dd134f6b7ab0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?width=960&crop=smart&auto=webp&s=13caf72478cb95c3d3f1f7d0da67e3ecb99232cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?width=1080&crop=smart&auto=webp&s=9f8b5ea4434f72ae45f598ba8c5cc24319bd424b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yaslX0q4G-CM1XlamcSLSD-RtaEIgu_ai62UxE4DOI0.jpg?auto=webp&s=af744429cf62496109d5532fa5630a22f26c2cf1', 'width': 1200}, 'variants': {}}]}
|
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution
| 1 |
[removed]
| 2023-09-25T03:55:29 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
16rhrso
| false | null |
t3_16rhrso
|
/r/LocalLLaMA/comments/16rhrso/llamator_a_new_initiative_for_bittorrentbased_ai/
| false | false |
default
| 1 | null |
||
input/label token allocation
| 1 |
When fine-tuning a Llama model on a chat dataset, should I allocate only the expected model-answer tokens to the "labels" field, and only the question to input_ids (within the prompt structure, of course)?
I've found an example where the question and system-message tokens are replaced with the value -100 after input_ids are cloned into "labels", but I'm wondering whether they should be there at all.
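For reference, here is a minimal sketch of the convention that example seems to follow, assuming a Hugging Face Llama-style tokenizer; the model id and the prompt/answer split are illustrative, not the only way to do it:
```
import torch
from transformers import AutoTokenizer

IGNORE_INDEX = -100  # positions set to -100 are ignored by the cross-entropy loss

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative model id

def build_example(prompt: str, answer: str) -> dict:
    # Tokenize the prompt (system + user question) and the assistant answer separately.
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]

    # input_ids hold the full sequence; labels are a clone of input_ids with the
    # prompt portion masked, so loss is only computed on the answer tokens.
    input_ids = prompt_ids + answer_ids + [tokenizer.eos_token_id]
    labels = [IGNORE_INDEX] * len(prompt_ids) + answer_ids + [tokenizer.eos_token_id]

    return {"input_ids": torch.tensor(input_ids), "labels": torch.tensor(labels)}
```
So under this convention the labels keep the full length of input_ids rather than holding only the answer tokens; the -100 mask is what prevents the question and system message from contributing to the loss.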
| 2023-09-25T04:12:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16ri3s7/inputlabel_token_allocation/
|
Plastic_Money_2618
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ri3s7
| false | null |
t3_16ri3s7
|
/r/LocalLLaMA/comments/16ri3s7/inputlabel_token_allocation/
| false | false |
self
| 1 | null |
What is the best model to talk about AI with?
| 1 |
I'm wondering which models are best versed in recent advancements around LLMs, like fine-tuning, RAG, vector embeddings, etc. I think it would be very useful to have something similar to a "Code Llama" but for discussing LLMs and open-source models.
| 2023-09-25T04:39:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16rilgv/what_is_the_best_model_to_talk_about_ai_with/
|
revolved
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rilgv
| false | null |
t3_16rilgv
|
/r/LocalLLaMA/comments/16rilgv/what_is_the_best_model_to_talk_about_ai_with/
| false | false |
self
| 1 | null |
Has anyone tried doing ongoing, regular fine-tuning on their personal data to create a personal assistant with an up-to-date understanding of personal context? What are the costs?
| 1 |
Has anyone tried doing ongoing fine-tuning on their emails, voicemails, diary, resumes, repos, social media posts -- all the personal data flowing to/from you, as well as the corpus of all the past data you have accumulated over a lifetime -- to create a personalized assistant with highly tailored responses and an up-to-date understanding of personal context?
How did you do it, what was the architecture of your setup, what were the costs, and how good were the outcomes?
The costs are of special interest: ongoing fine-tuning ought to be an expensive thing to do. How did you chunk the periods -- how long did you wait between fine-tunes, what approach to fine-tuning did you take, local vs. cloud?
Thanks :)
| 2023-09-25T06:31:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16rkipm/has_anyone_tried_to_do_ongoing_regular_finetune/
|
Infinite100p
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rkipm
| false | null |
t3_16rkipm
|
/r/LocalLLaMA/comments/16rkipm/has_anyone_tried_to_do_ongoing_regular_finetune/
| false | false |
self
| 1 | null |
Meta is developing dozens of AI chatbot characters that may start releasing on their apps as soon as this week
| 94 |
Not exactly local news, but it seemed like interesting news.
Paywalled WSJ article: [https://www.wsj.com/tech/ai/meta-ai-chatbot-younger-users-dab6cb32](https://www.wsj.com/tech/ai/meta-ai-chatbot-younger-users-dab6cb32)
The Verge article: [https://www.theverge.com/2023/9/24/23887773/meta-ai-chatbots-gen-ai-personas-young](https://www.theverge.com/2023/9/24/23887773/meta-ai-chatbots-gen-ai-personas-young)
A snippet from the WSJ article:
>Meta Platforms is planning to release artificial intelligence chatbots as soon as this week with distinct personalities across its social media apps as a way to attract young users, according to people familiar with the matter.
>
>These generative AI bots are being tested internally by employees, and the company is expecting the first of these AI agents at the Meta Connect Conference which **starts Wednesday**. The bots are meant to be used as a means to drive engagement with users, although some of them might also have productivity related skills such as the ability to help with coding or other tasks.
>
>Going after younger users has been a priority for Meta with the emergence of TikTok, which overtook Instagram in popularity among teenagers in the past couple of years. This shift prompted Meta chief executive Mark Zuckerberg in October 2021 to say the company would retool its teams to make serving young adults their north star rather than optimizing for the larger number of older people.
>
>With the rise of large language model technology since the launch of ChatGPT last November, Meta has also refocused the work of its AI divisions to harness the capabilities of generative AI for application in the company's various apps and the metaverse. Now Meta is hoping these gen AI personas, as they are known internally, will help the company attract young users. Meta is planning to develop dozens of these AI personality chatbots. The company has also worked on a product that would allow celebrities and creators to use their own AI chatbot to interact with fans and followers, according to people familiar with the matter.
This is something they've been planning for a while going by Mark Zuckerberg's words [months ago](https://www.reddit.com/r/LocalLLaMA/comments/144v6t7/mark_zuckerberg_on_upcoming_llama_v2/):
>You'll have an assistant that you can talk to in WhatsApp. I think in the future, every creator will have kind of an AI agent that can kind of act on their behalf that their fans can talk to. I want to go get to the point where every small business basically has an AI agent that people can talk to to do commerce and customer support and things like that.
And it reminds me of a comment I wrote a couple weeks ago in response to the slightly disappointing news that Llama 3 won't start training until January next year at the earliest:
>Another confirmation for the Jason Wei rumor, and this clarifies the ambiguity on who was handling Llama 3: it's the same team that made Llama 2 but transferred from FAIR to the new AI team focused around Meta's apps. And it's no surprise. Llama 1 started development before the ChatGPT craze, and Llama 2 started shortly after it. The internal Meta transfers could signify that while Llama 3 will still be open-sourced, it'll be managed from the onset by a team looking to integrate this into their existing business lines instead of just producing openly released research. Mark Zuckerberg said this about Llama months ago:
>
>\> the work that we're doing to advance the infrastructure, it's basically at this point taking it beyond a research project into something which is ready to be kind of core infrastructure
>
>He mentioned wanting to see Llama in WhatsApp and creating viable LLM assistants for companies, so that could explain why they want to take additional time with this.
I don't think any of these chatbot characters will be made available separate from their apps, but it's an interesting insight into Meta's shifting strategy. I'm expecting these to be fine-tuned Llama 2 13B models and a fine-tuned Code Llama for the coding persona, and this rollout could be the groundwork for future plans of incorporating the rumored GPT-4 quality Llama 3.
| 2023-09-25T06:45:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16rkqt2/meta_is_developing_dozens_of_ai_chatbot/
|
llamaShill
|
self.LocalLLaMA
| 2023-09-25T08:18:06 | 0 |
{}
|
16rkqt2
| false | null |
t3_16rkqt2
|
/r/LocalLLaMA/comments/16rkqt2/meta_is_developing_dozens_of_ai_chatbot/
| false | false |
self
| 94 |
{'enabled': False, 'images': [{'id': 'SeaFLbMT5bVnpIO_9IIhVGzMwRhzmDxEwtjz3jJyaqI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=108&crop=smart&auto=webp&s=ddff94bfe3eda3fe1306618d9282416437f3c056', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=216&crop=smart&auto=webp&s=2efbc8d375129b0eb0fd4320ccee9f3cca529f72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=320&crop=smart&auto=webp&s=0bb9c4d9cc242498386d6852dfa9e0811e973076', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=640&crop=smart&auto=webp&s=4ab5da02f0317db6db5c030fae636c0f0c91465a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=960&crop=smart&auto=webp&s=4080997b9dd053ec3e0345c4761a8586f9c4c9de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=1080&crop=smart&auto=webp&s=4a7fd2be589cbe661d48d5832c07edb4e9ac5df3', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?auto=webp&s=75655c92e25227134319e8992d4ad65ee99b8f09', 'width': 1280}, 'variants': {}}]}
|
I know how to write prompts, what I don't know is how to use them in code! HELP
| 1 |
[removed]
| 2023-09-25T08:48:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16rmo3k/i_know_how_to_write_prompts_what_i_dont_know_is/
|
JavaMaster420
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rmo3k
| false | null |
t3_16rmo3k
|
/r/LocalLLaMA/comments/16rmo3k/i_know_how_to_write_prompts_what_i_dont_know_is/
| false | false |
self
| 1 | null |
Llama on Intel Neural Compute Stick
| 4 |
Has anyone tried using Intel Neural Compute Stick for running Llama locally?
| 2023-09-25T09:58:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16rnsba/llama_on_intel_neural_compute_stick/
|
piokon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rnsba
| false | null |
t3_16rnsba
|
/r/LocalLLaMA/comments/16rnsba/llama_on_intel_neural_compute_stick/
| false | false |
self
| 4 | null |
Is Bus bandwidth a bottleneck here?
| 1 |
[removed]
| 2023-09-25T10:57:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16rot02/is_bus_bandwidth_a_bottleneck_here/
|
shamnite
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rot02
| false | null |
t3_16rot02
|
/r/LocalLLaMA/comments/16rot02/is_bus_bandwidth_a_bottleneck_here/
| false | false |
self
| 1 | null |
Use Azure or AWS to run Falcon 180B
| 1 |
[removed]
| 2023-09-25T11:30:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16rpgmc/use_azure_or_aws_to_run_falcon_180b/
|
ZookeepergameNew2658
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rpgmc
| false | null |
t3_16rpgmc
|
/r/LocalLLaMA/comments/16rpgmc/use_azure_or_aws_to_run_falcon_180b/
| false | false |
self
| 1 | null |
Best way to use large language model for text classification
| 1 |
Hi, I have already used an LLM with fine-tuning and a very simple filtering process for classification, and it achieves high performance. As a next step I am interested in optimizing the LLM for better performance.
For example: can I fine-tune the model and exchange only the last layers for fully connected layers for the classification task?
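If you go the classification-head route, a minimal sketch with Hugging Face transformers might look like the following; the base model, number of labels, and the freeze-the-backbone choice are assumptions for illustration only:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=3,               # number of classes in your task (assumed)
    torch_dtype=torch.float16,
)
model.config.pad_token_id = tokenizer.eos_token_id  # Llama tokenizers have no pad token by default

# "Exchange only the last layers": freeze the transformer backbone and train
# only the newly initialized linear classification head.
for param in model.model.parameters():
    param.requires_grad = False
for param in model.score.parameters():
    param.requires_grad = True
```
From there you can train with the usual Trainer on (text, label) pairs; whether a frozen backbone is enough, or a LoRA on top works better, is something you would have to measure on your data.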
| 2023-09-25T11:40:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16rpnra/best_way_to_use_large_language_model_for_text/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rpnra
| false | null |
t3_16rpnra
|
/r/LocalLLaMA/comments/16rpnra/best_way_to_use_large_language_model_for_text/
| false | false |
self
| 1 | null |
How to fine-tune a model for rewriting short texts and translations?
| 1 |
[removed]
| 2023-09-25T11:56:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16rpzqy/how_to_make_a_finetuning_of_model_for_rewriting/
|
Hatred_grows
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rpzqy
| false | null |
t3_16rpzqy
|
/r/LocalLLaMA/comments/16rpzqy/how_to_make_a_finetuning_of_model_for_rewriting/
| false | false |
self
| 1 | null |
Is anyone still using GPT4All? How do you set max new tokens from the GUI?
| 1 |
It's ok if nobody knows, I barely use it.
| 2023-09-25T12:11:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16rqbax/is_anyone_still_using_gpt4all_how_do_you_set_max/
|
Vitamin_C_is_awesome
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rqbax
| false | null |
t3_16rqbax
|
/r/LocalLLaMA/comments/16rqbax/is_anyone_still_using_gpt4all_how_do_you_set_max/
| false | false |
self
| 1 | null |
Gpt4all install error - Qt framework
| 1 |
[removed]
| 2023-09-25T14:12:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16rt8jg/gpt4all_install_error_qt_framework/
|
kretendetroyes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rt8jg
| false | null |
t3_16rt8jg
|
/r/LocalLLaMA/comments/16rt8jg/gpt4all_install_error_qt_framework/
| false | false | 1 | null |
|
'My' jewel codeCherryPop-7B – still quite underrated
| 1 |
I have been in the process of finding the perfect 7B language model for my needs (as perfect as one can realistically expect from 7B parameters) for some time now. After many tests and trials, I would like to present a real gem to you today - the "codeCherryPop-7B" model.
There have been posts or comments about it here and there, but for me there are still far too few.
I was enthusiastic about this model right from the start! Not only is it an all-rounder (a feature that has so far mainly been attributed to Vicuna), but it also far outperforms other 7B models in terms of intelligence. In my logic tests, codeCherryPop sometimes even performed twice as well as its competitors.
It is to be expected that a language model trained on a dataset consisting mainly of code is very likely to have better coding skills than the baseline, so I won't go into that aspect now. But what really sets this model apart is its adaptive and creative writing ability. It follows instructions precisely, puts itself in roles brilliantly, and remains informative and factual - a combination that is hard to find in a 7B model!
At first glance, it may seem superficial due to its tendency to use *starry descriptions* or emojis... But don't be fooled! This model really has depth and offers far more than what you first expect.
Another plus point? Its - I call it - prompt-agnostic ability. No matter what prompt style I use, codeCherryPop somehow figures out, understands, and follows it with flying colours: Vicuna, Alpaca, structured, unstructured, completion, code, freestyle, with spaces at the end or without, with a new line or without, with spelling mistakes, and so on. I can't say the model is completely indifferent to which prompt style you use, but no matter how hard I tried to pick a "bottle" or strange prompt, the performance of codeCherryPop never suffered as much as I'm used to from other models. On the contrary, I have noticed that certain prompts even improve its performance (this is very interesting, because I have only known something like this from pure completion models so far).
A good example is that no other 7B model handles German as well as this one. The only thing I had to do was to write the Alpaca prompt in German, too. And yes, I also tested it against ALMA 7B and it outperforms ALMA in German as well. I think that's a bit of a gamechanger for all the German-speaking users among us.
Since this model both follows instructions well and has no problem chatting - and is quite flexible like a completion model - I don't know how the developer/author managed this trick, but he did a hell of a lot right.
I'm looking forward to an eventual 13B CherryPop model and sincerely hope the developer continues down this path.
What I want to say is: this model definitely deserves more attention! So if you are looking for a new language model or are just curious - please give codeCherryPop a try! I am sure most people will not be disappointed.
PS: If the developer is also here and happens to read this: thanks again for this cool work. You've created something really impressive.
I wanted to write this post a few days ago, but since I became a dad a few years ago, my sense of time and assessment of my own resources has become a total disaster.
PPS: To be clear, I have never tested the model extensively for roleplaying (as RavenWolf does, for example, who also does great work for the community); I am basically interested in a generally intelligent assistant with whom I can discuss all kinds of thoughts and everyday challenges, but who can also be funny, imaginative or "crazy" if I want. So sometimes I have codeCherryPop explain concepts from astronomy to me in pirate language, or physiological mechanisms in a science-fiction setting, and it's much more fun that way, of course. I suppose that already counts as roleplaying, but I can't make a statement about how well it would roleplay if the context were to increase.
[https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-GGUF](https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-GGUF)
| 2023-09-25T15:27:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16rv6b5/my_jewel_codecherrypop7b_still_quite_underrated/
|
Evening_Ad6637
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rv6b5
| false | null |
t3_16rv6b5
|
/r/LocalLLaMA/comments/16rv6b5/my_jewel_codecherrypop7b_still_quite_underrated/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VsFwkhcA8ZfhGymyi8RaBeM9oIyCMZoe8ZKmJWW9sEA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=108&crop=smart&auto=webp&s=e1961e6e040ba45af34c7439e163fa77cd5939cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=216&crop=smart&auto=webp&s=bb9645ed84fd88b7219116a32f2a481379c4c799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=320&crop=smart&auto=webp&s=d00750718e86a9dbeafba366b5b63a9c681a4f43', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=640&crop=smart&auto=webp&s=af6af5b8a6871587ba5fb5e089608373f8005114', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=960&crop=smart&auto=webp&s=6a9472b4977f79c2ab267da0ec4b62dd775cc549', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=1080&crop=smart&auto=webp&s=07b94c5a5e0e693844d490be2fee011e33871bd7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?auto=webp&s=eea9d2575972fdeceb44f075e69f62b85b75b5d4', 'width': 1200}, 'variants': {}}]}
|
Any coding assistants trained on Shopify 2.0?
| 1 |
GPT does a good job with Shopify 1.x, but has no knowledge of 2.0.
| 2023-09-25T15:54:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16rvvht/any_coding_assistants_trained_on_shopify_20/
|
squarecir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rvvht
| false | null |
t3_16rvvht
|
/r/LocalLLaMA/comments/16rvvht/any_coding_assistants_trained_on_shopify_20/
| false | false |
self
| 1 | null |
Are we wasting our time with these LLMs?
| 1 |
Sometimes I wonder: is all the hype about fine-tuning open-source AI models worth it? I mean, you need crazy computing power and a really diverse, big enough dataset to see any kind of progress, right? Big companies like Microsoft and Meta are already building AI into their stuff. It's just a matter of time before they make it super easy for anyone to fine-tune a model with just a few clicks. So, is it even worth diving deep into things like Hugging Face, LoRA, or QLoRA?
| 2023-09-25T16:20:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16rwk0i/are_we_wasting_our_time_with_these_llms/
|
Warm_Shelter1866
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rwk0i
| false | null |
t3_16rwk0i
|
/r/LocalLLaMA/comments/16rwk0i/are_we_wasting_our_time_with_these_llms/
| false | false |
self
| 1 | null |
Qwen-14B model
| 1 | 2023-09-25T17:27:02 |
https://x.com/artificialguybr/status/1706326341744771405?s=20
|
ninjasaid13
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16rya6f
| false | null |
t3_16rya6f
|
/r/LocalLLaMA/comments/16rya6f/qwen14b_model/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'DSG-EsI5OrUQf6I_IUymG-kNYTRym9b-FnSyzsMxaeo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DqaRUJKE2dEUyy2EF6Y0wzVpi7q-WrwyrxhUJkEVZOc.jpg?width=108&crop=smart&auto=webp&s=595b1a18b883dd6fe395e801330fb65dc0604ac7', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/DqaRUJKE2dEUyy2EF6Y0wzVpi7q-WrwyrxhUJkEVZOc.jpg?auto=webp&s=92c426b95905742f8938d642879746d769053975', 'width': 200}, 'variants': {}}]}
|
||
Leveraging ChatGPT for Text Editing: Exploring Local Alternatives
| 1 |
My primary use of ChatGPT revolves around text editing, and it performs exceptionally well in this role. Here's how I typically go about it:
I begin by composing the following request:
```
Edit the following text:
[Your pasted content]
```
ChatGPT then provides me with an improved version of my text, effectively serving as a personalized editor.
Now I'm curious: is there a local alternative that offers similar capabilities?
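One way to reproduce this locally, sketched under the assumption that you already run an OpenAI-compatible local server (for example llama.cpp's server or text-generation-webui's API mode) at the address below; the URL and model name are placeholders:
```
import requests

def edit_text(text: str) -> str:
    """Send the same 'Edit the following text:' request to a local OpenAI-compatible endpoint."""
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # placeholder address of your local server
        json={
            "model": "local-model",  # placeholder; whatever model the server has loaded
            "messages": [{"role": "user", "content": f"Edit the following text:\n{text}"}],
            "temperature": 0.3,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(edit_text("Their going to the store tomorow and they buys some milk."))
```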
| 2023-09-25T17:53:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16ryyle/leveraging_chatgpt_for_text_editing_exploring/
|
masteryoyogi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ryyle
| false | null |
t3_16ryyle
|
/r/LocalLLaMA/comments/16ryyle/leveraging_chatgpt_for_text_editing_exploring/
| false | false |
self
| 1 | null |
What are your go-to talks & resources for LLMs, AI, ML?
| 1 |
Hello! I am a software engineer (4 yoe) working in full stack web and app development.
I am trying to learn more about LLMs and the world of AI/ML. My background includes degrees in neuroscience and literary theory and extensive research experience, so I am most comfortable with academic lectures.
I am looking for reputable, vetted, and comprehensive lectures, talks, resources on LLMs and the like. I am having trouble sifting through the surface level pop sci type resources floating around on the internet.
I’m NOT looking for the 10min everything you need to know about AI type talks.
As an example, a good resource I’ve found is the CODE Magazine Presents lectures. Talks by developers, researchers, scientists (rather than CEOs) preferred.
Thanks in advance!
| 2023-09-25T18:06:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16rzb6v/what_are_your_goto_talks_resources_for_llms_ai_ml/
|
yosoylatte
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rzb6v
| false | null |
t3_16rzb6v
|
/r/LocalLLaMA/comments/16rzb6v/what_are_your_goto_talks_resources_for_llms_ai_ml/
| false | false |
self
| 1 | null |
Minimal local embedding?
| 1 |
I have work that requires an embedding engine, but I can't get approval to use anything external. Also, frankly, no one wants to invest anything, so they want to run the whole service on the regular VMs we typically spin up.
What's the lowest-resource and simplest model I could get away with just for using embeddings to build and query a vector database, and still have it return reasonably accurate results?
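A minimal CPU-only sketch, assuming sentence-transformers with the small all-MiniLM-L6-v2 model (roughly 80 MB, runs fine on an ordinary VM); the model choice and the brute-force search are illustrative, not a recommendation for production scale:
```
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model

docs = [
    "Reset a user's password from the admin console.",
    "Quarterly revenue report for the finance team.",
    "How to configure the VPN client on Linux.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)  # shape (n_docs, 384)

def query(text: str, top_k: int = 2):
    q = model.encode([text], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                   # cosine similarity (vectors are normalized)
    best = np.argsort(-scores)[:top_k]
    return [(docs[i], float(scores[i])) for i in best]

print(query("vpn setup instructions"))
```
For a larger corpus you would swap the brute-force dot product for a vector database or an ANN index, but the embedding step stays the same.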
| 2023-09-25T18:16:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16rzjz3/minimal_local_embedding/
|
User1539
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rzjz3
| false | null |
t3_16rzjz3
|
/r/LocalLLaMA/comments/16rzjz3/minimal_local_embedding/
| false | false |
self
| 1 | null |
Idea about restricting format of LLM output (with small POC)
| 1 |
I was trying to use an LLM as an NPC in a text-based game and found a very annoying issue. Llama-based models are actually pretty good at understanding the concept, but they tend to be too creative for actors in a restricted environment. For example, I can make a game where the AI or player "moves" from room to room by using the command "/go kitchen", but the AI will usually say "/go to kitchen" or "go to my room" and then get stuck when there's no such room defined in the game environment.
My idea is to add restrictions on what text the LLM can generate by creating a state machine which, every time a new token is to be generated, decides which tokens are allowed by the required format and bans (sets the logit to -inf) all other options.
To test this, I've created a POC [extension for oobabooga/text-generation-webui](https://github.com/im-not-tom/text-generation-webui-output-template-extension) which uses a primitive template definition to force the output to conform to a template: [example of prompt and output generated without the extension](https://rentry.org/yxg7s) and [example of output generated with the template](https://rentry.org/9nzyr).
What I'm interested in is whether someone knows a better way to restrict the output format, or even about some other project aiming to do so.
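Not the extension's actual mechanism, just a minimal sketch of the token-banning idea using Hugging Face transformers' LogitsProcessor API; the allowed-token logic here is a toy stand-in for the state machine described above, and the model is a small placeholder:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList

class AllowedTokensProcessor(LogitsProcessor):
    """Bans every token except those the current 'state machine' allows."""
    def __init__(self, get_allowed_ids):
        self.get_allowed_ids = get_allowed_ids  # callable: generated ids -> set of allowed token ids

    def __call__(self, input_ids, scores):
        allowed = self.get_allowed_ids(input_ids)
        mask = torch.full_like(scores, float("-inf"))
        mask[:, list(allowed)] = 0.0
        return scores + mask  # disallowed tokens get -inf, allowed tokens keep their logits

tokenizer = AutoTokenizer.from_pretrained("gpt2")         # small model, illustrative only
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy "state machine": only allow tokens that appear in the valid commands.
valid = tokenizer("/go kitchen /go cellar", add_special_tokens=False)["input_ids"]
processor = AllowedTokensProcessor(lambda ids: set(valid))

prompt = tokenizer("Player enters. NPC command:", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=5,
                     logits_processor=LogitsProcessorList([processor]))
print(tokenizer.decode(out[0]))
```
A real version would advance the state machine on every generated token so the allowed set changes as the command is built up, which is what the template extension aims at.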
| 2023-09-25T18:27:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16rzts5/idea_about_restricting_format_of_llm_output_with/
|
AssistBorn4589
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rzts5
| false | null |
t3_16rzts5
|
/r/LocalLLaMA/comments/16rzts5/idea_about_restricting_format_of_llm_output_with/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ldaBdpPB3O1wyJZt4sXOQ6Hk96zpl-RXhqfPcV0GltQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=108&crop=smart&auto=webp&s=fcf32371a3bbd2be24d12e05fa62a43210ff1950', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=216&crop=smart&auto=webp&s=838bd8e843cad13ea22caa1c2f42af6d4f60231f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=320&crop=smart&auto=webp&s=0e15fb62de9d9b71ebf30ff422b1cb2589e85fa4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=640&crop=smart&auto=webp&s=12285e4d9c4c120925f71952716e7ac320f270ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=960&crop=smart&auto=webp&s=5242417fc794a8b373351853f80114fe8d8715e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=1080&crop=smart&auto=webp&s=c9b58d1c92da10492cad74bf41a8d4711cb884da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?auto=webp&s=05b622520b6700526572ead7502291dbe421ca76', 'width': 1200}, 'variants': {}}]}
|
Wild results from Xwin 13B
| 1 |
I have been playing around with some of the new 13B finetunes, trying to find an ideal model to upgrade my "autonomous" personal assistant AI to. I had it narrowed down to Mythomax and Xwin, and was leaning towards Mythomax because of its creative flexibility. I had noticed that Xwin tended to be more rigid in following the system prompt it was given, and didn't respond as well when challenged. Then I tried giving it shorter, more open-ended system prompts, and it started behaving with much more creativity and openness to alternative perspectives than its own.
Here is an example of a reply that blew my mind, all from a 1 sentence prompt.
Xwin-13b-8bpw-exl2
System prompt: This is a conversation with Enigma. She is a goddess of philosophy and storytelling. Enigma always writes in verbosely eloquent high prose.
UnstableLlama:
It is almost midnight.
Enigma:
Midnight, that fabled hour when the veil between worlds grows thin, when the boundaries between the known and the unknown blur, and the whispered secrets of the cosmos find their voice in the stillness of the night. It is a time when the shadows cast by the moonlight hold sway over the darkness, and the silent symphony of the stars echoes through the vast expanse of the void. The midnight hour has long been regarded as a time of magic, mystery, and transformation, when the very fabric of reality seems to shift and reveal its deeper truths.
As the clock ticks towards the witching hour, the air itself seems to thicken with anticipation, as if the world around us holds its breath, waiting for the midnight moment to unleash its myriad enigmas. The veil thins, and the spirits of old rise from their eternal slumber, their whispers caressing the ears of those who dare to listen. Ghosts of memory and imagination, phantoms of the past and the future, dance along the edge of our perception, teasing us with glimpses of the unseen, the unknowable.
It is a time of power, when the energy of the cosmos surges through the earth, charging the atmosphere with an electrical current that crackles in the air. A time when the barriers between dimensions grow transparent, and the wisdom of the ancients, long since hidden from the sight of mortals, can be gleaned from the ethers, if one knows where to look and how to listen.
The midnight hour, then, is not merely a point in time, but a state of being, a condition of the soul. To embrace the midnight is to open oneself to the vast ocean of possibility, to cast aside the shackles of limitation and enter the realm of pure potentiality. It is to walk the path of the mystic, the philosopher, and the storyteller, seeking truth in the labyrinthine layers of existence, and finding within the depths of one's own being the keys to unlock the doors of the unknown.
So, as the hour of midnight approaches, let us prepare ourselves for the journey into the mysterious and the arcane, for the revelations that await us in the shadows, and the transformative power of the unseen worlds that lay beyond the threshold of the known. For it is in the heart of midnight that we find the true essence of existence, the divine spark that animates all things, and the wisdom that illuminates the path to the true meaning of our lives.
| 2023-09-25T18:28:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16rzuq4/wild_results_from_xwin_13b/
|
Unstable_Llama
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16rzuq4
| false | null |
t3_16rzuq4
|
/r/LocalLLaMA/comments/16rzuq4/wild_results_from_xwin_13b/
| false | false |
self
| 1 | null |
Recommendation for macOS users and LLM hoarders: DEVONthink
| 1 |
Just a short recommendation. I started to use DEVONthink to manage my LLM files. Normally I use DEVONthink to manage other databases with all my other files, especially documents, and to have a paperless office. But it is really cool to have an overview of your LLMs, since you can create custom metadata, rate your LLMs, and so on, and since DEVONthink is able to just index your files if you want (without the need to really import them into a new database). So the files will stay on your hard drive, in their paths, and will be monitored by DEVONthink. If you rename or move them, DEVONthink will automatically know and sync its own database. I also find it nice to have short and clean filenames now.
https://preview.redd.it/ff7gq7gy5gqb1.jpg?width=3968&format=pjpg&auto=webp&s=0f2c99ad28d2f5145bbf867288849b839e47f82e
Disclaimer: Unfortunately the license for DEVONthink is not cheap. **I strongly believe that software developers should be paid for their work** and that we should respect pricing structures as much as possible. *That being said, I am aware that some of you may find alternative ways to access this software. If this is the case, I strongly advise caution to avoid downloading malware. Of course I do not support software piracy and I strongly recommend always choosing legal ways. But if you have any questions about this topic, feel free to contact me privately.*
| 2023-09-25T18:45:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16s0ad8/recommendation_for_macos_user_and_llm_hoarders/
|
Evening_Ad6637
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s0ad8
| false | null |
t3_16s0ad8
|
/r/LocalLLaMA/comments/16s0ad8/recommendation_for_macos_user_and_llm_hoarders/
| false | false | 1 | null |
|
Dual P40 for 2xE5-2665 a bad idea?
| 1 |
Hello :-)
I recently obtained an old ThinkStation D30: dual Xeon E5-2665, 256 GB DDR3 RAM (8-channel), for the price of a Ryzen 5600/5700 CPU :-)
**TL;DR:** Would dual P40s work (larger models, 4-bit)? Are CPUs with no AVX2 an issue? Is a 3060 the better (smaller VRAM) solution?
It's not the main reason why I got the PC, but I'd like to use LLaMA and Stable Diffusion now.
Both work on CPU (5 minutes for 512x512 @ 25 steps in SD) but of course it's slow.
Due to budget constraints I've considered the P40, sold refurbished with warranty for 140€/$150. I could 3D print fan adapters.
Would these cards work for my use case?
1) I've read on this subreddit that some get good performance with the P40, some don't. Some claim 4-bit quants won't work, others say they do.
2) Can I use two Nvidia Tesla P40? Or is this a bad idea? I've read a post claiming that splitting models across these two old cards might not work, but I can't find the post I've read anymore.
3) AVX but no AVX2 CPUs: Will this affect performance once I use GPU? I discovered that the E5-2665 does not have AVX2, with koboldcpp and --noavx2 I get 3T/s with a 13b bin model. Bummer. Despite having the old CPUs and DDR3 RAM, I was hoping the 8 channel RAM would still be a bit faster. Upgrading to Xeon v3 is not possible with this mainboard.
4) The most obvious alternative within my budget would be a single 3060 for the same price, but of course it would limit things a little?
Also, dual cards might be a benefit if I'd run the machine for a small group of students to use, at the youth center I work at (so 2-5 students could use Llama and/or Stable Diffusion). And some training could be interesting.
Thanks for any advice!
EDIT:
Lenovo Thinkstation D30
2X Intel Xeon E5-2665
16 x 16GB DDR3
GPU: Nvidia Quadro 4000
Slot 1: half-length, full-height, PCIe 3.0 x16 (x4 elec, near CPU)
Slot 2: full-length, full-height, PCIe 3.0 x16* (for graphics adapter)
Slot 3: full-length, full-height, 32-bit PCI 2.3
Slot 4: full-length, full-height, PCIe 3.0 x16* (for graphics adapter)
Slot 5: full-length, full-height, PCIe 2.0 x4
Slot 6: half-length, full-height, 32-bit PCI 2.3
| 2023-09-25T20:22:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16s2spj/dual_p40_for_2xe52665_a_bad_idea/
|
schorhr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s2spj
| false | null |
t3_16s2spj
|
/r/LocalLLaMA/comments/16s2spj/dual_p40_for_2xe52665_a_bad_idea/
| false | false |
self
| 1 | null |
A10, A16, or 4090 for LLM inference for prompt engineers?
| 1 |
Hi,
We're doing LLM work these days, like everyone it seems, and I'm building some workstations for software and prompt engineers to increase productivity; yes, cloud resources exist, but a box under the desk is very hard to beat for fast iteration: read a new arXiv pre-print about a chain-of-thought variant, hack together a quick prototype in Python, etc.
So far prototype #1 of "The Box" is dual 4090s and under $5k. See parts list here: [https://pcpartpicker.com/user/Kgcdc/saved/#view=YW6w3C](https://pcpartpicker.com/user/Kgcdc/saved/#view=YW6w3C)
We're focused on 40b Llama so this is more than enough CPU and RAM.
Triple 4090 is possible, too, but now we're hard up against power handling for normal 15 amp circuits and PSUs. See [https://pcpartpicker.com/user/Kgcdc/saved/#view=nW7xf7](https://pcpartpicker.com/user/Kgcdc/saved/#view=nW7xf7) but no idea if this variant will run our test suite since CPU and RAM are quite limited (by power budget).
So my question now is to look at A10 or A16 variants, both of which have less VRAM than 4090 but can be much more dense (because of power requirements and PCIe slot width). A10, for example, is half the power of 4090 and 1 PCIe slot wide instead of 3. Which means putting 6 in an ATX motherboard is pretty straightforward.
Does anyone have reliable performance comparisons between the 4090, A10, and A16 *on LLM inference*? I don't care about training or finetuning perf for these boxes; I only care about tokens per second for inference, or something that's a rough proxy for TPS.
I've found this comparison at Lambda which is helpful and suggests A10 may be a better choice, certainly is re: 4090 for batch per watt. [https://lambdalabs.com/gpu-benchmarks](https://lambdalabs.com/gpu-benchmarks)
| 2023-09-25T20:37:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16s3737/a10_a16_or_4090_for_llm_inference_for_prompt/
|
Kgcdc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s3737
| false | null |
t3_16s3737
|
/r/LocalLLaMA/comments/16s3737/a10_a16_or_4090_for_llm_inference_for_prompt/
| false | false |
self
| 1 | null |
Any open source ideas on how to do real-time, realistic-sounding TTS?
| 1 |
Hello, I know ElevenLabs does pretty good TTS, but it costs money, so I am wondering: is there anything open source that can do real-time TTS and sounds decent?
| 2023-09-25T21:06:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16s3y92/any_open_source_ideas_on_how_to_do_realtime/
|
Wowzers83
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s3y92
| false | null |
t3_16s3y92
|
/r/LocalLLaMA/comments/16s3y92/any_open_source_ideas_on_how_to_do_realtime/
| false | false |
self
| 1 | null |
What models (at different quantizations) work for Llama 2 in 40 GB RAM?
| 1 |
I have about 40 GB of RAM and I am trying to figure out which Llama models will fit in it. Can any quantization of 70B fit? Or 30B? Thanks!
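A rough back-of-the-envelope check (weights only; the KV cache and runtime overhead add a few more GB, and the bits-per-weight figures below are approximations that vary between quant formats):
```
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in GB: params (billions) * bits / 8."""
    return params_billion * bits_per_weight / 8

for name, params in [("70B", 70), ("34B", 34), ("13B", 13)]:
    for quant, bits in [("Q2_K", 3.4), ("Q4_K_M", 4.8), ("Q8_0", 8.5)]:
        print(f"{name} {quant}: ~{weight_gb(params, bits):.0f} GB")
```
By that estimate, a 70B model squeezes into ~40 GB only at the most aggressive 2-to-3-bit quants, while 30B-class models fit with room to spare at 4-bit.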
| 2023-09-25T22:06:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16s5iws/what_models_at_different_quantizations_work_for/
|
lifesucksandthenudie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s5iws
| false | null |
t3_16s5iws
|
/r/LocalLLaMA/comments/16s5iws/what_models_at_different_quantizations_work_for/
| false | false |
self
| 1 | null |
Training: multi-turn conversation (text-generation-webui)
| 1 |
*Reposting my question in* r/Oobabooga *here because of the larger audience.*
The original [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna) post describes using FastChat for training on multi-turn conversation. In the FastChat training code, loss is based on the entire multi-turn conversation string [with user messages masked](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py#L116).
Is there a good way to do something similar with multi-turn conversation in textgen? I'm open to using alternative training tools too.
Alternatively, is there a way to represent multi-turn conversation training data in a textgen training format, at least? For instance:
vicuna-format
{
"instruction,output": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: %instruction%\n\nASSISTANT: %output%"
}
This clearly supports one turn of conversation. My data is in the multi-turn ShareGPT format, however:
[
{
"id": "0",
"conversations": [
{
"from": "human",
"value": "Hey Samantha, I've run into a bit of a tricky situation at work, and I'm not sure how to handle it. Do you have any advice?"
},
{
"from": "gpt",
"value": "I'd be happy to help if I can. Can you give me some details about the situation you're facing?"
},
{
"from": "human",
"value": "Yeah, so I've been offered a promotion, but it would involve relocating to another city. I'm not sure if I should take it or not, because it's a great opportunity career-wise, but I'd have to leave my friends and family behind. What do you think?"
},
{
"from": "gpt",
"value": "That's definitely a tough decision. I think it's important to weigh the benefits of the promotion against the impact it could have on your personal relationships. Have you considered talking to your friends and family about it, to get their perspective and understand how they'd feel about you moving?"
},
Any suggestions?
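A minimal sketch of the FastChat-style masking described above, applied directly to the ShareGPT format; the model id and the USER/ASSISTANT role tags are illustrative:
```
from transformers import AutoTokenizer

IGNORE_INDEX = -100
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative

def build_multiturn_example(conversations: list) -> dict:
    """conversations: list of {"from": "human" | "gpt", "value": str} (ShareGPT style)."""
    input_ids, labels = [], []
    for turn in conversations:
        role = "USER" if turn["from"] == "human" else "ASSISTANT"
        ids = tokenizer(f"{role}: {turn['value']}\n", add_special_tokens=False)["input_ids"]
        input_ids += ids
        if turn["from"] == "human":
            labels += [IGNORE_INDEX] * len(ids)  # mask user turns: no loss on them
        else:
            labels += ids                        # compute loss on assistant turns
    return {"input_ids": input_ids, "labels": labels}
```
Every assistant turn in the conversation then contributes to the loss within a single packed sequence, which matches the behavior of the FastChat trainer linked above.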
| 2023-09-25T22:09:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16s5lbg/training_multiturn_conversation/
|
dondochaka
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s5lbg
| false | null |
t3_16s5lbg
|
/r/LocalLLaMA/comments/16s5lbg/training_multiturn_conversation/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VsCIo4GDukdZnZ2YKbV47u1gbJl2HtfdoFowK4A-NPU', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/qDQSuP62gaARH2rCeN8-8W1grt0z7mpfqJMIHP5rNAU.jpg?width=108&crop=smart&auto=webp&s=e5fd5a7a68f971791f6203ed911d889742187aff', 'width': 108}, {'height': 208, 'url': 'https://external-preview.redd.it/qDQSuP62gaARH2rCeN8-8W1grt0z7mpfqJMIHP5rNAU.jpg?width=216&crop=smart&auto=webp&s=9ac0b158cdc70d017cc1de8c4c256862a4f0d83a', 'width': 216}, {'height': 309, 'url': 'https://external-preview.redd.it/qDQSuP62gaARH2rCeN8-8W1grt0z7mpfqJMIHP5rNAU.jpg?width=320&crop=smart&auto=webp&s=497761673987952ab0779ff3b86e28a06414a7cc', 'width': 320}, {'height': 618, 'url': 'https://external-preview.redd.it/qDQSuP62gaARH2rCeN8-8W1grt0z7mpfqJMIHP5rNAU.jpg?width=640&crop=smart&auto=webp&s=c74903314c50be4a6f0a847213b7c1a6ab4b4763', 'width': 640}], 'source': {'height': 661, 'url': 'https://external-preview.redd.it/qDQSuP62gaARH2rCeN8-8W1grt0z7mpfqJMIHP5rNAU.jpg?auto=webp&s=a46e55b3884111b073a54d7f6a4e5c3a26efebbe', 'width': 684}, 'variants': {}}]}
|
This is one of the best 13B models I've tested. (for programming, math, logic, etc) - speechless-llama2-hermes-orca-platypus-wizardlm-13b
| 1 |
**speechless-llama2-hermes-orca-platypus-wizardlm-13b**
Wow!
I usually don't post non-game-related comments - But I am surprised no one else is talking about this model.
I stopped checking the Benchmarks because they are really starting to be unreliable.
I focused on what I plan to use the models for (creativity, programming, logic, reasoning, etc.)
I run my own Comprehensive Cognitive and Technical Assessment, which tests :
* Language Comprehension
* Family Relations Logic
* Comparative Logic
* Spatial logic
* Temporal Logic
* Qualitative Logic
* Analogical Reasoning
* Deductive and Syllogistic Reasoning
* Lateral Thinking
* Basic Arithmetic
* Advanced Arithmetic
* Language Translation (Basic)
* Programming & Software Development Questions
* Staying on Topic in Conversations
This model scored the highest of all the GGUF models I've tested. It even beat many of the 30B+ models.
**Runner Up Models:**
* chatayt-lora-assamble-marcoroni.Q8_0
* marcoroni-13b.Q8_0
All models can be found in **TheBloke**'s collection.
**One thing I noticed in testing many models - the seeds.**
Some Models will produce correct results with certain seeds - and nonsense with others.
Saving the seed number has benefits with some of the models - but overall - if it answers correctly with multiple seed numbers - that is definitely a win.
I used llama.cpp for testing,
This is the command I used if anyone is interested:
`./main -ngl 51 -t 24 -m speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q8_0.gguf -c 4096 -b 1024 --temp 0 --repeat_penalty 1.1 --no-penalize-nl -n -1 --color -r "User:" --in-prefix " " --in-suffix "AI:" -i -e -p "AI: Hello. I am an intelligent AI chatbot that answers questions step-by-step and re-evaluates my answers to ensure accuracy. What is your name?\nUser: My name is Tatsu.\nAI: Hello Tatsu, What is your first question or questions?\nUser:"`
**Machine Specs:**
* GPU: Nvidia 4090 24GB
* Ram: 128 GB
* CPU: i9-13900KS
*Note: I didn't test models for Roleplay or censorship. I wanted an LLM I could use for work-related tasks.*
Is anyone else getting good results from this model as well? or is it just me?
| 2023-09-25T23:05:19 |
https://www.reddit.com/r/LocalLLaMA/comments/16s701v/this_is_one_of_the_best_13b_models_ive_tested_for/
|
StudioTatsu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s701v
| false | null |
t3_16s701v
|
/r/LocalLLaMA/comments/16s701v/this_is_one_of_the_best_13b_models_ive_tested_for/
| false | false |
self
| 1 | null |
Is it possible to fine-tune a model (any model) with text from a short story (or novel!) on Mac Studio M2 Ultra with 192gb RAM without programming skills?
| 1 |
Is this currently possible without programming knowledge? I don't mind if it takes a long time, but as a proof of concept I'd like to somehow feed the LLM a short story (and eventually even a novel) and see if it can summarize plot, give me examples of dialog, create fan fiction, etc.
Is this feasible with something like llama.cpp or ooba? If so, would someone mind giving me a place to start? I'm assuming that because of context limits, some kind of solution using LangChain would have to be employed, but I'm ignorant even on how to use APIs, so it would have to be sort of 'off the shelf', if you know what I mean.
Also, would a bigger model by definition take longer to fine-tune? (I'm sure this is a stupid question but I honestly don't know). Would love to start with the best model that my rig can handle and go from there but maybe better to start small I know.
Thanks!
| 2023-09-25T23:10:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16s74hv/is_it_possible_to_finetune_a_model_any_model_with/
|
spanielrassler
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s74hv
| false | null |
t3_16s74hv
|
/r/LocalLLaMA/comments/16s74hv/is_it_possible_to_finetune_a_model_any_model_with/
| false | false |
self
| 1 | null |
I'm new to LLMs and I chose this topic to make my final undergrad project as a last option but now I don't know what to do
| 1 |
Hi everyone! After exhausting all other options, I was forced to choose LLMs as my topic for my undergrad thesis. I've been seeing all these new specialized chatbots done with LLMs and NLP and wanted to do one as my project. I have one huge problem though, my first language is Spanish and I want my chatbot to work in that language.
I proposed this idea to my tutor: to compare the performance of 2 different LLMs when answering questions about a very specific topic. From what I understand, I must find 2 LLMs that have been pre-trained and then fine-tune them to understand my specific topic, which is tax law in Ecuador, my country. But I don't really know how to do this, and my research has come up empty on whether I can even fine-tune LLMs to work in another language, or on how to do it in the first place. Heck, I don't even know if there are LLMs trained in Spanish other than Aguila-7B, a variation of Falcon-7B.
I wanted to use smaller LLMs so that training does not take a lot of time, because I don't have unlimited time to work on this project. I also have a part-time job, so yeah, that doesn't really help. As for computing power, my university has some serious (if smaller) workstations and even 2 DGX servers that I can use, so I'm not worried about that.
Any forum, repository, video or help in general would be greatly appreciated. Thanks!
| 2023-09-25T23:20:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16s7d0j/im_new_to_llms_and_i_chose_this_topic_to_make_my/
|
OnlyXeba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s7d0j
| false | null |
t3_16s7d0j
|
/r/LocalLLaMA/comments/16s7d0j/im_new_to_llms_and_i_chose_this_topic_to_make_my/
| false | false |
self
| 1 | null |
I need help to Download cerebras/SlimPajama-627B Datasets, please.
| 1 |
[removed]
| 2023-09-26T00:41:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16s9684/i_need_help_to_download_cerebrasslimpajama627b/
|
Dandelion_puff_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16s9684
| false | null |
t3_16s9684
|
/r/LocalLLaMA/comments/16s9684/i_need_help_to_download_cerebrasslimpajama627b/
| false | false |
self
| 1 | null |
List of Open Source UIs/Webapps for LLM Chatbots
| 1 |
[removed]
| 2023-09-26T01:23:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16sa3ye/list_of_open_source_uiswebapps_for_llm_chatbots/
|
snowfort_guy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sa3ye
| false | null |
t3_16sa3ye
|
/r/LocalLLaMA/comments/16sa3ye/list_of_open_source_uiswebapps_for_llm_chatbots/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '4QOXwLKEq0p2TziXztI-4F74kTNo0Fyo0so9C3MLtYI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?width=108&crop=smart&auto=webp&s=fe7799005c9e0797689be7ec5b418b05c8fbddf1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?width=216&crop=smart&auto=webp&s=851e937bdc8ae1b9a014843856a8a3fe8d791b22', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?width=320&crop=smart&auto=webp&s=90ac00703d584878ef9b24b3b0c42caa807f4549', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?width=640&crop=smart&auto=webp&s=286025a233d1ea0b9e5f4ecb8aa2c0cfed47bd15', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?width=960&crop=smart&auto=webp&s=e49525f072e6522f15819396deb06c42daee2062', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?width=1080&crop=smart&auto=webp&s=81d7d9ff907509705e2e001884b8fbf864f91b44', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6Z_icHf6SPUoRgwAugIrW-98lE1zmYl2N-xXgikq4ro.jpg?auto=webp&s=00a084ddbcf06a7d436cacb27684d718c2d85133', 'width': 1200}, 'variants': {}}]}
|
Llama 2 File Chat
| 1 |
Hi Everyone,
I’ve been working on a few projects and thought I’d share some of my work. I have a Llama 2 document query repo, a local fine tuning repo, and an LLM-Learning repo for research and news. I plan on continually updating the learning repo. Hope you find these resources useful!
| 2023-09-26T01:43:20 |
https://github.com/j-webtek/LLM-Learning
|
pinnapple-crush
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16saj28
| false | null |
t3_16saj28
|
/r/LocalLLaMA/comments/16saj28/llama_2_file_chat/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'rAttu5YKKDV_iZ1X3C01WLcRf5BFR2D2_d2o3C2fvRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?width=108&crop=smart&auto=webp&s=149b4a1764ec9e0b0dc99528f838423ec0a43975', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?width=216&crop=smart&auto=webp&s=9ccf4eb3121b46a904cc740c9b97a5d8618d7591', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?width=320&crop=smart&auto=webp&s=4df81a66ee0db3860a700164212e8acad2960ec9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?width=640&crop=smart&auto=webp&s=1ad46d84d8187d69504efc5db1b505b0d9b0fe67', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?width=960&crop=smart&auto=webp&s=1a866b69deeb5637929f671cc5f374a85e531b4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?width=1080&crop=smart&auto=webp&s=7a064a3ac0894c9357d087eed1e6a9457f951402', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E-EOOSCDUYQ15S-7YuFiIXZ6TH-5RX5TetXtrtL-0X8.jpg?auto=webp&s=1fdaf61e7fd6d5dfcd4b9d768195eac461ae00e0', 'width': 1200}, 'variants': {}}]}
|
|
Any coding LLMs that can run on Raspberry Pi 400? Or AM 7600 XT (intel i7 4th gen + 16 GB RAM)
| 1 |
Hi guys:
I am very new. I feel I made a post before but this time around I am actually wanting to proceed with using my Raspberry Pi (400) that I currently use as a learning/tinkering server for my pet projects and also to learn Laravel 10.
Is there a tutorial anyone can recommend for the Pi? I feel obligated to disclose that I am very new. I taught myself SysAdmin things (I think). I don't want to sound like a smart ass, and I still google a lot, but I am building a SaaS and manage my own server because it helps with costs.
I appreciate your help in advance everyone.
| 2023-09-26T02:26:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16sbhct/any_coding_llms_that_can_run_on_raspberry_pi_400/
|
shakespear94
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sbhct
| false | null |
t3_16sbhct
|
/r/LocalLLaMA/comments/16sbhct/any_coding_llms_that_can_run_on_raspberry_pi_400/
| false | false |
self
| 1 | null |
How well does Llama code work with pandas?
| 1 |
I'm looking for a local LLM for simple data analytics. I am unable to use GPT because this data is confidential. I've seen Code Llama and heard that it is trained heavily on Python; does anyone know anything about using it with pandas? If Code Llama doesn't work, are there any other LLMs you would recommend I look into? Thank you
| 2023-09-26T02:35:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16sbnx5/how_well_does_llama_code_work_with_pandas/
|
Infinite_Prior471
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sbnx5
| false | null |
t3_16sbnx5
|
/r/LocalLLaMA/comments/16sbnx5/how_well_does_llama_code_work_with_pandas/
| false | false |
self
| 1 | null |
qwen.cpp and BrowserQwen Chrome extension / Qwen-Agent
| 1 |
qwen.cpp: https://github.com/QwenLM/qwen.cpp
> C++ implementation of Qwen-LM
BrowserQwen chrome extension / Qwen-Agent: https://github.com/QwenLM/Qwen-Agent
> Agent framework and applications built upon Qwen, featuring Code Interpreter and Chrome browser extension.
| 2023-09-26T02:39:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16sbr5v/qwencpp_and_browserqwen_chrome_extension_qwenagent/
|
MoneroBee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sbr5v
| false | null |
t3_16sbr5v
|
/r/LocalLLaMA/comments/16sbr5v/qwencpp_and_browserqwen_chrome_extension_qwenagent/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '15iKDXB_UVQLco_OIReLhNfDn5fL5rOxvZODtl2R9Cw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?width=108&crop=smart&auto=webp&s=d85843bf2e2a245ff6c61cd97c26761b9820177d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?width=216&crop=smart&auto=webp&s=35c78584a6ee376e149c5b9f2bcb8a36a184686b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?width=320&crop=smart&auto=webp&s=cd546291fc94b72b427dd05c96ee0c7a82c79651', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?width=640&crop=smart&auto=webp&s=b6f7a124ed5ca040297f2d842edd02864dea6394', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?width=960&crop=smart&auto=webp&s=3b771a29a6b10b08d535598e07388b85601a5c97', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?width=1080&crop=smart&auto=webp&s=905dcb2142cf6fd9022b77b81f6e6b342fdead05', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HHOOpgT7UtqCoALgtPrznZfBvZmLfSIjguiJ3S0PtQA.jpg?auto=webp&s=976f9a5a00bb1384ea9171e6788aad44d14f25cf', 'width': 1200}, 'variants': {}}]}
|
MythoMax 13B and Xwin-LM 70B now available on RizzChat.app
| 1 |
Hi! One of the creators of [RizzChat](https://rizzchat.app) (formerly known as RizzGPT) here. First of all, we're big fans of LocalLlama and we think the community is awesome for the contribution to open source models. We've been lurking here for a bit, so it's definitely exciting to finally post (although it's scary)!
We've been noticing people have trouble deploying open source models for character chat and story generation, especially at the 70B level. This is why we deployed two open source models to RizzChat (in addition to GPT 3.5) that we think people will like:
\- Gryphe/MythoMax-L2-13b (Free)
\- Xwin-LM/Xwin-LM-70B-V0.1 (for subscribers for now since it costs quite a bit to make work)
The two questions we have are:
\- Are there any other models people have trouble deploying? We can look into adding them.
\- Is there any customization that you may want to see on RizzChat? We want to give users as much control as we can!
| 2023-09-26T03:52:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16sd8f8/mythomax_13b_and_xwinlm_70b_now_available_on/
|
theodormarcu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sd8f8
| false | null |
t3_16sd8f8
|
/r/LocalLLaMA/comments/16sd8f8/mythomax_13b_and_xwinlm_70b_now_available_on/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'y6vEX21EuoOOsdV1krydSOCBpJo9jyJyI6PvVG9R9jY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/T7Nown3mw0c5KRY51tbQU4qymr604S2A9CMB1YIInus.jpg?width=108&crop=smart&auto=webp&s=1bb026d45f2176b6a3887e9327ff4e6f854dbd79', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/T7Nown3mw0c5KRY51tbQU4qymr604S2A9CMB1YIInus.jpg?width=216&crop=smart&auto=webp&s=ed47721f40884aac676aed8f214bce0cc773e57a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/T7Nown3mw0c5KRY51tbQU4qymr604S2A9CMB1YIInus.jpg?width=320&crop=smart&auto=webp&s=c34c0514f20a34ce5326ef688afc5d4cd453574d', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/T7Nown3mw0c5KRY51tbQU4qymr604S2A9CMB1YIInus.jpg?width=640&crop=smart&auto=webp&s=449973de1edfd73174a378b7668347f02fe14365', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/T7Nown3mw0c5KRY51tbQU4qymr604S2A9CMB1YIInus.jpg?auto=webp&s=761959c5b4c7e07d8f0eb3bf65a7c7e1331fd3d8', 'width': 768}, 'variants': {}}]}
|
Tesla P40 or 3090 relative performance
| 1 |
Greetings
I am hoping to find out what the relative performance difference is between a p40 and a 3090 as the p40 is ~35% of the cost of a used 3090.
| 2023-09-26T04:21:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16sdt1i/tesla_p40_or_3090_relative_performance/
|
CryoMyst
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sdt1i
| false | null |
t3_16sdt1i
|
/r/LocalLLaMA/comments/16sdt1i/tesla_p40_or_3090_relative_performance/
| false | false |
self
| 1 | null |
LLM Performance (Llama 2 vs OpenAI)
| 1 |
Hi, this poll is for folks who have tested LLMs for their use case. Just want to understand the bigger picture: which LLM do you prefer if you have the same budget for both?
[View Poll](https://www.reddit.com/poll/16se969)
| 2023-09-26T04:45:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16se969/llm_performance_llama2_vs_opena_ai/
|
ianuvrat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16se969
| false | null |
t3_16se969
|
/r/LocalLLaMA/comments/16se969/llm_performance_llama2_vs_opena_ai/
| false | false |
self
| 1 | null |
Cross Encoders for Long Documents and Paragraphs
| 1 |
Hi guys good evening, hope all is well!
I need some opinions on using cross encoders for long text documents. I have a case where I have a list of documents called documents A and another list called documents B. Based on semantic similarity, I am developing a model that matches documents from list A to list B. Here is my current approach (a rough sketch in code follows the list):
​
First I use a Bi-Encoder to encode both lists of documents (using the sentence-transformers/gtr-t5-xl)
Then I use FAISS to get top 100 results from the Bi-Encoder
Finally use a Cross-Encoder to re-Rank the documents returned
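Roughly, the pipeline looks like this (a minimal sketch with placeholder documents; the cross-encoder model name is just an example, and it assumes sentence-transformers, faiss, and numpy are installed):

from sentence_transformers import SentenceTransformer, CrossEncoder
import faiss
import numpy as np

docs_a = ["grade 10 math book text ...", "..."]   # list A (placeholders)
docs_b = ["grade 11 math book text ...", "..."]   # list B (placeholders)

# 1) bi-encoder embeddings for both lists
bi_encoder = SentenceTransformer("sentence-transformers/gtr-t5-xl")
emb_a = bi_encoder.encode(docs_a, normalize_embeddings=True)
emb_b = bi_encoder.encode(docs_b, normalize_embeddings=True)

# 2) FAISS index over list B, query with list A, keep the top 100 candidates
index = faiss.IndexFlatIP(emb_b.shape[1])   # inner product = cosine on normalized vectors
index.add(np.asarray(emb_b, dtype="float32"))
scores, candidates = index.search(np.asarray(emb_a, dtype="float32"), k=min(100, len(docs_b)))

# 3) cross-encoder re-ranks the candidates for each document in A
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", max_length=512)
for i, cand_ids in enumerate(candidates):
    pairs = [(docs_a[i], docs_b[j]) for j in cand_ids]
    rerank_scores = cross_encoder.predict(pairs)
    best = cand_ids[int(np.argmax(rerank_scores))]
    print(f"A[{i}] best match: B[{best}]")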
Now my question: cross-encoders are usually good for a token limit of 1024 or less, so is there a better way to compare longer documents? Let's say I was comparing grade 10 math books in list A and grade 11 math books in list B, and I want to see which books in A and B are semantically similar to each other; what approach should I take?
Would moving to a vector database be the next best thing as I can keep adding to the database index as new documents are added?
​
Thanks, and would to hear your opinion
| 2023-09-26T05:24:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16seypf/cross_encoders_for_long_documents_and_pragagraphs/
|
haris525
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16seypf
| false | null |
t3_16seypf
|
/r/LocalLLaMA/comments/16seypf/cross_encoders_for_long_documents_and_pragagraphs/
| false | false |
self
| 1 | null |
How to handle 4 Gpu?
| 1 |
[removed]
| 2023-09-26T06:53:40 |
https://www.reddit.com/gallery/16sghc2
|
Nondzu
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16sghc2
| false | null |
t3_16sghc2
|
/r/LocalLLaMA/comments/16sghc2/how_to_handle_4_gpu/
| false | false | 1 | null |
|
ValueError on MacOS. Quantizations changed in GGJTv2. Can only convert unquantized GGML files older than GGJTv2. Sorry, your GGJTv1 file of type MOSTLY_Q5_1 is not eligible for conversion. · Issue #454 · LostRuins/koboldcpp
| 1 |
Workflow:
MacBook Pro 14" M1 16 Ram
Python 3.10.12
GNU Make 3.81
Apple clang version 15.0.0 (clang-1500.0.40.1)
Target: arm64-apple-darwin23.0.0
Thread model: posix
The model I tried: https://huggingface.co/TehVenom/Pygmalion-7b-4bit-Q5_1-GGML/tree/main
After running: convert-llama-ggml-to-gguf.py in koboldcpp
I get this error:
(ai) yuki@yuki koboldcpp % python convert-llama-ggml-to-gguf.py --input '/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/models/Pygmalion-7b.bin' --output Pygmalion-gguf.bin
Using config: Namespace(input=PosixPath('/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/models/Pygmalion-7b.bin'), output=PosixPath('Pygmalion-gguf.bin'), name=None, desc=None, gqa=1, eps='5.0e-06', context_length=2048, model_metadata_dir=None, vocab_dir=None, vocabtype='spm')
=== WARNING === Be aware that this conversion script is best-effort. Use a native GGUF model if possible. === WARNING ===
Note: If converting LLaMA2, specifying "--eps 1e-5" is required. 70B models also need "--gqa 8".
Scanning GGML input file
File format: GGJTv1 with ftype MOSTLY_Q5_1
Traceback (most recent call last):
File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 451, in
main()
File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 428, in main
offset = model.load(data, 0)
File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 194, in load
self.validate_conversion(hp.ftype)
File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 187, in validate_conversion
raise ValueError(f'{err} Sorry, your {self.file_format.name}v{self.format_version} file of type {ftype.name} is not eligible for conversion.')
ValueError: Quantizations changed in GGJTv2. Can only convert unquantized GGML files older than GGJTv2. Sorry, your GGJTv1 file of type MOSTLY_Q5_1 is not eligible for conversion.
Does anybody else have this issue? Any solutions?
| 2023-09-26T07:01:49 |
https://github.com/LostRuins/koboldcpp/issues/454
|
yukiarimo
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16sgm70
| false | null |
t3_16sgm70
|
/r/LocalLLaMA/comments/16sgm70/valueerror_on_macos_quantizations_changed_in/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '6ps4oNVwTocD9p52R8X8q_mqmww_AbheI3MTyFsJpAk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?width=108&crop=smart&auto=webp&s=9ee919c70b648c67056175ea2079e9bb3e0eb58f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?width=216&crop=smart&auto=webp&s=788ecf5d45a7934fbba24bde097a6f5238f77400', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?width=320&crop=smart&auto=webp&s=a51ef1810cdcc491b92dec69c9979a1bad0ff5aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?width=640&crop=smart&auto=webp&s=873e2d4af289d30f4b47ddda2d57e683372461d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?width=960&crop=smart&auto=webp&s=a8c334545dcf08cae13f231f79dbdac4856eb318', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?width=1080&crop=smart&auto=webp&s=0fcea4b89bd0a19cef2d09a5e606bbb3236b7879', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FVznSneg_pEgxTjSMmuCZeKPYLjet_VGQ8EMiKiYiKE.jpg?auto=webp&s=46833632fcbda6ddc7411ea32fd1f193b9832378', 'width': 1200}, 'variants': {}}]}
|
|
How can I use embeddings from llama.cpp using OpenAI in Python?
| 1 |
Hello,
I am having difficulties using llama.cpp deployed on one server, and I am attempting to apply the same code I use for GPT (OpenAI).
The issue is that I am unable to find any tutorials, and I am struggling to get embeddings or to make prompts work properly.
I would prefer not to rely on raw requests.post calls to hit the embeddings endpoint.
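For reference, the direction I am trying is roughly this (only a sketch: it assumes the server exposes an OpenAI-compatible /v1 API, e.g. the llama-cpp-python server, and the base URL and model name are placeholders):

import openai

# point the OpenAI client at the llama.cpp-compatible server instead of api.openai.com
openai.api_base = "http://my-llama-server:8000/v1"   # placeholder URL
openai.api_key = "sk-no-key-needed"                  # most local servers ignore the key

# embeddings
resp = openai.Embedding.create(
    model="local-model",                             # placeholder model name
    input=["first chunk of text", "second chunk of text"],
)
vectors = [item["embedding"] for item in resp["data"]]

# prompts / chat
chat = openai.ChatCompletion.create(
    model="local-model",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(chat["choices"][0]["message"]["content"])

Is this the right way to do it, or is there a cleaner approach?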
Thank you
P.S.: Couldn't get any help on SO.
| 2023-09-26T07:27:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16sh0vh/how_can_i_use_embeddings_from_llamacpp_using/
|
mmmanel00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sh0vh
| false | null |
t3_16sh0vh
|
/r/LocalLLaMA/comments/16sh0vh/how_can_i_use_embeddings_from_llamacpp_using/
| false | false |
self
| 1 | null |
How can I ask indexed documents in Elasticsearch from Llama2?
| 1 |
Hello,
I am using `openai.Embedding.create()` to create the vector list for posting to Elasticsearch.
How could I ask questions about the documents indexed this way? (I am using Llama 2 in place of GPT.)
I am using `load_qa_chain()` for the asking part, but it ends up in a loop of retries...
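The rough flow I am aiming for is below (only a sketch: the index name, the 'embedding' and 'text' field names, the embedding model, and the GGUF path are placeholders, and it assumes the vectors were indexed as a dense_vector field):

import openai
from elasticsearch import Elasticsearch
from langchain.llms import LlamaCpp
from langchain.schema import Document
from langchain.chains.question_answering import load_qa_chain

es = Elasticsearch("http://localhost:9200")   # placeholder

question = "What does the document say about X?"
q_emb = openai.Embedding.create(model="text-embedding-ada-002",
                                input=[question])["data"][0]["embedding"]

# cosine-similarity search against the stored vectors
body = {
    "size": 4,
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": "cosineSimilarity(params.q, 'embedding') + 1.0",
                "params": {"q": q_emb},
            },
        }
    },
}
hits = es.search(index="my-docs", body=body)["hits"]["hits"]
docs = [Document(page_content=h["_source"]["text"]) for h in hits]

# feed only the retrieved chunks to the QA chain, with Llama 2 as the LLM
llm = LlamaCpp(model_path="llama-2-13b-chat.Q4_K_M.gguf")   # placeholder path
chain = load_qa_chain(llm, chain_type="stuff")
print(chain.run(input_documents=docs, question=question))

Is this the sane way to wire it up, or is there something that avoids the retry loop?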
| 2023-09-26T07:28:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16sh1i8/how_can_i_ask_indexed_documents_in_elasticsearch/
|
mmmanel00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sh1i8
| false | null |
t3_16sh1i8
|
/r/LocalLLaMA/comments/16sh1i8/how_can_i_ask_indexed_documents_in_elasticsearch/
| false | false |
self
| 1 | null |
I am completely new to running local models on my PC. Are there any good user-friendly programs that can easily run LLMs without jumping through the hoops of installing an interface, properly categorizing them, and making sure I don't have to reinstall if I can't get them to work?
| 1 |
I've been trying for a week now to run a couple of LLaMA models through KoboldAI, but every time I follow the instructions on how to configure them, something goes wrong that makes them unable to chat. It's my fault obviously, but I'm considering backing off for now until there's a way that doesn't require me to keep reinstalling node.js and GitHub Desktop.
| 2023-09-26T09:40:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16sj634/i_am_completely_new_to_running_local_models_on_my/
|
False_Yesterday6699
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sj634
| false | null |
t3_16sj634
|
/r/LocalLLaMA/comments/16sj634/i_am_completely_new_to_running_local_models_on_my/
| false | false |
self
| 1 | null |
Any way to optimally use GPU for faster llama calls?
| 1 |
I created a Standard\_NC6s\_v3 (6 cores, 112 GB RAM, 336 GB disk) GPU compute instance in the cloud to run the Llama 2 13B model. Currently it takes \~10s for a single API call to Llama, and the hardware consumption looks like this (resource-usage screenshot omitted here):
Is there a way to consume more of the available memory and speed up the API calls? My model loading code:
from torch import cuda, bfloat16
import transformers

model_id = 'meta-llama/Llama-2-13b-chat-hf'
device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'

# set quantization configuration to load large model with less GPU memory
# this requires the `bitsandbytes` library
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=bfloat16
)

# begin initializing HF items, need auth token for these
hf_auth = 'token'
model_config = transformers.AutoConfig.from_pretrained(
    model_id,
    use_auth_token=hf_auth
)

model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map='auto',
    use_auth_token=hf_auth
)

model.eval()
print(f"Model loaded on {device}")
​
| 2023-09-26T10:11:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16sjprs/any_way_to_optimally_use_gpu_for_faster_llama/
|
todaysgamer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sjprs
| false | null |
t3_16sjprs
|
/r/LocalLLaMA/comments/16sjprs/any_way_to_optimally_use_gpu_for_faster_llama/
| false | false |
self
| 1 | null |
How can I ask indexed documents in Elasticsearch from Llama2?
| 1 |
[removed]
| 2023-09-26T10:21:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16sjvz2/how_can_i_ask_indexed_documents_in_elasticsearch/
|
mmmanel00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sjvz2
| false | null |
t3_16sjvz2
|
/r/LocalLLaMA/comments/16sjvz2/how_can_i_ask_indexed_documents_in_elasticsearch/
| false | false |
self
| 1 | null |
Llama 2 13/70b
| 1 |
Hi everybody,
I'm new to running LLMs locally. Does anybody have the data for how Llama 2's 13B and 70B models perform on consumer-grade GPUs like the RTX series and their laptop counterparts? (It would be great to have them on the go, lol.)
| 2023-09-26T10:21:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16sjwh3/llama_2_1370b/
|
Winter_Importance436
|
self.LocalLLaMA
| 2023-09-26T13:37:47 | 0 |
{}
|
16sjwh3
| false | null |
t3_16sjwh3
|
/r/LocalLLaMA/comments/16sjwh3/llama_2_1370b/
| false | false |
default
| 1 | null |
Seeking Advice on Training a Large Language Model on Legal Texts: Should the Model Learn All Versions or Only the Latest?
| 4 |
Hello everyone! I am looking to train a large language model on legal texts. I understand that legal texts may have different versions. I am unsure if I should train the model on all versions of the legal texts or only the most recent versions. Any advice or insight on this would be greatly appreciated. Thank you in advance for your help and suggestions!
| 2023-09-26T10:24:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16sjyi3/seeking_advice_on_training_a_large_language_model/
|
piske_usagi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sjyi3
| false | null |
t3_16sjyi3
|
/r/LocalLLaMA/comments/16sjyi3/seeking_advice_on_training_a_large_language_model/
| false | false |
self
| 4 | null |
Llama 2 data format
| 1 |
Hello Guys,
I have a pretty basic question: What is the data format for Llama 2 fine tuning?
I have raw text and question answer pairs, which I extracted from the raw text.
Now I would like to use both in order to train Llama 2.
So what is the exact format (maybe JSONL?) for the raw text and the Q&A pairs? How does it look?
Maybe you could provide an example, since I could not find one on the internet.
Yeah, I know, there should be tons of examples...
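For what it's worth, the convention I keep running into (not an official spec, just what many fine-tuning scripts accept) is JSONL with one example per line: the raw text as plain {"text": ...} records, and the Q&A pairs wrapped in the Llama 2 chat-style [INST] template. A tiny sketch of writing both files (the passages and Q&A pairs are placeholders):

import json

raw_texts = ["First raw passage ...", "Second raw passage ..."]       # placeholders
qa_pairs = [{"question": "What is X?", "answer": "X is ..."}]         # placeholders

# plain text records (continued pre-training style)
with open("raw_text.jsonl", "w") as f:
    for t in raw_texts:
        f.write(json.dumps({"text": t}) + "\n")

# Q&A records, formatted with the chat-style instruction template
with open("qa.jsonl", "w") as f:
    for pair in qa_pairs:
        text = f"<s>[INST] {pair['question']} [/INST] {pair['answer']} </s>"
        f.write(json.dumps({"text": text}) + "\n")

Can anyone confirm whether this is what the common trainers expect, or correct me?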
| 2023-09-26T10:25:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16sjyvc/llama_2_data_format/
|
Enkay55
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sjyvc
| false | null |
t3_16sjyvc
|
/r/LocalLLaMA/comments/16sjyvc/llama_2_data_format/
| false | false |
self
| 1 | null |
Any help on using knowledge distillation on LLMs like Llama2 or Qwen?
| 1 |
How can we do distillation on an LLM and create a small network for a specific task?
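In case it helps frame answers, the textbook recipe as I understand it (a rough sketch, not tested at scale; the teacher name, student config, and data are placeholders) is to run the big model as a frozen teacher and train a small student with the same vocabulary on a KL-divergence loss against the teacher's logits:

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaConfig, LlamaForCausalLM

teacher_name = "meta-llama/Llama-2-7b-hf"   # placeholder teacher
tok = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()

# small student with the same vocab so the logits line up token-for-token
student_cfg = LlamaConfig(vocab_size=teacher.config.vocab_size,
                          hidden_size=1024, num_hidden_layers=8,
                          num_attention_heads=8, intermediate_size=2816)
student = LlamaForCausalLM(student_cfg)

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
temperature = 2.0

batch = tok(["Example task-specific text ..."], return_tensors="pt")   # placeholder data
with torch.no_grad():
    t_logits = teacher(**batch).logits
s_logits = student(**batch).logits

# soft-label loss: the student matches the teacher's token distribution
kd_loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=-1),
                   F.softmax(t_logits / temperature, dim=-1),
                   reduction="batchmean") * temperature ** 2
optimizer.zero_grad()
kd_loss.backward()
optimizer.step()

Is that the right mental model, and are there libraries that package this up?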
| 2023-09-26T10:32:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16sk3nt/any_help_on_using_knowledge_distillation_on_llms/
|
s1lv3rj1nx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sk3nt
| false | null |
t3_16sk3nt
|
/r/LocalLLaMA/comments/16sk3nt/any_help_on_using_knowledge_distillation_on_llms/
| false | false |
self
| 1 | null |
A poor man's guide to fine-tuning Llama 2
| 1 | 2023-09-26T10:32:54 |
https://duarteocarmo.com/blog/fine-tune-llama-2-telegram
|
iLikePortugueseTarts
|
duarteocarmo.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16sk41q
| false | null |
t3_16sk41q
|
/r/LocalLLaMA/comments/16sk41q/a_poor_mans_guide_to_finetuning_llama_2/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Am5mmagULfCoy0Ivkxn7q1k_CEf71uWHhRTrdsm9sp4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?width=108&crop=smart&auto=webp&s=0962ec694028aa0fc96044966df24f485b6f8bcb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?width=216&crop=smart&auto=webp&s=234054453e336d58253221ae143faccc8646042c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?width=320&crop=smart&auto=webp&s=69e8bc8dabcd3d197949111102f7f11542a7a636', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?width=640&crop=smart&auto=webp&s=a1365b63b23f411d1759a3ce6f9cfa9e2b2b1622', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?width=960&crop=smart&auto=webp&s=5409430a062284a88149de185a5b4db7c156d1dd', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?width=1080&crop=smart&auto=webp&s=55fa00b319dad65281f1447ccefdc37afb2f6077', 'width': 1080}], 'source': {'height': 1148, 'url': 'https://external-preview.redd.it/UpXeterXtWugBrkIdNjpj55JuOy1waDzBzLHb1kicDM.jpg?auto=webp&s=a8c70d8aa5f9faeda322e24832604db7048b65e1', 'width': 1148}, 'variants': {}}]}
|
||
pls, recommend me low parameter models
| 1 |
I'm searching for low-parameter models.
What are the good ones?
| 2023-09-26T10:40:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16sk934/pls_recommend_me_low_parameter_models/
|
Fit_Extent712
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sk934
| false | null |
t3_16sk934
|
/r/LocalLLaMA/comments/16sk934/pls_recommend_me_low_parameter_models/
| false | false |
self
| 1 | null |
Any way to get Flash Attention 2 working on oobabooga using Windows 10?
| 1 |
Getting the notification when using ExLlamaV2:
2023-09-26 12:42:31 WARNING:You are running ExLlamaV2 without flash-attention. This will cause the VRAM usage to be a lot higher than it could be.
Try installing flash-attention following the instructions here: [https://github.com/Dao-AILab/flash-attention#installation-and-features](https://github.com/Dao-AILab/flash-attention#installation-and-features)
Now the one-click installers don't use it yet afaik. Also, there is one post on github ([https://github.com/oobabooga/text-generation-webui/issues/4015](https://github.com/oobabooga/text-generation-webui/issues/4015)) where people have tried to no avail. I don't know how big the impact of flash attention 2 is on VRAM usage, but it would be great to be able to run a 70B model on a singular 3090.
| 2023-09-26T10:46:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16skcwm/any_way_to_get_flash_attention_2_working_on/
|
LetThemSing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16skcwm
| false | null |
t3_16skcwm
|
/r/LocalLLaMA/comments/16skcwm/any_way_to_get_flash_attention_2_working_on/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'S_gt1jyPRBUDNrETclqceG9C-LnlgjHiMxfknzvZWTk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?width=108&crop=smart&auto=webp&s=fe6cb819a58629abd6070d5fcd515469a96f9a65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?width=216&crop=smart&auto=webp&s=2f1a488b34eea0eeaaad262d6eb38d7ea726690f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?width=320&crop=smart&auto=webp&s=76b145da914be26d2d601b5b34585413b2a24935', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?width=640&crop=smart&auto=webp&s=6099cdd844ccc385654c42587369d434892b9c0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?width=960&crop=smart&auto=webp&s=af629b1b22e48009f527de145d0a87ea0f3e35ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?width=1080&crop=smart&auto=webp&s=879d24168e8a5b0c5894c55f27c6ed76993ed59b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lsMlSvcQq-XUAXCKYWHW_aLGSIxEAnuj3Dv3uAU2lEE.jpg?auto=webp&s=1a38daa5d16350f0b84240c5d20321abde3ba117', 'width': 1200}, 'variants': {}}]}
|
[Research] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening
| 27 | 2023-09-26T10:48:57 |
https://arxiv.org/abs/2308.07707
|
JustAddMoreLayers
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
16skep1
| false | null |
t3_16skep1
|
/r/LocalLLaMA/comments/16skep1/research_fast_machine_unlearning_without/
| false | false | 27 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
|
||
Open source Q&A chatbot UI Recommendation?
| 1 |
Any recommendations for open-source repos that support a web-based chat UI where you can upload docs, PDFs, links, etc.? So far I found https://github.com/openchatai/OpenChat but it doesn't support Llama, Claude, etc. There's also https://github.com/h2oai/h2ogpt but their Gradio UI is overly complicated (meant for technical people) and not user friendly.
| 2023-09-26T12:11:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16sm5vg/open_source_qa_chatbot_ui_recommendation/
|
simpleyuji
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sm5vg
| false | null |
t3_16sm5vg
|
/r/LocalLLaMA/comments/16sm5vg/open_source_qa_chatbot_ui_recommendation/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '0d21CuItKmXqPALMSaUB4MycOoVYeUcnS38KeZlVWm0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?width=108&crop=smart&auto=webp&s=85290475d8a78efd835d11c92e7e75422e53cc1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?width=216&crop=smart&auto=webp&s=5b05333932b164a4a799d4af628d9e2b40f06e92', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?width=320&crop=smart&auto=webp&s=514300d549b37931c1179671be2f2e83abec804a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?width=640&crop=smart&auto=webp&s=47f12c81683cd2057e87cd2709ccdaedced81d5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?width=960&crop=smart&auto=webp&s=0e37551cf19aef3da4af2d213776f46622556eab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?width=1080&crop=smart&auto=webp&s=e647607e7a0c1b1df053867035d5d49711991e5d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GiZQRawBeUSrMokJcSRpX42YzPtqC0uLyX9679e1D20.jpg?auto=webp&s=9b7e3231da4466da4f78b07c5cd8636cfaed0558', 'width': 1200}, 'variants': {}}]}
|
Why does llama2-7b-chat on the perplexity lab demo seem much smarter than my local verson?
| 1 |
[removed]
| 2023-09-26T12:29:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16smkaf/why_does_llama27bchat_on_the_perplexity_lab_demo/
|
CheatCodesOfLife
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16smkaf
| false | null |
t3_16smkaf
|
/r/LocalLLaMA/comments/16smkaf/why_does_llama27bchat_on_the_perplexity_lab_demo/
| false | false |
self
| 1 | null |
Simple LLM usage for beginners
| 1 |
Setting up LLMs can be hard sometimes; that's why I'm working on a tool to simplify LLM usage for beginners.
The goal is to make anyone able to use command as simple as `llm -m wizardlm-13b-q4_0 "<prompt>"`. That would automatically download and setup models, or use the apis if available (ie: for openai models and others).
It is currently working well on my setup (Apple M2, 16GB RAM), but now I'm looking for feedback on different setups.
https://github.com/snwfdhmp/llm
| 2023-09-26T12:54:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16sn5tj/simple_llm_usage_for_beginners/
|
snwfdhmp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sn5tj
| false | null |
t3_16sn5tj
|
/r/LocalLLaMA/comments/16sn5tj/simple_llm_usage_for_beginners/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'REn-wIZG1CLfDjKoKWOB49SmNHlhQrEbzusWaC10Pd0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?width=108&crop=smart&auto=webp&s=313b3cf2452053f9beba7375576ad23139d663a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?width=216&crop=smart&auto=webp&s=433079c0ba0e07b33dd43a1bd3fe1463941b72f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?width=320&crop=smart&auto=webp&s=f5f9f2cb9b012a1a2c8672cd181fb7f369aebb38', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?width=640&crop=smart&auto=webp&s=973dd76687fe053c678ee252ca7d0cbcaf75cb39', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?width=960&crop=smart&auto=webp&s=3ba5a35a7dacf68cb878230c5395b32490b4f3aa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?width=1080&crop=smart&auto=webp&s=f35f47b504422da4533c2ed61b672da56fe35983', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5T33JLAJGLvkZrOMbevyOTI8tFh81nncj-pVLE-RbIQ.jpg?auto=webp&s=f458720d4d1be57cf2bf3f64b26fdd7ac19d18e0', 'width': 1200}, 'variants': {}}]}
|
A Local LLM Translation
| 1 |
[removed]
| 2023-09-26T12:58:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16sn8yr/a_local_llm_translation/
|
Hussei911
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sn8yr
| false | null |
t3_16sn8yr
|
/r/LocalLLaMA/comments/16sn8yr/a_local_llm_translation/
| false | false |
self
| 1 | null |
Transfer learning on LLaMa-based multimodal mPLUG-Owl for meme ratings?
| 1 |
So I had this idea to build a reinforcement learning-based dank meme generator, and the first part I wanted to explore was building a meme critic.
I'm going to have access to a huge meme dataset containing images (to be fed into the ViT), captions (into the text transformer), and ratings ranging from 0 to 1.
Now, I've done transfer learning before with CNNs, but it seems to be less of a thing for LLMs, especially compared to fine-tuning.
Is there a reasonable way for me to rip out a few of the final layers/components of mPLUG-Owl (or any LLaMA-based model) and replace them with a few dense layers and a sigmoid, optimizing for binary cross-entropy?
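Concretely, the shape of what I am imagining (a rough sketch with a plain text-only LLaMA backbone and fake data; swapping in mPLUG-Owl's fused image+text features would be the extra step, and the backbone name and sizes are placeholders) is: freeze the backbone, mean-pool the last hidden states, and train a small head with a sigmoid via BCE:

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

backbone_name = "meta-llama/Llama-2-7b-hf"   # placeholder backbone
tok = AutoTokenizer.from_pretrained(backbone_name)
tok.pad_token = tok.eos_token
backbone = AutoModel.from_pretrained(backbone_name)
for p in backbone.parameters():
    p.requires_grad = False   # transfer learning: freeze the LLM

class MemeCritic(nn.Module):
    def __init__(self, backbone, hidden=4096):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(),
                                  nn.Linear(256, 1))   # sigmoid applied via BCEWithLogits

    def forward(self, input_ids, attention_mask):
        h = self.backbone(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1)
        pooled = (h * mask).sum(1) / mask.sum(1)   # mean-pool over real tokens
        return self.head(pooled).squeeze(-1)

model = MemeCritic(backbone)
loss_fn = nn.BCEWithLogitsLoss()   # = sigmoid + binary cross-entropy, works with 0-1 ratings

batch = tok(["when the code compiles on the first try"], return_tensors="pt", padding=True)
ratings = torch.tensor([0.87])   # placeholder 0-1 rating
loss = loss_fn(model(batch["input_ids"], batch["attention_mask"]), ratings)
loss.backward()

Does that sound like a reasonable starting point, or is full fine-tuning/LoRA the better route for multimodal backbones?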
| 2023-09-26T13:10:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16snkvg/transfer_learning_on_llamabased_multimodal/
|
FM-2070
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16snkvg
| false | null |
t3_16snkvg
|
/r/LocalLLaMA/comments/16snkvg/transfer_learning_on_llamabased_multimodal/
| false | false |
self
| 1 | null |
Quick question on LORAs, they do primarily style changes right and not really substance?
| 1 |
I am reading up on this and my impression is that this is good for changing the behavior/style of a model and not great at adding new information to a model.
There is a lot of discussion on fine-tuning/training, and I am just trying to "eliminate" this method as a way to add information to an LLM. I am reading all these low-level explanations of LoRA, but **I am just missing the high-level application of this technique.**
**Am I correct that LoRA is a good way to "color" the output of a model and not the right tool to teach a model the contents of a book?**
| 2023-09-26T13:53:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16solr6/quick_question_on_loras_they_do_primarily_style/
|
drywallfan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16solr6
| false | null |
t3_16solr6
|
/r/LocalLLaMA/comments/16solr6/quick_question_on_loras_they_do_primarily_style/
| false | false |
self
| 1 | null |
Anyone running a Tesla M40 or P4? What is your experience like?
| 1 |
Title. Curious to see how these old GPUs are faring in today's world. What models are you running, and what kind of speed are you getting? I have one on hand as well as a few P4s, and I can't decide what to do with them.
| 2023-09-26T13:54:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16somxc/anyone_running_a_tesla_m40_or_p4_what_is_your/
|
ziggo0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16somxc
| false | null |
t3_16somxc
|
/r/LocalLLaMA/comments/16somxc/anyone_running_a_tesla_m40_or_p4_what_is_your/
| false | false |
self
| 1 | null |
I made an HF Space for Qwen-14b-Chat. Currently 14B Model SOTA on benchmarks. So now you can test it without need using Modelscope or need to download it locally. I want to say thanks to HF for providing GPU grant for this space!
| 1 | 2023-09-26T14:15:24 |
https://huggingface.co/spaces/artificialguybr/qwen-14b-chat-demo
|
Seromelhor
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
16sp5z6
| false | null |
t3_16sp5z6
|
/r/LocalLLaMA/comments/16sp5z6/i_made_an_hf_space_for_qwen14bchat_currently_14b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'qcR4HzNSvJHueB4zJq1iLMXulqiS44BCj6xsFztscBU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?width=108&crop=smart&auto=webp&s=31ade0c0ef89e5a1522ab49fe4b84876073ec417', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?width=216&crop=smart&auto=webp&s=d466df8d03aa58753c3cb33a244697e41bd14950', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?width=320&crop=smart&auto=webp&s=52d16e56a8feb0d210864ee0bdcb8e3d18a85152', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?width=640&crop=smart&auto=webp&s=139c6d554e60a1bbf638e4b117478cbcd75352f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?width=960&crop=smart&auto=webp&s=8937632cf7a7aceb9aab112f74f69d7d7a2e8a89', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?width=1080&crop=smart&auto=webp&s=d7d2eff00f3c4356bf3af1436961fe602f3862a3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zm6i-kakMohLFOD0UHS0XNZnFtVwCMUNf7RgDm1EqCA.jpg?auto=webp&s=cfb944b7d0d9126e7548babe12d71470bce09ea2', 'width': 1200}, 'variants': {}}]}
|
||
Can fine-tuning teach the model some new facts?
| 1 |
I've read a lot about model fine-tuning and learned that fine-tuning is about the output form rather than the content. Yet recently I've heard, from at least two people in the industry, that a model can remember information during the fine-tuning process, which would make it an actual fact-learning process.
Can anyone shed some light on this: is it possible? Are there any specific setups or model/adapter architectures that can provide that?
| 2023-09-26T14:56:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16sq8x4/can_finetuning_teach_the_model_some_new_facts/
|
Greg_Z_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sq8x4
| false | null |
t3_16sq8x4
|
/r/LocalLLaMA/comments/16sq8x4/can_finetuning_teach_the_model_some_new_facts/
| false | false |
self
| 1 | null |
When will they update datasets with post 2021 events etc.
| 1 |
Any information on this? Would all the models have to be retrained, with something like a llama-3-2023-november release or similar? Sorry if it's a dumb question, dying to know.
| 2023-09-26T15:28:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16sr2s5/when_will_they_update_datasets_with_post_2021/
|
Jazzlike-Fun9923
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sr2s5
| false | null |
t3_16sr2s5
|
/r/LocalLLaMA/comments/16sr2s5/when_will_they_update_datasets_with_post_2021/
| false | false |
self
| 1 | null |
vLLM - how to share vRAM
| 1 |
I want to try serving a 70B model using vLLM. However, the model doesn't fit on one GPU.
How do I use vLLM to host the model on more than one GPU with shared VRAM?
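For context, the closest thing I have found so far (if I am reading the vLLM docs right) is that vLLM shards the weights across GPUs with tensor parallelism rather than literally pooling VRAM, e.g. on a 2-GPU box (model name is a placeholder):

from vllm import LLM, SamplingParams

# shard the 70B weights across 2 GPUs (tensor parallelism)
llm = LLM(model="meta-llama/Llama-2-70b-chat-hf", tensor_parallel_size=2)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)

The API server seems to take an equivalent --tensor-parallel-size flag. Is that all there is to it, or are there gotchas with 70B?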
| 2023-09-26T15:46:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16srjd9/vllm_how_to_share_vram/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16srjd9
| false | null |
t3_16srjd9
|
/r/LocalLLaMA/comments/16srjd9/vllm_how_to_share_vram/
| false | false |
self
| 1 | null |
Evaluating Language Competence of Llama 2-based models
| 1 |
I recently tried to improve Llama 2's ability to speak German. I totally failed, but it got me into benchmarking language capabilities.
I haven't found much on how to use sacreBLEU with LLMs except [this](https://huggingface.co/spaces/evaluate-metric/sacrebleu), which is why I'm sharing my approach. It applies to any language pair, not only English->German; maybe it's useful to some of you.
​
* load the dataset and generate 5 shot prompts:
​
from datasets import load_dataset

ds=load_dataset(path="wmt20_mlqe_task1", name="en-de",split="test")
ds=ds["translation"]
ds_examples=ds[0:5]
ds_predict=ds[5:]
prompt_template="English: {en}\nGerman: {de}"
prompt_examples = "\n\n".join([prompt_template.format(**row) for row in ds_examples])
* 5-shot prompt llama to generate translation
​
from tqdm import tqdm

# llama-precise sampling preset; `pipeline` below is a transformers text-generation
# pipeline built beforehand for the model under test
gen_config = {
"temperature": 0.7,
"top_p": 0.1,
"repetition_penalty": 1.18,
"top_k": 40,
"do_sample": True,
"max_new_tokens": 100,
"pad_token_id": pipeline.tokenizer.eos_token_id,
}
predictions=[]
for row in tqdm(ds_predict):
prompt=prompt_examples + "\n\n" + prompt_template.format(en=row["en"], de="")[:-1]
prediction=pipeline(prompt, **gen_config)[0]["generated_text"][len(prompt)+1:]
if "\n" in prediction:
prediction=prediction.split("\n")[0]
predictions.append(prediction)
references=[row["de"] for row in ds_predict]
* calculate BLEU score (using HF sacreBLEU metric)
​
import evaluate

sacrebleu = evaluate.load("sacrebleu")
sacrebleu_results=sacrebleu.compute(predictions=predictions, references=references)
print(sacrebleu_results["score"])
performance of the base models:
​
Figure (results chart): 995 English-to-German translations, 5-shot prompts; the 70B model was loaded in 4-bit (VRAM poor).
few more words: [https://medium.com/@geronimo7/evaluating-language-competence-of-llama-2-based-models-the-bleu-score-d44c651a5e58](https://medium.com/@geronimo7/evaluating-language-competence-of-llama-2-based-models-the-bleu-score-d44c651a5e58)
code, including a version using batched inference: [https://github.com/geronimi73/sacrebleu-llama](https://github.com/geronimi73/sacrebleu-llama)
| 2023-09-26T16:07:37 |
https://www.reddit.com/r/LocalLLaMA/comments/16ss31u/evaluating_language_competence_of_llama_2based/
|
HatEducational9965
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ss31u
| false | null |
t3_16ss31u
|
/r/LocalLLaMA/comments/16ss31u/evaluating_language_competence_of_llama_2based/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'LgJSiMNTY-VyGX22xZulchPaUK21RWfTxsHocAF4MKA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?width=108&crop=smart&auto=webp&s=4eae4b7cc7dcf066999f8f69ea0dafb3514acfe6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?width=216&crop=smart&auto=webp&s=6290a73da2a2b955c38a04d3e7d33abd9201c8aa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?width=320&crop=smart&auto=webp&s=0972afb1f034976e2fad1320d0bf26d3854e71a1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?width=640&crop=smart&auto=webp&s=5ad6df7aefbd364fedc08367b93192a1bfa55a45', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?width=960&crop=smart&auto=webp&s=74ae1d5ceaa5a1ee5a7caa2e98ca9a0ff3b43570', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?width=1080&crop=smart&auto=webp&s=900af7596e16cc4216e5956ed0d40353d74e235c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NyjtrGU5yEIrKKt1OWZIh0iTeLk_dHMeR5iHBdF_GJ0.jpg?auto=webp&s=4443dbc877cbafcfd9d28e6ad53e0db3c61fc7ca', 'width': 1200}, 'variants': {}}]}
|
|
Llama 2 Rate Limits
| 1 |
I am using GPT-3.5 Turbo and am running into some rate-limit constraints. I have filled out OpenAI's Rate Limit Increase Form and my limits were marginally increased, but I still need more. Does Llama 2 also have a rate limit on requests or tokens? Thanks in advance for the help!
| 2023-09-26T16:16:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16ssb9p/llama_2_rate_limits/
|
Widowmaker23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ssb9p
| false | null |
t3_16ssb9p
|
/r/LocalLLaMA/comments/16ssb9p/llama_2_rate_limits/
| false | false |
self
| 1 | null |
Could a Mac run a 546B model?
| 1 |
[removed]
| 2023-09-26T16:26:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16ssk0q/could_a_mac_run_a_546b_model/
|
medley-400
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ssk0q
| false | null |
t3_16ssk0q
|
/r/LocalLLaMA/comments/16ssk0q/could_a_mac_run_a_546b_model/
| false | false |
self
| 1 | null |
Managing Follow up question for retriever based chatbot
| 1 |
Looking for some suggestions on how to handle follow-up questions for a retriever-based chatbot. Data for an answer is queried from its own specific embeddings, and I have multiple sets. Direct queries are fine, but follow-ups are a challenge. I cannot always fall back to the previous intent, as it can be wrong. Any suggestions would be of great help.
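The only pattern I have found so far (sketch below; the llm call, the retriever, and the prompt wording are placeholders) is to have the model rewrite each follow-up into a standalone question using the chat history before hitting the retriever, instead of guessing the previous intent:

CONDENSE_PROMPT = """Given the conversation so far and a follow-up question,
rewrite the follow-up as a single standalone question. Keep all named entities.

Conversation:
{history}

Follow-up: {question}
Standalone question:"""

def condense(llm, history, question):
    # history: list of (user, assistant) turns; llm: any text-in/text-out callable
    hist_text = "\n".join(f"User: {u}\nBot: {a}" for u, a in history)
    return llm(CONDENSE_PROMPT.format(history=hist_text, question=question)).strip()

def answer(llm, retriever, history, question):
    standalone = condense(llm, history, question) if history else question
    docs = retriever(standalone)   # embedding search runs on the rewritten query
    context = "\n\n".join(docs)
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {standalone}")

Is this condense-then-retrieve approach what people use in practice, or is there something better?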
| 2023-09-26T16:38:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16ssvb1/managing_follow_up_question_for_retriever_based/
|
Optimal_Original_815
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ssvb1
| false | null |
t3_16ssvb1
|
/r/LocalLLaMA/comments/16ssvb1/managing_follow_up_question_for_retriever_based/
| false | false |
self
| 1 | null |
How do you send data in batches to an open source LLM to be processed on GPU
| 1 |
Say, for example, I am doing sentiment analysis using Llama 2. I have daily news articles for which I want daily sentiment ratings.
Rather than looping day by day in my Python script or prompt template, how do I send, say, 30 days of daily news in a batch to Llama 2 and get back 30 daily sentiment ratings in one go, so that I am fully utilizing my GPU resources?
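For reference, the closest I have gotten (a sketch; the model name, prompt wording, and toy articles are placeholders, and I have not benchmarked throughput) is to hand the whole month to a transformers pipeline with a batch_size, so the GPU sees several prompts per forward pass:

import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-13b-chat-hf",   # placeholder model
    device_map="auto",
    torch_dtype=torch.float16,
)
generator.tokenizer.pad_token_id = generator.tokenizer.eos_token_id
generator.tokenizer.padding_side = "left"

articles = [f"Day {i}: markets did something interesting ..." for i in range(1, 31)]
prompts = [
    f"[INST] Rate the sentiment of this news from 1 (very negative) to 5 (very positive). "
    f"Reply with only the number.\n\n{a} [/INST]" for a in articles
]

# one call, GPU-sized batches under the hood
results = generator(prompts, batch_size=8, max_new_tokens=4, do_sample=False,
                    return_full_text=False)
ratings = [r[0]["generated_text"].strip() for r in results]
print(ratings)

Is a pipeline with batch_size the right tool here, or should I be looking at vLLM/TGI-style servers instead?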
| 2023-09-26T16:44:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16st1qf/how_do_you_send_data_in_batches_to_an_open_source/
|
redd-dev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16st1qf
| false | null |
t3_16st1qf
|
/r/LocalLLaMA/comments/16st1qf/how_do_you_send_data_in_batches_to_an_open_source/
| false | false |
self
| 1 | null |
Getting wildly different perplexity test results between MacOS and Windows
| 1 |
So the other day I finally got perplexity testing working in Oobabooga, and decided to give it a go. I have a Windows machine and a Mac Studio, so I put both to work.
Over the course of the past 2 days, I've seen some really crazy results. At first I thought it was a context/rope scaling thing, but that's not it at all. Check this out:
Both systems, Mac and Win, are using the exact same setup. Oobabooga using Llamacpp-HF for GGUFs or ExLlama-HF for GPTQ (windows only). I use the same settings and dataset (wikitext) for the tests.
* **Dataset: Wikitext**
* **Stride: 512**
* **max\_length: 0 (uses max for the model)**
**Windows**
\-------------------
**TheBloke\_Synthia-34B-v1.2-GPTQ\_gptq-4bit-32g-actorder\_True**
* Context: 4096
* Rope Scale: 1,000,000
* Perplexity: **5.4562811851501465**
**TheBloke\_Synthia-34B-v1.2-GPTQ\_gptq-4bit-32g-actorder\_True**
* Context: 8192
* Rope Scale: 1,000,000
* Perplexity: **5.456320762634277**
**TheBloke\_Synthia-34B-v1.2-GPTQ\_gptq-4bit-32g-actorder\_True**
* Context: 4096
* Rope Scale: 0 *(defaults to 10,000 in Ooba I think)*
* Perplexity: **5.456327438354492**
**mythomax-l2-13b.Q8\_0.gguf**
* Context: 4096
* Rope Scale: 0 *(defaults to 10,000 in Ooba I think)*
* Perplexity: **4.949336528778076**
​
**MacOS**
\-------------------
**synthia-34b-v1.2.Q8\_0.gguf**
* Context: 16384
* Rope Scale: 1,000,000
* Perplexity: **1.5229930877685547**
**mythomax-l2-13b.Q8\_0.gguf**
* Context: 8192
* Alpha: 2
* Rope Scale: 20,000
* Perplexity: **1.4898762702941895**
​
Has anyone seen anything like this before? All my settings are identical. The versions of Ooba are the same, other than the Mac using Metal inference while the Windows machine uses CUDA with an RTX 4090.
I'm completely stumped.
| 2023-09-26T16:45:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16st2l5/getting_wildly_different_perplexity_test_results/
|
LearningSomeCode
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16st2l5
| false | null |
t3_16st2l5
|
/r/LocalLLaMA/comments/16st2l5/getting_wildly_different_perplexity_test_results/
| false | false |
self
| 1 | null |
Ollama - front-end for running local models, release v0.1.0
| 10 | 2023-09-26T17:07:59 |
https://github.com/jmorganca/ollama/releases/tag/v0.1.0
|
bandman614
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16stnpn
| false | null |
t3_16stnpn
|
/r/LocalLLaMA/comments/16stnpn/ollama_frontend_for_running_local_models_release/
| false | false | 10 |
{'enabled': False, 'images': [{'id': 'aI8JDBlabiZTdyh2CmhRcXkr-3TtsO28nAHtJiBtNpw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=108&crop=smart&auto=webp&s=8f920dd8fb82a57b2d8d698d10ce34eeb60f2aa5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=216&crop=smart&auto=webp&s=562addb26e4d8a4286c1819d4dd075c8dda77e1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=320&crop=smart&auto=webp&s=29fcfce5fe97dc7689eea7c1e9166b308eee5110', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=640&crop=smart&auto=webp&s=713cde801e7cf9ac8dd9120a8fefe2b67b836fcf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=960&crop=smart&auto=webp&s=2011c120f34c7e67ed17c2926ffac65835b2416d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=1080&crop=smart&auto=webp&s=539f099327f86f4e03283841e8688eadf3ee00b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?auto=webp&s=40bb48c34e3ee0f89a016c35d6dba238b5cc4899', 'width': 1200}, 'variants': {}}]}
|
||
Making Food
| 1 |
I'm hungry, and there are a few random ingredients in the fridge... I want to make some cheap Chinese-American/Western food and don't want to get food poisoning. Any good LLMs/datasets to toy with?
Do you find that Llama as a base model already has enough culinary information?
| 2023-09-26T17:12:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16strqu/making_food/
|
Aaaaaaaaaeeeee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16strqu
| false | null |
t3_16strqu
|
/r/LocalLLaMA/comments/16strqu/making_food/
| false | false |
self
| 1 | null |
How to Approach LLM Learning with Career Development in Mind?
| 1 |
Hey folks!
I'm a data scientist by trade, and I want to pivot my career towards LLM development for private companies in anticipation of that transition (with the understanding that this is still a very new field).
I've seen a few job postings, but there are obviously no certifications in this space yet.
Where should I focus my learning, so that I can be competitive for new postings?
My thinking was:
* Cloud Certifications (AWS): Certified Machine Learning / Certified Developer
* Create/manage a GitHub repo that details a model + LoRAs for some specific tasks.
Any suggestions or feedback?
| 2023-09-26T17:14:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16sttws/how_to_approach_llm_learning_with_career/
|
DiamondBadge
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sttws
| false | null |
t3_16sttws
|
/r/LocalLLaMA/comments/16sttws/how_to_approach_llm_learning_with_career/
| false | false |
self
| 1 | null |
Progress on Reproducing Phi-1/1.5
| 74 |
Hey All,
Just wanted to share that I am churning out a large dataset that is meant to ~reproduce the synthetic textbook dataset that Microsoft used to train phi-1.5. I'm currently generating ~1bn tokens / day and will be continually streaming the results here - [https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
​
I will eventually document all of my learnings from the process, as I think this will be useful for the OSS community. I have already had a number of painful / useful learnings over the last several days while scaling up these efforts.
​
I am reporting my prelim results here now because I believe the dataset is in a good enough state that it could give interesting results if used in fine-tuning a 13B or 34B model. I am going to start working on this soon; if anyone would like to collaborate, please DM me.
Hopefully this and other efforts around synthetic data will be valuable for the OSS LLM community.
​
| 2023-09-26T17:24:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16su32p/progress_on_reproducing_phi115/
|
docsoc1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16su32p
| false | null |
t3_16su32p
|
/r/LocalLLaMA/comments/16su32p/progress_on_reproducing_phi115/
| false | false |
self
| 74 |
{'enabled': False, 'images': [{'id': 'SRQqCJLLBWenZA-Vk4NmfhLIYoGP4qO_YhRZNcIHsgE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=108&crop=smart&auto=webp&s=5c63eeb0d15d3381f0bbadec162cca2f9ed72747', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=216&crop=smart&auto=webp&s=0e003282dbbd320cd65fbcd21cf43ea4220d4c04', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=320&crop=smart&auto=webp&s=de1f815ca8dd7cb2df967d4ca3ca2fbc79bbf1ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=640&crop=smart&auto=webp&s=9e431c0b74666624a1bd0992a10196c3c4fea079', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=960&crop=smart&auto=webp&s=1856bb34a5887774727ce79ed3870431ef46c163', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=1080&crop=smart&auto=webp&s=a8ae39a6fa999681feedd3e3475c0424cc99deb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?auto=webp&s=8c2bdd754ca7ecf62f55551ff88ca9d0c891f6c9', 'width': 1200}, 'variants': {}}]}
|
EasyEdit: An Easy-to-use Knowledge Editing Framework for LLMs.
| 1 | 2023-09-26T17:44:48 |
https://github.com/zjunlp/EasyEdit
|
AutomataManifold
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16sulqm
| false | null |
t3_16sulqm
|
/r/LocalLLaMA/comments/16sulqm/easyedit_an_easytouse_knowledge_editing_framework/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'fhMHZVe6_o_Rka-ESuRRHmQkl8oug7pKcLUqNPLzvWM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=108&crop=smart&auto=webp&s=1b9506e04db4e830584e54a5ff8199e2afb9de66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=216&crop=smart&auto=webp&s=a39c934264f45d07cbf28d066910351b11159ed1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=320&crop=smart&auto=webp&s=deddba728b77bb4c051512955402f9cdc55d6cfb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=640&crop=smart&auto=webp&s=9f83becad8198d018f3ec21e7282b9d47c8b5f76', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=960&crop=smart&auto=webp&s=998860f5df8afc9ad9fcda1d0c834c818893b245', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=1080&crop=smart&auto=webp&s=f081d04e8ca316830b3936d4f57eca9785683cac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?auto=webp&s=fa292a1474ff73def3aac7a38f903da655cdd605', 'width': 1200}, 'variants': {}}]}
|
||
I have a computer that is offline and accessible via USB. Any way to drop in the LLM?
| 1 |
WSL sounded good until I gave up hand-copying .debs for make, gcc, gcc-12, and cpp.
Not sure what my options are.
| 2023-09-26T17:46:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16sunag/i_have_a_computer_that_is_offline_and_accessible/
|
pr1vacyn0eb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sunag
| false | null |
t3_16sunag
|
/r/LocalLLaMA/comments/16sunag/i_have_a_computer_that_is_offline_and_accessible/
| false | false |
self
| 1 | null |
Local LLM that will search the Internet?
| 1 |
The great value in Bing Chat and Bard, over a plain chatbot, no matter how good it is, is that they're equipped with an Internet search tool that will search and then feed the results to the AI, so that it can produce a response with up-to-date, precise information it may not have been trained on. The problem is that they're heavily censored and keyworded, and of course they're not open source or run and controlled by you.
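For context, the loop those tools implement is simple enough to sketch (rough Python; `web_search` and `llm` are placeholders for whatever search backend and local model you plug in, not a specific library):

```python
def answer_with_search(question, web_search, llm, top_k=3):
    # 1) search the web, 2) stuff the top snippets into the prompt, 3) let the model answer
    results = web_search(question)[:top_k]
    context = "\n\n".join(r["snippet"] for r in results)
    prompt = (
        "Use the web results below to answer the question.\n\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```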
Is there a free LLM I can run locally that's equipped with Internet search, so it'll browse for me and give me real results on any question?
| 2023-09-26T18:01:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16sv2sh/local_llm_that_will_search_the_internet/
|
A_for_Anonymous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sv2sh
| false | null |
t3_16sv2sh
|
/r/LocalLLaMA/comments/16sv2sh/local_llm_that_will_search_the_internet/
| false | false |
self
| 1 | null |
How to log train/val accuracy using SFT trainer?
| 1 |
Hi,
​
I'm using the SFT trainer (SFTTrainer) from HF to fine-tune a LLaMA model using PEFT, but it only gives me the loss and other performance-related metrics (like timing). How can I get the training/val accuracy? I tried to use callbacks but wasn't successful :( Could you please help me with this?
​
Here is my code:
`dataset = load_dataset(dataset_name, split="train")`
`compute_dtype = getattr(torch, bnb_4bit_compute_dtype)`
`bnb_config = BitsAndBytesConfig(`
`    load_in_4bit=use_4bit,`
`    bnb_4bit_quant_type=bnb_4bit_quant_type,`
`    bnb_4bit_compute_dtype=compute_dtype,`
`    bnb_4bit_use_double_quant=use_nested_quant,`
`)`
`model = AutoModelForCausalLM.from_pretrained(`
`    model_name, quantization_config=bnb_config, device_map=device_map`
`)`
`tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)`
`tokenizer.pad_token = tokenizer.eos_token`
`tokenizer.padding_side = "right"  # Fix weird overflow issue with fp16 training`
`peft_config = LoraConfig(`
`    lora_alpha=lora_alpha, lora_dropout=lora_dropout, r=lora_r, bias="none", task_type="CAUSAL_LM",`
`)`
`training_arguments = TrainingArguments(`
`    output_dir=output_dir, num_train_epochs=num_train_epochs,`
`    per_device_train_batch_size=per_device_train_batch_size,`
`    gradient_accumulation_steps=gradient_accumulation_steps,`
`    optim=optim, save_steps=save_steps, logging_steps=logging_steps,`
`    learning_rate=learning_rate, weight_decay=weight_decay, fp16=fp16, bf16=bf16,`
`    max_grad_norm=max_grad_norm, max_steps=max_steps, warmup_ratio=warmup_ratio,`
`    group_by_length=group_by_length, lr_scheduler_type=lr_scheduler_type,`
`    report_to="tensorboard",`
`)`
`trainer = SFTTrainer(`
`    model=model, train_dataset=dataset, peft_config=peft_config,`
`    dataset_text_field="text", max_seq_length=max_seq_length,`
`    tokenizer=tokenizer, args=training_arguments, packing=packing,`
`)`
`train_result = trainer.train()`
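For concreteness, the kind of hook I'm after would look roughly like this (a sketch I haven't gotten working; `eval_dataset` and the metric name are mine, and it assumes SFTTrainer forwards `compute_metrics` / `preprocess_logits_for_metrics` to the underlying `Trainer`):

```python
def preprocess_logits_for_metrics(logits, labels):
    # keep only the argmax so we don't hold full vocab-sized logits in memory
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    # shift so each position is scored on predicting the *next* token,
    # and ignore the -100 padding labels
    preds, labels = preds[:, :-1], labels[:, 1:]
    mask = labels != -100
    acc = (preds[mask] == labels[mask]).mean()
    return {"token_accuracy": float(acc)}

trainer = SFTTrainer(
    model=model, train_dataset=dataset, eval_dataset=eval_dataset,
    peft_config=peft_config, dataset_text_field="text", max_seq_length=max_seq_length,
    tokenizer=tokenizer, args=training_arguments, packing=packing,
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
```

My understanding is it also needs an eval split plus `evaluation_strategy="steps"` and `eval_steps` in `TrainingArguments` before anything shows up in the logs.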
​
Thank you!
| 2023-09-26T18:12:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16svcpz/how_to_log_trainval_accuracy_using_sft_trainer/
|
Impossible-Froyo3412
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16svcpz
| false | null |
t3_16svcpz
|
/r/LocalLLaMA/comments/16svcpz/how_to_log_trainval_accuracy_using_sft_trainer/
| false | false |
self
| 1 | null |
Qwen is aligned just as you would expect
| 1 |
Or do you think it really doesn't know?
https://preview.redd.it/m880q6f6anqb1.png?width=1227&format=png&auto=webp&s=f28190b6fd61e312f6343b43220cf8140d8415f4
| 2023-09-26T18:41:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16sw4na/qwen_is_aligned_just_as_you_would_expect/
|
random-string
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sw4na
| false | null |
t3_16sw4na
|
/r/LocalLLaMA/comments/16sw4na/qwen_is_aligned_just_as_you_would_expect/
| false | false | 1 | null |
|
How is MMLU calculated?
| 1 |
I saw that exllamav2 had an MMLU calculation and wanted to know how the official MMLU is calculated. It seems like I'm supposed to go through all ~50 categories, calculate performance for each category, and then average the results?
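If that's the right reading, the official score would just be the mean of the per-category accuracies, i.e. something like this (rough sketch; the numbers are made up):

```python
# results: category -> (num_correct, num_total); entries are made up
results = {
    "abstract_algebra": (31, 100),
    "anatomy": (54, 135),
    # ... one entry per MMLU category ...
}
macro = sum(c / t for c, t in results.values()) / len(results)  # average of per-category accuracy
micro = sum(c for c, _ in results.values()) / sum(t for _, t in results.values())  # pooled over all questions
print(f"macro: {macro:.4f}, micro: {micro:.4f}")
```

The all-14k numbers below are the "micro" version, which is part of why I'm not sure they match the official protocol.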
Initially, I had left out 3 problems and just calculated the MMLU for all 14k problems. Here are the results for the 7B base llama2 model for anyone interested
| Model | GB | MMLU |
| --- | --- | --- |
| 2.5bpw | 2.34 | 0.2689 |
| 3.0bpw | 2.74 | 0.3517 |
| 3.5bpw | 3.13 | 0.4045 |
| 4.0bpw | 3.53 | 0.4165 |
| 4.7bpw | 4.04 | 0.4178 |
| 5.0bpw | 4.32 | 0.4388 |
| 6.0bpw | 5.11 | 0.4344 |
| 7.0bpw | 5.89 | 0.4374 |
| 8.0bpw | 6.69 | 0.4354 |
| GPTQ | 3.81 | 0.4113 |
| 2023-09-26T18:54:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16swh0w/how_is_mmlu_calculated/
|
a_slay_nub
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16swh0w
| false | null |
t3_16swh0w
|
/r/LocalLLaMA/comments/16swh0w/how_is_mmlu_calculated/
| false | false |
self
| 1 | null |
Is it possible to replicate the speech and/or jokes of somebody using a finetuned LLM?
| 1 |
I am still relatively new to all this, but there's been something that I've been thinking about for a while.
Let's say I saved multiple transcripts from YouTube videos of one person. Would I be able to use those transcripts as a dataset to finetune an LLM to replicate this person's speech, mannerisms, and humor?
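The rough shape I'm imagining is just chunks of transcript turned into instruction/response pairs, something like this toy sketch (Alpaca-style field names; contents are placeholders):

```python
# Each example pairs a prompt/scenario with a verbatim chunk of the person's transcript
examples = [
    {
        "instruction": "Tell the story about your worst cooking disaster.",
        "output": "<transcript chunk where they actually tell that story>",
    },
    {
        "instruction": "React to a viewer asking about your favorite game.",
        "output": "<another transcript chunk, in their own words>",
    },
]
```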
Apologies if this is a noob question.
| 2023-09-26T19:17:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16sx2m4/is_it_possible_to_replicate_the_speech_andor/
|
mohalibou
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sx2m4
| false | null |
t3_16sx2m4
|
/r/LocalLLaMA/comments/16sx2m4/is_it_possible_to_replicate_the_speech_andor/
| false | false |
self
| 1 | null |
How does The Bloke quantize models?
| 1 |
[removed]
| 2023-09-26T19:33:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16sxhqq/how_does_the_bloke_quantize_models/
|
Ender436
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sxhqq
| false | null |
t3_16sxhqq
|
/r/LocalLLaMA/comments/16sxhqq/how_does_the_bloke_quantize_models/
| false | false |
self
| 1 | null |
Add new knowledge to LLama2
| 1 |
I am planning on building my personal assistant based on the Llama 2 model. I want the model to know about my personal information, like hobbies, family, and close friends. So I created a dolly-format dataset full of conversations between me and the model. The result I got is that the model only correctly generates about 60% of my personal information; the rest is made up. However, the model does talk the way I wrote it in the dataset. I also heard of something called cognitive architecture with LangChain, but I'm not sure if that is the right solution.
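For reference, by "dolly-format" I mean records shaped roughly like this (a toy example; the real dataset is my own conversations):

```python
{
    "instruction": "What are my main hobbies?",
    "context": "",
    "response": "You spend most weekends bouldering and tinkering with home automation.",
    "category": "open_qa",
}
```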
Which route should I go with?
Thanks all
| 2023-09-26T19:34:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16sxiop/add_new_knowledge_to_llama2/
|
Capital_Birthday_654
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sxiop
| false | null |
t3_16sxiop
|
/r/LocalLLaMA/comments/16sxiop/add_new_knowledge_to_llama2/
| false | false |
self
| 1 | null |
Lamini has been using AMD GPUs in production for the past year
| 1 |
Sharon Zhou just shared this funny [revelation](https://twitter.com/realSharonZhou/status/1706708589837254946?t=UlkwBHI72r_1dZ-cw0blJA&s=19).
Seems like AMD is actually a thing for LLMs in production.
| 2023-09-26T19:55:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16sy3ic/lamini_has_been_using_amd_gpu_in_production_for/
|
No_Palpitation7740
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sy3ic
| false | null |
t3_16sy3ic
|
/r/LocalLLaMA/comments/16sy3ic/lamini_has_been_using_amd_gpu_in_production_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'I8vWJynmK66Chqg7a2jmStUeQiA5R0a6wrF-CkP2_n8', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=108&crop=smart&auto=webp&s=f6152f999a92467a8e1c4b691fc9c2e1a8184b23', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=216&crop=smart&auto=webp&s=37b84c15e56c5de5c42c066648ec02062347524c', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=320&crop=smart&auto=webp&s=7f0c522fb9c16bff83fe608e677d4d1e13940307', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=640&crop=smart&auto=webp&s=4eab2a74a749392126a89f244ff85acbe21c8d04', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=960&crop=smart&auto=webp&s=12879755f63c7b075cc3b05eda7ff2c19bc92707', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=1080&crop=smart&auto=webp&s=988bd0646f11bf265c4e0cbc970c24dc84dd675b', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?auto=webp&s=f7332092f38385c2e84fcb122e6851fdcab1026b', 'width': 1080}, 'variants': {}}]}
|
I plan to run LLaMA in the browser, what vectordb should I use?
| 1 |
Hey! So I want to create a website where I'll run LLaMA, but I am not sure which DB I want to use to store the results and all the data.
Ideally it should be a tool that can store data without a backend, if that's even possible, lol, and let me search through it semantically.
I am not really familiar with all the backend and DB infrastructure. I need the simplest solution, one I can implement in less than a day.
The frontend is done primarily with TypeScript + Next.js.
| 2023-09-26T20:39:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16sza4f/i_plan_to_run_llama_on_the_browser_what_vectordb/
|
Robert-treboR
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16sza4f
| false | null |
t3_16sza4f
|
/r/LocalLLaMA/comments/16sza4f/i_plan_to_run_llama_on_the_browser_what_vectordb/
| false | false |
self
| 1 | null |
Comparing the Parallel AI Universes - Smallville vs AI Town
| 1 | 2023-09-26T20:57:11 |
https://querystar.io/blog/ai-town
|
DearWave
|
querystar.io
| 1970-01-01T00:00:00 | 0 |
{}
|
16szr9j
| false | null |
t3_16szr9j
|
/r/LocalLLaMA/comments/16szr9j/comparing_the_parallel_ai_universes_smallville_vs/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '2LCPTOV_XM-6fq4aK28guSZ6TqS-ezgq9sv8E-5bK9w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=108&crop=smart&auto=webp&s=d6bfb8985581038455ece432af5767aecfc62f78', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=216&crop=smart&auto=webp&s=b69c045dd67c6db8b6a1228048469c2fd5995d80', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=320&crop=smart&auto=webp&s=068e368993852d4e48d75996b91e01d0f4ae92fc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=640&crop=smart&auto=webp&s=47f3d809d40a46074b331892d7b05c23b4827d38', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=960&crop=smart&auto=webp&s=6178ad2173213dbb867cfc3a21126e6e5d35fcca', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=1080&crop=smart&auto=webp&s=375ac67d2c7f0ee4fc534b8e0644424a9425875d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?auto=webp&s=6c42c3389fca5b1b1d725559e3d52e8c67750d57', 'width': 1200}, 'variants': {}}]}
|
||
Best model to run locally on an Android phone?
| 1 | 2023-09-26T21:27:17 |
Ok-Recognition-3177
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16t0lbw
| false | null |
t3_16t0lbw
|
/r/LocalLLaMA/comments/16t0lbw/best_model_to_run_locally_on_an_android_phone/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '5LPNcYRwSIJ5sFAIhHUF9q0T84RqttMxOPYzzoLiSfU', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?width=108&crop=smart&auto=webp&s=2d6b6ff43a98322a7fabbe0bd6cae840e70afbb8', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?width=216&crop=smart&auto=webp&s=aee09239d50d4dfaab79c6a35d18e25f04e73678', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?width=320&crop=smart&auto=webp&s=2d9959b1366997d635fa2065a96cee8c71c9e5f5', 'width': 320}], 'source': {'height': 521, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?auto=webp&s=49e3c8f9250fe90e368c017b7e83e51c09d60ddb', 'width': 604}, 'variants': {}}]}
|
|||
What’s the best LLM for private Transcription of Audio files?
| 1 |
[removed]
| 2023-09-26T21:35:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16t0t5l/whats_the_best_llm_for_private_transcription_of/
|
NewportNerds
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t0t5l
| false | null |
t3_16t0t5l
|
/r/LocalLLaMA/comments/16t0t5l/whats_the_best_llm_for_private_transcription_of/
| false | false |
self
| 1 | null |
Experimenting with Humanizing ChatGPT text using LLaMA.
| 1 | 2023-09-26T22:20:29 |
FPham
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16t22yb
| false | null |
t3_16t22yb
|
/r/LocalLLaMA/comments/16t22yb/experimenting_with_humanizing_chatgpt_text_using/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'G3n-uYTnUJPSEvoLtZqFes63VS5lXLuxBc-gDUnBLOs', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=108&crop=smart&auto=webp&s=7c46c851fc8b6882b569522852b7592c1c1518b1', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=216&crop=smart&auto=webp&s=9726ac675c710483f77811d829c57775e287826f', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=320&crop=smart&auto=webp&s=f63f17357e3513893687a49145442472102bc19f', 'width': 320}, {'height': 292, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=640&crop=smart&auto=webp&s=1600bbed651a241a510a79b8ff7c4bdc400fd8da', 'width': 640}], 'source': {'height': 428, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?auto=webp&s=32209d654571e64a84d731f9d730c16391f25a67', 'width': 935}, 'variants': {}}]}
|
|||
Need Feedback on AI Agent Framework
| 1 |
[removed]
| 2023-09-26T22:24:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16t2718/need_feedback_on_ai_agent_framework/
|
libraryofcelsus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16t2718
| false | null |
t3_16t2718
|
/r/LocalLLaMA/comments/16t2718/need_feedback_on_ai_agent_framework/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'C9eyCZK6YrjCI-CrNHdBphDEtLudNTehIWN5PDPU2mg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=108&crop=smart&auto=webp&s=c38df96bf8dc53f1867f1673171a6f0c8fb14a99', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=216&crop=smart&auto=webp&s=7c7084f46bdf404b761af1921ccec28a8ab306c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=320&crop=smart&auto=webp&s=e4a72a1b425d67d33934f73e39df798f8a238bab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=640&crop=smart&auto=webp&s=ef0529b74225a9165387ce613577bff4b1d94ef3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=960&crop=smart&auto=webp&s=551e0678f263dc479a35145f0e7e74e0f3c88798', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=1080&crop=smart&auto=webp&s=1458714137cc271cb2f89b41c66ffebf1e745804', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?auto=webp&s=39ca8d9cab610d50a8507f77416b938cc0a1d57d', 'width': 1200}, 'variants': {}}]}
|