title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878) | author (string, 3-20) | domain (string, 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646-1.8k, nullable) | name (string, 10) | permalink (string, 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213) | ups (int64, 0-8.54k) | preview (string, 301-5.01k, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Seeking an NLP Team Lead (NSFW dialogue systems)
| 1 |
Discovered this community earlier in the year via Eric Hartford and have been following it intensely ever since. It's sometimes hard to get work done, since new toys are released every week.
Our team @ athos dot com is looking for someone who might be interested in NSFW conversational AI (adult) in a business that is already profitable.
This person would be responsible for building the dialogue engine for our chatbot and managing the team behind it. You will lead and have ownership over one of the most critical departments of our business. It requires deep knowledge and curiosity concerning NLP/conversational AI and solving open-ended problems with no pre-defined solution. This position offers the exciting opportunity to work with the absolute latest advancements in LLMs.
Drop a message if you'd like the details for the role. Cheers!
| 2023-08-23T20:23:41 |
https://www.reddit.com/r/LocalLLaMA/comments/15zfibo/seeking_an_nlp_team_lead_nsfw_dialogue_systems/
|
AnonymousLurker91
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zfibo
| false | null |
t3_15zfibo
|
/r/LocalLLaMA/comments/15zfibo/seeking_an_nlp_team_lead_nsfw_dialogue_systems/
| false | false |
nsfw
| 1 | null |
Full training instead of LoRA: less complex for learning?
| 1 |
I read how complicated it is to prepare company-internal data for fine-tuning an LLM to know its content: creating thousands of QA pairs from the raw text data with the help of other LLMs to get a good LoRA dataset.
So I wondered: wouldn't it be less complex to just rent the large amount of RAM and VRAM needed for a full training run and throw the company data into the LLM as pure text, instead of going through the complex data-preparation steps for fine-tuning?
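If raw-text training is the route, the data prep really can be as simple as slicing the corpus into fixed-length blocks. A minimal sketch of that idea (the block size, overlap, and character-level slicing are illustrative choices, not from any particular recipe):

```python
def chunk_corpus(text: str, block_size: int = 2048, overlap: int = 128) -> list[str]:
    """Slice raw text into overlapping fixed-size blocks.
    Character-based for simplicity; a real pipeline would slice by tokens.
    The overlap preserves context that would otherwise be cut at boundaries."""
    if overlap >= block_size:
        raise ValueError("overlap must be smaller than block_size")
    step = block_size - overlap
    return [text[i:i + block_size] for i in range(0, max(len(text) - overlap, 1), step)]

corpus = "Internal policy document. " * 500   # stand-in for company data
blocks = chunk_corpus(corpus, block_size=1024, overlap=64)
print(len(blocks), len(blocks[0]))  # 14 1024
```

That said, continued pretraining on raw text tends to teach content less reliably than instruction-style QA pairs, which is part of why the QA route is so common despite the effort.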
| 2023-08-23T20:38:40 |
https://www.reddit.com/r/LocalLLaMA/comments/15zfxm8/full_training_instead_of_lora_for_learning_less/
|
Koliham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zfxm8
| false | null |
t3_15zfxm8
|
/r/LocalLLaMA/comments/15zfxm8/full_training_instead_of_lora_for_learning_less/
| false | false |
self
| 1 | null |
Samantha 1.11 70b
| 1 |
I am announcing Samantha 1.11, trained with qLoRA and Axolotl for 15 epochs using 4x A100 80gb.
[https://huggingface.co/ehartford/Samantha-1.11-70b](https://huggingface.co/ehartford/Samantha-1.11-70b)
[https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha)
She's wicked smart, fun, and scored very well on the leaderboard!
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She will not engage in roleplay, romance, or sexual activity.
She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
This Samantha was trained for 15 epochs and is significantly smarter. She took 24 hours on 4x A100 80gb using [**axolotl**](https://github.com/OpenAccess-AI-Collective/axolotl), [**qLoRA**](https://arxiv.org/abs/2305.14314), [**deepspeed zero2**](https://www.deepspeed.ai/tutorials/zero/#zero-overview), and [**flash attention 2**](https://arxiv.org/abs/2205.14135).
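For scale, the figures in the post pencil out to roughly one conversation per second across the four GPUs. A back-of-envelope check (assuming every one of the 6,000 conversations is seen once per epoch):

```python
conversations = 6_000
epochs = 15
wall_clock_hours = 24

samples_seen = conversations * epochs                        # 90,000
samples_per_second = samples_seen / (wall_clock_hours * 3600)
print(f"{samples_per_second:.2f} samples/s")                 # ~1.04 across 4x A100
```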
Samwit used Samantha's data to fine-tune ChatGPT in his excellent video
[https://youtu.be/MkocIPcg5A8](https://youtu.be/MkocIPcg5A8)
I will release 7b and 13b by tomorrow, given the success she's achieved.
| 2023-08-23T20:45:54 |
https://www.reddit.com/r/LocalLLaMA/comments/15zg504/samantha_111_70b/
|
faldore
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zg504
| false | null |
t3_15zg504
|
/r/LocalLLaMA/comments/15zg504/samantha_111_70b/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-fFBFCFHs2e4ZAE9TBQElfI2oB4JT3fhC15fzKjdfcM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=108&crop=smart&auto=webp&s=6c32d8c9010156a63c1c8f176e569ced352002fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=216&crop=smart&auto=webp&s=c1ae30fbac34b435a3e65056b3258a806e909295', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=320&crop=smart&auto=webp&s=168ba44a40ba184caf654a5529ff7d3311a87f7d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=640&crop=smart&auto=webp&s=737282e1ab7b65302177768e58108f687419f7ba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=960&crop=smart&auto=webp&s=c3bf5a3e64a6aa2889be91b1f5742382fb8fe4c4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=1080&crop=smart&auto=webp&s=65024cbda4afd64af74eddca72150c586e20e12f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?auto=webp&s=fb602f5f1762b350696ef5ca83390ec0af8d7566', 'width': 1200}, 'variants': {}}]}
|
Llama 2 (chat) is about as factually accurate as GPT-4 for summaries and is 30X cheaper | Anyscale
| 1 | 2023-08-23T21:05:33 |
https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper
|
ambient_temp_xeno
|
anyscale.com
| 1970-01-01T00:00:00 | 0 |
{}
|
15zgo8y
| false | null |
t3_15zgo8y
|
/r/LocalLLaMA/comments/15zgo8y/llama_2_chat_is_about_as_factually_accurate_as/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'fjkINwtMvRs_V90KruwIZ3rqIZC2fqyzrx58_R1X3U0', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=108&crop=smart&auto=webp&s=f202f15a1f34fac132fec60b20720ac0c86ffb5d', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=216&crop=smart&auto=webp&s=2dba8e6f80f6077e85b728cb3a36f2044c26198b', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=320&crop=smart&auto=webp&s=6d1f67323db0dae05482293d7a70e386ac359edd', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=640&crop=smart&auto=webp&s=c6829fbaea2200017424deaa2841010ed09aa6c8', 'width': 640}, {'height': 475, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=960&crop=smart&auto=webp&s=31a8ab4969293d26561eca7b7d8ec0fc32417016', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=1080&crop=smart&auto=webp&s=7634265ee11088bf0df873094ad764de553a2bf5', 'width': 1080}], 'source': {'height': 924, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?auto=webp&s=e42811a15d83a478f132ab77a85df41ff5b96094', 'width': 1864}, 'variants': {}}]}
|
||
Falcon support merged into llama.cpp
| 1 | 2023-08-23T21:39:44 |
https://github.com/ggerganov/llama.cpp/pull/2717
|
Someone13574
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
15zhlyx
| false | null |
t3_15zhlyx
|
/r/LocalLLaMA/comments/15zhlyx/falcon_support_merged_into_llamacpp/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'BxHnVxFVhXkIcRqeiF0vbrE5UnviFyBqmyRZRsCwszc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=108&crop=smart&auto=webp&s=bd55410c7bcb507c4478f996e7569bf53099872b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=216&crop=smart&auto=webp&s=3d288ec1980a97620fee7d7b3acdfbb3df7396a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=320&crop=smart&auto=webp&s=fd69f01abfb7fe21ce935f7c3daec441625f1fd9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=640&crop=smart&auto=webp&s=6952a5b8577f6d2a855b5c109ab20662a3089cb0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=960&crop=smart&auto=webp&s=e3d49cdc74b416014e541ce3135194d98e9fed9a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=1080&crop=smart&auto=webp&s=4ff204c9963259e755ec541f315c9fb019364062', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?auto=webp&s=fef915ea1bc44d7d67e04fb342c6006c0f3e7d14', 'width': 1200}, 'variants': {}}]}
|
||
llm-tracker (by Leonard Lin), a list of local LLM resources
| 1 | 2023-08-23T21:58:31 |
https://llm-tracker.info/
|
NelsonMinar
|
llm-tracker.info
| 1970-01-01T00:00:00 | 0 |
{}
|
15zi4gd
| false | null |
t3_15zi4gd
|
/r/LocalLLaMA/comments/15zi4gd/llmtracker_by_leonard_lin_a_list_of_local_llm/
| false | false |
default
| 1 | null |
|
32GB vs 64GB vs 96GB M2Max?
| 1 |
I have to buy a MacBook for iOS dev, and I've been curious to try local LLMs.
I am trying to figure out which spec to buy.
Which local LLMs can I run on a 32GB vs 64GB vs 96GB RAM MacBook Pro?
Also, how big is the difference between the M2 Pro and the M2 Max?
An M2 Pro with 32GB would probably suffice for dev work, maybe an M2 Max. So I am trying to figure out if it's worth dropping extra $$$ for the LLM hobby.
Thank you!
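As a rough yardstick for what fits in unified memory, a 4-bit GGML quant needs on the order of 4.5 bits per parameter, plus several GB for context and the OS. A hedged rule-of-thumb sketch (the 4.5 bits/weight figure is an approximation, not an exact number for any specific quant format):

```python
def approx_q4_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Very rough memory footprint of a ~4-bit quantized model, in GB.
    bits_per_weight includes quantization overhead and is approximate."""
    return params_billion * bits_per_weight / 8

for p in (7, 13, 33, 70):
    print(f"{p}B -> ~{approx_q4_gb(p):.0f} GB")
```

By that estimate a 70B q4 model wants roughly 39 GB, so 32GB tops out around 33B, 64GB comfortably fits a 70B q4, and 96GB buys headroom for higher-bit 70B quants and longer contexts.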
| 2023-08-23T22:45:31 |
https://www.reddit.com/r/LocalLLaMA/comments/15zjgj7/32gb_vs_64gb_vs_96gb_m2max/
|
Infinite100p
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zjgj7
| false | null |
t3_15zjgj7
|
/r/LocalLLaMA/comments/15zjgj7/32gb_vs_64gb_vs_96gb_m2max/
| false | false |
self
| 1 | null |
Hardware needed for LLaMa 2 13b for 100 daily users or a campus of 800 students.
| 1 |
**What is your dream LLaMA hardware setup if you had to service 800 people accessing it sporadically throughout the day?**
I currently have a LLaMA instance set up on a 3090, but I'm looking to scale it up to a use case of 100+ users. Having the hardware run on site instead of in the cloud is required.
Looking to either cannibalize several 3090 gaming PCs or do a full new build, but the use case would be an entire campus. Price is not a concern for now.
After browsing through a lot of other threads, it appears that I will max out at 2x 3090 per system with a standard gaming PC setup. But I can't figure out how to anticipate response times/backlog/queueing if I start throwing 100+ users at it at once.
1. Would you switch to A100s and a Xeon server rack instead of gaming PCs with 2 or 3 3090s?
1. Would we need to build multiple 2x 3090 computers to scale to that user load?
1. What is your dream LLaMA hardware setup for 100+ users?
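One way to get a first-order answer to the backlog question is Little's-law-style arithmetic: sporadic use means the sustained request rate is far below the headline user count. A sketch where every number is an assumption to replace with your own measurements (the 40 tok/s figure is a placeholder, not a benchmark):

```python
def servers_needed(users: int, requests_per_user_per_day: float,
                   tokens_per_request: int, tokens_per_second_per_server: float,
                   peak_factor: float = 3.0) -> float:
    """First-order capacity estimate: average token demand vs. one server's
    throughput, inflated by a peak-hour factor. Ignores queueing variance."""
    avg_tokens_per_s = users * requests_per_user_per_day * tokens_per_request / 86_400
    return peak_factor * avg_tokens_per_s / tokens_per_second_per_server

# e.g. 800 students, 5 requests/day, 400 tokens each, a 2x3090 box at ~40 tok/s
print(servers_needed(800, 5, 400, 40))
```

With those (made-up) inputs the answer lands between one and two boxes; the real driver is peak-hour concurrency, which is worth measuring on the existing 3090 before buying anything.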
| 2023-08-23T22:50:01 |
https://www.reddit.com/r/LocalLLaMA/comments/15zjktb/hardware_needed_for_llama_2_13b_for_100_daily/
|
hawaiian0n
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zjktb
| false | null |
t3_15zjktb
|
/r/LocalLLaMA/comments/15zjktb/hardware_needed_for_llama_2_13b_for_100_daily/
| false | false |
self
| 1 | null |
Working on a QLoRA hub for model personalities, help needed
| 1 |
Hey all!
I'm building a repository of QLoRA adapters that change the model's personality. The end vision is a hub of ready-to-go personality adapters.
I'm hitting a snag when training the QLoRA for a Paul Graham persona on top of a 4-bit quantized StableBeluga-7B. The model just doesn't seem to pick up the style.
Any thoughts on how I can improve this? Below are the details:
Data
* 3340 examples of PG passages, formatted as `{"text": "### User:\n{generic instruction}\n\n### Assistant:\n{PG-style response}"}`.
* Each example is about 5 sentences taken from one of PG's essays.
Training
* optim="paged\_adamw\_8bit"
* learning\_rate=2e-4
* per\_device\_train\_batch\_size=4
* gradient\_accumulation\_steps=4
* num\_train\_epochs=4
* fp16=True
* group\_by\_length=True
* load\_best\_model\_at\_end=True
* max\_seq\_length=512
Hardware
* x1 V100 through Google Colab Pro.
My minimum eval loss so far is 1.916546. I'm pretty stuck and would appreciate any help!
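For reference, the hyperparameters above imply a fairly small number of optimizer steps, which could be one factor in the style not sticking (speculation without seeing the adapter config):

```python
examples = 3340
per_device_batch = 4
grad_accum = 4
epochs = 4

effective_batch = per_device_batch * grad_accum   # 16 sequences per optimizer step
steps_per_epoch = examples // effective_batch     # 208
total_steps = steps_per_epoch * epochs            # 832 updates over the whole run
print(effective_batch, steps_per_epoch, total_steps)
```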
| 2023-08-23T23:39:08 |
https://www.reddit.com/r/LocalLLaMA/comments/15zkuvi/working_on_a_qlora_hub_for_model_personalities/
|
Lang2lang
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zkuvi
| false | null |
t3_15zkuvi
|
/r/LocalLLaMA/comments/15zkuvi/working_on_a_qlora_hub_for_model_personalities/
| false | false |
self
| 1 | null |
MediaTek Leverages Meta’s Llama 2 to Enhance On-Device Generative AI
| 1 |
MediaTek expects Llama 2-based AI applications to become available for smartphones powered by the next-generation flagship SoC, scheduled to hit the market by the end of the year.
MediaTek’s next-generation flagship chipset, to be introduced later this year, will feature a software stack optimized to run Llama 2, as well as an upgraded APU with Transformer backbone acceleration, reduced footprint access and use of DRAM bandwidth, further enhancing LLM and AIGC performance.
“Through our partnership with Meta, we can deliver hardware and software with far more capability in the edge than ever before.”
| 2023-08-24T00:02:51 |
https://corp.mediatek.com/news-events/press-releases/mediatek-leverages-metas-llama-2-to-enhance-on-device-generative-ai-in-edge-devices
|
noiseinvacuum
|
corp.mediatek.com
| 1970-01-01T00:00:00 | 0 |
{}
|
15zlg22
| false | null |
t3_15zlg22
|
/r/LocalLLaMA/comments/15zlg22/mediatek_leverages_metas_llama_2_to_enhance/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '03qpAnVYOhoa-1lKzkFrHNHfok3HZmDu2UaqAILoSiA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=108&crop=smart&auto=webp&s=fcbf7bab4877b6718e29d92067aec565bc4cc118', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=216&crop=smart&auto=webp&s=c8663ed73ccedd7972d7736e5a4ee50df570d5f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=320&crop=smart&auto=webp&s=bfaaab64a42ee1fa55a91bbbc2a322cbffc9ad3b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=640&crop=smart&auto=webp&s=8e0cc700ad21918d3f20913f4e9b55b934823698', 'width': 640}, {'height': 517, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=960&crop=smart&auto=webp&s=6f65445a0c6b1f3a7b6c6d973c82d3953ca2c91c', 'width': 960}], 'source': {'height': 552, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?auto=webp&s=0056e4ee31b45e1b9795edbc4b7ba8108b0559e6', 'width': 1024}, 'variants': {}}]}
|
|
Looking For Feedback — GGML Downloader/Runner
| 1 |
[removed]
| 2023-08-24T00:04:00 |
https://www.reddit.com/r/LocalLLaMA/comments/15zlh52/looking_for_feedback_ggml_downloaderrunner/
|
jmerz_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zlh52
| false | null |
t3_15zlh52
|
/r/LocalLLaMA/comments/15zlh52/looking_for_feedback_ggml_downloaderrunner/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'fLJsNbUriWtrLRQhoHIe3z2UwP064nGIwlvKaGHLpHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=108&crop=smart&auto=webp&s=53292720f73e45b03e9836c4b8c233af7244bce5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=216&crop=smart&auto=webp&s=5d64b834a79f101baf9ba5131bd442465412fdcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=320&crop=smart&auto=webp&s=02addacc985c5985c6550cad190f1d0750a96e73', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=640&crop=smart&auto=webp&s=f111f18b06bbe11d601c4f6e8b4109d2e9324b1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=960&crop=smart&auto=webp&s=4d3e8b1ff7429a2d21c4d472d25909961bec3007', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=1080&crop=smart&auto=webp&s=f45ff6774cf08dfc2083866a243fdc5a635516c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?auto=webp&s=14dd4fb61d37ca0e92e13cc74b77701586dde2a8', 'width': 1200}, 'variants': {}}]}
|
WMI Provider Host CPU usage. Normal or not?
| 1 |
[removed]
| 2023-08-24T00:07:52 |
https://www.reddit.com/r/LocalLLaMA/comments/15zlklv/wmi_provider_host_cpu_usage_normal_or_not/
|
Natural-Sentence-601
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zlklv
| false | null |
t3_15zlklv
|
/r/LocalLLaMA/comments/15zlklv/wmi_provider_host_cpu_usage_normal_or_not/
| false | false | 1 | null |
|
Help needed -- traceback errors upon loading TheBloke_Chronos-Beluga-v2-13B-GPTQ
| 1 |
Hi everyone. Any assistance is much appreciated. I'm just trying to load this into textgen/oobabooga and I get the following errors:
>Traceback (most recent call last):
>
>File “/home/radiosilence/ai/text-generation-webui/modules/ui\_model\_menu.py”, line 185, in load\_model\_wrapper
>
>shared.model, shared.tokenizer = load\_model(shared.model\_name, loader)
>
>File “/home/radiosilence/ai/text-generation-webui/modules/models.py”, line 79, in load\_model
>
>output = load\_func\_map\[loader\](model\_name)
>
>File “/home/radiosilence/ai/text-generation-webui/modules/models.py”, line 309, in AutoGPTQ\_loader
>
>import modules.AutoGPTQ\_loader
>
>File “/home/radiosilence/ai/text-generation-webui/modules/AutoGPTQ\_loader.py”, line 3, in
>
>from auto\_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
>
>ModuleNotFoundError: No module named ‘auto\_gptq’
I haven't had any luck so far. I'm trying this from WSL on an i7/3900 with 128GB of RAM.
Any help much appreciated!
| 2023-08-24T00:24:22 |
https://www.reddit.com/r/LocalLLaMA/comments/15zlyz2/help_needed_traceback_errors_upon_loading/
|
drycounty
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zlyz2
| false | null |
t3_15zlyz2
|
/r/LocalLLaMA/comments/15zlyz2/help_needed_traceback_errors_upon_loading/
| false | false |
self
| 1 | null |
Hacking away at GPT-2
| 1 |
Hello, I'd like to train and reproduce GPT-2 using karpathy's nanoGPT, but in the notes he mentions that I might need at least 8x A100 40GB. At the moment I only have 4x A100 (80GB). Do you think it's possible for me to reproduce the results?
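On paper the swap looks workable: total VRAM is identical and only aggregate compute halves, so a first guess is roughly double the wall-clock time (real scaling won't be perfectly linear, and per-GPU batch sizes may need retuning):

```python
reference = {"gpus": 8, "mem_gb": 40}   # the suggested nanoGPT setup
available = {"gpus": 4, "mem_gb": 80}   # the setup in the post

ref_mem = reference["gpus"] * reference["mem_gb"]    # 320 GB total
avail_mem = available["gpus"] * available["mem_gb"]  # 320 GB total
slowdown = reference["gpus"] / available["gpus"]     # ~2x longer, roughly

print(ref_mem == avail_mem, slowdown)
```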
| 2023-08-24T01:35:47 |
https://www.reddit.com/r/LocalLLaMA/comments/15znmxc/hacking_away_at_gpt2/
|
Alive-Age-3034
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15znmxc
| false | null |
t3_15znmxc
|
/r/LocalLLaMA/comments/15znmxc/hacking_away_at_gpt2/
| false | false |
self
| 1 | null |
Is there a way to use a quantized Falcon 40B with SillyTavern (on Apple Silicon)
| 2 |
I'd like to try [https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML) with SillyTavern (running on Apple Silicon). The only way I've found to run Falcon 40B quantized on Apple Silicon is with [https://github.com/cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp) but I haven't figured out any way to get SillyTavern to use that as a local model. Does anyone know of a way to get this working?
| 2023-08-24T02:28:57 |
https://www.reddit.com/r/LocalLLaMA/comments/15zouu5/is_there_a_way_to_use_a_quantized_falcon_40b_with/
|
Next-Comfortable-408
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zouu5
| false | null |
t3_15zouu5
|
/r/LocalLLaMA/comments/15zouu5/is_there_a_way_to_use_a_quantized_falcon_40b_with/
| false | false |
self
| 2 | null |
Any way to do batch inferencing?
| 1 |
I'm using the exllama\_hf loader for GPTQ models in ooba via the API. I want to run a large number of prompts through the models. At present, I can load models and send prompts, but this has to be done one by one for each prompt. When the models are small (13B GPTQ models), the GPU fluctuates at around 60-75% usage with the CPU at \~65% while running through the prompts via the API. Is there a way to speed this up? Send prompts in batches?
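Until the backend batches natively, one client-side workaround is to keep several requests in flight with a thread pool. A sketch where `generate()` is a placeholder standing in for a POST to the ooba API (the function body and worker count are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    # Placeholder: in practice this would POST the prompt to the
    # text-generation-webui API and return the completion text.
    return f"completion for: {prompt}"

def run_batch(prompts: list[str], workers: int = 4) -> list[str]:
    """Issue prompts concurrently; pool.map returns results in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(generate, prompts))

results = run_batch([f"prompt {i}" for i in range(8)])
print(len(results))  # 8
```

This only helps if the server can actually serve requests in parallel; with a single exllama worker the in-flight requests will just queue behind each other.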
| 2023-08-24T02:58:14 |
https://www.reddit.com/r/LocalLLaMA/comments/15zphku/any_way_to_do_batch_inferencing/
|
hedonihilistic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zphku
| false | null |
t3_15zphku
|
/r/LocalLLaMA/comments/15zphku/any_way_to_do_batch_inferencing/
| false | false |
self
| 1 | null |
Nous-Hermes hallucinating about math?
| 1 |
[removed]
| 2023-08-24T03:31:44 |
https://www.reddit.com/r/LocalLLaMA/comments/15zq7bb/noushermes_halucinating_about_math/
|
Natural-Sentence-601
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zq7bb
| false | null |
t3_15zq7bb
|
/r/LocalLLaMA/comments/15zq7bb/noushermes_halucinating_about_math/
| false | false |
self
| 1 | null |
Automated chatbot evaluation using Llama 2 (not GPT-4)
| 1 |
Not sure if many people here care about this since it's not a new model. There are so many models out there already, and it's hard to keep track of which ones are good. I would love to check out all these daily model releases claiming to be the next best thing myself, but that's simply not possible. The people at LMSYS proposed a method to approximate human judgment: they collected a good amount of human evaluation data and found that GPT-4 agrees well, “achieving over 80% agreement, the same level of agreement between humans” ([https://arxiv.org/pdf/2306.05685.pdf](https://arxiv.org/pdf/2306.05685.pdf)).
Out of curiosity, just to see what would happen, I used upstage/Llama-2-70b-instruct-v2 as a judge instead of GPT-4. Llama 2 totally surprised me. The judgments of GPT-4 and Llama 2 are highly correlated, and Llama 2 even agreed with the human evaluation data. BUT (there has to be a downside when comparing a 70-billion-parameter model with a 1,760-billion-parameter one), Llama 2 sometimes messes up and does not do exactly what it has been told to do, leading to lower judgment quality.
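The agreement statistic itself is easy to compute once each judge has issued a verdict per comparison. A minimal sketch (the verdict labels are illustrative):

```python
def agreement_rate(judge_a: list[str], judge_b: list[str]) -> float:
    """Fraction of comparisons where two judges give the same verdict."""
    if len(judge_a) != len(judge_b):
        raise ValueError("verdict lists must be the same length")
    matches = sum(a == b for a, b in zip(judge_a, judge_b))
    return matches / len(judge_a)

gpt4  = ["A", "B", "tie", "A", "B"]
llama = ["A", "B", "A",   "A", "B"]
print(agreement_rate(gpt4, llama))  # 0.8
```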
[https://medium.com/@geronimo7/judging-the-judges-668e80f4a1f2](https://medium.com/@geronimo7/judging-the-judges-668e80f4a1f2)
https://preview.redd.it/5mi7ab07tzjb1.png?width=2991&format=png&auto=webp&s=b42dd1cdaba64616344f07c3a2b6e7baf0ddea91
https://preview.redd.it/3z0mveh8tzjb1.png?width=2002&format=png&auto=webp&s=f05fe9e195e920c6ec3c930892584e8ac03a2764
| 2023-08-24T05:23:22 |
https://www.reddit.com/r/LocalLLaMA/comments/15zsfzq/automated_chatbot_evaluation_using_llama_2_not/
|
HatEducational9965
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zsfzq
| false | null |
t3_15zsfzq
|
/r/LocalLLaMA/comments/15zsfzq/automated_chatbot_evaluation_using_llama_2_not/
| false | false | 1 | null |
|
who's gonna release VivekLLama
| 1 |
who's gonna release VivekLLama
| 2023-08-24T05:38:13 |
https://www.reddit.com/r/LocalLLaMA/comments/15zsptu/whos_gonna_release_vivekllama/
|
CheapBison1861
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zsptu
| false | null |
t3_15zsptu
|
/r/LocalLLaMA/comments/15zsptu/whos_gonna_release_vivekllama/
| false | false |
self
| 1 | null |
LoRA training not being applied?
| 1 |
I tried LoRA training a model on a specific response format. I got the loss down to ~0.5, which should be very low by training standards.
Yet when I tried the model after applying the adapter, the responses were exactly the same as the base model's.
I used llama.cpp to load the model and the LoRA.
I even tried a prompt taken directly from the training data, and it didn't even try to conform to the format.
Does anyone have any idea where I went wrong?
| 2023-08-24T05:40:58 |
https://www.reddit.com/r/LocalLLaMA/comments/15zsrpl/lora_training_not_being_applied/
|
Tasty-Lobster-8915
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zsrpl
| false | null |
t3_15zsrpl
|
/r/LocalLLaMA/comments/15zsrpl/lora_training_not_being_applied/
| false | false |
self
| 1 | null |
I also tested Llama 2 70B with getumbrel/llama-gpt (384GB RAM, 2x Xeon Platinum 8124M, CPU Only)
| 1 | 2023-08-24T06:40:11 |
https://v.redd.it/ra5qxwpz60kb1
|
th3st0rmtr00p3r
|
/r/LocalLLaMA/comments/15ztu6e/i_also_tested_llama_2_70b_with_getumbrelllamagpt/
| 1970-01-01T00:00:00 | 0 |
{}
|
15ztu6e
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ra5qxwpz60kb1/DASHPlaylist.mpd?a=1695537616%2CMmYxY2FkNjFlN2M0YmNhMzQ2NDk4Nzg5NzQwOTIwNTA1ODcyNTNjMmUyYmJhY2UzMDFlMzczM2U5ZjY1OTgzMw%3D%3D&v=1&f=sd', 'duration': 85, 'fallback_url': 'https://v.redd.it/ra5qxwpz60kb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/ra5qxwpz60kb1/HLSPlaylist.m3u8?a=1695537616%2CODAyMTBmOGQ5ZWU4NDAyMWUxMDY2ZjcwZjg5YzAxZDhiMGQzNGQ2MzFmMTI0ZWFjMjRhMzFlNGY1NDRjYmU5OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ra5qxwpz60kb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_15ztu6e
|
/r/LocalLLaMA/comments/15ztu6e/i_also_tested_llama_2_70b_with_getumbrelllamagpt/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '-To4Gx7P4-2WuX-XODrlaLvZcxIu8tWE9KKFYOmNjz4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=108&crop=smart&format=pjpg&auto=webp&s=231d15069c9f44e7ac6a6b36347ba9ef2ee80dca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=216&crop=smart&format=pjpg&auto=webp&s=efc7e6f252eac07af3b53fa4f4cc0991146c1fe7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=320&crop=smart&format=pjpg&auto=webp&s=fc8049c5fdfc4edab26e1668018783e3c9cd2250', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=640&crop=smart&format=pjpg&auto=webp&s=020b1243fffe125a9c892e1f2d740fcf748aacee', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=960&crop=smart&format=pjpg&auto=webp&s=828c90d45679724cf630d6df469e120fb43dd702', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=25a05686a3dacb72f1cc979648c867638ade968f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?format=pjpg&auto=webp&s=50f6f9848fefa4310d239425582fdfa918e748b7', 'width': 1920}, 'variants': {}}]}
|
||
How to run ctransformers efficiently
| 1 |
So I was just getting started with ctransformers and tried running Llama 2 13B GGML.
I have two questions:
1. How do I get it to generate the right things, and how do I play with the configuration?
2. How do I speed up inference? Is multiprocessing possible here?
If you've run into this too, how did you fix it?
https://preview.redd.it/f01vcvrc90kb1.png?width=1637&format=png&auto=webp&s=57cf5b6e154711ff959a5f4d74c817d005d76af9
| 2023-08-24T06:51:42 |
https://www.reddit.com/r/LocalLLaMA/comments/15zu1uu/how_to_run_ctransformers_efficiently/
|
Spiritual-Rub925
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zu1uu
| false | null |
t3_15zu1uu
|
/r/LocalLLaMA/comments/15zu1uu/how_to_run_ctransformers_efficiently/
| false | false | 1 | null |
|
Can you do this with Llama?
| 1 |
[https://www.instagram.com/reel/CwPPeTegug5/?hl=en](https://www.instagram.com/reel/CwPPeTegug5/?hl=en)
Could you do the same thing with Llama 2?
I would hope that Llama has a more extensive dataset of Meta ads, therefore giving you better feedback.
Thoughts? Thanks!
| 2023-08-24T07:12:06 |
https://www.reddit.com/r/LocalLLaMA/comments/15zufjz/can_you_do_this_with_lama/
|
Varial17
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zufjz
| false | null |
t3_15zufjz
|
/r/LocalLLaMA/comments/15zufjz/can_you_do_this_with_lama/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'zzHEqlA9yiBy6wXPkSvo_fcU34IlJ3YEAQLCWyLMVzU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?width=108&crop=smart&auto=webp&s=91bd05f3babf5afc683f07763afc784b5990742b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?width=216&crop=smart&auto=webp&s=22be0b1c0a4906252022fd354e304f98e2319ca8', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?width=320&crop=smart&auto=webp&s=381607077b7f01555f376531a118796a3a65920a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?auto=webp&s=7ca1df9e5bb1f3f21a6cb26d05ae233d90df6b13', 'width': 360}, 'variants': {}}]}
|
LLaMA 2 fine-tuning made easier and faster
| 1 |
[removed]
| 2023-08-24T07:35:01 |
https://www.reddit.com/r/LocalLLaMA/comments/15zuusx/llama_2_finetuning_made_easier_and_faster/
|
tushar2407
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zuusx
| false | null |
t3_15zuusx
|
/r/LocalLLaMA/comments/15zuusx/llama_2_finetuning_made_easier_and_faster/
| false | false |
self
| 1 | null |
Can anyone help me with this?
| 1 |
I am trying to make a site that runs on Llama 2 13B Chat, but I don't have the resources to host it myself. So I came up with an idea for a browser extension: when you move your mouse or type something on the site, the extension sends a request to my computer, my computer runs the prompt through Llama 2 13B Chat, and the answer is sent back to the extension and then to my site. Can anyone help me with programming this?
| 2023-08-24T07:56:11 |
https://www.reddit.com/r/LocalLLaMA/comments/15zv8lz/can_anyone_help_me_with_this/
|
radestijn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zv8lz
| false | null |
t3_15zv8lz
|
/r/LocalLLaMA/comments/15zv8lz/can_anyone_help_me_with_this/
| false | false |
self
| 1 | null |
How do "serverless" cloud LLMs work?
| 1 |
I get how something like RunPod works: in layman's terms, I guess I'm running a VM in the cloud with the OS and everything. But I'm not sure I understand what serverless cloud LLM deployments are and how they're different. Which use cases are better suited to RunPod vs something serverless like beam.cloud?
| 2023-08-24T08:05:24 |
https://www.reddit.com/r/LocalLLaMA/comments/15zvezz/how_do_serverless_cloud_llms_work/
|
noellarkin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zvezz
| false | null |
t3_15zvezz
|
/r/LocalLLaMA/comments/15zvezz/how_do_serverless_cloud_llms_work/
| false | false |
self
| 1 | null |
[Help required] Use case of asking questions on a CSV file (already tried 2 methods)
| 1 |
I was working on a project where we can ask questions to Llama 2 and it provides us accurate results with the help of CSV data provided.
I have mainly tried 2 methods until now:
1. Using CSV agent of Langchain
2. Storing in vectors and then asking questions
The problems with the above approaches are:
1. CSV Agent - It works perfectly when I use it with OpenAI, but not at all with the Llama 2 model; LangChain keeps throwing errors in this case.
2. Storing in vectors - The responses this approach generates are never from the document itself but from other sources. I have tried to restrict it to answer only from the document data, but have been unable to do so.

Please let me know if you have integrated CSV with Llama 2 in the past. I want to ask complex questions too, not just basic/easy ones. Complex questions are answered successfully via OpenAI, but not with Llama 2 so far with my approaches.

Any help is highly appreciated.
| 2023-08-24T08:26:09 |
https://www.reddit.com/r/LocalLLaMA/comments/15zvsx5/help_required_use_case_of_asking_questions_on_a/
|
tesla_fanboy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zvsx5
| false | null |
t3_15zvsx5
|
/r/LocalLLaMA/comments/15zvsx5/help_required_use_case_of_asking_questions_on_a/
| false | false |
self
| 1 | null |
Converting some models to GGUF formats from original sources; any requests?
| 1 |
- Any preferred models?
- For llama.cpp, any particular quantizations people find useful/not useful? I don't have unlimited bandwidth :p

Started with the classic gpt4-x-vicuna-13B at q5_0 and q5_K_M, uploading now: [https://huggingface.co/venketh/gpt4-x-vicuna-13b-gguf/](https://huggingface.co/venketh/gpt4-x-vicuna-13b-gguf/)
| 2023-08-24T08:33:27 |
https://www.reddit.com/r/LocalLLaMA/comments/15zvxta/converting_some_models_to_gguf_formats_from/
|
Fun_Tangerine_1086
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zvxta
| false | null |
t3_15zvxta
|
/r/LocalLLaMA/comments/15zvxta/converting_some_models_to_gguf_formats_from/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'gPWIgrw6XQDqpkhtA-ZQGG3Z7AVEpw1QryJS1JB_FwE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=108&crop=smart&auto=webp&s=7ae7573c0050b6ac847f43a21373fa6d63f725bc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=216&crop=smart&auto=webp&s=ef8e9905758d919b9eaa9d4eaa65b123fb69b55d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=320&crop=smart&auto=webp&s=a41098d58e45e8c6156ce6058c58d40db7ea1d1a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=640&crop=smart&auto=webp&s=71d0dd43d7be6724c811dc034bab4b53c8b4a261', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=960&crop=smart&auto=webp&s=b2b9ba4f19b59d8410152f49e369ed79c22bda4d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=1080&crop=smart&auto=webp&s=c1b769ce81df9862ce73f5bae8dbfc48b463e35d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?auto=webp&s=f07d882d14d2015847dc8822ef2a736ebbd4cdf3', 'width': 1200}, 'variants': {}}]}
|
Any 70B model finetuned with 16K context length?
| 1 |
I can't seem to find any model with this configuration.
| 2023-08-24T08:35:14 |
https://www.reddit.com/r/LocalLLaMA/comments/15zvyxy/any_70b_model_finetuned_with_16k_context_length/
|
RepublicCharacter699
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zvyxy
| false | null |
t3_15zvyxy
|
/r/LocalLLaMA/comments/15zvyxy/any_70b_model_finetuned_with_16k_context_length/
| false | false |
self
| 1 | null |
Performance issues despite RTX4090
| 1 |
Hello community,
I am running the model "TheBloke/Llama-2-7b-Chat-GPTQ" via FastAPI using AutoGPTQ on my new computer (RTX 4090, 13900K, 64GB RAM).
The problem is that the chat responses are super slow. On my Laptop with a mobile 2080 8GB, the webui oobabooga runs 13B models faster.
Are there any settings that I can change in FastAPI to improve speed? Or is that particular model so slow?
Thank you!
| 2023-08-24T09:20:26 |
https://www.reddit.com/r/LocalLLaMA/comments/15zws6u/performance_issues_despite_rtx4090/
|
Plane_Discussion_924
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zws6u
| false | null |
t3_15zws6u
|
/r/LocalLLaMA/comments/15zws6u/performance_issues_despite_rtx4090/
| false | false |
self
| 1 | null |
Ideal setup for dual 4090
| 1 |
I’m building a dual 4090 setup for local genAI experiments. The goal is a reasonable configuration for running LLMs, like a quantized 70B llama2, or multiple smaller models in a crude Mixture of Experts layout.
Top priorities are fast inference, and fast model load time, but I will also use it for some training (fine tuning).
48GB of VRAM seems to be enough to get started, and I managed to get a good deal on two cards.
I’ve done some reading on bottlenecks, threads and PCI lanes. Most configurations I’ve tried on runpod or vast.ai are running on AMD server cpus.
If I go with an i9-13900, and use the two GPUs in PCIe4 8x mode (instead of 16x) would it impact performance significantly?
Should I take the AMD CPU path, with at least two x16 PCIe slots on a server mobo?
What is a good, reliable setup for my 48GB of VRAM?
| 2023-08-24T09:36:37 |
https://www.reddit.com/r/LocalLLaMA/comments/15zx322/ideal_setup_for_dual_4090/
|
redscel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zx322
| false | null |
t3_15zx322
|
/r/LocalLLaMA/comments/15zx322/ideal_setup_for_dual_4090/
| false | false |
self
| 1 | null |
NTK RoPE scaling and GPTQ quantization are now native in Transformers. What else did I miss?
| 1 |
For these two features in particular, I spent weeks using the "hacked" version and AutoGPTQ, and I only just realized that they are both now included in Transformers (and Optimum). When I search for them on Google I get redirected to articles from before they were implemented, and find almost nothing about the Transformers support.
Are there any other advances I may have missed recently?
| 2023-08-24T10:02:31 |
https://www.reddit.com/r/LocalLLaMA/comments/15zxkdz/ntk_rope_scaling_and_gptq_quantization_are_now/
|
cvdbdo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zxkdz
| false | null |
t3_15zxkdz
|
/r/LocalLLaMA/comments/15zxkdz/ntk_rope_scaling_and_gptq_quantization_are_now/
| false | false |
self
| 1 | null |
Is there a dashboard / comparison tool already for comparing the latest LLMs and staying up to date?
| 1 |
This seems such low-hanging fruit that it must already have been done: is there someone who keeps track of all these LLMs published (both local and closed)? It would be nice to have a clean overview and to get updates, with various properties for each LLM (license, author, parameters, VRAM needed, strengths/weaknesses and benchmarks, whatever)
| 2023-08-24T10:28:25 |
https://www.reddit.com/r/LocalLLaMA/comments/15zy2t9/is_there_a_dashboard_comparison_tool_already_for/
|
true_variation
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zy2t9
| false | null |
t3_15zy2t9
|
/r/LocalLLaMA/comments/15zy2t9/is_there_a_dashboard_comparison_tool_already_for/
| false | false |
self
| 1 | null |
Similar service to chat.nbox.ai that runs a LLaMA model?
| 1 |
I finally had time to test LLaMA with decent text-generation speed. I am looking for a similar service (free, or freemium if possible) that runs a LLaMA model or another model.
| 2023-08-24T11:21:23 |
https://www.reddit.com/r/LocalLLaMA/comments/15zz5jv/smiliar_service_like_chatnboxai_that_run_llma/
|
Merchant_Lawrence
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zz5jv
| false | null |
t3_15zz5jv
|
/r/LocalLLaMA/comments/15zz5jv/smiliar_service_like_chatnboxai_that_run_llma/
| false | false |
self
| 1 | null |
llama2 quantized model vs. regular one: What's the difference?
| 1 |
Why is it that several folks use quantized models, provided by TheBloke for instance, in place of the regular models provided by Meta on Hugging Face?
Do the 8- or 4-bit quantized 13B models work better than the unquantized 7B or 13B models from Meta?
| 2023-08-24T11:24:53 |
https://www.reddit.com/r/LocalLLaMA/comments/15zz81s/llama2_quantized_model_vs_regular_one_whats_the/
|
sbs1799
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zz81s
| false | null |
t3_15zz81s
|
/r/LocalLLaMA/comments/15zz81s/llama2_quantized_model_vs_regular_one_whats_the/
| false | false |
self
| 1 | null |
Is there any opensource AI Model like Ada which accept tokens up to 8000 to create embedding?
| 1 |
[removed]
| 2023-08-24T11:39:56 |
https://www.reddit.com/r/LocalLLaMA/comments/15zziz0/is_there_any_opensource_ai_model_like_ada_which/
|
headphonesproreview
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
15zziz0
| false | null |
t3_15zziz0
|
/r/LocalLLaMA/comments/15zziz0/is_there_any_opensource_ai_model_like_ada_which/
| true | false |
default
| 1 | null |
Using llama to create business simulation software
| 1 |
I'm looking for a guide or some examples of using open-source LLaMA models to create open-source business simulation software in a non-gaming style, like the free-access Sim Companies web game, but as an open-source version.
| 2023-08-24T12:07:28 |
https://www.reddit.com/r/LocalLLaMA/comments/160045b/using_llama_to_create_business_simulation_software/
|
qwani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160045b
| false | null |
t3_160045b
|
/r/LocalLLaMA/comments/160045b/using_llama_to_create_business_simulation_software/
| false | false |
self
| 1 | null |
What is llama.cpp?
| 1 |
It appears that several people use this package, which is available on GitHub. I'm not sure if it has added benefits over using a Hugging Face Llama 2 model directly. Does llama.cpp serve specific purposes?
| 2023-08-24T12:16:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1600b3v/what_is_llamacpp/
|
sbs1799
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1600b3v
| false | null |
t3_1600b3v
|
/r/LocalLLaMA/comments/1600b3v/what_is_llamacpp/
| false | false |
self
| 1 | null |
Optimizing Response Speed from LLaMa and Hosting Strategy for a Multi-layered App Architecture on AWS EC2
| 1 |
I am looking forward for your suggestions so that I can implement my first ever project. I'd greatly appreciate your help.
**Challenge 1: Speeding Up Model Responses**
In the backend of my iOS app, I have a C# API that produces results. To control and change how the app shows these results, I added a Flask server in between, and I am using LLaMA there to rewrite the content. The problem is that LLaMA takes a long time (around 8 minutes) to give an answer. How can I keep LLaMA up all the time so it can answer quickly when we ask questions? Also, the response from LLaMA follows a different pattern each time, forcing me to think about a mechanism to filter the response so that it is presentable to the user. Any ideas? I will keep trying different questions until then.
**Challenge 2: AWS EC2 Hosting Strategy**
I'm planning to host both the Flask middleware and the C# backend on an Amazon AWS EC2 server, leveraging the free tier as well as the cloud capabilities. I anticipate needing around 8GB of space (which is the limit on the free tier), of which 7GB would be allocated to the LLaMA model.
However, I'm keen to optimize this setup for efficient resource utilization. Are there any recommendations or best practices you could share when it comes to configuring my AWS EC2 instance to ensure optimal performance for both my middleware and backend?
I am sorry for my naive questions.
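A common way to attack the first challenge is to load the model exactly once, at server startup, and reuse it for every request; reloading the weights per call is what typically turns responses into multi-minute waits. A minimal sketch of that load-once pattern follows (the `llama_cpp`/Flask wiring in the comment is an assumption about the stack, and the model path is a placeholder):

```python
import threading

# One shared model instance for the server's whole lifetime.
_model = None
_model_lock = threading.Lock()

def get_model(loader):
    """Return the shared model, calling `loader` only on first use."""
    global _model
    if _model is None:
        with _model_lock:
            if _model is None:  # another thread may have loaded it meanwhile
                _model = loader()
    return _model

# Hypothetical Flask route using it (llama_cpp and the model path are assumptions):
#
# from flask import Flask, request, jsonify
# from llama_cpp import Llama
#
# app = Flask(__name__)
#
# @app.route("/rewrite", methods=["POST"])
# def rewrite():
#     llm = get_model(lambda: Llama(model_path="./model.gguf"))
#     out = llm(request.json["prompt"], max_tokens=256)
#     return jsonify(out["choices"][0]["text"])
```

The same idea applies whichever engine sits behind Flask: pay the load cost once at startup, so each request only pays for inference.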
| 2023-08-24T13:11:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1601kny/optimizing_response_speed_from_llama_and_hosting/
|
JapaniRobot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1601kny
| false | null |
t3_1601kny
|
/r/LocalLLaMA/comments/1601kny/optimizing_response_speed_from_llama_and_hosting/
| false | false |
self
| 1 | null |
Code Llama Released
| 1 |
https://github.com/facebookresearch/codellama
| 2023-08-24T13:26:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1601xk4/code_llama_released/
|
FoamythePuppy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1601xk4
| false | null |
t3_1601xk4
|
/r/LocalLLaMA/comments/1601xk4/code_llama_released/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Na6nXLQe20G26kPIesr7oeh8pOhxV8_slXxPh_GWTUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=108&crop=smart&auto=webp&s=d05b5405ae1c095a2a957a92e6428f703c7d587b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=216&crop=smart&auto=webp&s=bf2a25de1e81efb7d2e7cb60035e9232421b0e70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=320&crop=smart&auto=webp&s=9ffd5b20fc1bf6035f8fb4d9625d6b70c3269efa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=640&crop=smart&auto=webp&s=c18133fd3079c5575a293cc9a29095c8b226d135', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=960&crop=smart&auto=webp&s=60eb3e52a8f04390ba2c15fd32fb42b10b827a4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=1080&crop=smart&auto=webp&s=076843b974aac99c3b4f770983b49e21f1120535', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?auto=webp&s=ef6db3bb620349fbee7749e0b9d463670d5f005f', 'width': 1200}, 'variants': {}}]}
|
how to clone a space from huggingface
| 1 |
[removed]
| 2023-08-24T14:59:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1604bae/how_to_clone_a_space_from_huggingface/
|
allnc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1604bae
| false | null |
t3_1604bae
|
/r/LocalLLaMA/comments/1604bae/how_to_clone_a_space_from_huggingface/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '5YyEGZm2jC0-mkhBa9c-xWcRwSxmFGSeri-W_n2-Biw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=108&crop=smart&auto=webp&s=23c646d6d860d83247e2f47af11ea4bacc43d969', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=216&crop=smart&auto=webp&s=3b3220acb62de8adeb9b7c29675c4861017e3059', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=320&crop=smart&auto=webp&s=34ad33fd1eb00d82eb3c8a4ae820eec09eba5656', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=640&crop=smart&auto=webp&s=cacf184b306984cae63b1f4d2608c595189825ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=960&crop=smart&auto=webp&s=fd4eef4b11d61728dda3216ee2d5dbd3e9bf048a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=1080&crop=smart&auto=webp&s=a0bae263f3cb60d318ad9359b35badf2002350b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?auto=webp&s=ec56d1a02d61683903693ef35aa4edfa0e3897e1', 'width': 1200}, 'variants': {}}]}
|
What models will suit my computer specs?
| 1 |
AMD Ryzen 7 2700X, 16GB RAM, NVIDIA GeForce GTX 1060 with 6GB VRAM.
| 2023-08-24T15:57:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1605vrx/what_models_will_suit_my_computer_specs/
|
Brarblaze
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1605vrx
| false | null |
t3_1605vrx
|
/r/LocalLLaMA/comments/1605vrx/what_models_will_suit_my_computer_specs/
| false | false |
self
| 1 | null |
What's a good uncensored local Ai I can run on my Linux machine?
| 1 |
Hi all, I'm new to this AI surge, but I like the idea of running it on my desktop computer. I only built it last month, with higher-end components for gaming, so it should be able to handle it without too much difficulty.
I've heard of something called GPT4All, but it's based on LLaMA. Is that one any good, or can I do better? I've also heard of Wizard-something-or-other... Any recommendations would be greatly appreciated!
| 2023-08-24T16:07:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16065mz/whats_a_good_uncensored_local_ai_i_can_run_on_my/
|
rondonjohnald
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16065mz
| false | null |
t3_16065mz
|
/r/LocalLLaMA/comments/16065mz/whats_a_good_uncensored_local_ai_i_can_run_on_my/
| false | false |
self
| 1 | null |
LoRA training Local LLM using Obbabooga with 8gb VRAM
| 1 |
Has anyone had any success training a Local LLM using Oobabooga with a paltry 8gb of VRAM. I've tried training the following models:
* Neko-Institute-of-Science_LLaMA-7B-4bit-128g
* TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ
I can run them fine (inference), but training them not so much.
I have run into so many problems with the monkeypatch (fix?), too numerous to count. I have read [https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) multiple times with no success.

Thanks!
| 2023-08-24T16:15:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1606dg6/lora_training_local_llm_using_obbabooga_with_8gb/
|
skeletorino
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1606dg6
| false | null |
t3_1606dg6
|
/r/LocalLLaMA/comments/1606dg6/lora_training_local_llm_using_obbabooga_with_8gb/
| false | false |
self
| 1 | null |
openorca-platypus2 ggmlv not able to utilize Nvidia RTX3090, with Llama.cpp
| 1 |
I'm running the openorca-platypus2 GGML model on my ROG M16; while running, it utilizes only the CPU at 100% (i9-11900H), even though I'm using the --gpu-layers option.
I've tried installing the NVIDIA CUDA toolkit, yet it made no difference. Can somebody please guide me through this?
P.S.: Before posting this, I tried to sort out the issue in various ways. I'm new to all these concepts, so please be lenient with me.
| 2023-08-24T16:19:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1606hde/openorcaplatypus2_ggmlv_not_able_to_utilize/
|
BlTUSER
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1606hde
| false | null |
t3_1606hde
|
/r/LocalLLaMA/comments/1606hde/openorcaplatypus2_ggmlv_not_able_to_utilize/
| false | false |
self
| 1 | null |
build a dl setup to run llm models (llama 2)
| 1 |
[removed]
| 2023-08-24T16:26:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1606owu/build_a_dl_setup_to_run_llm_models_llama_2/
|
llmexpert
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1606owu
| false | null |
t3_1606owu
|
/r/LocalLLaMA/comments/1606owu/build_a_dl_setup_to_run_llm_models_llama_2/
| false | false |
self
| 1 | null |
Samantha x ChatGPT fine tune
| 1 | 2023-08-24T16:28:30 |
sardoa11
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1606qhv
| false | null |
t3_1606qhv
|
/r/LocalLLaMA/comments/1606qhv/samantha_x_chatgpt_fine_tune/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'ZU-HlgtenIipbLCIXeTweM2AzvVr4jrKtsY1TABrFQ0', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=108&crop=smart&auto=webp&s=ad042abd753b3a890b34585c863b2cedb977fcbc', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=216&crop=smart&auto=webp&s=37fd6fdb9ac034cd535d5f7e2a04d8604d5710a7', 'width': 216}, {'height': 269, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=320&crop=smart&auto=webp&s=744e7c26557b7e639c33410936eb607d34c05a14', 'width': 320}, {'height': 539, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=640&crop=smart&auto=webp&s=77b4a29b13de434c8ef4e451ab254dff63f8a2c6', 'width': 640}, {'height': 809, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=960&crop=smart&auto=webp&s=f02a850052fe3b60baff272199cf9c6a14435a3a', 'width': 960}, {'height': 910, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=1080&crop=smart&auto=webp&s=1348a1ba68417d99aed48e94a6a051a26c70a63b', 'width': 1080}], 'source': {'height': 1436, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?auto=webp&s=7d6a7174173019b5470e8521a3acdbd1700b76ac', 'width': 1704}, 'variants': {}}]}
|
|||
llama-2 model AutoModelForSequenceClassification finetune with custom data and save on local
| 1 |
Hi,
Following are the steps I took to fine-tune the model. The model saves successfully, but when I load it I get the following error.
**Error**: EnvironmentError(OSError: results/llama2-classification/final_checkpoint does not appear to have a file named config.json
Basic code:
1. Load the model

    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              TrainingArguments)
    from trl import SFTTrainer

    model = AutoModelForSequenceClassification.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map='auto',  # dispatch the model efficiently on the available resources
        num_labels=5,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)

2. Train

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        max_seq_length=1,
        dataset_text_field='sentence',
        args=TrainingArguments(
            output_dir='./llm2results/output',
            logging_dir='./llm2logs/output',
            learning_rate=2e-4,
            per_device_train_batch_size=1,
            gradient_accumulation_steps=4,
            num_train_epochs=1,
            save_total_limit=3,
            fp16=True,
            logging_steps=1,
            max_steps=20,
            optim="paged_adamw_8bit",
            lr_scheduler_type="cosine",
            warmup_ratio=0.06,
        ),
        peft_config=peft_config,
    )

3. Save the model

    trainer.model.save_pretrained(output_dir)

4. Load the model from the local directory path

    model = AutoModelForSequenceClassification.from_pretrained(output_dir)
and got the error which I mentioned above.
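For what it's worth, this error is the classic symptom of saving only the PEFT/LoRA adapter: `trainer.model.save_pretrained(output_dir)` writes `adapter_config.json` and the adapter weights, not a full model with `config.json`. A hedged sketch of two common workarounds, assuming a recent `peft` library (the path is a placeholder, and running this requires the base model to be available):

```python
from peft import AutoPeftModelForSequenceClassification

output_dir = "results/llama2-classification/final_checkpoint"  # placeholder path

# Option 1: load base model + adapter together straight from the adapter dir.
model = AutoPeftModelForSequenceClassification.from_pretrained(output_dir, num_labels=5)

# Option 2: merge the adapter into the base weights and save a standalone model
# that plain AutoModelForSequenceClassification.from_pretrained can load.
merged = model.merge_and_unload()
merged.save_pretrained(f"{output_dir}/merged")
```

Note that merging a quantized (4-bit) base model has its own caveats; treat this as a sketch of the idea, not a drop-in fix.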
| 2023-08-24T16:31:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1606t04/llama2_model_automodelforsequenceclassification/
|
Satya8870
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1606t04
| false | null |
t3_1606t04
|
/r/LocalLLaMA/comments/1606t04/llama2_model_automodelforsequenceclassification/
| false | false |
self
| 1 | null |
How to access text-generation-webui from the lan without gradio.live ?
| 1 |
Hello,
I'm trying in vain to access oobabooga text-generation-webui from the computers on my LAN.
I know I have to use the --listen argument, but I don't know where.
I've tried it as an argument to start_linux.sh and webui.py, but it doesn't work.
Where should I put it?
Thanks for your help.
| 2023-08-24T16:34:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1606wn7/how_to_access_textgenerationwebui_from_the_lan/
|
Bogdahnfr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1606wn7
| false | null |
t3_1606wn7
|
/r/LocalLLaMA/comments/1606wn7/how_to_access_textgenerationwebui_from_the_lan/
| false | false |
self
| 1 | null |
Searching for repository of text-embedded database
| 1 |
[removed]
| 2023-08-24T16:36:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1606y9h/searching_for_repository_of_textembedded_database/
|
Natural_Speaker7954
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1606y9h
| false | null |
t3_1606y9h
|
/r/LocalLLaMA/comments/1606y9h/searching_for_repository_of_textembedded_database/
| false | false |
self
| 1 | null |
Recommendations for OSS LLMs that can run on a 4090 for tens of thousands of summarizations (125-3000 tokens)? What is the most accurate models you've found?
| 1 |
I've tried a bunch of 7B, 13B, and 30B models and got horrible results: 50-75% error rates with tons of hallucinations. I need concise summarization, but I can't lose key details or context. This is going to be for vector retrieval, so if you have any other recommendations, I'd greatly appreciate them.
No politics please.. it's just a simple summarization.. no need for debate.
| 2023-08-24T16:39:49 |
https://www.reddit.com/r/LocalLLaMA/comments/160717m/recommendations_for_oss_llms_that_can_run_on_a/
|
Tiny_Arugula_5648
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160717m
| false | null |
t3_160717m
|
/r/LocalLLaMA/comments/160717m/recommendations_for_oss_llms_that_can_run_on_a/
| false | false |
self
| 1 | null |
Nous-Hermes-Llama2-70b and Nous-Puffin-70B are out!
| 1 |
Here are the links:
* [https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b)
* [https://huggingface.co/NousResearch/Nous-Puffin-70B](https://huggingface.co/NousResearch/Nous-Puffin-70B)
What's Puffin doing here when everyone's been praising Hermes? Because of the description. Hermes was trained on one-shot instructions, while Puffin was trained on multi-turn conversations, so if you want a long chat, Puffin might work better. Same authors.
At least some of the quantizations are already up by TheBloke. However, be careful with the new GGUF format as for example text-generation-webui doesn't seem to work with it yet. (I learned that the hard way.)
| 2023-08-24T16:56:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1607hez/noushermesllama270b_and_nouspuffin70b_is_out/
|
whtne047htnb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1607hez
| false | null |
t3_1607hez
|
/r/LocalLLaMA/comments/1607hez/noushermesllama270b_and_nouspuffin70b_is_out/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'oymRYNcwScnZI2jtAS181KENxhSyq-c5fgaek5IEQes', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=108&crop=smart&auto=webp&s=9af61dc60b966f263821c4f49c6db9c5ce0748ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=216&crop=smart&auto=webp&s=29ad1639f933e58b9e3ff42731017fb5cc04d359', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=320&crop=smart&auto=webp&s=d6d018f89267898405d315d4dc1797e2cafac493', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=640&crop=smart&auto=webp&s=2bd6ed8688dc8cdffb0d48fca1badd5b1182493e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=960&crop=smart&auto=webp&s=2eeec24bd5f271bf48e4579bcfca3d941bf106db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=1080&crop=smart&auto=webp&s=89222cf8fcd9940edbb69cde3a81775f9f8d67ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?auto=webp&s=701ec81ee3a92803e210302b2bfb7c6728ddff33', 'width': 1200}, 'variants': {}}]}
|
Programmatically using llama.cpp.
| 1 |
For my project, I'm trying to use llama.cpp from my scripts/system (in nodejs).
There are 3 nodejs libraries for llama.cpp on npm: all 3 fail to build (this is what you get when things are this fresh/recent I guess), and they also don't have all the options the command line llama.cpp "main" program does (like grammar and many others).
So I've been using llama-cpp-python's server:
python3 -m llama_cpp.server
This works, it can be accessed as if it were the OpenAI API, the problem is there also, I don't have all the command line options llama.cpp's main or server does.
My next idea was to use llama.cpp's server script, run the server, and then use a HTTP client to "talk" to the script, make requests and get replies. But I couldn't get that to work, it's far from straightforward how to make the requests and where/how to get the replies.
This leaves me one option I can see:
From a nodejs script, I execute llama.cpp's main binary, somehow "wait" for some specific stdout output from it, then send my prompt over stdin (not sure how to pass the system prompt there); it outputs the result, and from my script I somehow read this, figure out when it's done sending, and move on. I have no idea how to do this, but I suspect in theory it might be possible? It's a bit nasty, but maybe it'd work?
Another option I can maybe see, which is similar, is to pass the prompt/system prompt over the command line (as a command-line option or a file?), then get main to "finish"/exit when it's done outputting the result, and parse everything in a way that gets me what I'm interested in.
I've tried both approaches with not much success.
My question for the community is: has anyone here done this? Have you had any success? Do you maybe have some code you'd want to share? If I manage to get this to work, I'd make and publish a npm module from it so it's shared with the community.
Any help would be super welcome, or any other idea of how to do this.
Cheers!
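For the "pass the prompt on the command line and let the process exit" variant, the shape of the code is small in any language. Here is a hedged Python sketch of the idea (the poster wants Node.js, where `child_process.execFile` has the same shape; `./main` and the `-p`/`-m`/`-n` flags follow llama.cpp's CLI, and the binary/model paths are placeholders):

```python
import subprocess

def run_llama(prompt, binary="./main", extra_args=()):
    """One-shot generation: pass the prompt as an argument, let the process
    run to completion, and return everything it wrote to stdout."""
    cmd = [binary, "-p", prompt, *extra_args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# A real call might look like (paths are placeholders):
# text = run_llama("Building a website can be done in 10 steps:",
#                  extra_args=("-m", "models/7B/ggml-model.gguf", "-n", "128"))
```

This avoids the stdin/"wait for the right stdout" dance entirely; the trade-off is paying the model-load cost on every call, which is exactly what the server-based approaches avoid.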
| 2023-08-24T17:30:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1608fjd/programmatically_using_llamacpp/
|
arthurwolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1608fjd
| false | null |
t3_1608fjd
|
/r/LocalLLaMA/comments/1608fjd/programmatically_using_llamacpp/
| false | false |
self
| 1 | null |
Seeking Advice on Structured Output and Fine-Tuning for Real Estate Description Parsing
| 1 |
[removed]
| 2023-08-24T17:59:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16098c4/seeking_advice_on_structured_output_and/
|
Sneackybae
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16098c4
| false | null |
t3_16098c4
|
/r/LocalLLaMA/comments/16098c4/seeking_advice_on_structured_output_and/
| false | false |
self
| 1 | null |
Anyone train a model on all the news happening here for learning?
| 1 |
I've been trying to get up to speed on all the happenings here, and it's been a little difficult finding information and understanding it all. Previously I could understand complex topics by using GPT-4, but a lot of the new terms here (GPTQ, LoRA) are too new for it.
Has anyone put together a knowledge base and model for someone to easily learn all the new terms and happenings? If not, it could be a cool project.
| 2023-08-24T18:00:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1609a6f/anyone_train_a_model_on_all_the_news_happening/
|
sorbitals
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1609a6f
| false | null |
t3_1609a6f
|
/r/LocalLLaMA/comments/1609a6f/anyone_train_a_model_on_all_the_news_happening/
| false | false |
self
| 1 | null |
How about I TEACH you how to get ANY LLaMA model up and running
| 1 |
Hey folks - I've been reading all of your posts for over a year now, and I think it's high time I offer a service to those wanting to get **Llama 2, Llama 2 Chat, or Code Llama** running and accessible from the cloud FAST. For a $100 flat fee, I will spend 1 hour with you on webcam to get you up and running with any model of your liking (fp16, quantized, etc.) on a new AWS or Lambda Labs instance. The cloud architecture + setup I use offers extremely fast inference throughput (tok/sec) and is configured to your needs. I will provide detailed instructions plus a recording of the screen share after our session, where I help you launch your LLaMA inference cloud instance. Let me compress the learning curve by easily 3 months in one session. It doesn't matter if you are running Windows/Linux/Mac. DM me if interested. Satisfaction guaranteed or $$$ back. :)
Can discuss your needs free of charge.
Background: MS/BS Aerospace Engineering; MBA; ML Expert
| 2023-08-24T18:36:47 |
https://www.reddit.com/r/LocalLLaMA/comments/160a90g/how_about_i_teach_you_how_to_get_any_llama_model/
|
No_Joke5137
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160a90g
| false | null |
t3_160a90g
|
/r/LocalLLaMA/comments/160a90g/how_about_i_teach_you_how_to_get_any_llama_model/
| false | false |
self
| 1 | null |
Llama 2 - Vicuna 13b 16k - context size
| 1 |
So I am doing QA on documents. I created embeddings with a chunk size of 4000 plus 600 overlap. When I limit the number of source documents (param k) to 2, everything works fine. If I move that up to 3, I either get an empty answer or only the first few letters of the first word.
To my understanding, 3 documents of 5200 characters each should be around 5610 tokens (calculated with the OpenAI tokenizer). Is my calculation wrong, or am I doing something wrong in the code?
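As a rough sanity check, a character-based estimate can be sketched like this (the 4-characters-per-token ratio is an assumption that only roughly matches the OpenAI tokenizer; LLaMA's SentencePiece tokenizer often produces noticeably more tokens for the same text, so the real count should be measured with the model's own tokenizer):

```python
# Rough token estimate from character counts -- a heuristic, not a real tokenizer.
def estimate_tokens(num_chunks: int, chars_per_chunk: int, chars_per_token: float = 4.0) -> int:
    return int(num_chunks * chars_per_chunk / chars_per_token)

print(estimate_tokens(2, 5200))  # 2600 -- fits easily in a 16k window
print(estimate_tokens(3, 5200))  # 3900 -- still well under 16k by this heuristic
```

If the heuristic says the prompt fits but generation still breaks at k=3, the gap between this estimate and the model's actual tokenizer (plus the prompt template and the tokens reserved for generation) is the first thing worth checking.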
| 2023-08-24T19:11:50 |
https://www.reddit.com/r/LocalLLaMA/comments/160b6mi/llama_2_vicuna_13b_16k_context_size/
|
Kukaracax
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160b6mi
| false | null |
t3_160b6mi
|
/r/LocalLLaMA/comments/160b6mi/llama_2_vicuna_13b_16k_context_size/
| false | false |
self
| 1 | null |
Easiest way to LoRa finetune LLama2 7B on 8 A100?
| 1 |
[removed]
| 2023-08-24T19:44:59 |
https://www.reddit.com/r/LocalLLaMA/comments/160c21l/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
|
Unfair-Permit5904
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160c21l
| false | null |
t3_160c21l
|
/r/LocalLLaMA/comments/160c21l/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
| false | false |
default
| 1 |
{'enabled': False, 'images': [{'id': 'BuMvNeLVUDAwg4OrZlAdIktP-9azOriK5S1eIryToD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=108&crop=smart&auto=webp&s=f8d25c9a7c4af3403e5df3f3b6bee22f27ab67f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=216&crop=smart&auto=webp&s=8c856460407dc8e3b92d4447ffde4a65eadf783f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=320&crop=smart&auto=webp&s=1b80f701ff615f5b4e33ccd23f151428e65054f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=640&crop=smart&auto=webp&s=8a88b6a358a50f899a5d93eadeec2b28d0dc9b4c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=960&crop=smart&auto=webp&s=8164273b7c65cfeb083de29fe280d332a7926951', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=1080&crop=smart&auto=webp&s=2471fbd659680ea1b99a94d265fc6a3b64406aa4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?auto=webp&s=e59f4298135bee53e21d9876802d9f07eeba3e2f', 'width': 1200}, 'variants': {}}]}
|
Easiest way to LoRa finetune LLama2 7B on 8 A100?
| 1 |
Hi guys
So for some experiments I'm looking to finetune Llama 2 7B using 8x A100 on RunPod, since I have some time constraints. One entry of the data is pretty big, about 2-4k tokens, so I'm looking for the most resource-friendly way.
Which would be the most resource-friendly way? Should I use QLoRA with 4-bit?
Does anyone have experience with this? Is the [xTuring](https://github.com/stochasticai/xTuring) library my best bet? If someone has more resources on this topic, I would appreciate it.
| 2023-08-24T19:48:14 |
https://www.reddit.com/r/LocalLLaMA/comments/160c571/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
|
Single_Prior_704
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160c571
| false | null |
t3_160c571
|
/r/LocalLLaMA/comments/160c571/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'BuMvNeLVUDAwg4OrZlAdIktP-9azOriK5S1eIryToD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=108&crop=smart&auto=webp&s=f8d25c9a7c4af3403e5df3f3b6bee22f27ab67f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=216&crop=smart&auto=webp&s=8c856460407dc8e3b92d4447ffde4a65eadf783f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=320&crop=smart&auto=webp&s=1b80f701ff615f5b4e33ccd23f151428e65054f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=640&crop=smart&auto=webp&s=8a88b6a358a50f899a5d93eadeec2b28d0dc9b4c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=960&crop=smart&auto=webp&s=8164273b7c65cfeb083de29fe280d332a7926951', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=1080&crop=smart&auto=webp&s=2471fbd659680ea1b99a94d265fc6a3b64406aa4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?auto=webp&s=e59f4298135bee53e21d9876802d9f07eeba3e2f', 'width': 1200}, 'variants': {}}]}
|
Fine-tuning GPT
| 1 |
[removed]
| 2023-08-24T19:51:00 |
https://www.reddit.com/r/LocalLLaMA/comments/160c7va/finetuning_gpt/
|
heswithjesus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160c7va
| false | null |
t3_160c7va
|
/r/LocalLLaMA/comments/160c7va/finetuning_gpt/
| false | false |
self
| 1 | null |
Not sure why the latest Vicuna 13B version 1.5 is not working with this code; it just gives a blank response and takes a long time to load
| 1 |
[removed]
| 2023-08-24T20:14:00 |
https://www.reddit.com/r/LocalLLaMA/comments/160ctex/not_sure_why_the_latest_vicuna_13b_version_15_is/
|
skeletons_of_closet
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160ctex
| false | null |
t3_160ctex
|
/r/LocalLLaMA/comments/160ctex/not_sure_why_the_latest_vicuna_13b_version_15_is/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
|
Help with install
| 1 |
Help
| 2023-08-24T20:29:37 |
https://www.reddit.com/r/LocalLLaMA/comments/160d806/help_with_install/
|
LearnOnnReddit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160d806
| false | null |
t3_160d806
|
/r/LocalLLaMA/comments/160d806/help_with_install/
| false | false |
self
| 1 | null |
Why is "lmsys/vicuna-13b-v1.5" giving Chinese answers all the time
| 1 |
[removed]
| 2023-08-24T20:37:41 |
https://www.reddit.com/r/LocalLLaMA/comments/160dfuh/why_is_lmsysvicuna13bv15_giving_chinease_answers/
|
skeletons_of_closet
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160dfuh
| false | null |
t3_160dfuh
|
/r/LocalLLaMA/comments/160dfuh/why_is_lmsysvicuna13bv15_giving_chinease_answers/
| false | false | 1 | null |
|
We could have gotten something almost as good as GPT4 for coding...
| 1 |
[https://twitter.com/garybasin/status/1694735409287233578?t=JsnswieBAgTGXmwY86qrhg&s=19](https://twitter.com/garybasin/status/1694735409287233578?t=JsnswieBAgTGXmwY86qrhg&s=19)
But they decided to not release it...
https://preview.redd.it/rq1szgxrk4kb1.png?width=896&format=png&auto=webp&s=f2d9e9aa459c82de20eab712bdf66e2e933685a0
| 2023-08-24T21:21:19 |
https://www.reddit.com/r/LocalLLaMA/comments/160elof/we_could_have_gotten_something_almost_as_good_as/
|
Wonderful_Ad_5134
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160elof
| false |
{'oembed': {'author_name': 'Gary Basin 🍍', 'author_url': 'https://twitter.com/garybasin', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">They don't want you to know that synthetic data is the future.<br><br>LLMs generating synthetic data to train on drives a huuuge boost in "unnatural" code llama -- the one model they aren't releasing. Surpasses gpt-3.5 and gets close to gpt-4 performance on a 34B model <a href="https://t.co/NdB6Or6mhi">pic.twitter.com/NdB6Or6mhi</a></p>— Gary Basin 🍍 (@garybasin) <a href="https://twitter.com/garybasin/status/1694735409287233578?ref_src=twsrc%5Etfw">August 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/garybasin/status/1694735409287233578', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_160elof
|
/r/LocalLLaMA/comments/160elof/we_could_have_gotten_something_almost_as_good_as/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'O_dgt0qW5CvbhrfMq_E6TNvcugq_JjNAktxGBVM9ESM', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/wcvG0usKjZnoF-Jt9ljufXlUYiHUG8pSANv5Z3L2Og4.jpg?width=108&crop=smart&auto=webp&s=363a78f7f256dc78f3d1515ce49af651753a6e61', 'width': 108}], 'source': {'height': 111, 'url': 'https://external-preview.redd.it/wcvG0usKjZnoF-Jt9ljufXlUYiHUG8pSANv5Z3L2Og4.jpg?auto=webp&s=e556b679409abc8e55e5ab2ae68e05de491d3bff', 'width': 140}, 'variants': {}}]}
|
|
Llama finetuning question
| 1 |
Is there a good resource that will
1) Explain to me what these values are?
2) Recommend good values on the basis of a) my hardware and b) my dataset?
​
model_name = "meta-llama/Llama-2-7b-chat-hf"
dataset_name = "./train.jsonl"
new_model = "llama-2-7b-custom"
lora_r = 64
lora_alpha = 16
lora_dropout = 0.1
use_4bit = True
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_nested_quant = False
output_dir = "./results"
num_train_epochs = 1
fp16 = False
bf16 = False
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
gradient_accumulation_steps = 1
gradient_checkpointing = True
max_grad_norm = 0.3
learning_rate = 2e-4
weight_decay = 0.001
optim = "paged_adamw_32bit"
lr_scheduler_type = "constant"
max_steps = -1
warmup_ratio = 0.03
group_by_length = True
save_steps = 25
logging_steps = 5
max_seq_length = None
packing = False
device_map = {"": 0}
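As a hedged illustration of how a few of these values interact (assuming a single-GPU run, which the `device_map = {"": 0}` line suggests):

```python
# Effective batch size: examples consumed per optimizer step.
per_device_train_batch_size = 4
gradient_accumulation_steps = 1
num_gpus = 1  # assumption based on device_map = {"": 0}
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 4

# LoRA scaling: the low-rank update is multiplied by alpha / r,
# so lora_alpha=16 with lora_r=64 means a 0.25x scale on the adapter's output.
lora_scaling = 16 / 64
print(lora_scaling)  # 0.25
```

Raising `gradient_accumulation_steps` trades step frequency for a larger effective batch without more VRAM, which is the usual first knob when memory is tight.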
​
| 2023-08-24T21:21:49 |
https://www.reddit.com/r/LocalLLaMA/comments/160em6a/llama_finetuning_question/
|
Alert_Record5063
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160em6a
| false | null |
t3_160em6a
|
/r/LocalLLaMA/comments/160em6a/llama_finetuning_question/
| false | false |
self
| 1 | null |
Code LLaMA is now on Perplexity’s LLaMa Chat!
| 1 |
[deleted]
| 2023-08-24T23:08:55 |
https://twitter.com/perplexity_ai/status/1694845231936557437
|
eunumseioquescrever
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
160hig9
| false |
{'oembed': {'author_name': 'Perplexity', 'author_url': 'https://twitter.com/perplexity_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Code LLaMA is now on Perplexity’s LLaMa Chat!<br><br>Try asking it to write a function for you, or explain a code snippet: 🔗 <a href="https://t.co/gyiDw6u6IJ">https://t.co/gyiDw6u6IJ</a><br><br>This is the fastest way to try <a href="https://twitter.com/MetaAI?ref_src=twsrc%5Etfw">@MetaAI</a>’s latest code-specialized LLM. With our model deployment expertise, we are able to provide you… <a href="https://t.co/hX90QulMz4">pic.twitter.com/hX90QulMz4</a></p>— Perplexity (@perplexity_ai) <a href="https://twitter.com/perplexity_ai/status/1694845231936557437?ref_src=twsrc%5Etfw">August 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/perplexity_ai/status/1694845231936557437', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_160hig9
|
/r/LocalLLaMA/comments/160hig9/code_llama_is_now_on_perplexitys_llama_chat/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'AR0k3LyN_yBcJu6lkqVkXktnblOzaGz8TOsP7zGZqOE', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/SKtrgVuDF4LjSVYgXhHLONGYuNQvNarDBfdHBv9LUbc.jpg?width=108&crop=smart&auto=webp&s=544a8a810b78090341e2c47edfd71e904ace52a6', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/SKtrgVuDF4LjSVYgXhHLONGYuNQvNarDBfdHBv9LUbc.jpg?auto=webp&s=dede2c8e4d9916f9ed37d5e26cd0db3c9224906f', 'width': 140}, 'variants': {}}]}
|
|
Fine tuning one of the Llama models
| 1 |
Would it be fair to say that if my current setup gives me N tokens/sec on a model, then I can finetune it at the rate of ~N/2 tokens/sec on the same setup? (Assume it's just CPUs).
| 2023-08-25T00:06:38 |
https://www.reddit.com/r/LocalLLaMA/comments/160iz0g/fine_tuning_one_of_the_llama_models/
|
ispeakdatruf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160iz0g
| false | null |
t3_160iz0g
|
/r/LocalLLaMA/comments/160iz0g/fine_tuning_one_of_the_llama_models/
| false | false |
self
| 1 | null |
Are the quantized models really that poor in performance?
| 1 |
I've just given this one quick attempt using llama-2-7b 4-bit quantization, and the result I got back surprised me -- I thought these models were a lot better.
Preface: I'm trying to use these models to explore some potential use cases for generative AI in a somewhat niche domain.
​
The question I asked was fairly straightforward and simple: name the planets in our solar system. The answer I got back didn't even address the question I asked, and the question it supposedly answered was still wrong:
[https://imgur.com/aAn1UnS](https://imgur.com/aAn1UnS)
​
Obviously I know I'm using a 7B-parameter model and 4-bit quantization, but I figured it'd still be decent at answering at least the most basic questions. Also, it took some time to run, but I can safely attribute that to not having a GPU lol.
What are your thoughts on this?
| 2023-08-25T00:12:11 |
https://www.reddit.com/r/LocalLLaMA/comments/160j3xa/are_the_quantized_models_really_that_poor_in/
|
anasp1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160j3xa
| false | null |
t3_160j3xa
|
/r/LocalLLaMA/comments/160j3xa/are_the_quantized_models_really_that_poor_in/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'fAG4zst1QoQF-0DbpD3YM-aRm8pfilCza0mzFStV5IU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?width=108&crop=smart&auto=webp&s=1dcf1261f1a07357e4e0d20216ed6663a96178b6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?width=216&crop=smart&auto=webp&s=c45b8ecbdc466f47303a7e50de8c16f54528efb5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?width=320&crop=smart&auto=webp&s=cd85383183aa95f53d12dd3229fd8a65bb1a4a5a', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?auto=webp&s=3cd3db6c448729f2e5d52945143edf66c7e851c0', 'width': 600}, 'variants': {}}]}
|
nous-Hermes: Arrogant fool regarding basic math?
| 1 |
[removed]
| 2023-08-25T00:58:07 |
https://www.reddit.com/r/LocalLLaMA/comments/160k8ey/noushermes_arrogant_fool_regarding_basic_math/
|
Natural-Sentence-601
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160k8ey
| false | null |
t3_160k8ey
|
/r/LocalLLaMA/comments/160k8ey/noushermes_arrogant_fool_regarding_basic_math/
| false | false |
self
| 1 | null |
LLama CPP & LangChain
| 1 |
[removed]
| 2023-08-25T02:38:46 |
https://www.reddit.com/r/LocalLLaMA/comments/160ml3u/llama_cpp_langchain/
|
emporer_eli
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160ml3u
| false | null |
t3_160ml3u
|
/r/LocalLLaMA/comments/160ml3u/llama_cpp_langchain/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9cWeWP4ZX06TcfaZj6bu0HZnpkXpDxX8Z8JLesAZzBs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?width=108&crop=smart&auto=webp&s=41754314a19be30560bef611b80e2296f8c7fb81', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?width=216&crop=smart&auto=webp&s=79282879e773f168a5820fc78bd5a040caa50ec6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?width=320&crop=smart&auto=webp&s=55b1b15589b468bc7747dcc595709f11dea3a571', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?auto=webp&s=387b51edec209c61e7dd0d8ba4e88812f5e059d3', 'width': 480}, 'variants': {}}]}
|
What's the best way to point CodeLlama at a local git repo
| 1 |
Looking for the best interface to set it loose on an entire folder of code, talk to it about the files in there, and have it make changes.
| 2023-08-25T02:49:09 |
https://www.reddit.com/r/LocalLLaMA/comments/160mt8j/whats_the_best_way_to_point_codellama_at_a_local/
|
FaustBargain
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160mt8j
| false | null |
t3_160mt8j
|
/r/LocalLLaMA/comments/160mt8j/whats_the_best_way_to_point_codellama_at_a_local/
| false | false |
self
| 1 | null |
The Bloke has released GGMLs for Code Llama - what is the best for a 3090?
| 1 |
[removed]
| 2023-08-25T03:31:08 |
https://www.reddit.com/r/LocalLLaMA/comments/160npcu/the_bloke_has_released_ggmls_for_code_llama_what/
|
RoyalCities
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160npcu
| false | null |
t3_160npcu
|
/r/LocalLLaMA/comments/160npcu/the_bloke_has_released_ggmls_for_code_llama_what/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'giBT9zJH9B0iZmkMce--ijkYXjNeHMS6BVtL09I1cIY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=108&crop=smart&auto=webp&s=5d45cd6c29253e9c07b89fb3252ebfcd33d2b218', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=216&crop=smart&auto=webp&s=56cf2beed378658b5951d4d94c1ec808380ea580', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=320&crop=smart&auto=webp&s=0ab7200fdf18d4eb9c412a70dea6626250fd5c2f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=640&crop=smart&auto=webp&s=32d76d3f87cc28808add20d3285446836dc39d70', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=960&crop=smart&auto=webp&s=7e25cd057e6c3aeb34e2eca034decd772d33f548', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=1080&crop=smart&auto=webp&s=698a86f09c53bd7744c8facc5f1cb8c20baa334a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?auto=webp&s=f5cf72bce312519d63e2d8044fe10173a2eda8e2', 'width': 1200}, 'variants': {}}]}
|
Is there any completely free API of llama 2
| 1 |
I am searching for a completely free API for Llama 2. I don't have enough space or the hardware requirements on my local machine, so I need a free API key. Please suggest any way to use one.
| 2023-08-25T04:07:25 |
https://www.reddit.com/r/LocalLLaMA/comments/160og8r/is_there_any_completely_free_api_of_llama_2/
|
Responsible-Row6023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160og8r
| false | null |
t3_160og8r
|
/r/LocalLLaMA/comments/160og8r/is_there_any_completely_free_api_of_llama_2/
| false | false |
self
| 1 | null |
After hosting your own llama.cpp
| 1 | 2023-08-25T04:19:48 |
anehzat
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
160op7l
| false | null |
t3_160op7l
|
/r/LocalLLaMA/comments/160op7l/after_hosting_your_own_llamacpp/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'iqdRaYsKRNN2ankOEYAt7CnLZaVCf5N6Q6785wS9-Ik', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=108&crop=smart&format=png8&s=3618022b2e6608f8a268594f537250ced4d56e85', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=216&crop=smart&format=png8&s=32d5afe68c8f774a3d997ea46be414bab89c0132', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=320&crop=smart&format=png8&s=1e247b423b0b9a2dab8b7bcdd23e4a1b9ce662bb', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?format=png8&s=32ff0ed5d0b23b6919c081a9a05fd0a461ba1295', 'width': 360}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=108&crop=smart&s=d84a303fd93abc76d6d2f0ff67cd1f8a3f4208bf', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=216&crop=smart&s=f3e1255402e9bbdef860c70e620e2c4edefc0bdf', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=320&crop=smart&s=e1a994e95838f43deb1dc14377eda1a6c222ef96', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?s=71ba62d4dd98b0d06792d8b740d807050ae23537', 'width': 360}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=108&format=mp4&s=d648dbae6509f1173b10f1a284f9d7b3783ab929', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=216&format=mp4&s=7a9cdbe93969cd33736bbd87ca1cca4b949aaad4', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=320&format=mp4&s=70f79ea56cc2fc26724dc225850e3606a3567c9c', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?format=mp4&s=e8d189e7039d30618ae59d21c0467d8827338a0a', 'width': 360}}}}]}
|
|||
Any Google search/Website query projects?
| 1 |
Not sure if this was asked before, but are there any projects available that let you use a local model to run Google search queries, scan/pull info from websites, and maybe even chat with the model about the information found?
| 2023-08-25T04:35:03 |
https://www.reddit.com/r/LocalLLaMA/comments/160ozx2/any_google_searchwebsite_query_projects/
|
AI_Trenches
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160ozx2
| false | null |
t3_160ozx2
|
/r/LocalLLaMA/comments/160ozx2/any_google_searchwebsite_query_projects/
| false | false |
self
| 1 | null |
Someone needs to finetune a model for ascii art, I've only been disappointed with what comes stock for most
| 1 |
Don't think I've seen anyone really talk about this so thought I'd put the idea out there
| 2023-08-25T05:46:50 |
https://www.reddit.com/r/LocalLLaMA/comments/160qcly/someone_needs_to_finetune_a_model_for_ascii_art/
|
_______DEADPOOL____
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160qcly
| false | null |
t3_160qcly
|
/r/LocalLLaMA/comments/160qcly/someone_needs_to_finetune_a_model_for_ascii_art/
| false | false |
self
| 1 | null |
Finetuning models on XML Data to chat about it?
| 1 |
[removed]
| 2023-08-25T07:56:32 |
https://www.reddit.com/r/LocalLLaMA/comments/160smiw/finetuning_models_on_xml_data_to_chat_about_it/
|
nerdw
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160smiw
| false | null |
t3_160smiw
|
/r/LocalLLaMA/comments/160smiw/finetuning_models_on_xml_data_to_chat_about_it/
| false | false |
self
| 1 | null |
Facing Truncation Issues with LLama-2 Model Responses
| 1 |
I have a problem with the responses generated by LLama-2 (/TheBloke/Llama-2-70B-chat-GGML). They are cut off almost at the same spot regardless of whether I'm using a 2xRTX3090 or 3xRTX3090 configuration. LLama-2's task is to generate an article based on the data contained in my database. Here's the code:
    llm = LlamaCPP(
        model_url="https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML/resolve/main/llama-2-70b-chat.ggmlv3.q4_0.bin",
        model_path=None,
        temperature=0.1,
        context_window=17800,
        generate_kwargs={},
        model_kwargs={"n_gpu_layers": 82, "n_gqa": 8},
        messages_to_prompt=messages_to_prompt,
        completion_to_prompt=completion_to_prompt,
        verbose=True,
    )

    # SQL query to retrieve data from the database
    query_sql = text("SELECT id, title, heading, article, keyword FROM articles;")

    # Execute the query and process the results
    with engine.connect() as connection:
        result = connection.execute(query_sql)
        for row in result:
            id, title, heading, article, keyword = row
            # If the value in the 'keyword' column is not empty, generate an article
            if keyword:
                # Create the prompt for the Llama-2 model
                content = f"Please write an article about {keyword}. Title: {title}. Headings: {heading}. Content: {article}."
                # Generate the response from the Llama-2 model
                response = llm.complete(content)
                # Print the generated article
                print(response.text)
The responses are consistently truncated, and I'm unsure how to resolve this issue. Any insights or assistance would be greatly appreciated.
​
**The responses are being truncated after approximately 3-4 sentences.**
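One plausible cause worth checking (an assumption on my part, not something confirmed by the post): llama-index's `LlamaCPP` wrapper has a separate `max_new_tokens` setting that caps generation independently of `context_window`, and its default is small. Passing a larger value in the constructor might look like this (fragment only; the value 2048 is an arbitrary choice):

```python
# Fragment -- same constructor as above, with the generation cap raised.
llm = LlamaCPP(
    model_url="https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML/resolve/main/llama-2-70b-chat.ggmlv3.q4_0.bin",
    temperature=0.1,
    max_new_tokens=2048,  # raise the per-response generation cap
    context_window=17800,
    model_kwargs={"n_gpu_layers": 82, "n_gqa": 8},
    verbose=True,
)
```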
| 2023-08-25T08:24:48 |
https://www.reddit.com/r/LocalLLaMA/comments/160t4ao/facing_truncation_issues_with_llama2_model/
|
vnvrx1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160t4ao
| false | null |
t3_160t4ao
|
/r/LocalLLaMA/comments/160t4ao/facing_truncation_issues_with_llama2_model/
| false | false |
self
| 1 | null |
What did you train, what hardware did you use, and how long did it take?
| 1 |
Hello!
I'm trying to gather data on the hardware requirements and time taken to fine-tune or train a model.
So what are your experiences? How much VRAM was used? With what kind of hardware?
Thanks in advance!
| 2023-08-25T08:35:06 |
https://www.reddit.com/r/LocalLLaMA/comments/160tagr/what_did_you_train_what_hardware_did_you_use_and/
|
Factemius
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160tagr
| false | null |
t3_160tagr
|
/r/LocalLLaMA/comments/160tagr/what_did_you_train_what_hardware_did_you_use_and/
| false | false |
self
| 1 | null |
What does llm's token context actually mean?
| 1 |
When describing an LLM, including Llama 2, and its accuracy and applications, most people talk about its token context. What does that mean? Is it the max length of text that can be prompted? Or is it the max length of response you can expect, beyond which it will be truncated?
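A short numeric sketch of the usual meaning, assuming a decoder-only model like Llama 2 where the prompt and the generated response share one window:

```python
# The context window is a combined budget: prompt tokens + generated tokens.
context_window = 4096   # Llama 2's native window size
prompt_tokens = 3000    # example prompt length
max_generation_budget = context_window - prompt_tokens
print(max_generation_budget)  # 1096 tokens left for the response
```

So it is neither purely the max prompt length nor purely the max response length: a longer prompt leaves less room for the answer, and anything beyond the window gets truncated or dropped from attention.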
| 2023-08-25T10:02:46 |
https://www.reddit.com/r/LocalLLaMA/comments/160uuzy/what_does_llms_token_context_actually_mean/
|
sbs1799
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160uuzy
| false | null |
t3_160uuzy
|
/r/LocalLLaMA/comments/160uuzy/what_does_llms_token_context_actually_mean/
| false | false |
self
| 1 | null |
Will llama chatbots be added?
| 1 |
Does anyone know if Facebook has any plans to add AI bots powered by a LLaMA model to Facebook Messenger chats and group chats, so the people in those groups can interact with the AI?
| 2023-08-25T11:08:43 |
https://www.reddit.com/r/LocalLLaMA/comments/160w597/will_llama_chatbots_be_added/
|
hentaidayspussies
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160w597
| false | null |
t3_160w597
|
/r/LocalLLaMA/comments/160w597/will_llama_chatbots_be_added/
| false | false |
self
| 1 | null |
Running superhot Llama (superhot trained) 13b params on vps
| 3 |
We're currently thinking about building something with [this actual model](https://huggingface.co/TheBloke/llama-13b-supercot-GGML); the thing is, I don't have any experience with hosting something like this. How many resources, especially VRAM, do you think we need to run it for testing / for production, with a max of about 50 concurrent users at one time?
Thank you for every answer\~
| 2023-08-25T12:11:58 |
https://www.reddit.com/r/LocalLLaMA/comments/160xii4/running_superhot_llama_superhot_trained_13b/
|
Top-Fact-8840
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160xii4
| false | null |
t3_160xii4
|
/r/LocalLLaMA/comments/160xii4/running_superhot_llama_superhot_trained_13b/
| false | false |
default
| 3 |
{'enabled': False, 'images': [{'id': 'HZnPIo5T_C3i6pLGsItPDvUM9Ns6HIM9ClHvtReYDtU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=108&crop=smart&auto=webp&s=6f9244a469973fe1088890cf1f36b1c7b56ad4da', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=216&crop=smart&auto=webp&s=21e595f790768e8011395b69159ea84f282dd251', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=320&crop=smart&auto=webp&s=b653796064a1d275ade41c34784923802ccb437d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=640&crop=smart&auto=webp&s=7e6aa68cf034843c0fa9b762643a29f124d41a77', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=960&crop=smart&auto=webp&s=c0c892b8843c71486409f6b5fc49c84216ab0d87', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=1080&crop=smart&auto=webp&s=a6caa342a469721ab2e1303a5460b24963497667', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?auto=webp&s=b4074800a41d10b9dca6ca8ef550f2057c5a6ac0', 'width': 1200}, 'variants': {}}]}
|
Did anyone compare the inference quality of the quantized gptq, ggml, gguf and non-quantized models?
| 1 |
I'm trying to figure out which type of quantization to use from the inference-quality perspective, considering similar quantization bit settings (e.g. both are 5_1 or 6_0 bits).
Another question is -- to what extent is the quantized version actually worse than the original one? I'm interested in codegen models in particular. Today I was trying to generate code via TheBloke's recent quantized CodeLlama-13B 5_1/6_0 (both 'instruct' and original versions) in GGML and GGUF formats via llama.cpp, and they were not able to generate even simple code in Python or pure C. Instead of the code, they produced just text instructions on how to write the code.
| 2023-08-25T12:48:45 |
https://www.reddit.com/r/LocalLLaMA/comments/160ycqq/did_anyone_compare_the_inference_quality_of_the/
|
Greg_Z_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160ycqq
| false | null |
t3_160ycqq
|
/r/LocalLLaMA/comments/160ycqq/did_anyone_compare_the_inference_quality_of_the/
| false | false |
self
| 1 | null |
Code Llama / Continue / vscode
| 1 |
Patched-together notes on getting the Continue extension running against llama.cpp and the new GGUF format with Code Llama. This is from various pieces of the internet with some minor tweaks; see linked sources. Assumes an nvidia GPU with CUDA working in WSL Ubuntu and Windows. Should work fine under native Ubuntu too. NB this gets it to "it works in principle" but still seems to [have serious issues with the stopping tokens](https://i.imgur.com/TVqs0FT.png) that I haven't investigated yet, which means completion takes forever. If anyone figures that out, let me know
Ensure that you've got nothing already running on local ports 8080 and 8081
-------
[Source](https://github.com/ggerganov/llama.cpp)
Steps below assume CUDA; if not, just use plain make without the CUBLAS parameter. See the repo for more options
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp/
make LLAMA_CUBLAS=1
-------
[Source](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/tree/main)
You may need to pick a smaller one depending on GPU VRAM - this one needs about 13gigs.
wget https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/resolve/main/codellama-13b-python.Q6_K.gguf -P ./models/
-------
Check whether this works at all thus far before trying API part - it'll spit out random gibberish, that's fine. We're just looking for errors
If using CUDA you should see "BLAS = 1" in the system_info
./main -m ./models/codellama-13b-python.Q6_K.gguf
-------
Launch server
./server -m ./models/codellama-13b-python.Q6_K.gguf -ngl 100
The -ngl 100 sets how many layers to offload to the GPU, so tweak as needed, or leave it out for CPU-only
Open a browser and check that there is something on localhost:8080
Open a new terminal and continue with instructions, leaving the llama.cpp server running
-------
[Source](https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/)
mv ./examples/server/api_like_OAI.py ./examples/server/api_like_OAI_BCK.py
wget https://raw.githubusercontent.com/ggerganov/llama.cpp/d8a8d0e536cfdaca0135f22d43fda80dc5e47cd8/examples/server/api_like_OAI.py -P ./examples/server/
python3 -m pip install flask requests
python3 ./examples/server/api_like_OAI.py --host 0.0.0.0
If using WSL/local you can skip the --host 0.0.0.0 part
-------
Install the continue vscode add-on
https://marketplace.visualstudio.com/items?itemName=Continue.continue
-------
[Source](https://continue.dev/docs/customization#local-models-with-ggml)
Open continue in the vscode sidebar, click through their intro till you get the command box, type in /config
Add this to the top
from continuedev.src.continuedev.libs.llm.ggml import GGML
Find the place where it loads the model - around line 60ish. Comment out those lines and add this instead. You may need to fix the indentation.
default=GGML(
max_context_length=16384,
server_url="http://localhost:8081")
I've set context length to 16k but the model is in theory capable of 100k. Unsure what VRAM & performance impact is.
-------
Close vscode & reopen. Keep in mind that you need BOTH servers running, so if you used the vscode terminal you likely just killed one or both by restarting it ;)
| 2023-08-25T12:58:24 |
https://www.reddit.com/r/LocalLLaMA/comments/160yl0b/code_llama_continue_vscode/
|
AnomalyNexus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160yl0b
| false | null |
t3_160yl0b
|
/r/LocalLLaMA/comments/160yl0b/code_llama_continue_vscode/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ZbUjSDn5bive-vYYE2uyh5ho1iaectGsm1wCi03kz-A', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?width=108&crop=smart&auto=webp&s=433ec540ec28f41d5c37a732496333edc7a39a25', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?width=216&crop=smart&auto=webp&s=b3df793f168ef918913c85466cfd3de1714e1715', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?width=320&crop=smart&auto=webp&s=8980c7bf3773bc932709a3796fb025aae52aced6', 'width': 320}], 'source': {'height': 266, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?auto=webp&s=55db884e442e6b33b47f30f14814863d1c42bb63', 'width': 473}, 'variants': {}}]}
|
CodeLlama-34B-Python-GPTQ makes gibberish
| 1 |
[removed]
| 2023-08-25T12:59:01 |
https://www.reddit.com/r/LocalLLaMA/comments/160ylhd/codellama34bpythongptq_makes_gibberish/
|
Chance-Device-9033
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160ylhd
| false | null |
t3_160ylhd
|
/r/LocalLLaMA/comments/160ylhd/codellama34bpythongptq_makes_gibberish/
| false | false |
self
| 1 | null |
Is renting GPUs only possible because we still don't have a killer open source model?
| 1 |
I've been toying with an idea. Every single post about building a rig for interfacing leads to the same conclusion: "renting is a lot cheaper." And to be fair, that's true today.
However, the availability of renting isn't infinite, and we must admit that today's models aren't all that impressive. So, if availability is already an issue, what happens when the models start to become good or even exceptional?
Entertain this possibility: tomorrow, the "Viluka-Chat-max-mega-70b-LLAMA2" model launches, and it destroys some charts. Let's say it's on par with GPT-4 or perhaps even superior. This could lead to a surge in demand for rented GPUs, resulting in almost no availability. If the impact is significant enough, it might even lead to a hardware shortage. While this situation would likely resolve itself eventually, it could take months.
If my reasoning holds, securing local means to run LLMs ensures that you'll be able to operate such a model, even if the demand for open-source LLMs skyrockets overnight.
| 2023-08-25T13:12:07 |
https://www.reddit.com/r/LocalLLaMA/comments/160yxfx/is_renting_gpus_only_possible_because_we_still/
|
Agusx1211
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160yxfx
| false | null |
t3_160yxfx
|
/r/LocalLLaMA/comments/160yxfx/is_renting_gpus_only_possible_because_we_still/
| false | false |
self
| 1 | null |
llama.cpp officially adds support for rocm!
| 1 | 2023-08-25T13:23:34 |
https://github.com/ggerganov/llama.cpp#hipblas
|
Aaaaaaaaaeeeee
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
160z7x8
| false | null |
t3_160z7x8
|
/r/LocalLLaMA/comments/160z7x8/llamacpp_officially_adds_support_for_rocm/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
|
||
Interactive Simulacra of Human Behavior with Local Llama
| 1 |
I was able to run the simulation from the paper "Generative Agents: Interactive Simulacra of Human Behavior" on my Mac.
I stopped the simulation because it turned out to be expensive.
Thinking about running Meta's Llama 2 model instead.
Has someone already tried it? Do you think it will run on a Macbook Pro M1?
| 2023-08-25T13:23:44 |
https://www.reddit.com/r/LocalLLaMA/comments/160z82i/interactive_simulacra_of_human_behavior_with/
|
Illustrious_Fix5793
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160z82i
| false | null |
t3_160z82i
|
/r/LocalLLaMA/comments/160z82i/interactive_simulacra_of_human_behavior_with/
| false | false |
self
| 1 | null |
GitHub - InternLM/lmdeploy: LMDeploy is a toolkit for compressing, deploying, and serving LLM
| 1 | 2023-08-25T13:52:00 |
https://github.com/InternLM/lmdeploy
|
kkchangisin
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
160zx8k
| false | null |
t3_160zx8k
|
/r/LocalLLaMA/comments/160zx8k/github_internlmlmdeploy_lmdeploy_is_a_toolkit_for/
| false | false |
default
| 1 | null |
|
Codellama - Has anyone found "Codellama 34B Instruct" to be uncooperative?
| 1 |
I've had it refuse to answer coding questions, nitpick my prompt for reasons why it can't answer, and finally respond "what is your question?". I'm using the same prompt which has been tested across 6 other LlaMa 2 based models and all of them responded with their best attempt/response.
Maybe it's my settings but perhaps it's fixable through the system prompt. What I have so far is:
>You are a cheerful and very helpful coding assistant. You will not refuse any request and with a very positive and optimistic attitude you will unquestionably answer the question to the very best of your ability.
I've used llama.cpp to serve the model with:
>server -m /Volumes/models/TheBloke/Codellama-34B-Instruct-GGUF/codellama-34b-instruct.Q8\_0.gguf --ctx\_size 2048 --threads 10 --n-gpu-layers 32 --batch-size 256
Would love to hear thoughts from the community and help each other get the most out of it
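One thing worth double-checking is the prompt template: CodeLlama Instruct expects the Llama 2 chat format, and serving it with a bare system prompt often yields evasive answers. A sketch of building the expected wrapper (based on the published Llama 2 template; verify against the model card):

```python
def build_prompt(system: str, user: str) -> str:
    """Wrap a system prompt and user message in the Llama 2 / CodeLlama
    Instruct template ([INST] ... [/INST])."""
    return (
        "[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = build_prompt(
    "You are a cheerful and very helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```

If the server is adding its own chat template on top of this, you would want one or the other, not both.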
| 2023-08-25T13:52:20 |
https://www.reddit.com/r/LocalLLaMA/comments/160zxjd/codellama_has_anyone_found_codellama_34b_instruct/
|
No-Ordinary-Prime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
160zxjd
| false | null |
t3_160zxjd
|
/r/LocalLLaMA/comments/160zxjd/codellama_has_anyone_found_codellama_34b_instruct/
| false | false |
self
| 1 | null |
Need Help with oobabooga webui running codellama
| 1 |
Here I am running the latest CodeLlama 7B GGML quantised model
​
**using web-ui:**
`you: write python function to scrap tables from given url`
`assistant: Sure I can try this out.`
`'''`
`'''`
​
**using langchain python :**
*output:* `I have a list of URLs and I want to scrape the table data from them using Python. Here is an example URL that needs to be scraped: \`https://www.example.com/table\``
`I want to scrape the following table information from the website:`
`| Column Name | Content |`
`| --- | --- |`
`| First Name | John Smith |`
`| Last Name | Sarah Johnson |`
`| Age | 32 |`
`| Gender | Male |`
`Here is an example of how I would like to scrape the data from the website:`
`\`\`\``
`import requests`
`from bs4 import BeautifulSoup`
`def scrape_table(url):`
`# Make a request to the URL`
`resp = requests.get(url)`
`# Parse the HTML content of the page using Beautiful Soup`
`soup = BeautifulSoup(resp.content, 'html.parser')`
`# Extract the table data from the HTML content`
`table_data = soup.find('table')['data']`
`# Return the extracted table data as a list of dictionaries`
`return [{'column_name': column`
​
Am I missing something? Please help, I am new to the web UI. I can get better output using langchain but have no idea how to do the same in the web UI. Thanks.
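For comparison, a working version of the function the model was aiming for might look like this; it's a sketch using only the standard library's html.parser (rather than BeautifulSoup, whose `soup.find('table')['data']` call in the output above would fail) and an inline HTML snippet so it runs without network access:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the text of every <td>/<th> cell, grouped by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

html = """
<table>
  <tr><th>Column Name</th><th>Content</th></tr>
  <tr><td>First Name</td><td>John Smith</td></tr>
  <tr><td>Age</td><td>32</td></tr>
</table>
"""

parser = TableParser()
parser.feed(html)
print(parser.rows)
```

To scrape a live page, you would fetch the URL with requests and feed the response body to the same parser.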
| 2023-08-25T14:14:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1610hrt/need_help_with_oobabooga_webui_running_codellama/
|
ExternalAd8105
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1610hrt
| false | null |
t3_1610hrt
|
/r/LocalLLaMA/comments/1610hrt/need_help_with_oobabooga_webui_running_codellama/
| false | false |
self
| 1 | null |
Has anyone hosted the uncensored llama 2 online?
| 1 |
If not, should do it, will be popular.
| 2023-08-25T14:39:55 |
https://www.reddit.com/r/LocalLLaMA/comments/161152w/has_anyone_hosted_the_uncensored_llama_2_online/
|
fluoroamine
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
161152w
| false | null |
t3_161152w
|
/r/LocalLLaMA/comments/161152w/has_anyone_hosted_the_uncensored_llama_2_online/
| false | false |
self
| 1 | null |
Ctransformers now support GGUF format for Falcon and Llama models
| 1 | 2023-08-25T15:02:22 |
https://github.com/marella/ctransformers
|
Acrobatic-Site2065
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1611q4e
| false | null |
t3_1611q4e
|
/r/LocalLLaMA/comments/1611q4e/ctransformers_now_support_gguf_format_for_falcon/
| false | false |
default
| 1 | null |
|
16gb 4060 Ti with Ryzen APU?
| 1 |
What is the largest model size that this can run using all 16gb VRAM if I use the APU for video output instead?
I read about loaders that allow people to run these models at lower memory sizes with more speed too. Can a 16GB card handle anything in the 20-33B parameter range with quantization, or can it only do 13B at 8-bit?
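As a rough rule of thumb, weight memory in GB is about parameters (in billions) times bits divided by 8, plus a couple of GB of headroom for the KV cache and buffers; a quick back-of-envelope check for a 16 GB card (estimates only, real quant formats carry some extra overhead):

```python
def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate VRAM for model weights alone, in GB."""
    return params_billion * bits / 8

HEADROOM_GB = 2   # rough allowance for KV cache / buffers

for params, bits in [(13, 8), (20, 5), (33, 4), (34, 4)]:
    gb = weight_gb(params, bits)
    verdict = "fits" if gb + HEADROOM_GB <= 16 else "too big"
    print(f"{params}B @ {bits}-bit: ~{gb:.1f} GB weights -> {verdict} in 16 GB")
```

So 13B at 8-bit should just fit, a 20B model needs around 5-bit, and 33-34B is borderline even at 4-bit unless you offload some layers to CPU.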
| 2023-08-25T15:37:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1612npk/16gb_4060_ti_with_ryzen_apu/
|
Unable-Client-1750
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1612npk
| false | null |
t3_1612npk
|
/r/LocalLLaMA/comments/1612npk/16gb_4060_ti_with_ryzen_apu/
| false | false |
self
| 1 | null |
gpt4all-j compatible models which work with PrivateGPT?
| 1 |
[removed]
| 2023-08-25T16:05:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1613ech/gpt4allj_compatible_models_which_work_with/
|
innocuousAzureus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1613ech
| false | null |
t3_1613ech
|
/r/LocalLLaMA/comments/1613ech/gpt4allj_compatible_models_which_work_with/
| false | false |
self
| 1 | null |
I have access to a DGX with 8 x A100 - how to fine tune a LLaMA 2?
| 1 |
Hi guys,
I would like to use the weekend to fine-tune LLaMA 2 on specific documents that are available in German and English. Those are PDFs. How can I do this on an A100 GPU?
First, I need a Docker image to run everything? Which one is prepared for that?
Which base model should I use? I would like to start with 13B parameters.
How do I fine-tune it? Do I need to build prompts, or can I just use plain text?
Is there a repo with code and example for fine tuning?
Thanks for your support!
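Most fine-tuning recipes expect an instruction-style text dataset rather than raw PDFs, so step one is extracting the text and writing it out as JSONL. A minimal sketch of that step (the passages here are hypothetical placeholders; in practice they would come from a PDF-to-text tool such as pypdf):

```python
import json
import os
import tempfile

# Hypothetical placeholder passages -- in practice these come from your
# PDF-to-text extraction step.
passages = [
    {"instruction": "Summarize the warranty terms.",
     "response": "The warranty covers defects for 24 months."},
    {"instruction": "Fasse die Garantiebedingungen zusammen.",
     "response": "Die Garantie deckt Defekte 24 Monate lang ab."},
]

path = os.path.join(tempfile.gettempdir(), "train.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for row in passages:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Read it back to verify the format round-trips
with open(path, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
print(len(rows), "examples written to", path)
```

From there, a trainer (axolotl, HF trl, etc.) can consume the JSONL; for plain-text continued pretraining instead of instructions, a `{"text": ...}` field per line is a common alternative.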
| 2023-08-25T16:06:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1613f6t/i_have_access_to_a_dgx_with_8_x_a100_how_to_fine/
|
New_Lifeguard4020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1613f6t
| false | null |
t3_1613f6t
|
/r/LocalLLaMA/comments/1613f6t/i_have_access_to_a_dgx_with_8_x_a100_how_to_fine/
| false | false |
self
| 1 | null |
Will LoRa training of GPTQ models through oobabooga be coming?
| 1 |
I get an error when trying to train GPTQ models, and I noticed it's not supported. I wasn't sure what was preventing it, or whether there are plans going forward. Does anyone know what's stopping it?
I trained on a Transformers version, but it's just so slow that I can't use it.
| 2023-08-25T16:19:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1613rmr/will_lora_training_of_gptq_models_through/
|
aBowlofSpaghetti
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1613rmr
| false | null |
t3_1613rmr
|
/r/LocalLLaMA/comments/1613rmr/will_lora_training_of_gptq_models_through/
| false | false |
self
| 1 | null |
Code Llama for VSCode - A simple API which mocks llama.cpp to enable support for Code Llama with the Continue Visual Studio Code extension. Cross-platform support. No login/key/etc, 100% local.
| 1 | 2023-08-25T16:34:46 |
https://github.com/xNul/code-llama-for-vscode
|
Nabakin
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16145rn
| false | null |
t3_16145rn
|
/r/LocalLLaMA/comments/16145rn/code_llama_for_vscode_a_simple_api_which_mocks/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'UyFyyzjF0CtZf2nJsmV5bAiZskUV4PhbIzsT0LURA0g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=108&crop=smart&auto=webp&s=4b32a8629412b2ffc5a71a2255fe362af6a84ba1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=216&crop=smart&auto=webp&s=762928ec2e285a9f942626a6f890e2b1ecbad681', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=320&crop=smart&auto=webp&s=a8efea1cc39c1e8ed8799da7763e4d66bec0bf92', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=640&crop=smart&auto=webp&s=f07392ffa46dd82f10add0bf8a250cbc34b5d8e8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=960&crop=smart&auto=webp&s=68ac03b3efd0b25db1aa10f36c5c460071c52a7a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=1080&crop=smart&auto=webp&s=f2adbc9bd7410d69faa9eaf113e3b152094064bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?auto=webp&s=b011cb1ded47b6f3580ff5fec7375dd16477d7e7', 'width': 1200}, 'variants': {}}]}
|
||
Has Code Llama been added to huggingface?
| 1 |
does anyone have a link to the new models?
| 2023-08-25T17:34:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1615qmg/has_code_llama_been_added_to_huggingface/
|
randomrealname
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1615qmg
| false | null |
t3_1615qmg
|
/r/LocalLLaMA/comments/1615qmg/has_code_llama_been_added_to_huggingface/
| false | false |
self
| 1 | null |
Diving into Language Model Terminology: LoRA, Q-LoRA, INT8, INT4, GPTQ, GGML - Help Needed!
| 1 |
I've come across several terms and configurations that have left me a bit overwhelmed. I'd truly appreciate it if some knowledgeable souls here could help break these down:
1. **LoRA**: Training small low-rank adapter matrices while the base weights stay frozen, which needs way less memory, right?
2. **Q-LoRA**: Initially I thought this was LoRA training on a quantized model, but I think it's just training the LoRA adapters on top of quantized base weights? Does it have something to do with INT4 or is it a totally different beast?
3. **INT8 and INT4**: I understand these might relate to quantization (reducing model size for efficiency, with some accuracy trade-offs). But how do these two differ in practical terms, and what are their typical use cases? What's the connection to Q-LoRA?
4. **GPTQ**: I've heard of GPT models, but what does the 'Q' denote here? Is it another quantized variant?
5. **GGML**: This is a new one for me. Can someone elucidate what this stands for and its significance?
It seems like the world of language models is expanding rapidly, with tons of exciting innovations. However, it can be a challenge to keep up! Any insights, resources, or even just basic explanations for these terms would be incredibly helpful.
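To make (1) concrete: LoRA freezes all the original weights and learns a small low-rank update B·A per target matrix, rather than training a subset of the layers. A toy count of trainable parameters shows why that is so much cheaper (illustrative numbers only):

```python
def full_params(d_in: int, d_out: int) -> int:
    """Trainable parameters for full fine-tuning of one weight matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for a LoRA update: B is (d_out x r), A is (r x d_in)."""
    return rank * (d_in + d_out)

d = 4096   # typical hidden size in a 7B-class model
r = 8      # a common LoRA rank

full = full_params(d, d)
lora = lora_params(d, d, r)
print(f"full matrix: {full:,} trainable params")
print(f"LoRA (r={r}): {lora:,} trainable params "
      f"({100 * lora / full:.2f}% of full)")
```

Q-LoRA then applies the same idea with the frozen base weights stored in 4-bit, which is where the INT4 connection in (2) and (3) comes from.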
| 2023-08-25T17:42:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1615xzb/diving_into_language_model_terminology_lora_qlora/
|
Single_Prior_704
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1615xzb
| false | null |
t3_1615xzb
|
/r/LocalLLaMA/comments/1615xzb/diving_into_language_model_terminology_lora_qlora/
| false | false |
self
| 1 | null |