title (string, 1–300) | score (int64, 0–8.54k) | selftext (string, 0–40k) | created (timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, nullable) | url (string, 0–878) | author (string, 3–20) | domain (string, 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646–1.8k, nullable) | name (string, 10) | permalink (string, 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213) | ups (int64, 0–8.54k) | preview (string, 301–5.01k, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The number of people who want ZERO ethics and ZERO morals is too damn high!
| 0 |
This isn't something we should be encouraging.
If you want to sex chat with your AI it shouldn't be able to be programmed to act like a child.
And yet, soooooooo many people are obsessed with having a ZERO morality, ZERO ethics chatbot, "for no reason."
Yeah, sure.
| 2025-05-01T22:55:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcm4mh/the_number_of_people_who_want_zero_ethics_and/
|
Fit-Produce420
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcm4mh
| false | null |
t3_1kcm4mh
|
/r/LocalLLaMA/comments/1kcm4mh/the_number_of_people_who_want_zero_ethics_and/
| false | false |
self
| 0 | null |
We have come a long way
| 1 |
[removed]
| 2025-05-01T22:57:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcm5yz/we_have_come_a_long_way/
|
nderstand2grow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcm5yz
| false | null |
t3_1kcm5yz
|
/r/LocalLLaMA/comments/1kcm5yz/we_have_come_a_long_way/
| false | false |
self
| 1 | null |
We have come a long way since GPT-3 days
| 1 |
[removed]
| 2025-05-01T22:58:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcm6y7/we_have_come_a_long_way_since_gpt3_days/
|
nderstand2grow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcm6y7
| false | null |
t3_1kcm6y7
|
/r/LocalLLaMA/comments/1kcm6y7/we_have_come_a_long_way_since_gpt3_days/
| false | false |
self
| 1 | null |
Career choice: agent application or large model inference infra optimization
| 1 |
[removed]
| 2025-05-01T23:13:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcmigh/career_choice_agent_application_or_large_model/
|
FunJumpy9129
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcmigh
| false | null |
t3_1kcmigh
|
/r/LocalLLaMA/comments/1kcmigh/career_choice_agent_application_or_large_model/
| false | false |
self
| 1 | null |
Google injecting ads into chatbots
| 396 |
I mean, we all knew this was coming.
| 2025-05-01T23:15:41 |
https://www.bloomberg.com/news/articles/2025-04-30/google-places-ads-inside-chatbot-conversations-with-ai-startups?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc0NjExMzM1MywiZXhwIjoxNzQ2NzE4MTUzLCJhcnRpY2xlSWQiOiJTVkswUlBEV1JHRzAwMCIsImJjb25uZWN0SWQiOiIxMEJDQkE5REUzM0U0M0M0ODBBNzNCMjFFQzdGQ0Q2RiJ9.9sPHivqB3WzwT8wcroxvnIM03XFxDcDq4wo4VPP-9Qg
|
InvertedVantage
|
bloomberg.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcmk4n
| false | null |
t3_1kcmk4n
|
/r/LocalLLaMA/comments/1kcmk4n/google_injecting_ads_into_chatbots/
| false | false | 396 |
{preview thumbnail metadata omitted}
|
|
I Made a Privacy Tool to Automate Text Replacement in the Clipboard (Sensitive Data, API Keys, Credentials)
| 12 |
I often find myself copying text, then pasting it into Notepad just to manually clean it up – removing usernames from logs, redacting API keys from config snippets, or deleting personal info – before actually pasting it into LLMs, and it felt ripe for automation.
So, I built **Clipboard Regex Replace**, an open-source Go tool that sits in your system tray. You define regex rules for things you want to change (like specific usernames, API key formats, or email addresses). When you copy text and press a global hotkey, it automatically applies these rules, replaces the content, updates the clipboard, and pastes the cleaned-up text for you.
It's been a huge time-saver for me, automating the cleanup of logs, safely handling config files, and generally making sure I don't accidentally paste sensitive data into LLMs or other online services. If you also deal with repetitive clipboard cleanup, especially when preparing prompts or context data, you might find it useful too. It supports multiple profiles for different tasks and even shows a diff of the changes.
You can check it out and grab it on GitHub: [github.com/TanaroSch/Clipboard-Regex-Replace-2](http://github.com/TanaroSch/Clipboard-Regex-Replace-2)
I'd love to hear if this resonates with anyone here or if you have feedback!
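The workflow the tool automates (ordered regex substitution over the clipboard text) boils down to a few lines; the rules below are illustrative placeholders, not Clipboard Regex Replace's actual defaults:

```python
import re

# Illustrative redaction rules: (pattern, replacement) pairs.
# These are examples, not the tool's real configuration format.
RULES = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def redact(text: str) -> str:
    """Apply each rule in order, as the hotkey handler would."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text
```

The real tool additionally hooks a global hotkey, rewrites the clipboard, and pastes the result; the substitution core is the part sketched here.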
| 2025-05-01T23:17:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcmltw/i_made_a_privacy_tool_to_automate_text/
|
Tannenbaumxy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcmltw
| false | null |
t3_1kcmltw
|
/r/LocalLLaMA/comments/1kcmltw/i_made_a_privacy_tool_to_automate_text/
| false | false |
self
| 12 |
{preview thumbnail metadata omitted}
|
Meta licensing, how does it work?
| 0 |
I'm a bit unclear on the way the Meta licensing is supposed to work.
To download weights from Meta directly, I need to provide them a vaguely verifiable identity and get sent an email to allow download.
From Hugging Face, for the Meta models in meta-llama, same sort of thing -"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT".
But there are heaps of derived models and GGUFs that are open access with no login. The license looks like it allows that: anyone can rehost a model that they've converted or quantised or whatever?
Q1. What is the point of this? Just so Meta can claim they only release to known entities?
Q2. Is there a canonical set of GGUFs on HF that mirrors Meta?
| 2025-05-01T23:46:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcn6xh/meta_licensing_how_does_it_work/
|
richdrich
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcn6xh
| false | null |
t3_1kcn6xh
|
/r/LocalLLaMA/comments/1kcn6xh/meta_licensing_how_does_it_work/
| false | false |
self
| 0 | null |
Does anyone else get a blank screen when launching LM Studio?
| 2 | 2025-05-01T23:51:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcnair/does_anyone_else_get_a_blank_screen_when/
|
HeirToTheMilkMan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcnair
| false | null |
t3_1kcnair
|
/r/LocalLLaMA/comments/1kcnair/does_anyone_else_get_a_blank_screen_when/
| false | false | 2 | null |
||
Gemma 3 Fine tuning max token length
| 1 |
[removed]
| 2025-05-02T00:00:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcnh7b/gemma_3_fine_tuning_max_token_length/
|
mukhayy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcnh7b
| false | null |
t3_1kcnh7b
|
/r/LocalLLaMA/comments/1kcnh7b/gemma_3_fine_tuning_max_token_length/
| false | false |
self
| 1 | null |
Very slow text generation
| 1 |
[removed]
| 2025-05-02T00:33:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kco4u9/very_slow_text_generation/
|
twittermerda
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kco4u9
| false | null |
t3_1kco4u9
|
/r/LocalLLaMA/comments/1kco4u9/very_slow_text_generation/
| false | false |
self
| 1 | null |
Very slow text generation
| 1 |
[removed]
| 2025-05-02T00:39:48 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kco9hq
| false | null |
t3_1kco9hq
|
/r/LocalLLaMA/comments/1kco9hq/very_slow_text_generation/
| false | false |
default
| 1 | null |
||
Very slow text generation
| 1 |
[removed]
| 2025-05-02T00:47:42 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcoeyi
| false | null |
t3_1kcoeyi
|
/r/LocalLLaMA/comments/1kcoeyi/very_slow_text_generation/
| false | false |
default
| 1 | null |
||
Very slow text generation
| 1 |
Hi, I'm new to this stuff. I've started trying out local models, but so far generation has been very slow: I get only ~3 tok/s at best.
This is my system: Ryzen 5 2600, RX 9700 XT with 16GB VRAM, 48GB DDR4-2400 RAM.
So far I've tried LM Studio and KoboldCpp to run models, and I've only tried 7B models.
I know about GPU offloading and I didn't forget to do it. However, whether I offload all layers onto my GPU or any other number of them, the tok/s do not increase.
Weirdly enough, generation is faster when I don't offload layers onto my GPU; I get double the performance by not offloading.
I have tried these two settings: keep model in memory and flash attention, but the situation doesn't get any better.
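As a general debugging note for setups like this: LM Studio and KoboldCpp are both llama.cpp-derived, where offload is controlled by a GPU-layers setting, and the startup log is the quickest way to confirm it is taking effect. An illustrative llama.cpp invocation (flag names are llama.cpp's; the model path is a placeholder):

```shell
# -ngl 99 asks to offload all layers; check the load log for a line like
# "offloaded N/M layers to GPU". If N stays 0, the build has no working
# GPU backend (e.g. CPU-only or wrong-architecture build), which would
# explain why changing the layer count has no effect on tok/s.
llama-cli -m ./some-7b-q4_k_m.gguf -ngl 99 -c 4096 -p "Hello"
```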
| 2025-05-02T00:49:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcogf1/very_slow_text_generation/
|
TheRedFurios
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcogf1
| false | null |
t3_1kcogf1
|
/r/LocalLLaMA/comments/1kcogf1/very_slow_text_generation/
| false | false |
self
| 1 | null |
ubergarm/Qwen3-30B-A3B-GGUF 1600 tok/sec PP, 105 tok/sec TG on 3090TI FE 24GB VRAM
| 226 |
Got another exclusive [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) `IQ4_K` quant: 17.679 GiB (4.974 BPW), with great quality benchmarks while remaining very performant for full GPU offload with over 32k context `f16` KV-Cache. Or you can offload some layers to CPU for less VRAM, etc., as described in the model card.
I'm impressed with both the quality and the speed of this model for running locally. Great job Qwen on these new MoEs in perfect sizes for quality quants at home!
Hope to write up and release my Perplexity, KL-Divergence, and other benchmarks soon!™ Benchmarking these quants is challenging, and we have some good competition going, with myself using ik's SotA quants, unsloth with their new "Unsloth Dynamic v2.0" discussions, and bartowski's evolving imatrix and quantization strategies as well! (Also, I'm a big fan of team mradermacher!)
It's a good time to be a `r/LocalLLaMA`ic!!! Now just waiting for R2 to drop! xD
_benchmark graphs in comment below_
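On the Perplexity/KL-Divergence benchmarks mentioned: KL divergence here compares a quantized model's next-token distribution against the full-precision model's. A minimal sketch of the metric itself (my own illustration, not ik_llama.cpp's implementation):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats between two discrete distributions over the same
    vocabulary, e.g. the full-precision and quantized models' next-token
    probabilities. Zero-probability terms in P contribute nothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Averaged over many token positions, lower values mean the quant tracks the original model more closely.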
| 2025-05-02T01:22:30 |
https://huggingface.co/ubergarm/Qwen3-30B-A3B-GGUF
|
VoidAlchemy
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcp34g
| false | null |
t3_1kcp34g
|
/r/LocalLLaMA/comments/1kcp34g/ubergarmqwen330ba3bgguf_1600_toksec_pp_105_toksec/
| false | false | 226 |
{preview thumbnail metadata omitted}
|
|
Gpt 4o-mini vs models
| 1 |
Which Qwen3 model size is comparable to GPT-4o mini?
In terms of not being stupid
| 2025-05-02T02:43:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcqlr3/gpt_4omini_vs_models/
|
Osama_Saba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcqlr3
| false | null |
t3_1kcqlr3
|
/r/LocalLLaMA/comments/1kcqlr3/gpt_4omini_vs_models/
| false | false |
self
| 1 | null |
Best AI model for mobile devices
| 1 |
Looking for a super small LLM chat model; I'm working on a real-time ear assistant for communication
| 2025-05-02T03:00:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcqxdx/best_ai_model_for_mobile_devices/
|
VastMaximum4282
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcqxdx
| false | null |
t3_1kcqxdx
|
/r/LocalLLaMA/comments/1kcqxdx/best_ai_model_for_mobile_devices/
| false | false |
self
| 1 | null |
phi4-mini-reasoning response for "hi" , followed by "ok you are so fast" - 15KB of tokens - on MacBook Pro M4
| 0 |
Hi,
Just installed phi4-mini-reasoning on Ollama and said hi. It output almost 15KB (didn't count total tokens; that is just the file size on my Mac) of text in "think" tags, with an answer:
"The problem seems to involve determining a specific value based on the provided name or conditions, but after careful consideration and
multiple approaches without a clear mathematical structure presented, it's challenging to derive an exact answer. The process considered
various interpretations such as counting letters, vowels, sums of alphabetical positions, etc., leading to different potential answers
like 14 (total letter count) or 188 (sum of character values). However, due to the lack of a specific problem statement and given that
sometimes placeholder jokes use 42, but that's not responsible here. Given the ambiguity, it's possible there was an error in submitting
the question.
However, since no clear mathematical problem is provided, I must conclude with: \\boxed{0}
====Below is a summary of the overall thought process of phi4-mini-reasoning, by GPT-4o====
Here’s a **tweet-length version** followed by a slightly longer **blog-style version** for posting:
# 🐦 Tweet Version:
Ever wonder what a *small AI model* thinks before replying to “hi”?
It goes like this:
1. 🤔 “Is this a test or just casual?”
2. 🧠 “Wait, I was told to solve math problems…”
3. 🧩 “No problem found. Prompt them politely.”
Then replies:
>
Even simple inputs trigger deep paths. 🧵👇
# 📝 Blog-style Post or Reddit Longform Version:
🔍 **What Does a Small AI Model Actually** ***Think*** **Before Replying?**
Let’s look at a real example — the user sends:
>
The AI's internal `<think>` process kicks in:
1. “Hmm, I’m an AI math assistant. This seems like a casual greeting.”
2. “But the instruction said: I should solve a math problem, step-by-step.”
3. “Did the user forget to paste the question? Or are they just testing me?”
4. “Best to prompt them gently to submit their question.”
It then replies:
>
Now the user replies:
>
The model thinks again:
1. “Is *this* the problem now?”
2. “Try interpreting it as math? Cipher? Letter sums? Speed puzzle?”
3. “Explore multiple hypotheses (ASCII sums = 188, total letters = 14, etc).”
4. “Nothing solid. Probably no real problem here. Still, I need to reply.”
It finally returns:
>
| 2025-05-02T03:01:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcqxsl/phi4minireasoning_response_for_hi_followed_by_ok/
|
prabhic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcqxsl
| false | null |
t3_1kcqxsl
|
/r/LocalLLaMA/comments/1kcqxsl/phi4minireasoning_response_for_hi_followed_by_ok/
| false | false |
self
| 0 | null |
LLM Maximum Tokens Output
| 1 |
[removed]
| 2025-05-02T03:31:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcrhl1/llm_maximum_tokens_output/
|
shazil_nisar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrhl1
| false | null |
t3_1kcrhl1
|
/r/LocalLLaMA/comments/1kcrhl1/llm_maximum_tokens_output/
| false | false |
self
| 1 | null |
Maximum Tokens Output LLM
| 1 |
[removed]
| 2025-05-02T03:32:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcria0/maximum_tokens_output_llm/
|
shazil_nisar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcria0
| false | null |
t3_1kcria0
|
/r/LocalLLaMA/comments/1kcria0/maximum_tokens_output_llm/
| false | false |
self
| 1 | null |
My first HF model upload: an embedding model that outputs uint8
| 30 |
I made a slightly modified version of snowflake-arctic-embed-m-v2.0. My version outputs a uint8 tensor for the sentence_embedding output instead of the normal FP32 tensor.
This is directly compatible with Qdrant's uint8 datatype for collections, saving disk space and computation time.
https://huggingface.co/0xDEADFED5/snowflake2_m_uint8
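For context on what the uint8 output saves: a generic way to get byte embeddings is per-vector min-max scalar quantization. This plain-Python sketch is my own illustration of the idea (the uploaded model apparently bakes its conversion into the graph, so Qdrant ingests the bytes directly):

```python
def to_uint8(vec):
    """Min-max quantize a float vector to bytes, keeping (lo, hi) so the
    vector can be approximately reconstructed."""
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) or 1.0  # avoid div-by-zero for constant vectors
    return bytes(round((v - lo) / scale * 255) for v in vec), lo, hi

def from_uint8(q, lo, hi):
    """Dequantize back to floats (lossy: one of 256 levels per dimension)."""
    scale = (hi - lo) or 1.0
    return [lo + (b / 255) * scale for b in q]
```

Each dimension drops from 4 bytes (FP32) to 1 byte, at the cost of a small, bounded rounding error per component.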
| 2025-05-02T03:35:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcrk1o/my_first_hf_model_upload_an_embedding_model_that/
|
terminoid_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrk1o
| false | null |
t3_1kcrk1o
|
/r/LocalLLaMA/comments/1kcrk1o/my_first_hf_model_upload_an_embedding_model_that/
| false | false |
self
| 30 |
{preview thumbnail metadata omitted}
|
Best way to finetune smaller Qwen3 models
| 15 |
What is the best framework/method to finetune the newest Qwen3 models? I'm seeing that people are running into issues during inference, such as bad outputs, maybe due to the model being very new. Anyone have a successful recipe yet? Much appreciated.
| 2025-05-02T03:36:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcrksw/best_way_to_finetune_smaller_qwen3_models/
|
gamesntech
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrksw
| false | null |
t3_1kcrksw
|
/r/LocalLLaMA/comments/1kcrksw/best_way_to_finetune_smaller_qwen3_models/
| false | false |
self
| 15 | null |
**vision** support for Mistral Small 3.1 merged into llama.cpp
| 140 | 2025-05-02T03:40:40 |
https://github.com/ggml-org/llama.cpp/pull/13231
|
jacek2023
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrnev
| false | null |
t3_1kcrnev
|
/r/LocalLLaMA/comments/1kcrnev/vision_support_for_mistral_small_31_merged_into/
| false | false |
default
| 140 | null |
|
LLM Training for Coding : All making the same mistake
| 65 |
OpenAI, Gemini, Claude, Deepseek, Qwen, Llama... Local or API, are all making the same **major** mistake, or to put it more fairly, are all in need of this one **major** improvement.
Models need to be trained to be much more aware of the difference between the current date and the date of their own knowledge cutoff.
These models should be acutely aware that the code libraries they were trained with are ***very*** possibly outdated. They should be trained to, instead of confidently jumping into making code edits based on what they "know", hesitate for a moment to consider the fact that a lot can change in a period of 10-14 months, and if a web search tool is available, verifying the current and up-to-date syntax for the code library being used is ***always*** the best practice.
I know that prompting can (sort of) take care of this. And I know that MCPs are popping up, like Context7, for this very purpose. But model providers, imo, need to start taking this into consideration in the way they train models.
No single improvement to training that I can think of would reduce the overall number of errors made by LLMs when coding more than this very simple concept.
| 2025-05-02T03:57:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcrxmr/llm_training_for_coding_all_making_the_same/
|
RedZero76
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrxmr
| false | null |
t3_1kcrxmr
|
/r/LocalLLaMA/comments/1kcrxmr/llm_training_for_coding_all_making_the_same/
| false | false |
self
| 65 | null |
"Supports a context length of up to 131,072 tokens with YaRN (default 32k)"
| 0 |
I am having trouble figuring out what this YaRN is. I typically use LM Studio. How do I enable YaRN?
I have run "npm install --global yarn", but how do I integrate it with LM Studio?
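For anyone else confused: YaRN here is a RoPE context-extension method for the model itself, unrelated to the JavaScript package manager that npm installs. In llama.cpp-based runtimes it is enabled with rope-scaling options at model load time; LM Studio exposes similar settings in its model-load UI rather than on a command line. An illustrative llama.cpp invocation (flag names are llama.cpp's; the model path is a placeholder):

```shell
# Enable YaRN scaling from the model's native 32k context up to 128k.
llama-cli -m ./qwen3-8b-q4_k_m.gguf \
  --rope-scaling yarn \
  --yarn-orig-ctx 32768 \
  --ctx-size 131072
```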
| 2025-05-02T04:00:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcrz9e/supports_a_context_length_of_up_to_131072_tokens/
|
LsDmT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrz9e
| false | null |
t3_1kcrz9e
|
/r/LocalLLaMA/comments/1kcrz9e/supports_a_context_length_of_up_to_131072_tokens/
| false | false |
self
| 0 | null |
Qwen 3 8B in GGUF doesn’t want to work for me.
| 1 |
[removed]
| 2025-05-02T04:01:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcrzyv/qwen_3_8b_in_gguf_doesnt_want_to_work_for_me/
|
1982LikeABoss
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcrzyv
| false | null |
t3_1kcrzyv
|
/r/LocalLLaMA/comments/1kcrzyv/qwen_3_8b_in_gguf_doesnt_want_to_work_for_me/
| false | false |
self
| 1 | null |
Does it have memory function
| 0 |
I want help with making a story
| 2025-05-02T04:08:35 |
saltyridd
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcs4hh
| false | null |
t3_1kcs4hh
|
/r/LocalLLaMA/comments/1kcs4hh/does_it_have_memory_function/
| false | false | 0 |
{preview thumbnail metadata omitted}
|
||
Has anyone tested DeepSeek-Prover-V2-7B?
| 8 |
There are some [quants](https://huggingface.co/models?other=base_model%3Aquantized%3Adeepseek-ai%2FDeepSeek-Prover-V2-7B) available; maybe more coming later.
From the modelcard:
> ### Introduction
>
> We introduce DeepSeek-Prover-V2, an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem proving pipeline powered by DeepSeek-V3. The cold-start training procedure begins by prompting DeepSeek-V3 to decompose complex problems into a series of subgoals. The proofs of resolved subgoals are synthesized into a chain-of-thought process, combined with DeepSeek-V3's step-by-step reasoning, to create an initial cold start for reinforcement learning. This process enables us to integrate both informal and formal mathematical reasoning into a unified model.
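For readers unfamiliar with the setting: formal theorem proving in Lean 4 means emitting a machine-checkable proof term or tactic script for a stated goal. A toy example of such a goal and proof (mine, far simpler than what the prover targets):

```lean
-- A toy Lean 4 theorem of the kind a formal prover must discharge;
-- DeepSeek-Prover-V2 targets much harder competition-style goals.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```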
| 2025-05-02T04:10:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcs5yn/someone_has_tested_deepseekproverv27b/
|
Disonantemus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcs5yn
| false | null |
t3_1kcs5yn
|
/r/LocalLLaMA/comments/1kcs5yn/someone_has_tested_deepseekproverv27b/
| false | false |
self
| 8 |
{preview thumbnail metadata omitted}
|
Self hosted ollama missing api? How do I get a local one?
| 0 |
I'm using Proxmox and I've created separate containers for Open WebUI and Ollama, just for learning purposes.
I could have installed Open WebUI with Ollama baked in, but I wanted to go through the process of linking the two together.
I've hit a snag: Open WebUI can verify the connection to Ollama, but I get error 500 when trying to chat, probably because I'm missing an API key.
It looks like there are instructions for getting an API key if I were accessing Ollama from the cloud?
But I don't see an option on local Ollama. Any tips?
| 2025-05-02T04:21:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcsc7s/self_hosted_ollama_missing_api_how_do_i_get_a/
|
TryTurningItOffAgain
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcsc7s
| false | null |
t3_1kcsc7s
|
/r/LocalLLaMA/comments/1kcsc7s/self_hosted_ollama_missing_api_how_do_i_get_a/
| false | false |
self
| 0 | null |
Let's go back to the 2023 technical report for GPT-4 and laugh in Qwen3-235B-A22B
| 1 | 2025-05-02T04:36:05 |
oobabooga4
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcskl9
| false | null |
t3_1kcskl9
|
/r/LocalLLaMA/comments/1kcskl9/lets_go_back_to_the_2023_technical_report_for/
| false | false | 1 |
{preview thumbnail metadata omitted}
|
|||
Need help to Choose a quantized Qwen3 Model
| 1 |
[removed]
| 2025-05-02T05:17:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kct83j/need_help_to_choose_a_quantized_qwen3_model/
|
donnadulcinea
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kct83j
| false | null |
t3_1kct83j
|
/r/LocalLLaMA/comments/1kct83j/need_help_to_choose_a_quantized_qwen3_model/
| false | false | 1 | null |
|
Still some bugs. But don’t sleep on tinyllama
| 0 |
Responses generated by TinyLlama with some prompts and an agent. Project day 14, I think. Still some bugs, but I honestly can't complain.
| 2025-05-02T05:30:15 |
https://www.reddit.com/gallery/1kctewr
|
XDAWONDER
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kctewr
| false | null |
t3_1kctewr
|
/r/LocalLLaMA/comments/1kctewr/still_some_bugs_but_dont_sleep_on_tinyllama/
| false | false | 0 | null |
|
Best Hardware for Qwen3-30B-A3B CPU Inference?
| 1 |
[removed]
| 2025-05-02T05:35:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcthu5/best_hardware_for_qwen330ba3b_cpu_inference/
|
ColdImplement1319
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcthu5
| false | null |
t3_1kcthu5
|
/r/LocalLLaMA/comments/1kcthu5/best_hardware_for_qwen330ba3b_cpu_inference/
| false | false |
self
| 1 | null |
All Hail tiny llama !
| 1 |
[removed]
| 2025-05-02T05:45:00 |
Euphoric-Beat-7610
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kctn2x
| false | null |
t3_1kctn2x
|
/r/LocalLLaMA/comments/1kctn2x/all_hail_tiny_llama/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'CwZu9SgS1CJ7xAmjeAmfO4-VTq8530UU_s0ScVY9qLQ', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/nm7n0vko3bye1.png?width=108&crop=smart&auto=webp&s=507faaf1919a45da2c73261d19d79ca1ef4ce23e', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/nm7n0vko3bye1.png?width=216&crop=smart&auto=webp&s=4839007b26ad9ae05fdb790602b984ff49841811', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/nm7n0vko3bye1.png?width=320&crop=smart&auto=webp&s=4f80e139655416a40528460819f897d170283bdf', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/nm7n0vko3bye1.png?width=640&crop=smart&auto=webp&s=09c007bb0855d8553e388db822275e2263d2e803', 'width': 640}], 'source': {'height': 509, 'url': 'https://preview.redd.it/nm7n0vko3bye1.png?auto=webp&s=1038dc23628709e5c4eb1f25f7c75ceb428312a8', 'width': 931}, 'variants': {}}]}
|
||
All Hail tiny llama !
| 1 |
[removed]
| 2025-05-02T05:53:07 |
Euphoric-Beat-7610
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kctrk2
| false | null |
t3_1kctrk2
|
/r/LocalLLaMA/comments/1kctrk2/all_hail_tiny_llama/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'aa4HWcNQUqbSrjuMI2jc2OefloMDnUXnZJTtckcAoCA', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/wbiems5f5bye1.png?width=108&crop=smart&auto=webp&s=1895673a56b9602fa6540be4dde534091c949729', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/wbiems5f5bye1.png?width=216&crop=smart&auto=webp&s=043e9d7c3a3447372828f07a6cfcd2ae7a458282', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/wbiems5f5bye1.png?width=320&crop=smart&auto=webp&s=80d204656e0e9ddfa43fa5f1da2189afe9216533', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/wbiems5f5bye1.png?width=640&crop=smart&auto=webp&s=9b0cfd0580f74a9ed88c19092fceb8b15fc64535', 'width': 640}], 'source': {'height': 509, 'url': 'https://preview.redd.it/wbiems5f5bye1.png?auto=webp&s=3aabb3090262cb546cd04edc28ccda066f2dc7e4', 'width': 931}, 'variants': {}}]}
|
||
How are you using LLMs for knowledge?
| 17 |
I'm curious how people are using local LLMs for acquiring knowledge.
Given that they hallucinate, and that local models are even more compressed than the ones online... are you using them to understand or learn things?
What is your workflow?
How are you ensuring you aren't learning nonsense?
How is the ability to chat with an LLM changing how you learn or engage with information?
What is it making easy for you that was hard previously?
Is there anything you are worried about?
| 2025-05-02T06:52:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcumdq/how_are_you_using_llms_for_knowledge/
|
Due-Competition4564
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcumdq
| false | null |
t3_1kcumdq
|
/r/LocalLLaMA/comments/1kcumdq/how_are_you_using_llms_for_knowledge/
| false | false |
self
| 17 | null |
Is it possible to nudge a model to more wanted answers if it gets 95+% correct by using very few examples?
| 0 |
Basically I have a task which a base qwen3 gets right for something like 95+% of cases.
Now I was wondering is it possible to just take the last 5% correct those and finetune the model for something like 60 to 200 steps to get better results without really impacting the current good results?
The use case is that I have 4 million records / (basically same) q&a of varying quality, but if I run my question over like a 1000 lines of new data which can then be manually checked I receive on a base qwen3 a 95+%.
In the past I have tried finetuning 3 epochs on 4 million records, but it only resulted in overfitting and memorisation.
I am able to manually check the daily new influx, and I was thinking if I add the correct answers as well then I get at the same end-result as with the 4 million records over time.
But if I just add a smaller selection (just the 5% error which are manually corrected) and just run a few steps with something like unsloth will I just nudge the model more towards 100% or will I still change the complete model and so also hurt my current 95%
| 2025-05-02T06:54:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcuncv/is_it_possible_to_nudge_a_model_to_more_wanted/
|
Former-Ad-5757
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcuncv
| false | null |
t3_1kcuncv
|
/r/LocalLLaMA/comments/1kcuncv/is_it_possible_to_nudge_a_model_to_more_wanted/
| false | false |
self
| 0 | null |
A random tip for quality conversations
| 44 |
Whether I'm skillmaxxin or just trying to learn something I found that adding a special instruction, made my life so much better:
"After every answer provide 3 enumerated ways to continue the conversations or possible questions I might have."
I basically find myself just typing 1, 2, 3 to continue conversations in ways I might have never thought of, or often, questions that I would reasonably have.
| 2025-05-02T07:00:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcuqn9/a_random_tip_for_quality_conversations/
|
shaman-warrior
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcuqn9
| false | null |
t3_1kcuqn9
|
/r/LocalLLaMA/comments/1kcuqn9/a_random_tip_for_quality_conversations/
| false | false |
self
| 44 | null |
My local ai just told me to go kill myself
| 0 | 2025-05-02T07:08:49 |
hummingbird1346
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcuv0v
| false | null |
t3_1kcuv0v
|
/r/LocalLLaMA/comments/1kcuv0v/my_local_ai_just_told_me_to_go_kill_myself/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'eszfN5n8Rs44YcOTWDLyC9PD4S1_q3zgBXfufDt67ng', 'resolutions': [{'height': 185, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?width=108&crop=smart&auto=webp&s=c062fe0edfd16cae828eca69a56a49a71ed56fdb', 'width': 108}, {'height': 370, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?width=216&crop=smart&auto=webp&s=9f5504884f7d331c637aeb4745e7ce501652880e', 'width': 216}, {'height': 548, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?width=320&crop=smart&auto=webp&s=872e243e949cb09732cb22d9742dc6ba421c9695', 'width': 320}, {'height': 1096, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?width=640&crop=smart&auto=webp&s=76480551503a15daf8f83f676d4de87bc9d8b92b', 'width': 640}, {'height': 1644, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?width=960&crop=smart&auto=webp&s=6346c65749e373eebfbdcffc1ad0a0226f9dc97a', 'width': 960}, {'height': 1850, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?width=1080&crop=smart&auto=webp&s=a6dbe525d01714eb3bf1811e1e92866ca6607a7e', 'width': 1080}], 'source': {'height': 2003, 'url': 'https://preview.redd.it/botkl7w3jbye1.jpeg?auto=webp&s=23e514aac710a5e315a535bbb6e370584eeba58c', 'width': 1169}, 'variants': {}}]}
|
|||
I built an open-source, visual deep research for your private docs
| 1 |
Hey r/LocalLLaMA!
I'm one of the founders of Morphik - an open source RAG that works especially well with visually rich docs.
We wanted to extend our system to be able to confidently answer multi-hop queries: the type where some text in a page points you to a diagram in a different one.
The easiest way to approach this, to us, was to build an agent. So that's what we did.
We didn't realize that it would do a lot more. With some more prompt tuning, we were able to get a really cool deep-research agent in place.
Get started here: [https://morphik.ai](https://morphik.ai/)
Here's our git if you'd like to check it out: [https://github.com/morphik-org/morphik-core](https://github.com/morphik-org/morphik-core)
| 2025-05-02T07:32:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcv6cv/i_built_an_opensource_visual_deep_research_for/
|
Advanced_Army4706
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcv6cv
| false | null |
t3_1kcv6cv
|
/r/LocalLLaMA/comments/1kcv6cv/i_built_an_opensource_visual_deep_research_for/
| false | false |
self
| 1 | null |
Custom PC for AI/Simulation
| 1 |
[removed]
| 2025-05-02T07:50:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcves0/custom_pc_for_aisimulation/
|
Sharp-Huckleberry862
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcves0
| false | null |
t3_1kcves0
|
/r/LocalLLaMA/comments/1kcves0/custom_pc_for_aisimulation/
| false | false |
self
| 1 | null |
Qwen3 30b/32b - q4/q8/fp16 - gguf/mlx - M4max128gb
| 47 | 2025-05-02T08:19:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcvsy9/qwen3_30b32b_q4q8fp16_ggufmlx_m4max128gb/
|
Komarov_d
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcvsy9
| false | null |
t3_1kcvsy9
|
/r/LocalLLaMA/comments/1kcvsy9/qwen3_30b32b_q4q8fp16_ggufmlx_m4max128gb/
| false | false | 47 | null |
||
AI Founder? We provide a platform for AI Projects to work with AI Testers in the wild
| 1 |
[removed]
| 2025-05-02T08:40:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcw39s/ai_founder_we_provide_a_platform_for_ai_projects/
|
PointlessAIX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcw39s
| false | null |
t3_1kcw39s
|
/r/LocalLLaMA/comments/1kcw39s/ai_founder_we_provide_a_platform_for_ai_projects/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'fhXzyrRi7qvWZqJA9_6hek0jaHKhigWpg8t3A4tQS7s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hbFWQzBBUBTDOdWicJz0K5LRLekuK318KYrVGzcO4gA.jpg?width=108&crop=smart&auto=webp&s=855e9f690a00ba129eeb5b0e19bac83a7a13b229', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hbFWQzBBUBTDOdWicJz0K5LRLekuK318KYrVGzcO4gA.jpg?width=216&crop=smart&auto=webp&s=32bd20a9d2f585ed3ba52371b99b795b68052155', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hbFWQzBBUBTDOdWicJz0K5LRLekuK318KYrVGzcO4gA.jpg?width=320&crop=smart&auto=webp&s=b720cd2985759fb888396fb193580b766086e895', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hbFWQzBBUBTDOdWicJz0K5LRLekuK318KYrVGzcO4gA.jpg?width=640&crop=smart&auto=webp&s=a4db88f7ef3b74c26935c9367f5e3d205ac5536c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/hbFWQzBBUBTDOdWicJz0K5LRLekuK318KYrVGzcO4gA.jpg?width=960&crop=smart&auto=webp&s=dba7ab6fa527b56beabe3de47da7789ab1ca7d87', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/hbFWQzBBUBTDOdWicJz0K5LRLekuK318KYrVGzcO4gA.jpg?auto=webp&s=beab3b2d2a8041680ec60953e6a28f39e782d8ca', 'width': 1024}, 'variants': {}}]}
|
Qwen3 30b a3b moe speed on RTX5080?
| 1 |
Hi I've been trying a3b moe with Q4_K_M gguf, on both lm studio and llama.cpp server (latest cuda docker image). On lm studio I'm getting about 15t/s, and 25t/s on llama.cpp with tweaked parameters. Is this normal? Any way to make it run faster?
Also I noticed offloading all layers to GPU is slower than 75% layers on GPU
| 2025-05-02T08:53:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcw9gh/qwen3_30b_a3b_moe_speed_on_rtx5080/
|
teamclouday
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcw9gh
| false | null |
t3_1kcw9gh
|
/r/LocalLLaMA/comments/1kcw9gh/qwen3_30b_a3b_moe_speed_on_rtx5080/
| false | false |
self
| 1 | null |
Anyone had any success doing real time image processing with local LLM?
| 11 |
I tried a few image LLMs like Grounding DINO, but none of these can achieve a reliable 60fps or even 30fps like a pretrained model such as YOLO does. My input images are at 1k resolution. Has anyone tried similar things?
| 2025-05-02T09:17:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcwlci/anyone_had_any_success_doing_real_time_image/
|
GregView
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcwlci
| false | null |
t3_1kcwlci
|
/r/LocalLLaMA/comments/1kcwlci/anyone_had_any_success_doing_real_time_image/
| false | false |
self
| 11 | null |
Underperforming Qwen3-32b-Q4_K_M?
| 3 |
I've been trying to use self-hosted Qwen3-32b via ollama with different code agent technologies like cline, roo code and codex. One thing I've experienced myself is that, compared to the free one served on openrouter (which is in FP16), it struggles far more with proper tool calling.
Qualitatively, I find the performance discrepancy to be more noticeable than with other
Q4\_K\_M variants of models I've compared prior to this. Does anyone have a similar experience?
| 2025-05-02T09:34:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcwtme/underperforming_qwen332bq4_k_m/
|
k_means_clusterfuck
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcwtme
| false | null |
t3_1kcwtme
|
/r/LocalLLaMA/comments/1kcwtme/underperforming_qwen332bq4_k_m/
| false | false |
self
| 3 | null |
Trying to make offline AI work for my business, but I hit a wall...
| 1 |
[removed]
| 2025-05-02T09:40:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcwwml/trying_to_make_offline_ai_work_for_my_business/
|
LuckBuff
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcwwml
| false | null |
t3_1kcwwml
|
/r/LocalLLaMA/comments/1kcwwml/trying_to_make_offline_ai_work_for_my_business/
| false | false |
self
| 1 | null |
Yea keep "cooking"
| 1,165 | 2025-05-02T09:41:33 |
freehuntx
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcwx8e
| false | null |
t3_1kcwx8e
|
/r/LocalLLaMA/comments/1kcwx8e/yea_keep_cooking/
| false | false | 1,165 |
{'enabled': True, 'images': [{'id': 'du31rehMlZ2Q0MiROW-zxFCDbtX68grUY1o1KE7dy9s', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/y007y359acye1.png?width=108&crop=smart&auto=webp&s=496aa8517fd001316d229027a4aee85f61419eee', 'width': 108}, {'height': 285, 'url': 'https://preview.redd.it/y007y359acye1.png?width=216&crop=smart&auto=webp&s=322c939609157c230d375df4121cba48f81c6a49', 'width': 216}, {'height': 422, 'url': 'https://preview.redd.it/y007y359acye1.png?width=320&crop=smart&auto=webp&s=ac9c0604f08f64cd98b9faa0f83a609854942c79', 'width': 320}], 'source': {'height': 802, 'url': 'https://preview.redd.it/y007y359acye1.png?auto=webp&s=8d1a5e458682d910d3f13f1d1bb4e7e521c88269', 'width': 607}, 'variants': {}}]}
|
|||
Trying to make offline AI work for my business, but I hit a wall...
| 1 |
[removed]
| 2025-05-02T09:56:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcx4kb/trying_to_make_offline_ai_work_for_my_business/
|
LuckBuff
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcx4kb
| false | null |
t3_1kcx4kb
|
/r/LocalLLaMA/comments/1kcx4kb/trying_to_make_offline_ai_work_for_my_business/
| false | false |
self
| 1 | null |
Phi4 vs qwen3
| 1 |
According to y’all, which is a better reasoning model ? Phi4 reasoning or Qwen 3 (all sizes) ?
| 2025-05-02T10:30:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcxmrw/phi4_vs_qwen3/
|
Basic-Pay-9535
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcxmrw
| false | null |
t3_1kcxmrw
|
/r/LocalLLaMA/comments/1kcxmrw/phi4_vs_qwen3/
| false | false |
self
| 1 | null |
Best reasoning models to create and finetune ?
| 1 |
I have a dataset with inputs and outputs that I want to use for fine-tuning. But I want to fine-tune a REASONING model, and I do not have the thinking tokens. So which model would you recommend for creating the thinking part of the dataset, and which reasoning model
should I finetune? Do not consider the limitations of infra.
| 2025-05-02T10:40:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcxsbb/best_reasoning_models_to_create_and_finetune/
|
Basic-Pay-9535
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcxsbb
| false | null |
t3_1kcxsbb
|
/r/LocalLLaMA/comments/1kcxsbb/best_reasoning_models_to_create_and_finetune/
| false | false |
self
| 1 | null |
Is there any local model for generating viral and addictive reels
| 0 |
I know it's very compute-heavy to run image or video generation models locally.
I have 16 GB of RAM on my M4 machine.
Is there some AI that could do this locally?
| 2025-05-02T10:44:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcxuvi/is_there_any_local_model_for_generating_viral_and/
|
TheLogiqueViper
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcxuvi
| false | null |
t3_1kcxuvi
|
/r/LocalLLaMA/comments/1kcxuvi/is_there_any_local_model_for_generating_viral_and/
| false | false |
self
| 0 | null |
Bought 3090, need emotional support
| 1 |
[removed]
| 2025-05-02T10:51:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcxylb/bought_3090_need_emotional_support/
|
CycleSad7591
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcxylb
| false | null |
t3_1kcxylb
|
/r/LocalLLaMA/comments/1kcxylb/bought_3090_need_emotional_support/
| false | false |
self
| 1 | null |
Qwen3 gets confused when query isn't in english
| 1 |
[removed]
| 2025-05-02T11:45:48 |
Big_Armadillo_2896
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcywru
| false | null |
t3_1kcywru
|
/r/LocalLLaMA/comments/1kcywru/qwen3_gets_confused_when_query_isnt_in_english/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'X4RGf_hVllGK-FOTktTroVog7QH_S2mgsGadmFbisWg', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/njwqepruvcye1.png?width=108&crop=smart&auto=webp&s=e3657ee56c26a27566653e37c2f34a75a2224457', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/njwqepruvcye1.png?width=216&crop=smart&auto=webp&s=94d47417109163c1c455eaba8c53176150aa2332', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/njwqepruvcye1.png?width=320&crop=smart&auto=webp&s=73a81af26f20591de98b9a49313ba6c6b51bacd4', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/njwqepruvcye1.png?width=640&crop=smart&auto=webp&s=9d4ebbadb30ae50bee06789214b5c86d3dc86915', 'width': 640}, {'height': 506, 'url': 'https://preview.redd.it/njwqepruvcye1.png?width=960&crop=smart&auto=webp&s=44b466753c6c85ffbf6091eb7658c52092268efa', 'width': 960}, {'height': 569, 'url': 'https://preview.redd.it/njwqepruvcye1.png?width=1080&crop=smart&auto=webp&s=6d86811f420f70c490d3b95fc93198418246ff5f', 'width': 1080}], 'source': {'height': 1155, 'url': 'https://preview.redd.it/njwqepruvcye1.png?auto=webp&s=35a6e065bd51d223c61d80a998644130064841c5', 'width': 2189}, 'variants': {}}]}
|
||
Looking for a way to search or ask a large codebase
| 1 |
[removed]
| 2025-05-02T11:51:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kcz0ob/looking_for_a_way_to_search_or_ask_a_large/
|
HamadaFromEgypt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kcz0ob
| false | null |
t3_1kcz0ob
|
/r/LocalLLaMA/comments/1kcz0ob/looking_for_a_way_to_search_or_ask_a_large/
| false | false |
self
| 1 | null |
Phi-4 vs. Mistral vs. LLaMA 3 vs. Gemma — Thoughts After Testing the Smartest Small Models of 2025
| 1 |
[removed]
| 2025-05-02T12:31:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kczsaa/phi4_vs_mistral_vs_llama_3_vs_gemma_thoughts/
|
opsbydesign
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kczsaa
| false | null |
t3_1kczsaa
|
/r/LocalLLaMA/comments/1kczsaa/phi4_vs_mistral_vs_llama_3_vs_gemma_thoughts/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Eo74Ewc-7SD7ynwnNHK1r3SmOz_N1u0ziwAIbpJuPcE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?width=108&crop=smart&auto=webp&s=36b66bc27eb6426a54659d65b69c1e2a7210464a', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?width=216&crop=smart&auto=webp&s=434dfe3c7949e8474b64b013d53ab55a7bb09b4a', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?width=320&crop=smart&auto=webp&s=8b03638fb617f13b6af5484e9e3e83a781c25a62', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?width=640&crop=smart&auto=webp&s=6a79a8bdb674d3390a7a3a2d0424677a3a4e3097', 'width': 640}, {'height': 535, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?width=960&crop=smart&auto=webp&s=69386d656995d93dccabb032ad17e8ea1ba62bfc', 'width': 960}, {'height': 602, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?width=1080&crop=smart&auto=webp&s=92c18a5e0cfe1c83b2965b887b27915572eb12f4', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/SiC8OHK2JXl9kdl589VHeDG1XeX_ayC9Bfa48ISSTN8.jpg?auto=webp&s=56aac00988aaa3e62016dc32b1f44b218d9e45cc', 'width': 1376}, 'variants': {}}]}
|
Phi-4 vs. Mistral vs. LLaMA 3 vs. Gemma — Thoughts After Testing the Smartest Small Models of 2025
| 1 |
[removed]
| 2025-05-02T12:33:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kczu09/phi4_vs_mistral_vs_llama_3_vs_gemma_thoughts/
|
opsbydesign
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kczu09
| false | null |
t3_1kczu09
|
/r/LocalLLaMA/comments/1kczu09/phi4_vs_mistral_vs_llama_3_vs_gemma_thoughts/
| false | false |
self
| 1 | null |
Thoughts on Phi-4 vs. Mistral vs. LLaMA 3?
| 1 |
[removed]
| 2025-05-02T12:35:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kczv0c/thoughts_on_phi4_vs_mistral_vs_llama_3/
|
opsbydesign
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kczv0c
| false | null |
t3_1kczv0c
|
/r/LocalLLaMA/comments/1kczv0c/thoughts_on_phi4_vs_mistral_vs_llama_3/
| false | false |
self
| 1 | null |
Anyone tested Phi-4-Reasoning-Plus vs Mistral or LLaMA 3? Curious how they compare in reasoning-heavy use.
| 1 |
[removed]
| 2025-05-02T12:38:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kczww7/anyone_tested_phi4reasoningplus_vs_mistral_or/
|
opsbydesign
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kczww7
| false | null |
t3_1kczww7
|
/r/LocalLLaMA/comments/1kczww7/anyone_tested_phi4reasoningplus_vs_mistral_or/
| false | false |
self
| 1 | null |
Best settings for Qwen3 30B A3B?
| 9 |
Hey guys, trying out the new Qwen models. Can anyone tell me if this is a good quant (Qwen\_Qwen3-30B-A3B-Q5\_K\_M.gguf from bartowski) for a 3090, and what settings are good? I have Oobabooga and kobold.exe installed/downloaded. Which one is better? Also, how many tokens of context works best? Anything else to keep in mind about this model?
| 2025-05-02T12:52:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd06ze/best_settings_for_qwen3_30b_a3b/
|
Suimeileo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd06ze
| false | null |
t3_1kd06ze
|
/r/LocalLLaMA/comments/1kd06ze/best_settings_for_qwen3_30b_a3b/
| false | false |
self
| 9 | null |
The 4 Things Qwen-3’s Chat Template Teaches Us
| 52 | 2025-05-02T12:54:12 |
https://huggingface.co/blog/qwen-3-chat-template-deep-dive
|
paf1138
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd086t
| false | null |
t3_1kd086t
|
/r/LocalLLaMA/comments/1kd086t/the_4_things_qwen3s_chat_template_teaches_us/
| false | false | 52 |
{'enabled': False, 'images': [{'id': 'arOzbiXdkOeRHycA11BXc7pPRAB_aZJ3VtlTRJfW51E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?width=108&crop=smart&auto=webp&s=827e5e457697bf1d7db202e59d9984214ae4a57e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?width=216&crop=smart&auto=webp&s=9232cf8a8fada67cc3684e9d3290d0410dbad1e4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?width=320&crop=smart&auto=webp&s=1a4e8783db124fa2883a1d32640de0ca66d1b1db', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?width=640&crop=smart&auto=webp&s=8b85df85dd7a72c51abbda6242937e8bca8c97fe', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?width=960&crop=smart&auto=webp&s=cb3e422f53785a3c3f79bc70ff1233f873292371', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?width=1080&crop=smart&auto=webp&s=c277fe840b9d467462b06a01123e1b658affaa68', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/tp6VtfVRZfldT0G_447JyM6zc4l5dUA3OWXIeoIFtUE.jpg?auto=webp&s=1ed0c40981018083b417385b5a81ac5cb7038bf5', 'width': 2750}, 'variants': {}}]}
|
||
Solution for high idle of 3060/3090 series
| 40 |
So some of the Linux users of Ampere (30xx) cards
(https://www.reddit.com/r/LocalLLaMA/comments/1k2fb67/save_13w_of_idle_power_on_your_3090/)
, me including, have probably noticed that the card (3060 in my case) can get stuck in either high idle - 17-20W - or low idle, 10W (irrespective of whether a model is loaded). High idle is bothersome if you have more than one card - they eat energy for no reason and heat up the machine. I found that sleep and wake helps, but only temporarily, for an hour or so before it creeps up again. However, making the machine sleep and wake is annoying or not always possible.
Luckily, I found working solution:
echo 'suspend' > /proc/driver/nvidia/suspend
followed by
echo 'resume' > /proc/driver/nvidia/suspend
immediately fixes the problem: 18W idle -> 10W idle.
Yay, now I can lay off my p104 and buy another 3060!
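For anyone scripting this (e.g. from cron), the two echoes above can be wrapped in a small function — a sketch, assuming the stock proprietary driver layout; the control path is a parameter so the logic can be exercised without root:

```shell
#!/bin/sh
# Cycle the NVIDIA driver's suspend/resume hook to reset a stuck high idle.
nvidia_idle_fix() {
    ctl="${1:-/proc/driver/nvidia/suspend}"
    # Bail out early with a hint if we lack write permission (usually needs root).
    [ -w "$ctl" ] || { echo "cannot write $ctl (need root?)" >&2; return 1; }
    echo suspend > "$ctl"
    sleep 1
    echo resume > "$ctl"
}
```

Call it as root with no arguments on a real system.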
| 2025-05-02T13:00:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd0csu/solution_for_high_idle_of_30603090_series/
|
AppearanceHeavy6724
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd0csu
| false | null |
t3_1kd0csu
|
/r/LocalLLaMA/comments/1kd0csu/solution_for_high_idle_of_30603090_series/
| false | false |
self
| 40 | null |
Zuck says half of llama dev is ai
| 1 |
[removed]
| 2025-05-02T13:08:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd0ir8/zuck_says_half_of_llama_dev_is_ai/
|
Longjumping_Essay498
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd0ir8
| false | null |
t3_1kd0ir8
|
/r/LocalLLaMA/comments/1kd0ir8/zuck_says_half_of_llama_dev_is_ai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'om_gw-Vp74P9Vpdiw2IAYEpXR0Wr56p6dk6QAGs3WVc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?width=108&crop=smart&auto=webp&s=ac1361c6fbf7335739268884747b63bf1a8a1994', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?width=216&crop=smart&auto=webp&s=428b6b6c402331073d719e47301e503f5c2c23da', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?width=320&crop=smart&auto=webp&s=a97a71bf6c64f52d0b654b4cbfbf0301864f0bf2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?auto=webp&s=e676a56e32f67f2c8cbd7bf9b26258179d507c64', 'width': 480}, 'variants': {}}]}
|
Zuck says half of llama dev is ai
| 1 |
[removed]
| 2025-05-02T13:10:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd0kt1/zuck_says_half_of_llama_dev_is_ai/
|
Longjumping_Essay498
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd0kt1
| false | null |
t3_1kd0kt1
|
/r/LocalLLaMA/comments/1kd0kt1/zuck_says_half_of_llama_dev_is_ai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'om_gw-Vp74P9Vpdiw2IAYEpXR0Wr56p6dk6QAGs3WVc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?width=108&crop=smart&auto=webp&s=ac1361c6fbf7335739268884747b63bf1a8a1994', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?width=216&crop=smart&auto=webp&s=428b6b6c402331073d719e47301e503f5c2c23da', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?width=320&crop=smart&auto=webp&s=a97a71bf6c64f52d0b654b4cbfbf0301864f0bf2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/J7Krf3vIshbWjuC0DOkYfj2HOeHJkv5a_Yywo4RCa_s.jpg?auto=webp&s=e676a56e32f67f2c8cbd7bf9b26258179d507c64', 'width': 480}, 'variants': {}}]}
|
Good local model/settings for polishing text?
| 1 |
[removed]
| 2025-05-02T13:20:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd0s7y/good_local_modelsettings_for_polishing_text/
|
TragedyofLight
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd0s7y
| false | null |
t3_1kd0s7y
|
/r/LocalLLaMA/comments/1kd0s7y/good_local_modelsettings_for_polishing_text/
| false | false |
self
| 1 | null |
LLM GPU calculator for inference and fine-tuning requirements
| 460 |
[https://apxml.com/tools/vram-calculator](https://apxml.com/tools/vram-calculator)
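For intuition, here is a back-of-envelope sketch of the kind of arithmetic such a calculator does — weights plus KV cache plus fixed overhead. The formula and default values are illustrative assumptions, not the linked tool's actual model:

```python
def estimate_vram_gb(params_b, bits=4, ctx=8192, layers=32, kv_heads=8,
                     head_dim=128, kv_bits=16, overhead_gb=1.0):
    """Rough inference VRAM estimate in GB.

    params_b: parameter count in billions; bits: weight quantization width.
    KV cache = 2 (K and V) * layers * context length * kv_heads * head_dim.
    """
    weights_bytes = params_b * 1e9 * bits / 8
    kv_bytes = 2 * layers * ctx * kv_heads * head_dim * kv_bits / 8
    return (weights_bytes + kv_bytes) / 1e9 + overhead_gb
```

E.g. a 7B model at Q4 with these assumed defaults lands around 5-6 GB.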
| 2025-05-02T13:23:00 |
https://v.redd.it/sm6m5gr3ddye1
|
No_Scheme14
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd0ucu
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/sm6m5gr3ddye1/DASHPlaylist.mpd?a=1748784196%2COWM5NmNmMTE1M2FkN2QxODJkMGM1MTkyZmJlYTk0MjgyZGY2Mzc0NDY4NGFiYzIzMDA2YWQyNWE2NGE5M2NiYQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/sm6m5gr3ddye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/sm6m5gr3ddye1/HLSPlaylist.m3u8?a=1748784196%2CYWViYjZmOWUyYTdkYTRkZDJkNTk3NDA1Mjk4ZjUxMjY2OWFjZjk5YTFkYTA3Y2M2MDdkMDgwMzQwMzU3MzA2NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sm6m5gr3ddye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1204}}
|
t3_1kd0ucu
|
/r/LocalLLaMA/comments/1kd0ucu/llm_gpu_calculator_for_inference_and_finetuning/
| false | false | 460 |
{'enabled': False, 'images': [{'id': 'bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?width=108&crop=smart&format=pjpg&auto=webp&s=09dd3ae967a04454c5b25e1f28a6b13e1910582d', 'width': 108}, {'height': 193, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?width=216&crop=smart&format=pjpg&auto=webp&s=fb739d8539d988d3a8fcbc15acc7605c0b4dd920', 'width': 216}, {'height': 286, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?width=320&crop=smart&format=pjpg&auto=webp&s=0d1c49ed59249bd041c0a2f6f93caaf11942a6df', 'width': 320}, {'height': 573, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?width=640&crop=smart&format=pjpg&auto=webp&s=17871f7e8f08ccb425f667602dc78333e21f38d9', 'width': 640}, {'height': 860, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?width=960&crop=smart&format=pjpg&auto=webp&s=5ccf6fdac2698c4ac5b49176790a39563c454464', 'width': 960}, {'height': 968, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c17100661eb5bed301f723a9d82edc4d3acf1a32', 'width': 1080}], 'source': {'height': 2012, 'url': 'https://external-preview.redd.it/bzRudjNocjNkZHllMa9aDj6wzeLpWrMMtqT_U_yHgbhUh_Z7zAQyljZLVhSt.png?format=pjpg&auto=webp&s=4a1153135bfdf138d59ea1083562688c9000c46a', 'width': 2244}, 'variants': {}}]}
|
|
The AI won't destroy us... The AI:
| 1 |
[removed]
| 2025-05-02T13:28:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd0ylg/the_ai_wont_destroy_us_the_ai/
|
Best_Tool
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd0ylg
| false | null |
t3_1kd0ylg
|
/r/LocalLLaMA/comments/1kd0ylg/the_ai_wont_destroy_us_the_ai/
| false | false |
self
| 1 | null |
How to prevent endless loops?
| 1 |
I am testing qwen3-30b-a3b with ollama and openwebui
I also tried out the version by unsloth (Qwen3-30B-A3B-GGUF:Q4\_K\_XL)
But it keeps getting stuck in an endless loop, while thinking and also after thinking.
I set the suggested temperature, top k, top p, presence penalty settings.
Is there a way to fix this?
https://preview.redd.it/crflt7ghjdye1.png?width=1060&format=png&auto=webp&s=083dd93cbd67898822e5ea18c8c9c02afac17d08
https://preview.redd.it/jwoeoykkjdye1.png?width=925&format=png&auto=webp&s=0727c4354ff9dbeb54dc1b357c6fb0698fd51fa6
| 2025-05-02T13:57:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd1lp7/how_to_prevent_endless_loops/
|
CodingKiwi_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd1lp7
| false | null |
t3_1kd1lp7
|
/r/LocalLLaMA/comments/1kd1lp7/how_to_prevent_endless_loops/
| false | false | 1 | null |
|
I built ToolBridge - Now tool calling works with ANY model
| 21 |
After getting frustrated with the limited tool-calling support for many capable models, I created ToolBridge - a proxy server that enables tool/function calling for ANY capable model.
You can now use clients like your own code or something like GitHub Copilot with completely free models (Deepseek, Llama, Qwen, Gemma, etc.), even when their providers don't expose tool support
ToolBridge sits between your client and the LLM backend, translating API formats and adding function calling capabilities to models that don't natively support it. It converts between OpenAI and Ollama formats seamlessly for local usage as well.
**Why is this useful?** Now you can:
* Try with free models from Chutes, OpenRouter, or Targon
* Use local open-source models with Copilot or other clients to keep your code private
* Experiment with different models without changing your workflow
This works with any platform that uses function calling:
* LangChain/LlamaIndex agents
* VS Code AI extensions
* JetBrains AI Assistant
* CrewAI, Auto-GPT
* And many more
Even better, you can chain ToolBridge with LiteLLM to make ANY provider work with these tools. LiteLLM handles the provider routing while ToolBridge adds the function calling capabilities - giving you universal access to any model from any provider.
Setup takes just a few minutes - clone the repo, configure the .env file, and point your tool to your proxy endpoint.
Check it out on GitHub: [ToolBridge](https://github.com/oct4pie/toolbridge)
[https://github.com/oct4pie/toolbridge](https://github.com/oct4pie/toolbridge)
What model would you try with first?
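The repo README covers setup; as a rough illustration of the request side only (the endpoint URL, model name, and tool definition below are all placeholders, not ToolBridge's actual defaults), an OpenAI-format tool-calling payload is plain JSON that a proxy like this can translate for backends without native tool support:

```python
import json

# Hypothetical proxy endpoint -- substitute whatever your proxy listens on.
PROXY_URL = "http://localhost:8000/v1/chat/completions"

def build_tool_call_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-format chat request that advertises one tool."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_tool_call_request("deepseek-chat", "What's the weather in Oslo?")
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client can then be pointed at the proxy's base URL instead of the upstream provider.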
| 2025-05-02T13:59:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd1n7p/i_builttoolbridge_now_tool_calling_works_with_any/
|
Greedy_Letterhead155
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd1n7p
| false | null |
t3_1kd1n7p
|
/r/LocalLLaMA/comments/1kd1n7p/i_builttoolbridge_now_tool_calling_works_with_any/
| false | false |
self
| 21 |
{'enabled': False, 'images': [{'id': 'MjUrYMIrGL2wCOHg6FoB2Yg3sk_80jef_TTgXA_d1b4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?width=108&crop=smart&auto=webp&s=074d59625db4e4d454bd019a8824442cc6548bda', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?width=216&crop=smart&auto=webp&s=c308dae18b43d843ed85cb3435ca32f157fccb29', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?width=320&crop=smart&auto=webp&s=2cbb3e9d9dc666041ec38ced3e1383b08b6e9df7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?width=640&crop=smart&auto=webp&s=84d36b77a35e1cf228b88877ae9e5337c253b1a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?width=960&crop=smart&auto=webp&s=08e105e0bd62f2ec60956aa4116b013883068c49', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?width=1080&crop=smart&auto=webp&s=7960f70e201e67d69f35d596fdf0fbf7b40cfa7e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7vMwLqPSAtRp-iJboOaQsX6Tx41t_DeBUVm67q5GUiA.jpg?auto=webp&s=b5ade86841d0fd9b1feb46af260d9cd85f944b05', 'width': 1200}, 'variants': {}}]}
|
How’s your RP with Qwen 3 models going? What settings do you have set up?
| 1 |
[removed]
| 2025-05-02T14:08:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd1uv3/hows_your_rp_with_qwen_3_models_going_what/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd1uv3
| false | null |
t3_1kd1uv3
|
/r/LocalLLaMA/comments/1kd1uv3/hows_your_rp_with_qwen_3_models_going_what/
| false | false |
self
| 1 | null |
HEXACO Personality Test - Request for Data
| 1 |
Hello,
I want to train an AI using varied personalities to make more realistic personalities. The MBTI 16 personality test isn’t as accurate as other tests.
The HEXACO personality test has scientific backing and its dataset is publicly available. But I'm curious if we can create a bigger dataset by filling out this Google Form I created.
It covers all 240 HEXACO questions, with the addition of gender and country for breakdowns.
I’m aiming to share this form far and wide. The only data I’m collecting is that which is in the form.
If you could help me complete this dataset I’ll share it on Kaggle.
I’m also thinking of making a dataset of over 300 random questions to further train the AI and cross referencing it with random personality responses in this form making more nuanced personalities.
Eventually based on gender and country of birth and year of birth I’ll be able to make cultural references too.
https://docs.google.com/forms/d/1xt3WwL7jl7l82ayMEkJaeRfDIOn48LEeWpl4HMZuQLY/viewform?pli=1&pli=1&edit_requested=true
Any help much appreciated .
Upvote if you're keen on this.
P.S. none of the data collected will personally identify you.
Many Thanks,
K
| 2025-05-02T14:21:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd263f/hexaco_personality_test_request_for_data/
|
klain42
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd263f
| false | null |
t3_1kd263f
|
/r/LocalLLaMA/comments/1kd263f/hexaco_personality_test_request_for_data/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Y3uyC-UJJVTlk4HZiryKM2bal2Hf0bAUD0cSw-wh_3Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?width=108&crop=smart&auto=webp&s=d3bdfd0597f9ce7103b93e41a9eda1440764d06d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?width=216&crop=smart&auto=webp&s=094681bb8362b7e0c6d5535415ce3fd739be5856', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?width=320&crop=smart&auto=webp&s=23f1f21fb440d85d3297167308320dcab0465223', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?width=640&crop=smart&auto=webp&s=2089740e2f1097424487cd28a80203a00e9cda53', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?width=960&crop=smart&auto=webp&s=5db61fe060b05d28e352aedfeaba5a16d2e34776', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?width=1080&crop=smart&auto=webp&s=8873c04eb927119e0448573cee7ccde2ae1910eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sPYW3UiiRXqy_qp4TLQXfKNPQIZg1evuDd3G6QFEq-k.jpg?auto=webp&s=a56067f5d10b757ab4f7a0b081ff7eb82a5ba876', 'width': 1200}, 'variants': {}}]}
|
Llama4
| 0 |
FYI: Error: model requires more system memory (62.8 GiB) than is available (58.3 GiB)
Solution: with 64GB RAM, just reboot into recovery mode and bingo!
BTW: the llama4 system prompt is amazing
| 2025-05-02T14:50:10 |
immediate_a982
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd2urk
| false | null |
t3_1kd2urk
|
/r/LocalLLaMA/comments/1kd2urk/llama4/
| true | false |
spoiler
| 0 |
{'enabled': True, 'images': [{'id': '-CfZBwPWmZ5LrQ4RxCC-Dx2mUVKqtuCAOvO5kZAfg9Y', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=108&crop=smart&auto=webp&s=ba2cbcc302e101ab478fab866b50456d7043a26f', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=216&crop=smart&auto=webp&s=48729d0238d0e94c260ff6653212f0155cbde391', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=320&crop=smart&auto=webp&s=bea10ece399bdcfe899bf7cb3d4e6be644bee6ec', 'width': 320}, {'height': 486, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=640&crop=smart&auto=webp&s=48211eaf4dcd8b740d3a71ac17ba295699d1b5ec', 'width': 640}, {'height': 730, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=960&crop=smart&auto=webp&s=8f41fd824339cdd7fe26280deb7a20ac7b607b6c', 'width': 960}, {'height': 821, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=1080&crop=smart&auto=webp&s=81718f47898ab62f0848fc2af986852216c861cb', 'width': 1080}], 'source': {'height': 3238, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?auto=webp&s=cd5748f3477f9108276c1d578793c5f34d18f955', 'width': 4256}, 'variants': {'obfuscated': {'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=827bada374b2b560ec3c71ebb7941c9b02c310e2', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=070eeb005dd6426c9b697665cf569ca5d6085634', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=cbd106b5dc7d52afab045085399ba7060df6a492', 'width': 320}, {'height': 486, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f3335d8f3e858d2a6aa14fe9b05710c4315c4a7a', 'width': 640}, {'height': 730, 'url': 
'https://preview.redd.it/jegpt5bftdye1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=ed42239ca33688659e8690aeafe1bdc516c7ce9d', 'width': 960}, {'height': 821, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=35ee3d3644f1c0828fa81daf2ec7fb523eb616b3', 'width': 1080}], 'source': {'height': 3238, 'url': 'https://preview.redd.it/jegpt5bftdye1.jpeg?blur=40&format=pjpg&auto=webp&s=0e1c3388be0a1a0e274185998936213bef3147cb', 'width': 4256}}}}]}
|
|
Can we use StarCoder/Qwen via Ollama for Cursor agent & code completion (local setup)?
| 1 |
[removed]
| 2025-05-02T14:57:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd30up/can_we_use_starcoderqwen_via_ollama_for_cursor/
|
siakshit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd30up
| false | null |
t3_1kd30up
|
/r/LocalLLaMA/comments/1kd30up/can_we_use_starcoderqwen_via_ollama_for_cursor/
| false | false |
self
| 1 | null |
Granite-4-Tiny-Preview is a 7B A1 MoE
| 289 | 2025-05-02T15:05:14 |
https://huggingface.co/ibm-granite/granite-4.0-tiny-preview
|
secopsml
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd38c7
| false | null |
t3_1kd38c7
|
/r/LocalLLaMA/comments/1kd38c7/granite4tinypreview_is_a_7b_a1_moe/
| false | false | 289 |
{'enabled': False, 'images': [{'id': '-HMDXLeTkEAVt70di7dWmAvCyarLOvSm8N3THYwh_mY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?width=108&crop=smart&auto=webp&s=980ed1be7bc497cbcc7f6dd355e3012b6879ad0c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?width=216&crop=smart&auto=webp&s=2c2e7f1def859ad2114df7eb12c7910e90b49f71', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?width=320&crop=smart&auto=webp&s=177617ff9e969be2476aee80ae58418d765bf610', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?width=640&crop=smart&auto=webp&s=e2c21e4063e5745fcc181f2ff35f680f740d5f48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?width=960&crop=smart&auto=webp&s=db0c82ead921dbc17ac56e050820a8c12a6ac170', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?width=1080&crop=smart&auto=webp&s=256a637e8bad3a3f7641c4c93f4c481415ea21a9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/T0iz7mxuwalnXsLjlSxQeBsQsKzlmiOKtYaRXmEHmqI.jpg?auto=webp&s=643c7e4d7f1f3f2158d760a28cf41eefb8343893', 'width': 1200}, 'variants': {}}]}
|
||
LLM to support work and personal project - is there a best LLM to serve them all (and hardware)?
| 1 |
[removed]
| 2025-05-02T15:26:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd3rmv/llm_to_support_work_and_personal_project_is_there/
|
Davidvia0x
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd3rmv
| false | null |
t3_1kd3rmv
|
/r/LocalLLaMA/comments/1kd3rmv/llm_to_support_work_and_personal_project_is_there/
| false | false |
self
| 1 | null |
Looking for some advice on improving Llama through LoRAs
| 1 |
[removed]
| 2025-05-02T15:38:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd41kw/looking_for_some_advice_on_improving_llama/
|
MoistUnderoos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd41kw
| false | null |
t3_1kd41kw
|
/r/LocalLLaMA/comments/1kd41kw/looking_for_some_advice_on_improving_llama/
| false | false |
self
| 1 | null |
K20 for 8€?
| 1 |
[removed]
| 2025-05-02T15:44:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd47a9/k20_for_8/
|
Zealousideal_Rub3005
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd47a9
| false | null |
t3_1kd47a9
|
/r/LocalLLaMA/comments/1kd47a9/k20_for_8/
| false | false |
self
| 1 | null |
First time running LLM, how is the performance? Can I or should I run larger models if this prompt took 43 seconds?
| 7 | 2025-05-02T15:54:02 |
TryTurningItOffAgain
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd4fc4
| false | null |
t3_1kd4fc4
|
/r/LocalLLaMA/comments/1kd4fc4/first_time_running_llm_how_is_the_performance_can/
| false | false | 7 |
{'enabled': True, 'images': [{'id': 'OU_d4DOZvwAOnYMo7e8PhpPrtyZ6vUt3NfaxC6SjgPQ', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?width=108&crop=smart&auto=webp&s=be97625d655711f540234fe90adc1a93e8ac5614', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?width=216&crop=smart&auto=webp&s=cb4e8243496619bf3d4688ca791050b4512d74ed', 'width': 216}, {'height': 266, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?width=320&crop=smart&auto=webp&s=25469139666033a4134cff100eabb9d69f91601c', 'width': 320}, {'height': 533, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?width=640&crop=smart&auto=webp&s=55f6f517fbc450ec5e0d71767ea98d3ae5c18d58', 'width': 640}, {'height': 799, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?width=960&crop=smart&auto=webp&s=ae7edbb30a2b28d1048f07f73877f4222ac3f634', 'width': 960}, {'height': 899, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?width=1080&crop=smart&auto=webp&s=58cccf629a6a5856215dd8cfc976bc4ef43954a8', 'width': 1080}], 'source': {'height': 932, 'url': 'https://preview.redd.it/ieppsu3q4eye1.png?auto=webp&s=834303e47f55752aaaa5d1dbdd8be154ea12565e', 'width': 1119}, 'variants': {}}]}
|
|||
Dual GPU RTX 6000 Ada Workstation Boot Issue with ASUS ProArt Z790 MoBo
| 1 |
[removed]
| 2025-05-02T15:59:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd4kbf/dual_gpu_rtx_6000_ada_workstation_boot_issue_with/
|
javier_aml
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd4kbf
| false | null |
t3_1kd4kbf
|
/r/LocalLLaMA/comments/1kd4kbf/dual_gpu_rtx_6000_ada_workstation_boot_issue_with/
| false | false |
self
| 1 | null |
Wife running our local llama, a bit slow because it's too large (the llama not my wife)
| 1,295 | 2025-05-02T16:04:05 |
Osama_Saba
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd4old
| false | null |
t3_1kd4old
|
/r/LocalLLaMA/comments/1kd4old/wife_running_our_local_llama_a_bit_slow_because/
| false | false | 1,295 |
{'enabled': True, 'images': [{'id': '1Nar1PDMxkzGeN9s6hagwH16Rh4zWsorV-lfZb43RKM', 'resolutions': [{'height': 153, 'url': 'https://preview.redd.it/vx6db70m6eye1.jpeg?width=108&crop=smart&auto=webp&s=f847d91c6bc732c239f6ad6db59d0a4a198c8465', 'width': 108}, {'height': 307, 'url': 'https://preview.redd.it/vx6db70m6eye1.jpeg?width=216&crop=smart&auto=webp&s=a3a82d04a74ad936742d37f8cdbdeb76a4346bfc', 'width': 216}, {'height': 455, 'url': 'https://preview.redd.it/vx6db70m6eye1.jpeg?width=320&crop=smart&auto=webp&s=83c0c22ff5c6c1a201daa8d275a921d17e0e887a', 'width': 320}, {'height': 910, 'url': 'https://preview.redd.it/vx6db70m6eye1.jpeg?width=640&crop=smart&auto=webp&s=c58f7f25eff35b200125a5a8beef9f4fce7ffd2e', 'width': 640}], 'source': {'height': 1195, 'url': 'https://preview.redd.it/vx6db70m6eye1.jpeg?auto=webp&s=90c269515e1d35fcae270b2e37bade60905b63c6', 'width': 840}, 'variants': {}}]}
|
|||
Trying to extract data from this image I have tried almost everything and all of them make mistakes. Need help.
| 1 | 2025-05-02T16:16:14 |
DinoCoiner
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd4z11
| false | null |
t3_1kd4z11
|
/r/LocalLLaMA/comments/1kd4z11/trying_to_extract_data_from_this_image_i_have/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'JsByf2B8_78NuIFDANcZnr7kbneUklu_HDr61hZNJ_M', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/mxcae4iq8eye1.jpeg?width=108&crop=smart&auto=webp&s=4a368badde81124f91927f1242d3914ba98740e8', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/mxcae4iq8eye1.jpeg?width=216&crop=smart&auto=webp&s=d0ca59345a321f45155faaebc90b3d18a280d5c9', 'width': 216}, {'height': 360, 'url': 'https://preview.redd.it/mxcae4iq8eye1.jpeg?width=320&crop=smart&auto=webp&s=f19a615f461f42dec29171f2d682c0ac134e7d63', 'width': 320}, {'height': 721, 'url': 'https://preview.redd.it/mxcae4iq8eye1.jpeg?width=640&crop=smart&auto=webp&s=75fa87d220b0d272ee1722dafc8fc373c669bf09', 'width': 640}], 'source': {'height': 925, 'url': 'https://preview.redd.it/mxcae4iq8eye1.jpeg?auto=webp&s=6a2df2b7a33be04e00cbb8ef5a611c95dbdafc4d', 'width': 820}, 'variants': {}}]}
|
|||
How to add token metrics to open webui?
| 7 |
In webui you can get token metrics like this:
https://preview.redd.it/hz4p70us8eye1.png?width=1080&format=png&auto=webp&s=1b511f9ee307a458a242e47ad236e617aa416888
This seems to be provided by the inference provider (API). I use LiteLLM, how do I get LiteLLM to pass these metrics over to Open WebUI?
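OpenAI-compatible responses carry these numbers in a `usage` object; as a rough sketch (the field names follow the generic OpenAI response format — nothing here is LiteLLM-specific), checking what your proxy actually returns looks like:

```python
import json

# Example OpenAI-format response body -- in practice this would come back
# from the proxy's /v1/chat/completions endpoint.
response_body = json.loads("""
{
  "choices": [{"message": {"role": "assistant", "content": "hi"}}],
  "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15}
}
""")

# If "usage" is missing or empty here, the proxy is dropping the metrics
# before they ever reach the UI.
usage = response_body.get("usage", {})
tokens = {k: usage.get(k, 0)
          for k in ("prompt_tokens", "completion_tokens", "total_tokens")}
print(tokens)
```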
| 2025-05-02T16:17:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd502d/how_to_add_token_metrics_to_open_webui/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd502d
| false | null |
t3_1kd502d
|
/r/LocalLLaMA/comments/1kd502d/how_to_add_token_metrics_to_open_webui/
| false | false | 7 | null |
|
SOLO Bench - A new type of LLM benchmark I developed to address the shortcomings of many existing benchmarks
| 532 |
See the pictures for additional info or you can read more about it (or try it out yourself) here:
[Github](https://github.com/jd-3d/SOLOBench)
[Website](https://dull-stop-29a.notion.site/SOLO-Bench-1e70c13d9e4580e48cdfda54ccc15f70)
| 2025-05-02T16:17:48 |
https://www.reddit.com/gallery/1kd50fl
|
jd_3d
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd50fl
| false | null |
t3_1kd50fl
|
/r/LocalLLaMA/comments/1kd50fl/solo_bench_a_new_type_of_llm_benchmark_i/
| false | false | 532 | null |
|
Qwen3 Fine-tuning now in Unsloth - 2x faster with 70% less VRAM
| 441 |
Hey guys! You can now fine-tune Qwen3 up to **8x longer context lengths** with [Unsloth](https://github.com/unslothai/unsloth) than all setups with FA2 on a 24GB GPU. Qwen3-30B-A3B comfortably fits on 17.5GB VRAM!
Some of you may have seen us updating GGUFs for Qwen3. If you have versions from 3 days ago - you don't have to re-download. We just refined how the imatrix was calculated so accuracy should be improved ever so slightly.
* **Fine-tune Qwen3 (14B) for free** using our [Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb)
* Because Qwen3 supports both reasoning and non-reasoning, you can fine-tune it with non-reasoning data, but to preserve reasoning (optional), include some chain-of-thought examples. Our Conversational notebook uses a dataset which mixes NVIDIA’s open-math-reasoning and Maxime’s FineTome datasets
* A reminder, Unsloth now supports [everything](https://unsloth.ai/blog/gemma3#everything). This includes full fine-tuning, pretraining, and support for all models (like Mixtral, MoEs, Cohere etc. models).
* You can read our full Qwen3 update here: [unsloth.ai/blog/qwen3](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#fine-tuning-qwen3-with-unsloth)
* We uploaded Dynamic 4-bit safetensors for fine-tuning/deployment. See all Qwen3 Uploads including GGUF, 4-bit etc: [Models](https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95)
Qwen3 Dynamic 4-bit instruct quants:
|[1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit)|[4B](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit)|[8B](https://huggingface.co/unsloth/Qwen3-8B-unsloth-bnb-4bit)|[14B](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit)|[32B](https://huggingface.co/unsloth/Qwen3-32B-unsloth-bnb-4bit)|
|:-|:-|:-|:-|:-|
Let me know if you have any questions and hope you all have a lovely Friday and weekend! :) Also to update Unsloth do:
`pip install --upgrade --force-reinstall --no-deps unsloth unsloth_zoo`
Colab Notebook to finetune Qwen3 14B for free: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3\_(14B)-Reasoning-Conversational.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb)
On finetuning MoEs - it's probably NOT a good idea to finetune the router layer - I disabled it by default. The 30B MoE surprisingly only needs 17.5GB of VRAM. Docs for more details: [https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune)
    model, tokenizer = FastModel.from_pretrained(
        model_name = "unsloth/Qwen3-30B-A3B",
        max_seq_length = 2048,
        load_in_4bit = True,
        load_in_8bit = False,
        full_finetuning = False,  # Full finetuning now in Unsloth!
    )
| 2025-05-02T16:20:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd531l/qwen3_finetuning_now_in_unsloth_2x_faster_with_70/
|
danielhanchen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd531l
| false | null |
t3_1kd531l
|
/r/LocalLLaMA/comments/1kd531l/qwen3_finetuning_now_in_unsloth_2x_faster_with_70/
| false | false |
self
| 441 |
{'enabled': False, 'images': [{'id': '6S72ZNKK8n4X-ql3yyuXhBB_m1VinWgdHOa9KG9S-tw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?width=108&crop=smart&auto=webp&s=a73d3b61fc246dbc35cce12639b80aecd70cf639', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?width=216&crop=smart&auto=webp&s=5b486523566f4f450243cba517fcfcc806e3a914', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?width=320&crop=smart&auto=webp&s=88b278dfc1fb84997dc3467d02552167e64fe95b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?width=640&crop=smart&auto=webp&s=69079af3ab256e383f5fd89e9b49c2fde228a59d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?width=960&crop=smart&auto=webp&s=b28c7b9ae72aeb2e9f96b805844a5012e7471600', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?width=1080&crop=smart&auto=webp&s=c34fd9c80dd6f3c244f163d87168d88ae3fddc71', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ftaDNCwSihbYGLQbBTzgh7b5BpWLJdswJxDKPN1Yo1g.jpg?auto=webp&s=e5b3e392033bd93f1c5af1ac8eca3c43963211ed', 'width': 1200}, 'variants': {}}]}
|
I Investigated a Fake LLM Project (“VRINDA”) Claiming 64x H100 Training — Here’s What I Found
| 1 |
[removed]
| 2025-05-02T16:27:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd58ls/i_investigated_a_fake_llm_project_vrinda_claiming/
|
zyxciss
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd58ls
| false | null |
t3_1kd58ls
|
/r/LocalLLaMA/comments/1kd58ls/i_investigated_a_fake_llm_project_vrinda_claiming/
| false | false |
self
| 1 | null |
I vibe coded a terminal assistant for PowerShell that uses Ryzen AI LLMs
| 0 |
tldr: [PEEL (PowerShell Enhanced by Embedded Lemonade)](https://github.com/lemonade-apps/peel) is a small PowerShell module that I vibe coded that lets you run `Get-Aid` to have a local NPU-accelerated LLM help explain the output of your last command.
[PEEL in action.](https://preview.redd.it/gygyw2s9xdye1.png?width=1110&format=png&auto=webp&s=188e0b4f42835ea31adc31c0b65bbe30ad144601)
Hey good people, Jeremy from AMD here again. First of all, thank you for the great discussion on my last post! I took all the feedback to my colleagues, especially about llama.cpp and Linux support.
In the meantime, I'm using Ryzen AI LLMs on Windows, and I made something for others like me to enjoy: [lemonade-apps/peel: Get aid from local LLMs right in your PowerShell](https://github.com/lemonade-apps/peel)
This project was inspired by u/jsonathan's excellent [wut](https://github.com/shobrook/wut) project. That project requires tmux (we have a guide for integrating it with Ryzen AI LLMs [here](https://github.com/onnx/turnkeyml/blob/main/examples/lemonade/server/wut.md)), but I wanted something that worked natively in PowerShell, so I vibe coded this project up in a couple of days.
It isn't meant to be a serious product or anything, but I do find it legitimately useful in my day-to-day work. Curious to get the community's feedback, especially any Windows users who have a chance to try it out.
PS. Requires a Ryzen AI 300-series processor at this time (although I'm open to adding support for any x86 CPU if there's interest).
| 2025-05-02T16:38:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd5hyk/i_vibe_coded_a_terminal_assistant_for_powershell/
|
jfowers_amd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd5hyk
| false | null |
t3_1kd5hyk
|
/r/LocalLLaMA/comments/1kd5hyk/i_vibe_coded_a_terminal_assistant_for_powershell/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'aYpbRh0ycCJSnxKbSOOx7rKUZUyRAHFtx7Sb9rI_jfg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?width=108&crop=smart&auto=webp&s=2978803631c7065a9e7f43872ac610bd649f1766', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?width=216&crop=smart&auto=webp&s=29ebe4bd66472c295df7ab8318af3087639c418b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?width=320&crop=smart&auto=webp&s=13bf4d04b05db3562ccac85d3b843aeb7b120eb4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?width=640&crop=smart&auto=webp&s=50a03b9c6a3e2d69a3334260b5eafbc1d63aa8fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?width=960&crop=smart&auto=webp&s=d519b94b725dbebae3877936a9e453081b4e6e7a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?width=1080&crop=smart&auto=webp&s=8de49c80113648d0837b0042167ffe027b08e66c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B_O_lQWycNvlfBXRIYIuXME2Xp8sD8UHdjeLVXufOVY.jpg?auto=webp&s=e0fa582bfacbbce940c28e275423c39788a78dc0', 'width': 1200}, 'variants': {}}]}
|
|
Qwen3 235B-A22B on a Windows tablet @ ~11.1t/s on AMD Ryzen AI Max 395+ 128GB RAM (Radeon 8060S iGPU-only inference, using 87.7GB out of 95.8GB total for 'VRAM')
| 445 |
The fact you can run the full 235B-A22B model fully on the iGPU without CPU offload, on a portable machine, at a reasonable token speed is nuts! (Yes, I know Apple M-series can probably do this too, lol). This is using the Vulkan backend; ROCm is only supported on Linux, but you can get it to work on this device if you go that route and self-compile llama.cpp
This is all with the caveat that I'm using an aggressive quant, using Q2\_K\_XL with Unsloth Dynamic 2.0 quantization.
Keeping the LLM loaded still leaves \~30GB of RAM free (I had VS Code, OBS, and a few Chrome tabs open), and the CPU stays essentially idle with the GPU handling all LLM compute. It feels very usable to do other work while running LLM inference on the side, without the LLM taking your entire machine over.
Weakness of AMD Strix Halo for LLMs, despite 'on-die' memory like Apple M-series, is that memory bandwidth is still very slow in comparison (M4 Max @ 546GB/s, Ryzen 395+ @ 256GB/s). Strix Halo products do undercut MacBooks with similar RAM size in price brand-new (\~$2800 for a Flow Z13 tablet with 128GB RAM).
This is my llama.cpp params (same params used for LM Studio):
\`-m Qwen3-235B-A22B-UD-Q2\_K\_XL-00001-of-00002.gguf -c 12288 --batch-size 320 -ngl 95 --temp 0.6 --top-k 20 --top-p .95 --min-p 0 --repeat-penalty 1.2 --no-mmap --jinja --chat-template-file ./qwen3-workaround.jinja\`.
\`--batch-size 320\` is important for Vulkan inference due to a bug outlined here: [https://github.com/ggml-org/llama.cpp/issues/13164](https://github.com/ggml-org/llama.cpp/issues/13164), you need to set evaluation batch size under 365 or you will get a model crash.
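Assembled into a single invocation (the binary name and file paths here are placeholders — the post doesn't say whether llama-server or llama-cli was used, and the same params were reused in LM Studio), the flags above look like:

```sh
# Placeholder binary/paths; flags exactly as described above.
./llama-server \
  -m Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf \
  -c 12288 --batch-size 320 -ngl 95 \
  --temp 0.6 --top-k 20 --top-p .95 --min-p 0 \
  --repeat-penalty 1.2 --no-mmap --jinja \
  --chat-template-file ./qwen3-workaround.jinja
```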
| 2025-05-02T16:49:48 |
https://v.redd.it/yct8as264eye1
|
Invuska
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd5rua
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yct8as264eye1/DASHPlaylist.mpd?a=1748796603%2CYmZjMmIxNmE2OTEyYWY1OTc1Nzc1MDRjNmZkODcwZmM4OWVlYTRjMjg3YWRkZDkxODhhZmRjOTg2ZDJlZjlhOQ%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/yct8as264eye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yct8as264eye1/HLSPlaylist.m3u8?a=1748796603%2CYWNlOWY0NmE1NDQwMmY1MmI5NDA1YWJlNWYwNTY4YTI5OGZjYjY4MTQ3MDEyYzVkMmUyZGVlM2JhMzBiNzNmOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yct8as264eye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kd5rua
|
/r/LocalLLaMA/comments/1kd5rua/qwen3_235ba22b_on_a_windows_tablet_111ts_on_amd/
| false | false | 445 |
{'enabled': False, 'images': [{'id': 'MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?width=108&crop=smart&format=pjpg&auto=webp&s=e2c986f14a803cdef6e5c28866ba7231e04b83cf', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?width=216&crop=smart&format=pjpg&auto=webp&s=d0c46e5dcee34c770d9a2386bb41a6b1c93ea5d4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?width=320&crop=smart&format=pjpg&auto=webp&s=88f2a355e9611c3eda1aa973c590ec2e26e2786b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?width=640&crop=smart&format=pjpg&auto=webp&s=5b1415fd4aafc62543ae169f8e4c6b1a33c53fb7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?width=960&crop=smart&format=pjpg&auto=webp&s=12b5124387f7d288c06d6ece06b5d96e23247319', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?width=1080&crop=smart&format=pjpg&auto=webp&s=942bf3f49afac96f651b266ae779d18e546d716a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MTRoZmRzMjY0ZXllMUcsKz2Py1mD-nSurwbqeO4CsbiV3jfLe_WuIL5DA8ts.png?format=pjpg&auto=webp&s=52d29baed57fa5750ab37d4d54bcc1dce6e839c4', 'width': 1920}, 'variants': {}}]}
|
|
Impact of schema directed prompts on LLM determinism, accuracy
| 5 |
I created a small notebook at: [https://github.com/breckbaldwin/llm-stability/blob/main/experiments/json\_schema/analysis.ipynb](https://github.com/breckbaldwin/llm-stability/blob/main/experiments/json_schema/analysis.ipynb) reporting on how schemas influence LLM accuracy/determinism.
TL;DR Schemas do help with determinism generally at the raw output level and answer level but it may come with a performance penalty on accuracy. More models/tasks should be evaluated.
| 2025-05-02T17:08:50 |
Skiata
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd68gz
| false | null |
t3_1kd68gz
|
/r/LocalLLaMA/comments/1kd68gz/impact_of_schema_directed_prompts_on_llm/
| false | false | 5 |
{'enabled': True, 'images': [{'id': 'Q8FWsmddy16tj0_6j8v6OjGsB5DKJwNYKELPc_zkwQM', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?width=108&crop=smart&auto=webp&s=71a578696ae9e7cde1ca0cfd78e9fa51b1b75684', 'width': 108}, {'height': 105, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?width=216&crop=smart&auto=webp&s=620dbbd02c804611977adc113018de06cb7f4349', 'width': 216}, {'height': 156, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?width=320&crop=smart&auto=webp&s=69491d4eb2570c0354dffeea71c0fb99d37bdd6c', 'width': 320}, {'height': 313, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?width=640&crop=smart&auto=webp&s=c670e342936249afacbc96fe5a9c480118b575c9', 'width': 640}, {'height': 469, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?width=960&crop=smart&auto=webp&s=605db10ae4c097b72da3740989ec47ce86d061fe', 'width': 960}, {'height': 528, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?width=1080&crop=smart&auto=webp&s=172e8deb7c1fea13359ab3b122cd09641926a358', 'width': 1080}], 'source': {'height': 812, 'url': 'https://preview.redd.it/a8rlxw9fheye1.png?auto=webp&s=5bfd15a16f657caa35777fa095716af7a1eb6a9d', 'width': 1660}, 'variants': {}}]}
|
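The notebook's approach — embedding a JSON schema in the prompt and checking whether outputs stay schema-conformant and identical across runs — can be sketched with the stdlib alone. The schema fields and prompt wording below are illustrative assumptions, not the notebook's actual ones:

```python
import json

# Illustrative schema: field names and types here are assumptions,
# not the ones used in the llm-stability notebook.
SCHEMA = {"answer": str, "confidence": float}

def build_prompt(question: str) -> str:
    """Embed the schema in the prompt to steer the model toward fixed structure."""
    fields = ", ".join(f'"{k}": <{t.__name__}>' for k, t in SCHEMA.items())
    return f"{question}\nRespond ONLY with JSON of the form {{{fields}}}."

def conforms(raw: str) -> bool:
    """Check a model reply against the schema: same keys, right value types."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return set(obj) == set(SCHEMA) and all(
        isinstance(obj[k], t) for k, t in SCHEMA.items()
    )

# Answer-level determinism: do repeated runs parse to the same object?
runs = ['{"answer": "B", "confidence": 0.9}', '{"answer": "B", "confidence": 0.9}']
identical = len({json.dumps(json.loads(r), sort_keys=True) for r in runs}) == 1
```

Comparing canonicalized JSON (sorted keys) rather than raw strings is what separates raw-output determinism from answer-level determinism in the notebook's framing.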
||
TTS Arena V2
| 1 |
[removed]
| 2025-05-02T17:13:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd6cwi/tts_arena_v2/
|
mrfakename0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd6cwi
| false | null |
t3_1kd6cwi
|
/r/LocalLLaMA/comments/1kd6cwi/tts_arena_v2/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?width=108&crop=smart&auto=webp&s=a77230e002795e581bc07c4afc14b15faea6125d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?width=216&crop=smart&auto=webp&s=e324183ae2533378f07f55d6437d699507187a24', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?width=320&crop=smart&auto=webp&s=014afdd11c113ca965254c04bde9d7ecded4b9b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?width=640&crop=smart&auto=webp&s=f976156a2a7b2baf237e90e6706a1a7ecf17cded', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?width=960&crop=smart&auto=webp&s=ba76f1096c71d595a6e3c2ef0095ddb4fb2ccaf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?width=1080&crop=smart&auto=webp&s=d388d5bbda5aa2d12a2b544deb38e0c646870221', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Sk_T6RBRjpPEm8_RLpWP0PssUVSdnPKQf9DcmP6DVMM.png?auto=webp&s=e6bc516d09fe794f21e224f9de44cb4fceda2bb8', 'width': 1200}, 'variants': {}}]}
|
Hiring Developer for Private LLaMA 3 Setup (Local AI Companion Project)
| 1 |
[removed]
| 2025-05-02T17:17:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd6g1a/hiring_developer_for_private_llama_3_setup_local/
|
BitterHead3873
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd6g1a
| false | null |
t3_1kd6g1a
|
/r/LocalLLaMA/comments/1kd6g1a/hiring_developer_for_private_llama_3_setup_local/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Td6MIJJZ8Z1__qvxJyhRpI-h03Ta9j85tgebQu-lRyE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=108&crop=smart&auto=webp&s=5c8c2eae0654518db81fbcd75e4075ca42e995e4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=216&crop=smart&auto=webp&s=70fac657202b168bf65d6733dc3d8eaee4ba9307', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=320&crop=smart&auto=webp&s=d97e8399f99030df6c21717e04c8364d0cf267f8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=640&crop=smart&auto=webp&s=9056f54bcc2579779408d21d8fdef322c4e0f729', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=960&crop=smart&auto=webp&s=3dd769e5d3ca9ee9ce171e336f24634ae20c9ffc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=1080&crop=smart&auto=webp&s=c431cb3ca7417ddea0cdffaa9d58ebdb0699603a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?auto=webp&s=45d97cc35a2846634f42a2868dc2cba7c8f201fe', 'width': 1198}, 'variants': {}}]}
|
Optimal Hardware Recommendations for Running Local LLMs
| 1 |
[removed]
| 2025-05-02T17:28:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd6pr8/optimal_hardware_recommendations_for_running/
|
miracleBTC
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd6pr8
| false | null |
t3_1kd6pr8
|
/r/LocalLLaMA/comments/1kd6pr8/optimal_hardware_recommendations_for_running/
| false | false |
self
| 1 | null |
Running local LLMs on Android Hexagon NPU.
| 0 |
So I'm using the ChatApp example from the Qualcomm AI Hub website:
https://github.com/quic/ai-hub-apps/tree/main/apps/android/ChatApp
Problem is, even 2B and 3B models get killed by the OS, even though I have 8 GB of RAM.
| 2025-05-02T17:34:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd6v8u/runnint_local_llms_on_android_hexagon_npu/
|
ExtremeAcceptable289
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd6v8u
| false | null |
t3_1kd6v8u
|
/r/LocalLLaMA/comments/1kd6v8u/runnint_local_llms_on_android_hexagon_npu/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '9w84gGWl2QSL0JsJpiuAec4RF4s-Pni-Ve4TodeZmkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?width=108&crop=smart&auto=webp&s=9863ec0dfb9a2bbac118399a790bb49652d28418', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?width=216&crop=smart&auto=webp&s=855f86b2287c1e5da765ebea76ca024091f1bc02', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?width=320&crop=smart&auto=webp&s=f69328602ca5ebe2d838d7d00a8650f1ff08411f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?width=640&crop=smart&auto=webp&s=b158bf935687860b22b7ef04d6c9742533649d26', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?width=960&crop=smart&auto=webp&s=ec250ba922c1dd61042c6190e978270d7abfdd2c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?width=1080&crop=smart&auto=webp&s=d2b32c66ce986af862eb167f91fdef7fd5e7c74e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BZElevWzYntYTm93ceKdkijd4usUcOyG-cCtXCT9Pos.jpg?auto=webp&s=76a91e3903d00c42a0c24d0282fabdfd43840b4b', 'width': 1200}, 'variants': {}}]}
|
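For context on why even 2–3B models get OOM-killed on an 8 GB device, a rough footprint estimate helps — Android typically grants an app far less than total RAM. The quantization width, context length, layer count, and KV dimension below are placeholder assumptions, not the actual ChatApp model's config:

```python
def model_memory_gb(params_b: float, bits_per_weight: int,
                    ctx_len: int = 4096, n_layers: int = 28,
                    kv_dim: int = 3072) -> float:
    """Very rough footprint: quantized weights + fp16 KV cache.

    Layer count and KV dimension are placeholder values for a ~3B model.
    Activations and the runtime itself add more on top.
    """
    weights = params_b * 1e9 * bits_per_weight / 8      # bytes for weights
    kv_cache = 2 * n_layers * ctx_len * kv_dim * 2      # K and V, 2 bytes each
    return (weights + kv_cache) / 1e9

# A 3B model at 4-bit is ~1.5 GB of weights alone; with KV cache the
# total under these assumptions approaches 3 GB.
footprint = model_memory_gb(3.0, 4)
```

Under these assumptions, anything the OS reserves plus other apps easily pushes a 3B model past what a low-memory killer will tolerate on an 8 GB phone.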
Cline + Qwen3 30b-8bit performance far worse than expected. Very surprising; think I might’ve set it up wrong. Any tips?
| 0 |
Recently tried out Cline with a local endpoint (mlx_lm.server running Qwen3 30b-8bit) and ran into a *lot* of headaches. Model perf dropped off a cliff, prompt processing took forever, and worst of all, generations would just suddenly stop halfway through a reply. The continue feature would also have the model not pick up where it left off, but instead start a whole new thought chain like I'd sent a fresh prompt, which was pretty counterintuitive. Not to mention prompt caching didn't seem to work, so every attempt to continue was accompanied by the same long wait time.
Thought it might’ve been mlx_lm.server being at fault / stopping generations early. But continue dot dev worked seamlessly. As did various of my little homemade apps. Same experience running LMStudio instead of mlx_lm.server.
Any tricks I've missed..?
| 2025-05-02T17:50:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd78on/cline_qwen3_30b8bit_performance_far_worse_than/
|
mark-lord
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd78on
| false | null |
t3_1kd78on
|
/r/LocalLLaMA/comments/1kd78on/cline_qwen3_30b8bit_performance_far_worse_than/
| false | false |
self
| 0 | null |
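Since both mlx_lm.server and LM Studio expose an OpenAI-compatible endpoint, one debugging step for truncation issues like the post above describes is to bypass Cline and hit the server directly — if generations still stop mid-reply, the server (not the extension) is at fault. A minimal request builder, where the port and model name are assumptions to adjust for your setup:

```python
import json
import urllib.request

def build_request(prompt: str, base_url: str = "http://localhost:8080"):
    """Build a chat-completion request for an OpenAI-compatible local server.

    The port (8080 is assumed here for mlx_lm.server) and the max_tokens
    value are assumptions; adjust them to your setup.
    """
    payload = {
        "model": "qwen3-30b-a3b",   # placeholder; local servers often ignore it
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 2048,          # a low cap here can cause mid-reply stops
        "stream": False,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Write a haiku about tokenizers.")
# urllib.request.urlopen(req) would send it; left out so the sketch
# runs without a server listening.
```

If raising `max_tokens` in a direct request fixes the truncation, the culprit is likely the client's default generation cap rather than the model.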
I am looking for the AI model that can determine the most relevant URL based on the user query.
| 1 |
[removed]
| 2025-05-02T17:51:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd799o/i_am_looking_for_the_ai_model_that_can_determine/
|
d_mnnn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd799o
| false | null |
t3_1kd799o
|
/r/LocalLLaMA/comments/1kd799o/i_am_looking_for_the_ai_model_that_can_determine/
| false | false |
self
| 1 | null |
Kinda lost with the Qwen3 MoE fixes.
| 52 |
I've been using Qwen3-30B-A3B-Q8\_0 (gguf) since the day it was released. Since then, there have been multiple bug fixes that required reuploading the model files. I ended up trying those out and found them to be worse than what I initially had. One didn't even load at all, erroring out in llama.cpp, while the other was kind of dumb, failing to one-shot a Tetris clone (pygame & HTML5 canvas). I'm quite sure the first versions I had were able to do it, while the files now feel notably dumber, even with a freshly compiled llama.cpp.
Can anyone direct me to a gguf repo on Hugging Face that has those files fixed without bugs or degraded quality? I've tried out a few, but none of them were able to one-shot a Tetris clone, which the first file I had definitely did in a reproducible manner.
| 2025-05-02T17:55:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd7dgs/kinda_lost_with_the_qwen3_moe_fixes/
|
yami_no_ko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd7dgs
| false | null |
t3_1kd7dgs
|
/r/LocalLLaMA/comments/1kd7dgs/kinda_lost_with_the_qwen3_moe_fixes/
| false | false |
self
| 52 | null |
unsloth dynamic quants (bartowski attacking unsloth-team)
| 0 | 2025-05-02T17:57:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd7epw/unsloth_dynamic_quants_bartowski_attacking/
|
lucyknada
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd7epw
| false | null |
t3_1kd7epw
|
/r/LocalLLaMA/comments/1kd7epw/unsloth_dynamic_quants_bartowski_attacking/
| false | false | 0 | null |
||
Where to find a developer for a project job setting up LLaMA locally?
| 1 |
[removed]
| 2025-05-02T18:01:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd7ih0/where_to_find_a_developer_for_a_project_job/
|
BitterHead3873
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd7ih0
| false | null |
t3_1kd7ih0
|
/r/LocalLLaMA/comments/1kd7ih0/where_to_find_a_developer_for_a_project_job/
| false | false |
self
| 1 | null |
Local chat w/multiple human participants?
| 0 |
I'd like to set up a fully-local group chat with multiple people and one AI for brainstorming. Something like multiuser OpenWebUI would be ideal, but I don't see any plugins or similar projects. I've thought about RocketChat, but I haven't seen anything other than their paid AI thing. Are there any projects out there capable of doing this?
| 2025-05-02T18:02:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd7juk/local_chat_wmultiple_human_participants/
|
dutch_dynamite
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd7juk
| false | null |
t3_1kd7juk
|
/r/LocalLLaMA/comments/1kd7juk/local_chat_wmultiple_human_participants/
| false | false |
self
| 0 | null |
What is a real use of local AI for business?
| 1 |
[removed]
| 2025-05-02T18:04:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd7lon/what_is_a_real_use_of_local_ai_for_business/
|
RepaBali
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd7lon
| false | null |
t3_1kd7lon
|
/r/LocalLLaMA/comments/1kd7lon/what_is_a_real_use_of_local_ai_for_business/
| false | false |
self
| 1 | null |
Augment code backend? Claude?
| 0 |
Guys so I saw "Unlimited uses" at augmentcode website for 30 usd per month.
Does the backend use claude? How are they doing this?
| 2025-05-02T18:15:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd7ugp/augment_code_backend_claude/
|
backinthe90siwasinav
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd7ugp
| false | null |
t3_1kd7ugp
|
/r/LocalLLaMA/comments/1kd7ugp/augment_code_backend_claude/
| false | false |
self
| 0 | null |
I'm proud of myself for getting this to work
| 16 | 2025-05-02T18:56:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kd8t3r/im_proud_of_myself_for_getting_this_to_work/
|
DiodeInc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kd8t3r
| false | null |
t3_1kd8t3r
|
/r/LocalLLaMA/comments/1kd8t3r/im_proud_of_myself_for_getting_this_to_work/
| false | false | 16 | null |