Dataset schema (one row per post):

| column | type | range |
| :--- | :--- | :--- |
| title | string | 1–300 chars |
| score | int64 | 0–8.54k |
| selftext | string | 0–40k chars |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2025-06-30 03:16:29 |
| url | string | 0–878 chars |
| author | string | 3–20 chars |
| domain | string | 0–82 chars |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | 7 chars |
| locked | bool | 2 classes |
| media | string | 646–1.8k chars |
| name | string | 10 chars |
| permalink | string | 33–82 chars |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | 4–213 chars |
| ups | int64 | 0–8.54k |
| preview | string | 301–5.01k chars |
Qwen3 235b pairs EXTREMELY well with a MacBook
164
I have tried the new Qwen3 MoEs on my MacBook M4 Max 128GB, and I was expecting speedy inference, but I was blown out of the water. On the smaller MoE at q8 I get approx. 75 tok/s on the MLX version, which is insane compared to "only" 15 on a 32B dense model. Not expecting great results tbh, I loaded a q3 quant of the 235B version, eating up 100 gigs of RAM, and to my surprise it got almost 30 (!!) tok/s. That is actually extremely usable, especially for coding tasks, where it seems to be performing great. This model might actually be the perfect match for Apple silicon and especially the 128GB MacBooks. It brings decent knowledge but at INSANE speeds compared to dense models. 100GB of RAM usage is a pretty big hit, but it still leaves enough room for an IDE and background apps, which is mind-blowing. In the next days I will look at doing more in-depth benchmarks once I find the time, but for the time being I thought this would be of interest since I haven't heard much about Qwen3 on Apple silicon yet.
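If you want to try reproducing this, a minimal mlx-lm sketch (the 30B-A3B repo name, prompt, and token budget are illustrative assumptions, not the exact setup above):

```python
# Minimal mlx-lm sketch for a Qwen3 MoE on Apple silicon.
# pip install mlx-lm; the model repo below is an assumption.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-30B-A3B-8bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a quicksort in Python."}],
    tokenize=False,
    add_generation_prompt=True,
)
# verbose=True prints tokens-per-second, which is where numbers
# like the ones in this post come from.
text = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```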
2025-05-05T20:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1kfmoyx/qwen3_235b_pairs_extremely_well_with_a_macbook/
Ashefromapex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfmoyx
false
null
t3_1kfmoyx
/r/LocalLLaMA/comments/1kfmoyx/qwen3_235b_pairs_extremely_well_with_a_macbook/
false
false
self
164
null
How good is Qwen3-30B-A3B
12
How well does it run on CPU btw?
2025-05-05T20:56:32
https://www.reddit.com/r/LocalLLaMA/comments/1kfmq5e/how_good_is_qwen330ba3b/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfmq5e
false
null
t3_1kfmq5e
/r/LocalLLaMA/comments/1kfmq5e/how_good_is_qwen330ba3b/
false
false
self
12
null
Can you save KV Cache to disk in llama.cpp/ ooba booga?
2
Hi all, I'm running DeepSeek V3 on 512GB of RAM and 4 3090s. It runs fast enough for my needs at low context, but prompt processing on long contexts takes forever, to the point where I wonder if there's a bug or missing optimization somewhere. I was wondering if there is a way to save the KV cache to disk so we wouldn't have to process it again for hours if we want to resume. Watching the VRAM fill up, it only looks like a couple of gigs, which would be fine with me for some tasks. Does the option exist in llama.cpp, and if not, is there a good reason? I use Oobabooga with the llama.cpp backend and sometimes SillyTavern.
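For what it's worth, the llama-cpp-python binding exposes state snapshots, which covers this use case when you drive llama.cpp from Python; a sketch under that assumption (paths and parameters are illustrative, and whether Oobabooga surfaces anything like this is a separate question):

```python
# Sketch: persist the evaluated KV cache with llama-cpp-python so a
# later session can resume without re-processing the long prompt.
# Model path, context size, and offload settings are assumptions.
import pickle
from llama_cpp import Llama

llm = Llama(model_path="deepseek-v3-q4.gguf", n_ctx=32768, n_gpu_layers=24)

long_prompt = open("story_so_far.txt").read()
llm.create_completion(long_prompt, max_tokens=1)  # pay the processing cost once

# save_state() captures the model's internal state after evaluation;
# pickling it is an assumption worth verifying on your version.
with open("kv_state.pkl", "wb") as f:
    pickle.dump(llm.save_state(), f)

# Later session: restore instead of re-processing.
with open("kv_state.pkl", "rb") as f:
    llm.load_state(pickle.load(f))
```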
2025-05-05T21:08:49
https://www.reddit.com/r/LocalLLaMA/comments/1kfn1hg/can_you_save_kv_cache_to_disk_in_llamacpp_ooba/
TheSilentFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfn1hg
false
null
t3_1kfn1hg
/r/LocalLLaMA/comments/1kfn1hg/can_you_save_kv_cache_to_disk_in_llamacpp_ooba/
false
false
self
2
null
What benchmarks/scores do you trust to give a good idea of a model's performance?
21
Just looking for some advice on how I can quickly look up a model's actual performance compared to others. The benchmarks used seem to change a lot, and seeing every single model on Hugging Face put itself at the very top, or competing just under OpenAI at 30B params, just seems unreal. Where would you recommend I look for scores that are at least somewhat accurate and unbiased?
2025-05-05T21:14:42
https://www.reddit.com/r/LocalLLaMA/comments/1kfn6qh/what_benchmarksscores_do_you_trust_to_give_a_good/
Business_Respect_910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfn6qh
false
null
t3_1kfn6qh
/r/LocalLLaMA/comments/1kfn6qh/what_benchmarksscores_do_you_trust_to_give_a_good/
false
false
self
21
null
[Update] MyDeviceAI: Now with Brave Search, Thinking Mode, and support for all modern iPhones!
8
Hey r/LocalLLaMA! A few months ago, [I shared the initial version of MyDeviceAI](https://www.reddit.com/r/LocalLLaMA/comments/1hgxow1/mydeviceai_an_app_that_lets_you_run_llama_32_on/), and I'm excited to share some major updates I've made to the app! What's MyDeviceAI? It's a completely free and open-source iOS app that lets you run private AI locally on your iPhone. Here's what's new.

🚀 Key Features:

* Lightning-fast responses on modern iPhones (older models supported too!)
* Seamless background model loading - no waiting for initialization
* Brave Web Search integration (2,000 free queries/month)
* Thinking Mode powered by Qwen 3 for complex problem-solving
* Personalization (Beta) with dynamic user context loading
* 30-day or longer chat history
* Now works on ALL modern iPhones (not just iPhone 13 Pro and later)
* Free and open source!

About Brave Search integration: while you'll need to provide a credit card to get the API key on Brave's website, the free tier (2,000 queries/month) is more than enough for regular use. The app also has instructions on how to get the API key.

Get started:

* GitHub: [github.com/navedmerchant/MyDeviceAI](http://github.com/navedmerchant/MyDeviceAI)
* All contributions welcome!

With web search integration it has completely replaced Google and ChatGPT for me personally, since it always gives me the accurate information I'm looking for. It is also really fast on my phone (iPhone 14 Pro), and I have tested it on an iPhone 12 mini, where it works reasonably fast as well. I'm actively developing this as a side project and would love your feedback. Try it out and let me know what you think!

Download on the App Store: [https://apps.apple.com/us/app/mydeviceai/id6736578281](https://apps.apple.com/us/app/mydeviceai/id6736578281)
2025-05-05T21:18:53
https://v.redd.it/ijr1bqai51ze1
Ssjultrainstnict
v.redd.it
1970-01-01T00:00:00
0
{}
1kfnaf8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ijr1bqai51ze1/DASHPlaylist.mpd?a=1749071945%2CN2VkNmI2Y2YzMDk1Mjc5Yjg2ZTQxMWJmY2Q0MDcyMzE3YzQxZmQ4OTA1NzI4YTg5NDg0YjA2NjY1NDJkYWVhYw%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/ijr1bqai51ze1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ijr1bqai51ze1/HLSPlaylist.m3u8?a=1749071945%2CYzg5YjhmMmE0YTg4Y2QyNGZlOThkOTI2MDZlZTUzMzM1ZWZjZmUzODdiMmYzN2E1OTIwMDQ4OThjNGQzOWY3MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ijr1bqai51ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}}
t3_1kfnaf8
/r/LocalLLaMA/comments/1kfnaf8/update_mydeviceai_now_with_brave_search_thinking/
false
false
https://external-preview…408657bb5d2ba58d
8
{'enabled': False, 'images': [{'id': 'OWsxZnIwOWk1MXplMdZ7mAi5jpl0uQsq5nxvDiHtNDNWTlCUexWyl3TLASNC', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OWsxZnIwOWk1MXplMdZ7mAi5jpl0uQsq5nxvDiHtNDNWTlCUexWyl3TLASNC.png?width=108&crop=smart&format=pjpg&auto=webp&s=43ef75c901c346814c54c07047eb16dcdc48fd7c', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/OWsxZnIwOWk1MXplMdZ7mAi5jpl0uQsq5nxvDiHtNDNWTlCUexWyl3TLASNC.png?width=216&crop=smart&format=pjpg&auto=webp&s=7cfb92eb07255d55f2bc323776cc60335b5ea488', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/OWsxZnIwOWk1MXplMdZ7mAi5jpl0uQsq5nxvDiHtNDNWTlCUexWyl3TLASNC.png?width=320&crop=smart&format=pjpg&auto=webp&s=3fbc7d31779053ae5cb0ef45fed2dbacaa4966ce', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/OWsxZnIwOWk1MXplMdZ7mAi5jpl0uQsq5nxvDiHtNDNWTlCUexWyl3TLASNC.png?width=640&crop=smart&format=pjpg&auto=webp&s=b42885d068acc17f81b11953dd873ca1274e8d47', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/OWsxZnIwOWk1MXplMdZ7mAi5jpl0uQsq5nxvDiHtNDNWTlCUexWyl3TLASNC.png?format=pjpg&auto=webp&s=fdd3517b46ab5bea0b09534c2358f23db19e4004', 'width': 886}, 'variants': {}}]}
Question on LM Studio?
3
I see at the bottom of LM Studio it says "Context is 6.9% full". What does this mean? Thanks!
2025-05-05T21:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1kfnasj/question_on_lm_studio/
rocky_balboa202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfnasj
false
null
t3_1kfnasj
/r/LocalLLaMA/comments/1kfnasj/question_on_lm_studio/
false
false
self
3
null
Local coding LLM - Has Claude been defeated yet? (HELP)
1
[removed]
2025-05-05T21:19:47
https://www.reddit.com/r/LocalLLaMA/comments/1kfnb83/local_coding_llm_has_claude_been_defeated_yet_help/
Mixtery1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfnb83
false
null
t3_1kfnb83
/r/LocalLLaMA/comments/1kfnb83/local_coding_llm_has_claude_been_defeated_yet_help/
false
false
self
1
null
Is there API service that provides prompt log-probabilities, like open source libraries do (like vLLM, TGI)? Why most API endpoints are so limited compared to locally hosted inference?
7
Hi, are there LLM API providers that return log-probabilities? Why do most providers not do it? Occasionally I use some API providers, mostly OpenRouter and DeepInfra so far, and I noticed that almost no provider gives log-probabilities in their response, regardless of requesting them in the API call. Only OpenAI provides log-probabilities for the completion, but not for the prompt. I want to be able to access prompt log-probabilities (useful for automatic prompt optimization, for instance https://arxiv.org/html/2502.11560v1) as I can when I set up my own inference with vLLM, but through a maintained API.
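For reference, this is what vLLM's offline API gives you and what is missing from hosted endpoints; a minimal sketch (the model name is an illustrative assumption):

```python
# Sketch: prompt log-probabilities with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
# prompt_logprobs=0 returns the logprob of each actual prompt token.
params = SamplingParams(max_tokens=1, prompt_logprobs=0)

out = llm.generate(["The capital of France is Paris."], params)
# One entry per prompt token (None for the first token, which has
# no preceding context to condition on).
for lp in out[0].prompt_logprobs:
    print(lp)
```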
2025-05-05T21:32:31
https://www.reddit.com/r/LocalLLaMA/comments/1kfnmfg/is_there_api_service_that_provides_prompt/
FormerIYI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfnmfg
false
null
t3_1kfnmfg
/r/LocalLLaMA/comments/1kfnmfg/is_there_api_service_that_provides_prompt/
false
false
self
7
null
Where to buy workstation GPUs?
9
I've bought some used ones in the past from eBay, but I'm looking at the RTX Pro 6000 and can't find places to buy an individual card. Anyone know where to look? I've been bouncing around the Nvidia Partners link (https://www.nvidia.com/en-us/design-visualization/where-to-buy/) but haven't found individual cards for sale. Microcenter doesn't list anything near me either.
2025-05-05T21:49:04
https://www.reddit.com/r/LocalLLaMA/comments/1kfo0hx/where_to_buy_workstation_gpus/
Prestigious_Thing797
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfo0hx
false
null
t3_1kfo0hx
/r/LocalLLaMA/comments/1kfo0hx/where_to_buy_workstation_gpus/
false
false
self
9
{'enabled': False, 'images': [{'id': 'TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?width=108&crop=smart&auto=webp&s=af2f2bb45e432f4831ad1dbcee66b5743f0ec26f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?width=216&crop=smart&auto=webp&s=02a5dfacfd96c01077cb698bad64264d227eebd8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?width=320&crop=smart&auto=webp&s=b6c9b17c2ac46a2fd992590d6d5f1a9708c98bdd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?width=640&crop=smart&auto=webp&s=a95a9dc3787b0ca9507e909c46983d76aa1fdd1e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?width=960&crop=smart&auto=webp&s=c742291117ee55aa755ee494e20334e0aedfb8e7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?width=1080&crop=smart&auto=webp&s=21349375139d5a43a5c615dc090ee62fb568e25e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/TqMOlzZAjHroFgU-3N51gonaqzMCvA3ajpuxCIO5MbM.jpeg?auto=webp&s=959cdffe4d2cb2588ed0f32ad07983c764258c9f', 'width': 1200}, 'variants': {}}]}
MI50 32GB Performance on Gemma3 and Qwq32b
1
[removed]
2025-05-05T22:01:57
https://www.reddit.com/r/LocalLLaMA/comments/1kfobh9/mi50_32gb_performance_on_gemma3_and_qwq32b/
UnProbug
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfobh9
false
null
t3_1kfobh9
/r/LocalLLaMA/comments/1kfobh9/mi50_32gb_performance_on_gemma3_and_qwq32b/
false
false
self
1
null
Some help here please.
1
[removed]
2025-05-05T22:11:47
https://www.reddit.com/r/LocalLLaMA/comments/1kfojtg/some_help_here_please/
Wise-Condition2634
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfojtg
false
null
t3_1kfojtg
/r/LocalLLaMA/comments/1kfojtg/some_help_here_please/
false
false
self
1
null
AGI is here: Qwen3 - 4b (!) Pong
0
at least for my standards...
2025-05-05T22:13:03
https://i.redd.it/t2r9ui24f1ze1.jpeg
JLeonsarmiento
i.redd.it
1970-01-01T00:00:00
0
{}
1kfokuh
false
null
t3_1kfokuh
/r/LocalLLaMA/comments/1kfokuh/agi_is_here_qwen3_4b_pong/
false
false
https://external-preview…039c981151022c84
0
{'enabled': True, 'images': [{'id': 'gxcTox1D499njJ3HaDcsLFPa17pwqbBbhUnY9fL13V8', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?width=108&crop=smart&auto=webp&s=d7600e83f2ded00fb3169d6bab34c42c58422df5', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?width=216&crop=smart&auto=webp&s=a4618eaadbe07c57145c5d37f364cb7d16eea058', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?width=320&crop=smart&auto=webp&s=b7a1d8c157df78ef91dea867d9d8da4132734929', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?width=640&crop=smart&auto=webp&s=13b696f8b27c212abf00f715fbce5ca5398ab65b', 'width': 640}, {'height': 614, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?width=960&crop=smart&auto=webp&s=63ba3e9141a046a2b26043aa429cfaf08106de19', 'width': 960}, {'height': 691, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?width=1080&crop=smart&auto=webp&s=d7e8ff565dfd14ad3647f7e5a979fcf53f0d4e14', 'width': 1080}], 'source': {'height': 1275, 'url': 'https://preview.redd.it/t2r9ui24f1ze1.jpeg?auto=webp&s=c8ceef30d4d2bb1d40b6308ed2b1042f6866f3bf', 'width': 1991}, 'variants': {}}]}
RTX PRO 6000 now available at €9000
105
2025-05-05T22:20:52
https://videocardz.com/newz/nvidia-rtx-pro-6000-blackwell-gpus-now-available-starting-at-e9000
newdoria88
videocardz.com
1970-01-01T00:00:00
0
{}
1kfor2i
false
null
t3_1kfor2i
/r/LocalLLaMA/comments/1kfor2i/rtx_pro_6000_now_available_at_9000/
false
false
default
105
null
Where do I start? Company wants a local LLM (sensitive information reasons) that responds reasonably quick and can handle your standard user ChatGPT/Claude style requests.
1
[removed]
2025-05-05T22:25:18
https://www.reddit.com/r/LocalLLaMA/comments/1kfouly/where_do_i_start_company_wants_a_local_llm/
cantcantdancer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfouly
false
null
t3_1kfouly
/r/LocalLLaMA/comments/1kfouly/where_do_i_start_company_wants_a_local_llm/
false
false
self
1
null
Can I combine Qwen 2.5 VL, a robot hand, a robot arm, and a wireless camera to create a robot that can learn to pick things up?
6
I was going to add something here, but I realized pretty much the entire question is in the title. I found robot hands and arms on Amazon for about $100 apiece. I'd have to find a way to run scripts with Qwen - maybe something like Sorcery for SillyTavern - and use Java to send HTTP requests that drive the Arduino?? Yes, I know I'm in over my head.
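The basic loop is simpler than it sounds; here is a rough sketch (in Python rather than Java, for brevity) where a vision model looks at a camera frame and a script forwards its one-word decision to the Arduino over serial. The server URL, model name, and the single-byte command protocol are all assumptions:

```python
# Sketch: camera frame -> vision LLM -> serial command to an Arduino.
# Assumes a local OpenAI-compatible server hosting Qwen2.5-VL and
# pyserial; "G" as a grab command is a made-up protocol.
import base64
import serial
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

frame = base64.b64encode(open("camera.jpg", "rb").read()).decode()
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    messages=[{"role": "user", "content": [
        {"type": "text", "text": "Is the object within reach? Reply with one word: GRAB or WAIT."},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{frame}"}},
    ]}],
)
if "GRAB" in resp.choices[0].message.content:
    arduino.write(b"G")  # hypothetical single-byte command to the arm
```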
2025-05-05T22:27:13
https://www.reddit.com/r/LocalLLaMA/comments/1kfow6o/can_i_combine_qwen_25_vl_a_robot_hand_a_robot_arm/
False_Grit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfow6o
false
null
t3_1kfow6o
/r/LocalLLaMA/comments/1kfow6o/can_i_combine_qwen_25_vl_a_robot_hand_a_robot_arm/
false
false
self
6
null
Lost in model soup
1
[removed]
2025-05-05T22:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1kfp06v/lost_in_model_soup/
annakhouri2150
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfp06v
false
null
t3_1kfp06v
/r/LocalLLaMA/comments/1kfp06v/lost_in_model_soup/
false
false
self
1
null
5070 Ti - What's the best RP model I can run?
1
Most models I've tried that are the typical infamous recommendations are just... kind of unintelligent? However plenty of them are dated and others are simply just small models. I liked Cydonia alright, but it's still not all too smart.
2025-05-05T22:54:34
https://www.reddit.com/r/LocalLLaMA/comments/1kfphpp/5070_ti_whats_the_best_rp_model_i_can_run/
PangurBanTheCat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfphpp
false
null
t3_1kfphpp
/r/LocalLLaMA/comments/1kfphpp/5070_ti_whats_the_best_rp_model_i_can_run/
false
false
self
1
null
Help with hardware please
1
[removed]
2025-05-05T22:58:50
https://www.reddit.com/r/LocalLLaMA/comments/1kfpkyk/help_with_hardware_please/
Wise-Condition2634
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfpkyk
false
null
t3_1kfpkyk
/r/LocalLLaMA/comments/1kfpkyk/help_with_hardware_please/
false
false
self
1
null
Some Benchmarks of Qwen/Qwen3-32B-AWQ
26
I ran some benchmarks locally for the AWQ version of Qwen3-32B using vLLM and evalscope (38K context size without rope scaling).

* Default thinking mode: temperature=0.6, top_p=0.95, top_k=20, presence_penalty=1.5
* /no_think: temperature=0.7, top_p=0.8, top_k=20, presence_penalty=1.5
* LiveCodeBench, only 30 samples: "2024-10-01" to "2025-02-28"
* All were few_shot_num: 0
* Statistically not super sound, but good enough for my personal evaluation
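For context, a minimal vLLM sketch of those two sampling configs (the /no_think soft switch is Qwen3's; the token budget and test prompt are my own illustrative assumptions):

```python
# Sketch: the two sampling configs above via vLLM's offline API.
# max_model_len matches the 38K context from the post.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-32B-AWQ", max_model_len=38912)

thinking = SamplingParams(temperature=0.6, top_p=0.95, top_k=20,
                          presence_penalty=1.5, max_tokens=8192)
no_think = SamplingParams(temperature=0.7, top_p=0.8, top_k=20,
                          presence_penalty=1.5, max_tokens=1024)

out = llm.generate(["Prove that sqrt(2) is irrational. /no_think"], no_think)
print(out[0].outputs[0].text)
```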
2025-05-05T23:24:15
https://www.reddit.com/gallery/1kfq4q5
Specific-Rub-7250
reddit.com
1970-01-01T00:00:00
0
{}
1kfq4q5
false
null
t3_1kfq4q5
/r/LocalLLaMA/comments/1kfq4q5/some_benchmarks_of_qwenqwen332bawq/
false
false
https://b.thumbs.redditm…1ExeVbSi248M.jpg
26
null
3090 + 32gb ram + nvme
2
Hi! Thanks in advance for your help. Could you tell me which is the best open-source AI for this hardware? I’d use it for programming with Visual Code and Cline. Thanks!
2025-05-05T23:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1kfqdjs/3090_32gb_ram_nvme/
EnvironmentalHelp363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfqdjs
false
null
t3_1kfqdjs
/r/LocalLLaMA/comments/1kfqdjs/3090_32gb_ram_nvme/
false
false
self
2
null
Created my own leaderboards for SimpleQA and Coding
4
I compiled 10+ sources for both the [SimpleQA leaderboard](https://blog.elijahlopez.ca/posts/ai-simpleqa-leaderboard/) and the [Coding leaderboard](https://blog.elijahlopez.ca/posts/ai-coding-leaderboard/). I plan on continuously updating them as new model scores come out (or you can contribute, since my blog is [open-source](https://github.com/elibroftw/blog.elijahlopez.ca)). When I was writing my [AI awesome list](https://blog.elijahlopez.ca/posts/ai/), I realized that leaderboards were missing for the ways I wanted to compare models in both coding and search. I respect SimpleQA because I care about factuality when using AI to learn something. For coding, I have ranked models by SWE-bench Verified scores, but also included Codeforces Elo ratings, as that was something I noticed was unavailable. After doing all this I came to a few conclusions.

1. EvalPlus is deprecated; read more in the coding leaderboard.
2. xAI is releasing a suspiciously low number of benchmark scores. Not only that, but the xAI team has taken the approach of assuming we all have patience. Their LCB score is useless for real-world scenarios once you realize that not only did the model have to think to achieve it, Gemini 2.5 Pro beat it anyway. Then there's the funny situation that o4-mini and Gemini 2.5 Pro Preview were released on OpenRouter 7-8 days after Grok 3 Beta was released on OpenRouter.
3. The short list of companies putting in the work to drive innovation: OpenAI, Google DeepMind, Anthropic, Qwen, DeepSeek.
4. Qwen3 30B is a great model and has deprecated DeepSeek R1 Distill 70B.
5. Phi-4 reasoning results are really nice, offering better performance than Qwen3 4B while staying under 30B. I've placed it under DeepSeek R1 Distill 70B only because Microsoft's own LCB benchmark placed DeepSeek R1 Distill 70B above it.
2025-05-06T00:01:20
https://www.reddit.com/r/LocalLLaMA/comments/1kfqx4t/created_my_own_leaderboards_for_simpleqa_and/
Elibroftw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfqx4t
false
null
t3_1kfqx4t
/r/LocalLLaMA/comments/1kfqx4t/created_my_own_leaderboards_for_simpleqa_and/
false
false
self
4
null
local debugging menace now supports phi4-reasoning and qwen3
0
no cap fr fr this update is straight bussin, been tweaking on building **Cloi** its local debugging agent that runs in your terminal Cloi deadass catches your error tracebacks, spins up a local LLM (zero api key nonsense, no cloud tax) and only with your permission drops some clean af patches directly to ur files. New features dropped: run `/model` to choose ANY models already on your mac or try the new **phi4-reasoning** and **qwen3** models for local usage your code debugging experience about to be skibidi gyatt with these models fr BTW built this bc cursor's o3 got me down astronomical ($0.30 per request??) and local models are just getting better and better (benchmarks don't lie frfr) on god! If anyone's interested in the implementation or wants to issue feedback or PRs, check out da code: [https://github.com/cloi-ai/cloi](https://github.com/cloi-ai/cloi)
2025-05-06T00:13:35
https://v.redd.it/x1av4yhg02ze1
AntelopeEntire9191
v.redd.it
1970-01-01T00:00:00
0
{}
1kfr69v
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/x1av4yhg02ze1/DASHPlaylist.mpd?a=1749082431%2CNjRkNjhlZGRmZTcwMjZmMGQ4OGQzMTA0YjQ3ZTRiZDdjMGM1OGRlMmQxODMxNWU2ZWExNjgxNmRlNjFiZjk5MQ%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/x1av4yhg02ze1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/x1av4yhg02ze1/HLSPlaylist.m3u8?a=1749082431%2CYmNiYjU3NmZiMTYxNTM3MTg1YjI0NjBmODc0ZDMzYTJjN2YxOWJlZGM0ODQ0MzdhYTIzNGE0M2M5ZGRmNDBlMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x1av4yhg02ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1112}}
t3_1kfr69v
/r/LocalLLaMA/comments/1kfr69v/local_debugging_menace_now_supports_phi4reasoning/
false
false
https://external-preview…879c923f32c00fd5
0
{'enabled': False, 'images': [{'id': 'emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?width=108&crop=smart&format=pjpg&auto=webp&s=13e99df6431d00df9be695d20ff4df147f716070', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?width=216&crop=smart&format=pjpg&auto=webp&s=81fad1ee4767bb91cd2f90058c2f17b96500a407', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?width=320&crop=smart&format=pjpg&auto=webp&s=78c47d1159beb8ec0e28350b2ac5ad0bfef1ae5e', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?width=640&crop=smart&format=pjpg&auto=webp&s=fac7691624726d0666637d353091400964c22cea', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?width=960&crop=smart&format=pjpg&auto=webp&s=1ebda63004674b6d4169feae471fca824e404c98', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7fbe2398d73731cd80fe513f0e3dad792d35ddcb', 'width': 1080}], 'source': {'height': 912, 'url': 'https://external-preview.redd.it/emJkNjAzaWcwMnplMWc1n15xJrjGmONCGSEDHTHNeXkVD0MktWP-lK4UFrFE.png?format=pjpg&auto=webp&s=0a72acc4d611a963e2007290408318620c59d732', 'width': 1408}, 'variants': {}}]}
Qwen 3 Small Models: 0.6B, 1.7B & 4B compared with Gemma 3
67
[https://youtube.com/watch?v=v8fBtLdvaBM&si=L_xzVrmeAjcmOKLK](https://youtube.com/watch?v=v8fBtLdvaBM&si=L_xzVrmeAjcmOKLK)

I compare the performance of smaller Qwen 3 models (0.6B, 1.7B, and 4B) against Gemma 3 models on various tests.

TLDR: Qwen 3 4B outperforms Gemma 3 12B on 2 of the tests and comes in close on 2. It outperforms Gemma 3 4B on all tests. These tests were done without reasoning, for an apples-to-apples comparison with Gemma. This is the first time I have seen a 4B model actually achieve a respectable score on many of the tests.

| Test | 0.6B Model | 1.7B Model | 4B Model |
| :--- | :--- | :--- | :--- |
| Harmful Question Detection | 40% | 60% | 70% |
| Named Entity Recognition | Did not perform well | 45% | 60% |
| SQL Code Generation | 45% | 75% | 75% |
| Retrieval Augmented Generation | 37% | 75% | 83% |
2025-05-06T00:22:44
https://www.reddit.com/r/LocalLLaMA/comments/1kfrcul/qwen_3_small_models_06b_17b_4b_compared_with/
Ok-Contribution9043
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfrcul
false
null
t3_1kfrcul
/r/LocalLLaMA/comments/1kfrcul/qwen_3_small_models_06b_17b_4b_compared_with/
false
false
self
67
{'enabled': False, 'images': [{'id': 'ntIU_qTIoAMoI0s5kHKeqz8E4362OrmtJzN35QZwXcM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0ok9Ek9tvrUesQ__gcLkvavBGjRxVWI2fFIL8yQbBrs.jpg?width=108&crop=smart&auto=webp&s=59bec90540940ab625391094d6b556c23789c15f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0ok9Ek9tvrUesQ__gcLkvavBGjRxVWI2fFIL8yQbBrs.jpg?width=216&crop=smart&auto=webp&s=2ee507a8d4d06432819b5b4a7e57b593fdc43608', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0ok9Ek9tvrUesQ__gcLkvavBGjRxVWI2fFIL8yQbBrs.jpg?width=320&crop=smart&auto=webp&s=5f54661bb0fd46eecd8d328199872ee6714e54fc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0ok9Ek9tvrUesQ__gcLkvavBGjRxVWI2fFIL8yQbBrs.jpg?auto=webp&s=57fee9a85f680b5001b2991fec4a8e7c33757840', 'width': 480}, 'variants': {}}]}
Need advice on my PC spec
0
Hey everyone! I just got an estimate from a friend who has more experience than me for my first PC build, around $7,221 USD. It has some high-end components like dual RTX 4090s and Intel Xeon processors. Here's a rough breakdown of the costs:

- CPUs: ~$2,000 each
- Coolers: ~$100 each
- Motherboard: ~$500
- Memory: ~$100
- Storage: ~$80
- Graphics cards: ~$1,600 each
- Case: ~$200
- Power supply: ~$300

Do you think this is a good setup? Would love your thoughts!

Use case: to help my family run their personal family business (an office of 8 people) plus private home stuff.
2025-05-06T00:36:15
https://www.reddit.com/r/LocalLLaMA/comments/1kfrmow/need_advice_on_my_pc_spec/
AfraidScheme433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfrmow
false
null
t3_1kfrmow
/r/LocalLLaMA/comments/1kfrmow/need_advice_on_my_pc_spec/
false
false
self
0
null
Expected Mac Studio M3 Ultra TTFT with MLX?
0
I run `mlx-community/DeepSeek-R1-4bit` with `mlx-lm` (version `0.24.0`) directly and am seeing ~60s for the time to first token. I see in posts like [this](https://www.reddit.com/r/LocalLLaMA/comments/1kfi8xh/benchmark_quickanddirty_test_of_5_models_on_a_mac/) and [this](https://www.reddit.com/r/LocalLLaMA/comments/1jw9fba/macbook_pro_m4_max_inference_speeds/) that the TTFT should not be this long, maybe ~15s. Is it expected to see 60s for TTFT with a small context window on a Mac Studio M3 Ultra? The prompt I run is: `mlx_lm.generate --model mlx-community/DeepSeek-R1-4bit --prompt "Explain to me why sky is blue at an physiscist Level PhD."`
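One way to isolate TTFT from total generation time is mlx-lm's streaming API; a sketch, assuming a recent mlx-lm where `stream_generate` is available (the prompt matches the command above):

```python
# Sketch: measure time-to-first-token with mlx-lm's streaming API.
import time
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/DeepSeek-R1-4bit")
prompt = "Explain to me why sky is blue at an physiscist Level PhD."

start = time.perf_counter()
for i, response in enumerate(stream_generate(model, tokenizer, prompt, max_tokens=32)):
    if i == 0:
        # Time until the very first token is yielded = TTFT.
        print(f"TTFT: {time.perf_counter() - start:.1f}s")
    print(response.text, end="", flush=True)
```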
2025-05-06T00:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1kfrsdi/expected_mac_studio_m3_ultra_ttft_with_mlx/
nonredditaccount
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfrsdi
false
null
t3_1kfrsdi
/r/LocalLLaMA/comments/1kfrsdi/expected_mac_studio_m3_ultra_ttft_with_mlx/
false
false
self
0
null
Advice: Wanting to create a Claude.ai server on my LAN for personal use
8
So I am Super new to all this LLM stuff, and y'all will probably be frustrated at my lack of knowledge. Appologies in advanced. If there is a better place to post this, please delete and repost to the proper forum or tell me. I have been using Claude.ai and having had a blast. I've been using the free version to help me with Commodore Basic 7.0 code, and it's been so much fun! I hit the limits of usage whenever I consult it. So what I would like to do is build a computer to put on my LAN so I don't have the limitations (if it's even possible) of the number of tokens or whatever it is that it has. Again, I am not sure if that is possible, but it can't hurt to ask, right? I have a bunch of computer parts that I could cobble something together. I understand it won't be near as fast/responsive as Claude.ai - BUT that is ok. I just want something I could have locally without the limtations, or not have to spend $20/month I was looking at this: https://www.kdnuggets.com/using-claude-3-7-locally As far as hardware goes, I have an i7 and willing to purchase a minimum graphics card and memory (like a 4060 8g for <%500 [I realize 16gb is prefered] - or maybe the 3060 12gb for < $400). So, is this realistic, or am I (probably) just not understanding all of what's involved? Feel free to flame me or whatever, I realize I don't know much about this and just want a Claude.ai on my LAN. And after following that tutorial, not sure how I would access it over the LAN. But baby steps. I'm semi-Tech-savy, so I hope I could figure it out.
2025-05-06T01:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1kfs61t/advice_wanting_to_create_a_claudeai_server_on_my/
phIIX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfs61t
false
null
t3_1kfs61t
/r/LocalLLaMA/comments/1kfs61t/advice_wanting_to_create_a_claudeai_server_on_my/
false
false
self
8
null
Local solutions for long-context?
4
Hi folks, I work in a small team within an org and we have a relatively small knowledge base (~10,000 tokens). I've tried RAG but found it difficult to implement, particularly getting the embedding model to select the right chunks. Since our knowledge base is small I want to know if a more straightforward solution would be better. Basically I'd like to host an LLM where the entirety of the knowledge base is loaded into the context at the start of every chat session. So rather than using RAG to provide the LLM chunks of documents, to just provide it all of the documents instead. Is this feasible given the size of our knowledge base? Any suggestions for applications/frameworks, or models that are good at this? Thanks
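If the whole knowledge base really is ~10k tokens, simply prepending it works with any local OpenAI-compatible server (llama.cpp's server, LM Studio, vLLM, ...); a minimal sketch where the URL, model name, and file layout are assumptions:

```python
# Sketch: skip RAG by sending the entire knowledge base as a system
# message on every chat session.
from pathlib import Path
from openai import OpenAI

# Concatenate all documents (assumes markdown files in ./kb).
knowledge = "\n\n".join(p.read_text() for p in Path("kb").glob("*.md"))

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="qwen3-14b",
    messages=[
        {"role": "system", "content": f"Answer using only these documents:\n{knowledge}"},
        {"role": "user", "content": "What is our leave policy?"},
    ],
)
print(resp.choices[0].message.content)
```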
2025-05-06T01:17:53
https://www.reddit.com/r/LocalLLaMA/comments/1kfsgl8/local_solutions_for_longcontext/
ASTRdeca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfsgl8
false
null
t3_1kfsgl8
/r/LocalLLaMA/comments/1kfsgl8/local_solutions_for_longcontext/
false
false
self
4
null
Using RunPod for Qwen3? Is there a more cost effective option for personal use?
1
[removed]
2025-05-06T01:32:21
https://www.reddit.com/r/LocalLLaMA/comments/1kfsqqa/using_runpod_for_qwen3_is_there_a_more_cost/
Ambitious_Donkey6605
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfsqqa
false
null
t3_1kfsqqa
/r/LocalLLaMA/comments/1kfsqqa/using_runpod_for_qwen3_is_there_a_more_cost/
false
false
self
1
null
Personal project - Hosting Qwen3-32b - RunPod?
6
I'm currently developing a personal project for myself that requires an LLM. I just want to understand RunPod's billing for an intermittently used personal project. If I run a 4090 for a few minutes while using the flex-workers setup, am I only paying for those few minutes plus storage? Are there any alternatives that are cheaper for a sparingly used LLM project?
2025-05-06T01:40:24
https://www.reddit.com/r/LocalLLaMA/comments/1kfsw9x/personal_project_hosting_qwen332b_runpod/
fake-bird-123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfsw9x
false
null
t3_1kfsw9x
/r/LocalLLaMA/comments/1kfsw9x/personal_project_hosting_qwen332b_runpod/
false
false
self
6
null
Cached input locally?????
0
I'm running something super insane with AI, the best AI, Qwen! The first half of the prompt is always the same, and it's short tho, 150 tokens. I need to make 300 calls in a row, and only the things after the first part change. Can I cache the input? Can I do it in LM Studio specifically?
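I can't speak to how LM Studio handles this internally, but llama.cpp's own server keeps the KV cache of the previous request and only re-processes the changed suffix when `cache_prompt` is enabled; a sketch, with the port and prefix as illustrative assumptions:

```python
# Sketch: reuse a shared 150-token prefix against llama.cpp's server.
import requests

PREFIX = "You are a strict JSON classifier. Labels: ..."  # the fixed part

for item in ["text one", "text two", "text three"]:
    r = requests.post("http://localhost:8080/completion", json={
        "prompt": PREFIX + item,
        "n_predict": 64,
        "cache_prompt": True,  # reuse the KV cache of the common prefix
    })
    print(r.json()["content"])
```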
2025-05-06T01:43:49
https://www.reddit.com/r/LocalLLaMA/comments/1kfsynd/chached_input_locally/
Osama_Saba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfsynd
false
null
t3_1kfsynd
/r/LocalLLaMA/comments/1kfsynd/chached_input_locally/
false
false
self
0
null
Should I build my own server for MoE?
5
I am thinking about building a server/PC to run MoE models, and maybe eventually add a second GPU to run larger dense models. Here is what I have so far:

* Supermicro X10DRi-T4+ motherboard
* 2x Intel Xeon E5-2620 v4 CPUs (8 cores each, 16 total cores)
* 8x 32GB DDR4-2400 ECC RDIMM (256GB total RAM)
* 1x NVIDIA RTX 3090 GPU

I already have a spare 3090, and the rest of the parts would be cheap, like under $200 for everything. Is it worth pursuing? I'd like to use the MoE models, fill up that RAM, and use the 3090 to speed things up. I currently run Qwen3 30B-A3B on my work computer and it is very snappy on my 3090 with 64GB of DDR5 RAM. Since I could get DDR4 RAM cheap, I could work towards running the Qwen3 235B-A22B model or even larger MoEs. This motherboard setup is also appealing because it has enough PCIe lanes to run two 3090s - so a cheaper alternative to Threadripper if I did not want to really use the DDR4. Is there anything else I should consider? I don't want to just make a purchase, because it would be cool to build something, but not if I would not really see much of a performance change from my work computer. I could invest that money into upgrading to 128GB of DDR5 RAM instead.
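The usual pattern for this hardware is partial offload: as many layers as fit on the 3090, the rest in system RAM. A sketch with llama-cpp-python, where the model path, layer count, and context size are illustrative assumptions to tune:

```python
# Sketch: hybrid GPU/CPU inference for a big MoE GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-235B-A22B-Q4_K_M-00001-of-00003.gguf",
    n_ctx=8192,
    n_gpu_layers=20,   # raise until the 3090's ~24 GB VRAM is full
    n_threads=16,      # 2x E5-2620 v4 = 16 physical cores
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}])
print(out["choices"][0]["message"]["content"])
```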
2025-05-06T01:48:51
https://www.reddit.com/r/LocalLLaMA/comments/1kft22l/should_i_build_my_own_server_for_moe/
fgoricha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kft22l
false
null
t3_1kft22l
/r/LocalLLaMA/comments/1kft22l/should_i_build_my_own_server_for_moe/
false
false
self
5
null
Qwen 14B is better than me...
652
I'm crying, what's the point of living when a 9GB file on my hard drive is better than me at everything! It expresses itself better, it codes better, knows math better, knows how to talk to girls, and uses tools instantly that would take me hours to figure out... I'm a useless POS, and you all are too... Maybe if you told me I'm like a 1TB I could deal with that, but 9GB???? That's so small I won't even notice it on my phone..... Not only all of that, it also writes and thinks faster than me, in different languages... I barely learned English as a 2nd language after 20 years.... I'm not even sure if I'm better than the 8B, but I spot it making mistakes that I won't make... But the 14B? Nope, if I ever think it's wrong then it'll prove to me that it isn't...
2025-05-06T01:54:32
https://www.reddit.com/r/LocalLLaMA/comments/1kft5yu/qwen_14b_is_better_than_me/
Osama_Saba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kft5yu
false
null
t3_1kft5yu
/r/LocalLLaMA/comments/1kft5yu/qwen_14b_is_better_than_me/
false
false
self
652
null
Qwen3-32B-Q4 GGUFs MMLU-PRO benchmark comparison - IQ4_XS / Q4_K_M / UD-Q4_K_XL / Q4_K_L
95
**MMLU-PRO 0.25 subset (3003 questions), 0 temp, No Think, IQ4_XS, Q8 KV cache**

Qwen3-32B-IQ4_XS / Q4_K_M / UD-Q4_K_XL / Q4_K_L

The entire benchmark took ***12 hours, 17 minutes and 53 seconds.***

Observation: IQ4_XS is the most efficient Q4 quant for 32B; the quality difference is minimal.

https://preview.redd.it/ux7ohwp9i2ze1.png?width=813&format=png&auto=webp&s=5540409d217577a2511ea158329ded47c598990f

https://preview.redd.it/g5fwhfeai2ze1.png?width=1420&format=png&auto=webp&s=8b1e705703af19fbdbab465029ff9a3921616eb5

https://preview.redd.it/8kctawyai2ze1.png?width=2187&format=png&auto=webp&s=7b2686abe38d52897e6cfb10d263eed7114ad237

*The official MMLU-PRO leaderboard is listing the score of the Qwen3 base model instead of instruct; that's why these IQ4 quants score higher than the one on the MMLU-PRO leaderboard.*

GGUF sources:
[https://huggingface.co/unsloth/Qwen3-32B-GGUF](https://huggingface.co/unsloth/Qwen3-32B-GGUF)
[https://huggingface.co/bartowski/Qwen_Qwen3-32B-GGUF](https://huggingface.co/bartowski/Qwen_Qwen3-32B-GGUF)
2025-05-06T01:57:26
https://www.reddit.com/r/LocalLLaMA/comments/1kft806/qwen332bq4_ggufs_mmlupro_benchmark_comparison_iq4/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kft806
false
null
t3_1kft806
/r/LocalLLaMA/comments/1kft806/qwen332bq4_ggufs_mmlupro_benchmark_comparison_iq4/
false
false
https://external-preview…b2621f57564c2aee
95
{'enabled': False, 'images': [{'id': 'NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?width=108&crop=smart&auto=webp&s=ed0fb1e36d7538651f32e69014c1e794431c774f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?width=216&crop=smart&auto=webp&s=34afa525de3a4c6aaa9c2811084e1dea84cd5c2b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?width=320&crop=smart&auto=webp&s=42c1614f410ee12ab7574b41c58b0c02a17afe41', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?width=640&crop=smart&auto=webp&s=f486e072431ebbf747ecc9e5b34d3c56ba17fda6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?width=960&crop=smart&auto=webp&s=65dbe2d0d2a9aef28d95c30d1de6b1d210304094', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?width=1080&crop=smart&auto=webp&s=a10227953fcd6ca70167b6c82c6c2b2209862ada', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NQpYaauXMlT7nzqNv0PNj3mHiJO1bt0uUtsvQ0JZElk.png?auto=webp&s=fbb68ed9e1a7e1002a0329bee95edbae06ecbe17', 'width': 1200}, 'variants': {}}]}
OpenAI buys Windsurf for $3B
0
https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion?
2025-05-06T02:16:20
https://www.reddit.com/r/LocalLLaMA/comments/1kftlok/open_ai_buys_windsurf_for_3b/
appakaradi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kftlok
false
null
t3_1kftlok
/r/LocalLLaMA/comments/1kftlok/open_ai_buys_windsurf_for_3b/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uKR-MhZlGNA5QVYfhviZz_faisyrxvjzrIQxyosjabw', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?width=108&crop=smart&auto=webp&s=1f45d6092b6a6a459ef56525796cc3a26ea2a007', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?width=216&crop=smart&auto=webp&s=fb016fdd2b5e1011cc905a82d53b2a048352f85a', 'width': 216}, {'height': 210, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?width=320&crop=smart&auto=webp&s=e96dd456c20ec83c18a34bd4e76191df64fcc52a', 'width': 320}, {'height': 421, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?width=640&crop=smart&auto=webp&s=f63bbe0a9f37778ae8336382d5f3eabace7c08f3', 'width': 640}, {'height': 632, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?width=960&crop=smart&auto=webp&s=695de13a41c8ccb57eb9f970e8e1eb08669d6770', 'width': 960}, {'height': 711, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?width=1080&crop=smart&auto=webp&s=df9ae58bd38d7315f2fcc61d1a40142216b4c9d5', 'width': 1080}], 'source': {'height': 790, 'url': 'https://external-preview.redd.it/Lhh87zCT1lmKhDSfpobtiBaGGkslLmEUIIuW0_UkgiE.jpg?auto=webp&s=03f314ad19cae2e7ab8cca39df02e70a47404675', 'width': 1200}, 'variants': {}}]}
Has someone written a good blog post about the lifecycle of an open-source GPT model and its quantizations/versions? Who tends to put those versions out?
3
I am newer to LLMs, but as I understand it, once an LLM is "out", there is the option to quantize it to greatly reduce the system resources it needs to run. There is then the option of PTQ (post-training quantization) or QAT (quantization-aware training), depending on the system resources you have available and whether you are willing to retrain it.

So take for example LLaMA 4, released about a month ago. It has this idea of experts, which I don't fully understand, but it seems to be an innovation on inference: for every request, even though the model is gargantuan, only a subset that is much more manageable to compute with is used to generate a response. That being said, clearly I don't understand what experts bring to the table or how they impact what kind of hardware LLaMA can run on.

We have Behemoth (coming soon), Maverick at a model size of 125.27GB with 17B active parameters, and Scout at a model size of 114.53GB, also with 17B active parameters. The implication here is that while a high-VRAM device may be able to use these for inference, it is going to be dramatically held back by paging things in and out of VRAM. A computer that wants to run LLaMA 4 should ideally have at least 115GB of VRAM. I am not sure if that's even right, though, as normally I would assume 17B active parameters means 32GB of VRAM is sufficient.

It looks like Meta did do some quantization on these released models. When might further quantization come into play? I am assuming no one has the resources to do QAT, so we have to wait for Meta to decide if they want to try anything there. The community, however, could take a crack at PTQ. For example, with LLaMA 3.3 I can see a community model that uses Q3_K_L to shrink the model size to 37.14GB while keeping all 70B parameters. Nonetheless, OpenLLM advises me that my 48GB M4 Max may not be up to the task of that model, despite it being able to technically fit the model into memory.

What I am hoping to understand is: now that LLaMA 4 is out, if the community likes it and deems it worthy, do people tend to figure out ways to shrink such a model down to laptop-sized models using quantization (at a tradeoff of accuracy)? How long might it take to see a LLaMA 4 that can run on the same hardware a fairly standard 32B model could? I feel like I hear occasional excitement that "_ has taken model _ and made it _ so that it can run on just about any MacBook", but I don't get how community models get there or how long that process takes.
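The file sizes above follow from simple arithmetic: parameters times bits-per-weight, divided by 8. A quick sketch (the bits-per-weight figures are rough averages I'm assuming for each quant type, not exact GGUF numbers):

```python
# Back-of-the-envelope GGUF size: parameters x bits-per-weight / 8.
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

# Llama 3.3 70B at ~4.25 bpw (Q3_K_L-ish) -> ~37 GB,
# matching the 37.14 GB community quant mentioned above.
print(gguf_size_gb(70e9, 4.25))

# Scout's ~109B total parameters at a ~4.5 bpw Q4 -> ~61 GB,
# roughly half the released 114.53 GB file.
print(gguf_size_gb(109e9, 4.5))
```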
2025-05-06T02:21:47
https://www.reddit.com/r/LocalLLaMA/comments/1kftphl/has_someone_written_a_good_blog_post_about/
kierumcak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kftphl
false
null
t3_1kftphl
/r/LocalLLaMA/comments/1kftphl/has_someone_written_a_good_blog_post_about/
false
false
self
3
null
Low Cost Multilingual TTS Solutions
1
[removed]
2025-05-06T02:24:05
https://www.reddit.com/r/LocalLLaMA/comments/1kftr2x/low_cost_multilingual_tts_solutions/
Ok_Clock_2728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kftr2x
false
null
t3_1kftr2x
/r/LocalLLaMA/comments/1kftr2x/low_cost_multilingual_tts_solutions/
false
false
self
1
null
Draft Model Compatible With unsloth/Qwen3-235B-A22B-GGUF?
15
I have installed unsloth/Qwen3-235B-A22B-GGUF, and while it runs, it's only about 4 t/sec. I was hoping to speed it up a bit with a draft model such as unsloth/Qwen3-30B-A3B-GGUF or unsloth/Qwen3-8B-GGUF, but the smaller models are not "compatible". I've used draft models with Llama with no problems. I don't know enough about draft models to know what makes them compatible, other than that they have to be in the same family. For example, I don't know if it's possible to use draft models with an MoE model. Is it possible at all with Qwen3?
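Compatibility usually comes down to the draft sharing the target's tokenizer/vocabulary, since the target has to verify the draft's token IDs directly. A quick way to sanity-check two candidates (repo names are illustrative; this downloads only the tokenizer files):

```python
# Sketch: check whether a draft model shares the target's vocabulary.
from transformers import AutoTokenizer

target = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B")
draft = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

print(target.vocab_size == draft.vocab_size)    # same vocab size?
print(target.get_vocab() == draft.get_vocab())  # identical token->id map?
```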
2025-05-06T02:28:20
https://www.reddit.com/r/LocalLLaMA/comments/1kftu3s/draft_model_compatible_with/
Simusid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kftu3s
false
null
t3_1kftu3s
/r/LocalLLaMA/comments/1kftu3s/draft_model_compatible_with/
false
false
self
15
null
Built an open-source tool to easily compare token counts across different LLMs (GPT, Claude, HF models, etc.)
1
[removed]
2025-05-06T02:55:42
https://www.reddit.com/r/LocalLLaMA/comments/1kfucod/built_an_opensource_tool_to_easily_compare_token/
Historical_Pepper888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfucod
false
null
t3_1kfucod
/r/LocalLLaMA/comments/1kfucod/built_an_opensource_tool_to_easily_compare_token/
false
false
https://b.thumbs.redditm…9qnOb37AxYRg.jpg
1
{'enabled': False, 'images': [{'id': 'b3_nc9eUM96LdvrtRkpsiSfLCjhmgpLRDj18BCf7ynE', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=108&crop=smart&auto=webp&s=c73c2dc0d57ce9172cfe617fde07de0904c57004', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=216&crop=smart&auto=webp&s=3af2a89e70fb29327a34c3df60719785a9786eca', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=320&crop=smart&auto=webp&s=764e629585154ec7ed2a38ba3edf66c55c96637d', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=640&crop=smart&auto=webp&s=b6f46441d285ac5546160617191bc8f5966fe4ae', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=960&crop=smart&auto=webp&s=424c542e26c50f1c677a06c9fe56900dc3585b17', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=1080&crop=smart&auto=webp&s=9e65e630076b2bf464aae1535e4b63ec77cd8b0d', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?auto=webp&s=a65074fce8eb58ade52fb9421f109c9518de98d5', 'width': 2473}, 'variants': {}}]}
Built an open-source tool to easily compare token counts across different LLMs (GPT, Claude, HF models, etc.)
1
[removed]
2025-05-06T02:58:57
https://www.reddit.com/r/LocalLLaMA/comments/1kfueu0/built_an_opensource_tool_to_easily_compare_token/
Historical_Pepper888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfueu0
false
null
t3_1kfueu0
/r/LocalLLaMA/comments/1kfueu0/built_an_opensource_tool_to_easily_compare_token/
false
false
https://a.thumbs.redditm…70XkHCys6k34.jpg
1
{'enabled': False, 'images': [{'id': 'b3_nc9eUM96LdvrtRkpsiSfLCjhmgpLRDj18BCf7ynE', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=108&crop=smart&auto=webp&s=c73c2dc0d57ce9172cfe617fde07de0904c57004', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=216&crop=smart&auto=webp&s=3af2a89e70fb29327a34c3df60719785a9786eca', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=320&crop=smart&auto=webp&s=764e629585154ec7ed2a38ba3edf66c55c96637d', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=640&crop=smart&auto=webp&s=b6f46441d285ac5546160617191bc8f5966fe4ae', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=960&crop=smart&auto=webp&s=424c542e26c50f1c677a06c9fe56900dc3585b17', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?width=1080&crop=smart&auto=webp&s=9e65e630076b2bf464aae1535e4b63ec77cd8b0d', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/HqToyOsrmDCjBuhXei1-vt-6tIgpPrf7PUaZlCXpTVk.jpg?auto=webp&s=a65074fce8eb58ade52fb9421f109c9518de98d5', 'width': 2473}, 'variants': {}}]}
Anybody have luck finetuning Qwen3 Base models?
11
I've been trying to finetune Qwen3 Base models (just the regular smaller ones, not even the MoE ones) and that doesn't seem to work well. Basically, the fine-tuned model either keeps generating text endlessly or keeps generating bad tokens after the response. Their instruction-tuned models are all obviously working well, so there must be something missing in configuration or settings? I'm not sure if anyone has insights into this or has access to someone from the Qwen3 team to find out. It has been quite disappointing not knowing what I'm missing. I was told fine-tunes of the instruction-tuned models seem to be fine, but that's not what I'm trying to do.
2025-05-06T03:23:19
https://www.reddit.com/r/LocalLLaMA/comments/1kfuv3v/anybody_have_luck_finetuning_qwen3_base_models/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfuv3v
false
null
t3_1kfuv3v
/r/LocalLLaMA/comments/1kfuv3v/anybody_have_luck_finetuning_qwen3_base_models/
false
false
self
11
null
R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning
28
2025-05-06T03:37:36
https://github.com/yfzhang114/r1_reward
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
1kfv4az
false
null
t3_1kfv4az
/r/LocalLLaMA/comments/1kfv4az/r1reward_training_multimodal_reward_model_through/
false
false
https://external-preview…3386c933cb12c022
28
{'enabled': False, 'images': [{'id': 'DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?width=108&crop=smart&auto=webp&s=5b56af501d8332016377d3d366f2e8dc5fb835d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?width=216&crop=smart&auto=webp&s=7bffb4297576da4f8ce17d9ca8de17c4354628d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?width=320&crop=smart&auto=webp&s=b8176357e353c6d03cbb8be48f6740ef668b4e84', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?width=640&crop=smart&auto=webp&s=bfd0e976dcabe6674bcf24ee8c68272ca23e30bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?width=960&crop=smart&auto=webp&s=534eb7a10d88da12fa4a85c7f30aca51650f0779', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?width=1080&crop=smart&auto=webp&s=6b6efeeb40dd3bc1d634b1e7408bbd4c4a522308', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DRUyBXGCdAbmSo-nJrtkAf7BikOvApJQt5EoUxj_MX8.png?auto=webp&s=9fdda972f9723fa69fae44cb97180bec5021cdc3', 'width': 1200}, 'variants': {}}]}
Please enlighten a beginner :)
1
[removed]
2025-05-06T03:41:26
https://www.reddit.com/r/LocalLLaMA/comments/1kfv6s3/please_enlighten_an_beginner/
LeMrXa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfv6s3
false
null
t3_1kfv6s3
/r/LocalLLaMA/comments/1kfv6s3/please_enlighten_an_beginner/
false
false
self
1
null
VRAM requirements for all Qwen3 models (0.6B to 32B) – what fits on your GPU?
1
Have been testing out the Qwen3 models and compiled a VRAM table I thought could be helpful for the community. I used Unsloth quantizations for the best balance of performance and size. Even Qwen3-4B works impressively well with MCP tools! **Note:** TPS (tokens per second) is just a rough ballpark from short prompt testing (e.g., one-liner questions). If you're curious about how to set up the system prompt and parameters for Qwen3-4B with MCP, feel free to check out my video: [https://youtu.be/N-B1rYJ61a8?si=ilQeL1sQmt-5ozRD](https://youtu.be/N-B1rYJ61a8?si=ilQeL1sQmt-5ozRD)
2025-05-06T03:44:52
https://i.redd.it/0pe5g7je13ze1.png
AdOdd4004
i.redd.it
1970-01-01T00:00:00
0
{}
1kfv8uz
false
null
t3_1kfv8uz
/r/LocalLLaMA/comments/1kfv8uz/vram_requirements_for_all_qwen3_models_06b_to_32b/
false
false
https://b.thumbs.redditm…EMFQYR6K6HyM.jpg
1
{'enabled': True, 'images': [{'id': 'smB9kC6dME93h6mNtpy_Vx0hXEiUa2-lqmVyunK4_t4', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?width=108&crop=smart&auto=webp&s=7bb7d10fc5bd2ad35cb0b6264cec70f87d7f8106', 'width': 108}, {'height': 37, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?width=216&crop=smart&auto=webp&s=72dd1fe59b63006fc0c7afafa59b4e11b87080d4', 'width': 216}, {'height': 55, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?width=320&crop=smart&auto=webp&s=b70b353826385526a582f23e00e6d4bd27dac344', 'width': 320}, {'height': 111, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?width=640&crop=smart&auto=webp&s=347777ab5fde388409a1159c193961ead7f69e54', 'width': 640}, {'height': 167, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?width=960&crop=smart&auto=webp&s=77168d09ad4b9c853c49067a49071dfa509fecd1', 'width': 960}, {'height': 187, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?width=1080&crop=smart&auto=webp&s=3e3bdb145aeec4feb5eff6a891954a77d11a087b', 'width': 1080}], 'source': {'height': 236, 'url': 'https://preview.redd.it/0pe5g7je13ze1.png?auto=webp&s=2af52c928eb207c69911b5d4b83190a60351bf00', 'width': 1356}, 'variants': {}}]}
VRAM requirements for all Qwen3 models (0.6B–32B) – what fits on your GPU?
162
I used Unsloth quantizations for the best balance of performance and size. Even Qwen3-4B runs impressively well with MCP tools! **Note:** TPS (tokens per second) is just a rough ballpark from short prompt testing (e.g., one-liner questions). If you’re curious about how to set up the system prompt and parameters for Qwen3-4B with MCP, feel free to check out my video: ▶️ [https://youtu.be/N-B1rYJ61a8?si=ilQeL1sQmt-5ozRD](https://youtu.be/N-B1rYJ61a8?si=ilQeL1sQmt-5ozRD)
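For anyone who wants to extend a table like this to other models, the rough formula is quantized weights plus KV cache; a sketch where the Qwen3-14B-ish architecture numbers are assumptions, not measured values:

```python
# Rough VRAM estimate: quantized weights + KV cache (GQA-aware).
def vram_gb(n_params, bpw, layers, kv_heads, head_dim, ctx, kv_bytes=2):
    weights = n_params * bpw / 8                            # quantized weights
    kv = 2 * layers * kv_heads * head_dim * ctx * kv_bytes  # K and V caches
    return (weights + kv) / 1e9

# e.g. ~14.8B params at ~Q4 with an 8K context -> roughly 10 GB.
print(vram_gb(14.8e9, bpw=4.5, layers=40, kv_heads=8, head_dim=128, ctx=8192))
```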
2025-05-06T03:48:44
https://i.redd.it/l8bxcpzj23ze1.png
AdOdd4004
i.redd.it
1970-01-01T00:00:00
0
{}
1kfvba4
false
null
t3_1kfvba4
/r/LocalLLaMA/comments/1kfvba4/vram_requirements_for_all_qwen3_models_06b32b/
false
false
https://b.thumbs.redditm…NjoOdI2ZpR-k.jpg
162
{'enabled': True, 'images': [{'id': 'uHFlezFbQPgNW-vDTYSwUBQSR3e0-kaTUBynj40GbjU', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?width=108&crop=smart&auto=webp&s=02879c88c5705312b2ec0a9e13e0df027388e6e4', 'width': 108}, {'height': 37, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?width=216&crop=smart&auto=webp&s=f279162df3d52839ca9b8b35b042744a0af32b1a', 'width': 216}, {'height': 55, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?width=320&crop=smart&auto=webp&s=072619af1b5ba27b524a0cddbb56bd12f7f7b1f9', 'width': 320}, {'height': 111, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?width=640&crop=smart&auto=webp&s=aabedd846224fbf7f398e436fd72a24d816f674a', 'width': 640}, {'height': 167, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?width=960&crop=smart&auto=webp&s=e0c1bb654aa7caac92b129c6250cfdafc2d3e3ac', 'width': 960}, {'height': 187, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?width=1080&crop=smart&auto=webp&s=e4904dd5774a7a8a1f9431d872c20437ae195384', 'width': 1080}], 'source': {'height': 236, 'url': 'https://preview.redd.it/l8bxcpzj23ze1.png?auto=webp&s=9a77f0f76bccc6908929deec3ea48aaf22a1bb0f', 'width': 1356}, 'variants': {}}]}
MOC (Model on Chip)?
14
I'm fairly certain AI is going to end up as MOCs (models baked onto chips for ultra efficiency). It's just a matter of time until one is small enough and good enough to start production for. I think Qwen 3 is going to be the first MOC. Thoughts?
2025-05-06T04:44:29
https://www.reddit.com/r/LocalLLaMA/comments/1kfw8rb/moc_model_on_chip/
astral_crow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfw8rb
false
null
t3_1kfw8rb
/r/LocalLLaMA/comments/1kfw8rb/moc_model_on_chip/
false
false
self
14
null
Are Two 5090 GPUs Sufficient for AI R&D into Novel Architectures?
1
[removed]
2025-05-06T04:51:46
https://www.reddit.com/r/LocalLLaMA/comments/1kfwcx8/are_two_5090_gpus_sufficient_for_ai_rd_into_novel/
ParkingImpressive168
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfwcx8
false
null
t3_1kfwcx8
/r/LocalLLaMA/comments/1kfwcx8/are_two_5090_gpus_sufficient_for_ai_rd_into_novel/
false
false
self
1
null
Best tool callers
2
Has anyone had any luck with tool calling models on local hardware? I've been playing around with Qwen3:14b.
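Qwen3 through Ollama handles function schemas reasonably well in my understanding; a minimal sketch with the ollama Python client, where the weather function and schema are illustrative assumptions:

```python
# Sketch: tool calling with the ollama Python client and Qwen3.
import ollama

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real lookup

resp = ollama.chat(
    model="qwen3:14b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
# If the model decided to call the tool, execute it.
for call in resp.message.tool_calls or []:
    print(get_weather(**call.function.arguments))
```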
2025-05-06T05:05:27
https://www.reddit.com/r/LocalLLaMA/comments/1kfwko9/best_tool_callers/
soorg_nalyd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfwko9
false
null
t3_1kfwko9
/r/LocalLLaMA/comments/1kfwko9/best_tool_callers/
false
false
self
2
null
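For a concrete starting point on local tool calling, here is a minimal sketch using the `ollama` Python client with a pulled `qwen3:14b`; `get_weather` is a hypothetical tool, and passing plain Python functions as tools assumes a recent client version:

```python
# Minimal local tool-calling sketch with the ollama Python client (pip install ollama)
# against a pulled qwen3:14b. get_weather is a hypothetical example tool.
import ollama

def get_weather(city: str) -> str:
    """Hypothetical tool the model can call."""
    return f"Sunny and 22C in {city}"

response = ollama.chat(
    model="qwen3:14b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather],  # recent clients build a tool schema from the signature
)

# Execute whatever tool calls the model requested.
for call in response.message.tool_calls or []:
    if call.function.name == "get_weather":
        print(get_weather(**call.function.arguments))
```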
Lighteval - running out of memory
2
For people who have used lighteval from HuggingFace: I'm running the simple tutorial command `lighteval accelerate "pretrained=gpt2" "leaderboard|truthfulqa:mc|0|0"` and I keep running out of memory. Has anyone encountered this too? What can I do? I tried running it locally on my Mac (M1 chip) as well as on Google Colab. Genuinely unsure how to proceed, any help would be greatly appreciated. Thank you so much!!!!!!
2025-05-06T05:34:02
https://www.reddit.com/r/LocalLLaMA/comments/1kfx0au/lighteval_running_out_of_memory/
darkGrayAdventurer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfx0au
false
null
t3_1kfx0au
/r/LocalLLaMA/comments/1kfx0au/lighteval_running_out_of_memory/
false
false
self
2
null
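For the lighteval OOM question above, one workaround worth trying is forcing a batch size of 1, since the auto-detected batch size can overshoot on small machines. A sketch; the `--override-batch-size` flag is an assumption about the lighteval CLI that may vary by version, so check `lighteval accelerate --help` first:

```python
# Sketch: force batch size 1 to reduce peak memory. The --override-batch-size
# flag is an assumption about the lighteval CLI and may differ by version.
import subprocess

subprocess.run(
    [
        "lighteval", "accelerate",
        "--override-batch-size", "1",   # assumption: supported by your version
        "pretrained=gpt2",
        "leaderboard|truthfulqa:mc|0|0",
    ],
    check=True,
)
```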
Proof of concept: Ollama chat in PowerToys Command Palette
69
Suddenly had a thought last night that if we could access an LLM chatbot directly in [PowerToys Command Palette](https://learn.microsoft.com/en-us/windows/powertoys/command-palette/overview) (which is basically a Windows alternative to Mac's Spotlight), it would be quite convenient, so I made this simple extension to chat with Ollama. To be honest I think this has much more potential, but I am not really into desktop application development. If anyone is interested, you can find the code at [https://github.com/LioQing/cmd-pal-ollama-extension](https://github.com/LioQing/cmd-pal-ollama-extension)
2025-05-06T06:14:04
https://v.redd.it/4dcuhg27r3ze1
GGLio
/r/LocalLLaMA/comments/1kfxl36/proof_of_concept_ollama_chat_in_powertoys_command/
1970-01-01T00:00:00
0
{}
1kfxl36
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4dcuhg27r3ze1/DASHPlaylist.mpd?a=1749233652%2CMzgyZWFmYmU0ZmRkMmYyMTNmMzhkNWE1OGUxODM5ZTRkYjRlNDFkMjkwZWViY2M5ODllNjYyYTZmM2YzYjY4OA%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/4dcuhg27r3ze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4dcuhg27r3ze1/HLSPlaylist.m3u8?a=1749233652%2CNWI1NDJhY2RkMjA0OTEyMjZmYjg1YzZhNzczNjQ3M2ZiYzUzYmE3NmUxZTNkZWFjYTRmZDNiOTAyYWQ1MDRmNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4dcuhg27r3ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kfxl36
/r/LocalLLaMA/comments/1kfxl36/proof_of_concept_ollama_chat_in_powertoys_command/
false
false
https://external-preview…955fae64e81a117c
69
{'enabled': False, 'images': [{'id': 'dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a20529325dcfbf0aa381f95a7c4314b91dc3d14', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?width=216&crop=smart&format=pjpg&auto=webp&s=f330d0fc68c5fc451616533c5de2c24e17842944', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?width=320&crop=smart&format=pjpg&auto=webp&s=998d70cf833f066db2f794a7652bb1e4deb6789d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?width=640&crop=smart&format=pjpg&auto=webp&s=fa16674a7818eecd2220eaca3b8007d10906b140', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?width=960&crop=smart&format=pjpg&auto=webp&s=ee2cadb2715fe1723b8e5c9b4c3a69d0782ec81d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=00f6669b1ad86b7a355b2c38bbee1538a6d6570c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dHQ1M3pmMjdyM3plMSvmL84iY40gCY7YnbjXn7zDVAjhPLvuGrDKqijUVYcy.png?format=pjpg&auto=webp&s=ab4f17def1c5d88a2478e62500505ebb326fc81c', 'width': 1920}, 'variants': {}}]}
MCP issue for ToolUsing
1
[removed]
2025-05-06T06:30:13
https://www.reddit.com/r/LocalLLaMA/comments/1kfxtfa/mcp_issue_for_toolusing/
One-Impression-1784
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfxtfa
false
null
t3_1kfxtfa
/r/LocalLLaMA/comments/1kfxtfa/mcp_issue_for_toolusing/
false
false
self
1
null
FYI Coding GPU Poor: Qwen3 A30B A3B vs... why not both?
1
[removed]
2025-05-06T06:31:38
https://www.reddit.com/r/LocalLLaMA/comments/1kfxu4u/fyi_coding_gpu_poor_qwen3_a30b_a3b_vs_why_not_both/
AfterAte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfxu4u
false
null
t3_1kfxu4u
/r/LocalLLaMA/comments/1kfxu4u/fyi_coding_gpu_poor_qwen3_a30b_a3b_vs_why_not_both/
false
false
self
1
{'enabled': False, 'images': [{'id': 'reTkFtFjNlM2i6QHsQIrf1rlZgTsPfCEB1nUh457Uag', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?width=108&crop=smart&auto=webp&s=f182f0ddc40414aa7971b1a91aefa934939537bf', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?width=216&crop=smart&auto=webp&s=676bac4872c320259f7f37b724b57b0805266c9d', 'width': 216}, {'height': 201, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?width=320&crop=smart&auto=webp&s=85fdb9c53c50780960ec5467e21a37f571681f3e', 'width': 320}, {'height': 402, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?width=640&crop=smart&auto=webp&s=30364b44db2271c95c65f40f236c5b76cdc32391', 'width': 640}, {'height': 603, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?width=960&crop=smart&auto=webp&s=2b30586fff47fc5d29df205dbe93150ce953406a', 'width': 960}, {'height': 678, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?width=1080&crop=smart&auto=webp&s=1d72e384162f7bdbe2830236f9812d52829d837f', 'width': 1080}], 'source': {'height': 1204, 'url': 'https://external-preview.redd.it/eRtv8n6aFlEpqyp_2Mrshn0fUvW1qne05-5_YQHENfA.jpg?auto=webp&s=4b0d6265743d2d5db33ff561b5bb414109477fa2', 'width': 1916}, 'variants': {}}]}
Local Agents and AMD AI Max
1
I am setting up a server with 128G (AMD AI Max) for local AI. I still plan on using Claude a lot, but I want to see how much I can get done without using credits. I was thinking vLLM would be my best bet (I have experience with Ollama and LM Studio), as I understand it performs a lot better for serving. Will the AMD AI Max 395 be supported? I want to create MCP servers to build out tools for things I will do repeatedly. One thing I want to do is have it research metrics for my industry. I was planning on building tools to create a consistent process for as much as possible, but I also want it to be able to do web search to gather information. I'm familiar with using MCP in Cursor and so on, but what would I use for something like this? I have an n8n instance set up on my Proxmox cluster but I never use it, and I'm not sure I want to. I mostly use Python, but I don't want to build it from scratch. I want to build something similar to Manus locally and see how good it can get on this machine and whether it ends up being valuable.
2025-05-06T07:16:55
https://www.reddit.com/r/LocalLLaMA/comments/1kfygtq/local_agents_and_amd_ai_max/
MidnightProgrammer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfygtq
false
null
t3_1kfygtq
/r/LocalLLaMA/comments/1kfygtq/local_agents_and_amd_ai_max/
false
false
self
1
null
Graduation project
1
[removed]
2025-05-06T07:33:29
https://www.reddit.com/r/LocalLLaMA/comments/1kfyovn/graduation_project/
Any-Understanding835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfyovn
false
null
t3_1kfyovn
/r/LocalLLaMA/comments/1kfyovn/graduation_project/
false
false
self
1
null
Quanta QuantaPlex T42S-2U Node Server as cpu inference
1
[removed]
2025-05-06T07:41:42
https://www.reddit.com/r/LocalLLaMA/comments/1kfysms/quanta_quantaplex_t42s2u_node_server_as_cpu/
Ok_Appeal8653
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfysms
false
null
t3_1kfysms
/r/LocalLLaMA/comments/1kfysms/quanta_quantaplex_t42s2u_node_server_as_cpu/
false
false
self
1
null
Character arc descriptions using LLM
1
Looking to generate character arcs from a novel. System: * RAM: 96 GB * CPU: AMD Ryzen 5 7600 6-Core * GPU: NVIDIA T1000 8GB * Context length: 128000 * Novel: 509,837 chars / 83,988 words = 6 chars / word. * ollama: version 0.6.8 Any model and settings suggestions? Any idea how long the model will take to start generating tokens? Prompt: > You are a professional movie producer and script writer who excels at writing character arcs. You must write a character arc without altering the user's ideas. Write in clear, succinct, engaging language that captures the distinct essence of the character. Do not use introductory phrases. The character arc must be at most three sentences long. Analyze the following novel and write a character arc for ${CHARACTER}:
2025-05-06T07:51:52
https://www.reddit.com/r/LocalLLaMA/comments/1kfyxg7/character_arc_descriptions_using_llm/
autonoma_2042
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfyxg7
false
null
t3_1kfyxg7
/r/LocalLLaMA/comments/1kfyxg7/character_arc_descriptions_using_llm/
false
false
self
1
null
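For the character-arc question above, the first thing to verify is the context window: Ollama defaults to a small `num_ctx`, so an 84k-word novel will be silently truncated unless it is raised. A minimal sketch with the `ollama` Python client; the model name and the 131072 value are assumptions to adapt to your hardware:

```python
# Sketch: long-context single-shot generation with the ollama Python client.
# Model name and num_ctx are assumptions; pick a model whose context you can afford.
import ollama

SYSTEM = ("You are a professional movie producer and script writer who excels "
          "at writing character arcs. The character arc must be at most three "
          "sentences long.")

novel = open("novel.txt", encoding="utf-8").read()

resp = ollama.generate(
    model="qwen3:14b",                 # assumption: any long-context model
    system=SYSTEM,
    prompt=f"{novel}\n\nWrite a character arc for PROTAGONIST:",
    options={"num_ctx": 131072},       # raise Ollama's default context window
)
print(resp.response)
```

On an 8 GB GPU most of the weights and KV cache will spill into system RAM, so expect prompt processing on a full novel to take a long time before the first token appears.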
Why ollama can run q8_0 quantization but huggingface pipeline can't run FP8
1
[removed]
2025-05-06T07:55:56
https://www.reddit.com/r/LocalLLaMA/comments/1kfyzeg/why_ollama_can_run_q8_0_quantization_but/
milky-meda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfyzeg
false
null
t3_1kfyzeg
/r/LocalLLaMA/comments/1kfyzeg/why_ollama_can_run_q8_0_quantization_but/
false
false
self
1
null
Keep or skip CoT reasoning in chat context
1
[removed]
2025-05-06T08:02:15
https://www.reddit.com/r/LocalLLaMA/comments/1kfz2hf/keep_or_skip_cot_reasoning_in_chat_context/
Ill-Still-6859
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfz2hf
false
null
t3_1kfz2hf
/r/LocalLLaMA/comments/1kfz2hf/keep_or_skip_cot_reasoning_in_chat_context/
false
false
https://b.thumbs.redditm…nk0oh2ULhOqc.jpg
1
null
Keep or skip CoT reasoning in chat context?
1
[removed]
2025-05-06T08:08:38
https://www.reddit.com/r/LocalLLaMA/comments/1kfz5kv/keep_or_skip_cot_reasoning_in_chat_context/
Ill-Still-6859
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfz5kv
false
null
t3_1kfz5kv
/r/LocalLLaMA/comments/1kfz5kv/keep_or_skip_cot_reasoning_in_chat_context/
false
false
https://b.thumbs.redditm…7wu09uuMJXSQ.jpg
1
null
Is local LLM really worth it or not?
64
I plan to upgrade my rig, but after some calculation, it really doesn't seem worth it. A single 4090 where I live costs around $2,900 right now. If you add up the other parts and recurring electricity bills, it seems better to just use the APIs, which let you run better models for years at that cost. The only advantages I can see in local deployment are data privacy and latency, which are not at the top of the priority list for most people. Or you could call the LLM at an extreme rate, but if you factor in maintenance costs and local instability, that doesn't seem worth it either.
2025-05-06T08:12:29
https://www.reddit.com/r/LocalLLaMA/comments/1kfz7dk/is_local_llm_really_worth_it_or_not/
GregView
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfz7dk
false
null
t3_1kfz7dk
/r/LocalLLaMA/comments/1kfz7dk/is_local_llm_really_worth_it_or_not/
false
false
self
64
null
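To make the trade-off in the post above concrete, here is the break-even arithmetic as a sketch; every number is an assumption to replace with your own:

```python
# Break-even sketch: local rig vs. API. All numbers below are assumptions.
RIG_COST = 2900 + 1100          # assumed: 4090 plus the rest of the build, USD
POWER_KW = 0.45                 # assumed average draw under load, kW
ELEC_PRICE = 0.15               # assumed USD per kWh
API_SPEND_PER_MONTH = 50        # assumed monthly API bill the rig would replace
HOURS_PER_DAY = 4               # assumed usage

elec_per_month = POWER_KW * HOURS_PER_DAY * 30 * ELEC_PRICE
months_to_break_even = RIG_COST / (API_SPEND_PER_MONTH - elec_per_month)
print(f"Electricity: ${elec_per_month:.2f}/mo; break-even ~{months_to_break_even:.0f} months")
```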
Keep or skip CoT reasoning in chat context?
1
[removed]
2025-05-06T08:13:28
https://www.reddit.com/r/LocalLLaMA/comments/1kfz7sw/keep_or_skip_cot_reasoning_in_chat_context/
Ill-Still-6859
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfz7sw
false
null
t3_1kfz7sw
/r/LocalLLaMA/comments/1kfz7sw/keep_or_skip_cot_reasoning_in_chat_context/
false
false
https://a.thumbs.redditm…oWVcyWzFZnZ8.jpg
1
null
LLMs can hallucinate dependencies that malicious actors can make a (bad) reality. Video is not mine; it's about Slopsquatting.
1
2025-05-06T08:19:34
https://youtu.be/ai77Pa79_TY
Weekly_Elevator_2857
youtu.be
1970-01-01T00:00:00
0
{}
1kfzapc
false
{'oembed': {'author_name': 'Brodie Robertson', 'author_url': 'https://www.youtube.com/@BrodieRobertson', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ai77Pa79_TY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Slopsquatting: Latest Software Supply Chain Scourge"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ai77Pa79_TY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Slopsquatting: Latest Software Supply Chain Scourge', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kfzapc
/r/LocalLLaMA/comments/1kfzapc/llms_can_hallucinate_dependencies_that_malicious/
false
false
https://b.thumbs.redditm…a4L8JBE95Q8s.jpg
1
{'enabled': False, 'images': [{'id': 'eaFCCviNTAW71ohCgueNzjt-EQDB2KY2bodiSlkuLuw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BMsiX0fq5LQDg11hh2ev-cswvyVKvRsuLcuAiC3p6EE.jpg?width=108&crop=smart&auto=webp&s=bff7ab4a320c1630395e4325c1627a3320c61170', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BMsiX0fq5LQDg11hh2ev-cswvyVKvRsuLcuAiC3p6EE.jpg?width=216&crop=smart&auto=webp&s=22ecfb6ee587c2643707db7ace1abc85d9672647', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BMsiX0fq5LQDg11hh2ev-cswvyVKvRsuLcuAiC3p6EE.jpg?width=320&crop=smart&auto=webp&s=ce26517598f23316cf8bfe84597fb30909ace2d9', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/BMsiX0fq5LQDg11hh2ev-cswvyVKvRsuLcuAiC3p6EE.jpg?auto=webp&s=3848ce58d46340528cd5664b36cf327b0ee4fd75', 'width': 480}, 'variants': {}}]}
Best & Fast Local TTS models?
1
[removed]
2025-05-06T08:21:59
https://www.reddit.com/r/LocalLLaMA/comments/1kfzbtj/best_fast_local_tts_models/
Uiqueblhats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfzbtj
false
null
t3_1kfzbtj
/r/LocalLLaMA/comments/1kfzbtj/best_fast_local_tts_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FN49sCWLUhm0gr5AxhgsPr6PI2qYHiJvLfV3XZcOm00', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?width=108&crop=smart&auto=webp&s=3836a3dbc348e2466921a583c8a91393027e03d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?width=216&crop=smart&auto=webp&s=ca999142286764e51b4b833a8d87358b12b88292', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?width=320&crop=smart&auto=webp&s=9d35f8730004a1a36cadb35830b4718e61a40a72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?width=640&crop=smart&auto=webp&s=d7bebd9480f63cec7e36091732b0bf909f113f83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?width=960&crop=smart&auto=webp&s=54cb8b5ae14cc79358e8033250159c17e6d14fa0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?width=1080&crop=smart&auto=webp&s=1467fc0d8c40329df0b4121c8398245a72e0303c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YeORspUhJ1ZlQlP6r_Wm7cN-6XNOC2xRSX4oc2JBtPM.jpg?auto=webp&s=f9d487b2a0efa53c4a50accf2f078e1b0b57d475', 'width': 1200}, 'variants': {}}]}
Best & Fast Local TTS Models?
1
[removed]
2025-05-06T08:23:03
https://www.reddit.com/r/LocalLLaMA/comments/1kfzcao/best_fast_local_tts_models/
Uiqueblhats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfzcao
false
null
t3_1kfzcao
/r/LocalLLaMA/comments/1kfzcao/best_fast_local_tts_models/
false
false
self
1
null
What model we can self host for coding and general purposes with a buget of 20,000$?
1
[removed]
2025-05-06T08:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1kfzsr6/what_model_we_can_self_host_for_coding_and/
Desperate_Entrance71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfzsr6
false
null
t3_1kfzsr6
/r/LocalLLaMA/comments/1kfzsr6/what_model_we_can_self_host_for_coding_and/
false
false
self
1
null
Does llama.cpp have to be manually built
1
[removed]
2025-05-06T08:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1kfzt2s/does_llamacpp_have_to_be_manually_built/
NoSoftware9760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfzt2s
false
null
t3_1kfzt2s
/r/LocalLLaMA/comments/1kfzt2s/does_llamacpp_have_to_be_manually_built/
false
false
self
1
null
What are some unorthodox use cases for a local llm?
5
Basically what the title says.
2025-05-06T09:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1kfzv6i/what_are_some_unorthodox_use_cases_for_a_local_llm/
LorestForest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kfzv6i
false
null
t3_1kfzv6i
/r/LocalLLaMA/comments/1kfzv6i/what_are_some_unorthodox_use_cases_for_a_local_llm/
false
false
self
5
null
Building an NSFW AI App: Seeking Guidance on Integrating Text-to-Text
6
Hey everyone, I'm developing an NSFW app and looking to integrate AI functionality, particularly text-to-text. I've been considering Qwen3: does anyone have experience with it? How does it perform, especially in NSFW contexts? I'm using Windsurf as my development environment. If anyone has experience integrating these types of APIs or can point me toward helpful resources, tutorials, or documentation, I'd greatly appreciate it. Also, if someone is open to mentoring or assisting me when I encounter challenges, that would be fantastic.✨ Thanks in advance for your support!
2025-05-06T09:23:17
https://www.reddit.com/r/LocalLLaMA/comments/1kg05o4/building_an_nsfw_ai_app_seeking_guidance_on/
ZookeepergameOk1689
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg05o4
false
null
t3_1kg05o4
/r/LocalLLaMA/comments/1kg05o4/building_an_nsfw_ai_app_seeking_guidance_on/
false
false
nsfw
6
null
Voila - Voice-Language Foundation Models for Real-Time Autonomous Interaction and Voice Role-Play (Paper w/ Code + Weights)
1
2025-05-06T09:31:14
https://huggingface.co/collections/maitrix-org/voila-67e0d96962c19f221fc73fa5
spellbound_app
huggingface.co
1970-01-01T00:00:00
0
{}
1kg09rr
false
null
t3_1kg09rr
/r/LocalLLaMA/comments/1kg09rr/voila_voicelanguage_foundation_models_for/
false
false
default
1
null
Bought 3090, need emotional support
1
[removed]
2025-05-06T09:33:21
https://www.reddit.com/r/LocalLLaMA/comments/1kg0aw4/bought_3090_need_emotional_support/
HandsOnDyk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg0aw4
false
null
t3_1kg0aw4
/r/LocalLLaMA/comments/1kg0aw4/bought_3090_need_emotional_support/
false
false
self
1
null
Nvidia's Nemotron-Ultra released
75
HF: [https://huggingface.co/collections/nvidia/llama-nemotron-67d92346030a2691293f200b](https://huggingface.co/collections/nvidia/llama-nemotron-67d92346030a2691293f200b) technical report: [https://arxiv.org/abs/2505.00949](https://arxiv.org/abs/2505.00949) https://preview.redd.it/9yt3kbqpu4ze1.png?width=2294&format=png&auto=webp&s=6f5cf17c0e9a3674092eeb2fe870a68bb499619f
2025-05-06T09:45:36
https://www.reddit.com/r/LocalLLaMA/comments/1kg0gzt/nvidias_nemontronultra_released/
BreakfastFriendly728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg0gzt
false
null
t3_1kg0gzt
/r/LocalLLaMA/comments/1kg0gzt/nvidias_nemontronultra_released/
false
false
https://b.thumbs.redditm…_GZUY_Gy0kxE.jpg
75
{'enabled': False, 'images': [{'id': 'tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?width=108&crop=smart&auto=webp&s=c3b943fe47f8ab6973746e0f9029559e977de987', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?width=216&crop=smart&auto=webp&s=122d6adc8d47a01254156fb06634f0956e62ba58', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?width=320&crop=smart&auto=webp&s=8392bb89d8cacbf078ddbdd95d198ae64a4b2775', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?width=640&crop=smart&auto=webp&s=3d35a89ef4644d20d16b7438637e855c4267938e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?width=960&crop=smart&auto=webp&s=26772143c7e941d44e55aa4dfb3ad870df530783', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?width=1080&crop=smart&auto=webp&s=1338b237ad1ab91573b92b3974f3d7b27f4b92bf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/elH6J8bIbGaZITZWD-SJrWx2cQnvD8jIxmZYLCf2bCg.jpg?auto=webp&s=03060780b88381460876bf51b2a1ae61e0523e42', 'width': 1200}, 'variants': {}}]}
Whats happening in GenAI
1
[removed]
2025-05-06T10:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1kg0okv/whats_happening_in_genai/
Winter-Jeweler-9926
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg0okv
false
null
t3_1kg0okv
/r/LocalLLaMA/comments/1kg0okv/whats_happening_in_genai/
false
false
self
1
{'enabled': False, 'images': [{'id': '50-e1ccggnFSgyprK9rmtaVxoEFFYR0md1KNFzzdSAM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/50-e1ccggnFSgyprK9rmtaVxoEFFYR0md1KNFzzdSAM.jpeg?width=108&crop=smart&auto=webp&s=d3cce043d29b5264d7acb9d0e2d2a1a056bb3f84', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/50-e1ccggnFSgyprK9rmtaVxoEFFYR0md1KNFzzdSAM.jpeg?width=216&crop=smart&auto=webp&s=800903b07ae4ba313950cdd412c2aba331c839c5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/50-e1ccggnFSgyprK9rmtaVxoEFFYR0md1KNFzzdSAM.jpeg?width=320&crop=smart&auto=webp&s=e220ac98df9d507e9096b18d35a21ffc3cd508eb', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/50-e1ccggnFSgyprK9rmtaVxoEFFYR0md1KNFzzdSAM.jpeg?width=640&crop=smart&auto=webp&s=63aa5362f66639709f792720b6e95215ffa01685', 'width': 640}], 'source': {'height': 419, 'url': 'https://external-preview.redd.it/50-e1ccggnFSgyprK9rmtaVxoEFFYR0md1KNFzzdSAM.jpeg?auto=webp&s=5c2959d3f8ba108559d6f0cbf8cd05a1d852a8a5', 'width': 800}, 'variants': {}}]}
Llama's Mystery
1
[removed]
2025-05-06T10:12:01
https://i.redd.it/yb3ct8txy4ze1.png
NmkNm
i.redd.it
1970-01-01T00:00:00
0
{}
1kg0v08
false
null
t3_1kg0v08
/r/LocalLLaMA/comments/1kg0v08/llamas_mystery/
false
false
https://a.thumbs.redditm…C0LA5ocpkXf4.jpg
1
{'enabled': True, 'images': [{'id': 'PJQT7jGVCicN2B7E32cEg4R6GJiBpEv3Ev1HOKvi_BU', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/yb3ct8txy4ze1.png?width=108&crop=smart&auto=webp&s=677f287c401360bd6d786f9cc58fccb0b041fb12', 'width': 108}, {'height': 58, 'url': 'https://preview.redd.it/yb3ct8txy4ze1.png?width=216&crop=smart&auto=webp&s=24d6223e68855775c5aa024383d9fbe6f15dc9d6', 'width': 216}, {'height': 86, 'url': 'https://preview.redd.it/yb3ct8txy4ze1.png?width=320&crop=smart&auto=webp&s=016406cadd23ae802b33e82112f4d7a2db9382db', 'width': 320}, {'height': 172, 'url': 'https://preview.redd.it/yb3ct8txy4ze1.png?width=640&crop=smart&auto=webp&s=594ed6ae80c2f1d22a455a9296c73ca3e2a25a89', 'width': 640}], 'source': {'height': 209, 'url': 'https://preview.redd.it/yb3ct8txy4ze1.png?auto=webp&s=2a04c6592c09bf1839982f2f9e44771e7a2e1fe0', 'width': 777}, 'variants': {}}]}
Mysterious Llama
1
[removed]
2025-05-06T10:15:31
https://i.redd.it/oahuy6fnz4ze1.png
NmkNm
i.redd.it
1970-01-01T00:00:00
0
{}
1kg0wvq
false
null
t3_1kg0wvq
/r/LocalLLaMA/comments/1kg0wvq/mysterious_llama/
false
false
https://b.thumbs.redditm…HkkJ2CyeMnIM.jpg
1
{'enabled': True, 'images': [{'id': 'dprsyzXQxbECjrB1BmTQ9gO_GnQOCSpTRm8OR1SDKwE', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/oahuy6fnz4ze1.png?width=108&crop=smart&auto=webp&s=0d4fbc8c755efb965998dedfe183478a870287ab', 'width': 108}, {'height': 58, 'url': 'https://preview.redd.it/oahuy6fnz4ze1.png?width=216&crop=smart&auto=webp&s=f33fb38d604fb29adcaae132e6b6195d10f533ce', 'width': 216}, {'height': 86, 'url': 'https://preview.redd.it/oahuy6fnz4ze1.png?width=320&crop=smart&auto=webp&s=ba09c751e886cf583b2fa51c533ceb0df1729b94', 'width': 320}, {'height': 172, 'url': 'https://preview.redd.it/oahuy6fnz4ze1.png?width=640&crop=smart&auto=webp&s=22f37627fcac5b146fda241519e9ecf2403e2a85', 'width': 640}], 'source': {'height': 209, 'url': 'https://preview.redd.it/oahuy6fnz4ze1.png?auto=webp&s=3be4a7d4bf72a567356a70057f8234e8aefbfdce', 'width': 777}, 'variants': {}}]}
Best model for synthetic data generation ?
0
I'm trying to generate reasoning traces so that I can finetune Qwen (I have the input and output; I just need the reasoning traces). Which model/method would y'all suggest?
2025-05-06T10:58:07
https://www.reddit.com/r/LocalLLaMA/comments/1kg1kka/best_model_for_synthetic_data_generation/
Basic-Pay-9535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg1kka
false
null
t3_1kg1kka
/r/LocalLLaMA/comments/1kg1kka/best_model_for_synthetic_data_generation/
false
false
self
0
null
I finally broke it
1
[removed]
2025-05-06T10:58:29
https://www.reddit.com/r/LocalLLaMA/comments/1kg1krk/i_finally_broke_it/
the_ITman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg1krk
false
null
t3_1kg1krk
/r/LocalLLaMA/comments/1kg1krk/i_finally_broke_it/
false
false
self
1
null
Gemini 2.5 context weirdness on fiction.livebench?? 🤨
22
Spoiler: >!I gave my original post to AI to rewrite and it was better, so I kept it!< Hey guys, So I saw this thing on fiction.livebench, and it said Gemini 2.5 got a 66 at 16k context but then an 86 at 32k. Kind of backwards, right? Why would it be worse with less stuff to read? I was trying to make a sequel to this book I read, about 200k words. My prompt was about 4k. The first try was... meh. Not awful, but not great. Then I summarized the book down to about 16k and it was WAY better! But the benchmark says 32k is even better. So, like, should I actually try to make my context *bigger* again for it to do better? Seems weird after my first try. What do you think? 🤔
2025-05-06T11:09:23
https://i.redd.it/tf18qjsn95ze1.png
AlgorithmicKing
i.redd.it
1970-01-01T00:00:00
0
{}
1kg1rgx
false
null
t3_1kg1rgx
/r/LocalLLaMA/comments/1kg1rgx/gemini_25_context_wierdness_on_fictionlivebench/
false
false
https://b.thumbs.redditm…6l-PT9OrZIlg.jpg
22
{'enabled': True, 'images': [{'id': '2-kgWJh8STyLRKukt_fFL54dY87cQpivRdDKnoHwT4s', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?width=108&crop=smart&auto=webp&s=05e18a881f9599783606c727d43942de9299d1b9', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?width=216&crop=smart&auto=webp&s=9cfff5aba2999ff97c0452f1b155a5fcd7ff1eb9', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?width=320&crop=smart&auto=webp&s=a3d7140aa29ea30b9cc5c041aec69c4847b68104', 'width': 320}, {'height': 249, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?width=640&crop=smart&auto=webp&s=c89835155cafef4f94fa326596cfa9cbec7a9ae8', 'width': 640}, {'height': 374, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?width=960&crop=smart&auto=webp&s=b39831b0986acb86afe5ca34e5f2f77fb6e039be', 'width': 960}, {'height': 421, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?width=1080&crop=smart&auto=webp&s=60025a00ffd8cfdf1ab19b15b235bdf1c48e2543', 'width': 1080}], 'source': {'height': 716, 'url': 'https://preview.redd.it/tf18qjsn95ze1.png?auto=webp&s=f8509455ce13dd3f835c03a9dce65ef5ee2e1d46', 'width': 1836}, 'variants': {}}]}
So why are we sh**ing on ollama again?
218
I am asking the redditors who take a dump on ollama. I mean, pacman -S ollama ollama-cuda was everything I needed; I didn't even have to touch open-webui, as it comes pre-configured for ollama. It does the model swapping for me, so I don't need llama-swap or to manually change server parameters. It has its own model library, which I don't have to use since it also supports gguf models. The CLI is also nice and clean, and it supports the OpenAI API as well. Yes, it's annoying that it uses its own model storage format, but you can create .gguf symlinks to those sha256 files and load them with koboldcpp or llama.cpp if needed. So what's your problem? Is it bad on Windows or Mac?
2025-05-06T11:24:42
https://www.reddit.com/r/LocalLLaMA/comments/1kg20mu/so_why_are_we_shing_on_ollama_again/
__Maximum__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg20mu
false
null
t3_1kg20mu
/r/LocalLLaMA/comments/1kg20mu/so_why_are_we_shing_on_ollama_again/
false
false
self
218
null
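On the symlink trick mentioned above, here is a sketch that locates a model's GGUF blob via Ollama's manifest and links it under a readable name; the on-disk layout (manifest path, media type, `sha256-` blob naming) matches current Ollama versions but should be treated as an assumption:

```python
# Sketch: symlink an Ollama blob to a .gguf name llama.cpp/koboldcpp can load.
# Paths and manifest layout are assumptions about Ollama's on-disk format.
import json, os
from pathlib import Path

base = Path.home() / ".ollama" / "models"
manifest = base / "manifests" / "registry.ollama.ai" / "library" / "qwen3" / "14b"

layers = json.loads(manifest.read_text())["layers"]
digest = next(l["digest"] for l in layers
              if l["mediaType"] == "application/vnd.ollama.image.model")

blob = base / "blobs" / digest.replace(":", "-")   # sha256:... -> sha256-...
os.symlink(blob, Path.home() / "qwen3-14b.gguf")
print("linked", blob)
```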
I struggle with copy-pasting AI context when using different LLMs, so I am building Window
0
I usually work on multiple projects using different LLMs. I juggle between ChatGPT, Claude, Grok..., and I constantly need to re-explain my project (context) every time I switch LLMs while working on the same task. It's annoying. Some people suggested keeping a doc and updating it with my context and progress, which is not ideal. I am building Window to solve this problem. Window is a common context window where you save your context once and re-use it across LLMs. Here are the features: * Add your context once to Window * Use it across all LLMs * Model-to-model context transfer * Up-to-date context across models * No more re-explaining your context to models I can share the website in the DMs if you ask. Looking for your feedback. Thanks.
2025-05-06T11:52:04
https://www.reddit.com/r/LocalLLaMA/comments/1kg2ihj/i_struggle_with_copypasting_ai_context_when_using/
Dagadogo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg2ihj
false
null
t3_1kg2ihj
/r/LocalLLaMA/comments/1kg2ihj/i_struggle_with_copypasting_ai_context_when_using/
false
false
self
0
null
Good NSFW model for story writing
1
[removed]
2025-05-06T11:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1kg2jsu/good_nfsw_model_for_story_writing/
ClarieObscur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg2jsu
false
null
t3_1kg2jsu
/r/LocalLLaMA/comments/1kg2jsu/good_nfsw_model_for_story_writing/
false
false
self
1
null
could a shared gpu rental work?
2
What if we could just hook our GPUs up to some sort of service? Those who need processing power pay for the tokens/s they consume, while you get paid for the tokens/s you generate. Wouldn't this make AI cheap and also earn you a few bucks when your computer is doing nothing?
2025-05-06T12:11:44
https://www.reddit.com/r/LocalLLaMA/comments/1kg2w50/could_a_shared_gpu_rental_work/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg2w50
false
null
t3_1kg2w50
/r/LocalLLaMA/comments/1kg2w50/could_a_shared_gpu_rental_work/
false
false
self
2
null
OpenAI buying Windsurf
0
[https://www.youtube.com/watch?v=g\_pxe-H1QtY](https://www.youtube.com/watch?v=g_pxe-H1QtY) [https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion](https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion)
2025-05-06T12:39:05
https://www.reddit.com/r/LocalLLaMA/comments/1kg3fle/openai_buying_windsurf/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg3fle
false
null
t3_1kg3fle
/r/LocalLLaMA/comments/1kg3fle/openai_buying_windsurf/
false
false
self
0
{'enabled': False, 'images': [{'id': 'tZMl_cpgxaLQWnm37WhX-b_FNiierjANVaYoCXrpaH4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tZMl_cpgxaLQWnm37WhX-b_FNiierjANVaYoCXrpaH4.jpeg?width=108&crop=smart&auto=webp&s=2ae3df4e3b102e3be1ce3ec5296c194437c8ce2c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tZMl_cpgxaLQWnm37WhX-b_FNiierjANVaYoCXrpaH4.jpeg?width=216&crop=smart&auto=webp&s=376c47b2e8bed2fbff2fce26cfd9628bedfbb646', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tZMl_cpgxaLQWnm37WhX-b_FNiierjANVaYoCXrpaH4.jpeg?width=320&crop=smart&auto=webp&s=e25ec3e26e0a29def67518d6881a95b8c10c3800', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tZMl_cpgxaLQWnm37WhX-b_FNiierjANVaYoCXrpaH4.jpeg?auto=webp&s=2e292dcc89097bdd2baf7d1816d0d310dd74f16e', 'width': 480}, 'variants': {}}]}
Stop Thinking AGI's Coming Soon!
0
Yoo seriously..... I don't get why people are acting like AGI is just around the corner. All this talk about it being here in 2027..wtf Nah, it’s not happening. Imma be fucking real there won’t be any breakthrough or real progress by then it's all just hype !!! If you think AGI is coming anytime soon, you’re seriously mistaken Everyone’s hyping up AGI as if it's the next big thing but the truth is it’s still a long way off. The reality is we’ve got a lot of work left before it’s even close to happening. So everyone stop yapping abt this nonsense. AGI isn’t coming in the next decade. It’s gonna take a lot more time, trust me.
2025-05-06T13:13:38
https://www.reddit.com/r/LocalLLaMA/comments/1kg45gk/stop_thinking_agis_coming_soon/
d4z7wk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg45gk
false
null
t3_1kg45gk
/r/LocalLLaMA/comments/1kg45gk/stop_thinking_agis_coming_soon/
false
false
self
0
null
OpenWebUI license change: red flag?
139
[https://docs.openwebui.com/license/](https://docs.openwebui.com/license/) / [https://github.com/open-webui/open-webui/blob/main/LICENSE](https://github.com/open-webui/open-webui/blob/main/LICENSE) Open WebUI's last update included changes to the license beyond their original BSD-3 license, presumably for monetization. Their reasoning is "other companies are running instances of our code and put their own logo on open webui. this is not what open-source is about". Really? Imagine if llama.cpp did the same thing in response to ollama. I just recently upgraded to v0.6.6, and of course I don't have 50 active users, but it always leaves a bad taste in my mouth when projects do this, and I'm starting to wonder if I should use or make a fork instead. I know not everything is a slippery slope, but this clearly makes it more likely that the project won't stay uncompromisingly open-source from now on. What are your thoughts on this? Am I being overdramatic?
2025-05-06T13:20:27
https://www.reddit.com/r/LocalLLaMA/comments/1kg4avg/openwebui_license_change_red_flag/
k_means_clusterfuck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg4avg
false
null
t3_1kg4avg
/r/LocalLLaMA/comments/1kg4avg/openwebui_license_change_red_flag/
false
false
self
139
null
Best model for copy editing and story-level feedback?
0
I'm a writer, and I'm looking for an LLM that's good at understanding and critiquing text, be it spotting grammar and style issues or giving general story-level feedback. If it can do a bit of coding on the side, that's a bonus. Just to be clear, I don't need the LLM to write the story for me (I still prefer to do that myself), so it doesn't have to be good at RP specifically. So perhaps something that's good at following instructions and reasoning? I'm honestly new to this, so any feedback is welcome. I run an M3 32GB Mac.
2025-05-06T13:47:41
https://www.reddit.com/r/LocalLLaMA/comments/1kg4x0r/best_model_for_copy_editing_and_storylevel/
AcceptablePeanut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg4x0r
false
null
t3_1kg4x0r
/r/LocalLLaMA/comments/1kg4x0r/best_model_for_copy_editing_and_storylevel/
false
false
self
0
null
What is the best local AI model for coding?
35
I'm looking mostly for JavaScript/TypeScript, plus frontend (HTML/CSS) and backend (Node); bonus points if any are specifically good at Tailwind. Is there any model that is top-tier now? I read a thread from 3 months ago that said Qwen 2.5-Coder-32B, but Qwen 3 just released, so I was thinking I should download that directly. But then I saw in LM Studio that there is no Qwen 3 Coder yet. So what are the alternatives right now?
2025-05-06T14:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1kg5j75/what_is_the_best_local_ai_model_for_coding/
deadcoder0904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg5j75
false
null
t3_1kg5j75
/r/LocalLLaMA/comments/1kg5j75/what_is_the_best_local_ai_model_for_coding/
false
false
self
35
null
Qwen3 14b vs the new Phi 4 Reasoning model
45
I'm about to run my own set of personal tests to compare the two, but I was wondering what everyone else's experiences have been so far. I've seen and heard good things about the new Qwen model, but almost nothing about the new Phi model. I'm also looking for any third-party benchmarks that include both; I haven't really been able to find any myself. I like u/_sqrkl's benchmarks, but they seem to have omitted the smaller Qwen models from the creative writing benchmark and Phi 4 reasoning completely from the rest. [https://huggingface.co/microsoft/Phi-4-reasoning](https://huggingface.co/microsoft/Phi-4-reasoning) [https://huggingface.co/Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B)
2025-05-06T14:17:20
https://www.reddit.com/r/LocalLLaMA/comments/1kg5m5a/qwen3_14b_vs_the_new_phi_4_reasoning_model/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg5m5a
false
null
t3_1kg5m5a
/r/LocalLLaMA/comments/1kg5m5a/qwen3_14b_vs_the_new_phi_4_reasoning_model/
false
false
self
45
{'enabled': False, 'images': [{'id': 'qtFRsVUHHSscVBLKk81CTXXwniwSEY09ICcAY3SjgeU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=108&crop=smart&auto=webp&s=72a36ee08cc9ebaacf0b42e0836d96adfa01fd33', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=216&crop=smart&auto=webp&s=8b7383e34c8d8d73ad76b401b540961f847b1a2f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=320&crop=smart&auto=webp&s=5a7ab717c7d245951dbaee99652aa5cac047d977', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=640&crop=smart&auto=webp&s=6d5d1eccafad99c7ee9d2abed2ab5c72d51e01dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=960&crop=smart&auto=webp&s=025078f1abc697b435610b0b667b187773c94df0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=1080&crop=smart&auto=webp&s=01bfedd96a19487c0cf5baa39041f8661ac42146', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?auto=webp&s=35a1ed7cdd54b483e926627f69dde2689174085f', 'width': 1200}, 'variants': {}}]}
Looking for a free open-source vector database for RAG similar to PGVector. What do you recommend?
1
[removed]
2025-05-06T14:19:40
https://www.reddit.com/r/LocalLLaMA/comments/1kg5o2w/looking_for_a_free_opensource_vector_database_for/
FitWeb6354
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg5o2w
false
null
t3_1kg5o2w
/r/LocalLLaMA/comments/1kg5o2w/looking_for_a_free_opensource_vector_database_for/
false
false
self
1
null
Bought 3090, need emotional support
1
[removed]
2025-05-06T14:22:30
https://www.reddit.com/r/LocalLLaMA/comments/1kg5qji/bought_3090_need_emotional_support/
HandsOnDyk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg5qji
false
null
t3_1kg5qji
/r/LocalLLaMA/comments/1kg5qji/bought_3090_need_emotional_support/
false
false
self
1
null
Noob question about making “agents”
1
[removed]
2025-05-06T14:37:07
https://www.reddit.com/r/LocalLLaMA/comments/1kg639w/noob_question_about_making_agents/
Careful_Breath_1108
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg639w
false
null
t3_1kg639w
/r/LocalLLaMA/comments/1kg639w/noob_question_about_making_agents/
false
false
self
1
null
New Gemini-2.5-pro-preview-05-06 released on Vertex AI
51
Gemini-2.5-pro-preview-05-06 released on Vertex AI
2025-05-06T14:39:10
https://i.redd.it/ehdzxxbza6ze1.png
das_rdsm
i.redd.it
1970-01-01T00:00:00
0
{}
1kg651p
false
null
t3_1kg651p
/r/LocalLLaMA/comments/1kg651p/new_gemini25propreview0506_released_on_vertex_ai/
false
false
https://external-preview…077abc6674c7a075
51
{'enabled': True, 'images': [{'id': 'SN4-6nPfSHeOL9MVgCx9F_FxIWuUM83xWyyfJTAjkH0', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/ehdzxxbza6ze1.png?width=108&crop=smart&auto=webp&s=d228ecd134217ab1b1463f1e87e1d8f91bf5bf36', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/ehdzxxbza6ze1.png?width=216&crop=smart&auto=webp&s=1743b9d99bc0bea745ec6938ef7ed7bf3d7b2960', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/ehdzxxbza6ze1.png?width=320&crop=smart&auto=webp&s=4bff8a329fd03d5d9c2576923d354b6796cf6ab3', 'width': 320}, {'height': 535, 'url': 'https://preview.redd.it/ehdzxxbza6ze1.png?width=640&crop=smart&auto=webp&s=a5f864a04f5367452e1954cb919e07baad435185', 'width': 640}, {'height': 802, 'url': 'https://preview.redd.it/ehdzxxbza6ze1.png?width=960&crop=smart&auto=webp&s=f16a99a68e304dcd98569e2115a5e4ec117f2fb2', 'width': 960}], 'source': {'height': 826, 'url': 'https://preview.redd.it/ehdzxxbza6ze1.png?auto=webp&s=7cb63414a134d127985828f9d08013a280b2b9a5', 'width': 988}, 'variants': {}}]}
Best Practices to Connect Services for a Personal Agent?
3
What’s been your go-to setup for linking services to build custom, private agents? I’ve found the process surprisingly painful. For example, Parakeet is powerful but hard to wire into something like a usable scribe. n8n has great integrations, but debugging is a mess (e.g., “Non string tool message content” errors). I considered using n8n as an MCP backend for OpenWebUI, but SSE/OpenAPI complexities are holding me back. Current setup: local LLMs (e.g., Qwen 0.6B, Gemma 4B) on Docker via Ollama, with OpenWebUI + n8n to route inputs/functions. Limited GPU (RTX 2060 Super), but tinkering with Hugging Face spaces and Dockerized tools as I go. Appreciate any advice—especially from others piecing this together solo.
2025-05-06T14:42:43
https://www.reddit.com/r/LocalLLaMA/comments/1kg682j/best_practices_to_connect_services_for_a_personal/
Careful_Breath_1108
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg682j
false
null
t3_1kg682j
/r/LocalLLaMA/comments/1kg682j/best_practices_to_connect_services_for_a_personal/
false
false
self
3
null
Gemini use multiple api keys.
9
If you are working on any project that uses Gemini, whether it's generating a dataset for fine-tuning or anything else, I made a Python package that allows you to use multiple API keys to increase your rate limit. [johnmalek312/gemini\_rotator: Don't get dizzy 😵](https://github.com/johnmalek312/gemini_rotator) Important: please do not abuse.
2025-05-06T14:47:39
https://www.reddit.com/r/LocalLLaMA/comments/1kg6ce3/gemini_use_multiple_api_keys/
Senior-Raspberry-929
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg6ce3
false
null
t3_1kg6ce3
/r/LocalLLaMA/comments/1kg6ce3/gemini_use_multiple_api_keys/
false
false
self
9
{'enabled': False, 'images': [{'id': '9llpppdD7MrCECZ8AiDDj0LV97R9qHr6BCXdd-c_FJo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?width=108&crop=smart&auto=webp&s=5ad376e87577f31c7f5f87b02e5b005b6d436069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?width=216&crop=smart&auto=webp&s=7519597085abfa33ea8390d2070ddcc6c9c042d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?width=320&crop=smart&auto=webp&s=7efb7ce509793e989509507518d1fed0d5ca7795', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?width=640&crop=smart&auto=webp&s=2cc4c0ba7dea72ed2683821ecb16a3f20c5fcaf9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?width=960&crop=smart&auto=webp&s=9b3fd0dde9112d0d7d21f007c7b9985f12897482', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?width=1080&crop=smart&auto=webp&s=ab0174a19bbd3c631ef873efa5df74f95d285b9f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A54hFxK3q4W7I9mAaDcWIiIY2KSizIY5uQC1ze55IOk.jpg?auto=webp&s=9154219ccbe96bea644a95d4335ac2beffd6ecd2', 'width': 1200}, 'variants': {}}]}
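Independent of the linked package, the core idea is simple enough to sketch: round-robin the keys and advance when one is rate-limited. This is a generic illustration, not the package's actual API:

```python
# Generic key-rotation sketch: round-robin the keys, move on when one is
# rate-limited. Illustration only, not the linked package's actual API.
import itertools

class KeyRotator:
    def __init__(self, keys, retries=5):
        self._cycle = itertools.cycle(keys)
        self._retries = retries

    def call(self, fn, *args, **kwargs):
        for _ in range(self._retries):
            key = next(self._cycle)
            try:
                return fn(key, *args, **kwargs)
            except RuntimeError:               # stand-in for a 429 rate-limit error
                print(f"key ...{key[-4:]} rate-limited, rotating")
        raise RuntimeError("all keys exhausted")

def fake_gemini_call(key, prompt):             # hypothetical request function
    return f"[key ...{key[-4:]}] response to: {prompt}"

rotator = KeyRotator(["AIza-example-0001", "AIza-example-0002"])
print(rotator.call(fake_gemini_call, "hello"))
```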
Reasoning in tool calls / structured output
1
Hello everyone, I am currently experimenting with the new Qwen3 models and I am quite pleased with them. However, I am facing an issue with getting them to use reasoning, if that is even possible, when I request structured output. I am using the Ollama API for this, but the results seem to lack critical thinking. For example, when I use the standard Ollama terminal chat, I receive better results and can see that the model is indeed emitting reasoning tokens. Unfortunately, the format of those responses is not suitable for my needs. In contrast, when I use structured output, the formatting is always perfect, but the results are significantly poorer. I have not found many resources on this topic, so I would greatly appreciate any guidance you could provide :)
2025-05-06T14:59:48
https://www.reddit.com/r/LocalLLaMA/comments/1kg6msv/reasoning_in_tool_calls_structured_output/
AbstrusSchatten
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg6msv
false
null
t3_1kg6msv
/r/LocalLLaMA/comments/1kg6msv/reasoning_in_tool_calls_structured_output/
false
false
self
1
null
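One pattern worth trying for the structured-output question above: reserve a `reasoning` field in the schema so the model can still think before filling the answer fields, since constrained decoding otherwise tends to suppress reasoning tokens. A minimal sketch with the `ollama` Python client; the model and schema are assumptions:

```python
# Sketch: structured output that reserves room for reasoning. Forcing a
# "reasoning" field first lets the model think before producing the answer.
import ollama

schema = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},   # model reasons here first
        "answer": {"type": "string"},
    },
    "required": ["reasoning", "answer"],
}

resp = ollama.chat(
    model="qwen3:14b",                     # assumption: any Qwen3 tag you pulled
    messages=[{"role": "user", "content": "Is 97 prime? Explain, then answer."}],
    format=schema,                         # Ollama structured outputs
)
print(resp.message.content)                # JSON matching the schema
```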
Is there any point in building a 2x 5090 rig?
1
As title. Amazon in my country has MSI SKUs at RRP. But are there enough models that split well across 2 (or more??) 32GB chunks to make it worthwhile?
2025-05-06T15:01:53
https://www.reddit.com/r/LocalLLaMA/comments/1kg6ovv/is_there_any_point_in_building_a_2x_5090_rig/
hurrdurrmeh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg6ovv
false
null
t3_1kg6ovv
/r/LocalLLaMA/comments/1kg6ovv/is_there_any_point_in_building_a_2x_5090_rig/
false
false
self
1
null
Model swapping with vLLM
2
I'm currently running a small 2-GPU setup with ollama on it. Today, I tried to switch to vLLM with LiteLLM as a proxy/gateway for the models I'm hosting; however, I can't figure out how to properly do swapping. I really liked that new models could be loaded on the GPU provided there was enough VRAM for the model, context, and some cache, and that models got unloaded when I received a request for a model not currently loaded (so I can keep 7-8 models in my "stock" and load 4 different ones at the same time). I found [llama-swap](https://github.com/mostlygeek/llama-swap) and I think I can make something that looks like this with swap groups, but as I'm using the official vLLM docker image, I couldn't find a great way to start the server. I'd happily take any suggestions or criticism of what I'm trying to achieve, and I hope someone has managed to make this kind of setup work. Thanks!
2025-05-06T15:07:10
https://www.reddit.com/r/LocalLLaMA/comments/1kg6tk3/model_swapping_with_vllm/
Nightlyside
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg6tk3
false
null
t3_1kg6tk3
/r/LocalLLaMA/comments/1kg6tk3/model_swapping_with_vllm/
false
false
self
2
{'enabled': False, 'images': [{'id': 'OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?width=108&crop=smart&auto=webp&s=62511810ffd64628e3630619327fbfe1ed8173db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?width=216&crop=smart&auto=webp&s=07871c00cb5a1273c045eca2a522260eeb0e8775', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?width=320&crop=smart&auto=webp&s=4812ac480e3665e071f8184543a2c53148f63254', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?width=640&crop=smart&auto=webp&s=f0bae1eab1baf22be422341933b829b060defa9c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?width=960&crop=smart&auto=webp&s=e6a1d1edf917688517c36d8e5090911bbac20e12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?width=1080&crop=smart&auto=webp&s=5a28594be75f644b129daf79861bdcad2ab58d56', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OZ35IgMsJ1BfPNH2o3DRjE7UMcGygbhNmSs49a0Zbw4.png?auto=webp&s=0adbd5c95aeefc659dcadad96c3e91fe46893513', 'width': 1200}, 'variants': {}}]}
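For anyone attempting the same setup, here is the swapping idea reduced to a sketch: a tiny dispatcher that stops the running vLLM container and starts the requested one on demand. Image name, port, and model ids are assumptions; llama-swap does essentially this, plus proxying and groups, via its config file:

```python
# Minimal on-demand model swap sketch using the official vLLM docker image.
# Image name, port, and model ids are assumptions to adapt.
import subprocess

PORT = 8000
current = None

def swap_to(model_id: str):
    """Stop whatever vLLM container is running and launch model_id instead."""
    global current
    if current == model_id:
        return
    subprocess.run(["docker", "rm", "-f", "vllm"], check=False)  # ignore if absent
    subprocess.run([
        "docker", "run", "-d", "--name", "vllm", "--gpus", "all",
        "-p", f"{PORT}:8000",
        "vllm/vllm-openai",                 # assumption: official image
        "--model", model_id,
    ], check=True)
    current = model_id

swap_to("Qwen/Qwen3-14B")
```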
Open LLM controlling pc?
1
[removed]
2025-05-06T15:21:01
https://www.reddit.com/r/LocalLLaMA/comments/1kg766x/open_llm_controlling_pc/
sirdarc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg766x
false
null
t3_1kg766x
/r/LocalLLaMA/comments/1kg766x/open_llm_controlling_pc/
false
false
self
1
null
Nvidia to drop CUDA support for Maxwell, Pascal, and Volta GPUs with the next major Toolkit release
171
https://www.tomshardware.com/pc-components/gpus/nvidia-to-drop-cuda-support-for-maxwell-pascal-and-volta-gpus-with-the-next-major-toolkit-release
2025-05-06T15:22:04
https://www.reddit.com/r/LocalLLaMA/comments/1kg7768/nvidia_to_drop_cuda_support_for_maxwell_pascal/
Educational_Sun_8813
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg7768
false
null
t3_1kg7768
/r/LocalLLaMA/comments/1kg7768/nvidia_to_drop_cuda_support_for_maxwell_pascal/
false
false
self
171
{'enabled': False, 'images': [{'id': '4yv8NMimcakkdFJJ7PCPCJYm4Ic_3NztsIFl0Ht89Uo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?width=108&crop=smart&auto=webp&s=f46bc8c37d863448b1d853473bc5fc6cf277ecac', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?width=216&crop=smart&auto=webp&s=b5542b9db6f7c199eaf79201d7bd8aaa86962b30', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?width=320&crop=smart&auto=webp&s=b4e823800c6742d46886f2941b5efd2590cf95f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?width=640&crop=smart&auto=webp&s=4be804b2dd722e4eb1f66464cea01759dafeaeb3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?width=960&crop=smart&auto=webp&s=e5911bdfc08883c9092b112d977f7c98176f4b30', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?width=1080&crop=smart&auto=webp&s=7aa6abfc81ba252b33ad2d87201bbf23660a1aa5', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/bu9kOglWoDRbVgnzp6di2Ey2cs6oSN8mlD0jksIIMog.jpg?auto=webp&s=a961bf50961ba2b74fb6310e167dd14dc87b6714', 'width': 4000}, 'variants': {}}]}
Google did it again
1
[removed]
2025-05-06T15:25:15
https://www.reddit.com/gallery/1kg7a34
Linkpharm2
reddit.com
1970-01-01T00:00:00
0
{}
1kg7a34
false
null
t3_1kg7a34
/r/LocalLLaMA/comments/1kg7a34/google_did_it_again/
false
false
https://a.thumbs.redditm…aqoWM4WWAQO8.jpg
1
null
How to share compute across different machines?
2
I have a Mac mini 16GB, a laptop with an Intel Arc with 4GB VRAM, and a desktop with a 2060 with 6GB VRAM. How can I pool their compute to serve one LLM?
2025-05-06T15:44:41
https://www.reddit.com/r/LocalLLaMA/comments/1kg7rkp/how_to_share_compute_accross_different_machines/
Material_Key7014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg7rkp
false
null
t3_1kg7rkp
/r/LocalLLaMA/comments/1kg7rkp/how_to_share_compute_accross_different_machines/
false
false
self
2
null
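One concrete answer to the question above is llama.cpp's RPC backend: run `rpc-server` on each worker machine and point a single `llama-server` at them with `--rpc`, which splits the model's layers across the boxes. A sketch under the assumption that every build was compiled with `GGML_RPC=ON` and that the flags below match your llama.cpp version; note that the slowest machine and the network will dominate throughput:

```python
# Sketch: distribute one model across machines with llama.cpp's RPC backend.
# Assumes llama.cpp built with GGML_RPC=ON everywhere; addresses are examples.
import subprocess

# On each worker (Mac mini, Arc laptop), run something like:
#   rpc-server --host 0.0.0.0 --port 50052
WORKERS = "192.168.1.10:50052,192.168.1.11:50052"   # example worker addresses

subprocess.run([
    "llama-server",
    "-m", "model.gguf",
    "--rpc", WORKERS,        # offload layers to the remote rpc-server instances
    "-ngl", "99",            # push as many layers as possible to the backends
], check=True)
```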
Creative Uses for Local LLMs in 2025. What Are You Building?
1
[removed]
2025-05-06T15:45:25
https://www.reddit.com/r/LocalLLaMA/comments/1kg7s95/creative_uses_for_local_llms_in_2025_what_are_you/
urfairygodmother_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg7s95
false
null
t3_1kg7s95
/r/LocalLLaMA/comments/1kg7s95/creative_uses_for_local_llms_in_2025_what_are_you/
false
false
self
1
null