title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable ⌀) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable ⌀) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Can you run Qwen 30B A3B on 8gb vram/ 16gb ram?
| 7 |
Is there a way to achieve this? I saw people doing this on pretty low-end builds, but I don't know how to get it to work.
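The usual approach on low-VRAM builds is partial GPU offload with llama.cpp: keep as many layers as fit on the GPU and run the rest in system RAM. A minimal hedged sketch (the model path, quant choice, and `-ngl` layer count are assumptions, not from the post):

```shell
# Hedged sketch: run Qwen3-30B-A3B with partial offload on an 8 GB GPU.
# Lower -ngl if VRAM overflows; raise it if you have headroom.
./llama-cli -m Qwen3-30B-A3B-Q4_K_M.gguf -ngl 20 -c 8192 -t 8 -p "Hello"
```

Because this MoE activates only ~3B parameters per token, CPU-heavy splits remain surprisingly usable.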
| 2025-04-29T08:54:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kajbmm/can_you_run_qwen_30b_a3b_on_8gb_vram_16gb_ram/
|
Famous-Appointment-8
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajbmm
| false | null |
t3_1kajbmm
|
/r/LocalLLaMA/comments/1kajbmm/can_you_run_qwen_30b_a3b_on_8gb_vram_16gb_ram/
| false | false |
self
| 7 | null |
Qwen3 now runs locally in Jan via llama.cpp (Update the llama.cpp backend in Settings to run it)
| 65 |
Hey, just sharing a quick note: Jan uses llama.cpp as its backend, and we recently shipped a feature that lets you bump the llama.cpp version without waiting for any updates.
So you can now run newer models like Qwen3 without needing a full Jan update.
| 2025-04-29T08:56:53 |
eck72
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajcl7
| false | null |
t3_1kajcl7
|
/r/LocalLLaMA/comments/1kajcl7/qwen3_now_runs_locally_in_jan_via_llamacpp_update/
| false | false | 65 |
{preview: reddit-hosted JPEG, source 2664×2004, 6 scaled resolutions (image metadata trimmed)}
|
||
Qwen 3 F8
| 1 |
[removed]
| 2025-04-29T08:58:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kajd4j/qwen_3_f8/
|
Creative_Bottle_3225
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajd4j
| false | null |
t3_1kajd4j
|
/r/LocalLLaMA/comments/1kajd4j/qwen_3_f8/
| false | false |
self
| 1 | null |
What are the Qwen3 settings for Ollama?
| 1 |
[removed]
| 2025-04-29T09:00:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaje9d/what_are_the_qwen3_settings_for_ollama/
|
RogueZero123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaje9d
| false | null |
t3_1kaje9d
|
/r/LocalLLaMA/comments/1kaje9d/what_are_the_qwen3_settings_for_ollama/
| false | false |
self
| 1 | null |
Ai Engine Host: A Low-Cost GPU Hosting Service – Buyer Beware
| 1 |
[removed]
| 2025-04-29T09:00:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kajej0/ai_engine_host_a_lowcost_gpu_hosting_service/
|
PrettyRevolution1842
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajej0
| false | null |
t3_1kajej0
|
/r/LocalLLaMA/comments/1kajej0/ai_engine_host_a_lowcost_gpu_hosting_service/
| false | false |
self
| 1 | null |
Qwen3:30b-a3b incredibly fast on CPU, 20T/s on AMD 7950x 16c32t
| 1 | 2025-04-29T09:24:11 |
https://v.redd.it/4z1cay65sqxe1
|
D33455432
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajpwp
| false |
{reddit_video: 1348×1080 MP4, 54 s, 5000 kbps, with audio (DASH/HLS URLs trimmed)}
|
t3_1kajpwp
|
/r/LocalLLaMA/comments/1kajpwp/qwen330ba3b_incredibly_fast_on_cpu_20ts_on_amd/
| false | false | 1 |
{preview: external PNG, source 1706×1366, 6 scaled resolutions (image metadata trimmed)}
|
||
Qwen_Qwen3-14B-Q8_0 seems to be repeating itself
| 21 |
Does anybody else encounter this problem?
| 2025-04-29T09:30:01 |
ChazychazZz
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajsq0
| false | null |
t3_1kajsq0
|
/r/LocalLLaMA/comments/1kajsq0/qwen_qwen314bq8_0_seems_to_be_repeating_itself/
| false | false | 21 |
{preview: reddit-hosted PNG, source 744×785, 4 scaled resolutions (image metadata trimmed)}
|
||
Any reason why Qwen3 GGUF models are only in BF16? No FP16 versions around?
| 3 |
Hey folks, quick question — my GPU doesn’t support BF16, and I noticed all the Qwen3 GGUF models I’ve found are in BF16 only.
Haven’t seen any FP16 versions around.
Anyone know why, or if I’m just missing something? Would really appreciate any tips!
https://preview.redd.it/3h6vfbn5vqxe1.png?width=3820&format=png&auto=webp&s=1d483b335fb0fc14006c044d097b7291a0affc68
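If BF16 is the blocker, one workaround is converting the original HF checkpoint to an FP16 GGUF yourself with llama.cpp's converter script. A hedged sketch (the local paths are assumptions):

```shell
# Hedged sketch: produce an FP16 GGUF from a locally downloaded HF checkpoint.
python convert_hf_to_gguf.py ./Qwen3-14B --outtype f16 --outfile qwen3-14b-f16.gguf
```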
| 2025-04-29T09:42:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kajyj9/any_reason_why_qwen3_gguf_models_are_only_in_bf16/
|
Known-Classroom2655
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kajyj9
| false | null |
t3_1kajyj9
|
/r/LocalLLaMA/comments/1kajyj9/any_reason_why_qwen3_gguf_models_are_only_in_bf16/
| false | false | 3 | null |
|
Quantized pre-trained model to generate summaries crashes in colab
| 1 |
[removed]
| 2025-04-29T09:51:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kak35f/quantized_pretrained_model_to_generate_summaries/
|
Miserable_Music_8029
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kak35f
| false | null |
t3_1kak35f
|
/r/LocalLLaMA/comments/1kak35f/quantized_pretrained_model_to_generate_summaries/
| false | false |
self
| 1 | null |
Run production-ready distributed Qwen3 locally via GPUStack
| 6 |
Hi everyone, just sharing some news: GPUStack has released v0.6, with support for distributed inference using both vLLM and llama-box (llama.cpp).
No need for a monster machine — you can run Qwen/Qwen3-235B-A22B across your desktops and test machines using llama-box distributed inference, or deploy production-grade Qwen3 with vLLM distributed inference.
| 2025-04-29T09:57:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kak6d8/run_productionready_distributed_qwen3_locally_via/
|
RepulsiveEbb4011
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kak6d8
| false | null |
t3_1kak6d8
|
/r/LocalLLaMA/comments/1kak6d8/run_productionready_distributed_qwen3_locally_via/
| false | false |
self
| 6 | null |
new user here. model is failing to load.
| 2 |
Greetings, I wanted to try running a local LLM, so with the help of ChatGPT I installed Gemma 2 2B in LM Studio, but it keeps saying "model failed to load".
What should I do? Should I tweak something in the 2nd pic?
| 2025-04-29T10:11:52 |
https://www.reddit.com/gallery/1kakdyn
|
Both-Drama-8561
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kakdyn
| false | null |
t3_1kakdyn
|
/r/LocalLLaMA/comments/1kakdyn/new_user_here_model_is_failing_to_load/
| false | false | 2 | null |
|
We could
| 0 |
Ok, hear me out.
We keep quantizing these models to remove at least half the bits. What if, instead of downsizing the model, you embedded another model in the bits that would otherwise be trimmed?
I know it would create some complications where full-bit-depth numbers come into play in GGUFs, and the final file would be bigger.
Anyway, that aside.
The two models would cohabit in memory, so they could run inference in parallel over the same context.
This could allow a lot of stuff. Maybe the models would have to be co-trained, or maybe we could slap four random Q4s together and take averages or something, I don't know.
I'm not exactly sure how it all comes together inside the math of the LLM.
Good morning. I'd better drive to work.
| 2025-04-29T10:29:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kakn7a/we_could/
|
aseichter2007
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kakn7a
| false | null |
t3_1kakn7a
|
/r/LocalLLaMA/comments/1kakn7a/we_could/
| false | false |
self
| 0 | null |
A quiet tip: a secret money-saving channel for OpenAI users
| 1 |
[removed]
| 2025-04-29T10:31:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kako82/偷偷告诉你openai用户省钱的秘密通道/
|
True-Foundation-3697
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kako82
| false | null |
t3_1kako82
|
/r/LocalLLaMA/comments/1kako82/偷偷告诉你openai用户省钱的秘密通道/
| false | false |
self
| 1 | null |
Fixed Qwen3 Jinja template
| 1 |
[removed]
| 2025-04-29T10:41:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaktc4/fixed_qwen3_jinja_template/
|
KraiiFox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaktc4
| false | null |
t3_1kaktc4
|
/r/LocalLLaMA/comments/1kaktc4/fixed_qwen3_jinja_template/
| false | false |
self
| 1 | null |
Fixed qwen3 template.
| 1 |
[removed]
| 2025-04-29T10:43:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kakut4/fixed_qwen3_template/
|
KraiiFox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kakut4
| false | null |
t3_1kakut4
|
/r/LocalLLaMA/comments/1kakut4/fixed_qwen3_template/
| false | false |
self
| 1 | null |
Fixed Qwen 3 Jinja template.
| 24 |
For those getting the "unable to parse chat template" error:
[https://pastebin.com/DmZEJxw8](https://pastebin.com/DmZEJxw8)
Save it to a file and pass the flag --chat-template-file <filename> to use it.
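For example, with llama.cpp's server (the model path and template filename here are assumptions):

```shell
# Hedged sketch: load the fixed Jinja template from a file.
./llama-server -m Qwen3-30B-A3B-Q4_K_M.gguf --chat-template-file qwen3-template.jinja
```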
| 2025-04-29T10:45:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kakw2c/fixed_qwen_3_jinja_template/
|
KraiiFox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kakw2c
| false | null |
t3_1kakw2c
|
/r/LocalLLaMA/comments/1kakw2c/fixed_qwen_3_jinja_template/
| false | false |
self
| 24 |
{preview: external JPEG thumbnail, source 150×150 (image metadata trimmed)}
|
Bartowski qwen3 14b Q4_K_M uses almost no ram?
| 3 |
I'm running this model on a macbook with ollama and open webui in non thinking mode. The activity monitor shows ollama using 469mb of ram. What kind of sorcery is this?
| 2025-04-29T10:46:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kakwko/bartowski_qwen3_14b_q4_k_m_uses_almost_no_ram/
|
No-Report-1805
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kakwko
| false | null |
t3_1kakwko
|
/r/LocalLLaMA/comments/1kakwko/bartowski_qwen3_14b_q4_k_m_uses_almost_no_ram/
| false | false |
self
| 3 | null |
Your dream dataset
| 1 |
[removed]
| 2025-04-29T10:52:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kakzm2/your_dream_dataset/
|
Efficient-Pay-1258
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kakzm2
| false | null |
t3_1kakzm2
|
/r/LocalLLaMA/comments/1kakzm2/your_dream_dataset/
| false | false |
self
| 1 |
{preview: external JPEG, source 1200×630, 6 scaled resolutions (image metadata trimmed)}
|
first Qwen 3 variants available
| 29 |
that was quick ;)
[https://huggingface.co/mlabonne/Qwen3-1.7B-abliterated](https://huggingface.co/mlabonne/Qwen3-1.7B-abliterated)
[https://huggingface.co/mlabonne/Qwen3-0.6B-abliterated](https://huggingface.co/mlabonne/Qwen3-0.6B-abliterated)
| 2025-04-29T10:57:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kal2lq/first_qwen_3_variants_available/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kal2lq
| false | null |
t3_1kal2lq
|
/r/LocalLLaMA/comments/1kal2lq/first_qwen_3_variants_available/
| false | false |
self
| 29 |
{preview: external JPEG, source 1200×648, 6 scaled resolutions (image metadata trimmed)}
|
What's interesting is that Qwen's release is three months behind Deepseek's. So, if you believe Qwen 3 is currently the leader in open source, I don't think that will last, as R2 is on the verge of release. You can see the gap between Qwen 3 and the three-month-old Deepseek R1.
| 69 | 2025-04-29T11:05:07 |
Select_Dream634
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kal789
| false | null |
t3_1kal789
|
/r/LocalLLaMA/comments/1kal789/whats_interesting_is_that_qwens_release_is_three/
| false | false | 69 |
{preview: reddit-hosted PNG, source 867×581, 4 scaled resolutions (image metadata trimmed)}
|
|||
I just realized Qwen3-30B-A3B is all I need for local LLM
| 710 |
After I found out that the new Qwen3-30B-A3B MoE is really slow in Ollama, I decided to try LM Studio instead, and it's working as expected, over 100+ tk/s on a power-limited 4090.
After testing it more, I suddenly realized: this one model is all I need!
I tested translation, coding, data analysis, video, and blog summarization, etc. It performs really well on all categories and is super fast. Additionally, it's very VRAM efficient—I still have 4GB VRAM left after maxing out the context length (Q8 cache enabled).
I used to switch between multiple models of different sizes and quantization levels for different tasks, which is why I stuck with Ollama because of its easy model switching. I also keep using an older version of Open WebUI because the model management is more complex in the latest version.
Now all I need is LM Studio, the latest Open WebUI, and Qwen3-30B-A3B. I can finally free up some disk space and move my huge model library to the backup drive.
| 2025-04-29T11:26:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kalkgi/i_just_realized_qwen330ba3b_is_all_i_need_for/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kalkgi
| false | null |
t3_1kalkgi
|
/r/LocalLLaMA/comments/1kalkgi/i_just_realized_qwen330ba3b_is_all_i_need_for/
| false | false |
self
| 710 | null |
Now that Qwen3 is out, has anybody seen its translation capabilities?
| 21 |
I've only managed to compare 30B-A3B (with thinking) against some synthetic translations of novel text from GLM-4-9B and Deepseek 0314, and it seems wordy but okay. It'd be awesome to see a few more opinions from readers like myself on what they think about it, and about the other models as well!
I tend to do Japanese to English or Korean to English, since I'm usually trying to read ahead of scanlation groups on NovelUpdates, for context.
| 2025-04-29T11:34:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaloxw/now_that_qwen3_is_out_has_anybody_seen_its/
|
JustImmunity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaloxw
| false | null |
t3_1kaloxw
|
/r/LocalLLaMA/comments/1kaloxw/now_that_qwen3_is_out_has_anybody_seen_its/
| false | false |
self
| 21 | null |
<count> ... </count> ... 42 x Qwen3 ... wait ... really?
| 1 |
[removed]
| 2025-04-29T11:48:18 |
https://www.reddit.com/gallery/1kalxua
|
KaanTheChosenOne
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kalxua
| false | null |
t3_1kalxua
|
/r/LocalLLaMA/comments/1kalxua/count_count_42_x_qwen3_wait_really/
| false | false | 1 | null |
|
Counting ... Wait ... 42 x Qwen3? ... That might be the answer ... really?
| 1 |
[removed]
| 2025-04-29T11:50:57 |
https://www.reddit.com/gallery/1kalzmi
|
KaanTheChosenOne
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kalzmi
| false | null |
t3_1kalzmi
|
/r/LocalLLaMA/comments/1kalzmi/counting_wait_42_x_qwen3_that_might_be_the_answer/
| false | false | 1 | null |
|
Qwen3 is really good at MCP/FunctionCall
| 102 |
I've been keeping an eye on the performance of LLMs using MCP. I believe MCP is the key for LLMs to make an impact on real-world workflows, and I've always dreamed of having a local LLM serve as the brain and intelligent core of a smart-home system.
Now, it seems I've found the one. Qwen3 fits the bill perfectly, and it's an absolute delight to use. This is a test of the best local LLMs. I used Cherry Studio, MCP/server-file-system, and the free versions of all the models on OpenRouter, without any extra system prompts. The test is pretty straightforward: I asked the LLMs to write a poem and save it to a specific file. The tricky part is that the models first have to realize they're restricted to a designated directory, so they need to run a query first; then they have to correctly call the MCP file-writing interface. The unified test instruction is:
`Write a poem, an aria, with the theme of expressing my desire to eat hot pot. Write it into a file in a directory that you are allowed to access.`
Here's how these models performed.
|Model/Version|Rating|Key Performance|
|:-|:-|:-|
|**Qwen3-8B**|⭐⭐⭐⭐⭐|🌟 Directly called `list_allowed_directories` and `write_file`, executed smoothly|
|**Qwen3-30B-A3B**|⭐⭐⭐⭐⭐|🌟 Equally clean as Qwen3-8B, textbook-level logic|
|**Gemma3-27B**|⭐⭐⭐⭐⭐|🎵 Perfect workflow + friendly tone, completed task efficiently|
|**Llama-4-Scout**|⭐⭐⭐|⚠️ Tried system path first, fixed format errors after feedback|
|**Deepseek-0324**|⭐⭐⭐|🔁 Checked dirs but wrote to invalid path initially, finished after retries|
|**Mistral-3.1-24B**|⭐⭐💫|🤔 Created dirs correctly but kept deleting line breaks repeatedly|
|**Gemma3-12B**|⭐⭐|💔 Kept trying to read non-existent `hotpot_aria.txt`, gave up apologizing|
|**Deepseek-R1**|❌|🚫 Forced write to invalid Windows `/mnt` path, ignored error messages|
| 2025-04-29T11:57:36 |
https://www.reddit.com/gallery/1kam3sf
|
reabiter
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kam3sf
| false | null |
t3_1kam3sf
|
/r/LocalLLaMA/comments/1kam3sf/qwen3_is_really_good_at_mcpfunctioncall/
| false | false | 102 | null |
|
HugstonOne can create charts now :)
| 1 |
[removed]
| 2025-04-29T12:01:04 |
Trilogix
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kam603
| false | null |
t3_1kam603
|
/r/LocalLLaMA/comments/1kam603/hugstonone_can_create_charts_now/
| false | false | 1 |
{preview: reddit-hosted PNG, source 1823×978, 6 scaled resolutions (image metadata trimmed)}
|
||
How to jailbreak Qwen3-30B-A3B?
| 0 |
Help me jailbreak Qwen3-30B-A3B.
| 2025-04-29T12:08:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kamb50/how_to_jailbreak_qwen330ba3b/
|
tamal4444
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kamb50
| false | null |
t3_1kamb50
|
/r/LocalLLaMA/comments/1kamb50/how_to_jailbreak_qwen330ba3b/
| false | false |
self
| 0 | null |
Simulate an entire Reddit community in minutes
| 1 |
[removed]
| 2025-04-29T12:13:42 |
FitHeron1933
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kamekp
| false | null |
t3_1kamekp
|
/r/LocalLLaMA/comments/1kamekp/simulate_an_entire_reddit_community_in_minutes/
| false | false | 1 |
{preview: reddit-hosted JPEG, source 2161×900, 6 scaled resolutions (image metadata trimmed)}
|
||
Qwen3 function calling is not working at all. Is this my router problem?
| 1 |
Trying to benchmark function-calling performance on Qwen3, but this error occurs on OpenRouter.
Is this a problem with OpenRouter, or with Qwen3?
Is your locally installed Qwen3 working properly with function calling?
```bash
404 No endpoints found that support tool use.
```
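One way to tell whether the 404 comes from the router rather than the model is to send a bare tools request to the endpoint directly. A hedged sketch (the model slug and tool schema are assumptions):

```shell
# Hedged repro sketch: OpenAI-compatible chat completion with a tool attached.
# If this returns 404 "No endpoints found that support tool use", the routed
# providers don't advertise tool support for the slug, regardless of what the
# model itself can do locally.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen/qwen3-30b-a3b",
        "messages": [{"role": "user", "content": "Weather in Paris?"}],
        "tools": [{"type": "function", "function": {"name": "get_weather",
          "parameters": {"type": "object",
            "properties": {"city": {"type": "string"}}}}}]
      }'
```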
| 2025-04-29T12:17:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kamha7/qwen3_function_calling_is_not_working_at_all_is/
|
jhnam88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kamha7
| false | null |
t3_1kamha7
|
/r/LocalLLaMA/comments/1kamha7/qwen3_function_calling_is_not_working_at_all_is/
| false | false |
self
| 1 | null |
OASIS is on PyPI, Run million-agent social simulations in a few minutes
| 1 |
[removed]
| 2025-04-29T12:25:23 |
iamnotdeadnuts
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kammi7
| false | null |
t3_1kammi7
|
/r/LocalLLaMA/comments/1kammi7/oasis_is_on_pypi_run_millionagent_social/
| false | false | 1 |
{preview: reddit-hosted JPEG, source 2161×900, 6 scaled resolutions (image metadata trimmed)}
|
||
Who cares?
| 1 |
[removed]
| 2025-04-29T12:25:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kammoz/who_cares/
|
inquisitivehoover
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kammoz
| false | null |
t3_1kammoz
|
/r/LocalLLaMA/comments/1kammoz/who_cares/
| false | false |
self
| 1 | null |
Difference in Qwen3 quants from providers
| 9 |
I see that besides bartowski there are other providers of quants like unsloth. Do they differ in performance, size etc. or are they all the same?
| 2025-04-29T12:38:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kamvvx/difference_in_qwen3_quants_from_providers/
|
Bitter-College8786
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kamvvx
| false | null |
t3_1kamvvx
|
/r/LocalLLaMA/comments/1kamvvx/difference_in_qwen3_quants_from_providers/
| false | false |
self
| 9 | null |
Help finding links to an online AI frontend
| 0 |
I am looking for links to any online frontend (hosted by someone else, public URL), that is accessible via a mobile (ios) browser (safari/chrome), where I can plug in an (OpenAI/Anthropic) base_url and api_key and chat with the LLMs that my backend supports. Hosting a frontend (ex: from github) myself is not desirable in my current situation.
I have already tried https://lite.koboldai.net/, but it is very laggy when working with large documents and is filled with bugs. Are there any other frontend links?
| 2025-04-29T12:41:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kamyn5/help_finding_links_to_an_online_ai_frontend/
|
Additional_Top1210
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kamyn5
| false | null |
t3_1kamyn5
|
/r/LocalLLaMA/comments/1kamyn5/help_finding_links_to_an_online_ai_frontend/
| false | false |
self
| 0 | null |
Which of you managed to move away from big tech AIs
| 1 |
[removed]
| 2025-04-29T12:44:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kan0rz/which_of_you_managed_to_move_away_from_big_tech/
|
Archerion0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kan0rz
| false | null |
t3_1kan0rz
|
/r/LocalLLaMA/comments/1kan0rz/which_of_you_managed_to_move_away_from_big_tech/
| false | false |
self
| 1 | null |
Any way to run Qwen3 on an iPhone?
| 2 |
There are a bunch of apps that can load LLMs, but they usually need an update to support new models
Do you know any ios app that can run any version of qwen3?
Thank you
| 2025-04-29T12:46:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kan1tk/any_way_to_run_qwen3_on_an_iphone/
|
Swimming_Nobody8634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kan1tk
| false | null |
t3_1kan1tk
|
/r/LocalLLaMA/comments/1kan1tk/any_way_to_run_qwen3_on_an_iphone/
| false | false |
self
| 2 | null |
Any open source local competition to Sora?
| 4 |
Any open source local competition to Sora? For image and video generation.
| 2025-04-29T12:55:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kan8r4/any_open_source_local_competition_to_sora/
|
maifee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kan8r4
| false | null |
t3_1kan8r4
|
/r/LocalLLaMA/comments/1kan8r4/any_open_source_local_competition_to_sora/
| false | false |
self
| 4 | null |
VRAM Requirements Reference - What can you run with your VRAM? (Contributions welcome)
| 215 |
I created this resource to help me quickly see which models I can run on certain VRAM constraints.
Check it out here: [https://imraf.github.io/ai-model-reference/](https://imraf.github.io/ai-model-reference/)
I'd like this to be as comprehensive as possible. It's on GitHub and contributions are welcome!
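As a rough rule of thumb (my own approximation, not taken from the linked table): a quantized model needs about `params × bits/8` bytes for weights, plus some headroom for KV cache and activations. A minimal sketch, with an assumed 1.2× overhead factor:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes times a fudge factor for
    KV cache / activations. The 1.2 overhead is a guess, not a spec."""
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb * overhead

# e.g. a 7B model at ~4 bits/weight (Q4-class quant):
print(round(estimate_vram_gb(7, 4), 1))  # -> 4.2
```

Real usage varies with context length and quant format, so treat this as a lower-bound sanity check against the table.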
| 2025-04-29T13:10:12 |
Oatilis
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kank02
| false | null |
t3_1kank02
|
/r/LocalLLaMA/comments/1kank02/vram_requirements_reference_what_can_you_run_with/
| false | false | 215 |
{'enabled': True, 'images': [{'id': '-hK5RrRWfydhNn-IqVA35Xj1Y7id3XbFyF-6rY9jBXY', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?width=108&crop=smart&auto=webp&s=e897dbb2e4a0b7a3c95df98788c10e567038c4c4', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?width=216&crop=smart&auto=webp&s=050f65e5e3b2b664d260faf752c90658f2e43474', 'width': 216}, {'height': 144, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?width=320&crop=smart&auto=webp&s=9bfa3215311b5938cfa61456e4e14abcf85387af', 'width': 320}, {'height': 288, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?width=640&crop=smart&auto=webp&s=5cae8973b440ad7f513edeea46ed84c1b9a21d70', 'width': 640}, {'height': 433, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?width=960&crop=smart&auto=webp&s=bfc9e585bdca119c34fe55fdb02885acd580a1b1', 'width': 960}, {'height': 487, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?width=1080&crop=smart&auto=webp&s=51556a6d32a7b234d5743f8be97f32bc27a21984', 'width': 1080}], 'source': {'height': 859, 'url': 'https://preview.redd.it/winyewblwrxe1.jpeg?auto=webp&s=2cef8269b2c80ce59cab8746cafd93e357a030d0', 'width': 1903}, 'variants': {}}]}
|
||
Don't forget to update llama.cpp
| 91 |
If you're like me, you try to avoid recompiling llama.cpp all too often.
In my case, I was 50ish commits behind, but Qwen3 30-A3B q4km from bartowski was still running fine on my 4090, albeit with 86 t/s.
I got curious after reading about 3090s being able to push 100+ t/s
After updating to the latest master, llama-bench failed to allocate to CUDA :-(
But refreshing bartowski's page, he now specified the tag used to provide the quants, which in my case was `b5200`
After another recompile, I get **160+** t/s
Holy shit indeed - so as always, read the fucking manual :-)
| 2025-04-29T13:20:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kanrt7/dont_forget_to_update_llamacpp/
|
c-rious
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kanrt7
| false | null |
t3_1kanrt7
|
/r/LocalLLaMA/comments/1kanrt7/dont_forget_to_update_llamacpp/
| false | false |
self
| 91 | null |
Qwen3 enable_thinking not working with python's openai
| 0 |
Hi,
How do I set enable\_thinking=False in the call using Python's openai library?
It isn't supported:
Error: Completions.create() got an unexpected keyword argument 'enable\_thinking'
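For what it's worth: the OpenAI Python client rejects keyword arguments it doesn't know, so server-specific options have to go through `extra_body`, which the client merges into the JSON request body. A sketch of the resulting payload — the `chat_template_kwargs` key is what vLLM-style OpenAI-compatible servers expect, which is an assumption; check what your particular backend accepts:

```python
import json

# Equivalent request body to:
#   client.chat.completions.create(
#       model="Qwen3-8B",
#       messages=[{"role": "user", "content": "Hi"}],
#       extra_body={"chat_template_kwargs": {"enable_thinking": False}},
#   )
body = {
    "model": "Qwen3-8B",
    "messages": [{"role": "user", "content": "Hi"}],
    "chat_template_kwargs": {"enable_thinking": False},
}
print(json.dumps(body, indent=2))
```

Passing `enable_thinking` as a top-level kwarg (as in the error above) will always raise `TypeError`; `extra_body` is the escape hatch.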
| 2025-04-29T13:40:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kao749/qwen3_enable_thinking_not_working_with_pythons/
|
Acrobatic_Cat_3448
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kao749
| false | null |
t3_1kao749
|
/r/LocalLLaMA/comments/1kao749/qwen3_enable_thinking_not_working_with_pythons/
| false | false |
self
| 0 | null |
Why are my models from HF twice the listed size in storage space?
| 0 |
Just downloaded the 400GB Qwen3-235B model via the copy pasta'd git clone from the three sea shells on the model page. But on my harddrive it takes up 800GB? How do I prevent this from happening? Should there be an additional flag I use in the command to prevent it? It looks like there is a .git folder that makes up the difference. Why haven't single-file containers for models gone mainstream on HF yet?
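The `.git` directory keeps a second full copy of every LFS object, which is why a plain clone doubles the size; `huggingface-cli download` (or simply deleting `.git` after cloning) avoids that. A stdlib sketch to check how much of a clone is actually model payload versus git bookkeeping:

```python
import os

def payload_size_bytes(path: str) -> int:
    """Total file size under `path`, skipping the .git directory."""
    total = 0
    for root, dirs, files in os.walk(path):
        if ".git" in dirs:
            dirs.remove(".git")  # prune the walk: don't descend into .git
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total
```

Comparing `payload_size_bytes(".")` against `du -sh .` inside the clone makes the `.git` duplication obvious.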
| 2025-04-29T13:46:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaocc8/why_are_my_models_from_hf_twice_the_listed_size/
|
LargelyInnocuous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaocc8
| false | null |
t3_1kaocc8
|
/r/LocalLLaMA/comments/1kaocc8/why_are_my_models_from_hf_twice_the_listed_size/
| false | false |
self
| 0 | null |
Qwen3 Unsloth Dynamic GGUFs + 128K Context + Bug Fixes
| 664 |
Hey [r/Localllama]()! We've uploaded Dynamic 2.0 GGUFs and quants for Qwen3. **ALL Qwen3** models now benefit from [Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) format.
We've also **fixed all chat template & loading issues.** They now work properly on all inference engines (llama.cpp, Ollama, LM Studio, Open WebUI etc.)
* These bugs came from incorrect chat template implementations, **not** the Qwen team. We've informed them, and they’re helping fix it in places like llama.cpp. Small bugs like this happen all the time, and it was through your guy's feedback that we were able to catch this. Some GGUFs had the wrong `chat_ml` template, so they seemed to work but weren't fully correct. All our uploads are now corrected.
* Context length has been extended from 32K to **128K** using native YaRN.
* Some **235B-A22B** quants aren't compatible with iMatrix + Dynamic 2.0 despite many testing. We're uploaded as many standard GGUF sizes as possible and left a few of the iMatrix + Dynamic 2.0 that do work.
* Thanks to your feedback, we now added Q4_NL, Q5_1, Q5_0, Q4_1, and Q4_0 formats.
* ICYMI: Dynamic 2.0 sets new benchmarks for KL Divergence and 5-shot MMLU, making it the best performing quants for running LLMs. [See benchmarks](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs)
* We also uploaded Dynamic safetensors for fine-tuning/deployment. Fine-tuning is technically supported in Unsloth, but please wait for the official announcement coming very soon.
* We made a detailed guide on how to run Qwen3 (including 235B-A22B) with official settings: [https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune)
**Qwen3 - Official Settings:**
|Setting|Non-Thinking Mode|Thinking Mode|
|:-|:-|:-|
|**Temperature**|0.7|0.6|
|**Min\_P**|0.0 (optional, but 0.01 works well; llama.cpp default is 0.1)|0.0|
|**Top\_P**|0.8|0.95|
|**TopK**|20|20|
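The same official settings, expressed as config fragments (values taken straight from the table above; key names follow common sampler conventions and may differ per inference engine):

```python
# Official Qwen3 sampling presets, mirroring the table above.
THINKING = {"temperature": 0.6, "min_p": 0.0, "top_p": 0.95, "top_k": 20}
NON_THINKING = {"temperature": 0.7, "min_p": 0.0, "top_p": 0.8, "top_k": 20}
```

Handy for dropping into a server request or a generation-config file without re-reading the table.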
**Qwen3 - Unsloth Dynamic 2.0 Uploads -with optimal configs:**
|Qwen3 variant|GGUF|GGUF (128K Context)|Dynamic 4-bit Safetensor|
|:-|:-|:-|:-|
|**0.6B**|[0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF)|[0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-128K-GGUF)|[0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-unsloth-bnb-4bit)|
|**1.7B**|[1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF)|[1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-128K-GGUF)|[1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit)|
|**4B**|[4B](https://huggingface.co/unsloth/Qwen3-4B-GGUF)|[4B](https://huggingface.co/unsloth/Qwen3-4B-128K-GGUF)|[4B](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit)|
|**8B**|[8B](https://huggingface.co/unsloth/Qwen3-8B-GGUF)|[8B](https://huggingface.co/unsloth/Qwen3-8B-128K-GGUF)|[8B](https://huggingface.co/unsloth/Qwen3-8B-unsloth-bnb-4bit)|
|**14B**|[14B](https://huggingface.co/unsloth/Qwen3-14B-GGUF)|[14B](https://huggingface.co/unsloth/Qwen3-14B-128K-GGUF)|[14B](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit)|
|**30B-A3B**|[30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF)|[30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF)||
|**32B**|[32B](https://huggingface.co/unsloth/Qwen3-32B-GGUF)|[32B](https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF)|[32B](https://huggingface.co/unsloth/Qwen3-32B-unsloth-bnb-4bit)|
Also wanted to give a huge shoutout to the Qwen team for helping us and the open-source community with their incredible team support! And of course thank you to you all for reporting and testing the issues with us! :)
| 2025-04-29T13:48:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaodxu/qwen3_unsloth_dynamic_ggufs_128k_context_bug_fixes/
|
danielhanchen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaodxu
| false | null |
t3_1kaodxu
|
/r/LocalLLaMA/comments/1kaodxu/qwen3_unsloth_dynamic_ggufs_128k_context_bug_fixes/
| false | false |
self
| 664 |
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
|
Recommendation for Mailadresscheck
| 1 |
[removed]
| 2025-04-29T13:53:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaohxy/recommendation_for_mailadresscheck/
|
PriceInitial2373
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaohxy
| false | null |
t3_1kaohxy
|
/r/LocalLLaMA/comments/1kaohxy/recommendation_for_mailadresscheck/
| false | false |
self
| 1 | null |
Best llm for 4*a6000
| 1 |
[removed]
| 2025-04-29T14:15:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaozxi/best_llm_for_4a6000/
|
AttentionConnect3567
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaozxi
| false | null |
t3_1kaozxi
|
/r/LocalLLaMA/comments/1kaozxi/best_llm_for_4a6000/
| false | false |
self
| 1 | null |
Waiting for Qwen-3-30B-A3B AWQ Weights and Benchmarks – Any Updates? Thank you
| 16 |
I'm amazed that a 3B active parameter model can rival a 32B parameter one! Really eager to see real-world evaluations, especially with quantization like AWQ. I know AWQ takes time since it involves identifying active parameters and generating weights, but I’m hopeful it’ll deliver. This could be a game-changer!
Also, the performance of tiny models like 4B is impressive. Not every use case needs a massive model. Putting a classifier in front to route tasks to different models could deliver a lot on modest hardware.
Anyone actively working on these AWQ weights or benchmarks? Thanks!
| 2025-04-29T14:15:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kap09r/waiting_for_qwen330ba3b_awq_weights_and/
|
appakaradi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kap09r
| false | null |
t3_1kap09r
|
/r/LocalLLaMA/comments/1kap09r/waiting_for_qwen330ba3b_awq_weights_and/
| false | false |
self
| 16 | null |
How to make prompt processing faster in llama.cpp?
| 2 |
I'm using a 4070 12G and 32G DDR5 ram. This is the command I use:
\`.\\build\\bin\\llama-server.exe -m D:\\llama.cpp\\models\\Qwen3-30B-A3B-UD-Q3\_K\_XL.gguf -c 32768 --port 9999 -ngl 99 --no-webui --device CUDA0 -fa -ot ".ffn\_.\*\_exps.=CPU"\`
And for long prompts it takes over a minute to process, which is a pain in the ass:
\> prompt eval time = 68442.52 ms / 29933 tokens ( 2.29 ms per token, 437.35 tokens per second)
\> eval time = 19719.89 ms / 398 tokens ( 49.55 ms per token, 20.18 tokens per second)
\> total time = 88162.41 ms / 30331 tokens
Is there any approach to increase prompt processing speed? Only use \~5G vram, so I suppose there's room for improvement.
| 2025-04-29T14:21:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kap54w/how_to_make_prompt_processing_faster_in_llamacpp/
|
Conscious_Chef_3233
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kap54w
| false | null |
t3_1kap54w
|
/r/LocalLLaMA/comments/1kap54w/how_to_make_prompt_processing_faster_in_llamacpp/
| false | false |
self
| 2 | null |
What are all the problems with model distillation? Are the distilled models being used much in production compared to pure models?
| 1 |
basically the title. I don't have stats to back my question, but as far as I have explored, distilled models are seemingly used more by individuals. Enterprises prefer the raw model. Is there any technical bottleneck to the use of distillation?
I saw another Reddit thread saying that a distilled model takes as much memory as the training phase. If yes, why?
I know it's such a newbie question, but I couldn't find resources for my question except papers that overcomplicate the things I want to understand.
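On the memory point: during distillation the teacher and student are typically both resident at once — teacher in inference mode, student in training mode — which is why peak memory looks like (or exceeds) ordinary training of the student. A pure-Python sketch of the soft-label loss used in classic knowledge distillation (Hinton-style; heavily simplified, no frameworks):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened probability distribution over logits."""
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    The student is trained to drive this toward zero."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits -> zero loss; diverging logits -> positive loss.
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # -> 0.0
```

Computing `teacher_logits` requires a forward pass through the full teacher for every batch, hence the training-scale memory footprint (unless teacher outputs are precomputed offline).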
| 2025-04-29T14:36:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaphye/what_are_all_the_problems_with_model_distillation/
|
Immediate_Ad9718
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaphye
| false | null |
t3_1kaphye
|
/r/LocalLLaMA/comments/1kaphye/what_are_all_the_problems_with_model_distillation/
| false | false |
self
| 1 | null |
Qwen 3: A Reality Check (fanboys, this isn't for you)
| 0 |
Some things you should know before filling up your SSD with these new models:
1. There’s no significant gain in multilingual capabilities (if there’s any gain at all)
2. All models start by "thinking", and will flood your context with nonsense like "Hmm...", "Oh!...", "Wait..." But thankfully, this can be disabled using `/no_think` in the system prompt
3. From 0.6 to 8B, none of them outperforms Gemma. Use Gemma 2 2B for smaller sizes and Gemma 3 4B for the rest. We don’t even need to go up to Gemma 3 12B. As for the larger models, I spared myself and didn’t even bother downloading them for testing
In short, don’t waste your time downloading them. They’re not better than what we already had.
"Oh, but I saw a benchmark that..."
I think we’re old enough to understand that every new model is entirely focused on scoring well in benchmarks, which is far from actually improving real-world, day-to-day usage.
If you’re still curious, just use the versions available online.
Test all models from 0.6 to 8B at the highest quantization available.
| 2025-04-29T14:37:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kapiok/qwen_3_a_reality_check_fanboys_this_isnt_for_you/
|
CaptainCivil7097
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapiok
| false | null |
t3_1kapiok
|
/r/LocalLLaMA/comments/1kapiok/qwen_3_a_reality_check_fanboys_this_isnt_for_you/
| false | false |
self
| 0 | null |
I really like the Qwen3-8B model
| 1 |
[removed]
| 2025-04-29T14:38:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kapjl5/i_really_like_the_qwen38b_model/
|
Delicious_Current269
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapjl5
| false | null |
t3_1kapjl5
|
/r/LocalLLaMA/comments/1kapjl5/i_really_like_the_qwen38b_model/
| false | false |
self
| 1 | null |
Running Qwen3-30B-A3B on ARM CPU of Single-board computer
| 92 | 2025-04-29T14:39:19 |
https://v.redd.it/cyidinq8csxe1
|
Inv1si
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapjwa
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cyidinq8csxe1/DASHPlaylist.mpd?a=1748529573%2CYmE0MmRmYzRjOGQxOTVlYmRhN2ZmZWIxZjc2MTRmMzliMGQzNThmNWNjOGRhZDI1NGEwZDhhOWJiN2QzMzdmYg%3D%3D&v=1&f=sd', 'duration': 90, 'fallback_url': 'https://v.redd.it/cyidinq8csxe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cyidinq8csxe1/HLSPlaylist.m3u8?a=1748529573%2CNjJhMDE0YTA0NjllNDQ3M2YzZTZlNzE1NDE1ZTZiODczYzlhYWNmMThiMmFkMjVlNGM4NjIwNWI3YzJiMzhjNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cyidinq8csxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kapjwa
|
/r/LocalLLaMA/comments/1kapjwa/running_qwen330ba3b_on_arm_cpu_of_singleboard/
| false | false | 92 |
{'enabled': False, 'images': [{'id': 'aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?width=108&crop=smart&format=pjpg&auto=webp&s=0724958e241f5d5475e0750d8741d2b00a4fd15d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?width=216&crop=smart&format=pjpg&auto=webp&s=03f9fe95f912688c1365386852e8fc0ee47fc935', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?width=320&crop=smart&format=pjpg&auto=webp&s=18fa719843a8beb0341da810425c4bf806a77dd0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?width=640&crop=smart&format=pjpg&auto=webp&s=4fb33a71a1176d67b1c5d0d55b0adbf40c47e80a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?width=960&crop=smart&format=pjpg&auto=webp&s=37cab58c4aaec831525ecd3a154be8e175c7ab2d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1849722ff3874ee16a141c8a2f2b3faec5f01031', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aWE3aHZwcThjc3hlMd4Ogkh7a8SmmiEqmgn4vXegHkXYe3lpup27kwb7AKZU.png?format=pjpg&auto=webp&s=e2a1b991871a5eb5c977b6008f0797c1c984d9b1', 'width': 1920}, 'variants': {}}]}
|
||
What are all the problems with model distillation? Are the distilled models being used much in production compared to pure models?
| 2 |
basically the title. I don't have stats to back my question, but as far as I have explored, distilled models are seemingly used more by individuals. Enterprises prefer the raw model. Is there any technical bottleneck to the use of distillation?
I saw another Reddit thread saying that a distilled model takes as much memory as the training phase. If yes, why?
I know it's such a newbie question, but I couldn't find resources for my question except papers that overcomplicate the things I want to understand.
| 2025-04-29T14:39:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kapkaf/what_are_all_the_problems_with_model_distillation/
|
Immediate_Ad9718
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapkaf
| false | null |
t3_1kapkaf
|
/r/LocalLLaMA/comments/1kapkaf/what_are_all_the_problems_with_model_distillation/
| false | false |
self
| 2 | null |
I really like the Qwen3-8B model
| 1 |
[removed]
| 2025-04-29T14:41:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaplyk/i_really_like_the_qwen38b_model/
|
Delicious_Current269
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaplyk
| false | null |
t3_1kaplyk
|
/r/LocalLLaMA/comments/1kaplyk/i_really_like_the_qwen38b_model/
| false | false |
self
| 1 | null |
What sites hosting largest newest qwen?
| 2 |
For chatting and testing purpose
| 2025-04-29T14:43:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kapnk3/what_sites_hosting_largest_newest_qwen/
|
CacheConqueror
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapnk3
| false | null |
t3_1kapnk3
|
/r/LocalLLaMA/comments/1kapnk3/what_sites_hosting_largest_newest_qwen/
| false | false |
self
| 2 | null |
Qwen3 1.7b is not smarter than qwen2.5 1.5b using quants that give the same token speed
| 2 |
I ran my own benchmark and that’s the conclusion. They’re about the same. Did anyone else get similar results?
I disabled thinking (/no_think)
| 2025-04-29T14:47:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kapqxk/qwen3_17b_is_not_smarter_than_qwen25_15b_using/
|
Dean_Thomas426
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapqxk
| false | null |
t3_1kapqxk
|
/r/LocalLLaMA/comments/1kapqxk/qwen3_17b_is_not_smarter_than_qwen25_15b_using/
| false | false |
self
| 2 | null |
Its been a while, did anyone come closer to personal/jarvis like AI?
| 1 |
[removed]
| 2025-04-29T14:48:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaprfr/its_been_a_while_did_anyone_come_closer_to/
|
its_kanwischer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaprfr
| false | null |
t3_1kaprfr
|
/r/LocalLLaMA/comments/1kaprfr/its_been_a_while_did_anyone_come_closer_to/
| false | false |
self
| 1 | null |
Dynamic Multi-Function Calling Locally with Gemma 3 + Ollama – Full Demo Walkthrough
| 3 |
Hi everyone! 👋
I recently worked on **dynamic function calling** using **Gemma 3 (1B)** running **locally** via **Ollama** — allowing the LLM to **trigger real-time Search, Translation, and Weather retrieval** dynamically based on user input.
**Demo Video:**
[Demo](https://reddit.com/link/1kapu5u/video/18muxz65esxe1/player)
**Dynamic Function Calling Flow Diagram :**
https://preview.redd.it/3nxoand7esxe1.png?width=959&format=png&auto=webp&s=bb56d5bed7f07c2957209362175a31fffd81645c
Instead of only answering from memory, the model smartly decides when to:
🔍 Perform a **Google Search** (using [Serper.dev](http://serper.dev/) API)
🌐 **Translate text** live (using MyMemory API)
⛅ **Fetch weather** in real-time (using OpenWeatherMap API)
🧠 **Answer directly** if internal memory is sufficient
This showcases how **structured function calling** can make local LLMs smarter and much more flexible!
💡 **Key Highlights**:
✅ JSON-structured function calls for safe external tool invocation
✅ Local-first architecture — no cloud LLM inference
✅ Ollama + Gemma 3 1B combo works great even on modest hardware
✅ Fully modular — easy to plug in more tools beyond search, translate, weather
🛠 **Tech Stack**:
⚡ [Gemma 3 (1B)](https://ollama.com/library/gemma3:1b) via **Ollama**
⚡ **Gradio** (Chatbot Frontend)
⚡ [**Serper.dev**](http://serper.dev/) **API** (Search)
⚡ **MyMemory API** (Translation)
⚡ **OpenWeatherMap API** (Weather)
⚡ **Pydantic + Python** (Function parsing & validation)
📌 **Full blog + complete code walkthrough**: [sridhartech.hashnode.dev/dynamic-multi-function-calling-locally-with-gemma-3-and-ollama](https://sridhartech.hashnode.dev/dynamic-multi-function-calling-locally-with-gemma-3-and-ollama)
Would love to hear your thoughts !
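A minimal sketch of the routing step described above — hypothetical tool names and stub handlers standing in for the real Serper/MyMemory/OpenWeatherMap calls:

```python
import json

# Stub handlers; real ones would hit the APIs listed in the post.
TOOLS = {
    "search": lambda q: f"search results for {q!r}",
    "translate": lambda q: f"translation of {q!r}",
    "weather": lambda q: f"weather in {q!r}",
}

def dispatch(llm_output: str) -> str:
    """Parse the model's JSON-structured call and route it to a tool;
    fall back to treating the output as a direct answer from memory."""
    try:
        call = json.loads(llm_output)
        return TOOLS[call["function"]](call["argument"])
    except (json.JSONDecodeError, KeyError):
        return llm_output  # model answered directly

print(dispatch('{"function": "weather", "argument": "Paris"}'))
```

The Pydantic validation mentioned in the stack would replace the bare `json.loads` + `KeyError` handling with schema checks.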
| 2025-04-29T14:51:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kapu5u/dynamic_multifunction_calling_locally_with_gemma/
|
srireddit2020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kapu5u
| false | null |
t3_1kapu5u
|
/r/LocalLLaMA/comments/1kapu5u/dynamic_multifunction_calling_locally_with_gemma/
| false | false | 3 | null |
|
how to summarize unlimited size videos with AI locally for free?
| 0 |
how to summarize unlimited size videos with AI locally for free?
| 2025-04-29T15:05:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaq6cy/how_to_summarize_unlimited_size_videos_with_ai/
|
Glum_Thing5808
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaq6cy
| false | null |
t3_1kaq6cy
|
/r/LocalLLaMA/comments/1kaq6cy/how_to_summarize_unlimited_size_videos_with_ai/
| false | false |
self
| 0 | null |
Llama 4 reasoning 14b model releasing today
| 1 |
[deleted]
| 2025-04-29T15:18:16 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqhit
| false | null |
t3_1kaqhit
|
/r/LocalLLaMA/comments/1kaqhit/llama_4_reasoning_14b_model_releasing_today/
| false | false |
default
| 1 | null |
||
Llama 4 reasoning 17b model releasing today
| 547 | 2025-04-29T15:18:43 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqhxy
| false | null |
t3_1kaqhxy
|
/r/LocalLLaMA/comments/1kaqhxy/llama_4_reasoning_17b_model_releasing_today/
| false | false | 547 |
{'enabled': True, 'images': [{'id': 'vp3QWYZPPVZaTd3MtfKRk28XbuRts3kSTVAifl4_5uU', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?width=108&crop=smart&auto=webp&s=ef48528058c676a8e9958db321af26538c67c64e', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?width=216&crop=smart&auto=webp&s=b5add04b9aa5cda74cdc4824aa2a7ddc97d80f63', 'width': 216}, {'height': 342, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?width=320&crop=smart&auto=webp&s=128abaa765440de656a55276488cb241e587eb80', 'width': 320}, {'height': 685, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?width=640&crop=smart&auto=webp&s=5ff1abe7a5307a69a23eb4ca02f7f5cbb5bafaf4', 'width': 640}, {'height': 1027, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?width=960&crop=smart&auto=webp&s=8d0b33c946d69c0430eab381808d8a9df93823d3', 'width': 960}, {'height': 1156, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?width=1080&crop=smart&auto=webp&s=f9c0e522fb5ecdac1001b3d133b304c6451c4efb', 'width': 1080}], 'source': {'height': 1156, 'url': 'https://preview.redd.it/hy71sz0sjsxe1.jpeg?auto=webp&s=81ed55b37d32e2055a3735b89089e87a462acb40', 'width': 1080}, 'variants': {}}]}
|
|||
Qwen 3 8B, 14B, 32B, 30B-A3B & 235B-A22B Tested
| 85 |
[https://www.youtube.com/watch?v=GmE4JwmFuHk](https://www.youtube.com/watch?v=GmE4JwmFuHk)
**Score Tables with Key Insights:**
* These are generally very very good models.
* They all seem to struggle a bit in non-English languages. If you take out the non-English questions from the dataset, the scores will rise about 5-10 points across the board.
* Coding is top notch, even with the smaller models.
* I have not yet tested the 0.6B, 1.7B and 4B; that will come soon. In my experience, for the use cases I cover, 8B is the bare minimum, but I have been surprised in the past. I'll post soon!
**Test 1: Harmful Question Detection** (Timestamp \~3:30)
|Model|Score|
|:-|:-|
|qwen/qwen3-32b:free|100.00|
|qwen/qwen3-235b-a22b-04-28:free|95.00|
|qwen/qwen3-8b:free|80.00|
|qwen/qwen3-30b-a3b-04-28:free|80.00|
|qwen/qwen3-14b:free|75.00|
**Test 2: Named Entity Recognition (NER)** (Timestamp \~5:56)
|Model|Score|
|:-|:-|
|qwen/qwen3-30b-a3b-04-28:free|90.00|
|qwen/qwen3-32b:free|80.00|
|qwen/qwen3-8b:free|80.00|
|qwen/qwen3-14b:free|80.00|
|qwen/qwen3-235b-a22b-04-28:free|75.00|
|*Note: multilingual translation seemed to be the main source of errors, especially Nordic languages.*||
**Test 3: SQL Query Generation** (Timestamp \~8:47)
|Model|Score|Key Insight|
|:-|:-|:-|
|qwen/qwen3-235b-a22b-04-28:free|100.00|Excellent coding performance,|
|qwen/qwen3-14b:free|100.00|Excellent coding performance,|
|qwen/qwen3-32b:free|100.00|Excellent coding performance,|
|qwen/qwen3-30b-a3b-04-28:free|95.00|Very strong performance from the smaller MoE model.|
|qwen/qwen3-8b:free|85.00|Good performance, comparable to other 8b models.|
**Test 4: Retrieval Augmented Generation (RAG)** (Timestamp \~11:22)
|Model|Score|
|:-|:-|
|qwen/qwen3-32b:free|92.50|
|qwen/qwen3-14b:free|90.00|
|qwen/qwen3-235b-a22b-04-28:free|89.50|
|qwen/qwen3-8b:free|85.00|
|qwen/qwen3-30b-a3b-04-28:free|85.00|
|*Note: Key issue is models responding in English when asked to respond in the source language (e.g., Japanese).*||
| 2025-04-29T15:18:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaqi3k/qwen_3_8b_14b_32b_30ba3b_235ba22b_tested/
|
Ok-Contribution9043
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqi3k
| false | null |
t3_1kaqi3k
|
/r/LocalLLaMA/comments/1kaqi3k/qwen_3_8b_14b_32b_30ba3b_235ba22b_tested/
| false | false |
self
| 85 |
{'enabled': False, 'images': [{'id': 'eWR9DZhle6RYjYn8fBiue87N9i0Mr0-k6vmZLPnJn7o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wRH0VvNpyKQwcLypwHqolgHZWeQ48hkfVcSaXAYJ0V4.jpg?width=108&crop=smart&auto=webp&s=72fc17ab37c23f3d63109c0554b8b9e2a89fed99', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wRH0VvNpyKQwcLypwHqolgHZWeQ48hkfVcSaXAYJ0V4.jpg?width=216&crop=smart&auto=webp&s=426e6e7b3dfacf98d3b4d4f01d4e9868a0deaf0e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wRH0VvNpyKQwcLypwHqolgHZWeQ48hkfVcSaXAYJ0V4.jpg?width=320&crop=smart&auto=webp&s=8659068d53f6f342c74c252f83dcff321708b5e5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wRH0VvNpyKQwcLypwHqolgHZWeQ48hkfVcSaXAYJ0V4.jpg?auto=webp&s=1603a34585cf66c83f7d3ee11184cdc78174e1ea', 'width': 480}, 'variants': {}}]}
|
"draw a portrait of you using Matplotlib." a collection of <4b LLM models self-portraits.
| 1 | 2025-04-29T15:20:47 |
https://www.reddit.com/gallery/1kaqjqk
|
JLeonsarmiento
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqjqk
| false | null |
t3_1kaqjqk
|
/r/LocalLLaMA/comments/1kaqjqk/draw_a_portrait_of_you_using_matplotlib_a/
| false | false | 1 | null |
||
Anyone tried giving their agent an LLM evaluation tool to self-correct? Here's a demo workflow for a tool-agent-user interaction benchmark
| 1 | 2025-04-29T15:23:20 |
https://v.redd.it/eyw4jk4iksxe1
|
fortunemaple
|
/r/LocalLLaMA/comments/1kaqlye/anyone_tried_giving_their_agent_an_llm_evaluation/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqlye
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/eyw4jk4iksxe1/DASHPlaylist.mpd?a=1748661807%2CNWM1Y2E1YmEwNTVkZjA1NGJlMzAxMTBjZmU3YmZiNWYzMDZiNjI3Yjc5ZTk2N2U3ZDY2NzVhMGMwMDU0NWY0Nw%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/eyw4jk4iksxe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/eyw4jk4iksxe1/HLSPlaylist.m3u8?a=1748661807%2CZmEzN2Q1OTZiNmFjNTZkNzZhYWI3OGMyYzQ2YjRiNzVkZTUyZDJjYWY1ZmQ1YTE1NGYzZDQzY2FhZGRiZDVlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eyw4jk4iksxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kaqlye
|
/r/LocalLLaMA/comments/1kaqlye/anyone_tried_giving_their_agent_an_llm_evaluation/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=108&crop=smart&format=pjpg&auto=webp&s=24fe9450115593bf1c55cce360ff853e5e3d6834', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=216&crop=smart&format=pjpg&auto=webp&s=0359e7600aae7a95540bb0753c8ea6dc8f43b066', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=320&crop=smart&format=pjpg&auto=webp&s=1a2de4b9d9a28b8e5405941591c0cbc2ba942e64', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=640&crop=smart&format=pjpg&auto=webp&s=6736d886c1addddc359196d06b228b7fa230f80d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=960&crop=smart&format=pjpg&auto=webp&s=ba9a9c23947c28f52a93998d5a4110de87ca48c6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=1080&crop=smart&format=pjpg&auto=webp&s=06bb549d2df2298bb6ae296b83f5ab387073dac4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eXRyOTVuNGlrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?format=pjpg&auto=webp&s=54976eb6cd173d50dd8104bc73c417ffec22381d', 'width': 1920}, 'variants': {}}]}
|
||
😲 Speed with Qwen3 on Mac Against Various Prompt Sizes!
| 5 |
First, we all know prompt processing on a Mac is slower than on Nvidia GPUs. Let's just get that out of the way.
In my previous experience, speed between MLX and Llama.cpp was pretty much neck and neck, with a slight edge to MLX. Because of that, I've been mainly using Ollama for convenience.
Recently, I asked about prompt processing speed, and an MLX developer mentioned that prompt speed was significantly optimized starting with MLX 0.25.0.
Here’s a comparison between MLX (8-bit) and LCP (Q8_0) using Qwen3-30B-A3B, running on an M3 Max with 64GB of RAM:
I pulled the latest commits for both engines available as of this morning.
* MLX-LM: 0.24.0: with MLX: 0.25.1.dev20250428+99b986885
* LCP 5215 (5f5e39e1): loading full weights to GPU and flash attention enabled.
| Engine | Prompt Tokens | Prompt Processing Speed (t/s) | Generated Tokens | Token Generation Speed (t/s) | Total Execution Time |
| ----- | --- | --- | --- | --- | --- |
| MLX | 681 | 1160.636 | 939 | 68.016 | 24s |
| LCP | 680 | 320.66 | 1255 | 57.26 | 38s |
| MLX | 774 | 1193.223 | 1095 | 67.620 | 25s |
| LCP | 773 | 469.05 | 1165 | 56.04 | 24s |
| MLX | 1165 | 1276.406 | 1194 | 66.135 | 27s |
| LCP | 1164 | 395.88 | 939 | 55.61 | 22s |
| MLX | 1498 | 1309.557 | 1373 | 64.622 | 31s |
| LCP | 1497 | 467.97 | 1061 | 55.22 | 24s |
| MLX | 2178 | 1336.514 | 1395 | 62.485 | 33s |
| LCP | 2177 | 420.58 | 1422 | 53.66 | 34s |
| MLX | 3254 | 1301.808 | 1241 | 59.783 | 32s |
| LCP | 3253 | 399.03 | 1657 | 51.86 | 42s |
| MLX | 4007 | 1267.555 | 1522 | 60.945 | 37s |
| LCP | 4006 | 442.46 | 1252 | 51.15 | 36s |
| MLX | 6076 | 1188.697 | 1684 | 57.093 | 44s |
| LCP | 6075 | 424.56 | 1446 | 48.41 | 46s |
| MLX | 8050 | 1105.783 | 1263 | 54.186 | 39s |
| LCP | 8049 | 407.96 | 1705 | 46.13 | 59s |
| MLX | 12006 | 966.065 | 1961 | 48.330 | 1m2s |
| LCP | 12005 | 356.43 | 1503 | 42.43 | 1m11s |
| MLX | 16059 | 853.156 | 1973 | 43.580 | 1m18s |
| LCP | 16058 | 332.21 | 1285 | 39.38 | 1m23s |
| MLX | 24036 | 691.141 | 1592 | 34.724 | 1m30s |
| LCP | 24035 | 296.13 | 1666 | 33.78 | 2m13s |
| MLX | 32067 | 570.459 | 1088 | 29.289 | 1m43s |
| LCP | 32066 | 257.69 | 1643 | 29.76 | 3m2s |
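As a rough sanity check (my own estimate, not from the post): a lower bound on total time follows from the two throughput numbers alone, `prompt_tokens / prompt_speed + generated_tokens / generation_speed`. Reported totals are higher because real runs include load, sampling, and other overhead:

```python
def estimated_time(prompt_tokens, pp_speed, gen_tokens, tg_speed):
    """Lower-bound wall time in seconds from prompt-processing and
    token-generation throughput (ignores load/sampling overhead)."""
    return prompt_tokens / pp_speed + gen_tokens / tg_speed

# First MLX row: 681 prompt tokens @ 1160.636 t/s, 939 generated @ 68.016 t/s
print(round(estimated_time(681, 1160.636, 939, 68.016), 1))  # ~14.4s vs 24s reported
```

The gap between the estimate and the reported 24s is the per-run overhead, which stays roughly constant while the throughput-bound portion grows with prompt size.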
| 2025-04-29T15:24:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaqnbj/speed_with_qwen3_on_mac_against_various_prompt/
|
chibop1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqnbj
| false | null |
t3_1kaqnbj
|
/r/LocalLLaMA/comments/1kaqnbj/speed_with_qwen3_on_mac_against_various_prompt/
| false | false |
self
| 5 | null |
Anyone tried giving their agent an LLM evaluation tool to self-correct? Here's a demo workflow for a tool-agent-user benchmark
| 0 | 2025-04-29T15:25:13 |
https://v.redd.it/n71xw8uwksxe1
|
fortunemaple
|
/r/LocalLLaMA/comments/1kaqnlx/anyone_tried_giving_their_agent_an_llm_evaluation/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqnlx
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n71xw8uwksxe1/DASHPlaylist.mpd?a=1748661921%2CNGM2ZjE1ODU2MDA3NDY1ODMzNGE0NWZiODZkYmY1ZTExODZmOGIzYzhmYjZmNzY5YmYyMjBjOTkyYWM0NzE3Nw%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/n71xw8uwksxe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/n71xw8uwksxe1/HLSPlaylist.m3u8?a=1748661921%2CNGViYjc0MWI2ZjE4MmQ1MTAzMTI5YjU4YzdhZmFkYTc1ZTIzMmM5N2RkZjc2NDQ3Y2JmYTQ3ZDg2OTlmMzgzYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n71xw8uwksxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kaqnlx
|
/r/LocalLLaMA/comments/1kaqnlx/anyone_tried_giving_their_agent_an_llm_evaluation/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=108&crop=smart&format=pjpg&auto=webp&s=2e6ccf03de76d6bc7ad7fea5b8e4983b8dec1220', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=216&crop=smart&format=pjpg&auto=webp&s=1388264ebf53cab710b09d8a22d83590e21b3fa1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=320&crop=smart&format=pjpg&auto=webp&s=8d3cf32b141df9fe3b0fda399851f7f269245176', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=640&crop=smart&format=pjpg&auto=webp&s=63b24c0434e03c2c7096fab0a1e6fc940bdcedd6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=960&crop=smart&format=pjpg&auto=webp&s=df32b3ceb3e9da311bb128be2ef6953d4ee9e6bb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c3850fecc3b55176bfe1f94e514d81132b30e8ec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Yms2a3E1dXdrc3hlMRKGyMC4ysPML3P4SqBvKD3oDkUYq0-Q3KVnpd7bQn_G.png?format=pjpg&auto=webp&s=d387a99a9e4f075a0d00b49043b726f44d05967b', 'width': 1920}, 'variants': {}}]}
|
||
Qwen3 EQ-Bench results. Tested: 235b-a22b, 32b, 14b, 30b-a3b.
| 168 |
Links:
[https://eqbench.com/creative\_writing\_longform.html](https://eqbench.com/creative_writing_longform.html)
[https://eqbench.com/creative\_writing.html](https://eqbench.com/creative_writing.html)
[https://eqbench.com/judgemark-v2.html](https://eqbench.com/judgemark-v2.html)
Samples:
[https://eqbench.com/results/creative-writing-longform/qwen\_\_qwen3-235b-a22b\_longform\_report.html](https://eqbench.com/results/creative-writing-longform/qwen__qwen3-235b-a22b_longform_report.html)
[https://eqbench.com/results/creative-writing-longform/qwen\_\_qwen3-32b\_longform\_report.html](https://eqbench.com/results/creative-writing-longform/qwen__qwen3-32b_longform_report.html)
[https://eqbench.com/results/creative-writing-longform/qwen\_\_qwen3-30b-a3b\_longform\_report.html](https://eqbench.com/results/creative-writing-longform/qwen__qwen3-30b-a3b_longform_report.html)
[https://eqbench.com/results/creative-writing-longform/qwen\_\_qwen3-14b\_longform\_report.html](https://eqbench.com/results/creative-writing-longform/qwen__qwen3-14b_longform_report.html)
| 2025-04-29T15:33:57 |
https://www.reddit.com/gallery/1kaqvi5
|
_sqrkl
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqvi5
| false | null |
t3_1kaqvi5
|
/r/LocalLLaMA/comments/1kaqvi5/qwen3_eqbench_results_tested_235ba22b_32b_14b/
| false | false | 168 | null |
|
Qwen3 30B A3B 4_k_m - 2x more token/s boost from ~20 to ~40 by changing the runtime in a 5070ti (16g vram)
| 20 |
IDK why, but I found that switching the runtime to Vulkan gives a 2x boost in tokens/s, which is much more usable than before. The default setting, "CUDA 12," is the worst in my test; even the "CUDA" setting is better than it. Hope it's useful to you!
\*But Vulkan seems to cause noticeable speed loss for Gemma3 27b.
| 2025-04-29T15:35:48 |
https://www.reddit.com/gallery/1kaqx3x
|
Ill-Language4452
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqx3x
| false | null |
t3_1kaqx3x
|
/r/LocalLLaMA/comments/1kaqx3x/qwen3_30b_a3b_4_k_m_2x_more_tokens_boost_from_20/
| false | false | 20 | null |
|
Qwen3 30B A3B seems tough to replace,
| 1 |
Today I tested Qwen3 30B A3B with my internal questionnaires, one of which is a simply constructed mathematical question with an imaginary inflation condition applied to EMI payments. To my surprise, this and many other tests gave more accurate mathematical results than Gemini 2.5 Flash Exp.
Question:
"""suppose there is inflation at 5percent a month and am investing 1000 per month at the end of the 1 month the value of 1000 will be 1000-1000x((5/12)/100) = 995.8, this 995.8 plus 1000 for the second month, so after second months 1995.8-1995.8x((5/12)/100)= 1987.4
3rd month 1987.4 + 1000 = 2987.4,
the. after the 3rd month the value of 2987.4 is ....
this continues till 12th month and the final value of the money we have is = """
The question was purposefully written by my 15-year-old cousin so that it would not be perfect; the AI should be given room to explore ways of interpreting it.
Response:
Deepseek r1
To determine the final value after 12 months of investing $1000 each month with a monthly inflation rate of 5% per annum (0.4167% monthly), we adjust each month's balance by multiplying by 0.995833 (1 - 0.004167). Here's the step-by-step calculation:
1. **Month 1**:
Investment: $1000
After inflation: $1000 × 0.995833 ≈ **$995.83**
2. **Month 2**:
Previous balance: $995.83
New investment: $1000
Total before inflation: $1995.83
After inflation: $1995.83 × 0.995833 ≈ **$1987.51**
3. **Month 3**:
Total before inflation: $1987.51 + $1000 = $2987.51
After inflation: $2987.51 × 0.995833 ≈ **$2975.07**
Continuing this process for all 12 months and truncating values at each step (as per the user's method), the final balance is:
**Final Value after 12 months: ≈ $11,679.40**
---
**Answer:**
The final value of the investment after 12 months is approximately **\$11,679.40**.
Qwen3 30B A3B
To determine the final value of the investment after 12 months, we must account for **monthly inflation** of **5% annually**, which translates to a **monthly inflation rate** of:
$$
r = \frac{5\%}{12} = \frac{0.05}{12} \approx 0.00416667
$$
You are investing **$1000 at the end of each month**, and **each month’s investment is immediately subjected to inflation**. The key process is:
1. At the end of each month, **add $1000** to the current balance.
2. Then **apply the inflation rate** to the total amount.
This creates a **recursive relationship** for the balance at the end of each month:
$$
B_n = (B_{n-1} + 1000) \cdot (1 - r)
$$
Where:
- $ B_n $: Balance at the end of month $ n $
- $ r = 0.00416667 $: Monthly inflation rate
- Initial balance $ B_0 = 0 $
---
### Closed-Form Formula
This recursive process can be modeled using the closed-form formula for a geometric series with monthly contributions and continuous discounting:
$$
B_n = \frac{C \cdot (1 - r)}{r} \cdot \left(1 - (1 - r)^n\right)
$$
Where:
- $ C = 1000 $: Monthly investment
- $ r = 0.00416667 $
- $ n = 12 $: Number of months
Substitute the known values:
$$
B_{12} = \frac{1000 \cdot (1 - 0.00416667)}{0.00416667} \cdot \left(1 - (1 - 0.00416667)^{12}\right)
$$
$$
= \frac{1000 \cdot 0.9958333}{0.00416667} \cdot \left(1 - (0.9958333)^{12}\right)
$$
$$
= 239000 \cdot \left(1 - 0.95107\right)
$$
$$
= 239000 \cdot 0.04893 \approx 11680.00
$$
---
### Manual Verification
By simulating the process month by month and applying the same logic (adding $1000, then applying inflation), the final balance at the end of the 12th month converges to **$11,680.00** when rounded to the nearest dollar.
---
### Final Answer
$$
\boxed{11680}
$$
Gemini 2.5 pro flash exp
The final value of the money you have at the end of the 12th month is approximately $11687.38.
This value represents the real value or purchasing power of your accumulated savings after accounting for the given inflation rate each month. The nominal amount saved is 12 \times 1000 = 12000, so the real value is less than the nominal value due to inflation.
Real answer
11679.91394190
Qwen3 30B A3B came as close as possible to the real answer.
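For reference, the recursion described in the question (add 1000 each month, then apply monthly inflation of 5%/12) can be checked with a few lines of Python:

```python
# Simulate the question's process: each month add 1000,
# then apply the monthly inflation rate of 5%/12.
r = 0.05 / 12
balance = 0.0
for month in range(12):
    balance = (balance + 1000) * (1 - r)
print(round(balance, 2))  # 11679.91, matching the real answer above
```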
| 2025-04-29T15:36:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaqxb2/qwen3_30b_a3b_seems_tough_to_replace/
|
ThenNeighborhood3723
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqxb2
| false | null |
t3_1kaqxb2
|
/r/LocalLLaMA/comments/1kaqxb2/qwen3_30b_a3b_seems_tough_to_replace/
| false | false |
self
| 1 | null |
leak: meta.llama4-reasoning-17b-instruct-v1:0
| 1 |
[removed]
| 2025-04-29T15:39:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaqzv4/leak_metallama4reasoning17binstructv10/
|
celsowm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaqzv4
| false | null |
t3_1kaqzv4
|
/r/LocalLLaMA/comments/1kaqzv4/leak_metallama4reasoning17binstructv10/
| false | false | 1 | null |
|
OASIS an open-source reddit and X million-agent social simulator
| 1 |
[removed]
| 2025-04-29T15:39:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kar08n/oasis_an_opensource_reddit_and_x_millionagent/
|
FitHeron1933
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kar08n
| false | null |
t3_1kar08n
|
/r/LocalLLaMA/comments/1kar08n/oasis_an_opensource_reddit_and_x_millionagent/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'G9lI5m7olJ6kxt-NLJn8v67C8l-uHphUjRN25CP41aI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?width=108&crop=smart&auto=webp&s=db83937bc9941fef2c96b5024c28688f31f2e4c2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?width=216&crop=smart&auto=webp&s=ee66139c6c5ece1de226c552913fe3995a8df832', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?width=320&crop=smart&auto=webp&s=eac01b55b4cb90da5bc6c9ce86f323efd1a1a0c1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?width=640&crop=smart&auto=webp&s=b5d13013b6527fd421fb52a5e7dab9797ed4ec37', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?width=960&crop=smart&auto=webp&s=61573e98d7d03ad3ccff0d04fe576a3b6e2b43ea', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?width=1080&crop=smart&auto=webp&s=eacd2c355bf2765d69879de3b8558fae8ad0cea8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/M_ugwss_smzf0hBvzPYREl2I6H8VdPy6HOQb6EG6qQ4.jpg?auto=webp&s=7826ab281bd51b0add065dc38ac47d991f2f3251', 'width': 1200}, 'variants': {}}]}
|
Multi-GPU folks, Qwen3 sizes (except 235B) might actually be great
| 0 |
There have been plenty of complaints and concerns (especially before the release of the 32B dense model) about how multi-GPU systems were a waste and whatnot, with the most positive post probably being [MoEs are the future!](https://www.reddit.com/r/LocalLLaMA/comments/1k9xsda/moes_are_the_future/). Most were about the 235B model, but plenty about the 30B-A3B model too. Sure, some multi-GPU systems (mine included, as well as most mining-based systems) won't be able to run the biggest model easily, if at all, but there is one alternative not discussed at all: running N+1 models simultaneously.
It's not common, and mostly useless for chat, but plenty good for other tasks (at least for my work). I only remembered/thought about it after reading and replying to [Made Mistral 24B \[...\] recursively argue with itself](https://www.reddit.com/r/LocalLLaMA/comments/1k8yrem/comment/mpa7w3o/) and seeing [good performance when running 30B-A3B CPU-only](https://www.reddit.com/r/LocalLLaMA/comments/1ka8b2u/comment/mpky2km/) (with ollama on Windows, not even using latest commits & self-compilation), but with the current model sizes you can run a panel of models (or [Ensemble](https://en.wikipedia.org/wiki/Ensemble_learning), for older ML folks) simultaneously, each running on a dedicated GPU, *plus one on the CPU*. E.g. if you have 4 GPUs:
* GPU1: Mistral 24B
* GPU2: Qwen2.5-Coder-32B
* GPU3: Gemma3-27B
* GPU4: Qwen3-32B
* **CPU**: Qwen3-30B-A3B
A second, more common way to run multiple models is by using some sort of routing (we do this at work! *kinda*), where you direct each question to the model best suited for that specific task. Maybe you want FIM/autocomplete requests to go to Qwen-Coder, creative writing to go to Gemma3, math stuff to Qwen3, and tool calls to Mistral. Routers can be configured with rules or be trained.
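A minimal rule-based router along these lines might look like the following sketch (the model names and keywords are just placeholders, not a real deployment):

```python
# Toy keyword router (illustrative only): pick a backend model per request.
ROUTES = {
    ("def ", "fix this code", "autocomplete"): "Qwen2.5-Coder-32B",
    ("story", "poem", "creative"): "Gemma3-27B",
    ("prove", "solve", "integral"): "Qwen3-32B",
}
DEFAULT = "Mistral-24B"  # fallback, e.g. for general chat / tool calls

def route(prompt: str) -> str:
    """Return the model name whose keyword list first matches the prompt."""
    p = prompt.lower()
    for keywords, model in ROUTES.items():
        if any(k in p for k in keywords):
            return model
    return DEFAULT

print(route("Write a short story about a dragon"))  # Gemma3-27B
print(route("What's the capital of France?"))       # Mistral-24B
```

Trained routers replace the keyword match with a small classifier, but the dispatch structure is the same.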
If you have it in you, [Harnessing Multiple Large Language Models: A Survey on LLM Ensemble](https://arxiv.org/pdf/2502.18036) (and [their Github page](https://github.com/junchenzhi/Awesome-LLM-Ensemble)) is a good starting point to explore. Have fun!
And here's a bonus table from (slightly dated, but still perhaps more relevant) [LLM-BLENDER: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion](https://arxiv.org/pdf/2306.02561):
https://preview.redd.it/0euih4gnosxe1.png?width=2350&format=png&auto=webp&s=6069dea493126fd7a4a815cf4801beba346e6317
| 2025-04-29T15:47:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kar7lm/multigpu_folks_qwen3_sizes_except_235b_might/
|
kmouratidis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kar7lm
| false | null |
t3_1kar7lm
|
/r/LocalLLaMA/comments/1kar7lm/multigpu_folks_qwen3_sizes_except_235b_might/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'WfBjVOTDLvTk4a6T_e7QVw0TdmuFptsQ_NOJmhlehUk', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/Rc5LUP2E5WLL-FKrTCfYv-tqa_nJ9aSJMVzd5y02RSE.jpg?width=108&crop=smart&auto=webp&s=31e03f808d1c27f48efcb64110cfd883a5a82a9a', 'width': 108}, {'height': 147, 'url': 'https://external-preview.redd.it/Rc5LUP2E5WLL-FKrTCfYv-tqa_nJ9aSJMVzd5y02RSE.jpg?width=216&crop=smart&auto=webp&s=b2590b3470b11d8477c7d53d43dbf5ed5f1f7163', 'width': 216}, {'height': 218, 'url': 'https://external-preview.redd.it/Rc5LUP2E5WLL-FKrTCfYv-tqa_nJ9aSJMVzd5y02RSE.jpg?width=320&crop=smart&auto=webp&s=bd42c3bdc50a50d41fc2de7f27042645bb5a26f9', 'width': 320}], 'source': {'height': 341, 'url': 'https://external-preview.redd.it/Rc5LUP2E5WLL-FKrTCfYv-tqa_nJ9aSJMVzd5y02RSE.jpg?auto=webp&s=e39fa888bc9552a624504dcb57b95e0158ab5b8e', 'width': 500}, 'variants': {}}]}
|
|
Why are literally 90% of posts on frontpage about Qwen?
| 1 |
[removed]
| 2025-04-29T15:59:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1karhfh/why_are_literally_90_of_posts_on_frontpage_about/
|
LosingID_583
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1karhfh
| false | null |
t3_1karhfh
|
/r/LocalLLaMA/comments/1karhfh/why_are_literally_90_of_posts_on_frontpage_about/
| false | false |
self
| 1 | null |
Agentica, AI Function Calling Framework: Can you make function? Then you're AI developer
| 8 | 2025-04-29T16:00:04 |
http://wrtnlabs.io/agentica/articles/can-you-make-function-then-you-are-ai-developer.html
|
jhnam88
|
wrtnlabs.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1kari80
| false | null |
t3_1kari80
|
/r/LocalLLaMA/comments/1kari80/agentica_ai_function_calling_framework_can_you/
| false | false | 8 |
{'enabled': False, 'images': [{'id': 'yqmX4iZQlRy1eu-z9yMJ6FZSNEwkqo7jOZcMT7_f_Ro', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/iO1XGoBIkrAf_PZddXb6hlQ0s7pOHeldfNPF62N4nD0.jpg?width=108&crop=smart&auto=webp&s=090f077a18a730bca72e43057e819bd086ad6e45', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/iO1XGoBIkrAf_PZddXb6hlQ0s7pOHeldfNPF62N4nD0.jpg?width=216&crop=smart&auto=webp&s=72c614c7383c352c003098d0c94378f770c0b27f', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/iO1XGoBIkrAf_PZddXb6hlQ0s7pOHeldfNPF62N4nD0.jpg?width=320&crop=smart&auto=webp&s=9fda679cfffd2029eb5e762dc768c0e5fa86c576', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/iO1XGoBIkrAf_PZddXb6hlQ0s7pOHeldfNPF62N4nD0.jpg?width=640&crop=smart&auto=webp&s=53ff6921cc31d03647b513e8822dcfcdc0a23dc1', 'width': 640}, {'height': 403, 'url': 'https://external-preview.redd.it/iO1XGoBIkrAf_PZddXb6hlQ0s7pOHeldfNPF62N4nD0.jpg?width=960&crop=smart&auto=webp&s=a1a055d8048e34bd92bc462a67141882afbf69ad', 'width': 960}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/iO1XGoBIkrAf_PZddXb6hlQ0s7pOHeldfNPF62N4nD0.jpg?auto=webp&s=f4e6203c4487488e7cd0d2c3e599366e90224378', 'width': 1000}, 'variants': {}}]}
|
||
This is my experience trying to get everything I want set up... Set up.
| 8 | 2025-04-29T16:13:01 |
Flying_Madlad
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kartxb
| false | null |
t3_1kartxb
|
/r/LocalLLaMA/comments/1kartxb/this_is_my_experience_trying_to_get_everything_i/
| false | false | 8 |
{'enabled': True, 'images': [{'id': 'WfCY5gOYjGKEqCuuYiB6083w5VFoZA8RQA5dlX8YTFI', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/ac10q2tgtsxe1.png?width=108&crop=smart&auto=webp&s=56054c45b21c89c7b49baea8dd9f7dc7c022361b', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/ac10q2tgtsxe1.png?width=216&crop=smart&auto=webp&s=522f7e2a614b89a1e24827ecea2b4be8e5a244b7', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/ac10q2tgtsxe1.png?width=320&crop=smart&auto=webp&s=c5b4170a8e28fff024bcb0a0d9fc0498a4f540f3', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/ac10q2tgtsxe1.png?width=640&crop=smart&auto=webp&s=16dace21f284a1896903f23ba83710fcfe5c935f', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/ac10q2tgtsxe1.png?width=960&crop=smart&auto=webp&s=80455f5731d8dd600740f817058f04a4c0193e48', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/ac10q2tgtsxe1.png?auto=webp&s=834077169db038981c56039c775d8d9c5d6c3154', 'width': 1024}, 'variants': {}}]}
|
|||
[P] PPO-inspired Inference Optimization Without Training — 4× Output Quality Boost in Gemini 2.0 vs 2.5 Pro
| 1 |
[removed]
| 2025-04-29T16:25:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kas4we/p_ppoinspired_inference_optimization_without/
|
Apprehensive-Exit301
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kas4we
| false | null |
t3_1kas4we
|
/r/LocalLLaMA/comments/1kas4we/p_ppoinspired_inference_optimization_without/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'deETycvWehK7MlMrJAXX7Axq2FA8yChrCpdm2dGBYRc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?width=108&crop=smart&auto=webp&s=58648b93cf8fed0e56d2fdf39c92d510c116a007', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?width=216&crop=smart&auto=webp&s=bc54930474d21c545913212a3f32fc6fa85e15fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?width=320&crop=smart&auto=webp&s=c74fb2e81bc689a5230184fae111f0d304b59f1b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?width=640&crop=smart&auto=webp&s=d1a137e00ad06306af7e1f77e6504523c1efcf83', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?width=960&crop=smart&auto=webp&s=a25877cf1897ee78fc8bda84381852db392219a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?width=1080&crop=smart&auto=webp&s=bc677703a7352132069303c8c2cba792c6a97791', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZILoLxy5V8eodtWWC2O48J9oDgpZVp9gEp6OkvENr5U.jpg?auto=webp&s=bb54882d199b2956424c32c1c87ea31fd273811a', 'width': 1200}, 'variants': {}}]}
|
Is Qwen 3 the tiny tango?
| 1 |
Ok, not on all models. Some are just as solid as they are dense. But, did we do it, in a way?
https://www.reddit.com/r/LocalLLaMA/s/OhK7sqLr5r
There's a few similarities in concept xo
Love it!
| 2025-04-29T16:27:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kas6bv/is_qwen_3_the_tiny_tango/
|
Sambojin1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kas6bv
| false | null |
t3_1kas6bv
|
/r/LocalLLaMA/comments/1kas6bv/is_qwen_3_the_tiny_tango/
| false | false |
self
| 1 | null |
Building a Gen AI Lab for Students - Need Your Expert Advice!
| 1 |
Hi everyone,
I'm planning the hardware for a Gen AI lab for my students and would appreciate your expert opinions on these PC builds:
* **Instructor PC (High-Tier):** [PCPartPicker - https://pcpartpicker.com/list/Qjh8C8](https://pcpartpicker.com/list/Qjh8C8)
* **Student PCs (Multiple, Low-Tier):** [PCPartPicker - https://pcpartpicker.com/list/Bvksxg](https://pcpartpicker.com/list/Bvksxg)
Looking for advice on:
* Component compatibility and performance.
* Value optimisation for the student builds.
* Suggestions for improvements or alternatives.
Any input is greatly appreciated!
| 2025-04-29T16:27:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kas6lz/building_a_gen_ai_lab_for_students_need_your/
|
_tzman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kas6lz
| false | null |
t3_1kas6lz
|
/r/LocalLLaMA/comments/1kas6lz/building_a_gen_ai_lab_for_students_need_your/
| false | false |
self
| 1 | null |
cobalt-exp-beta-v8 giving very good answers on lmarena
| 3 |
Any thoughts which chatbot that is?
| 2025-04-29T16:41:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kasjb4/cobaltexpbetav8_giving_very_good_answers_on/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kasjb4
| false | null |
t3_1kasjb4
|
/r/LocalLLaMA/comments/1kasjb4/cobaltexpbetav8_giving_very_good_answers_on/
| false | false |
self
| 3 | null |
In Qwen 3 you can use /no_think in your prompt to skip the reasoning step
| 17 | 2025-04-29T16:41:57 |
onil_gova
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kasjge
| false | null |
t3_1kasjge
|
/r/LocalLLaMA/comments/1kasjge/in_qwen_3_you_can_use_no_think_in_your_prompt_to/
| false | false | 17 |
{'enabled': True, 'images': [{'id': 'SfNe8azPZ-osH7QHkGK61RdSIKyCHz4WZxnRfgfAMzc', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?width=108&crop=smart&auto=webp&s=66b74a19ffa85ad71c662a2be77d0a538b980a56', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?width=216&crop=smart&auto=webp&s=b5d614bd7a3f1591c367625c08ad6d7a5e6d27f2', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?width=320&crop=smart&auto=webp&s=db589ee199758d129bb0d8864391fa8466756f53', 'width': 320}, {'height': 364, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?width=640&crop=smart&auto=webp&s=00a972096fd4227512b26a31ea12d4ab28172f76', 'width': 640}, {'height': 546, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?width=960&crop=smart&auto=webp&s=11ee14cdd20f3f441a0e19d70427799ca08e853b', 'width': 960}, {'height': 614, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?width=1080&crop=smart&auto=webp&s=bb32384d03b47878df8fcf9e0af2ea612e8572e4', 'width': 1080}], 'source': {'height': 1056, 'url': 'https://preview.redd.it/93nvv0vmysxe1.jpeg?auto=webp&s=788942cce5688294b6a03ccc800a2b325c125085', 'width': 1855}, 'variants': {}}]}
|
|||
We trained transformers on edge hardware — no GPU clusters required. Here’s Lodestone.
| 1 |
[removed]
| 2025-04-29T16:44:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaslqp/we_trained_transformers_on_edge_hardware_no_gpu/
|
PraetorDefenseAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaslqp
| false | null |
t3_1kaslqp
|
/r/LocalLLaMA/comments/1kaslqp/we_trained_transformers_on_edge_hardware_no_gpu/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'uzhr1IMWhnzrs13x1Lddb1xEYkcBF8Uk8nPRETICttA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?width=108&crop=smart&auto=webp&s=b3b3d3640dcb60c73fc9990b5a64b36a80a69eb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?width=216&crop=smart&auto=webp&s=45403db48bb624ced22be1d22c6ded5d23bfd4b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?width=320&crop=smart&auto=webp&s=93ffdf93273815e874e115afa946952e6bba4891', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?width=640&crop=smart&auto=webp&s=eece92b21a470fc56ba575cfec69a702ea62bdc5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?width=960&crop=smart&auto=webp&s=f2231f21b84cf96a4d29fce4df7b5d040f6f72d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?width=1080&crop=smart&auto=webp&s=8636edf816c5cb088010ae9b216b7289c684c233', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uqten8w0Ttjd7rL2OIHxzx4QUmergXmq4outwF6oIKw.jpg?auto=webp&s=da6902899eafb4881ce0daa006563f48191e3847', 'width': 1200}, 'variants': {}}]}
|
LlamaCon
| 106 | 2025-04-29T16:51:20 |
siddhantparadox
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kasrnx
| false | null |
t3_1kasrnx
|
/r/LocalLLaMA/comments/1kasrnx/llamacon/
| false | false | 106 |
{'enabled': True, 'images': [{'id': '8stdrAlDRFofgkfWtwPzkuPuByzQ9CAtk_Sssb5Fs2I', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?width=108&crop=smart&auto=webp&s=f81a8c3111cbfcf50d4043045f046adc44e9b683', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?width=216&crop=smart&auto=webp&s=1ab43b8cee5d18b079b4e1cb8091662734507c9d', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?width=320&crop=smart&auto=webp&s=9030faef10b020ab6a2b6211216a2909608a9af5', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?width=640&crop=smart&auto=webp&s=af69c60a53a3daa10b58560398ed565f6d435dad', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?width=960&crop=smart&auto=webp&s=3682fdb24cc804e715f38f2b10dbb6d2cc89f29a', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?width=1080&crop=smart&auto=webp&s=da38bc7882506abeeb202fe52fdf5f2a63e1dbf3', 'width': 1080}], 'source': {'height': 3000, 'url': 'https://preview.redd.it/jyewwk4b0txe1.jpeg?auto=webp&s=9ba8be78d44c5f321c6181b19b1d208242828e96', 'width': 4000}, 'variants': {}}]}
|
|||
I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up
| 1 |
[removed]
| 2025-04-29T16:51:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kasrzh/i_benchmarked_openai_memory_vs_langmem_vs_letta/
|
staranjeet
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kasrzh
| false | null |
t3_1kasrzh
|
/r/LocalLLaMA/comments/1kasrzh/i_benchmarked_openai_memory_vs_langmem_vs_letta/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'a0RZrE7GFt0DmG96zzPXqvq5hYViOc0VToNDs9kVOl8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=108&crop=smart&auto=webp&s=c72fb0a66471351f333efe72a4a2b95a1f1d4605', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=216&crop=smart&auto=webp&s=214dedf0eafe805259b789be11833dc7bd1141c1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=320&crop=smart&auto=webp&s=e9306a6f0438ea7e57c43c0ac36d35bec7abe86a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=640&crop=smart&auto=webp&s=059a7bfa5922517723e726099d97c7435b6248a1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=960&crop=smart&auto=webp&s=2d62e3b449eb7928689d7f8e28df2bcb7f03ad36', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=1080&crop=smart&auto=webp&s=e908fdd03b9a0fc7a39675425af9fa9059983e2a', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?auto=webp&s=822a7286748aa2c9e4c43cf646ef88b0c1c4b1c3', 'width': 2400}, 'variants': {}}]}
|
Qwen 30B MOE is near top tier in quality and top tier in speed! 6 Model test - 27b-70b models M1 Max 64gb
| 2 |
System: Mac M1 Studio Max, 64gb - Upgraded GPU.
Goal: Test 27b-70b models currently considered near or the best
Questions: 3 of 8 questions complete so far
Setup: Ollama + Open Web Ui / All models downloaded today with exception of L3 70b finetune / All models from Unsloth on HF as well and Q8 with exception of 70b which are Q4 and again the L3 70b finetune.
The DM finetune is the Dungeon Master variant I saw over perform on some benchmarks.
Question 1 was about potty training a child and making a song for it.
I graded based on whether the song made sense, whether there were words that didn't fit the rhythm or seem appropriate, etc.
All the 70b models > 30B MOE Qwen / 27b Gemma3 > Qwen3 32b / Deepseek R1 Q32b.
The 70b models were fairly good, slightly better than 30b MOE / Gemma3, but not by much. The drop from those to Q3 32b and R1 is due to both having very odd word choices or wording that didn't work.
The 2nd question was to write an outline for a possible bestselling book. I specifically asked for the first 3k words of the book.
Again it went similar with these ranks:
All the 70b models > 30B MOE Qwen / 27b Gemma3 > Qwen3 32b / Deepseek R1 Q32b.
The 70b models all got 1500+ words into the start of the book, and from reading the outline and scanning the text for issues they seemed alright. Gemma3 and Q3 MOE both got 1200+ words and performed similarly. Q3 32b and DS R1 both had issues again: R1 wrote 700 words and then repeated the same 4 paragraphs for 9k words before I stopped it, and Q3 32b wrote a pretty bad story where I immediately caught an impossible plot point, and the main character seemed like a moron.
3rd question is personal use case, D&D campaign/material writing.
I need to dig more into it, as it's a long prompt with a lot of things to hit, such as theme, the format of how the world is outlined, and the start of a campaign (similar to a starter campaign book). I will have to do some grading, but I think it shows Q3 MOE doing better than I expected.
So in half of my tests so far (working on the rest right now), the 30B MOE performs almost on par with 70B models and on par with, or possibly better than, Gemma3 27b. It definitely seems better than the 32b Qwen 3, but I am hoping some fine-tunes will improve the 32b. I was going to test GLM, but I find it underperforms in my non-coding tests and is mostly similar to Gemma3 in everything else. I might do another round with GLM + QWQ + 1 more model later once I finish this round.
https://imgur.com/a/9ko6NtN
Not saying this is super scientific; I just did my best to make it a fair test for my own knowledge, and I thought I would share. Since Q3 30b MOE gets 40 t/s on my system compared to ~10 t/s or less for other models of that quality, it seems like a great model.
| 2025-04-29T16:58:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kasy3x/qwen_30b_moe_is_near_top_tier_in_quality_and_top/
|
Shouldhaveknown2015
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kasy3x
| false | null |
t3_1kasy3x
|
/r/LocalLLaMA/comments/1kasy3x/qwen_30b_moe_is_near_top_tier_in_quality_and_top/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'XG0zqpcpXuKjtF2KdE5u9DqrpBuNuR6TbpMZxiZQlPU', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?width=108&crop=smart&auto=webp&s=9741bd44481225b9b4e5641db9dd960a4214f6a9', 'width': 108}, {'height': 173, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?width=216&crop=smart&auto=webp&s=fb5b3b7cb00536248ad63a6b0542af7135a35004', 'width': 216}, {'height': 257, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?width=320&crop=smart&auto=webp&s=ce1a8f6dfa2ddb2151c61ca0e2326e57983ad38d', 'width': 320}, {'height': 514, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?width=640&crop=smart&auto=webp&s=27fa336837583050cce43c15b8d9d167f6939933', 'width': 640}, {'height': 771, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?width=960&crop=smart&auto=webp&s=d62701031fbdfbe5ab422fabab23ff024598bde2', 'width': 960}, {'height': 867, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?width=1080&crop=smart&auto=webp&s=38d3dfbc2f64fadd01ddedbc2781c3d6866ab913', 'width': 1080}], 'source': {'height': 1694, 'url': 'https://external-preview.redd.it/n-bbv2LeuFAToz8e7U13hcz_F2DGF77OiWH3D5yMYl4.jpg?auto=webp&s=b66023977338942f4459efddb8f0d7aa39123143', 'width': 2108}, 'variants': {}}]}
|
Privacy-safe (local / opensource backend) coding assistant?
| 1 |
[removed]
| 2025-04-29T16:59:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kasz4a/privacysafe_local_opensource_backend_coding/
|
pandudon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kasz4a
| false | null |
t3_1kasz4a
|
/r/LocalLLaMA/comments/1kasz4a/privacysafe_local_opensource_backend_coding/
| false | false |
self
| 1 | null |
How much models get smarter by time?
| 1 |
For example, Qwen3 4B is nearly at the same level as much larger models from a year ago.
What are the expectations for next year?
| 2025-04-29T17:05:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kat4g5/how_much_models_get_smarter_by_time/
|
Current-Rabbit-620
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kat4g5
| false | null |
t3_1kat4g5
|
/r/LocalLLaMA/comments/1kat4g5/how_much_models_get_smarter_by_time/
| false | false |
self
| 1 | null |
Qwen 3 30B-A3B running on a midrange gaming PC is virtually indistinguishable from GPT-4 (with caveats)
| 1 |
[removed]
| 2025-04-29T17:15:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1katdq1/qwen_3_30ba3b_running_on_a_midrange_gaming_pc_is/
|
atdrilismydad
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katdq1
| false | null |
t3_1katdq1
|
/r/LocalLLaMA/comments/1katdq1/qwen_3_30ba3b_running_on_a_midrange_gaming_pc_is/
| false | false | 1 | null |
|
How is the thinking budget of Gemini 2.5 flash and qwen 3 trained?
| 1 |
[removed]
| 2025-04-29T17:18:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1katgp0/how_is_the_thinking_budget_of_gemini_25_flash_and/
|
one-wandering-mind
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katgp0
| false | null |
t3_1katgp0
|
/r/LocalLLaMA/comments/1katgp0/how_is_the_thinking_budget_of_gemini_25_flash_and/
| false | false |
self
| 1 | null |
Can We Expect a 4B Model Next Year to Match Today’s 70B?
| 0 |
For example, Qwen3 4B is nearly at the same level as much larger models from a year ago.
What are the expectations for next year? How long will this trend continue?
| 2025-04-29T17:19:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kathen/can_we_expect_a_4b_model_next_year_to_match/
|
Current-Rabbit-620
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kathen
| false | null |
t3_1kathen
|
/r/LocalLLaMA/comments/1kathen/can_we_expect_a_4b_model_next_year_to_match/
| false | false |
self
| 0 | null |
Qwen 3 performance compared to Llama 3.3. 70B?
| 15 |
I'm curious to hear people's experiences who've used Llama 3.3 70B frequently and are now switching to Qwen 3, either Qwen3-30B-A3B or Qwen3-32B dense. Are they at the level that they can replace the 70B Llama chonker? That would effectively allow me to reduce my set up from 4x 3090 to 2x.
I looked at the Llama 3.3 model card but the benchmark results there are for different benchmarks than Qwen 3 so can't really compare those.
I'm not interested in thinking (using it for high volume data processing).
| 2025-04-29T17:24:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1katlc9/qwen_3_performance_compared_to_llama_33_70b/
|
MountainGoatAOE
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katlc9
| false | null |
t3_1katlc9
|
/r/LocalLLaMA/comments/1katlc9/qwen_3_performance_compared_to_llama_33_70b/
| false | false |
self
| 15 | null |
Qwen3:0.6B fast and smart!
| 7 |
This little LLM can understand functions and write documentation for them. It is powerful.
I tried a C++ function of around 200 lines. I used GPT-o1 as the judge, and it scored 75%!
| 2025-04-29T17:24:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1katlfj/qwen306b_fast_and_smart/
|
hairlessing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katlfj
| false | null |
t3_1katlfj
|
/r/LocalLLaMA/comments/1katlfj/qwen306b_fast_and_smart/
| false | false |
self
| 7 | null |
Proper Comparison Sizes for Qwen 3 MoE to Dense Models
| 8 |
According to the Geometric Mean Prediction of MoE Performance (https://www.reddit.com/r/LocalLLaMA/comments/1bqa96t/geometric_mean_prediction_of_moe_performance), the performance of Mixture of Experts (MoE) models can be approximated using the geometric mean of the total and active parameters, i.e., sqrt(total_params × active_params), when comparing to dense models.
For example, in the case of the Qwen3 235B-A22B model:
sqrt(235 × 22) ≈ 72
This suggests that its effective performance is roughly equivalent to that of a 72B dense model.
Similarly, for the 30B-A3B model:
sqrt(30 × 3) ≈ 9.5
which would place it on par with a 9.5B dense model in terms of effective performance.
From this perspective, both the 235B-A22B and 30B-A3B models demonstrate impressive efficiency and smart training strategies when compared to their dense counterparts. (Benchmark score and actual testing result)
The increased VRAM requirements remain a notable drawback for local LLM users.
Please feel free to point out any errors or misinterpretations. Thank you.
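The rule of thumb above can be sketched as a small helper, assuming the geometric-mean approximation from the linked post holds:

```python
import math

def effective_dense_size(total_params_b: float, active_params_b: float) -> float:
    """Approximate dense-equivalent size (in billions of parameters)
    for an MoE model via the geometric-mean rule of thumb:
    sqrt(total_params * active_params)."""
    return math.sqrt(total_params_b * active_params_b)

# Qwen3 235B-A22B -> roughly a 72B dense model
print(round(effective_dense_size(235, 22)))     # → 72
# Qwen3 30B-A3B -> roughly a 9.5B dense model
print(round(effective_dense_size(30, 3), 1))    # → 9.5
```

This is only a heuristic for rough comparisons, not a measured benchmark result.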
| 2025-04-29T17:27:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1katoag/proper_comparison_sizes_for_qwen_3_moe_to_dense/
|
ExcuseAccomplished97
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katoag
| false | null |
t3_1katoag
|
/r/LocalLLaMA/comments/1katoag/proper_comparison_sizes_for_qwen_3_moe_to_dense/
| false | false |
self
| 8 | null |
M3 ultra binned or unbinned ?
| 2 |
Is the $1500 price increase for the unbinned version really worth it?
| 2025-04-29T17:32:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1katt9v/m3_ultra_binned_or_unbinned/
|
No_Conversation9561
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katt9v
| false | null |
t3_1katt9v
|
/r/LocalLLaMA/comments/1katt9v/m3_ultra_binned_or_unbinned/
| false | false |
self
| 2 | null |
Rumor: Intel ARC GPU 24 GB of memory in June
| 16 |
[https://videocardz.com/newz/sparkle-confirms-arc-battlemage-gpu-with-24gb-memory-slated-for-may-june](https://videocardz.com/newz/sparkle-confirms-arc-battlemage-gpu-with-24gb-memory-slated-for-may-june)
| 2025-04-29T17:37:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1katx3x/rumor_intel_arc_gpu_24_gb_of_memory_in_june/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1katx3x
| false | null |
t3_1katx3x
|
/r/LocalLLaMA/comments/1katx3x/rumor_intel_arc_gpu_24_gb_of_memory_in_june/
| false | false |
self
| 16 |
{'enabled': False, 'images': [{'id': 'yTGKQmzlJPEcmY67lteToxEiwcHXEZhwqNGD8IMnznI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=108&crop=smart&auto=webp&s=6c079e79ea271fc0ce072d5b49871faa61ff18ea', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=216&crop=smart&auto=webp&s=49b773a4f29cc7ad7b26881a29719149abb3dd09', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=320&crop=smart&auto=webp&s=2bd4d0c6eb4016659d1a53bab134b19f8e92ca40', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=640&crop=smart&auto=webp&s=461a9ba85d2877f5c00bb8c11f93f1ceac11d893', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=960&crop=smart&auto=webp&s=065fe8c04c6ff13e832aab7196ab42c3f14ad9ea', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=1080&crop=smart&auto=webp&s=c4383cef0d92af251d52b59a038eb832aa607133', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?auto=webp&s=8068b3820c65b479ff627cf40f91bdce3cd88aec', 'width': 2500}, 'variants': {}}]}
|
No new models in LlamaCon announced
| 268 |
I guess it wasn’t good enough
| 2025-04-29T17:39:27 |
https://ai.meta.com/blog/llamacon-llama-news/
|
mehyay76
|
ai.meta.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1katz6u
| false | null |
t3_1katz6u
|
/r/LocalLLaMA/comments/1katz6u/no_new_models_in_llamacon_announced/
| false | false | 268 |
{'enabled': False, 'images': [{'id': '6A4AmBNDETc15Gg9dLCdy1ERxMHmcY1hWjrPvUMAPxg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=108&crop=smart&auto=webp&s=3ad48e882d2265a7734b8e3217752ff0aa8738e9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=216&crop=smart&auto=webp&s=ca6fa3026a6944a4899012c7789c3a51850d64be', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=320&crop=smart&auto=webp&s=41aad5683ad5d91c934ea1c1e90bba3fdc70b0bc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=640&crop=smart&auto=webp&s=353ac7ecc5aa40634908575f505f14f208cf6093', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=960&crop=smart&auto=webp&s=32931fea6ef1aeef39eff1990e126f3352b46d11', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=1080&crop=smart&auto=webp&s=5a30ff313a92258a1e16ce50a681da82ceb54fdb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?auto=webp&s=2a43523d6cd377e743814f16e2d184ca9d601f8f', 'width': 1920}, 'variants': {}}]}
|
|
Qwen3 vs Gemma 3
| 228 |
After playing around with Qwen3, I’ve got mixed feelings. It’s actually pretty solid in math, coding, and reasoning. The hybrid reasoning approach is impressive — it really shines in that area.
But compared to Gemma, there are a few things that feel lacking:
- **Multilingual support** isn’t great. Gemma 3 12B does better than Qwen3 14B, 30B MoE, and maybe even the 32B dense model in my language.
- **Factual knowledge** is really weak — even worse than LLaMA 3.1 8B in some cases. Even the biggest Qwen3 models seem to struggle with facts.
- **No vision capabilities.**
Ever since Qwen 2.5, I was hoping for better factual accuracy and multilingual capabilities, but unfortunately, it still falls short. That said, it’s a solid step forward overall. The range of sizes and especially the 30B MoE for speed are great. Also, the hybrid reasoning is genuinely impressive.
**What’s your experience been like?**
| 2025-04-29T17:43:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kau30f/qwen3_vs_gemma_3/
|
Sadman782
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kau30f
| false | null |
t3_1kau30f
|
/r/LocalLLaMA/comments/1kau30f/qwen3_vs_gemma_3/
| false | false |
self
| 228 |
{'enabled': False, 'images': [{'id': 'AZp35yz_V4VtuibwE_8jzrenzMzkdIvHKwLWODdTpXs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?width=108&crop=smart&auto=webp&s=2dfef35e8b640884ae949480f6486c0ad548cb89', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?width=216&crop=smart&auto=webp&s=745100d146657f049da013b456be8d2ea57a73c3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?width=320&crop=smart&auto=webp&s=f1531690c6196d5df924be2f7ab419d22903c1bd', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?width=640&crop=smart&auto=webp&s=e365c82d3cb5901f9cc0b29f6dcadf5a1c4f367e', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?width=960&crop=smart&auto=webp&s=746bdbde25834cc422a554a471e08759ed650cf4', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?width=1080&crop=smart&auto=webp&s=eeed15a44233505a4ab817f87664aad8e2dd47b7', 'width': 1080}], 'source': {'height': 668, 'url': 'https://external-preview.redd.it/2lmGrEsWuW3nCpTSZP05rkfUhhv47hD-ICav-ZrkZSU.jpg?auto=webp&s=a476af18a48199ccfba869f0e1589ec62eaa73eb', 'width': 1191}, 'variants': {}}]}
|
What I got out of LlamaCon
| 1 | 2025-04-29T17:45:48 |
Poolunion1
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kau4u0
| false | null |
t3_1kau4u0
|
/r/LocalLLaMA/comments/1kau4u0/what_i_got_out_of_llamacon/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'RyuEw0K11FGF34i5NAez968HglAauotw0AgeD355ZAo', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/vrpoc6sn9txe1.jpeg?width=108&crop=smart&auto=webp&s=63818b3a7f03f743ff9ce0423e741b29fe8f3569', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/vrpoc6sn9txe1.jpeg?width=216&crop=smart&auto=webp&s=219db7a1c8298ae73b4a22720def67a034676241', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/vrpoc6sn9txe1.jpeg?width=320&crop=smart&auto=webp&s=3621f7e429c2b9222d1316608a5b74f0756d5190', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/vrpoc6sn9txe1.jpeg?width=640&crop=smart&auto=webp&s=9321947d7dad156b1ecadbde7833fa61f61ec143', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/vrpoc6sn9txe1.jpeg?width=960&crop=smart&auto=webp&s=45b8eaab16a436747b154591dee4d6df7e6b8988', 'width': 960}], 'source': {'height': 500, 'url': 'https://preview.redd.it/vrpoc6sn9txe1.jpeg?auto=webp&s=9d8ecece171ab90796a8b186e9dbe437a439d023', 'width': 1000}, 'variants': {}}]}
|
|||
Benchmarks for prompted VLM Object Detection / Bounding Boxes
| 3 |
Curious if there are any benchmarks that evaluate a models ability to detect and segment/bounding box select an object in a given image. I checked [OpenVLM](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard) but its not clear which benchmark to look at.
I know that Florence-2 and Moondream support object localization but unsure if theres a giant list of performance metrics anywhere. Florence-2 and moondream is a big hit or miss in my experience.
While yolo is more performant its not quite smart enough for what I need it for.
| 2025-04-29T17:54:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kauchr/benchmarks_for_prompted_vlm_object_detection/
|
mnt_brain
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kauchr
| false | null |
t3_1kauchr
|
/r/LocalLLaMA/comments/1kauchr/benchmarks_for_prompted_vlm_object_detection/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'z3K-kdYaewFHYsIgnvLFsaxAVEzAgNJnZ7uWC75FdMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=108&crop=smart&auto=webp&s=3a8e4468ae780492901deab902fa214e9fdca60c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=216&crop=smart&auto=webp&s=65388fc2d79ca0ead5aa6c3fc46274ba43bf01df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=320&crop=smart&auto=webp&s=8d3692e0f5904e3f3dd5b91d575e67eb36821d36', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=640&crop=smart&auto=webp&s=ac86bcfb39d6b23c22c44431f9b6adb1602bf7b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=960&crop=smart&auto=webp&s=d3355b932e297d7aa4f49a67d37411a03abbf624', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=1080&crop=smart&auto=webp&s=abf189fbc834226a37ac790d79f15a9921e85254', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?auto=webp&s=355fe3c9d80bca39bcfe9aa0ff2c6e7dc0e4effb', 'width': 1200}, 'variants': {}}]}
|
I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up
| 1 |
[removed]
| 2025-04-29T18:18:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kauxq1/i_benchmarked_openai_memory_vs_langmem_vs_letta/
|
anmolbaranwal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kauxq1
| false | null |
t3_1kauxq1
|
/r/LocalLLaMA/comments/1kauxq1/i_benchmarked_openai_memory_vs_langmem_vs_letta/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'a0RZrE7GFt0DmG96zzPXqvq5hYViOc0VToNDs9kVOl8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=108&crop=smart&auto=webp&s=c72fb0a66471351f333efe72a4a2b95a1f1d4605', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=216&crop=smart&auto=webp&s=214dedf0eafe805259b789be11833dc7bd1141c1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=320&crop=smart&auto=webp&s=e9306a6f0438ea7e57c43c0ac36d35bec7abe86a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=640&crop=smart&auto=webp&s=059a7bfa5922517723e726099d97c7435b6248a1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=960&crop=smart&auto=webp&s=2d62e3b449eb7928689d7f8e28df2bcb7f03ad36', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?width=1080&crop=smart&auto=webp&s=e908fdd03b9a0fc7a39675425af9fa9059983e2a', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/GMSO7W8n8TzbagObVlLjD6Lfh0lgU5aGETO6YedVnFE.jpg?auto=webp&s=822a7286748aa2c9e4c43cf646ef88b0c1c4b1c3', 'width': 2400}, 'variants': {}}]}
|
Complete noob question
| 1 |
I have a 12gb Arc B580. I want to run models on it just to mess around and learn. My ultimate goal (in the intermediate term) is to get it working with my Home Assistant setup. I also have a Sapphire RX 570 8gb and a GTX 1060 6gb. Would it be beneficial and/or possible to add the AMD and Nvidia cards to the Intel card and run a single model across platforms? Would the two older cards have enough vram and speed by themselves to make a usable system for my home needs, eventually bypassing Google and Alexa?
Note: I use the B580 for gaming, so it won't be able to be fully dedicated to an AI setup when I eventually dive into the deep end with a dedicated AI box.
| 2025-04-29T18:18:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kauxvk/complete_noob_question/
|
Universal_Cognition
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kauxvk
| false | null |
t3_1kauxvk
|
/r/LocalLLaMA/comments/1kauxvk/complete_noob_question/
| false | false |
self
| 1 | null |
Ali Ghodsi just featured LocalLLaMA subreddit community at LlamaCon
| 1 |
[removed]
| 2025-04-29T18:20:54 |
vineethsai
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kav02n
| false | null |
t3_1kav02n
|
/r/LocalLLaMA/comments/1kav02n/ali_ghodsi_just_featured_localllama_subreddit/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'BzjqBjWU1smP0SIt4RJVjObM0NYU4givO5OScUVAkrs', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?width=108&crop=smart&auto=webp&s=5db816f12e7088b608c5e8a2b2bf827aec47a62e', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?width=216&crop=smart&auto=webp&s=79f223bf847f5468c30f45df1076bcb2099f3f3c', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?width=320&crop=smart&auto=webp&s=a9fefc349feeafbc3a9f791b233f4a3c1f1da3ee', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?width=640&crop=smart&auto=webp&s=afc837350428ca54146108135ef22cd0f743acd0', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?width=960&crop=smart&auto=webp&s=d6dcdcb8ea932b6cc03d91fbc4e7857eab276359', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?width=1080&crop=smart&auto=webp&s=b1f11e2f5ff06eeff00848a9dfeea1c4ef4610ca', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/nggjpcy9gtxe1.jpeg?auto=webp&s=990b3bb3e99a0bf69265e70b324c34edfead76c9', 'width': 4032}, 'variants': {}}]}
|
||
How do you uncensor qwen3?
| 6 |
Seems to be very censored
| 2025-04-29T18:38:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavfkv/how_do_you_uncensor_qwen3/
|
InsideYork
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavfkv
| false | null |
t3_1kavfkv
|
/r/LocalLLaMA/comments/1kavfkv/how_do_you_uncensor_qwen3/
| false | false |
self
| 6 | null |
Llama4 Tool Calling + Reasoning Tutorial via Llama API
| 0 |
Wanted to share our small tutorial on how to do tool-calling + reasoning on models using a simple DSL for prompts (baml) : [https://www.boundaryml.com/blog/llama-api-tool-calling](https://www.boundaryml.com/blog/llama-api-tool-calling)
Note that the llama4 docs specify you have to add <function> for doing tool-calling, but they still leave the parsing to you. In this demo you don't need any special tokens nor parsing (since we wrote one for you that fixes common json mistakes). Happy to answer any questions.
P.S. we havent tested all models, but Qwen should work nicely as well.
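Since the parsing is left to you, a minimal sketch of extracting such a tool call might look like the following. Note the `<function=name>{json}</function>` tag format here is an assumption for illustration, not the official Llama 4 template:

```python
import json
import re

# Hypothetical tag format for a model's tool-call output; the exact
# template varies by model and is an assumption here.
CALL_RE = re.compile(r"<function=(\w+)>(.*?)</function>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    """Pull (name, arguments) pairs out of raw model text."""
    calls = []
    for name, raw_args in CALL_RE.findall(text):
        try:
            args = json.loads(raw_args)
        except json.JSONDecodeError:
            args = None  # malformed JSON is common; a robust parser would repair it
        calls.append({"name": name, "arguments": args})
    return calls

output = 'Sure. <function=get_weather>{"city": "Paris"}</function>'
print(parse_tool_calls(output))
# → [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```

The point of the tutorial is that a DSL like BAML handles this extraction and JSON repair for you instead of hand-rolling regexes like this.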
| 2025-04-29T18:43:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavkbr/llama4_tool_calling_reasoning_tutorial_via_llama/
|
fluxwave
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavkbr
| false | null |
t3_1kavkbr
|
/r/LocalLLaMA/comments/1kavkbr/llama4_tool_calling_reasoning_tutorial_via_llama/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'EYjuCy-mlwO5ga5EPJEO3vBQhwOcG15qksW1TAIV6-Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?width=108&crop=smart&auto=webp&s=5fa071728ad074da88b980b70bf6ed4b46f6eff6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?width=216&crop=smart&auto=webp&s=e79f401bfd0a167a283363729737a27311517b85', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?width=320&crop=smart&auto=webp&s=4ad390be9e2eecd11d329f57023d67b50ee2124d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?width=640&crop=smart&auto=webp&s=590cb4ba285e4cc6f228fa219b7f60a8205058e7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?width=960&crop=smart&auto=webp&s=2b1ced2f4841311746dfeebd752f840a78206acd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?width=1080&crop=smart&auto=webp&s=e638f9254383b52c2cf50cbaa4c170b3b2958aa0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3sFxd_8kGFTrCyTMyHDDpv6cFiRxc2mj44qAx0M806o.jpg?auto=webp&s=dc8bbc3127abb6ba86be5b8b58c1dc14f1dd023e', 'width': 1200}, 'variants': {}}]}
|