Dataset columns: title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
😲 M3Max vs 2xRTX3090 with Qwen3 MoE Against Various Prompt Sizes!
| 0 |
I didn't expect this. Here is a surprising comparison between MLX 8-bit and GGUF Q8_0 using Qwen3-30B-A3B, running on an M3 Max 64GB as well as 2x RTX 3090 with llama.cpp. Notice the difference in prompt processing speed.
In my previous experience, speed between MLX and Llama.cpp was pretty much neck and neck, with a slight edge to MLX. Because of that, I've been mainly using Ollama for convenience.
Recently, I asked about prompt processing speed, and an MLX developer mentioned that prompt speed was significantly optimized starting with MLX 0.25.0.
I pulled the latest commits from GitHub for both engines as of this morning.
* MLX-LM 0.24.0 with MLX 0.25.1.dev20250428+99b986885
* Llama.cpp build 5215 (5f5e39e1), with all layers loaded to GPU and flash attention enabled.
| Config | Prompt Tokens | Prompt Processing Speed (tok/s) | Generated Tokens | Token Generation Speed (tok/s) | Total Execution Time |
| ----- | --- | --- | --- | --- | --- |
| 2x3090 | 680 | 794.85 | 1087 | 82.68 | 23s |
| MLX | 681 | 1160.636 | 939 | 68.016 | 24s |
| LCP | 680 | 320.66 | 1255 | 57.26 | 38s |
| 2x3090 | 773 | 831.87 | 1071 | 82.63 | 23s |
| MLX | 774 | 1193.223 | 1095 | 67.620 | 25s |
| LCP | 773 | 469.05 | 1165 | 56.04 | 24s |
| 2x3090 | 1164 | 868.81 | 1025 | 81.97 | 23s |
| MLX | 1165 | 1276.406 | 1194 | 66.135 | 27s |
| LCP | 1164 | 395.88 | 939 | 55.61 | 22s |
| 2x3090 | 1497 | 957.58 | 1254 | 81.97 | 26s |
| MLX | 1498 | 1309.557 | 1373 | 64.622 | 31s |
| LCP | 1497 | 467.97 | 1061 | 55.22 | 24s |
| 2x3090 | 2177 | 938.00 | 1157 | 81.17 | 26s |
| MLX | 2178 | 1336.514 | 1395 | 62.485 | 33s |
| LCP | 2177 | 420.58 | 1422 | 53.66 | 34s |
| 2x3090 | 3253 | 967.21 | 1311 | 79.69 | 29s |
| MLX | 3254 | 1301.808 | 1241 | 59.783 | 32s |
| LCP | 3253 | 399.03 | 1657 | 51.86 | 42s |
| 2x3090 | 4006 | 1000.83 | 1169 | 78.65 | 28s |
| MLX | 4007 | 1267.555 | 1522 | 60.945 | 37s |
| LCP | 4006 | 442.46 | 1252 | 51.15 | 36s |
| 2x3090 | 6075 | 1012.06 | 1696 | 75.57 | 38s |
| MLX | 6076 | 1188.697 | 1684 | 57.093 | 44s |
| LCP | 6075 | 424.56 | 1446 | 48.41 | 46s |
| 2x3090 | 8049 | 999.02 | 1354 | 73.20 | 36s |
| MLX | 8050 | 1105.783 | 1263 | 54.186 | 39s |
| LCP | 8049 | 407.96 | 1705 | 46.13 | 59s |
| 2x3090 | 12005 | 975.59 | 1709 | 67.87 | 47s |
| MLX | 12006 | 966.065 | 1961 | 48.330 | 1m2s |
| LCP | 12005 | 356.43 | 1503 | 42.43 | 1m11s |
| 2x3090 | 16058 | 941.14 | 1667 | 65.46 | 52s |
| MLX | 16059 | 853.156 | 1973 | 43.580 | 1m18s |
| LCP | 16058 | 332.21 | 1285 | 39.38 | 1m23s |
| 2x3090 | 24035 | 888.41 | 1556 | 60.06 | 1m3s |
| MLX | 24036 | 691.141 | 1592 | 34.724 | 1m30s |
| LCP | 24035 | 296.13 | 1666 | 33.78 | 2m13s |
| 2x3090 | 32066 | 842.65 | 1060 | 55.16 | 1m7s |
| MLX | 32067 | 570.459 | 1088 | 29.289 | 1m43s |
| LCP | 32066 | 257.69 | 1643 | 29.76 | 3m2s |
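For anyone who wants to sanity-check numbers like these on their own hardware, here's a rough sketch that times a single request against any OpenAI-compatible endpoint (llama-server or mlx_lm.server); the URL, model name, and prompt are placeholders, and it only reports overall wall-clock throughput, since the prompt-processing vs. generation split has to be read from the server's own timing output:
```python
# Rough timing sketch against an OpenAI-compatible endpoint (placeholder URL/model).
# Reports overall wall-clock throughput; the PP/TG split comes from the server logs.
import time
import requests

URL = "http://localhost:8080/v1/chat/completions"  # llama-server / mlx_lm.server
payload = {
    "model": "qwen3-30b-a3b",                       # placeholder model name
    "messages": [{"role": "user", "content": "Summarize the history of MLX."}],
    "max_tokens": 1024,
}

start = time.time()
resp = requests.post(URL, json=payload, timeout=600).json()
elapsed = time.time() - start

completion_tokens = resp.get("usage", {}).get("completion_tokens", 0)
print(f"{completion_tokens} tokens in {elapsed:.1f}s "
      f"({completion_tokens / elapsed:.1f} tok/s overall)")
```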
| 2025-04-29T18:45:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavlkz/m3max_vs_2xrtx3090_with_qwen3_moe_against_various/
|
chibop1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavlkz
| false | null |
t3_1kavlkz
|
/r/LocalLLaMA/comments/1kavlkz/m3max_vs_2xrtx3090_with_qwen3_moe_against_various/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'SrMBNhHR5cYttf3jJgmrTLTWWG0AUx5eAmsMIvDm2OY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?width=108&crop=smart&auto=webp&s=eb70e60b802df4c1afc82cb8e3c526e5a68579f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?width=216&crop=smart&auto=webp&s=702d906429e51b8b919c9a422c03c46305fd2927', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?width=320&crop=smart&auto=webp&s=b34486e24b0365d57598f1072ba1fc5116b5de0d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?width=640&crop=smart&auto=webp&s=6d4cdffcb1988123b3a8e1bdea1da884d5a28f4f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?width=960&crop=smart&auto=webp&s=dbce5fa81ad0291b1a3dbd80310fb61bfb8b5db2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?width=1080&crop=smart&auto=webp&s=1b55dd14067fce4f2f4a13bc0e3d66a055b12c37', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SpEDBFi6GGDSevz1ujS5YTozRnu54NUroPBO7xK2FXs.jpg?auto=webp&s=88ff12eaffe625ccd4e77f3abc694e86ff795ee9', 'width': 1200}, 'variants': {}}]}
|
M4 Pro (48GB) Qwen3-30b-a3b gguf vs mlx
| 8 |
At 4-bit quantization, the results for GGUF vs MLX:
Prompt: “what are you good at?”
GGUF: 48.62 tok/sec
MLX: 79.55 tok/sec
Am a happy camper today.
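For anyone wanting to reproduce the MLX side, here's a minimal sketch using the mlx_lm Python API; the mlx-community repo name below is an assumption, substitute whichever 4-bit MLX conversion you use:
```python
# Minimal MLX-LM sketch; verbose=True prints prompt and generation tok/sec.
from mlx_lm import load, generate

# Assumed 4-bit community conversion; swap in the MLX repo you actually use.
model, tokenizer = load("mlx-community/Qwen3-30B-A3B-4bit")
generate(model, tokenizer, prompt="what are you good at?", max_tokens=256, verbose=True)
```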
| 2025-04-29T18:51:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavr8r/m4_pro_48gb_qwen330ba3b_gguf_vs_mlx/
|
KittyPigeon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavr8r
| false | null |
t3_1kavr8r
|
/r/LocalLLaMA/comments/1kavr8r/m4_pro_48gb_qwen330ba3b_gguf_vs_mlx/
| false | false |
self
| 8 | null |
Benchmarking AI Agent Memory Providers for Long-Term Memory
| 49 |
We’ve been exploring different memory systems for managing long, multi-turn conversations in AI agents, focusing on key aspects like:
* **Factual consistency** over extended dialogues
* **Low retrieval latency**
* **Token footprint efficiency** for cost-effectiveness
To assess their performance, I used the LOCOMO benchmark, which includes tests for single-hop, multi-hop, temporal, and open-domain questions. Here's what I found:
# Factual Consistency and Reasoning:
* **OpenAI Memory**:
* Strong for simple fact retrieval (single-hop: J = 63.79) but weaker for multi-hop reasoning (J = 42.92).
* **LangMem**:
* Good for straightforward lookups (single-hop: J = 62.23) but struggles with multi-hop (J = 47.92).
* **Letta (MemGPT)**:
* Lower overall performance (single-hop F1 = 26.65, multi-hop F1 = 9.15). Better suited for shorter contexts.
* **Mem0**:
* Best scores on both single-hop (J = 67.13) and multi-hop reasoning (J = 51.15). It also performs well on temporal reasoning (J = 55.51).
# Latency:
* **LangMem**:
* Retrieval latency can be slow (p95 latency ~60s).
* **OpenAI Memory**:
* Fast retrieval (p95 ~0.889s), though it integrates extracted memories rather than performing separate retrievals.
* **Mem0**:
* Consistently low retrieval latency (p95 ~1.44s), even with long conversation histories.
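For reference, the p95 figures above are 95th-percentile latencies, i.e. 95% of retrieval calls finish faster than that value; a minimal sketch of computing one from raw measurements (the sample values are placeholders):
```python
# p95 = the latency that 95% of retrieval calls finish under.
import numpy as np

latencies_s = [0.42, 0.51, 0.38, 1.90, 0.47, 0.55, 1.44]  # placeholder measurements, in seconds
print(f"p95 latency: {np.percentile(latencies_s, 95):.2f}s")
```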
# Token Footprint:
* **Mem0**:
* Efficient, averaging ~7K tokens per conversation.
* **Mem0 (Graph Variant)**:
* Slightly higher token usage (~14K tokens), but provides improved temporal and relational reasoning.
# Key Takeaways:
* **Full-context approaches** (feeding entire conversation history) deliver the highest accuracy, but come with high latency (~17s p95).
* **OpenAI Memory** is suitable for shorter-term memory needs but may struggle with deep reasoning or granular control.
* **LangMem** offers an open-source alternative if you're willing to trade off speed for flexibility.
* **Mem0** strikes a balance for longer conversations, offering good factual consistency, low latency, and cost-efficient token usage.
**For those also testing memory systems for AI agents:**
* Do you prioritize accuracy, speed, or token efficiency in your use case?
* Have you found any hybrid approaches (e.g., selective memory consolidation) that perform better?
I’d be happy to share more detailed metrics (F1, BLEU, J-scores) if anyone is interested!
**Resources:**
* [Full Blog Comparison](https://mem0.ai/blog/ai-agent-memory-benchmark/)
* [Research Paper (arXiv)](https://arxiv.org/abs/2504.19413)
| 2025-04-29T18:54:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavtwr/benchmarking_ai_agent_memory_providers_for/
|
deshrajdry
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavtwr
| false | null |
t3_1kavtwr
|
/r/LocalLLaMA/comments/1kavtwr/benchmarking_ai_agent_memory_providers_for/
| false | false |
self
| 49 |
{'enabled': False, 'images': [{'id': '-hzYqQK2NlmUNCzS527iKajXs-i3shyuCdq1Y06LQV0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?width=108&crop=smart&auto=webp&s=e9530a57ea52b7f516cc983957de2a3592cbd515', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?width=216&crop=smart&auto=webp&s=4d2c1e96d63653ba2756b3d2a039c722028dee64', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?width=320&crop=smart&auto=webp&s=e1502bcb9a6bf1e61a72f1ccf7ab5fe488e975f1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?width=640&crop=smart&auto=webp&s=6d997493b43caa5f993fbdf775b9cc3c22777f05', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?width=960&crop=smart&auto=webp&s=db48f610157704e9594b9498b0d3e2cf6841c062', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?width=1080&crop=smart&auto=webp&s=fd85f94185bd8033c47a79626b1f89cc03861490', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/yqSA3DjVUpzdYXOGH636y124GJmIphrm10bcnkwBYGg.jpg?auto=webp&s=bd319824825f07579cea0a0c006c2087e5d1d7fc', 'width': 1200}, 'variants': {}}]}
|
Qwen3-235B-A22B is now available for free on HuggingChat!
| 114 |
Hi everyone!
We wanted to make sure this model was available as soon as possible to try out: The benchmarks are super impressive but nothing beats the community vibe checks!
The inference speed is really impressive and to me this is looking really good. You can control the thinking mode by appending `/think` or `/nothink` to your query. We might build a UI toggle for it directly if you think that would be handy?
Let us know if it works well for you and if you have any feedback! Always looking to hear what models people would like to see being added.
| 2025-04-29T18:59:18 |
https://hf.co/chat/models/Qwen/Qwen3-235B-A22B
|
SensitiveCranberry
|
hf.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavxh0
| false | null |
t3_1kavxh0
|
/r/LocalLLaMA/comments/1kavxh0/qwen3235ba22b_is_now_available_for_free_on/
| false | false | 114 |
{'enabled': False, 'images': [{'id': 'IAiRvTkllYbZ6pLEfA8LHAu1AW3jBoXRE_du5pRxa_s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?width=108&crop=smart&auto=webp&s=7a6d6f5336700b95ced5c43407635073a290c809', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?width=216&crop=smart&auto=webp&s=a608958e1faac8631634c0b246ce2f8b65d53d17', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?width=320&crop=smart&auto=webp&s=07028e32c57fa89292edd682e387ebb3a6422aab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?width=640&crop=smart&auto=webp&s=45a0b4ecdd0a6260dccadaab474c63682477e4b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?width=960&crop=smart&auto=webp&s=191ada0ace409e17005ee66b9370bc8e94d305a7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?width=1080&crop=smart&auto=webp&s=ca264c20b16278cfcfa728964f211489ab10cfaf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gH4bW8N8gprzvc3iqJQVDrJQyV_d3RdXRxLHSVvGfH0.jpg?auto=webp&s=99d951033aef5c0093a447917a6cbe22ef21ae04', 'width': 1200}, 'variants': {}}]}
|
|
Coding with Qwen makes me feel like I'm Roo, and he's the idiot
| 1 |
[removed]
| 2025-04-29T18:59:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavxok/coding_with_qwen_makes_me_feel_like_im_roo_and/
|
Worried-Signal-2992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavxok
| false | null |
t3_1kavxok
|
/r/LocalLLaMA/comments/1kavxok/coding_with_qwen_makes_me_feel_like_im_roo_and/
| false | false |
self
| 1 | null |
Fastest multimodal and uncensored model for 20GB vram GPU?
| 2 |
Hi,
What would be the fastest multimodal model that I can run on a RTX 4000 SFF Ada Generation 20GB gpu?
The model should be able to process potentially toxic memes + a prompt, give a detailed description of them and do OCR + maybe some more specific object recognition stuff. I'd also like it to return structured JSON.
I'm currently running `pixtral-12b` with the Transformers library and Outlines for the JSON output and liking the results, but it's so slow ("slow as thick shit through a funnel", my dad would say...). Running it async gives an out-of-memory error. I need to process thousands of images.
What would be faster alternatives?
| 2025-04-29T19:00:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavya5/fastest_multimodal_and_uncensored_model_for_20gb/
|
David_Crynge
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavya5
| false | null |
t3_1kavya5
|
/r/LocalLLaMA/comments/1kavya5/fastest_multimodal_and_uncensored_model_for_20gb/
| false | false |
self
| 2 | null |
Coding of weather cards: Qwen3-32B 8-bit (MLX) vs GLM-4-32B Q8 (0414)
| 1 |
[removed]
| 2025-04-29T19:01:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kavzar/coding_of_wearher_cards_qwen332b_8bit_mlx_vs/
|
Gregory-Wolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kavzar
| false | null |
t3_1kavzar
|
/r/LocalLLaMA/comments/1kavzar/coding_of_wearher_cards_qwen332b_8bit_mlx_vs/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'gAoNoPyD_UVgFeu80LoSA_vdnrzd0fA0ZbzOkySB3_M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=108&crop=smart&auto=webp&s=91c9077630ce9fa358a40cc9a7ef5ab0e2be695b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=216&crop=smart&auto=webp&s=f009ed8dbddd4a2fdbceaaa50f6ce17efdf27a8a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=320&crop=smart&auto=webp&s=d6f864826db5217201d6b78b07e6beb037508d1b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=640&crop=smart&auto=webp&s=1370377ab7f9f1eac03a33523352b10ef14482e6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=960&crop=smart&auto=webp&s=83b5130b46d3a7925ab5c4d967aff0de38b2fe1c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=1080&crop=smart&auto=webp&s=e9e36a398959e8bb4b2207ca7c968c8a6ff96a92', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?auto=webp&s=38efccb40b836737610b159fd2d4b468db9d9bcf', 'width': 1200}, 'variants': {}}]}
|
|
Qwen3-32B - Testing the limits of massive context sizes using a 107,142 tokens prompt
| 22 |
I've created the following prompt (based [on this comment](https://www.reddit.com/r/LocalLLaMA/comments/1ka6b9p/comment/mpm079o/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)) to test how well the quantized Qwen3-32B models do on large context sizes. So far none of the ones I've tested have successfully answered the question.
I'm curious to know if this is just the GGUFs from unsloth that aren't quite right or if this is a general issue with the Qwen3 models.
Massive prompt: [https://thireus.com/REDDIT/Qwen3_Runescape_Massive_Prompt.txt](https://thireus.com/REDDIT/Qwen3_Runescape_Massive_Prompt.txt)
* Qwen3-32B-128K-UD-Q8_K_XL.gguf would simply answer "Okay", and nothing else
* Qwen3-32B-UD-Q8_K_XL.gguf would answer nonsense, invent numbers, or repeat stuff (expected)
Note: I'm using the latest uploaded unsloth models, and also using the recommended settings from [https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune)
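Side note: the exact token count depends on the tokenizer. A quick sketch to check it with the Qwen3 tokenizer via transformers (the HF repo name and local file path are assumptions):
```python
# Count prompt tokens with the Qwen3 tokenizer (HF repo name is an assumption).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-32B")
with open("Qwen3_Runescape_Massive_Prompt.txt", encoding="utf-8") as f:
    text = f.read()
print(len(tok(text).input_ids), "tokens")
```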
| 2025-04-29T19:05:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaw33r/qwen332b_testing_the_limits_of_massive_context/
|
Thireus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaw33r
| false | null |
t3_1kaw33r
|
/r/LocalLLaMA/comments/1kaw33r/qwen332b_testing_the_limits_of_massive_context/
| false | false |
self
| 22 | null |
Speech to Speech Interactive Model with tool calling support
| 5 |
Why has only OpenAI (with models like GPT-4o Realtime) managed to build advanced real-time speech-to-speech models with tool-calling support, while most other companies are still struggling with basic interactive speech models? What technical or strategic advantages does OpenAI have? Correct me if I’m wrong, and please mention if there are other models doing something similar.
| 2025-04-29T19:06:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaw484/speech_to_speech_interactive_model_with_tool/
|
martian7r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaw484
| false | null |
t3_1kaw484
|
/r/LocalLLaMA/comments/1kaw484/speech_to_speech_interactive_model_with_tool/
| false | false |
self
| 5 | null |
Any way to use an LLM to check PDF accessibility (fonts, margins, colors, etc.)?
| 1 |
[removed]
| 2025-04-29T19:09:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaw6ku/any_way_to_use_an_llm_to_check_pdf_accessibility/
|
Mobo6886
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaw6ku
| false | null |
t3_1kaw6ku
|
/r/LocalLLaMA/comments/1kaw6ku/any_way_to_use_an_llm_to_check_pdf_accessibility/
| false | false |
self
| 1 | null |
Best open source llama model for storytelling?
| 1 |
[removed]
| 2025-04-29T19:11:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaw8dc/best_open_source_llama_model_for_storytelling/
|
Old_Risk6485
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaw8dc
| false | null |
t3_1kaw8dc
|
/r/LocalLLaMA/comments/1kaw8dc/best_open_source_llama_model_for_storytelling/
| false | false |
self
| 1 | null |
Qwen 3 4B 128k unsloth
| 1 |
[removed]
| 2025-04-29T19:12:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaw9gm/qwen_3_4b_128k_unsloth/
|
Ni_Guh_69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaw9gm
| false | null |
t3_1kaw9gm
|
/r/LocalLLaMA/comments/1kaw9gm/qwen_3_4b_128k_unsloth/
| false | false |
self
| 1 | null |
So no new llama model today?
| 10 |
Surprised we haven't seen any news from LlamaCon about a new model release. Or did I miss it?
What are everyone's thoughts so far on LlamaCon?
| 2025-04-29T19:18:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaweax/so_no_new_llama_model_today/
|
scary_kitten_daddy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaweax
| false | null |
t3_1kaweax
|
/r/LocalLLaMA/comments/1kaweax/so_no_new_llama_model_today/
| false | false |
self
| 10 | null |
Out of the game for 12 months, what's the goto?
| 1 |
When local LLMs kicked off a couple of years ago, I got an Ollama server running with Open-WebUI. I've just spun these containers back up and I'm ready to load some models on my 3070 8GB (assuming Ollama and Open-WebUI are still considered good!).
I've heard the Qwen models are pretty popular, but there seems to be a lot of talk about context size, which I don't recall ever configuring, and I don't see these parameters within Open-WebUI. With information flying about everywhere and everyone providing different answers, is there a concrete guide anywhere that covers the ideal models for different applications? There are far too many acronyms to keep up with!
The latest Llama release seems to only offer a 70B option, which I'm pretty sure is too big for my GPU. Is llama3.2:8b my best bet?
| 2025-04-29T19:29:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kawnki/out_of_the_game_for_12_months_whats_the_goto/
|
westie1010
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kawnki
| false | null |
t3_1kawnki
|
/r/LocalLLaMA/comments/1kawnki/out_of_the_game_for_12_months_whats_the_goto/
| false | false |
self
| 1 | null |
Qwen3 on Fiction.liveBench for Long Context Comprehension
| 122 | 2025-04-29T19:31:14 |
fictionlive
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kawox7
| false | null |
t3_1kawox7
|
/r/LocalLLaMA/comments/1kawox7/qwen3_on_fictionlivebench_for_long_context/
| false | false | 122 |
{'enabled': True, 'images': [{'id': 'c78dKhcQb8Ns1942R7_5ke3H7maiIJlmA0V5UpxLKcI', 'resolutions': [{'height': 148, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?width=108&crop=smart&auto=webp&s=486de98be4c05efb1b38da42ccf43015c7970197', 'width': 108}, {'height': 296, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?width=216&crop=smart&auto=webp&s=20eacbacb63c498ffda2c567a29bd59be84d7835', 'width': 216}, {'height': 438, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?width=320&crop=smart&auto=webp&s=2d603f7466b4beb176711c85b2cd0a100a9f9832', 'width': 320}, {'height': 877, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?width=640&crop=smart&auto=webp&s=40324fb2fbb186ded80a243612e1e2b2969b6161', 'width': 640}, {'height': 1315, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?width=960&crop=smart&auto=webp&s=2b29db641a95664bcb2ee69aebe911220ed2558e', 'width': 960}, {'height': 1480, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?width=1080&crop=smart&auto=webp&s=4ac9da716ca30f8077bedf4c96d40a350f7367cc', 'width': 1080}], 'source': {'height': 2516, 'url': 'https://preview.redd.it/fpnum6crstxe1.png?auto=webp&s=1bb0bbbe77eaeaab418dc62c611bf20f95807018', 'width': 1836}, 'variants': {}}]}
|
|||
Ex-Google CEO's DISTURBING Predictions |
| 1 |
[removed]
| 2025-04-29T19:36:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kawt7k/exgoogle_ceos_disturbing_predictions/
|
SubliminalPoet
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kawt7k
| false | null |
t3_1kawt7k
|
/r/LocalLLaMA/comments/1kawt7k/exgoogle_ceos_disturbing_predictions/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9vW3pFIc4GDKyPHB8QEWstC3sKqUhNyqhRCI8l5uf9Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/RIqpYDPixdi5MD-0ppWfTbEfDZIuZM9WfH3YFr5cE1s.jpg?width=108&crop=smart&auto=webp&s=03d7722f5f1c8622647407783db7b34c9d632cd0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/RIqpYDPixdi5MD-0ppWfTbEfDZIuZM9WfH3YFr5cE1s.jpg?width=216&crop=smart&auto=webp&s=a2801db230ea44da7708ce4e916dca1ea4ec108e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/RIqpYDPixdi5MD-0ppWfTbEfDZIuZM9WfH3YFr5cE1s.jpg?width=320&crop=smart&auto=webp&s=c5d5ed91b1887d0f67a944daacc98369d10d72a7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/RIqpYDPixdi5MD-0ppWfTbEfDZIuZM9WfH3YFr5cE1s.jpg?auto=webp&s=8a403e3e1a7f22d26b6e8e463bef0ed5c180f7c3', 'width': 480}, 'variants': {}}]}
|
Qwen2.5 Max -Qwen Team, can you please open-weight?
| 1 |
[removed]
| 2025-04-29T19:40:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kawwun/qwen25_max_qwen_team_can_you_please_openweight/
|
nite2k
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kawwun
| false | null |
t3_1kawwun
|
/r/LocalLLaMA/comments/1kawwun/qwen25_max_qwen_team_can_you_please_openweight/
| false | false |
self
| 1 | null |
Qwen3-235B-A22B => UD-Q3_K_XL GGUF @12t/s with 4x3090 and old Xeon
| 35 |
Hi guys,
Just sharing that I get a constant 12 t/s with the following setup. These settings could be adjusted depending on hardware, but tbh I'm not the best person to help with llama.cpp's "-ot" flag.
Hardware: 4x RTX 3090 + an old Xeon E5-2697 v3 on an Asus X99-E-10G WS (96GB DDR4-2133, though not sure it has any impact here).
Model: [unsloth/Qwen3-235B-A22B-GGUF (UD-Q3_K_XL)](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/tree/main/UD-Q3_K_XL)
I use this command:
`./llama-server -m '/GGUF/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf' -ngl 99 -fa -c 16384 --override-tensor "([0-1]).ffn_.*_exps.=CUDA0,([2-3]).ffn_.*_exps.=CUDA1,([4-5]).ffn_.*_exps.=CUDA2,([6-7]).ffn_.*_exps.=CUDA3,([8-9]|[1-9][0-9])\.ffn_.*_exps\.=CPU" -ub 4096 --temp 0.6 --min-p 0.0 --top-p 0.95 --top-k 20 --port 8001`
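For anyone who wants to adapt the `--override-tensor` split to a different GPU count or layer range, here's a small hypothetical helper that builds an equivalent value by listing layer indices explicitly (the 94-layer total for Qwen3-235B-A22B is an assumption; adjust for your model):
```python
# Hypothetical helper: build an --override-tensor value that pins the expert
# (ffn_*_exps) tensors of the given layer ranges to each GPU and all remaining
# layers' experts to CPU.
def override_tensor_arg(gpu_layer_ranges, total_layers):
    parts = []
    assigned = set()
    for gpu, (start, end) in enumerate(gpu_layer_ranges):
        layers = range(start, end + 1)
        assigned.update(layers)
        parts.append(f"({'|'.join(map(str, layers))}).ffn_.*_exps.=CUDA{gpu}")
    cpu_layers = [l for l in range(total_layers) if l not in assigned]
    parts.append(f"({'|'.join(map(str, cpu_layers))}).ffn_.*_exps.=CPU")
    return ",".join(parts)

# Same split as the command above: layers 0-7 spread over 4 GPUs, the rest on CPU.
# 94 total layers for Qwen3-235B-A22B is an assumption; adjust to your model.
print(override_tensor_arg([(0, 1), (2, 3), (4, 5), (6, 7)], total_layers=94))
```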
Thanks to llama.cpp team, Unsloth, and to the guy behind [this post](https://www.reddit.com/r/LocalLLaMA/comments/1k1rjm1/how_to_run_llama_4_fast_even_though_its_too_big/).
| 2025-04-29T19:43:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kawzia/qwen3235ba22b_udq3_k_xl_gguf_12ts_with_4x3090_and/
|
Leflakk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kawzia
| false | null |
t3_1kawzia
|
/r/LocalLLaMA/comments/1kawzia/qwen3235ba22b_udq3_k_xl_gguf_12ts_with_4x3090_and/
| false | false |
self
| 35 |
{'enabled': False, 'images': [{'id': 'rx3n0qdAkHK6iirdeW-jkcWJMyS3AefZQJIArfhCVr0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=108&crop=smart&auto=webp&s=6c9699e848c6744d3541b58d5088430c8f383c39', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=216&crop=smart&auto=webp&s=a5c829e91fd83f00c99ca445eabd68bc7f4f53c9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=320&crop=smart&auto=webp&s=26c54064d3772f765b4fc33cc0a988816e2d834b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=640&crop=smart&auto=webp&s=7e5ca681836e6d44c749694b55962298027a4b34', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=960&crop=smart&auto=webp&s=5ac8b2ba7f716bb32e608e5ac9247afbd01f8314', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=1080&crop=smart&auto=webp&s=9bfceef4bfafaa2b50be907d375f3ff3ec1e9200', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?auto=webp&s=a4b0f149dbb55e174dcd3c9079bf234ac779a566', 'width': 1200}, 'variants': {}}]}
|
Thinking of Trying the New Qwen Models? Here's What You Should Know First!
| 0 |
Qwen’s team deserves real credit. They’ve been releasing models at an impressive pace, with solid engineering and attention to detail. It makes total sense that so many people are excited to try them out.
If you’re thinking about downloading the new models and filling up your SSD, here are a few things you might want to know beforehand.
**Multilingual capabilities**
If you were hoping for major improvements here, you might want to manage expectations. So far, there's no noticeable gain in multilingual performance. If multilingual use is a priority for you, the current models might not bring much new to the table.
**The “thinking” behavior**
All models tend to begin their replies with phrases like "Hmm...", "Oh, I see...", or "Wait a second...". While that can sound friendly, it also takes up unnecessary space in the context window. Fortunately, you can turn it off by adding **/no_think** in the system prompt.
**Performance compared to existing models**
I tested the Qwen models from 0.6B to 8B and none of them outperformed the Gemma lineup. If you’re looking for something compact and efficient, **Gemma 2 2B** is a great option. For something more powerful, **Gemma 3 4B** has been consistently solid. I didn’t even feel the need to go up to Gemma 3 12B. As for the larger Qwen models, I skipped them because the results from the smaller ones were already quite clear.
**Quick summary**
If you're already using something like Gemma and it's serving you well, these new Qwen models probably won’t bring a practical improvement to your day-to-day usage.
But if you’re still curious, and curiosity is always welcome, I’d recommend trying them out online. You can experiment with all versions from 0.6B to 8B using the highest quantization available. It’s a convenient way to explore without using up local resources.
**One last note**
Benchmarks can be interesting, but it’s worth remembering that many new models are trained to do well specifically on those tests. That doesn’t always mean they’ll offer a better experience in real-world scenarios.
Thank you! 🙏
| 2025-04-29T19:43:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kawzjl/thinking_of_trying_the_new_qwen_models_heres_what/
|
CaptainCivil7097
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kawzjl
| false | null |
t3_1kawzjl
|
/r/LocalLLaMA/comments/1kawzjl/thinking_of_trying_the_new_qwen_models_heres_what/
| false | false |
self
| 0 | null |
Qwen2.5 Max - Qwen Team, can you please open-weight?
| 1 |
[removed]
| 2025-04-29T19:47:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kax321/qwen25_max_qwen_team_can_you_please_openweight/
|
Impossible_Ground_15
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kax321
| false | null |
t3_1kax321
|
/r/LocalLLaMA/comments/1kax321/qwen25_max_qwen_team_can_you_please_openweight/
| false | false |
self
| 1 | null |
Most human like TTS to run locally?
| 5 |
I tried several to find something that doesn't sound like a robot. So far Zonos produces acceptable results, but it is prone to weird bouts of garbled sound. This led to a setup where I have to record every sentence separately and run it through STT to validate the results. Are there other, more stable solutions out there?
| 2025-04-29T19:53:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kax7ua/most_human_like_tts_to_run_locally/
|
SwimmerJazzlike
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kax7ua
| false | null |
t3_1kax7ua
|
/r/LocalLLaMA/comments/1kax7ua/most_human_like_tts_to_run_locally/
| false | false |
self
| 5 | null |
Meta AI with its llm Llama
| 1 |
[removed]
| 2025-04-29T19:54:25 |
Sensitive-Routine-66
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kax8op
| false | null |
t3_1kax8op
|
/r/LocalLLaMA/comments/1kax8op/meta_ai_with_its_llm_llama/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'YFYbYNiwCDQTXmIPtZDGMNyJs8h4Fh3m-F3d4VuoAq8', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?width=108&crop=smart&auto=webp&s=f42504891be3e03b8e201a5f86dcf1ded848c06c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?width=216&crop=smart&auto=webp&s=88db6b2008af78c93ec93f73d22f1310f1518262', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?width=320&crop=smart&auto=webp&s=4ae38e53420b9da46e24cb20c80eef9ad3e234e9', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?width=640&crop=smart&auto=webp&s=1c47d30d7b57bd8a0df7b3a03de1c6f2e7d19026', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?width=960&crop=smart&auto=webp&s=77a2dff3072bde61d25ee2ce42eecc2043e25fbf', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?width=1080&crop=smart&auto=webp&s=4525b5a39738caf762c1c395aa4fb6536551543d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/h8eg1gxywtxe1.jpeg?auto=webp&s=cdd5d4699c119d6e88f001209a5ba84e8a5bbb0e', 'width': 1280}, 'variants': {}}]}
|
||
Qwen 3 presence of tools affect output length?
| 2 |
Experimented with Qwen 3 32B Q5 and Qwen 3 8B fp16, with and without tools present. The query doesn't use the tools specified. The output without tools specified is consistently longer (roughly double) than the one with tools specified.
Is this normal? I tested the same query and tools with Qwen 2.5 and it doesn't exhibit the same behavior.
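A rough sketch of how this comparison can be reproduced against an OpenAI-compatible endpoint; the URL, model name, and dummy weather tool below are placeholders, not the exact setup used here:
```python
# Send the same query with and without a (deliberately irrelevant) tool definition
# and compare response lengths. URL, model name, and tool schema are placeholders.
import requests

URL = "http://localhost:8080/v1/chat/completions"
messages = [{"role": "user", "content": "Explain how transformers work."}]
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool the query should never call
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

for label, payload in [
    ("without tools", {"model": "qwen3-32b", "messages": messages}),
    ("with tools", {"model": "qwen3-32b", "messages": messages, "tools": tools}),
]:
    r = requests.post(URL, json=payload, timeout=600).json()
    content = r["choices"][0]["message"]["content"] or ""
    print(label, len(content), "characters")
```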
| 2025-04-29T20:07:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaxk52/qwen_3_presence_of_tools_affect_output_length/
|
McSendo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaxk52
| false | null |
t3_1kaxk52
|
/r/LocalLLaMA/comments/1kaxk52/qwen_3_presence_of_tools_affect_output_length/
| false | false |
self
| 2 | null |
Replicate and fal.ai
| 1 |
[removed]
| 2025-04-29T20:08:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaxkou/replicate_and_falai/
|
Potential_Nature4974
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaxkou
| false | null |
t3_1kaxkou
|
/r/LocalLLaMA/comments/1kaxkou/replicate_and_falai/
| false | false |
self
| 1 | null |
Hey ChatGPT, lets play Tic Tac Toe!
| 0 | 2025-04-29T20:16:22 |
https://chatgpt.com/share/68113301-7f80-8002-8e37-bdb25b741716
|
harlekinrains
|
chatgpt.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaxrrh
| false | null |
t3_1kaxrrh
|
/r/LocalLLaMA/comments/1kaxrrh/hey_chatgpt_lets_play_tic_tac_toe/
| false | false |
default
| 0 | null |
|
What do you use for quick model switching via API?
| 1 |
[removed]
| 2025-04-29T20:17:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaxswg/what_do_you_use_for_quick_model_switching_via_api/
|
jonglaaa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaxswg
| false | null |
t3_1kaxswg
|
/r/LocalLLaMA/comments/1kaxswg/what_do_you_use_for_quick_model_switching_via_api/
| false | false |
self
| 1 | null |
"I want a representation of yourself using Matplotlib."
| 1 | 2025-04-29T20:20:51 |
https://www.reddit.com/gallery/1kaxvn7
|
JLeonsarmiento
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaxvn7
| false | null |
t3_1kaxvn7
|
/r/LocalLLaMA/comments/1kaxvn7/i_want_a_representation_of_yourself_using/
| false | false | 1 | null |
||
"I want a representation of yourself using matplotlib."
| 83 | 2025-04-29T20:29:15 |
https://www.reddit.com/gallery/1kay2t3
|
JLeonsarmiento
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kay2t3
| false | null |
t3_1kay2t3
|
/r/LocalLLaMA/comments/1kay2t3/i_want_a_representation_of_yourself_using/
| false | false | 83 | null |
||
What Fast AI Voice System Is Used?
| 1 |
[removed]
| 2025-04-29T20:36:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kay92c/what_fast_ai_voice_system_is_used/
|
StrangerQuestionsOhA
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kay92c
| false | null |
t3_1kay92c
|
/r/LocalLLaMA/comments/1kay92c/what_fast_ai_voice_system_is_used/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ALvO0UwZODj7Mx_z9pqYh4rE5zXNOaoNgZoy7Ex9bPM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=108&crop=smart&auto=webp&s=37506e3ac24db95dc5545b67defdf1a8d2d00c04', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=216&crop=smart&auto=webp&s=df6ebf82293a9f4e65f7d164088b16844960fd36', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=320&crop=smart&auto=webp&s=a568449f2fc06377c18158cae96b21b30ea54c6b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=640&crop=smart&auto=webp&s=1c2b382f99e013187fac6c4280a099933e4b0d47', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=960&crop=smart&auto=webp&s=b8fe6bc282b17a7831decfab7e61978156af4fc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=1080&crop=smart&auto=webp&s=092668d4239bd2181ed1011846370fbbbfb2cb20', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?auto=webp&s=6947dcbd44381523b0c1b480eac830d1e29bddbc', 'width': 1200}, 'variants': {}}]}
|
You can run Qwen3-30B-A3B on a 16GB RAM CPU-only PC!
| 330 |
I just got the Qwen3-30B-A3B model running on my CPU-only PC using llama.cpp, and honestly, I’m blown away by how well it's performing. I'm running the q4 quantized version of the model, and despite having just 16GB of RAM and no GPU, I’m consistently getting more than 10 tokens per second.
I wasn't expecting much given the size of the model and my relatively modest hardware setup. I figured it would crawl or maybe not even load at all, but to my surprise, it's actually snappy and responsive for many tasks.
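This lines up with a quick bandwidth estimate: with an A3B MoE only about 3B parameters are read per generated token, not the full 30B. A rough sketch (the ~4.5 bits/weight for Q4 and ~40 GB/s DDR4 bandwidth are assumptions):
```python
# Back-of-envelope upper bound on CPU decode speed for a MoE with ~3B active params.
active_params = 3e9      # ~3B parameters activated per token (the "A3B" part)
bits_per_weight = 4.5    # rough average for a Q4_K-style quant (assumption)
ram_bandwidth = 40e9     # ~40 GB/s effective dual-channel DDR4 (assumption)

bytes_per_token = active_params * bits_per_weight / 8  # ~1.7 GB read per token
print(f"~{ram_bandwidth / bytes_per_token:.0f} tokens/s upper bound")  # ~24 tok/s
```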
| 2025-04-29T20:36:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kay93z/you_can_run_qwen330ba3b_on_a_16gb_ram_cpuonly_pc/
|
Foxiya
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kay93z
| false | null |
t3_1kay93z
|
/r/LocalLLaMA/comments/1kay93z/you_can_run_qwen330ba3b_on_a_16gb_ram_cpuonly_pc/
| false | false |
self
| 330 | null |
VideoGameBench: Benchmarking an AI's Visual Capabilities through Video Games (Research)
| 1 | 2025-04-29T20:38:36 |
https://www.vgbench.com/
|
StrangerQuestionsOhA
|
vgbench.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kayavl
| false | null |
t3_1kayavl
|
/r/LocalLLaMA/comments/1kayavl/videogamebench_benchmarking_an_ais_visual/
| false | false |
default
| 1 | null |
|
Is this AI's Version of Moore's Law? - Computerphile
| 0 | 2025-04-29T20:40:46 |
https://youtube.com/watch?v=evSFeqTZdqs&feature=shared
|
behradkhodayar
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaycqb
| false |
{'oembed': {'author_name': 'Computerphile', 'author_url': 'https://www.youtube.com/@Computerphile', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/evSFeqTZdqs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Is this AI's Version of Moore's Law? - Computerphile"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/evSFeqTZdqs/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Is this AI's Version of Moore's Law? - Computerphile", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kaycqb
|
/r/LocalLLaMA/comments/1kaycqb/is_this_ais_version_of_moores_law_computerphile/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'UScL5-SeWhXQZ5AHTMnBQkl_0IHoaj6i9-FiBpAOCCY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7AxhS_ClShptk3ZWbdhT7-aNLWCGQ2LLYW8AWAEG_Ss.jpg?width=108&crop=smart&auto=webp&s=0237342c79d57375d303adc9de77b54af101dd36', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7AxhS_ClShptk3ZWbdhT7-aNLWCGQ2LLYW8AWAEG_Ss.jpg?width=216&crop=smart&auto=webp&s=c3d6a4389331b6d905623b2a700146c102412a8a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7AxhS_ClShptk3ZWbdhT7-aNLWCGQ2LLYW8AWAEG_Ss.jpg?width=320&crop=smart&auto=webp&s=fe7c8786f05eff2fea2d94aededb7c830cc36532', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7AxhS_ClShptk3ZWbdhT7-aNLWCGQ2LLYW8AWAEG_Ss.jpg?auto=webp&s=33d03d323a21d1c290edf959bcdcab79b342a774', 'width': 480}, 'variants': {}}]}
|
||
Best model for me?
| 1 |
[removed]
| 2025-04-29T20:42:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kayeek/best_model_for_me/
|
annakhouri2150
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kayeek
| false | null |
t3_1kayeek
|
/r/LocalLLaMA/comments/1kayeek/best_model_for_me/
| false | false |
self
| 1 | null |
Is there any TTS that can clone a voice to sound like Glados or Darth Vader
| 3 |
Has anyone found a paid or open-source TTS model that can get really close to voices like GLaDOS and Darth Vader? Voices that are not the typical TTS sound.
| 2025-04-29T20:48:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kayjfa/is_there_any_tts_that_can_clone_a_voice_to_sound/
|
ahadcove
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kayjfa
| false | null |
t3_1kayjfa
|
/r/LocalLLaMA/comments/1kayjfa/is_there_any_tts_that_can_clone_a_voice_to_sound/
| false | false |
self
| 3 | null |
Just $1/Week for The Netflix of AI – Only 25 Spots!
| 1 |
[removed]
| 2025-04-29T20:53:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaynw2/just_1week_for_the_netflix_of_ai_only_25_spots/
|
AdmixSoftware
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaynw2
| false | null |
t3_1kaynw2
|
/r/LocalLLaMA/comments/1kaynw2/just_1week_for_the_netflix_of_ai_only_25_spots/
| false | false |
self
| 1 | null |
Best Llama model for function/tools?
| 1 |
[removed]
| 2025-04-29T20:56:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kayq56/best_llama_model_for_functiontools/
|
Silly_Goose_369
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kayq56
| false | null |
t3_1kayq56
|
/r/LocalLLaMA/comments/1kayq56/best_llama_model_for_functiontools/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '52CiiB4rXzDCv9FyVXCCo5_WtNA9nSsCphoxLyUDNPw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?width=108&crop=smart&auto=webp&s=b322b18282ab3a7165c5229992830152e311de21', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?width=216&crop=smart&auto=webp&s=1a63c5fa60346c1e8d16fd8ee4ee024805bff8a6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?width=320&crop=smart&auto=webp&s=a26ff728ff773654990bf770326ec8245616a04e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?width=640&crop=smart&auto=webp&s=18eb1e8634ac075a1a09d2f2697131a9861440f9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?width=960&crop=smart&auto=webp&s=d056cd34c1fd2997e7326bc4c90c46099e9709b7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?width=1080&crop=smart&auto=webp&s=e72908bf09e447205821bb23ea4d2ceb1bfc3812', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bXejXeCQew6g3GObHjW2MnLTGgnAeX6lpnbWEhOwiOE.jpg?auto=webp&s=ea988c79e4eb227000ac0d03497b13c8d34f90bb', 'width': 1200}, 'variants': {}}]}
|
Mac hardware for fine-tuning
| 2 |
Hello everyone,
I'd like to fine-tune some Qwen / Qwen VL models locally, ranging from 0.5B to 8B to 32B.
Which type of Mac should I invest in? I usually fine-tune with Unsloth, 4-bit, on an A100.
I've been a Windows user for years, but I think the unified RAM of a Mac could be very helpful for making prototypes.
Also, how does the speed compare to an A100?
Please share your experiences and specs. That helps a lot!
| 2025-04-29T21:11:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaz424/mac_hardware_for_finetuning/
|
AcanthaceaeNo5503
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaz424
| false | null |
t3_1kaz424
|
/r/LocalLLaMA/comments/1kaz424/mac_hardware_for_finetuning/
| false | false |
self
| 2 | null |
We have Deep Research at Home, and it finally cites its sources and verifies its citations
| 23 | 2025-04-29T21:12:40 |
https://github.com/atineiatte/deep-research-at-home
|
atineiatte
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaz4sf
| false | null |
t3_1kaz4sf
|
/r/LocalLLaMA/comments/1kaz4sf/we_have_deep_research_at_home_and_it_finally/
| false | false | 23 |
{'enabled': False, 'images': [{'id': 'RL241EEiU57OCF0YX_MRyJzjZz6RTuHs98I8pc78x0I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?width=108&crop=smart&auto=webp&s=b15ccbcd0dbe2855e6f279b12e598e341e201035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?width=216&crop=smart&auto=webp&s=bc8ca5fa6327715f83ec6aebdd161f3a391a34d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?width=320&crop=smart&auto=webp&s=450d33e62e9c053e08a298a8668987fd348c5696', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?width=640&crop=smart&auto=webp&s=144cd206e5fb6aa3c25deb4f22b0bfa62c02ea3f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?width=960&crop=smart&auto=webp&s=414d0052163a3c0e5d1fffb83bd059bca04b0798', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?width=1080&crop=smart&auto=webp&s=92d29ef5da21ff602398954de4745fd8953f9ebb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/axfJ1KMG7iillauwvOIoInbH4t8F1Q8S57hfD1_I3UM.jpg?auto=webp&s=47252e108f4f06bf80e53e7c31143031b06a9819', 'width': 1200}, 'variants': {}}]}
|
||
Nous Research "Today - we release Atropos - our RL environments framework."
| 1 |
[removed]
| 2025-04-29T21:24:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kazeuh/nous_research_today_we_release_atropos_our_rl/
|
Cane_P
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kazeuh
| false | null |
t3_1kazeuh
|
/r/LocalLLaMA/comments/1kazeuh/nous_research_today_we_release_atropos_our_rl/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'McQmDoGmL0cO6cAeTNg8lbEfAOgCA36KgU4FvpFOzqI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?width=108&crop=smart&auto=webp&s=b8e28580c13e2dc6ed5f55c708004ccb23a12333', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?width=216&crop=smart&auto=webp&s=b0bc904a40fbe855682d7083bbf3949c355d318c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?width=320&crop=smart&auto=webp&s=5da0073284534fa406f78503284a34cde625af22', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?width=640&crop=smart&auto=webp&s=41dbe7b9bc823742f068d32de56940ae1e530676', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?width=960&crop=smart&auto=webp&s=96987d2103184108ab743e8d055f11e1104310b7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?width=1080&crop=smart&auto=webp&s=a7190c1e2875589329267bab34116bd8d4b52c62', 'width': 1080}], 'source': {'height': 1441, 'url': 'https://external-preview.redd.it/yPpaWBwRxlgYy3A4nktsJTWbupf27fRi7XK53aKI58g.jpg?auto=webp&s=f650fd476ee2c53b9e548e420a9db4d243b74550', 'width': 2560}, 'variants': {}}]}
|
Is there a good Llama-Swap Docker Image?
| 1 |
Hey, I am interested in moving my setup over to llama-swap. While it runs fine when I install it natively, I want to run it in a Docker container for convenience and easy updating, but the example they provide on the official GitHub does not help.
The only thing I can find in their documentation is a list of the different versions (like CUDA) and the run command itself.
docker run -it --rm -p 9292:8080 ghcr.io/mostlygeek/llama-swap:cpu
But this does not expose the YAML file to the host machine. If I try to expose the /app folder or just the YAML file directly, I get an error that the container can't be deployed. Yes, I could just make my own container from scratch, but that defeats the purpose of wanting it for quick, easy updating.
Is there a good Docker image of it that allows exposing the model folder and config file and is kept up to date with new llama.cpp versions?
| 2025-04-29T21:25:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kazfo7/is_there_a_good_llamaswap_docker_image/
|
MaruluVR
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kazfo7
| false | null |
t3_1kazfo7
|
/r/LocalLLaMA/comments/1kazfo7/is_there_a_good_llamaswap_docker_image/
| false | false |
self
| 1 | null |
Qwen3 235B UDQ2 AMD 16GB VRAM == 4t/s and 190watts at outlet
| 19 |
Strongly influenced by this post:
[https://www.reddit.com/r/LocalLLaMA/comments/1k1rjm1/how_to_run_llama_4_fast_even_though_its_too_big/?rdt=47695](https://www.reddit.com/r/LocalLLaMA/comments/1k1rjm1/how_to_run_llama_4_fast_even_though_its_too_big/?rdt=47695)
Use llama.cpp Vulkan (i used pre-compiled b5214):
[https://github.com/ggml-org/llama.cpp/releases?page=1](https://github.com/ggml-org/llama.cpp/releases?page=1)
hardware requirements and notes:
64GB RAM (i have ddr4 around 45GB/s benchmark)
16GB VRAM AMD 6900 XT (any 16GB card will do, your mileage may vary)
gen4 pcie NVME (slower will mean slower step 6-8)
Vulkan SDK and Vulkan manually installed (google it)
any operating system supported by the above.
1) extract the zip of the pre-compiled zip to the folder of your choosing
2) open cmd as admin (probably don't need admin)
3) navigate to your decompressed zip folder (cd D:\YOUR_FOLDER_HERE_llama_b5214)
4) download unsloth (bestsloth) Qwen3-235B-A22B-UD-Q2_K_XL and place in a folder you will remember (mine displayed below in step 6)
5) close every application that is unnecessary and free up as much RAM as possible.
6) in the cmd terminal try this:
llama-server.exe -m F:\YOUR_MODELS_FOLDER_models\Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf -ngl 95 -c 11000 --override-tensor "([7-9]|[1-9][0-9]).ffn_.*_exps.=CPU,([0-6]).ffn_.*_exps.=Vulkan0" --ubatch-size 1
7) Wait about 14 minutes for warm-up. Worth the wait. Don't get impatient.
8) launch a browser window to http://127.0.0.1:8080. Don't use Chrome; I prefer a new install of Opera specifically for this use case.
9) prompt processing is also about 4 t/s kekw, so wait a long time for big prompts during PP.
10) if you have other tricks that would improve this method, add them in the comments.
| 2025-04-29T21:34:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kazna1/qwen3_235b_udq2_amd_16gb_vram_4ts_and_190watts_at/
|
bennmann
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kazna1
| false | null |
t3_1kazna1
|
/r/LocalLLaMA/comments/1kazna1/qwen3_235b_udq2_amd_16gb_vram_4ts_and_190watts_at/
| false | false |
self
| 19 |
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
|
Modern MoE on old mining hardware?
| 1 |
[removed]
| 2025-04-29T21:38:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kazqym/modern_moe_on_old_mining_hardware/
|
Njee_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kazqym
| false | null |
t3_1kazqym
|
/r/LocalLLaMA/comments/1kazqym/modern_moe_on_old_mining_hardware/
| false | false |
self
| 1 | null |
Why is Llama 4 considered bad?
| 3 |
I just watched Llamacon this morning and did some quick research while reading comments, and it seems like the vast majority of people aren't happy with the new Llama 4 Scout and Maverick models. Can someone explain why? I've finetuned some 3.1 models before, and I was wondering if it's even worth switching to 4. Any thoughts?
| 2025-04-29T21:41:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kaztmm/why_is_llama_4_considered_bad/
|
Aaron_MLEngineer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kaztmm
| false | null |
t3_1kaztmm
|
/r/LocalLLaMA/comments/1kaztmm/why_is_llama_4_considered_bad/
| false | false |
self
| 3 | null |
Local llm use case
| 1 |
[removed]
| 2025-04-29T21:42:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kazur5/local_llm_use_case/
|
nonerequired_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kazur5
| false | null |
t3_1kazur5
|
/r/LocalLLaMA/comments/1kazur5/local_llm_use_case/
| false | false |
self
| 1 | null |
I need a consistent text to speech for my meditation app
| 1 |
I am going to be making a lot of guided meditations, but right now, as I use 11 labs, every time I regenerate a certain text it sounds a little bit different. Is there any way to consistently get the same-sounding text-to-speech?
| 2025-04-29T21:46:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kazxqt/i_need_a_consistent_text_to_speech_for_my/
|
Separate_Penalty7991
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kazxqt
| false | null |
t3_1kazxqt
|
/r/LocalLLaMA/comments/1kazxqt/i_need_a_consistent_text_to_speech_for_my/
| false | false |
self
| 1 | null |
CPU only performance king Qwen3:32b-q4_K_M. No GPU required for usable speed.
| 23 |
I tried this on my desktop system with no GPU. It worked really well. For a 1000-token prompt I got 900 tk/s prompt processing and 12 tk/s evaluation. The system is a Ryzen 5 5600G with 32GB of 3600MHz RAM with Ollama. It is quite usable and it's not stupid. A new high point for CPU only.
With a modern DDR5 system it should be 1.5x to as much as 2x the speed.
For CPU only it is a game changer. Nothing I have tried before even came close.
The only requirement is that you need 32gb of RAM.
On a GPU it is really fast.
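The 1.5x-to-2x DDR5 estimate is consistent with the raw bandwidth numbers; a quick sketch using theoretical dual-channel peak figures (real sustained bandwidth will be lower):
```python
# Theoretical dual-channel peak bandwidth: channels * 8 bytes * transfer rate.
ddr4_3600 = 2 * 8 * 3600e6   # ~57.6 GB/s
ddr5_6000 = 2 * 8 * 6000e6   # ~96.0 GB/s
print(f"DDR5-6000 / DDR4-3600 = {ddr5_6000 / ddr4_3600:.2f}x")  # ~1.67x
```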
| 2025-04-29T21:55:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb04xg/cpu_only_performance_king_qwen332bq4_k_m_no_gpu/
|
PermanentLiminality
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb04xg
| false | null |
t3_1kb04xg
|
/r/LocalLLaMA/comments/1kb04xg/cpu_only_performance_king_qwen332bq4_k_m_no_gpu/
| false | false |
self
| 23 | null |
lmstudio-community or mlx-community for Qwen3 32B
| 1 |
[removed]
| 2025-04-29T22:15:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb0mag/lmstudiocommunity_or_mlxcommunity_for_qwen3_32b/
|
ActarusDev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb0mag
| false | null |
t3_1kb0mag
|
/r/LocalLLaMA/comments/1kb0mag/lmstudiocommunity_or_mlxcommunity_for_qwen3_32b/
| false | false |
self
| 1 | null |
Where is qwen-3 ranked on lmarena?
| 2 |
Current open weight models:
|Rank|Model|ELO Score|
|:-|:-|:-|
|7|DeepSeek|1373|
|13|Gemma|1342|
|18|QwQ-32B|1314|
|19|Command A by Cohere|1305|
|38|Athene nexusflow|1275|
|38|Llama-4|1271|
| 2025-04-29T22:17:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb0nqv/where_is_qwen3_ranked_on_lmarena/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb0nqv
| false | null |
t3_1kb0nqv
|
/r/LocalLLaMA/comments/1kb0nqv/where_is_qwen3_ranked_on_lmarena/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'GN_X5R1sCiL1TZtqrJfkh-QeM-o6eoX2e7St-SuEu00', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?width=108&crop=smart&auto=webp&s=20ee8b0e8095870a4ab002f5421a0758d3b83c41', 'width': 108}, {'height': 181, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?width=216&crop=smart&auto=webp&s=d420ec73737e19e24ffeeb2522853625831fe031', 'width': 216}, {'height': 268, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?width=320&crop=smart&auto=webp&s=02470232a57eb813cd4168bce5b8aa4969cb371e', 'width': 320}, {'height': 537, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?width=640&crop=smart&auto=webp&s=0ce87f1cf7665e472e4b3660d51ff8edaccd273a', 'width': 640}, {'height': 806, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?width=960&crop=smart&auto=webp&s=db79d49f5692d9443f66f6752cadd25052eb8e11', 'width': 960}, {'height': 907, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?width=1080&crop=smart&auto=webp&s=ade65f0673d058df6f2a55d4365537dc3ffbbdae', 'width': 1080}], 'source': {'height': 1512, 'url': 'https://external-preview.redd.it/9uYViHOg12ZUXk7-pLkOqzRixsV2LsgNE5_6OStaIXM.jpg?auto=webp&s=d7ff2160714ffff9ece4298375bbb5ce177df7c0', 'width': 1800}, 'variants': {}}]}
|
Guidance on home AI Lab
| 1 |
[removed]
| 2025-04-29T22:38:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb14cj/guidance_on_home_ai_lab/
|
smoothbrainbiglips
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb14cj
| false | null |
t3_1kb14cj
|
/r/LocalLLaMA/comments/1kb14cj/guidance_on_home_ai_lab/
| false | false |
self
| 1 | null |
Qwen3 30B A3B Almost Gets Flappy Bird....
| 14 |
The space bar does almost nothing in terms of making the "bird" go upwards, but it's close for an A3B :)
| 2025-04-29T22:39:17 |
https://v.redd.it/njay1q3cpuxe1
|
random-tomato
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb15bq
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/njay1q3cpuxe1/DASHPlaylist.mpd?a=1748558369%2CNmIzY2FjNGEwMDViNzdlMDNjOTk1ODZiMzdhODdkMWQ1NTIxMTUyOTk0OGIxMGRmNWZiNjA2YTdiNzc2ZDk0OQ%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/njay1q3cpuxe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/njay1q3cpuxe1/HLSPlaylist.m3u8?a=1748558369%2CMjc4ZmNmNTU4NmQyY2IyZjY2OTMxNzQ0MzAxNjBmYjIwZDNhNzJkY2JmOTMyMDBlYmZlMGFlOTE2ZmQxNDM4NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/njay1q3cpuxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1kb15bq
|
/r/LocalLLaMA/comments/1kb15bq/qwen3_30b_a3b_almost_gets_flappy_bird/
| false | false | 14 |
{'enabled': False, 'images': [{'id': 'NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?width=108&crop=smart&format=pjpg&auto=webp&s=429f4a78185e5c7ed1cf3b9308dd16db446decf6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?width=216&crop=smart&format=pjpg&auto=webp&s=bc8addd789ccfcc13e3885bbbd76a2b0b6e304cc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?width=320&crop=smart&format=pjpg&auto=webp&s=48503358ed387817e1a0b9a7badca046c95430dd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?width=640&crop=smart&format=pjpg&auto=webp&s=60339ec5dce3044fe0d22b028e86ea03ff49e56d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?width=960&crop=smart&format=pjpg&auto=webp&s=7a159706e2eee0f029ef17c657f8883b55dac0a4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b3a680b6ccc83cc341f695145f526c32141210d6', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NGNrMm94M2NwdXhlMb3dkNZTDeXeqSiGb-b5PoYLi3MyVNdUUgm-BXoMfjd8.png?format=pjpg&auto=webp&s=6e1b1b90ba684a5880ed5ef3cf748e1e50db5659', 'width': 1280}, 'variants': {}}]}
|
|
Local deployment hardware
| 1 |
[removed]
| 2025-04-29T22:48:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb1cvq/local_deployment_hardware/
|
smoothbrainbiglips
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb1cvq
| false | null |
t3_1kb1cvq
|
/r/LocalLLaMA/comments/1kb1cvq/local_deployment_hardware/
| false | false |
self
| 1 | null |
Tinyllama Frustrating but not that bad.
| 1 |
I decided that for my first build I would use an agent with TinyLlama to see what I could get out of the model. I was very surprised, to say the least. How you prompt it really matters. I vibe-coded the agent from scratch along with a website. There's still some tuning to do, but I'm excited about future builds for sure. Does anybody else use TinyLlama for anything? What is a model that is a step or two above it but still pretty compact?
| 2025-04-29T22:50:15 |
XDAWONDER
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb1e0a
| false | null |
t3_1kb1e0a
|
/r/LocalLLaMA/comments/1kb1e0a/tinyllama_frustrating_but_not_that_bad/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'eWcR1I19gfFcsx3vcJzFqZ68qmLU6gYMcww_KWltPR0', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?width=108&crop=smart&auto=webp&s=65e60d1cf54e06761da9860a037b6813425ea566', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?width=216&crop=smart&auto=webp&s=b7850263ac64fd4799915d46f1b8c5b44869e17e', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?width=320&crop=smart&auto=webp&s=27ed7faf3d98a8a43f7106994bf040767ec82394', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?width=640&crop=smart&auto=webp&s=c1777dda4f4cf7d1441a31053815bb25ee8ee7e8', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?width=960&crop=smart&auto=webp&s=db2df823c7b56b14a1e3ad6ac7fad8c9145226d8', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?width=1080&crop=smart&auto=webp&s=4663927d246f55435aca2153a15cf0b538105ac3', 'width': 1080}], 'source': {'height': 751, 'url': 'https://preview.redd.it/nqm0wfbcsuxe1.jpeg?auto=webp&s=2a1b1ed0950b139f2dc84db3283837e949533ccb', 'width': 1201}, 'variants': {}}]}
|
||
Local deployment hardware
| 1 |
[removed]
| 2025-04-29T22:56:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb1ily/local_deployment_hardware/
|
Emergency_Cattle_522
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb1ily
| false | null |
t3_1kb1ily
|
/r/LocalLLaMA/comments/1kb1ily/local_deployment_hardware/
| false | false |
self
| 1 | null |
INTELLECT-2 finished training today
| 101 | 2025-04-29T23:06:45 |
https://app.primeintellect.ai/intelligence/intellect-2
|
kmouratidis
|
app.primeintellect.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb1qz6
| false | null |
t3_1kb1qz6
|
/r/LocalLLaMA/comments/1kb1qz6/intellect2_finished_training_today/
| false | false | 101 |
{'enabled': False, 'images': [{'id': 'f8W94Jb9DabuzLu03ljqyu2UMmIk-3p6UDElsvW-vWs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?width=108&crop=smart&auto=webp&s=28fe381f5553b562ea234f919fdcf24f69dd3075', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?width=216&crop=smart&auto=webp&s=4df03f45c166016a379a0e7d4099a45247ee788d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?width=320&crop=smart&auto=webp&s=f5aca7f9814b26f9efa404f7300ba7c59ef5eb75', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?width=640&crop=smart&auto=webp&s=b8f21a2effbdc013afb553d8655b0b5e6a67982c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?width=960&crop=smart&auto=webp&s=af70cc41e2ca5fbc9f9a272af46034a4dbd612e7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?width=1080&crop=smart&auto=webp&s=332b010a786b2b57793d348e8cabd0fb0770953e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/2TaHsZn-6uiirSACnSyMrnV5EoZxDaj8F8MVB4S1kQw.jpg?auto=webp&s=e9f3219cddc148a1cd815be64c5c92f73597fa8d', 'width': 1200}, 'variants': {}}]}
|
||
What's the best context window/memory managers you have tried so far?
| 18 |
I've tried world books in silly tavern and kobold, but the results seem kind of unpredictable.
I'd really like to get to the point where I can have an agent working on my PC, consistently, on a project, but the context window seems to be the main thing holding me back right now. We need infinite context windows or some really godlike memory manager. What are the best solutions you've found so far?
| 2025-04-29T23:29:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb28h5/whats_the_best_context_windowmemory_managers_you/
|
ChainOfThot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb28h5
| false | null |
t3_1kb28h5
|
/r/LocalLLaMA/comments/1kb28h5/whats_the_best_context_windowmemory_managers_you/
| false | false |
self
| 18 | null |
Do you even use the models we have?
| 1 |
[removed]
| 2025-04-29T23:32:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb2al3/do_you_even_use_the_models_we_have/
|
Doogie707
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb2al3
| false | null |
t3_1kb2al3
|
/r/LocalLLaMA/comments/1kb2al3/do_you_even_use_the_models_we_have/
| false | false |
self
| 1 | null |
codename "LittleLLama". 8B llama 4 incoming
| 60 | 2025-04-29T23:35:44 |
https://www.youtube.com/watch?v=rYXeQbTuVl0
|
secopsml
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb2d7z
| false |
{'oembed': {'author_name': 'Dwarkesh Patel', 'author_url': 'https://www.youtube.com/@DwarkeshPatel', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/rYXeQbTuVl0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Mark Zuckerberg – Llama 4, DeepSeek, Trump, AI Friends, & Race to AGI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/rYXeQbTuVl0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Mark Zuckerberg – Llama 4, DeepSeek, Trump, AI Friends, & Race to AGI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kb2d7z
|
/r/LocalLLaMA/comments/1kb2d7z/codename_littlellama_8b_llama_4_incoming/
| false | false | 60 |
{'enabled': False, 'images': [{'id': 'BRZ-JcG39YABcZ8AR0yiGdBdUdM-cLjzUK_a-xeqORQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/i1zk2WxsO1Z-ctZBJ8kCoA7z8bnswfkJFDSk_5B34jo.jpg?width=108&crop=smart&auto=webp&s=d2a3cfd5a76636874a0a2f6b7c62008b7690293c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/i1zk2WxsO1Z-ctZBJ8kCoA7z8bnswfkJFDSk_5B34jo.jpg?width=216&crop=smart&auto=webp&s=9beedb76cfa4b09e657cc1e568237f734008dc68', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/i1zk2WxsO1Z-ctZBJ8kCoA7z8bnswfkJFDSk_5B34jo.jpg?width=320&crop=smart&auto=webp&s=ac196252196a0daf71dbc62e063cb102c4d40cab', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/i1zk2WxsO1Z-ctZBJ8kCoA7z8bnswfkJFDSk_5B34jo.jpg?auto=webp&s=1725876d810b84e79009767ad6a1e87eb7185b1f', 'width': 480}, 'variants': {}}]}
|
||
Dia TTS - 40% Less VRAM Usage, Longer Audio Generation, Improved Gradio UI, Improved Voice Consistency
| 1 |
[removed]
| 2025-04-30T00:01:12 |
https://github.com/RobertAgee/dia/tree/optimized-chunking
|
Fold-Plastic
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb2w66
| false | null |
t3_1kb2w66
|
/r/LocalLLaMA/comments/1kb2w66/dia_tts_40_less_vram_usage_longer_audio/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'SUXWjXv5jaT4Wxxq-evoLpPEElv7FTbE9SEFRDeVl9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?width=108&crop=smart&auto=webp&s=3585d8c9d2750b9b62d5a0de65738c2a5fc3be42', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?width=216&crop=smart&auto=webp&s=09a98ffb6ba343012d2f248f3ed8626038271f42', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?width=320&crop=smart&auto=webp&s=fc5a29c77be704a312eccd2a7032068668c958e8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?width=640&crop=smart&auto=webp&s=86c37d9ce1a6637449535becbdc0d00e86b742b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?width=960&crop=smart&auto=webp&s=fa615e17f58d1ff7d03700eff7f19fccd8a1e58f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?width=1080&crop=smart&auto=webp&s=1b3f3a6e6955157256720571b0b65fe3aaefee28', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qCdpq58Z4Iqyck6CX3muYYEZxHRP_BK5DgGnAJcLU78.jpg?auto=webp&s=c6b961aae3ed5f98494da2b902e73b897fa95705', 'width': 1200}, 'variants': {}}]}
|
|
Structured Form Filling Benchmark Results
| 12 |
I created a benchmark to test various locally-hostable models on form filling accuracy and speed. Thought you all might find it interesting.
The task was to read a chunk of text and fill out the relevant fields on a long structured form by returning a specifically formatted JSON object. The form has several dozen fields, and the text is intended to provide answers for a selection of 19 of them. All models were tested on DeepInfra's API.
Takeaways:
* Fastest Model: Llama-4-Maverick-17B-128E-Instruct-FP8 (11.80s)
* Slowest Model: Qwen3-235B-A22B (190.76s)
* Most accurate model: DeepSeek-V3-0324 (89.5%)
* Least Accurate model: Llama-4-Scout-17B-16E-Instruct (52.6%)
* All models tested returned valid json on the first try except the bottom 3, which all failed to return valid json after 3 tries (MythoMax-L2-13b-turbo, gemini-2.0-flash-001, gemma-3-4b-it)
I am most surprised by the performance of Llama-4-Maverick-17B-128E-Instruct, which was much faster than any other model while still providing pretty good accuracy.
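A minimal sketch of this kind of form-filling request, hitting an OpenAI-compatible endpoint and asking for a JSON object over a fixed key set (the field names here are illustrative placeholders, not the benchmark's actual form or scoring code):
```
import json
import requests

# Hypothetical subset of the form; the real benchmark form has several dozen fields.
FORM_FIELDS = {
    "full_name": "string or null",
    "date_of_birth": "YYYY-MM-DD or null",
    "employer": "string or null",
    "annual_income": "number or null",
}

def fill_form(text, api_key, model="deepseek-ai/DeepSeek-V3-0324"):
    # Ask for ONLY a JSON object over the fixed key set; temperature 0 for determinism.
    prompt = (
        "Read the text and fill out the form. Respond with ONLY a JSON object "
        "containing exactly these keys:\n"
        + json.dumps(FORM_FIELDS, indent=2)
        + "\nUse null for any field the text does not answer.\n\nTEXT:\n" + text
    )
    resp = requests.post(
        "https://api.deepinfra.com/v1/openai/chat/completions",
        headers={"Authorization": "Bearer " + api_key},
        json={"model": model,
              "messages": [{"role": "user", "content": prompt}],
              "temperature": 0},
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    return json.loads(answer)  # raises ValueError if the model returned invalid JSON

# Accuracy is then just the fraction of keys that match a hand-labelled answer sheet.
```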
| 2025-04-30T00:11:50 |
https://www.reddit.com/gallery/1kb340h
|
gthing
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb340h
| false | null |
t3_1kb340h
|
/r/LocalLLaMA/comments/1kb340h/structured_form_filling_benchmark_results/
| false | false | 12 | null |
|
LM Studio vs Ollama resources usage
| 0 |
Hello all.
Just installed LM Studio to try the new Qwen3 model (since someone says it's bugged in Ollama).
Can someone please advise why LM Studio uses the CPU more extensively (~60%) vs Ollama (~20%) for the same model/model parameters/task?
Model in both cases: Qwen3-30B-A3B Q4_K_M GGUF, 36/48 layers offloaded, Q8 K/V cache, flash attention, 16384 context window.
Maybe I'm missing some configuration?
Also, for some reason Ollama allocates an additional 4.5 GB to shared GPU memory (?)
Thanks for insights!
| 2025-04-30T00:18:04 |
https://www.reddit.com/gallery/1kb38hu
|
alisitsky
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb38hu
| false | null |
t3_1kb38hu
|
/r/LocalLLaMA/comments/1kb38hu/lm_studio_vs_ollama_resources_usage/
| false | false | 0 | null |
|
Technically Correct, Qwen 3 working hard
| 827 | 2025-04-30T00:29:20 |
poli-cya
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb3gox
| false | null |
t3_1kb3gox
|
/r/LocalLLaMA/comments/1kb3gox/technically_correct_qwen_3_working_hard/
| false | false | 827 |
{'enabled': True, 'images': [{'id': 'v5bgTiZG3c-6IRCwu-EwfKV5ENhD3-ag5AoFKOa3FGw', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?width=108&crop=smart&auto=webp&s=be1fc4b183f56400936e22eb7e8b91bc52eeafb5', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?width=216&crop=smart&auto=webp&s=12eda3ed31fd7f1a3b50f173cdce1200f3d37f71', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?width=320&crop=smart&auto=webp&s=ce3877d83e7a51e4c5182a1fefa7375a9ea6d852', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?width=640&crop=smart&auto=webp&s=40a85bf16e35037679b06a0406d1230e4e6050e5', 'width': 640}, {'height': 557, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?width=960&crop=smart&auto=webp&s=73b5ea12b91465fff82bfb4a2850609461e19807', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?width=1080&crop=smart&auto=webp&s=7d1f8d8b57f034992137d3239eea86f3d56e672a', 'width': 1080}], 'source': {'height': 796, 'url': 'https://preview.redd.it/dudbg02v9vxe1.png?auto=webp&s=a564f5c3b0a5ddc6012b113899d5bde7998e6426', 'width': 1371}, 'variants': {}}]}
|
|||
A Low-Cost GPU Hosting Service
| 1 |
[removed]
| 2025-04-30T00:36:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb3m40/a_lowcost_gpu_hosting_service/
|
PrettyRevolution1842
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb3m40
| false | null |
t3_1kb3m40
|
/r/LocalLLaMA/comments/1kb3m40/a_lowcost_gpu_hosting_service/
| false | false |
self
| 1 | null |
test
| 1 |
[removed]
| 2025-04-30T00:48:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb3uao/test/
|
ConnectPea8944
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb3uao
| false | null |
t3_1kb3uao
|
/r/LocalLLaMA/comments/1kb3uao/test/
| false | false |
self
| 1 | null |
Tested 60 Models for use as an AI writing assistant.
| 1 |
[removed]
| 2025-04-30T01:08:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb48oo/tested_60_models_for_use_as_an_ai_writing/
|
Economy-Hippo8351
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb48oo
| false | null |
t3_1kb48oo
|
/r/LocalLLaMA/comments/1kb48oo/tested_60_models_for_use_as_an_ai_writing/
| false | false |
self
| 1 | null |
Help with Inference Server Build Specs
| 1 |
[removed]
| 2025-04-30T01:18:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb4ft0/help_with_inference_server_build_specs/
|
Impossible_Ground_15
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb4ft0
| false | null |
t3_1kb4ft0
|
/r/LocalLLaMA/comments/1kb4ft0/help_with_inference_server_build_specs/
| false | false |
self
| 1 | null |
I benchmarked 24 LLMs x 12 difficult frontend questions. An open weight model tied for first!
| 12 | 2025-04-30T01:23:25 |
https://adamniederer.com/blog/llm-frontend-benchmarks.html
|
vvimpcrvsh
|
adamniederer.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb4jq3
| false | null |
t3_1kb4jq3
|
/r/LocalLLaMA/comments/1kb4jq3/i_benchmarked_24_llms_x_12_difficult_frontend/
| false | false |
default
| 12 | null |
|
Reasoning model with OpenWebUI + LiteLLM + OpenAI compatible API
| 1 |
[removed]
| 2025-04-30T01:23:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb4k07/reasoning_model_with_openwebui_litellm_openai/
|
PalDoPalKaaShaayar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb4k07
| false | null |
t3_1kb4k07
|
/r/LocalLLaMA/comments/1kb4k07/reasoning_model_with_openwebui_litellm_openai/
| false | false |
self
| 1 | null |
What is the best open source LLM for programming? ANYTHING GOES
| 0 |
Which do you think is the best open source LLM to accompany us while programming, from interpreting the idea through to the implementation? It doesn't matter what hardware you have. Simply put: which is the best? I'd bet on a top 3!
I'll be reading your replies.
| 2025-04-30T01:24:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb4k9p/cuál_es_la_mejor_llm_open_source_para_programar/
|
EnvironmentalHelp363
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb4k9p
| false | null |
t3_1kb4k9p
|
/r/LocalLLaMA/comments/1kb4k9p/cuál_es_la_mejor_llm_open_source_para_programar/
| false | false |
self
| 0 | null |
What can my computer run?
| 0 |
Hello all! I'm wanting to run some models on my computer, with the ultimate goal of an STT-model-TTS pipeline that also has access to Python so it can run itself as an automated user.
I'm fine if my computer can't get me there, but I was curious about which LLMs I would be able to run. I just heard about Mistral's MoEs and I was wondering if that would dramatically increase my performance.
Desktop Computer Specs
CPU: Intel Core i9-13900HX
GPU: NVIDIA RTX 4090 (16GB VRAM)
RAM: 96GB
Model: Lenovo Legion Pro 7i Gen 8
| 2025-04-30T01:26:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb4m0o/what_can_my_computer_run/
|
LyAkolon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb4m0o
| false | null |
t3_1kb4m0o
|
/r/LocalLLaMA/comments/1kb4m0o/what_can_my_computer_run/
| false | false |
self
| 0 | null |
China's Huawei develops new AI chip, seeking to match Nvidia, WSJ reports
| 73 | 2025-04-30T02:00:29 |
https://www.cnbc.com/2025/04/27/chinas-huawei-develops-new-ai-chip-seeking-to-match-nvidia-wsj-reports.html
|
fallingdowndizzyvr
|
cnbc.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb59p2
| false | null |
t3_1kb59p2
|
/r/LocalLLaMA/comments/1kb59p2/chinas_huawei_develops_new_ai_chip_seeking_to/
| false | false | 73 |
{'enabled': False, 'images': [{'id': 'baPp8quajZK3_uik9bIwkvFzwItK6D8dCYwfwLrD128', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?width=108&crop=smart&auto=webp&s=2da9db5e4dcf79daf639d8020d679bbd56aece68', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?width=216&crop=smart&auto=webp&s=36262d418e469607d13ca1fd39bedc99b2cc6c70', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?width=320&crop=smart&auto=webp&s=318f1519cf193b7f71eb4701561bc37e07845336', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?width=640&crop=smart&auto=webp&s=101f90eb751f4582a08ef73c287e17ff0e2987c8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?width=960&crop=smart&auto=webp&s=7d15cf897d67dace9979a80552d0c0582cb9b644', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?width=1080&crop=smart&auto=webp&s=5092d69e78b39613eff5b4d5dce17f2705eeff29', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/_MhQV-xUqiO3Y0IIwXyN_CLCzP6Uu0GLDq4JaPxeNyI.jpg?auto=webp&s=9c419197597316c1dd274fe676ebb36e469be261', 'width': 1920}, 'variants': {}}]}
|
||
Why are people rushing to programming frameworks for agents?
| 13 |
I might be off by a few digits, but I think every day there are about ~6.7 agent SDKs and frameworks that get released. And I humbly don't get the mad rush to a framework. I would rather rush to strong mental frameworks that help us build and eventually take these things into production.
Here's the thing: I don't think it's a bad thing to have programming abstractions to improve developer productivity, but I think having a mental model of what's "business logic" vs. "low level" platform capabilities is a far better way to go about picking the right abstractions to work with. This puts the focus back on "what problems are we solving" and "how should we solve them in a durable way?"
For example, lets say you want to be able to run an A/B test between two LLMs for live chat traffic. How would you go about that in LangGraph or LangChain?
|Challenge|Description|
|:-|:-|
|🔁 Repetition|Every node must read `state["model_choice"]` and handle both models manually|
|❌ Hard to scale|Adding a new model (e.g., Mistral) means touching every node again|
|🤝 Inconsistent behavior risk|A mistake in one node can break the consistency (e.g., call the wrong model)|
|🧪 Hard to analyze|You’ll need to log the model choice in every flow and build your own comparison infra|
Yes, you can wrap model calls. But now you're rebuilding the functionality of a proxy — inside your application. You're now responsible for routing, retries, rate limits, logging, A/B policy enforcement, and traceability - in a global way that cuts across multiple instances of your agents. And if you ever want to experiment with routing logic, say add a new model, you need a full redeploy.
We need the right building blocks and infrastructure capabilities if we are to build more than a shiny demo. We need a focus on mental frameworks, not just programming frameworks.
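A minimal sketch of the repetition problem and of one alternative, pulling the A/B choice into a single routing function (the model names and the `call_model` helper are assumptions for illustration, not any specific framework's or proxy's API):
```
import random

def call_model(model, prompt):
    # Stand-in for a real client call (e.g. an OpenAI-compatible /chat/completions request).
    return "[" + model + "] reply to: " + prompt[:40]

# The anti-pattern: every node re-reads the choice and branches on both models itself.
def summarize_node(state):
    if state["model_choice"] == "gpt-4o":
        out = call_model("gpt-4o", state["text"])
    else:
        out = call_model("qwen3-30b-a3b", state["text"])
    return {**state, "summary": out}

# One alternative: a single routing function owns the split, logging and retries,
# so adding a third model means touching one place instead of every node.
AB_SPLIT = {"gpt-4o": 0.5, "qwen3-30b-a3b": 0.5}

def route(prompt, session_id):
    model = random.choices(list(AB_SPLIT), weights=list(AB_SPLIT.values()), k=1)[0]
    print("session=%s routed_to=%s" % (session_id, model))  # stand-in for real traceability
    return call_model(model, prompt)

if __name__ == "__main__":
    print(summarize_node({"model_choice": "gpt-4o", "text": "Summarize our return policy."}))
    print(route("Summarize our return policy.", session_id="abc123"))
```
The point stands either way: once routing lives in one place (in-process or behind a proxy), experimenting with the split no longer means editing every node and redeploying.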
| 2025-04-30T02:02:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb5bjh/why_are_people_rushing_to_programming_frameworks/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb5bjh
| false | null |
t3_1kb5bjh
|
/r/LocalLLaMA/comments/1kb5bjh/why_are_people_rushing_to_programming_frameworks/
| false | false |
self
| 13 | null |
GitHub - abstract-agent: Locally hosted AI Agent Python Tool To Generate Novel Research Hypothesis + Abstracts
| 36 |
## What is abstract-agent?
It's an easily extendable multi-agent system that:
- Generates research hypotheses, abstracts, and references
- Runs 100% locally using Ollama LLMs
- Pulls from public sources like arXiv, Semantic Scholar, PubMed, etc.
- No API keys. No cloud. Just you, your GPU/CPU, and public research.
## Key Features
* **Multi-agent pipeline:** Different agents handle breakdown, critique, synthesis, innovation, and polishing
* **Public research sources:** Pulls from arXiv, Semantic Scholar, EuropePMC, Crossref, DOAJ, bioRxiv, medRxiv, OpenAlex, PubMed
* **Research evaluation:** Scores, ranks, and summarizes literature
* **Local processing:** Uses Ollama for summarization and novelty checks
* **Human-readable output:** Clean, well-formatted panel with stats and insights
## Example Output
Here's a sample of what the tool produces:
```
Pipeline 'Research Hypothesis Generation' Finished in 102.67s
Final Results Summary
----- FINAL HYPOTHESIS STRUCTURED -----
This research introduces a novel approach to Large Language Model (LLM) compression predicated on Neuro-Symbolic Contextual Compression. We propose a system that translates LLM attention maps into a discrete, graph-based representation, subsequently employing a learned graph pruning algorithm to remove irrelevant nodes while preserving critical semantic relationships. Unlike existing compression methods focused on direct neural manipulation, this approach leverages the established techniques of graph pruning, offering potentially significant gains in model size and efficiency. The integration of learned pruning, adapting to specific task and input characteristics, represents a fundamentally new paradigm for LLM compression, moving beyond purely neural optimizations.
----- NOVELTY ASSESSMENT -----
**Novelty Score: 7/10**
**Reasoning:**
This hypothesis demonstrates a moderate level of novelty, primarily due to the specific combination of techniques and the integration of neuro-symbolic approaches. Let's break down the assessment:
* **Elements of Novelty (Strengths):**
* **Neuro-Symbolic Contextual Compression:** The core idea of translating LLM attention maps into a discrete, graph-based representation *is* a relatively new area of exploration. While graph pruning exists, applying it specifically to the output of LLM attention maps – and framing it within a neuro-symbolic context – is a distinctive aspect.
* **Learned Graph Pruning:** The explicit mention of a *learned* graph pruning algorithm elevates the novelty. Many pruning methods are static, whereas learning the pruning criteria based on task and input characteristics is a significant step forward.
* **Integration of Graph Pruning with LLMs:** While graph pruning is used in other domains, its application to LLMs, particularly in this way, is not widely established.
* **Elements Limiting Novelty (Weaknesses):**
* **Graph Pruning is Not Entirely New:** As highlighted in Paper 1, graph pruning techniques exist in general. The core concept of pruning nodes based on importance is well-established.
* **Related Work Exists:** Several papers (Papers 2, 3, 4, 5, 6, 7) address aspects of model compression, including quantization, sparsity, and dynamic budgets. While the *combination* is novel, the individual components are not. Paper 7's "thinking step-by-step compression" is particularly relevant, even though it uses a different framing (dynamic compression of reasoning steps).
* **Fine-grained vs. Coarse-grained:** The hypothesis positions itself against "coarse-grained" methods (Paper 1). However, many current compression techniques are moving towards finer-grained approaches.
**Justification for the Score:**
A score of 7 reflects that the hypothesis presents a novel *approach* rather than a completely new concept. The combination of learned graph pruning with attention maps represents a worthwhile exploration. However, it's not a revolutionary breakthrough because graph pruning itself isn't entirely novel, and the field is already actively investigating various compression strategies.
**Recommendations for Strengthening the Hypothesis:**
* **Quantify the Expected Gains:** Adding specific claims about the expected reduction in model size and efficiency would strengthen the hypothesis.
* **Elaborate on the "Neuro-Symbolic" Aspect:** Provide more detail on how the discrete graph representation represents the underlying semantic relationships within the LLM.
* **Highlight the Advantage over Existing Methods:** Clearly articulate *why* this approach is expected to be superior to existing techniques (e.g., in terms of accuracy, speed, or ease of implementation).
```
## How to Get Started
1. Clone the repo:
```
git clone https://github.com/tegridydev/abstract-agent
cd abstract-agent
```
2. Install dependencies:
```
pip install -r requirements.txt
```
3. Install Ollama and pull a model:
```
ollama pull gemma3:4b
```
4. Run the agent:
```
python agent.py
```
## The Agent Pipeline (Think Lego Blocks)
* **Agent A:** Breaks down your topic into core pieces
* **Agent B:** Roasts the literature, finds gaps and trends
* **Agent C:** Synthesizes new directions
* **Agent D:** Goes wild, generates bold hypotheses
* **Agent E:** Polishes, references, and scores the final abstract
* **Novelty Check:** Verifies if the hypothesis is actually new or just recycled
## Dependencies
* ollama
* rich
* arxiv
* requests
* xmltodict
* pydantic
* pyyaml
No API keys needed - all sources are public.
## How to Modify
* Edit `agents_config.yaml` to change the agent pipeline, prompts, or personas
* Add new sources in `multi_source.py`
Enjoy xo
| 2025-04-30T02:18:56 |
https://github.com/tegridydev/abstract-agent
|
tegridyblues
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb5mpt
| false | null |
t3_1kb5mpt
|
/r/LocalLLaMA/comments/1kb5mpt/github_abstractagent_locally_hosted_ai_agent/
| false | false | 36 |
{'enabled': False, 'images': [{'id': 'on-RdDxDSAkrKuF45duxbg0K9XfqRKF4ZFr_DSrOVzw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?width=108&crop=smart&auto=webp&s=f62cb510ba76ce91cad3c829bd3b5a5ceedc1ec2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?width=216&crop=smart&auto=webp&s=b3f8001bc62e0d9233bca2568cdec4cf4ccbf27a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?width=320&crop=smart&auto=webp&s=23f4acbd3b68e02f032683c95979c9b386dc2bcc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?width=640&crop=smart&auto=webp&s=d57cebf4713050ed1b1250289da76ce371ef0d94', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?width=960&crop=smart&auto=webp&s=3def6d54a1ff238e2bc2a2ad084aa5f57681d7cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?width=1080&crop=smart&auto=webp&s=8bc4482154fbb043531ad22f58647e55f4cab27d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WJZ0XqkZhdzNGhGF5-ZIuuXmKLEO95PCNRVAy0xlrWk.jpg?auto=webp&s=9f096a9a607d8f29cc88e677af4e71d09e7a4119', 'width': 1200}, 'variants': {}}]}
|
|
Thoughts on Mistral.rs
| 90 |
Hey all! I'm the developer of [mistral.rs](https://github.com/EricLBuehler/mistral.rs), and I wanted to gauge community interest and feedback.
Do you use mistral.rs? Have you heard of mistral.rs?
Please let me know! I'm open to any feedback.
| 2025-04-30T02:31:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb5v6h/thoughts_on_mistralrs/
|
EricBuehler
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb5v6h
| false | null |
t3_1kb5v6h
|
/r/LocalLLaMA/comments/1kb5v6h/thoughts_on_mistralrs/
| false | false |
self
| 90 |
{'enabled': False, 'images': [{'id': '3G5NI2Po7y_metOyK_idJg5FkRIGKpGczOtJeYBUMRg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?width=108&crop=smart&auto=webp&s=d52c4e1c08b5b579f9125128770275e8b8950ade', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?width=216&crop=smart&auto=webp&s=1e4c923040711a57f60f97205aafcebde5f6c8c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?width=320&crop=smart&auto=webp&s=d21123c7da139b5f470d0eb52a4dac499bd8c286', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?width=640&crop=smart&auto=webp&s=31daca98f8a6b1583db4708350ba1826698757f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?width=960&crop=smart&auto=webp&s=4eb8d68a65120593f4ce3c27cc6c32d29ecc285b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?width=1080&crop=smart&auto=webp&s=7a5b9f9ca793697b505d8ca7e82e143103fa416c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xE_zDZLfbdDc4RjoH-YkCS74GJhIOcLFz9Jvp6brSu4.jpg?auto=webp&s=a50d3a87060d06007fe0064ff2f5463a7073a18a', 'width': 1200}, 'variants': {}}]}
|
New study from Cohere shows Lmarena (formerly known as Lmsys Chatbot Arena) is heavily rigged against smaller open source model providers and favors big companies like Google, OpenAI and Meta
| 1 |
[removed]
| 2025-04-30T02:44:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb64ll/new_study_from_cohere_shows_lmarena_formerly/
|
obvithrowaway34434
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb64ll
| false | null |
t3_1kb64ll
|
/r/LocalLLaMA/comments/1kb64ll/new_study_from_cohere_shows_lmarena_formerly/
| false | false | 1 | null |
|
New study from Cohere shows Lmarena (formerly known as Lmsys Chatbot Arena) is heavily rigged against smaller open source model providers and favors big companies like Google, OpenAI and Meta
| 1 |
[removed]
| 2025-04-30T02:46:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb65q4/new_study_from_cohere_shows_lmarena_formerly/
|
obvithrowaway34434
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb65q4
| false | null |
t3_1kb65q4
|
/r/LocalLLaMA/comments/1kb65q4/new_study_from_cohere_shows_lmarena_formerly/
| false | false |
self
| 1 | null |
New study from Cohere shows Lmarena (formerly known as Lmsys Chatbot Arena) is heavily rigged against smaller open source model providers and favors big companies like Google, OpenAI and Meta
| 1 |
[removed]
| 2025-04-30T02:49:56 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb68bq
| false | null |
t3_1kb68bq
|
/r/LocalLLaMA/comments/1kb68bq/new_study_from_cohere_shows_lmarena_formerly/
| false | false |
default
| 1 | null |
||
Cohere just insulted L4 in a paper!
| 1 |
[removed]
| 2025-04-30T02:51:21 |
https://arxiv.org/abs/2504.20879
|
Ornery_Local_6814
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb69d2
| false | null |
t3_1kb69d2
|
/r/LocalLLaMA/comments/1kb69d2/cohere_just_insulted_l4_in_a_paper/
| false | false |
default
| 1 | null |
Which version of Qwen 3 should I use?
| 6 |
Looking to make the switch from Phi-4 to Qwen3 for running on my laptop. I have an Intel Core Ultra 5 125U and 16GB of system RAM, 8GB of which is dedicated to VRAM for the iGPU. Is the drop from Qwen3 14B Q8 to Qwen3 8B Q6_K_XL worth the increase in inference speed from running the 8B on the iGPU?
| 2025-04-30T02:54:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb6b60/which_version_of_qwen_3_should_i_use/
|
yeet5566
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb6b60
| false | null |
t3_1kb6b60
|
/r/LocalLLaMA/comments/1kb6b60/which_version_of_qwen_3_should_i_use/
| false | false |
self
| 6 | null |
New study from Cohere shows Lmarena (formerly known as Lmsys Chatbot Arena) is heavily rigged against smaller open source model providers and favors big companies like Google, OpenAI and Meta
| 491 |
* Meta tested over 27 private variants, and Google tested 10, to select the best-performing one.
* OpenAI and Google get the majority of data from the arena (~40%).
* All closed source providers get more frequently featured in the battles.
Paper: [https://arxiv.org/abs/2504.20879](https://arxiv.org/abs/2504.20879)
| 2025-04-30T02:54:14 |
https://www.reddit.com/gallery/1kb6bbl
|
obvithrowaway34434
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb6bbl
| false | null |
t3_1kb6bbl
|
/r/LocalLLaMA/comments/1kb6bbl/new_study_from_cohere_shows_lmarena_formerly/
| false | false | 491 | null |
|
Recommendation for tiny model: targeted contextually aware text correction
| 2 |
Are there any 'really tiny' models that I can ideally run on CPU, that would be suitable for performing contextual correction of targeted STT errors - mainly product, company names? Most of the high quality STT services now offer an option to 'boost' specific vocabulary. This works well in Google, Whisper, etc. But there are many services that still do not, and while this helps, it will never be a silver bullet.
OTOH all the larger LLMs - open and closed - do a very good job with this, with a prompt like "check this transcript and look for likely instances where IBM was mistranscribed" or something like that. Most recent release LLMs do a great job at correctly identifying and fixing examples like "and here at Ivan we build cool technology". The problem is that this is too expensive and too slow for correction in a live transcript.
I'm looking for recommendations, either existing models that might fit the bill (ideal obviously) or a clear verdict that I need to take matters into my own hands.
I'm looking for a small model - of any provenance - where I could ideally run it on CPU, feed it short texts - think 1-3 turns in a conversation - with a short list of "targeted words and phrases" which it will make contextually sensible corrections on. If our list here is ["IBM", "Google"], and we have an input, "Here at Ivan we build cool software", this should be corrected. But "Our new developer Ivan ..." should not.
I'm using a procedurally driven Regex solution at the moment, and I'd like to improve on it but not break the compute bank. OSS projects, github repos, papers, general thoughts - all welcome.
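A minimal sketch of what I mean, assuming a small model served locally through Ollama's HTTP API (the model tag and prompt wording are placeholders, and the regex pass would still handle anything the model leaves alone):
```
import requests

TARGET_TERMS = ["IBM", "Google"]  # the "boost" vocabulary

PROMPT = (
    "You fix speech-to-text errors. Only correct words that are likely "
    "mis-transcriptions of these terms: {terms}. Leave everything else, "
    "including real names like 'Ivan', untouched. Return only the corrected text.\n\n"
    "Transcript: {text}"
)

def correct(text, model="qwen2.5:0.5b"):
    # One short chunk (1-3 turns) per call; a local Ollama server is assumed at the default port.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": PROMPT.format(terms=", ".join(TARGET_TERMS), text=text),
            "stream": False,
            "options": {"temperature": 0},
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    print(correct("and here at Ivan we build cool technology"))  # should become IBM
    print(correct("Our new developer Ivan joined last week"))    # should stay Ivan
```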
| 2025-04-30T03:37:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb73rj/recommendation_for_tiny_model_targeted/
|
blackkettle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb73rj
| false | null |
t3_1kb73rj
|
/r/LocalLLaMA/comments/1kb73rj/recommendation_for_tiny_model_targeted/
| false | false |
self
| 2 | null |
Xiaomi MiMo - MiMo-7B-RL
| 53 |
[https://huggingface.co/XiaomiMiMo/MiMo-7B-RL](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL)
**Short Summary by Qwen3-30B-A3B:**
This work introduces *MiMo-7B*, a series of reasoning-focused language models trained from scratch, demonstrating that small models can achieve exceptional mathematical and code reasoning capabilities, even outperforming larger 32B models. Key innovations include:
* **Pre-training optimizations**: Enhanced data pipelines, multi-dimensional filtering, and a three-stage data mixture (25T tokens) with *Multiple-Token Prediction* for improved reasoning.
* **Post-training techniques**: Curated 130K math/code problems with rule-based rewards, a difficulty-driven code reward for sparse tasks, and data re-sampling to stabilize RL training.
* **RL infrastructure**: A *Seamless Rollout Engine* accelerates training/validation by 2.29×/1.96×, paired with robust inference support. MiMo-7B-RL matches OpenAI’s o1-mini on reasoning tasks, with all models (base, SFT, RL) open-sourced to advance the community’s development of powerful reasoning LLMs.
https://preview.redd.it/rhbeynh1awxe1.png?width=714&format=png&auto=webp&s=78ac27cfa4b73b3fcc1cb591f7a1a7b314700ec2
| 2025-04-30T03:53:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb7dqt/xiaomi_mimo_mimo7brl/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb7dqt
| false | null |
t3_1kb7dqt
|
/r/LocalLLaMA/comments/1kb7dqt/xiaomi_mimo_mimo7brl/
| false | false | 53 |
{'enabled': False, 'images': [{'id': 'wIhw_T1eYpOrTTgf9KeJChYKEO3hnhfLW-snI7rvPlg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?width=108&crop=smart&auto=webp&s=2440950f4120b07904df57b228bd4cc60cd61121', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?width=216&crop=smart&auto=webp&s=398bfe6534bd8b27bb3fbc51e2a8931485ce4fee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?width=320&crop=smart&auto=webp&s=4ee4c4ff845cbb5ac1b211d6963534502ffe928f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?width=640&crop=smart&auto=webp&s=a19f7779daa5666d3a2ff3dafca48ef50e0c785e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?width=960&crop=smart&auto=webp&s=d75891b2c8c645c2b12b2a89c102b361c3f5c495', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?width=1080&crop=smart&auto=webp&s=0c4cce85868d1707d3e6e95a9e4db61df572a91d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QhG2hxjpDpBYc1nPejzCvz7HJ-lDjRJrjJUX5Q5zU8w.jpg?auto=webp&s=e1ef84285150b4f97c1f00a6c5b0798b8377a2df', 'width': 1200}, 'variants': {}}]}
|
|
We haven’t seen a new open SOTA performance model in ages.
| 0 |
As the title says: many cost-efficient models have been released claiming R1-level performance, but the absolute performance frontier just stays put, much like it did when GPT-4-level was the ceiling. I thought Qwen3 might break through, but as you'll see, it's yet another, smaller R1-level model.
| 2025-04-30T04:06:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb7lsl/we_havent_seen_a_new_open_sota_performance_model/
|
Key_Papaya2972
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb7lsl
| false | null |
t3_1kb7lsl
|
/r/LocalLLaMA/comments/1kb7lsl/we_havent_seen_a_new_open_sota_performance_model/
| false | false |
self
| 0 | null |
OpenRouter Qwen3 does not have tool support
| 9 |
As the title states... is it just me, or is tool support really missing?
| 2025-04-30T04:26:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb7y8b/openrouter_qwen3_does_not_have_tool_support/
|
klippers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb7y8b
| false | null |
t3_1kb7y8b
|
/r/LocalLLaMA/comments/1kb7y8b/openrouter_qwen3_does_not_have_tool_support/
| false | false |
self
| 9 | null |
QWEN3:30B on M1
| 3 |
Hey ladies and gents, Happy Wed!
I've seen a couple of posts about running qwen3:30B on a Raspberry Pi box, and I can't even run the 14B at Q8 on an M1 laptop! Can you guys please explain it to me like I'm 5? I'm new to this. Is there some setting to adjust? I'm using Ollama with Open WebUI. Thank you in advance.
| 2025-04-30T05:00:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb8hvc/qwen330b_on_m1/
|
dadgam3r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb8hvc
| false | null |
t3_1kb8hvc
|
/r/LocalLLaMA/comments/1kb8hvc/qwen330b_on_m1/
| false | false |
self
| 3 | null |
Yo'Chameleon: Personalized Vision and Language Generation
| 5 | 2025-04-30T05:17:00 |
https://github.com/thaoshibe/YoChameleon
|
ninjasaid13
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb8qwd
| false | null |
t3_1kb8qwd
|
/r/LocalLLaMA/comments/1kb8qwd/yochameleon_personalized_vision_and_language/
| false | false | 5 |
{'enabled': False, 'images': [{'id': '9L5WHrlSL9FnLmNuLqputxl5t1OfxGGWN7Ab7OQANAI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?width=108&crop=smart&auto=webp&s=2a028f83439cc71e566d63596f7d7871f1e2cb21', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?width=216&crop=smart&auto=webp&s=cc7bda4e609274cb497efb63836980f6b3a3ef2b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?width=320&crop=smart&auto=webp&s=b1e761e9ddcad1d31d164afc25732310f94a3937', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?width=640&crop=smart&auto=webp&s=ff45dd64fed545d75480c1e22e8adbdab7077e33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?width=960&crop=smart&auto=webp&s=eb48aa577d44af9e794fc45b45ffe98d8c06880e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?width=1080&crop=smart&auto=webp&s=847cde39e1b535ebb37ef490ba758df614605b14', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hwiViFGQPiJj_FbNAWP7F4-TOPIurD16u-g7ymY2Lf8.jpg?auto=webp&s=1d32d59966baa9202263d5432a42622b1d56f83d', 'width': 1200}, 'variants': {}}]}
|
||
Llamacpp is FAST
| 1 |
[removed]
| 2025-04-30T05:17:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb8rcl/llamacpp_is_fast/
|
Linkpharm2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb8rcl
| false | null |
t3_1kb8rcl
|
/r/LocalLLaMA/comments/1kb8rcl/llamacpp_is_fast/
| false | false |
self
| 1 | null |
3090 power efficiency
| 1 |
[removed]
| 2025-04-30T05:19:21 |
Linkpharm2
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb8s8b
| false | null |
t3_1kb8s8b
|
/r/LocalLLaMA/comments/1kb8s8b/3090_power_efficiency/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '1aYjPi1BbSWDLSKciKqCJBk21a5lDwVW-C5JHC8mAlg', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?width=108&crop=smart&auto=webp&s=9b4584af14e42e099b061c1f1ea0ca852aa07cf2', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?width=216&crop=smart&auto=webp&s=572b215697a281371b8d2981655f40357b372a19', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?width=320&crop=smart&auto=webp&s=f3569bb5aa12ffd862fee6997a4b1bbcce72d12e', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?width=640&crop=smart&auto=webp&s=2a1386707f3f04df77601a78a15ba01bc26df4a0', 'width': 640}, {'height': 572, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?width=960&crop=smart&auto=webp&s=cd81b0c021a357d14d8242300f157f356679bc02', 'width': 960}, {'height': 643, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?width=1080&crop=smart&auto=webp&s=ba4b69deb98584020a396864cd3ccb657b0a2866', 'width': 1080}], 'source': {'height': 1180, 'url': 'https://preview.redd.it/ew7mbymmpwxe1.png?auto=webp&s=bc6a5861f1957c9e747c4765a8e3562bcc366133', 'width': 1980}, 'variants': {}}]}
|
||
Is it just me or is Qwen3-235B bad at coding?
| 12 |
Don't get me wrong, the multilingual capabilities have surpassed Google's Gemma, which was my go-to for Indic languages - Qwen now handles them with amazing accuracy - but it really seems to struggle with coding.
I was having a blast with DeepSeek-V3 for creating three.js-based simulations, which it was zero-shotting like it was nothing, and the best part was that I could verify the result in the artifact preview on the official website.
But Qwen3 is really struggling, and even with reasoning and artifact mode enabled it wasn't able to get it right.
Eg. Prompt
"A threejs based projectile simulation for kids to understand
Give output in a single html file"
Is anyone else facing the same issue with coding?
| 2025-04-30T05:19:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb8sh4/is_it_just_me_or_is_qwen3235b_is_bad_at_coding/
|
maayon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb8sh4
| false | null |
t3_1kb8sh4
|
/r/LocalLLaMA/comments/1kb8sh4/is_it_just_me_or_is_qwen3235b_is_bad_at_coding/
| false | false |
self
| 12 | null |
DFloat11: Lossless LLM Compression for Efficient GPU Inference
| 54 | 2025-04-30T05:31:23 |
https://github.com/LeanModels/DFloat11
|
ninjasaid13
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb8yyw
| false | null |
t3_1kb8yyw
|
/r/LocalLLaMA/comments/1kb8yyw/dfloat11_lossless_llm_compression_for_efficient/
| false | false |
default
| 54 | null |
|
ubergarm/Qwen3-235B-A22B-GGUF over 140 tok/s PP and 10 tok/s TG quant for gaming rigs!
| 82 |
Just cooked up an experimental ik_llama.cpp-exclusive 3.903 BPW quant blend for Qwen3-235B-A22B that delivers good quality and speed on a high-end gaming rig, fitting the full 32k context in under 120 GB (V)RAM, e.g. 24GB VRAM + 2x48GB DDR5 RAM.
Just benchmarked over 140 tok/s prompt processing and 10 tok/s generation on my 3090TI FE + AMD 9950X 96GB RAM DDR5-6400 gaming rig (see comment for graph).
Keep in mind this quant is *not* supported by mainline llama.cpp, ollama, koboldcpp, LM Studio etc. I'm not releasing mainline-compatible versions, as mainstream-quality quants are available from bartowski, unsloth, mradermacher, et al.
| 2025-04-30T05:47:24 |
https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF
|
VoidAlchemy
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb97ys
| false | null |
t3_1kb97ys
|
/r/LocalLLaMA/comments/1kb97ys/ubergarmqwen3235ba22bgguf_over_140_toks_pp_and_10/
| false | false | 82 |
{'enabled': False, 'images': [{'id': 'zZ1fvxddUNedUGZkzI0jI2J8CYoFosxPI-t-qJjx_Lo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?width=108&crop=smart&auto=webp&s=81eec49fc2cdf7cf990b43e019f0267b7700bf1d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?width=216&crop=smart&auto=webp&s=deee0c60670a183098d97084d830b8282e538a41', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?width=320&crop=smart&auto=webp&s=b39b920e3eba70ff47ec025b0ec2e9d592d84d16', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?width=640&crop=smart&auto=webp&s=b11d509cf500d20e1a15b7a569f5bf8fcc448a5f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?width=960&crop=smart&auto=webp&s=9339f810c51d415f8afbc22159be42a0c1ff058a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?width=1080&crop=smart&auto=webp&s=fd1eca751e56a0dfeca4ec08746c69c1382fd1b4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cYA9q4-I6muEuEtTYSLVR3CPWT7qR_y4KkbLTSvSOXY.jpg?auto=webp&s=580689b7ac9bb2fc884ea7c431c990a5d95ed0c4', 'width': 1200}, 'variants': {}}]}
|
|
What is the performance difference between 12GB and 16GB of VRAM when the system still needs to use additional RAM?
| 2 |
I've [experimented](https://github.com/donatas-xyz/AI/discussions/1) a fair bit with local LLMs, but I can't find a definitive answer on the performance gains from upgrading from a 12GB GPU to a 16GB GPU when the system RAM is still being used in both cases. What's the theory behind it?
For example, I can fit 32B FP16 models in 12GB VRAM + 128GB RAM and achieve around 0.5 t/s. Would upgrading to 16GB VRAM make a noticeable difference? If the performance increased to 1.0 t/s, that would be significant, but if it only went up to 0.6 t/s, I doubt it would matter much.
I value quality over performance, so reducing the model's accuracy doesn't sit well with me. However, if an additional 4GB of VRAM would noticeably boost the existing performance, I would consider it.
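A rough back-of-envelope way to think about it, treating generation as memory-bandwidth-bound and splitting each token's weight reads between VRAM and system RAM (all numbers below are illustrative assumptions, and KV cache/overhead are ignored):
```
# Rough, bandwidth-bound estimate of tokens/sec when a model is split between
# VRAM and system RAM. Every number here is an illustrative assumption.

MODEL_GB   = 64.0   # ~32B params at FP16 (2 bytes/param)
GPU_BW_GBS = 500.0  # assumed effective GPU memory bandwidth
CPU_BW_GBS = 60.0   # assumed effective dual-channel system RAM bandwidth

def tokens_per_sec(vram_gb):
    # Each generated token reads every weight once; total time is the GPU part plus the CPU part.
    on_gpu = min(vram_gb, MODEL_GB)
    on_cpu = MODEL_GB - on_gpu
    return 1.0 / (on_gpu / GPU_BW_GBS + on_cpu / CPU_BW_GBS)

for vram in (12, 16):
    print("%d GB VRAM -> ~%.2f t/s" % (vram, tokens_per_sec(vram)))
```
Under these assumptions the system-RAM term dominates the per-token time, so the extra 4 GB of VRAM only moves the estimate from about 1.1 to 1.2 t/s, i.e. closer to a 0.5 to 0.6 style jump than to a doubling.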
| 2025-04-30T06:11:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb9l7t/what_is_the_performance_difference_between_12gb/
|
donatas_xyz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb9l7t
| false | null |
t3_1kb9l7t
|
/r/LocalLLaMA/comments/1kb9l7t/what_is_the_performance_difference_between_12gb/
| false | false |
self
| 2 | null |
llama-cpp-python 0.3.8 layers assigned to device CPU in Google Colab despite n_gpu_layers > 0
| 1 |
[removed]
| 2025-04-30T06:17:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb9ojj/llamacpppython_038_layers_assigned_to_device_cpu/
|
DeZenEffect
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb9ojj
| false | null |
t3_1kb9ojj
|
/r/LocalLLaMA/comments/1kb9ojj/llamacpppython_038_layers_assigned_to_device_cpu/
| false | false |
self
| 1 | null |
How does Qwen3 natively support MCP? Is it different from conventional Function Calling implementations?
| 1 |
[removed]
| 2025-04-30T06:18:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kb9os1/how_does_qwen3_natively_support_mcp_is_it/
|
Constant_Fox_6322
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kb9os1
| false | null |
t3_1kb9os1
|
/r/LocalLLaMA/comments/1kb9os1/how_does_qwen3_natively_support_mcp_is_it/
| false | false |
self
| 1 | null |
qwen3:32b Add crime to the game Monopoly...
| 1 |
[removed]
| 2025-04-30T06:40:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kba01a/qwen332b_add_crime_to_the_game_monopoly/
|
nameless_0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kba01a
| false | null |
t3_1kba01a
|
/r/LocalLLaMA/comments/1kba01a/qwen332b_add_crime_to_the_game_monopoly/
| false | false |
self
| 1 | null |
Performance Qwen3 30BQ4 and 235B Unsloth DQ2 on MBP M4 Max 128GB
| 11 |
So I was wondering what performance I could get out of a MacBook Pro M4 Max 128GB:
- LM Studio Qwen3 30B Q4 MLX: 100 tokens/s
- LM Studio Qwen3 30B Q4 GGUF: 65 tokens/s
- LM Studio Qwen3 235B Unsloth DQ2: 2 tokens/s?
So I tried llama-server with the same models: the 30B ran at the same speed as LM Studio, but the 235B went up to 20 t/s! So it's starting to become usable... but...
In general I'm impressed with the speed and with general questions, like why the sky is blue... but they all fail the heptagon-with-20-balls test, producing either non-working code or, with llama-server, eventually starting to repeat itself... both the 30B and the 235B?!
| 2025-04-30T07:04:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbacf2/performance_qwen3_30bq4_and_235b_unsloth_dq2_on/
|
Careless_Garlic1438
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbacf2
| false | null |
t3_1kbacf2
|
/r/LocalLLaMA/comments/1kbacf2/performance_qwen3_30bq4_and_235b_unsloth_dq2_on/
| false | false |
self
| 11 | null |
Honestly, THUDM might be the new star on the horizon (creators of GLM-4)
| 206 |
I've read many comments here saying that THUDM/GLM-4-32B-0414 is better than the latest Qwen 3 models, and I have to agree. The 9B is also very good and fits in just 6 GB VRAM at IQ4_XS. These GLM-4 models have crazy efficient attention (less VRAM usage for context than any other model I've tried).
It does better in my tests, I like its personality and writing style more, and imo it also codes better.
I didn't expect these pretty unknown model creators to beat Qwen 3, to be honest, so if they keep it up they might have a chance to become the next DeepSeek.
There's nice room for improvement, like native multimodality, hybrid reasoning and better multilingual support (it leaks Chinese characters sometimes, sadly).
What are your experiences with these models?
| 2025-04-30T07:07:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbaecl/honestly_thudm_might_be_the_new_star_on_the/
|
dampflokfreund
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbaecl
| false | null |
t3_1kbaecl
|
/r/LocalLLaMA/comments/1kbaecl/honestly_thudm_might_be_the_new_star_on_the/
| false | false |
self
| 206 | null |
Using AI to find nodes and edges by scraping info of a real world situation.
| 1 |
Hi, I'm working on making a graph that describes the various forces at play. However, doing this manually, and finding all possible influencing factors and figuring out edges is becoming cumbersome.
I'm inexperienced when it comes to using AI, but it seems my work would benefit greatly if I could learn. The end goal is to set up a system that scrapes documents and the web to figure out these relations and produces a graph.
How do I get there? What do I learn and work on? Also, if there are any tools that can do this as a "black box" for now, I'd really appreciate that.
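A minimal "black box" sketch of the extraction step, assuming a local model behind Ollama's HTTP API and networkx for the graph (the model tag, prompt, and JSON shape are placeholders; a real pipeline would also need chunking, deduplication, and entity normalization):
```
import json
import requests
import networkx as nx  # pip install networkx

def build_prompt(text):
    schema = '{"edges": [{"source": "...", "relation": "...", "target": "..."}]}'
    return (
        "Extract cause/influence relations from the text below. "
        "Respond with ONLY a JSON object of the form " + schema + ".\n\nTEXT:\n" + text
    )

def extract_edges(text, model="gemma3:4b"):
    # One scraped document (or chunk) per call; a local Ollama server is assumed.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": build_prompt(text), "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    raw = resp.json()["response"]
    # Parse the first JSON object and ignore any preamble or fences around it.
    obj, _ = json.JSONDecoder().raw_decode(raw[raw.find("{"):])
    return obj["edges"]

def build_graph(documents):
    g = nx.DiGraph()
    for doc in documents:
        for e in extract_edges(doc):
            g.add_edge(e["source"], e["target"], relation=e["relation"])
    return g

if __name__ == "__main__":
    g = build_graph(["Rising interest rates reduce housing demand, which lowers construction activity."])
    print(list(g.edges(data=True)))
```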
| 2025-04-30T07:08:50 |
https://www.reddit.com/gallery/1kbaeto
|
DaInvictus
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbaeto
| false | null |
t3_1kbaeto
|
/r/LocalLLaMA/comments/1kbaeto/using_ai_to_find_nodes_and_edges_by_scraping_info/
| false | false | 1 | null |
|
What could I run?
| 1 |
[removed]
| 2025-04-30T07:23:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbalyv/what_could_i_run/
|
Kooky_Skirtt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbalyv
| false | null |
t3_1kbalyv
|
/r/LocalLLaMA/comments/1kbalyv/what_could_i_run/
| false | false |
self
| 1 | null |
dnakov/anon-kode GitHub repo taken down by Anthropic
| 34 |
GitHub repo dnakov/anon-kode has been hit with a DMCA takedown from Anthropic.
Link to the notice:
https://github.com/github/dmca/blob/master/2025/04/2025-04-28-anthropic.md
Repo is no longer publicly accessible and all forks have been taken down.
| 2025-04-30T07:25:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbamoh/dnakovanonkode_github_repo_taken_down_by_anthropic/
|
Economy-Fact-8362
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbamoh
| false | null |
t3_1kbamoh
|
/r/LocalLLaMA/comments/1kbamoh/dnakovanonkode_github_repo_taken_down_by_anthropic/
| false | false |
self
| 34 |
{'enabled': False, 'images': [{'id': '2RHSDkHWFVlQyawZhk3LZrYAONGThIvA4A6l0OEu5fs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?width=108&crop=smart&auto=webp&s=58c3d9c5179697484b06f383af241811df995559', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?width=216&crop=smart&auto=webp&s=c4268988176a2cffdc590e3c68e5ce757ed24ed2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?width=320&crop=smart&auto=webp&s=f053c3ea982d718a3ba5dd7989ccbf9927b48fa6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?width=640&crop=smart&auto=webp&s=9c14dffb9fc0e3e28797fd7fd846b0f8479ac5fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?width=960&crop=smart&auto=webp&s=0df43b3e0a14ef1b187ba945521f85e787508716', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?width=1080&crop=smart&auto=webp&s=3da0070e7ae3086477731355b8543a76e51c2ba4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qpupqnABdwlyi1gthuz3CzPWSrXO5oL4Ysm0QzgJ5q4.jpg?auto=webp&s=2543b3b4e415168ce51e5ba30c8d876f129b7df7', 'width': 1200}, 'variants': {}}]}
|
Is there any API or local model which can accept 2 audio files and say which one sounds better?
| 2 |
I'm trying to do lazy QC on TTS output, and sometimes there are artifacts in the generation. I've tried Gemini 2.5, but it can't tell upload A from upload B.
| 2025-04-30T07:32:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbaq97/is_there_any_api_or_local_model_which_can_accept/
|
paswut
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbaq97
| false | null |
t3_1kbaq97
|
/r/LocalLLaMA/comments/1kbaq97/is_there_any_api_or_local_model_which_can_accept/
| false | false |
self
| 2 | null |