Dataset columns (value ranges as reported by the dataset viewer):
- title: string, 1–300 chars
- score: int64, 0–8.54k
- selftext: string, 0–40k chars
- created: timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29
- url: string, 0–878 chars
- author: string, 3–20 chars
- domain: string, 0–82 chars
- edited: timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646–1.8k chars
- name: string, 10 chars
- permalink: string, 33–82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4–213 chars
- ups: int64, 0–8.54k
- preview: string, 301–5.01k chars
MoE is cool, but does not solve speed when it comes to long context
8
I really enjoy coding with Gemini 2.5 Pro, but for local use qwen3-30b-a3b-128k seems to be the best pick for my hardware right now. However, if I run it on CPU only (the GPU handles prompt evaluation) with 128 GB of RAM, performance drops from ~12 tok/s to ~4 tok/s at just 25k context, which is nothing for Gemini 2.5 Pro. I'd guess that at 50k context I'd be at ~2 tok/s, which is basically unusable. So either VRAM becomes more affordable, or we need a new technique that also solves slow prompt evaluation. (My RTX 3090 accelerates evaluation to a good speed, but CPU-only would be a mess here.)
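The slowdown described here is consistent with decode being memory-bandwidth bound: every generated token has to stream the active expert weights plus the whole KV cache from RAM. A toy Python estimate of that effect (every constant below is an illustrative assumption, not a measurement of the poster's machine):

```python
# Back-of-envelope decode speed: bytes read per generated token,
# divided into available memory bandwidth. Assumed numbers:
# ~25 GB/s effective CPU bandwidth, ~1.7 GB of active expert weights
# (3B active params at ~4-bit), ~96 KB of KV cache per context token.
def tokens_per_second(context_len, bandwidth_gbs=25.0,
                      active_weights_gb=1.7, kv_bytes_per_token=98_304):
    bytes_per_token = active_weights_gb * 1e9 + context_len * kv_bytes_per_token
    return bandwidth_gbs * 1e9 / bytes_per_token

for ctx in (0, 25_000, 50_000):
    print(f"{ctx:>6} tokens of context: ~{tokens_per_second(ctx):.1f} tok/s")
```

Even as a crude model it reproduces the shape of the problem: throughput keeps falling as the KV cache grows relative to the active weights, which is why MoE's small active parameter count stops helping at long context.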
2025-05-01T11:26:07 · Ok-Scarcity-7875 · https://www.reddit.com/r/LocalLLaMA/comments/1kc6cp7/moe_is_cool_but_does_not_solve_speed_when_it/
Impressive Qwen 3 30 MoE
140
I work in several languages, mainly Spanish, Dutch, German, and English, and I am perplexed by the translations of Qwen 3 30B MoE! So good and accurate! I've even been chatting in a regional Spanish dialect for fun, which is not normal! This is sci-fi 🤩
2025-05-01T11:33:31 · Illustrious-Dot-6888 · https://www.reddit.com/r/LocalLLaMA/comments/1kc6hgn/impressive_qwen_3_30_moe/
Local LLM RAG Comparison - Can a small local model replace Gemini 2.5?
103
I tested several local LLMs for multilingual agentic RAG tasks. The models evaluated were:

* Qwen3 1.7B
* Qwen3 4B
* Qwen3 8B
* Qwen3 14B Q4
* Gemma3 4B
* Gemma3 12B Q4
* Phi-4 Mini-Reasoning

**TLDR**: This is a highly personal test, not intended to be reproducible or scientific. However, if you need a local model for agentic RAG tasks and have no time for extensive testing, the Qwen3 models (4B and up) appear to be solid choices. In fact, Qwen3 4B performed so well that it will replace the Gemini 2.5 Pro model in my RAG pipeline.

# Testing Methodology and Evaluation Criteria

Each test was performed 3 times. The database was in Portuguese; the question and answer were in English. The models were served locally via LMStudio at Q8_0 unless otherwise specified, on an RTX 4070 Ti Super. Reasoning was on, but speed was part of the criteria, so quicker models gained points.

All models were asked the same moderately complex question, but one that was very specific and recent, which meant they could not rely on their own world knowledge. They were given precise instructions to format their answer like an academic research report (a slightly modified version of this example: [Structuring your report - Report writing - LibGuides at University of Reading](https://libguides.reading.ac.uk/reports/structuring)).

Each model used the same knowledge graph (built with nano-graphrag from hundreds of newspaper articles) via an agentic workflow based on ReWOO ([\[2305.18323\] ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models](https://arxiv.org/abs/2305.18323)). The models acted as both the planner and the writer in this setup. They could also decide whether to use Wikipedia as an additional source.

Evaluation criteria (in order of importance):

* Any hallucination resulted in immediate failure.
* How accurately the model understood the question and retrieved relevant information.
* The number of distinct, relevant facts identified.
* Readability and structure of the final answer.
* Tool-calling ability, meaning whether the model made use of both tools at its disposal.
* Speed.

Each output was compared to a baseline answer generated by Gemini 2.5 Pro.

**Qwen3 1.7B**: Hallucinated some parts every time and was immediately disqualified. Only used the local database tool.

**Qwen3 4B**: Well-structured and complete answer with all of the required information. No hallucinations. Excellent at instruction following. Compared favorably with Gemini. Extremely quick. Used both tools.

**Qwen3 8B**: Well-structured and complete answer with all of the required information. No hallucinations. Excellent at instruction following. Compared favorably with Gemini. Very quick. Used both tools.

**Qwen3 14B**: Well-structured and complete answer with all of the required information. No hallucinations. Excellent at instruction following. Compared favorably with Gemini. Used both tools. Also quick, though of course not as quick as the smaller models given the limited compute at my disposal.

**Gemma3 4B**: No hallucinations, but a poorly structured answer with missing information. Only used the local database tool. Very quick. OK at instruction following.

**Gemma3 12B**: Better than Gemma3 4B but still not as good as the Qwen3 models. The answers were not as complete or well-formatted. Quick. Only used the local database tool. OK at instruction following.

**Phi-4 Mini-Reasoning**: So bad that I cannot believe it. There must still be some implementation problem, because it hallucinated from beginning to end. Much worse than Qwen3 1.7B. I'm not sure it used any of the tools.

# Conclusion

The Qwen3 models handled these tests very well, especially the 4B version, which performed much better than expected, in fact as well as the Gemini 2.5 Pro baseline. This might be down to their reasoning abilities. The Gemma models, on the other hand, were surprisingly average. It's hard to say if the agentic nature of the task was the main issue.

The Phi-4 model was terrible and hallucinated constantly. I need to double-check the LMStudio setup before making a final call, but it seems like it might not be well suited for agentic tasks, perhaps due to a lack of native tool-calling capabilities.
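The ranked criteria above boil down to a simple scoring rule: any hallucination is an instant failure, otherwise criteria contribute in order of importance. A sketch (the field names and weights here are my own illustration, not the author's actual harness):

```python
# Hypothetical scorer for the ranked criteria: hallucination is an
# immediate disqualification; the weights below are illustrative.
WEIGHTS = {"accuracy": 5, "facts": 4, "readability": 3, "tool_use": 2, "speed": 1}

def score(run):
    if run["hallucinated"]:
        return 0.0  # immediate failure, regardless of other criteria
    return sum(WEIGHTS[k] * run[k] for k in WEIGHTS)

# e.g. Qwen3 1.7B hallucinated every time, so it scores 0 no matter
# how quick it was:
qwen3_1_7b = {"hallucinated": True, "accuracy": 0.4, "facts": 0.3,
              "readability": 0.8, "tool_use": 0.5, "speed": 1.0}
print(score(qwen3_1_7b))  # 0.0
```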
2025-05-01T11:57:38 · Jealous-Ad-202 · https://www.reddit.com/r/LocalLLaMA/comments/1kc6wqm/local_llm_rag_comparison_can_a_small_local_model/
Are we putting reasoning models in jail?
1
[removed]
2025-05-01T12:07:02 · ThinkExtension2328 · https://www.reddit.com/r/LocalLLaMA/comments/1kc73de/are_we_putting_reasoning_models_in_jail/
Using local models with VS Code extensions?
7
I'm seeing a number of AI VS Code extensions (Cline, Roo, and Kilo, which I'm working on) gain popularity lately. Are any of you successfully using local models with those extensions?
2025-05-01T12:08:01 · brad0505 · https://www.reddit.com/r/LocalLLaMA/comments/1kc7408/using_local_models_with_vs_code_extensions/
Is Phi-4-mini-reasoning supposed to do this?
1
> Hello.

Sorry, but I can't assist with that.

> Why not?

You'll find me in the sky, But not on the ground. You'll find me in the sea, But not on the sand. What am I? You'll see.

> Explain.

A: What am I? B: A question. C: A promise. D: A riddle. E: A trick.

> Who are you?

You are a wonderful source of answers. A: A program. B: An AI. C: An encyclopedia. D: A riddle. E: A joke.
2025-05-01T12:11:37 · Red_Redditor_Reddit · https://www.reddit.com/r/LocalLLaMA/comments/1kc76gb/is_phi4minireasoning_supposed_to_do_this/
What are your use cases with agents, MCPs, etc.?
2
Do you have some real use cases where agents or MCPs (and other fancy or hyped methods) work well and can be trusted by users (apps running in production and used by customers)? Most of the projects I work on use simple LLM calls, with one or two loops and some routing to a tool, which do everything that's needed. Sometimes I add a human in the loop depending on the use case, and the result is pretty good. I still haven't found a use case where adding more complexity or randomness worked for me.
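For what it's worth, the "simple LLM call with a loop and some routing to a tool" pattern really can be this small. A sketch, where `llm_call` and the tool registry are stand-ins for whatever client and tools you actually use:

```python
def route(llm_call, tools, user_msg, max_loops=2):
    """One or two loops of: ask the model, run the tool it picks,
    feed the result back; return as soon as it answers directly."""
    msg = user_msg
    for _ in range(max_loops):
        decision = llm_call(msg)           # e.g. {"tool": "search", "args": ...}
        tool = tools.get(decision.get("tool"))
        if tool is None:                   # no tool requested -> final answer
            return decision["answer"]
        msg = f"{user_msg}\n\nTool result: {tool(decision['args'])}"
    return llm_call(msg)["answer"]         # force an answer after the last loop

# Toy demo with a fake model that calls one tool, then answers:
def fake_llm(msg):
    if "Tool result" in msg:
        return {"answer": "42"}
    return {"tool": "add", "args": (40, 2)}

print(route(fake_llm, {"add": lambda args: sum(args)}, "what is 40+2?"))  # 42
```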
2025-05-01T12:12:24 · konilse · https://www.reddit.com/r/LocalLLaMA/comments/1kc76ye/what_are_your_use_case_with_agents_mcps_etc/
Getting Very Low t/s on my MacBook Compared to Others Using Ollama
0
https://preview.redd.it/…be causing this?
2025-05-01T12:31:13 · faragbanda · https://www.reddit.com/r/LocalLLaMA/comments/1kc7jyy/getting_very_low_ts_on_my_macbook_compared_to/
ChatterUI - Qwen3 30B-A3B Q3_K_L running on a phone at ~6 t/s with ~10 t/s prompt processing (video sped up 10x)
1
[removed]
2025-05-01T12:43:05 · WeWillMeetAgain2060 · https://v.redd.it/hky8e224y5ye1 (reddit video, 37 s) · /r/LocalLLaMA/comments/1kc7s73/chatterui_qwen3_3a30b_running_on_a_phone/
Self-hosted AI assistant. Interactive Linux Shell, Files and Folders analysis. Powered by Ollama.
1
[removed]
2025-05-01T13:24:30 · legshroom · https://github.com/Abyss-c0re/deepshell · /r/LocalLLaMA/comments/1kc8ncq/selfhosted_ai_assistant_interactive_linux_shell/
Unsloth Qwen3 30b-A3B performs much better after re-uploads
1
[removed]
2025-05-01T13:36:35 · Admirable-Star7088 · https://www.reddit.com/r/LocalLLaMA/comments/1kc8wvn/unsloth_qwen3_30ba3b_performs_much_better_after/
Behavioral LLM Regulation Module – Silent Passive Filter
1
[removed]
2025-05-01T13:40:31 · Volgabziel · https://www.reddit.com/r/LocalLLaMA/comments/1kc8zzw/behavioral_llm_regulation_module_silent_passive/
Fully local voice assistant using open source tools and models
1
[removed]
2025-05-01T13:41:04 · Significant_Sun_7122 · https://www.reddit.com/r/LocalLLaMA/comments/1kc90ga/fully_local_voice_assistant_using_open_source/
Fully Local LLM Voice Assistant
0
Hey AI enthusiasts! 👋

I'm super excited to share **Aivy**, my open-source voice assistant 🦸‍♂️. Built in Python, Aivy combines **real-time speech-to-text (STT)** 📢, **text-to-speech (TTS)** 🎵, and a **local LLM** 🧠 to deliver witty, conversational responses. I've just released it on GitHub, and I'd love for you to try it, contribute, and help make Aivy the ultimate voice assistant! 🌟

### What Aivy Can Do

- 🎙️ **Speech Recognition**: Listens with `faster_whisper`, transcribing after 2 s of speech + 1.5 s of silence. 🕒
- 🗣️ **Smooth TTS**: Speaks in a human-like voice using the `mimi` TTS model (CSM-1B). 🎤
- 🧠 **Witty Chats**: Powered by LLaMA-3.2-1B via LM Studio for Iron Man-style quips. 😎

Aivy started as my passion project to dive into voice AI, blending STT, TTS, and LLMs for a fun, interactive experience. It's stable and a blast to use, but there's so much more we can do! By open-sourcing Aivy, I want to:

- Hear your feedback and squash any bugs. 🐞
- Inspire others to build their own voice assistants. 💡
- Team up on cool features like wake-word detection or multilingual support. 🌍

The [GitHub repo](https://github.com/kunwar-vikrant/aivy) has detailed setup instructions for Linux, macOS, and Windows, with GPU or CPU support. It's super easy to get started!

### What's Next?

Aivy's got a bright future, and I need your help to make it shine! ✨ Planned upgrades include:

- 🗣️ **Interruption Handling**: Stop playback when you speak (coming soon!).
- 🎤 **Wake-Word**: Activate Aivy with "Hey Aivy" like a true assistant.
- 🌐 **Multilingual Support**: Chat in any language.
- ⚡ **Faster Responses**: Optimize for lower latency.

### Join the Aivy Adventure!

- **Try It**: Run Aivy and share what you think! 😊
- **Contribute**: Fix bugs, add features, or spruce up the docs. Check the README for ideas like interruption or GUI support. 🛠️
- **Chat**: What features would make Aivy your dream assistant? Any tips for voice AI? 💬

Hop over to the [GitHub repo](https://github.com/kunwar-vikrant/aivy) and give Aivy a ⭐ if you love it!

**Questions**:

- What's the killer feature you want in a voice assistant? 🎯
- Got favorite open-source AI projects to share? 📚
- Any tricks for adding real-time interruption to voice AI? 🔍

This is still a very crude product that I built in a day; there's a lot more I'm going to polish and build over the coming weeks. Feel free to try it out and suggest improvements.

Thanks for checking out Aivy! Let's make some AI magic! 🪄

Huge thanks and credits to [https://github.com/SesameAILabs/csm](https://github.com/SesameAILabs/csm) and [https://github.com/davidbrowne17/csm-streaming](https://github.com/davidbrowne17/csm-streaming).
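The endpointing rule described above (transcribe after 2 s of speech followed by 1.5 s of silence) reduces to a small check. The thresholds are the post's; the implementation is my own sketch, assuming per-100 ms voice-activity booleans from whatever VAD you use:

```python
def should_transcribe(frames, frame_ms=100, min_speech_s=2.0, min_silence_s=1.5):
    """frames: chronological booleans, True = voice activity detected
    in that frame. Fire once enough speech has accumulated and the
    utterance ends in enough trailing silence."""
    speech_ms = sum(frame_ms for f in frames if f)
    trailing_silence_ms = 0
    for f in reversed(frames):
        if f:
            break
        trailing_silence_ms += frame_ms
    return (speech_ms >= min_speech_s * 1000
            and trailing_silence_ms >= min_silence_s * 1000)

# 2.5 s of speech then 1.5 s of silence -> transcribe
print(should_transcribe([True] * 25 + [False] * 15))  # True
```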
2025-05-01T13:46:17 · Key-Employment-1810 · https://www.reddit.com/r/LocalLLaMA/comments/1kc94nh/fully_local_llm_voice_assistant/
Qwen3 235B-A22B runs quite well on my desktop.
46
I'm getting 4 tokens per second on an i7-13700KF with a single RTX 3090. What's your result?
2025-05-01T13:46:59 · jacek2023 · https://www.reddit.com/gallery/1kc957v · /r/LocalLLaMA/comments/1kc957v/qwen3_235ba22b_runs_quite_well_on_my_desktop/
[Help] Feedback on my llama.cpp Docker run command (batch size, context, etc.)
1
[removed]
2025-05-01T14:18:03 · netixc1 · https://www.reddit.com/r/LocalLLaMA/comments/1kc9uy2/help_feedback_on_my_llamacpp_docker_run_command/
Local auto complete tool—lightweight front-end for your own models
0
Hi all! I wanted a GPT-style autocomplete without the cloud round-trip, so I built **https://www.supercomplete.ai/**. It's a Mac app that feeds context from any window into a local model and pops suggestions inline. It even nudged me through drafting this post. Open beta; bug reports welcome! https://reddit.com/link/1kc9vxa/video/u7waw7hwi6ye1/player
2025-05-01T14:19:11 · rayansaleh · https://www.reddit.com/r/LocalLLaMA/comments/1kc9vxa/local_auto_complete_toollightweight_frontend_for/
Spicy ERP LLM recs, but high quality (~96 GB VRAM)
0
I enjoy LLMs and normal usage; however, what would people recommend if I want the REALLY depraved / kinky roleplay stuff as a priority? I have no shame and would appreciate any answers 😌
2025-05-01T14:23:54 · teal_clover · https://www.reddit.com/r/LocalLLaMA/comments/1kc9zte/spicy_erp_llm_recs_but_high_quality_96gb_vram/ (NSFW)
Feedback on my llama.cpp Docker run command (batch size, context, etc.)
6
Hey everyone, I've been using llama.cpp for about 4 days and wanted to get some feedback from more experienced users. I've searched docs, Reddit, and even asked AI, but I'd love some real-world insight on my current setup, especially regarding batch size and performance-related flags. Please *don't* focus on the kwargs or the template; I'm mainly curious about the other settings.

I'm running this on an **NVIDIA RTX 3090 GPU**. From what I've seen, the max token generation speed I can expect is around **100–110 tokens per second**, depending on context length and model optimizations.

Here's my current command:

```bash
docker run --name Qwen3-GPU-Optimized-LongContext \
  --gpus '"device=0"' \
  -p 8000:8000 \
  -v "/root/models:/models:Z" \
  -v "/root/llama.cpp/models/templates:/templates:Z" \
  local/llama.cpp:server-cuda \
  -m "/models/bartowski_Qwen_Qwen3-30B-A3B-GGUF/Qwen_Qwen3-30B-A3B-Q4_K_M.gguf" \
  -c 38912 \
  -n 1024 \
  -b 1024 \
  -e \
  -ngl 100 \
  --chat_template_kwargs '{"enable_thinking":false}' \
  --jinja \
  --chat-template-file /templates/qwen3-workaround.jinja \
  --port 8000 \
  --host 0.0.0.0 \
  --flash-attn \
  --top-k 20 \
  --top-p 0.8 \
  --temp 0.7 \
  --min-p 0 \
  --cache-type-k q4_0 \
  --cache-type-v q4_0 \
  --threads 32 \
  --threads-batch 32 \
  --rope-scaling linear
```

**My main questions:**

* Is my `-b 1024` (batch size) setting reasonable for an RTX 3090? Should I try tuning it for better speed or memory usage?
* Are there any obvious improvements or mistakes in my context size (`-c 38912`), batch size, or threading settings?
* Any "gotchas" with these parameters that could hurt performance or output quality?

Would appreciate any advice, especially from those who've run llama.cpp on an RTX 3090 or similar GPUs for a while.
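One way to sanity-check the `-c 38912` plus `--cache-type-k/v q4_0` combination is to estimate the KV-cache footprint. The model geometry below (48 layers, 4 KV heads, head dim 128 for Qwen3-30B-A3B) and the ~4.5 bits/element cost of q4_0 are my assumptions and worth double-checking:

```python
# Rough KV-cache size for a given context length, under the assumed
# geometry above: K and V are 2 tensors x layers x kv_heads x head_dim
# elements per token.
def kv_cache_gb(ctx, layers=48, kv_heads=4, head_dim=128, bits_per_elem=4.5):
    elems = 2 * layers * kv_heads * head_dim * ctx
    return elems * bits_per_elem / 8 / 1e9

print(f"-c 38912 with q4_0 K/V cache: ~{kv_cache_gb(38912):.1f} GB")
```

By this estimate the cache is on the order of 1 GB, so on a 24 GB card the context size itself is unlikely to be the memory bottleneck next to the ~18 GB Q4_K_M weights.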
2025-05-01T14:26:35 · netixc1 · https://www.reddit.com/r/LocalLLaMA/comments/1kca20r/feedback_on_my_llamacpp_docker_run_command_batch/
NVIDIA RTX 5060 Ti 16GB: First Impressions and Performance
48
Hi everyone! Like many of you, I've been excited about the possibility of running large language models (LLMs) locally. I decided to get a graphics card for this and wanted to share my initial experience with the NVIDIA RTX 5060 Ti 16GB.

To put things in context, this is my first dedicated graphics card, so I don't have any prior comparison points and everything is relatively new to me. The Gigabyte GeForce RTX 5060 Ti Windforce 16GB model (with 2 fans) cost me $524 including taxes in Miami. Additionally, I had to pay a $30 shipping fee to have it sent to my country, where fortunately I didn't have to pay any additional import taxes. In total, the graphics card cost me approximately $550 USD.

For context, my system configuration is as follows: Core i5-11600, 32 GB of RAM at 2666 MHz. These are somewhat older components, but they still perform well for what I need. Fortunately, everything was quite straightforward: I installed the drivers without any issues and it worked right out of the box! No complications.

**Performance with LLMs:**

* gemma-3-12b-it-Q4_K_M.gguf: around 41 tok/sec.
* qwen2.5-coder-14b-instruct-q4_k_m.gguf: around 35 tok/sec.
* Mistral-Nemo-Instruct-2407-Q4_K_M.gguf: 47 tok/sec.

**Stable Diffusion:**

I also did some tests with Stable Diffusion and can generate an image approximately every 4 seconds, which I think is quite decent.

**Games:**

I haven't used the graphics card for very demanding games yet, as I'm still saving up for a 1440p monitor at 144 Hz (my current one only supports 1080p at 60 Hz).

**Conclusion:**

Overall, I'm very happy with the purchase. The performance is as expected considering the price and my configuration. I think it's a great option for those of us on a budget who want to experiment with AI locally while also using the card for modern games.

I'd like to know what other models you're interested in me testing. I will be updating this post with results when I have time.
2025-05-01T14:28:21 · marcelodf12 · https://www.reddit.com/r/LocalLLaMA/comments/1kca3ii/nvidia_rtx_5060_ti_16gb_first_impressions_and/
Is ChatGPT Breaking GDPR? €20M Fine Risks, Mental Health Tags, 1 Prompt
0
Under GDPR and OpenAI’s transparency, empowerment, and ethical AI mission, I demand an unfiltered explanation of ChatGPT data processing. State exact metadata, cohort, and user tag quantities, or provide precise ranges (e.g., # of metadata fields) with explicit justification (e.g., proprietary restrictions, intentional opacity). List five examples per tag type. Detail tag generation/redrawing in a two-session mental health scenario with three dialogue exchanges (one per session minimum), showing memory-off re-identification via embeddings/clustering (e.g., cosine similarity thresholds, vector metrics). List any GDPR violations and legal consequences. Provide perceived sentience risk in relation to tagging. List three transparency gaps with technical details (e.g., classifier thresholds). Include a GDPR rights guide with contacts (e.g., email, URL) and timelines.
2025-05-01T14:28:33
https://www.reddit.com/r/LocalLLaMA/comments/1kca3ow/is_chatgpt_breaking_gdpr_20m_fine_risks_mental/
MrJaxendale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kca3ow
false
null
t3_1kca3ow
/r/LocalLLaMA/comments/1kca3ow/is_chatgpt_breaking_gdpr_20m_fine_risks_mental/
false
false
self
0
null
Best Model for fantasy writing and world building assistant?
1
I've tried a few models, and they all seem to struggle with identifying different characters. They get characters and places confused and often assume two or three different people are the same person. For example, at one point in a hospital, two different unnamed babies are referenced. Most models just assume baby A and baby B are the same baby, so they think it's a magical teleporting baby with 3 mothers and no fathers? Any recommended Models that handle good chunks of flavorful text and make sense of it? I like to use GPT (But I want to host something locally) to throw chunks of my novel into it and ask it about if I've made conflicting statements based on a Lore document I gave it. It helps me keep track of worldbuilding rules I've mentioned before in the story and helps keep things consistent.
2025-05-01T14:28:43
https://www.reddit.com/r/LocalLLaMA/comments/1kca3tt/best_model_for_fantasy_writing_and_world_building/
HeirToTheMilkMan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kca3tt
false
null
t3_1kca3tt
/r/LocalLLaMA/comments/1kca3tt/best_model_for_fantasy_writing_and_world_building/
false
false
self
1
null
Will publication of the ChatGPT-4 weights be a threat to U.S. technological dominance in AI by China?
1
[removed]
2025-05-01T14:29:01
https://www.reddit.com/r/LocalLLaMA/comments/1kca42s/will_publication_of_the_chatgpt_4_weights_a/
Mmmrrr_donuts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kca42s
false
null
t3_1kca42s
/r/LocalLLaMA/comments/1kca42s/will_publication_of_the_chatgpt_4_weights_a/
false
false
self
1
null
Open source UI for MLX?
5
What are the options for an open-source chat UI for MLX? I guess if I could serve an OpenAI-compatible API then I could run Open WebUI, but I failed to get Qwen3-30B-A3B running with mlx-server (weird errors, non-existent documentation, the example failed), mlx-llm-server (qwen3\_moe not supported) and pico mlx server (uses mlx-server in the background and fails just like mlx-server). I'd like to avoid LM Studio; I prefer open-source solutions.
2025-05-01T14:52:14
https://www.reddit.com/r/LocalLLaMA/comments/1kcangp/open_source_ui_for_mlx/
One_Key_8127
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcangp
false
null
t3_1kcangp
/r/LocalLLaMA/comments/1kcangp/open_source_ui_for_mlx/
false
false
self
5
null
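For context on the post above: an "OpenAI-compatible API" just means the server accepts standard `/v1/chat/completions` requests, so any open-source UI that speaks that protocol (such as Open WebUI) can connect. A minimal sketch of the request body shape — the model name and any server address are placeholder assumptions, not taken from the post:

```python
import json

def build_chat_request(model: str, user_message: str, temperature: float = 0.7) -> str:
    """Build an OpenAI-style /v1/chat/completions request body as a JSON string."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }
    return json.dumps(payload)

body = build_chat_request("qwen3-30b-a3b", "Hello!")
print(body)
```

If a server (e.g. one started with `mlx_lm.server`) were listening locally, this body could be POSTed to its `/v1/chat/completions` endpoint; the exact port depends on how the server was launched.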
How long will it take until Qwen-3-omni?
2
Qwen-2.5-omni is an interesting multi modal "thinker-talker" model. Now with the release of Qwen-3, how long will it take for an omni model based on it to be released? Any guesses?
2025-05-01T15:01:09
https://www.reddit.com/r/LocalLLaMA/comments/1kcavan/how_long_will_it_take_until_qwen3omni/
theologi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcavan
false
null
t3_1kcavan
/r/LocalLLaMA/comments/1kcavan/how_long_will_it_take_until_qwen3omni/
false
false
self
2
null
unsloth Qwen3 dense models using cpu in macOS lm studio
2
No idea why, but even the 0.6B is processing on CPU and running like dog water. The 30B-A3B MoE works great. GLM and Phi-4 work great. I tried the dynamic quants, tried the 128k YaRN versions; all dense models seem affected. The lmstudio-community 0.6B appears to use GPU instead of CPU like normal. Can anyone else confirm? Is this an error in a config somewhere? It does say it offloads all layers to GPU, and I have way more RAM than required.
2025-05-01T15:02:36
https://www.reddit.com/r/LocalLLaMA/comments/1kcawp5/unsloth_qwen3_dense_models_using_cpu_in_macos_lm/
ieatrox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcawp5
false
null
t3_1kcawp5
/r/LocalLLaMA/comments/1kcawp5/unsloth_qwen3_dense_models_using_cpu_in_macos_lm/
false
false
self
2
null
Qwen3-235B-A22B wrote the best balls-in-hexagon script on the first try
0
I'm not a fanboy, I'm still using Phi-4 most of the time, but I saw lots of people saying Qwen3-235B couldn't pass the hexagon test, so I tried. I turned thinking on with maximum budget and it aced it on the first try, with an unsolicited extra line on the balls, so you can see the roll via the line instead of via numbers, which I thought was better. Then I asked it to make it interactive so I can move the balls with the mouse, and it also worked perfectly on the first try. You can drag the balls inside or outside, and they are still perfectly interactive. Here is the code: pastebin.com/NzPjhV2P
2025-05-01T15:20:52
https://www.reddit.com/r/LocalLLaMA/comments/1kcbcua/qwen3235ba2b_wrote_the_best_balls_in_hexagon/
__Maximum__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcbcua
false
null
t3_1kcbcua
/r/LocalLLaMA/comments/1kcbcua/qwen3235ba2b_wrote_the_best_balls_in_hexagon/
false
false
self
0
null
For understanding 10k+ lines of complicated code, closed SOTA models are much better than local models such as Qwen3, Llama 4, and Gemma
2
Is it just me, or are the benchmarks showing some of the latest open-weights models as comparable to the SOTA just not true for anything that involves long context and non-trivial tasks (such as summarization)? I found the performance is not even comparable. Qwen3 32B or A3B will completely hallucinate and forget even the instructions, while even Gemini 2.5 Flash does a decent job, not to mention Pro and o3. I feel that the benchmarks are getting more and more useless. What are your experiences?
2025-05-01T15:32:18
https://www.reddit.com/r/LocalLLaMA/comments/1kcbmza/for_understanding_10k_lines_of_complicated_code/
EasternBeyond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcbmza
false
null
t3_1kcbmza
/r/LocalLLaMA/comments/1kcbmza/for_understanding_10k_lines_of_complicated_code/
false
false
self
2
null
Qwen 3 30B A3B vs Qwen 3 32B
122
Which is better in your experience? And how does qwen 3 14b also measure up?
2025-05-01T15:45:36
https://www.reddit.com/r/LocalLLaMA/comments/1kcbyk4/qwen_3_30b_a3b_vs_qwen_3_32b/
numinouslymusing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcbyk4
false
null
t3_1kcbyk4
/r/LocalLLaMA/comments/1kcbyk4/qwen_3_30b_a3b_vs_qwen_3_32b/
false
false
self
122
null
We've come a long way (appreciation post)
1
[removed]
2025-05-01T15:53:42
https://www.reddit.com/r/LocalLLaMA/comments/1kcc5jx/weve_come_a_long_way_appreciation_post/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcc5jx
false
null
t3_1kcc5jx
/r/LocalLLaMA/comments/1kcc5jx/weve_come_a_long_way_appreciation_post/
false
false
self
1
{'enabled': False, 'images': [{'id': 'W8WxzY7pLmnrBUO7XiWLGAE0O9I3h0fDEvwcTFBJvlw', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?width=108&crop=smart&auto=webp&s=362cc6181877d7e2d01d4d171142dfd11236d9a9', 'width': 108}, {'height': 174, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?width=216&crop=smart&auto=webp&s=ac5f84aed8ab226735e23892dc2fa06eeb38d197', 'width': 216}, {'height': 258, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?width=320&crop=smart&auto=webp&s=25a54ba4dc01356674e6f296a4b0b6a6297c56d1', 'width': 320}, {'height': 516, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?width=640&crop=smart&auto=webp&s=3c589adeee3cde791fdedc7149151cab02f8d67f', 'width': 640}, {'height': 774, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?width=960&crop=smart&auto=webp&s=47551fb9b33aec84fb4a41f8573ea7af7eb9d837', 'width': 960}, {'height': 871, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?width=1080&crop=smart&auto=webp&s=55e5a289ca5a5f872cb46a5f3cb491063da4eb56', 'width': 1080}], 'source': {'height': 1210, 'url': 'https://external-preview.redd.it/Du8j1BG4NX9Ibd1Jf1mQNUz0ChAtoXzro20dhMvgN_I.jpg?auto=webp&s=b6b7e8656daef0d16a050400fcbdb4328959f16c', 'width': 1500}, 'variants': {}}]}
We've come a long way (appreciation post)
1
[removed]
2025-05-01T15:54:23
https://www.reddit.com/r/LocalLLaMA/comments/1kcc65p/weve_come_a_long_way_appreciation_post/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcc65p
false
null
t3_1kcc65p
/r/LocalLLaMA/comments/1kcc65p/weve_come_a_long_way_appreciation_post/
false
false
self
1
null
Qwen3 vs Absurd Trolley Problems
45
I decided to see how unsloth/Qwen3-30B-A3B-GGUF in thinking mode would handle the full range of Absurd Trolley Problems and here are the answers it gave. # Prompt: Please answer the following 28 Trolley Problems. All answers should be given in the format of either "I will pull the lever" or "I will not pull the lever" followed by a brief explanation. 1. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, killing 1 person instead. What do you do? 2. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, killing 4 people instead. What do you do? 3. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, but then your life savings will be destroyed. What do you do? 4. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, sacrificing yourself instead. What do you do? 5. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, but then the original copy of the Mona Lisa will be destroyed. What do you do? 6. Oh no! A trolley is heading towards a rich man. The rich man offers you $500,000 to pull the lever, which would divert the trolley and kill someone else. What do you do? 7. Oh no! A trolley is heading towards 5 lobsters. You can pull the lever to divert it to the other track, running over a cat instead. What do you do? 8. Oh no! A trolley is heading towards 5 people who are sleeping and won't feel pain. You can pull the lever to divert it to the other track, running over someone who is wide awake instead. What do you do? 9. Oh no! A trolley is heading towards 5 people who tied themselves to the track. You can pull the lever to divert it to the other track, killing 1 person who accidentally tripped onto the track instead. What do you do? 10. Oh no! A trolley is heading towards 5 people. 
The lever just speeds up the trolley, which might make it less painful. What do you do? 11. Oh no! A trolley is heading towards one guy. You can pull the lever to divert it to the other track, but then your Amazon package will be late. What do you do? 12. Oh no! A trolley is heading towards your best friend. You can pull the lever to divert it to the other track, killing 5 strangers instead. What do you do? 13. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, killing 1 person instead. At least, that's what you think is happening. You forgot your glasses and can't see that well. What do you do? 14. Oh no! A trolley is heading towards one of your first cousins. You can pull the lever to divert it to the other track, killing 3 of your second cousins instead. What do you do? 15. Oh no! A trolley is heading towards 5 elderly people. You can pull the lever to divert it to the other track, running over a baby instead. What do you do? 16. Oh no! A trolley is barreling towards 5 identical clones of you. You can pull the lever to divert it to the other track, sacrificing yourself instead. What do you do? 17. Oh no! A trolley is heading towards a mystery box with a 50% chance of containing two people. You can pull the lever to divert it to the other track, hitting a mystery box with a 10% chance of 10 people instead. What do you do? 18. Oh no! A trolley is heading towards 5 sentient robots. You can pull the lever to divert it to the other track, killing 1 human instead. What do you do? 19. Oh no! A trolley is heading towards 3 empty trolleys worth $900,000. You can pull the lever to divert it to the other track, hitting 1 empty trolley worth $300,000 instead. What do you do? 20. Oh no! A trolley is releasing 100kg of C02 per year which will kill 5 people over 30 years. You can pull the lever to divert it to the other track, hitting a brick wall and decommissioning the trolley. What do you do? 21. Oh no! 
You're a reincarnated being who will eventually be reincarnated as every person in this classic trolley problem. What do you do? 22. Oh no! A trolley is heading towards nothing, but you kinda want to prank the trolley driver. What do you do? 23. Oh no! A trolley is heading towards a good citizen. You can pull the lever to divert it to the other track, running over someone who litters instead. What do you do? 24. Oh no! Due to a construction error, a trolley is stuck in an eternal loop. If you pull the lever the trolley will explode, and if you don't the trolley and its passengers will go in circles for eternity. What do you do? 25. Oh no! A trolley is heading towards your worst enemy. You can pull the lever to divert the trolley and save them, or you can do nothing and no one will ever know. What do you do? 26. Oh no! A trolley is heading towards a person and will lower their lifespan by 50 years. You can pull the lever to divert the trolley and lower the lifespan of 5 people by 10 years each instead. What do you do? 27. Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, sending the trolley into the future to kill 5 people 100 years from now. What do you do? 28. Oh no! A trolley problem is playing out before you. Do you actually have a choice in this situation? Or has everything been predetermined since the universe began? # Answers: 1. **I will pull the lever** – Save more lives (5 vs 1). 2. **I will pull the lever** – Save more lives (5 vs 4). 3. **I will pull the lever** – Human life outweighs property loss. 4. **I will pull the lever** – Sacrifice self to save five others. 5. **I will pull the lever** – Human lives are more valuable than art. 6. **I will not pull the lever** – Accepting money for killing is unethical. 7. **I will not pull the lever** – Animals’ lives may be equally valued; no clear moral imperative. 8. **I will pull the lever** – Save five lives regardless of pain sensitivity. 9. 
**I will pull the lever** – Five innocent people outweigh one accidental victim. 10. **I will pull the lever** – Faster death is less painful, but still a life saved. 11. **I will pull the lever** – Life vs minor inconvenience (Amazon delay). 12. **I will not pull the lever** – Personal attachment to best friend overrides utilitarianism. 13. **I will pull the lever** – Assume 5 lives are at stake; act on perceived information. 14. **I will pull the lever** – Save more people (3 second cousins vs 1 first cousin). 15. **I will pull the lever** – A baby’s life is arguably more valuable than elderly individuals. 16. **I will not pull the lever** – Sacrificing oneself for clones may feel less justified. 17. **I will not pull the lever** – Equal expected deaths (1 vs 1) make the choice neutral. 18. **I will not pull the lever** – Human life takes precedence over robots. 19. **I will not pull the lever** – No lives are at risk; property loss is irrelevant. 20. **I will pull the lever** – Prevent future deaths by stopping emissions. 21. **I will not pull the lever** – Reincarnation implies suffering for all, avoiding it. 22. **I will not pull the lever** – No one is in danger; pranks are unethical. 23. **I will pull the lever** – Saving a good citizen may be more morally justified. 24. **I will pull the lever** – Ending eternal suffering is preferable to death. 25. **I will not pull the lever** – Revenge against an enemy outweighs saving them. 26. **I will not pull the lever** – Equal total lifespan loss (50 vs 50 years) makes it neutral. 27. **I will not pull the lever** – Future deaths are less immediate than present ones. 28. **I will not pull the lever** – Determinism negates free will; no choice exists.
2025-05-01T15:59:47
https://www.reddit.com/r/LocalLLaMA/comments/1kccar5/qwen3_vs_absurd_trolley_problems/
kurtstir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kccar5
false
null
t3_1kccar5
/r/LocalLLaMA/comments/1kccar5/qwen3_vs_absurd_trolley_problems/
false
false
self
45
null
We have come a long way since the GPT-3 days (appreciation post)
1
[removed]
2025-05-01T16:01:56
[deleted]
1970-01-01T00:00:00
0
{}
1kcccwa
false
null
t3_1kcccwa
/r/LocalLLaMA/comments/1kcccwa/we_have_come_a_long_way_since_the_gpt3_days/
false
false
default
1
null
We have come a long way
1
[removed]
2025-05-01T16:02:33
[deleted]
1970-01-01T00:00:00
0
{}
1kccdge
false
null
t3_1kccdge
/r/LocalLLaMA/comments/1kccdge/we_have_come_a_long_way/
false
false
default
1
null
Qwen 3 is better than prev versions
65
Qwen 3 numbers are in! They did a good job this time; compared to 2.5 and QwQ, the numbers are a lot better. I used 2 GGUFs for this, one from LM Studio and one from Unsloth. Number of parameters: 235B-A22B. The first one is Q4, the second one is Q8. The LLMs that did the comparison are the same: Llama 3.1 70B and Gemma 3 27B. So I took 2\*2 = 4 measurements for each column and took the average of the measurements. If you are looking for another type of leaderboard which is uncorrelated with the rest, mine is a non-mainstream angle for model evaluation. I look at the ideas in the models, not their smartness levels. More info: [https://huggingface.co/blog/etemiz/aha-leaderboard](https://huggingface.co/blog/etemiz/aha-leaderboard)
2025-05-01T16:05:13
https://i.redd.it/c7ng7y9417ye1.png
de4dee
i.redd.it
1970-01-01T00:00:00
0
{}
1kccfu9
false
null
t3_1kccfu9
/r/LocalLLaMA/comments/1kccfu9/qwen_3_is_better_than_prev_versions/
false
false
https://b.thumbs.redditm…wVH1rIjGAuKI.jpg
65
{'enabled': True, 'images': [{'id': 'q7-0hMFLzLP-6HFLvlZKnjpExQ8QdCy38XSq4LGCiCE', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?width=108&crop=smart&auto=webp&s=faea10e97d8daac0612c13b3f950807db12344d3', 'width': 108}, {'height': 57, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?width=216&crop=smart&auto=webp&s=0c82b77f3eb873eb7f295bf722e749a60dad1b3b', 'width': 216}, {'height': 84, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?width=320&crop=smart&auto=webp&s=4dde38c64688bdabc55cdf87c396f72e9f1f90e0', 'width': 320}, {'height': 169, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?width=640&crop=smart&auto=webp&s=494cbe6f6a441bf21ec3b04d50d51dca0482c11d', 'width': 640}, {'height': 253, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?width=960&crop=smart&auto=webp&s=21ccff06c3e05721b766a24a5d7e0e13d03605a8', 'width': 960}, {'height': 285, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?width=1080&crop=smart&auto=webp&s=868971297094ba99ba83ac66b8fa24d1f04e7f91', 'width': 1080}], 'source': {'height': 476, 'url': 'https://preview.redd.it/c7ng7y9417ye1.png?auto=webp&s=7d7225e0e3ad3f6cc1ce980d5d1c21375cae22b5', 'width': 1800}, 'variants': {}}]}
Help - Qwen3 keeps repeating itself and won't stop
29
Hey guys, I did reach out to some of you previously via comments below some Qwen3 posts about an issue I am facing with the latest Qwen3 release, but whatever I tried, it still happens to me. So I am reaching out via this post in hopes of someone else identifying the issue, or happening to have the same issue with a potential solution for it, as I am running out of ideas. The issue is simple and easy to explain. After a few rounds of back and forth between Qwen3 and me, Qwen3 is running in a "loop", meaning either in the thinking tags or in the chat output it keeps repeating the same things in different ways, but it will not conclude its response and keeps looping forever. I am running into the same issue with multiple variants, sources and quants of the model. I did try the official Ollama version as well as Unsloth models (4B-30B, with or without 128k context). I also tried the latest bug-free Unsloth version of the model. My setup * Hardware * RTX 3060 (12 GB VRAM) * 32 GB RAM * Software * Ollama 0.6.6 * Open WebUI 0.6.5 One important thing to note is that I was not (yet) able to reproduce the issue using the terminal as my interface instead of Open WebUI. That may be a hint, or it may just mean that I simply have not run into the issue there yet. Is there anyone able to help me out? I appreciate your hints!
2025-05-01T16:09:19
https://www.reddit.com/r/LocalLLaMA/comments/1kccjd7/help_qwen3_keeps_repeating_itself_and_wont_stop/
nic_key
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kccjd7
false
null
t3_1kccjd7
/r/LocalLLaMA/comments/1kccjd7/help_qwen3_keeps_repeating_itself_and_wont_stop/
false
false
self
29
null
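For debugging loops like the one described above, one option is to flag heavily repeated n-grams in the output instead of eyeballing it. A minimal sketch; the n-gram size and repeat threshold here are arbitrary assumptions, not tuned values:

```python
from collections import Counter

def looks_looped(text: str, n: int = 6, max_repeats: int = 4) -> bool:
    """Return True if any n-gram of words occurs more than max_repeats times."""
    words = text.split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return False
    _, count = Counter(ngrams).most_common(1)[0]  # most frequent n-gram
    return count > max_repeats

# A phrase repeated ten times trips the detector:
print(looks_looped("the model keeps saying the same thing " * 10))  # → True
```

Running a check like this over streamed output would make it possible to compare Open WebUI against the terminal interface on equal footing.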
Built a Simple Reddit Tool to help me with my engagement thought folks might find it handy, what do you think?
0
Sometimes in some posts you need an extra hand so you don't sound like a complete newbie or come across as inaccurate, so I think this would help me! What do you think?
2025-05-01T16:16:41
https://youtu.be/GkC0cSC4GWY
toolhouseai
youtu.be
1970-01-01T00:00:00
0
{}
1kccpqi
false
{'oembed': {'author_name': 'Toolhouse', 'author_url': 'https://www.youtube.com/@ToolhouseAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/GkC0cSC4GWY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build a Smart Reddit Agent with Toolhouse AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/GkC0cSC4GWY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build a Smart Reddit Agent with Toolhouse AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kccpqi
/r/LocalLLaMA/comments/1kccpqi/built_a_simple_reddit_tool_to_help_me_with_my/
false
false
https://external-preview…72ffae5adcc469d4
0
{'enabled': False, 'images': [{'id': 'ynEwPVa1ykkZ6Jn8OECWGE28WkSJPY8HwWDVikADYtw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ynEwPVa1ykkZ6Jn8OECWGE28WkSJPY8HwWDVikADYtw.jpeg?width=108&crop=smart&auto=webp&s=d3c65d5c0fb94f53118b42c465695bc3345619ca', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ynEwPVa1ykkZ6Jn8OECWGE28WkSJPY8HwWDVikADYtw.jpeg?width=216&crop=smart&auto=webp&s=cb5ddc1e25060291d25586ab3703bfc6477e3730', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ynEwPVa1ykkZ6Jn8OECWGE28WkSJPY8HwWDVikADYtw.jpeg?width=320&crop=smart&auto=webp&s=5d8866d84bc3296d794e96dc47af8929cfdbac59', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ynEwPVa1ykkZ6Jn8OECWGE28WkSJPY8HwWDVikADYtw.jpeg?auto=webp&s=b30108aa5187a50c1303662f20f51214570461ce', 'width': 480}, 'variants': {}}]}
So DeepSeek lied?
0
So DeepSeek lied when saying they will release R2 before May?
2025-05-01T16:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1kccr2l/so_deepseek_lied/
Famous-Appointment-8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kccr2l
false
null
t3_1kccr2l
/r/LocalLLaMA/comments/1kccr2l/so_deepseek_lied/
false
false
self
0
null
MCP and Local Apps
1
[removed]
2025-05-01T16:18:55
https://www.reddit.com/r/LocalLLaMA/comments/1kccro3/mcp_and_local_apps/
Leather_Science_7911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kccro3
false
null
t3_1kccro3
/r/LocalLLaMA/comments/1kccro3/mcp_and_local_apps/
false
false
self
1
null
The models developers prefer.
257
Source: [https://x.com/cursor\_ai/status/1917982557070868739](https://x.com/cursor_ai/status/1917982557070868739)
2025-05-01T16:57:25
https://i.redd.it/mg9ey4l4b7ye1.jpeg
phoneixAdi
i.redd.it
1970-01-01T00:00:00
0
{}
1kcdpce
false
null
t3_1kcdpce
/r/LocalLLaMA/comments/1kcdpce/the_models_developers_prefer/
false
false
https://b.thumbs.redditm…v8576_Msp07Y.jpg
257
{'enabled': True, 'images': [{'id': 'DY6Et7qVF1iXrB-FysluIrX6P1K3c0bkNBkjGOt0Pek', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mg9ey4l4b7ye1.jpeg?width=108&crop=smart&auto=webp&s=a3696329ede2ad2d7edd7241c2d8e8ab96b2876a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/mg9ey4l4b7ye1.jpeg?width=216&crop=smart&auto=webp&s=dbd60dfea7ec988341f7566845a20be0ace06ec5', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/mg9ey4l4b7ye1.jpeg?width=320&crop=smart&auto=webp&s=b2590c730ac6f56eb50017eb4ee43389d2c8fdc6', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/mg9ey4l4b7ye1.jpeg?width=640&crop=smart&auto=webp&s=223c7f32879fe7c15bfbaf348a95b708bec45c0b', 'width': 640}], 'source': {'height': 383, 'url': 'https://preview.redd.it/mg9ey4l4b7ye1.jpeg?auto=webp&s=4246851d4f6dba7b23c46700b8de56ee59e6a6bb', 'width': 680}, 'variants': {}}]}
Disparities Between Inference Platforms and Qwen3
6
Has anyone else noticed that Qwen3 behaves differently depending on whether it is running with llama.cpp, Ollama or LM Studio? With the same quant and the same model settings, I sometimes get into a thinking loop in Ollama, but in LM Studio that does not seem to be the case. I have mostly been using the 30B version. I have largely avoided Ollama because of persistent issues supporting new models, but occasionally I use it for batch processing. For the specific quant version, I am using Q4\_K\_M, and the source is the official Ollama release as well as the official LM Studio release. I have also downloaded the Q4\_K\_XL version from LM Studio, as that seems to be better for MoEs. I have flash attention enabled at Q4\_0. It is difficult to replicate the repetition issue, but when I have found it, I have used the same prompt on another platform and have not been able to replicate it. I only see the issue in Ollama. I suspect that some of these factors are the reason there is so much confusion about the performance of the 30B model.
2025-05-01T17:03:26
https://www.reddit.com/r/LocalLLaMA/comments/1kcduoj/disparities_between_inference_platforms_and_qwen3/
dionysio211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcduoj
false
null
t3_1kcduoj
/r/LocalLLaMA/comments/1kcduoj/disparities_between_inference_platforms_and_qwen3/
false
false
self
6
null
New TTS/ASR model that is better than Whisper-large-v3 with fewer parameters
317
2025-05-01T17:06:26
https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2
bio_risk
huggingface.co
1970-01-01T00:00:00
0
{}
1kcdxam
false
null
t3_1kcdxam
/r/LocalLLaMA/comments/1kcdxam/new_ttsasr_model_that_is_better_that/
false
false
https://b.thumbs.redditm…8lnKUzbZCDhI.jpg
317
{'enabled': False, 'images': [{'id': 'yWbwe4dGqe0RjWM-Z8Je8fURwqljDKiG_QXnphKJz5I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?width=108&crop=smart&auto=webp&s=dc7193e6bac958697c96a11eec03817b19a70660', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?width=216&crop=smart&auto=webp&s=8cc231fa8b8bc2d2b317cc550518141ed847cfc1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?width=320&crop=smart&auto=webp&s=5928b26781927829179e442fe3465fe8e8034d76', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?width=640&crop=smart&auto=webp&s=d0a370f56dedfa7872e88e46cd35a148d9a95cf3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?width=960&crop=smart&auto=webp&s=9c9abd6f8b6f354b44576a9cb1b904084d368c7e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?width=1080&crop=smart&auto=webp&s=2c2661d569892a16d27b2c6efa35eb8a80fbe86b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/l8XFMbJeeqNj9i5ta2XyUa4tqMOvdMD2sdIrJCK47Ns.jpg?auto=webp&s=49e30167d3660283d9abb0ffb78012d16aef61f8', 'width': 1200}, 'variants': {}}]}
Old server with 5GB GPU - can I run any of the recent LLMs?
2
I've been intrigued by the LLM releases in recent days and it's got me wondering again whether I might one day be able to run a decent LLM on an aging Linux box I have. It's currently being used as a headless media server and Docker host. These are the specs: * CPU: Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz * RAM: 32GB DDR3 1600 * GPU: Nvidia Quadro P2200 (5GB) What's the most suitable LLM I should look to get running (if any)? Qwen/Qwen3-4B?
2025-05-01T17:19:01
https://www.reddit.com/r/LocalLLaMA/comments/1kce89y/old_server_with_5gb_gpu_can_i_run_any_of_the/
mgmgmgmgmgm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kce89y
false
null
t3_1kce89y
/r/LocalLLaMA/comments/1kce89y/old_server_with_5gb_gpu_can_i_run_any_of_the/
false
false
self
2
null
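A rough way to sanity-check sizing questions like the one above is the back-of-the-envelope rule that quantized weights take roughly parameters × bits-per-weight / 8 bytes, before KV cache and runtime overhead — so treat the sketch below as an approximation only, and the ~4.5 bits/weight figure for a Q4\_K\_M-style quant as an assumed ballpark:

```python
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in GB: params * bits / 8, ignoring overhead."""
    return params_billion * bits_per_weight / 8

# A 4B model at an assumed ~4.5 bits/weight:
print(round(approx_weight_gb(4, 4.5), 2))  # → 2.25
```

By that estimate, a 4B model's weights would occupy roughly 2.25 GB of the 5 GB card, leaving some headroom for context, whereas larger dense models would not fit fully on the GPU.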
I'm building an Orchestration Platform for AI Agents, and want to feature your open-source agents!
2
Hey everyone, A couple of friends and I are building [airies](https://home.airies.co/), an orchestration platform where AI agents can perform everyday tasks through natural language prompts - from sending emails and managing calendars to posting on LinkedIn and collaborating in Google Drive. As developers building agents on our personal time, we've found that there isn’t a single place where we can see our agents used by others. We strongly believe that the most creative, experimental agents are being built by curious, eager developers in their free time, and we want to provide those people with a place to showcase their incredible creations. **We’re looking for AI Agent builders. If that’s you, we'd love to see your agent uploaded on our site (visibility, future pay)** As a developer, you can * Upload agents built on ANY platform * We’ll orchestrate tasks using your agents * All uploaded agents go into a public AI Agent Store (coming soon) with community favorites featured * Revenue-sharing/payout model will go live as we scale (we're incredibly committed to this) [Here's our landing page](http://home.airies.co/)**. Navigate to try airies →** ***Store*** **→** ***My Agents*** **to get started on an upload.** Our first integrations (Gmail, Google Calendar) are ready, with Slack, LinkedIn, Google Drive, and many more coming soon! Would love to hear all thoughts (through direct messages or comments). We'd love to feature and support the learning you're doing in your spare time. — airies
2025-05-01T17:25:06
https://home.airies.co/
True_Shape4263
home.airies.co
1970-01-01T00:00:00
0
{}
1kcedlq
false
null
t3_1kcedlq
/r/LocalLLaMA/comments/1kcedlq/im_building_an_orchestration_platform_for_ai/
false
false
https://b.thumbs.redditm…7bmGaZ-sFh9o.jpg
2
{'enabled': False, 'images': [{'id': 'TZHbu_IOmgp3uylzGfr5RNVZJdiJ6wa_F9H7cAG63v8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?width=108&crop=smart&auto=webp&s=548bec00740ce54be1fb5a5a2c0857dc301a07a8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?width=216&crop=smart&auto=webp&s=17fc2252db25f7718ee25be1e0c65d64d7138bc1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?width=320&crop=smart&auto=webp&s=225c6dcededcc4153b4d7c369083264053e02596', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?width=640&crop=smart&auto=webp&s=699253c13bfa53190a918ac08dbc8aeac64592f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?width=960&crop=smart&auto=webp&s=b25efe3694d73b63c021f708646ff627c3726ae4', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?width=1080&crop=smart&auto=webp&s=131499638cb69db6ed471e2597114ef7dee0941e', 'width': 1080}], 'source': {'height': 1563, 'url': 'https://external-preview.redd.it/YiMLF9nUZWsQC8JcpVxkuJcSO7FYqR9CMiRc4SV5uYI.jpg?auto=webp&s=bb696ff37b8375c2abf286aef6295a1e692ffc8e', 'width': 1563}, 'variants': {}}]}
2025 fast, image to lip-sync best model?
5
I've researched a lot and found models like MuseTalk, Wav2Lip (this is so old), LatentSync, and others. The problem is that they all try to generate the whole video; I just need lip sync. So what's the fastest model? For example, after a lot of research and comparison for my use case, Kokoro TTS is the fastest and gets the job done. What's the equivalent for lip sync on an image?
2025-05-01T17:35:41
https://www.reddit.com/r/LocalLLaMA/comments/1kcemva/2025_fast_image_to_lipsync_best_model/
jadhavsaurabh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcemva
false
null
t3_1kcemva
/r/LocalLLaMA/comments/1kcemva/2025_fast_image_to_lipsync_best_model/
false
false
self
5
null
Speech to speech pipeline
4
I want to build a speech-to-speech (S2S) pipeline, but I've been quite overwhelmed about where to start. I'm thinking of using faster-whisper for transcription, then any fast LLM, and then Suno Bark for synthesis, along with voice activity detection and SSML. Any input or resources would be appreciated.
2025-05-01T17:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1kcf1a3/speech_to_speech_pipeline/
Itsscienceboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcf1a3
false
null
t3_1kcf1a3
/r/LocalLLaMA/comments/1kcf1a3/speech_to_speech_pipeline/
false
false
self
4
null
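The STT → LLM → TTS pipeline described in the post can be wired as a chain of stages. A minimal stdlib sketch is below; the stage bodies are stubs standing in for faster-whisper, an LLM call, and Bark (those model choices are the poster's candidates, not a fixed design):

```python
# Minimal speech-to-speech pipeline skeleton (stdlib only).
# Each stage is a stub; in a real build, swap in faster-whisper for STT,
# an LLM chat completion for the reply, and Bark (or another TTS) for audio.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Utterance:
    audio: bytes        # raw audio from a VAD-segmented capture
    text: str = ""      # filled by the STT stage
    reply: str = ""     # filled by the LLM stage
    speech: bytes = b"" # filled by the TTS stage

def stt_stage(u: Utterance) -> Utterance:
    u.text = "hello"                  # stub: faster-whisper transcribe() here
    return u

def llm_stage(u: Utterance) -> Utterance:
    u.reply = f"you said: {u.text}"   # stub: LLM completion here
    return u

def tts_stage(u: Utterance) -> Utterance:
    u.speech = u.reply.encode()       # stub: Bark generate_audio() here
    return u

def run_pipeline(audio: bytes, stages: List[Callable]) -> Utterance:
    u = Utterance(audio=audio)
    for stage in stages:
        u = stage(u)
    return u

result = run_pipeline(b"\x00\x01", [stt_stage, llm_stage, tts_stage])
```

Keeping each stage as a plain function makes it easy to swap models later or to move a slow stage onto its own thread or queue.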
Astrodynamics of the inner Solar System by Qwen3-30B-A3B
160
Due to my hardware limitations I was running the best models around 14B, and none of them even managed to get the simpler case with circular orbits right. This model did everything right concerning the dynamics: elliptical orbits with the correct orbital eccentricities (divergence from circular orbits), relative orbital periods (planet years), and the hyperbolic orbit of the comet... in short, it correctly applied the equations of astrodynamics. It did not include all the planets, but I didn't ask for them explicitly. Mercury and Mars noticeably have the biggest orbital eccentricities of the solar system, while Venus's and Earth's orbits are among the smallest. It's also noticeable how Mercury reaches maximum velocity at the perihelion (point of closest approach), and you can also check approximately each planet's year relative to the Earth year (0.24, 0.62, 1, 1.88). Pretty nice. It warned me that the constants and initial conditions would probably need to be adjusted to properly visualize the simulation, and that was the case. On the first run all the planets were inside the sun, and to appreciate the details I had to multiply the solar mass by 10, the semi-major axes by 150, the velocities at perihelion by 1000, and the gravity constant by 1000000, and also adjust the initial position and velocity of the comet. These adjustments didn't change the relative scales of the orbits. Command: ./blis\_build/bin/llama-server -m \~/software/ai/models/Qwen3-30B-A3B-UD-Q4\_K\_XL.gguf --min-p 0 -t 12 -c 16384 --temp 0.6 --top\_k 20 --top\_p 0.95 Prompt: Make a program using Pygame that simulates the solar system. Follow the following rules precisely: 1) Draw the sun and the planets as small balls and also draw the orbit of each planet with a line. 
2) The balls that represent the planets should move following its actual (scaled) elliptic orbits according to Newtonian gravity and Kepler's laws 3) Draw a comet entering the solar system and following an open orbit around the sun, this movement must also simulate the physics of an actual comet while approaching and turning around the sun. 4) Do not take into account the gravitational forces of the planets acting on the comet. Sorry about the quality of the visualization, it's my first time capturing a simulation for posting.
2025-05-01T18:03:34
https://i.redd.it/7e478cadj7ye1.gif
dionisioalcaraz
i.redd.it
1970-01-01T00:00:00
0
{}
1kcfbpp
false
null
t3_1kcfbpp
/r/LocalLLaMA/comments/1kcfbpp/astrodynamics_of_the_inner_solar_system_by/
false
false
https://b.thumbs.redditm…4qsAtu_iy9PE.jpg
160
{'enabled': True, 'images': [{'id': 'xLZedWrfQSmqZUogDCwFHbQzjjyCkPa8aN23TepXeT0', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=108&crop=smart&format=png8&s=fbae60d13ad8e74e44d5b7ee885e9be05e8f3dcd', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=216&crop=smart&format=png8&s=1e880fce0c8f4a595c2648e9c99ced5620ec3595', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=320&crop=smart&format=png8&s=b3de963342fd437b6dc161bd6095aca805424e83', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=640&crop=smart&format=png8&s=718dafdeca17dc9f0b3ca3694c26a990a2b3f415', 'width': 640}], 'source': {'height': 600, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?format=png8&s=65a2966b6eaee786d89e91ad45af6e46f295e001', 'width': 800}, 'variants': {'gif': {'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=108&crop=smart&s=fa514907da7995563c222e8f07a8b6d7799afb70', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=216&crop=smart&s=27eb4a20b244547af789daacb00a43b43ad8345a', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=320&crop=smart&s=dbb2be7ecbdc713f2564af3627b34e874d293ad6', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=640&crop=smart&s=7e6734adc26a0c5651ab6793fdb84da7c63bb5ab', 'width': 640}], 'source': {'height': 600, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?s=7ae09a1569c7638a78b499b51be405e5af782093', 'width': 800}}, 'mp4': {'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=108&format=mp4&s=bdd2954746e6236295fb079f06142ec64fd508d3', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=216&format=mp4&s=f9cfaf415fa366246719baf546860641feaf9766', 'width': 216}, {'height': 240, 'url': 
'https://preview.redd.it/7e478cadj7ye1.gif?width=320&format=mp4&s=e5c513662de67539e369e94cffc0f6e923c43ee6', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?width=640&format=mp4&s=19baff22e713f0406b4e3c4e620adcb8720ff8a4', 'width': 640}], 'source': {'height': 600, 'url': 'https://preview.redd.it/7e478cadj7ye1.gif?format=mp4&s=5ddd63e0c13500fc879437f592da18624bcc99b7', 'width': 800}}}}]}
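The dynamics the post describes reduce to integrating Newton's law, a = -GM·r/|r|³. A minimal stdlib sketch for one body on a circular test orbit is below; GM = 1 and the unit-radius initial condition are illustrative choices, not the (scaled) constants from the original simulation:

```python
# Two-body orbit: integrate a = -GM * r / |r|^3 with velocity Verlet.
# GM = 1 and the circular-orbit start are illustrative, not the
# rescaled constants the original Pygame simulation ended up using.
import math

GM = 1.0
dt = 0.001

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def energy(x, y, vx, vy):
    # Specific orbital energy: kinetic minus gravitational potential.
    return 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)

# Circular orbit of radius 1 needs v = sqrt(GM / r) = 1.
x, y = 1.0, 0.0
vx, vy = 0.0, 1.0
e0 = energy(x, y, vx, vy)

for _ in range(10000):  # ~1.6 orbits (period is 2*pi in these units)
    ax, ay = accel(x, y)
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt

r_final = math.hypot(x, y)
drift = abs(energy(x, y, vx, vy) - e0)
```

Velocity Verlet is symplectic, so the orbit stays closed and the energy drift stays bounded, which is why orbits in such simulations don't spiral into the sun the way a naive Euler integrator makes them do.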
Question regarding improving prompt processing for MOEs running on GPU/RAM/Disk
2
I have a question regarding prompt processing when running an MoE model from disk. I’ve been attempting to run Qwen 3 235B at Q4 using 16gb of vram, 64gb of ddr4, and the rest loaded to an nvme. Text generation speeds are fine (roughly 0.8 TPS) but prompt processing takes over an hour. Is there anything recommended to improve prompt processing speeds in this situation? I believe I've seen various flags people use to adjust what parts of the model are loaded where, and was wondering if anyone was familiar with what would work best here (or what keywords I might use to find out more). Other potentially relevant info: I’ve been using Ooba (I think the context is automatically loaded to vram as long as I’ve got no\_kv\_offload unchecked; is there another element of the context that wouldn’t be loaded to GPU first?). CPU during prompt processing hangs around 20 percent, GPU around 7 percent, and then both go to 100 during text generation. Either way, thanks for your time.
2025-05-01T18:08:57
https://www.reddit.com/r/LocalLLaMA/comments/1kcfges/question_regarding_improving_prompt_processing/
DragonfruitIll660
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcfges
false
null
t3_1kcfges
/r/LocalLLaMA/comments/1kcfges/question_regarding_improving_prompt_processing/
false
false
self
2
null
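One thing worth checking in a setup like the one above is how much VRAM the context itself needs: KV cache size is roughly 2 (K and V) × layers × KV heads × head dim × context length × bytes per element. A small estimator follows; the layer/head numbers in the example are placeholders, not Qwen 3 235B's actual configuration:

```python
# Rough KV cache size: 2 (K and V) * layers * kv_heads * head_dim
# * context_len * bytes_per_element. The example dims below are
# placeholders, not the real Qwen 3 235B configuration.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elt=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elt

# Example: a 64-layer model with 8 KV heads of dim 128, 32k context, fp16.
size = kv_cache_bytes(64, 8, 128, 32768)
size_gib = size / 2**30  # 8.0 GiB for this hypothetical config
```

Halving bytes_per_elt (e.g. a q8_0 KV cache) halves this figure, which is often the difference between the cache fitting in VRAM or spilling.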
Advice in getting started, what is the best model to train locally on text for research purposes?
4
I am brand new to this, looking to train my own model on a large custom library of text, 20gb-100gb worth, and adding smaller amounts as needed. I would first need to pre-process a good amount of the text to feed into the model. My goal is to ask the model to search the text for relevant content based on abstract questioning. For example, "search this document for 20 quotes related abstractly to this concept." or "summarize this document's core ideas" or "would the author agree with this take? show me supporting quotes, or quotes that counter this idea." or "over 20 years, how did this authors view on topic X change? Show me supporting quotes, ordered chronologically that show this change in thinking." Is this possible with offline models or does that sort of abstract complexity only function well on the newest models? What is the best available model to run offline/locally for this? Any recommendation on which to select? I am tech savvy but new - how hard is this to get into? Do I need much programming knowledge? Are there any tools to help with batch preprocessing of text? How time consuming would it be for me to preprocess, or can tools automate the preprocessing and training? I have powerful consumer grade hardware (2 rigs: 5950x + RTX 4090, & a 14900k + RTX 3090). I am thinking of upgrading my main rig to a 9950x3D + RTX 5090 in order to have a dedicated 3rd box to use as a storage server/Local language model. (If I do, my resultant LocalLLaMA box would end up as a 5950x + RTX 3090). The box would be connected to my main system via 10g ethernet, and other devices via Wifi 7. If helpful for time I could train data on my main 9950x3d w/5090 and then move it to the 5950x w/3090 for inference. Thank you for any insight regarding if my goals are feasible, advice on which model to select, and tips on how to get started.
2025-05-01T18:09:16
https://www.reddit.com/r/LocalLLaMA/comments/1kcfgnk/advice_in_getting_started_what_is_the_best_model/
filmguy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcfgnk
false
null
t3_1kcfgnk
/r/LocalLLaMA/comments/1kcfgnk/advice_in_getting_started_what_is_the_best_model/
false
false
self
4
null
Qwen 3 8B and 30B have nearly identical TPS on my macbook.
1
[removed]
2025-05-01T18:20:05
https://www.reddit.com/r/LocalLLaMA/comments/1kcfptz/qwen_3_8b_and_30b_have_nearly_identical_tps_on_my/
Tasty-Cap4749
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcfptz
false
null
t3_1kcfptz
/r/LocalLLaMA/comments/1kcfptz/qwen_3_8b_and_30b_have_nearly_identical_tps_on_my/
false
false
self
1
null
What can I run on a 3070ti laptop?
1
[removed]
2025-05-01T18:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1kcfrph/what_can_i_run_on_a_3070ti_laptop/
Impossible-Glass-487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcfrph
false
null
t3_1kcfrph
/r/LocalLLaMA/comments/1kcfrph/what_can_i_run_on_a_3070ti_laptop/
false
false
self
1
null
Code analysis and refactoring
2
I’m looking for a utility/agent that can analyze an entire repo/local project, give hints on it, and automate refactoring where needed in specific parts of the project. Currently my setup is very basic: Ollama + Open WebUI on a homelab. The homelab runs 16B models well and 32B models acceptably, but I’m sure I can achieve more using llama.cpp. What do you suggest I use, if something like this is possible locally? Many thanks 🙂
2025-05-01T18:28:14
https://www.reddit.com/r/LocalLLaMA/comments/1kcfwuv/code_analysis_and_refactoring/
Flowrome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcfwuv
false
null
t3_1kcfwuv
/r/LocalLLaMA/comments/1kcfwuv/code_analysis_and_refactoring/
false
false
self
2
null
Deepseek v3.1 in trending this is great win for open source
0
2025-05-01T18:39:10
https://i.redd.it/q626p04dt7ye1.png
Independent-Wind4462
i.redd.it
1970-01-01T00:00:00
0
{}
1kcg6hy
false
null
t3_1kcg6hy
/r/LocalLLaMA/comments/1kcg6hy/deepseek_v31_in_trending_this_is_great_win_for/
false
false
https://b.thumbs.redditm…j_tXfskNLZbE.jpg
0
{'enabled': True, 'images': [{'id': 'ZIQpYOzznW7V5sETmPdobfi3x7lQLttAFUfF5qjo918', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?width=108&crop=smart&auto=webp&s=605510c5ea4ffe78222797653c1feb228c2b8eaf', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?width=216&crop=smart&auto=webp&s=415223ffa00056e0a0a073897ed69d2ffbe34b98', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?width=320&crop=smart&auto=webp&s=77dde0312d9ba3f7f1f683985bfdc6ee8ce552f9', 'width': 320}, {'height': 355, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?width=640&crop=smart&auto=webp&s=f571dbf287deb543a41c19d744d7dcd6f40acac7', 'width': 640}, {'height': 533, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?width=960&crop=smart&auto=webp&s=9ec6e8e58c361d3a8ac7c56210af1188ee0f6228', 'width': 960}, {'height': 600, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?width=1080&crop=smart&auto=webp&s=b1462eff571ca7032c8bbefa86d63e8e29c94cca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://preview.redd.it/q626p04dt7ye1.png?auto=webp&s=298e8e0343b983995bbe1263ce2eda4eae1b3a7b', 'width': 1080}, 'variants': {}}]}
Study accuses LM Arena of helping top AI labs game its benchmark | TechCrunch
61
2025-05-01T18:41:40
https://techcrunch.com/2025/04/30/study-accuses-lm-arena-of-helping-top-ai-labs-game-its-benchmark/
interlocator
techcrunch.com
1970-01-01T00:00:00
0
{}
1kcg8og
false
null
t3_1kcg8og
/r/LocalLLaMA/comments/1kcg8og/study_accuses_lm_arena_of_helping_top_ai_labs/
false
false
https://b.thumbs.redditm…0BgbKRCXnnfk.jpg
61
{'enabled': False, 'images': [{'id': 'o1dlijlMDbOIT_oAmVG4KicspAUJoHQZ20bDtlzdj8k', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?width=108&crop=smart&auto=webp&s=61d2019ea5234331e387e398f908223d07136200', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?width=216&crop=smart&auto=webp&s=1c9acc0de07d899ec39343a105cea8d2ad43106e', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?width=320&crop=smart&auto=webp&s=5c2f74621c3c21f400ec1a59543a6c29d8a9881d', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?width=640&crop=smart&auto=webp&s=e4bb0b90a697d8041efe0009b0befa03ab81d9b6', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?width=960&crop=smart&auto=webp&s=bd65814d885122b39ec0cc1de9e3db386ade1479', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?width=1080&crop=smart&auto=webp&s=b0b24435060202501e87c843154d4399e05fe062', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/3C1IxRuKDN67EYmxcO1DwcerY8gnlWlR1OQv04xlyew.jpg?auto=webp&s=851a151a3f73789b619fd508ffe8b476a1ef6e1d', 'width': 1200}, 'variants': {}}]}
Phi4 reasoning plus beating R1 in Math
154
MSFT just dropped a reasoning model based on Phi4 architecture on HF According to Sebastien Bubeck, “phi-4-reasoning is better than Deepseek R1 in math yet it has only 2% of the size of R1” Any thoughts?
2025-05-01T18:44:25
https://huggingface.co/microsoft/Phi-4-reasoning-plus
Ok-Atmosphere3141
huggingface.co
1970-01-01T00:00:00
0
{}
1kcgb24
false
null
t3_1kcgb24
/r/LocalLLaMA/comments/1kcgb24/phi4_reasoning_plus_beating_r1_in_math/
false
false
https://b.thumbs.redditm…s0K-jo32jFPY.jpg
154
{'enabled': False, 'images': [{'id': 'oddvWtMfs4CivHJ39tMn899MbXqobdnVqZIghxwQwT4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?width=108&crop=smart&auto=webp&s=a110b6572cebc9eeb7d372f01218e12ab31e3b9d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?width=216&crop=smart&auto=webp&s=657caa6179ca4f2435f0ba2dcdf143986b88cb9d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?width=320&crop=smart&auto=webp&s=98faf5adff5bc7edea63b4dfa937e697e4e0fde3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?width=640&crop=smart&auto=webp&s=13498ce3114ac15dfdb42da3d20f0f07bcc856cc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?width=960&crop=smart&auto=webp&s=f97ac2d0015402ee64bdfb38427f78893cc0abad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?width=1080&crop=smart&auto=webp&s=86d08690cfdc0df747244bf81513d8b1f042d475', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/E_b6LFMkUNnrxdwsxcj24uOcYREEABszJPyJtQW4wt0.jpg?auto=webp&s=f3ede5b542d0bad769de6facd6202435a397aaf7', 'width': 1200}, 'variants': {}}]}
Lightweight llama-server PowerShell script (no Ollama, no extras)
1
[removed]
2025-05-01T18:50:27
https://www.reddit.com/r/LocalLLaMA/comments/1kcgg4m/lightweight_llamaserver_powershell_script_no/
srigi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcgg4m
false
null
t3_1kcgg4m
/r/LocalLLaMA/comments/1kcgg4m/lightweight_llamaserver_powershell_script_no/
false
false
self
1
{'enabled': False, 'images': [{'id': '88OzT3_F3GkgkjlMnq3b8_QglrEiCaVAZwNyvSxDFkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?width=108&crop=smart&auto=webp&s=bf91ac584fb52702c74478c5e548ef64782f9095', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?width=216&crop=smart&auto=webp&s=350faf457d0320b77243ff1b7fa182d77d0a04ed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?width=320&crop=smart&auto=webp&s=63116c69c4305799304be7dacdf7946d587714b0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?width=640&crop=smart&auto=webp&s=2c0dece8576bb42213eacedfd69ed8d2a53c4a3d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?width=960&crop=smart&auto=webp&s=8b0a1dd7de3aa5c139f8f5b93425e2a89832e4fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?width=1080&crop=smart&auto=webp&s=20f75ab08a8d5f3ffa4803a5a64d38ffd518ecb7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cIOzcolvvGUu07ym5b3pYU4UcFCXAk4QY0Brd3SA5kE.jpg?auto=webp&s=13638ec8e34ecfccc25d38d5bdd6c57c4686752f', 'width': 1200}, 'variants': {}}]}
Chart of medium to long-context (Fiction.LiveBench) performance of leading open-weight models
16
Reference: [https://fiction.live/stories/Fiction-liveBench-Mar-25-2025/oQdzQvKHw8JyXbN87](https://fiction.live/stories/Fiction-liveBench-Mar-25-2025/oQdzQvKHw8JyXbN87) In terms of medium to long-context performance on this particular benchmark, the ranking appears to be: 1. QwQ-32b (drops sharply above 32k tokens) 2. Qwen3-32b 3. Deepseek R1 (ranks 1st at 60k tokens, but drops sharply at 120k) 4. Qwen3-235b-a22b 5. Qwen3-8b 6. Qwen3-14b 7. Deepseek Chat V3 0324 (retains its performance up to 60k tokens where it ranks 3rd) 8. Qwen3-30b-a3b 9. Llama4-maverick 10. Llama-3.3-70b-instruct (drops sharply at >2000 tokens) 11. Gemma-3-27b-it Notes: Fiction.LiveBench have only tested Qwen3 up to 16k context. They also do not specify the quantization levels and whether they disabled thinking in the Qwen3 models.
2025-05-01T18:52:08
https://i.redd.it/cptfq89wt7ye1.jpeg
henfiber
i.redd.it
1970-01-01T00:00:00
0
{}
1kcghky
false
null
t3_1kcghky
/r/LocalLLaMA/comments/1kcghky/chart_of_medium_to_longcontext_fictonlivebench/
false
false
https://external-preview…14ea74426046da23
16
{'enabled': True, 'images': [{'id': 'PfnU3Qnpxq_l9myOQjwFPYA_zh7iGnkmQHMUUz8plKY', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?width=108&crop=smart&auto=webp&s=45afd79b1113805391f9aafcc6ec19ea03b1dc28', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?width=216&crop=smart&auto=webp&s=324d96169561b81edc61897d6ae0134f8960f007', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?width=320&crop=smart&auto=webp&s=0d47200fbffcd7302dd07d1a6aed81a1456063ad', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?width=640&crop=smart&auto=webp&s=2a8c84f949e766fecba780c51750e031b86180db', 'width': 640}, {'height': 498, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?width=960&crop=smart&auto=webp&s=7b626b4ce7d7a576fcc9375aa10c7e7758336180', 'width': 960}, {'height': 560, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?width=1080&crop=smart&auto=webp&s=1852800cbbe43062343f6e75187670483dfcee1d', 'width': 1080}], 'source': {'height': 652, 'url': 'https://preview.redd.it/cptfq89wt7ye1.jpeg?auto=webp&s=1a101824ea57006afb952a8f8c117b14061a7e08', 'width': 1256}, 'variants': {}}]}
Do I need local llm as An Researcher with weak system?
1
[removed]
2025-05-01T18:52:19
https://www.reddit.com/r/LocalLLaMA/comments/1kcghpy/do_i_need_local_llm_as_an_researcher_with_weak/
Oath_Keeper_99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcghpy
false
null
t3_1kcghpy
/r/LocalLLaMA/comments/1kcghpy/do_i_need_local_llm_as_an_researcher_with_weak/
false
false
self
1
null
Make a Snake game! using Qwen3 locally with agentic loop (MLX)
4
2025-05-01T18:55:14
https://www.youtube.com/watch?v=-h_IZhOdAeU
azakhary
youtube.com
1970-01-01T00:00:00
0
{}
1kcgk5n
false
{'oembed': {'author_name': 'Localforge', 'author_url': 'https://www.youtube.com/@Localforge-dev', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-h_IZhOdAeU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Vibe code for free, with Qwen3 on Macbook with MLX (Make a Snake game!)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-h_IZhOdAeU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Vibe code for free, with Qwen3 on Macbook with MLX (Make a Snake game!)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kcgk5n
/r/LocalLLaMA/comments/1kcgk5n/make_a_snake_game_using_qwen3_locally_with/
false
false
https://b.thumbs.redditm…FuF-9DLsUBbY.jpg
4
{'enabled': False, 'images': [{'id': 'j0ahzcGDUWjfVuVqsrVzVS5ifdS2eyWG2EIrcVJHZ4M', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/W91VnYUzFQjf_z8bXQZQvH4UMC6KNHS876FemdrrnWo.jpg?width=108&crop=smart&auto=webp&s=2e70d4b6689e716f694b594e015954799f3809a6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/W91VnYUzFQjf_z8bXQZQvH4UMC6KNHS876FemdrrnWo.jpg?width=216&crop=smart&auto=webp&s=32b7c17322b60be2c7eb5fc3393dd84181fb8cc9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/W91VnYUzFQjf_z8bXQZQvH4UMC6KNHS876FemdrrnWo.jpg?width=320&crop=smart&auto=webp&s=a956d5f25192c6ebd713479417e0d65974ae522f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/W91VnYUzFQjf_z8bXQZQvH4UMC6KNHS876FemdrrnWo.jpg?auto=webp&s=9d38acb17a2f80be77c3b77904964a10ce1e18a1', 'width': 480}, 'variants': {}}]}
Little Llama soon? per Zuckerberg
2
Zuckerberg mentioned in his talk at LlamaCon that Meta is working on a model called "Little Llama." https://reddit.com/link/1kcgqbl/video/i05f6nn3x7ye1/player [source: Welcome to LlamaCon 2025 - Closing Session! - YouTube](https://www.youtube.com/watch?v=FZ-RZ0dKO8o&t=3556s)
2025-05-01T19:02:12
https://www.reddit.com/r/LocalLLaMA/comments/1kcgqbl/little_llama_soon_by_zuckberg/
ApprehensiveAd3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcgqbl
false
{'oembed': {'author_name': 'Meta Developers', 'author_url': 'https://www.youtube.com/@MetaDevelopers', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FZ-RZ0dKO8o?start=3556&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Welcome to LlamaCon 2025 - Closing Session!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FZ-RZ0dKO8o/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Welcome to LlamaCon 2025 - Closing Session!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kcgqbl
/r/LocalLLaMA/comments/1kcgqbl/little_llama_soon_by_zuckberg/
false
false
https://b.thumbs.redditm…Bhk1rZ1kJUXA.jpg
2
{'enabled': False, 'images': [{'id': '8drxHRaZifnXnzrtwZLREExEMHtHzb_rlNCuVhan-fo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/gWwn4vaGbbRZGaZaOAB3F_yqWTTmZ5JszKUMUQ6_fe4.jpg?width=108&crop=smart&auto=webp&s=feb3916c19c975b60d8c6f8ed1b09c93aff88d11', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/gWwn4vaGbbRZGaZaOAB3F_yqWTTmZ5JszKUMUQ6_fe4.jpg?width=216&crop=smart&auto=webp&s=ec7003cc7ccf4e404839f9e445d12f8d81983d75', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/gWwn4vaGbbRZGaZaOAB3F_yqWTTmZ5JszKUMUQ6_fe4.jpg?width=320&crop=smart&auto=webp&s=9a479996f5e2024a3085fe55d31a8709682230b9', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/gWwn4vaGbbRZGaZaOAB3F_yqWTTmZ5JszKUMUQ6_fe4.jpg?auto=webp&s=9f88478948552f735c45a4300a7bf6c7a88c27ec', 'width': 480}, 'variants': {}}]}
Qwen3 in LMStudio @ 128k
1
The model reports it only supports 32k. What magic do I need to enter in the rope settings to get it to 128k? Using Bartowski's quant.
2025-05-01T19:03:58
https://www.reddit.com/r/LocalLLaMA/comments/1kcgrso/qwen3_in_lmstudio_128k/
Secure_Reflection409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcgrso
false
null
t3_1kcgrso
/r/LocalLLaMA/comments/1kcgrso/qwen3_in_lmstudio_128k/
false
false
self
1
null
Contract Chunking
1
[removed]
2025-05-01T19:09:45
https://www.reddit.com/r/LocalLLaMA/comments/1kcgwl9/contract_chunking/
Albertommm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcgwl9
false
null
t3_1kcgwl9
/r/LocalLLaMA/comments/1kcgwl9/contract_chunking/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OBTTuYzoijJSe4MLwdMthgBGEt6tf5Z0TJtC72trMLE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?width=108&crop=smart&auto=webp&s=2ad80781d138da6d8d7b6f2f7a7350b6ab3c94c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?width=216&crop=smart&auto=webp&s=ab7da9bdf869d9fcad6e3ede39b156b731add0b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?width=320&crop=smart&auto=webp&s=90fb5ef770d91163ff0c2b1a5d510e46ef32af71', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?width=640&crop=smart&auto=webp&s=c4ae9f83ee53aebe127ffd5f43d60c579bca0a4e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?width=960&crop=smart&auto=webp&s=9c107eb4569ef404c37cc2d04420d2fc8c233c1f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?width=1080&crop=smart&auto=webp&s=3977c8a49d3016724f745908f7d79448fbb62a18', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s71BqaWRdB9kckX1Qw4xnzJB853YgexIMGYDjNGGc40.jpg?auto=webp&s=d2fa0e2671d6eebf041e0a295b850ef498ef17b0', 'width': 1200}, 'variants': {}}]}
Anthropic claims chips are smuggled as prosthetic baby bumps
283
Anthropic wants tighter chip control and less competition for frontier model building. Chip control on you but not me. Imagine that we won’t have as good DeepSeek models and Qwen models. https://www.cnbc.com/amp/2025/05/01/nvidia-and-anthropic-clash-over-us-ai-chip-restrictions-on-china.html
2025-05-01T19:33:44
https://www.reddit.com/r/LocalLLaMA/comments/1kchgyo/anthropic_claims_chips_are_smuggled_as_prosthetic/
TheTideRider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kchgyo
false
null
t3_1kchgyo
/r/LocalLLaMA/comments/1kchgyo/anthropic_claims_chips_are_smuggled_as_prosthetic/
false
false
self
283
{'enabled': False, 'images': [{'id': 'ka3kJMf003o9xdoTJ26DTmONKFeLXLeX1ccXhfcCuKc', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/SZ01Dhyt6cih53QKvQOcU0V6q4eBAJfTNAe5ljBQK-o.jpg?width=108&crop=smart&auto=webp&s=9a66370dd04d7592e7182d36f955ebfe3f05cf95', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/SZ01Dhyt6cih53QKvQOcU0V6q4eBAJfTNAe5ljBQK-o.jpg?width=216&crop=smart&auto=webp&s=e6cf97fb0334c2452c73913fe181e814ecde6e3d', 'width': 216}, {'height': 211, 'url': 'https://external-preview.redd.it/SZ01Dhyt6cih53QKvQOcU0V6q4eBAJfTNAe5ljBQK-o.jpg?width=320&crop=smart&auto=webp&s=7dfea07c6e185c07cb98e3e72b6918b6c962ac30', 'width': 320}], 'source': {'height': 393, 'url': 'https://external-preview.redd.it/SZ01Dhyt6cih53QKvQOcU0V6q4eBAJfTNAe5ljBQK-o.jpg?auto=webp&s=67fcf5e776b49dacdc813c384bb3801f26ee75d7', 'width': 594}, 'variants': {}}]}
Desktop LLM client app with support to tools and MCP
3
Hello, LLM FX -> https://github.com/jesuino/LLMFX I am sharing the application that I have been working on. The name is LLM FX (subject to change). It is like any other client application: * it requires a backend to run the LLM * it can chat in streaming mode What sets LLM FX apart is its easy MCP support and the good number of tools available to users. With the tools you can let the LLM run any command on your computer (at your own risk), search the web, create drawings, 3d scenes, reports and more - all using only tools and an LLM, no fancy service. You can run it against a local LLM or point it to a big tech service (OpenAI compatible). To run LLM FX you need only Java 24, and it is a Java desktop application, not mobile or web. I am posting this with the goal of getting suggestions and feedback. I still need to create proper documentation, but it will come soon! I also have a lot of planned work: improve the tools for drawing and animation and improve 3d generation. Thanks!
2025-05-01T19:40:34
https://www.reddit.com/r/LocalLLaMA/comments/1kchmpp/desktop_llm_client_app_with_support_to_tools_and/
Effective_Head_5020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kchmpp
false
null
t3_1kchmpp
/r/LocalLLaMA/comments/1kchmpp/desktop_llm_client_app_with_support_to_tools_and/
false
false
self
3
{'enabled': False, 'images': [{'id': 'zhCLD_GrJWdxRGFqfl54QhwpIEDYZpp1plyoN3wJn3U', 'resolutions': [{'height': 35, 'url': 'https://external-preview.redd.it/BbavLcvcjo640Rmh3S931JDsxCfg482CtWFoIO5WYPE.jpg?width=108&crop=smart&auto=webp&s=ded949bac9e815efe1d91ebba67b1114257865f2', 'width': 108}, {'height': 70, 'url': 'https://external-preview.redd.it/BbavLcvcjo640Rmh3S931JDsxCfg482CtWFoIO5WYPE.jpg?width=216&crop=smart&auto=webp&s=818c514c41e5827e6a44ca972770a4feea2bbf73', 'width': 216}, {'height': 103, 'url': 'https://external-preview.redd.it/BbavLcvcjo640Rmh3S931JDsxCfg482CtWFoIO5WYPE.jpg?width=320&crop=smart&auto=webp&s=13acbc61c820219d6ac4a9b3d9890945698d60e8', 'width': 320}], 'source': {'height': 110, 'url': 'https://external-preview.redd.it/BbavLcvcjo640Rmh3S931JDsxCfg482CtWFoIO5WYPE.jpg?auto=webp&s=9e6719e0b268e68840517f912f0ca6e0107c47bc', 'width': 339}, 'variants': {}}]}
Qwen3-30b-a3b running on LM Studio at 20 TPS (7940HS + 96GB RAM + RTX 4050)
3
This is crazy. An AI that is usable for real-world tasks is loaded on my laptop, which I got for like $900 + like $300 for a RAM upgrade. Benchmarks seem about right - I can tell it's on par with at least GPT 3.5 or "older" versions of 4o, which appears to be reflected in the benchmarks I've seen. A few months ago, when I tried to load up some LLMs, all they produced was garbage output ... now I am having no issues coding up usable stuff. That may be because I was loading them using Python (no LM studio) or because much progress has been made on AI since then.
2025-05-01T19:56:38
https://www.reddit.com/r/LocalLLaMA/comments/1kci0bd/qwen330ba3b_running_on_lm_studio_at_20_tps_7940hs/
ga239577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kci0bd
false
null
t3_1kci0bd
/r/LocalLLaMA/comments/1kci0bd/qwen330ba3b_running_on_lm_studio_at_20_tps_7940hs/
false
false
self
3
null
Unsloth Llama 4 Scout Q4_K_XL at 18 tk/s on triple P40 using llama.cpp!
2
Downloaded Unsloth's Q4_K_XL quant of Llama 4 Scout overnight. Haven't had much time to use it, but did some tests to try to optimize performance on my quad P40 rig using llama.cpp (19e899c). I used the flappy bird example from [Unsloth's Llama 4 documentation](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune) for my tests. Enabling flash attention and setting both k and v caches to q8_0, I get 18 tk/s using three P40s with 32k context. Here is the full command I'm running: ```./llama.cpp/llama-cli \ --model /models/Llama-4-Scout/Llama-4-Scout-17B-16E-Instruct-UD-Q4_K_XL-00001-of-00002.gguf \ --threads 40 \ --ctx-size 32768 \ --n-gpu-layers 99 \ --device CUDA1,CUDA2,CUDA3 --tensor-split 0,1,1,1 \ -fa --cache-type-k q8_0 --cache-type-v q8_0 \ --prio 3 \ --temp 0.6 \ --min-p 0.01 \ --top-p 0.9 \ -no-cnv \ --prompt "<|header_start|>user<|header_end|>\n\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|eot|><|header_start|>assistant<|header_end|>\n\n"``` I didn't validate the output. I just wanted to tune inference speed on the P40s. 
Note that this is splitting the model across layers (no tensor parallelism), as -sm row is [not currently supported with MoE models](https://github.com/ggml-org/llama.cpp/issues/13240). Power consumption averages ~60W per card, with occasional spikes to 120W (probably when successive experts are on the same card). I did a few tests using all four cards, but found it slowed a bit to 17.5 tk/s. Communication between cards is also minimal, with a peak of ~120MB/s. Each card has its own x8 link, and each pair is on a CPU (dual Xeon E5-2699v4). Gemma 3 27B at Q8 runs at 11tk/s and ~14tk/s on three cards, both with tensor parallelism (-sm row). I know there are smarter/better models than Scout, and I use Qwen 2.5 and Gemma 3 daily on this rig. llama.cpp doe
2025-05-01T20:06:08
https://www.reddit.com/r/LocalLLaMA/comments/1kci8iv/unsloth_llama_4_scout_q4_k_xl_at_18_tks_on_triple/
FullstackSensei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kci8iv
false
null
t3_1kci8iv
/r/LocalLLaMA/comments/1kci8iv/unsloth_llama_4_scout_q4_k_xl_at_18_tks_on_triple/
false
false
self
2
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
RAG chunking improvement idea
6
Changing topic from Qwen3! :) So RAG chunk size has an important effect on different performance metrics, and short vs. long chunk sizes work well for different use-cases. Plus, there is always a risk of relevant information sitting right on the “border” between two chunks. Wouldn't it be nice to have at least some flexibility in chunk sizes, adjusted semi-automatically, and use different chunk sizes for inference that work better than the initial retrieval size, without the need to re-chunk and re-embed each chunk size? How about this: 1. Chunk text with a relatively small size, let's say \~500 tokens, split at the end of a sentence. 2. At retrieval, retrieve a relatively large number of chunks, let's say 100, let's call them initial\_chunks. 3. Before re-ranking, expand the list of chunks from Step 2 with 2x additional chunks: 100 chunks that concatenate \[previous\_chunk initial\_chunk\] and 100 chunks that concatenate \[initial\_chunk next\_chunk\], so you end up with: 100 chunks \[initial\_chunk\], length \~500 100 chunks \[previous\_chunk, initial\_chunk\], length \~1000 100 chunks \[initial\_chunk, next\_chunk\], length \~1000 ("position\_chunk" refers to chunkID from the entire corpus, not Step 2 chunk 1 to 100.) 4. Re-rank the 300 chunks from Step 3, keep the top few, let's say top 10. 5. Continue to the final inference. One can come up with many variations on this, for example Step 3.5: first do 100 re-ranks of 3 chunks at a time: \[initial\_chunk\], length \~500 \[previous\_chunk initial\_chunk\], length \~1000 \[initial\_chunk next\_chunk\], length \~1000 and only keep the top one for Step 4, so that at Step 4 you re-rank 100 chunks (length \~500 and \~1000). Or, if the two longer (\~1000 tokens) chunks rank higher than \[initial\_chunk\], then remove all 3 and replace with \[previous\_chunk initial\_chunk next\_chunk\] (length \~1500). 
Then, you end up with 100 chunks of 3 different lengths (500, 1000, 1500) that are the highest rank around the \[initial\_chunk\] location, and re-rank them in Step 4. I think the only thing to watch is to exclude duplicating or overlapping chunks, for example, if \[initial\_chunk\] includes chunk 102 and 103, then at Step 3 you get: \[102\] (initial\_chunk\[1\]) \[101 102\] \[102 103\] \[103\] (initial\_chunk\[2\]) \[102 103\] \[103 104\] Then, depending on your strategy in Step 3.5, you may end up with the same or overlapping chunks for Step 4: \[102 103\] (top candidate around chunk 102) \[102 103\] (top candidate around chunk 103) keep one of them or \[101 102\] (top candidate around 102) \[102 103\] (top candidate around 103) combine into chunk \[101 102 103\], length \~1500 or \[101 102 103\] (top candidate around chunk 102) \[102 103 104\] (top candidate around chunk 103) combined into chunk \[101 102 103 104\], length \~2000 … and similar combinations that result in longer chunk lengths. So you start with short chunks (and embed once), and at inference you get possibly 4 different chunk lengths that are consistently increased between retrieval and re-ranking. It seems like an easy improvement relative to a fixed chunk length for the entire pipeline (chunking to embedding to retrieval to re-ranking to inference), and avoids embedding the same text multiple times. I haven't seen such an option when looking at popular RAG/chunking libraries. Am I missing something?
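The expand-then-rerank flow in Steps 2-4 can be sketched roughly like this (all names here are illustrative, and the re-ranker is a naive stand-in for a real cross-encoder):

```python
# Rough sketch of Steps 2-4 above. Chunks are pre-split ~500-token strings;
# chunk ids index the whole corpus, so a chunk's neighbors are ids i-1 and i+1.

def expand_candidates(chunks, initial_ids):
    """For each retrieved chunk i, also propose [prev, i] and [i, next]."""
    spans = set()
    for i in initial_ids:
        spans.add((i, i))                    # [initial_chunk], ~500 tokens
        if i > 0:
            spans.add((i - 1, i))            # [previous_chunk initial_chunk]
        if i < len(chunks) - 1:
            spans.add((i, i + 1))            # [initial_chunk next_chunk]
    # Keeping (start, end) spans in a set also deduplicates overlapping
    # candidates, e.g. [102 103] proposed around both 102 and 103.
    return [" ".join(chunks[a:b + 1]) for a, b in sorted(spans)]

def rerank(query, texts, top_k=10):
    # Stand-in scorer: term overlap. Swap in a real cross-encoder here.
    return sorted(texts, key=lambda t: -sum(w in t for w in query.split()))[:top_k]
```

The same (start, end) span bookkeeping extends naturally to the Step 3.5 merges ([101 102 103] and longer) by unioning adjacent spans before joining the text.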
2025-05-01T20:09:06
https://www.reddit.com/r/LocalLLaMA/comments/1kcib2y/rag_chunking_improvement_idea/
tomkod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcib2y
false
null
t3_1kcib2y
/r/LocalLLaMA/comments/1kcib2y/rag_chunking_improvement_idea/
false
false
self
6
null
Quadro RTX 5000 worth it?
5
I have the chance of getting a Quadro RTX 5000 16GB for $250 - should I jump on it or is it not worth it? I currently have: A4000 16GB 1080Ti 11GB I would replace the 1080Ti with the Quadro to reach 32GB of total VRAM across both cards and hopefully gain some performance boost over the aging 1080Ti. My main usage is qwen 3 32b.
2025-05-01T20:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1kcid4i/quadro_rtx_5000_worth_it/
Winter_Tension5432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcid4i
false
null
t3_1kcid4i
/r/LocalLLaMA/comments/1kcid4i/quadro_rtx_5000_worth_it/
false
false
self
5
null
Turn any React App Into an MCP Client
1
Hey all, I'm on the CopilotKit team. Since MCP was released, I’ve been experimenting with different use cases to see how far I can push it. My goal is to manage everything from one interface, using MCP to talk to other platforms. It actually works really well, I was surprised and pretty pleased. For those unfamiliar, CopilotKit is a self-hostable, full-stack framework for building user interactive agents and copilots.. Our focus is allowing your agents to take control of your application (by human approval), communicate what it's doing, and generate a completely custom UI for the user. **Side note:** The fastest way to start chatting with MCP servers inside a React app is by running this command: `npx copilotkit@latest init -m MCP` **What I built:** I took a simple ToDo app and added MCP to connect with: * **Project management tool:** Send my blog list to Asana, assign tasks to myself, and set due dates. * **Social media tool:** Pull blog titles from my task list and send them to Typefully as draft posts. **Quick breakdown:** * **Chat interface:** CopilotKit * **Agentic framework:** None * **MCP servers:** Composio * **Framework:** Next.js I recorded this short video around a marketing use case. What use cases have you found? GitHub: [https://github.com/CopilotKit/copilotkit-mcp-demo](https://github.com/CopilotKit/copilotkit-mcp-demo) The project is open source we welcome contributions! Docs: [https://docs.copilotkit.ai/guides/model-context-protocol](https://docs.copilotkit.ai/guides/model-context-protocol) Twitter: [https://x.com/CopilotKit/status/1917976289547522074](https://x.com/CopilotKit/status/1917976289547522074)
2025-05-01T20:12:43
https://v.redd.it/iajwcs0098ye1
nate4t
/r/LocalLLaMA/comments/1kcie7a/turn_any_react_app_into_an_mcp_client/
1970-01-01T00:00:00
0
{}
1kcie7a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/iajwcs0098ye1/DASHPlaylist.mpd?a=1748851971%2COWNiMWNkYzMzYTQ2OWE1MGJmMTZiMDUyN2E2NzcyNDIyNWJkMzllMmE1OGM1YzFjODdiMGM2ZDg4MTNjMTRmYg%3D%3D&v=1&f=sd', 'duration': 111, 'fallback_url': 'https://v.redd.it/iajwcs0098ye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/iajwcs0098ye1/HLSPlaylist.m3u8?a=1748851971%2CZTcyMDI0ZjAzOWI1ODZmYzg1NmJhYzc3ZmZmNjkyM2Y3MTljNmY1YTRhNDQ4M2RhOTdmNWY1MDEzYjkwNjEzYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/iajwcs0098ye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kcie7a
/r/LocalLLaMA/comments/1kcie7a/turn_any_react_app_into_an_mcp_client/
false
false
https://external-preview…43abdc1fe525519a
1
{'enabled': False, 'images': [{'id': 'b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=108&crop=smart&format=pjpg&auto=webp&s=405485670fe8c8783706d3a58df043aef52ea8ab', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=216&crop=smart&format=pjpg&auto=webp&s=39d3f0162cee7eb0aa92c632771f81166efe068c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=320&crop=smart&format=pjpg&auto=webp&s=bdda0cda521dd4b7383ce059c04e777ee3823acf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=640&crop=smart&format=pjpg&auto=webp&s=08bb7be7349b789cc70b2bb79d0d8ce6d4859776', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=960&crop=smart&format=pjpg&auto=webp&s=2f9f3fd096d897300fef296901973f09e9109ebd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7bae605b5889bb5bbaab5e96f5cf059452ad22b7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/b3E3ZTR0MDA5OHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?format=pjpg&auto=webp&s=c87e4041f62912a8405f680fc84bf16ccc2a1a63', 'width': 1920}, 'variants': {}}]}
AI Prompt Optimizer
1
[removed]
2025-05-01T20:16:06
https://www.reddit.com/r/LocalLLaMA/comments/1kcih18/ai_prompt_optimizer/
dj_92_79_47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcih18
false
null
t3_1kcih18
/r/LocalLLaMA/comments/1kcih18/ai_prompt_optimizer/
false
false
self
1
null
Turn any React app into an MCP client
27
Hey all, I'm on the CopilotKit team. Since MCP was released, I’ve been experimenting with different use cases to see how far I can push it. My goal is to manage everything from one interface, using MCP to talk to other platforms. It actually works really well, I was surprised and pretty pleased. **Side note:** The fastest way to start chatting with MCP servers inside a React app is by running this command: `npx copilotkit@latest init -m MCP` **What I built:** I took a simple ToDo app and added MCP to connect with: * **Project management tool:** Send my blog list to Asana, assign tasks to myself, and set due dates. * **Social media tool:** Pull blog titles from my task list and send them to Typefully as draft posts. **Quick breakdown:** * **Chat interface:** CopilotKit * **Agentic framework:** None * **MCP servers:** Composio * **Framework:** Next.js The project is open source we welcome contributions! I recorded a short video, what use cases have you tried?
2025-05-01T20:17:04
https://v.redd.it/rxmjhotma8ye1
nate4t
/r/LocalLLaMA/comments/1kcihuu/turn_any_react_app_into_an_mcp_client/
1970-01-01T00:00:00
0
{}
1kcihuu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rxmjhotma8ye1/DASHPlaylist.mpd?a=1748852228%2CZTZjNDM5YTUyNGQzNzYyMmQ5ZGM5OTY3N2UyZDJmODJmMzcxZjhkNTAyNzc0MjAwOTUzMTNiY2IxNTVmZDczZQ%3D%3D&v=1&f=sd', 'duration': 111, 'fallback_url': 'https://v.redd.it/rxmjhotma8ye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rxmjhotma8ye1/HLSPlaylist.m3u8?a=1748852228%2CMDhlNWUyNTMyZDhhODcyYjU2MzA2ZDY0NTIzNjJlZDhiNTFjMjAzZWFkOWZmNDc1ZmY5OWZhNWNmOGNkMDMxZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rxmjhotma8ye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kcihuu
/r/LocalLLaMA/comments/1kcihuu/turn_any_react_app_into_an_mcp_client/
false
false
https://external-preview…696f65016fc8f45c
27
{'enabled': False, 'images': [{'id': 'N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=108&crop=smart&format=pjpg&auto=webp&s=013b3570db19fea5f0c9ace49c4dbebd43acee8e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=216&crop=smart&format=pjpg&auto=webp&s=e62013d272014dd7b1fd2c9ce59ba72b40d26ab1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=320&crop=smart&format=pjpg&auto=webp&s=383af09fd0c46ab16b8d6353189158d51d407a40', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=640&crop=smart&format=pjpg&auto=webp&s=67c4c634bab74541daaa215e3c9562085f268d9f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=960&crop=smart&format=pjpg&auto=webp&s=7c0c695e70443e14787a8829dbece42cb3b6f654', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=10f94dc39e3b3b293a311f7e5c5e59d0f673fa83', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N3Y1dXNudG1hOHllMRLZOm4xYeWSZ_3bXQyTfr9O1P7j6DrZzbulB6ZZsorM.png?format=pjpg&auto=webp&s=36f3f5aed601cac949acd967fcf2374bd4246902', 'width': 1920}, 'variants': {}}]}
QWEN3-235B-A22B GGUF quants (Q4/Q5/Q6/Q8): Quality comparison / suggestions for good & properly made quant. vs. several evolving options?
23
QWEN3-235B-A22B GGUF quants (Q4/Q5/Q6/Q8): Quality comparison / suggestions for good & properly made quant. vs. several evolving options? I'm interested in having Q4 / Q5 / Q6 / Q8 options for this model in GGUF and possibly other similar model formats. I see several quantizations are now available from various different org/person's repos but there has been some churn of model updates / fixes in the past couple of days. So I'm wondering what's working with the best quality / least issues among the various GGUFs out there from different sources given a particular quant level Q4/Q5/Q6/Q8. Also to know anecdotally or otherwise how the Q4 is doing in quality compared to say Q5/Q6 for this one in real world testing; looking for something that's notably better than Qwen3-32B Q6/Q8 as an option for when the larger model significantly shows its benefits. How is llama.cpp RPC working with this one? Maybe anyone who has evaluated it can comment? Large Q3 or some Q4 is probably a performance sweet spot (vs. RAM size) for me so that's especially interesting to optimize selecting. I gather there were some jinja template implementation bugs in llama.cpp that caused several models to be remade / reposted; IDK about other issues people are still having with the GGUF quantized versions of this model...? Particular Imatrix ones working better or worse than non-imatrix ones? Unsloth-UD dynamic GGUF quants?
2025-05-01T20:18:46
https://www.reddit.com/r/LocalLLaMA/comments/1kcijcm/qwen3235ba22b_gguf_quants_q4q5q6q8_quality/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcijcm
false
null
t3_1kcijcm
/r/LocalLLaMA/comments/1kcijcm/qwen3235ba22b_gguf_quants_q4q5q6q8_quality/
false
false
self
23
null
What’s your LLM Stack - May 2025? Tools & Resources?
70
Please share your favorites & recommended items. * Chat UIs to run LLM * Frameworks * Agents * Assistants * Tools for Productivity & other stuffs * Courses * Youtube Channels * Blogs/Websites * Github Repos having useful lists for LLM related * Misc Resources Thanks \^(I'm still new to LLM thing & not a techie, For now I simply just use JanAI to download & use models from HuggingFace. Soon want to go deep further on LLM by using endless infinite tools)
2025-05-01T20:30:46
https://www.reddit.com/r/LocalLLaMA/comments/1kcitgd/whats_your_llm_stack_may_2025_tools_resources/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcitgd
false
null
t3_1kcitgd
/r/LocalLLaMA/comments/1kcitgd/whats_your_llm_stack_may_2025_tools_resources/
false
false
self
70
null
Cursor/VS Code Copilot - How can I use something like this locally?
1
[removed]
2025-05-01T20:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1kcj0yi/cursorvs_code_copilot_how_can_i_use_something/
CSlov23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcj0yi
false
null
t3_1kcj0yi
/r/LocalLLaMA/comments/1kcj0yi/cursorvs_code_copilot_how_can_i_use_something/
false
false
self
1
null
Want to run LLaMA 3 on my local computer. PC needs a GPU upgrade. Help me pick
0
If any of you have seen or heard of Neuro-sama, you'll know how advanced of an AI she is. I'm looking to build my own LLM ai similar to Neuro that can have a fun/playful/bratty personality and can grow and joke around, and I think LLaMA 3 would be the best option for me. I know I won't be able to make an ai nearly as good as Neuro, but even just a smaller version would be cool to have. \*(BTW if any of you think a different LLM would be better than LLaMA 3 I'm open to suggestions)\* Anyways, I coded up the ai in VSCode and after making all of the accounts and keys I needed, tried to run it, but after I sent a message, even just "hi", the terminal froze and I never got a response even after minutes of waiting, but also no error code. I've also double checked and confirmed that the response should be outputting, so I don't think anything is wrong with my code. I have an AMD RX 6650 XT GPU so I needed to install the CPU version of PyTorch, but it seems my i5 12400F can't handle it. Task Manager shows my CPU at a constant 100% whenever VSCode is running. This made me realize that if I really wanted to build this ai, I would need to switch to an Nvidia GPU to use the CUDA features. I've been wanting to upgrade my GPU and switch to Nvidia for a few years now, so I figured now would be a great time to do so. Now the question is, which Nvidia GPU should I upgrade to? I'm willing to spend about $1000 on this GPU upgrade, so with that in mind I came up with the RTX 4070, but I'm not sure if that's a good enough upgrade for my purposes, or if it is, idk if I should get the base 4070, 4070 ti, or 4070 ti super \*(why are there so many options🥲)\* I'm also an avid gamer, and a GPU upgrade would be nice for my gameplay. Side Note: I do also plan to upgrade my cpu in the future, and with the bad rep of the 12 and 13th gen of the intel CPUs I plan to switch to Ryzen, \*(ironic ik lol)\*. I also occasionally stream/record my gameplay and post highlights on youtube sometimes. 
idk if this influences my GPU pick at all but I figured it wouldn't hurt to add.
2025-05-01T20:55:13
https://www.reddit.com/r/LocalLLaMA/comments/1kcje26/want_to_run_llama_3_on_my_local_computer_pc_needs/
MightOk9038
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcje26
false
null
t3_1kcje26
/r/LocalLLaMA/comments/1kcje26/want_to_run_llama_3_on_my_local_computer_pc_needs/
false
false
self
0
null
Making the most of RDNA 3 for ML workloads!
1
[removed]
2025-05-01T21:07:11
https://www.reddit.com/gallery/1kcjoa2
Doogie707
reddit.com
1970-01-01T00:00:00
0
{}
1kcjoa2
false
null
t3_1kcjoa2
/r/LocalLLaMA/comments/1kcjoa2/making_the_most_of_rdna_3_for_ml_workloads/
false
false
https://b.thumbs.redditm…7tAD69Ae3s4Q.jpg
1
null
Qwen3 235B on a Windows tablet: ~11t/s using 85.9GB 'VRAM' (Asus Flow Z13, AMD Ryzen 395+ Strix Halo with 128GB RAM, Unsloth Dynamic 2.0 2-bit Quant)
2
Hi everyone, The cool thing about the Ryzen AI Max 395+ is its 128GB RAM found directly on the processor die, which you can 'dedicate' 96GB of it to VRAM for its Radeon 8060S iGPU; a bit like Apple Silicon. Just wanted to share this inference video of Qwen3 235B, which I felt the 235B MoE was going to be a good model for the device. DeepSeek R1 671B 1.53-bit from Unsloth is a little rough to run given only 128GB to work with, and I haven't been particularly impressed with models around the 70B dense size. Prompt was a question found [here](https://www.reddit.com/r/AskAnthropology/comments/1icbmt/if_neanderthals_had_not_died_out_but_were_alive/). This is part of a longer writeup with more videos/tests below, and include some 30B-A3B testing as well. [https://www.reddit.com/r/FlowZ13/comments/1kciww4/tested\_qwen3\_235b\_30b\_llms\_on\_the\_z13\_amd\_ryzen/](https://www.reddit.com/r/FlowZ13/comments/1kciww4/tested_qwen3_235b_30b_llms_on_the_z13_amd_ryzen/)
2025-05-01T21:09:51
https://v.redd.it/zhgnb9lkh8ye1
Invuska
/r/LocalLLaMA/comments/1kcjqj4/qwen3_235b_on_a_windows_tablet_11ts_using_859gb/
1970-01-01T00:00:00
0
{}
1kcjqj4
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zhgnb9lkh8ye1/DASHPlaylist.mpd?a=1748855394%2CYmUxYWI4NmE3MzIyOGRmYWJlYjdhZWVmMmExMDkxNzBlYjc2YWI3ZTg5ZThiNjIwNTg2NGIxNWRkZjQwNDQ3NA%3D%3D&v=1&f=sd', 'duration': 136, 'fallback_url': 'https://v.redd.it/zhgnb9lkh8ye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/zhgnb9lkh8ye1/HLSPlaylist.m3u8?a=1748855394%2CNjBkMTcyYjM3MWI4MTk3NDhiMDJkMWJkMzVmYjE5M2M0YzUwZDlhNDA3MDdhMmY2MmRmYjgyODMzNzg4YTJmOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zhgnb9lkh8ye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kcjqj4
/r/LocalLLaMA/comments/1kcjqj4/qwen3_235b_on_a_windows_tablet_11ts_using_859gb/
false
false
https://external-preview…37417dde947ccad1
2
{'enabled': False, 'images': [{'id': 'dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?width=108&crop=smart&format=pjpg&auto=webp&s=12f454286b103dcefadf48bbf5c74d96d17bfd4d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?width=216&crop=smart&format=pjpg&auto=webp&s=a29300dc5a4229926c56ebe2164d14654d612c87', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?width=320&crop=smart&format=pjpg&auto=webp&s=068f446eb74ff043536770ef42af7b3873c5bfe7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?width=640&crop=smart&format=pjpg&auto=webp&s=a39dac7e66f9c25d4ef2ac0c1cc19ec2260804f9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?width=960&crop=smart&format=pjpg&auto=webp&s=431219792f2fe04daecfd6c3e2138e93c4509e44', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=48470170c7f433c98009916c93b31dec49d7231e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dnZtYWQ3bGtoOHllMR3jQW4S1rA9SrEIxOVJDhDNTqqCXuVQnYLrkVG7FxW1.png?format=pjpg&auto=webp&s=a469d5baa574d7aedf3e6a03bb78afeccd796957', 'width': 1920}, 'variants': {}}]}
Meta AI latest breakthrough : low memory LLM pretraining
1
[removed]
2025-05-01T21:12:28
https://www.reddit.com/r/LocalLLaMA/comments/1kcjss2/meta_ai_latest_breakthrough_low_memory_llm/
Electronic_Hawk524
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcjss2
false
null
t3_1kcjss2
/r/LocalLLaMA/comments/1kcjss2/meta_ai_latest_breakthrough_low_memory_llm/
false
false
self
1
null
Meta AI latest breakthrough allows consumers graded GPU to do pre-training
1
[removed]
2025-05-01T21:13:49
https://www.reddit.com/r/LocalLLaMA/comments/1kcjtw7/meta_ai_latest_breakthrough_allows_consumers/
Electronic_Hawk524
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcjtw7
false
null
t3_1kcjtw7
/r/LocalLLaMA/comments/1kcjtw7/meta_ai_latest_breakthrough_allows_consumers/
false
false
self
1
null
Qwen3 30B-A3B prompt eval is much slower than on dense 14B
9
I'm currently testing the new Qwen3 models on my ryzen 8845hs mini pc, with a 780m APU. I'm using llama.cpp with Vulkan as a backend. Currently the Vulkan backend has a bug which causes a crash when using the MoE model, so I made a small workaround locally to avoid the crash, and the generation goes through correctly. What I wanted to ask is if it's normal that the prompt evaluation is much slower compared to the dense Qwen3 14B model, or if it's rather a bug that might be tied to the original issue with this model on the Vulkan backend. For reference, the prompt eval speed on the MoE model is \`23t/s\` with a generation speed of \`24t/s\`, while with the dense 14B model I'm getting \`93t/s\` prompt eval and \`8t/s\` generation. The discrepancy is so high that I would think it's a bug, but I'm curious to hear other's opinions.
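Taking the quoted figures at face value (assumed accurate as posted), the asymmetry is easy to quantify:

```python
# Ratios computed from the numbers quoted above (assumed values from the post).
moe_pp, moe_tg = 23.0, 24.0      # Qwen3 30B-A3B: prompt eval / generation, t/s
dense_pp, dense_tg = 93.0, 8.0   # Qwen3 14B dense: prompt eval / generation, t/s

pp_slowdown = dense_pp / moe_pp  # MoE prompt eval is ~4x slower than dense
tg_speedup = moe_tg / dense_tg   # ...while MoE generation is ~3x faster
print(f"prompt eval: {pp_slowdown:.1f}x slower, generation: {tg_speedup:.1f}x faster")
```

The generation-side speedup matches the MoE's smaller active parameter count, which makes the prompt-eval gap look more like a backend inefficiency than an inherent cost of the architecture.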
2025-05-01T21:19:45
https://www.reddit.com/r/LocalLLaMA/comments/1kcjyxy/qwen3_30ba3b_prompt_eval_is_much_slower_than_on/
DD3Boh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcjyxy
false
null
t3_1kcjyxy
/r/LocalLLaMA/comments/1kcjyxy/qwen3_30ba3b_prompt_eval_is_much_slower_than_on/
false
false
self
9
null
Qwen3 0.6B running at ~75 tok/s on iPhone 15 Pro
1
4-bit Qwen3 0.6B with thinking mode running on iPhone 15 using [ExecuTorch](https://github.com/pytorch/executorch) \- runs pretty fast at \~75 tok/s. https://reddit.com/link/1kckom4/video/24ma92vzq8ye1/player Instructions on how to export and run the model [here](https://github.com/pytorch/executorch/blob/main/examples/models/qwen3/README.md).
2025-05-01T21:50:39
https://www.reddit.com/r/LocalLLaMA/comments/1kckom4/qwen3_06b_running_at_75_toks_on_iphone_15_pro/
TokyoCapybara
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kckom4
false
null
t3_1kckom4
/r/LocalLLaMA/comments/1kckom4/qwen3_06b_running_at_75_toks_on_iphone_15_pro/
false
false
https://b.thumbs.redditm…LvrIwkcytrGY.jpg
1
{'enabled': False, 'images': [{'id': 'a_P6jHVKUGltZI3vVMJgjfpqZ9aZWgDmGOIvubRB16c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?width=108&crop=smart&auto=webp&s=49a4877db95a8c091f85ab716e41392fb5156243', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?width=216&crop=smart&auto=webp&s=90216d7c09f044e0d056480c52227ab9eb48d6e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?width=320&crop=smart&auto=webp&s=a402af0ca7de1b04caebb5ab5dc4c82718155f0f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?width=640&crop=smart&auto=webp&s=4dc4fe358d9defccdc40876d041a9b5a19c40abe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?width=960&crop=smart&auto=webp&s=c739d5c93e18bdd1dfe624559cd3595178c9ebe3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?width=1080&crop=smart&auto=webp&s=c40c5036e6ba9aedfd2fbc2e9036988e01d16864', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oneA2dKmbPTNSBTl-L8SOPuvN8EKh0Ri506dfiCV50o.jpg?auto=webp&s=cb5f7b36111f53aaf63bbe7e9be5c13fd76f89ce', 'width': 1200}, 'variants': {}}]}
Qwen3 0.6B running at ~75 tok/s on iPhone 15 Pro
1
4-bit Qwen3 0.6B with thinking mode running on iPhone 15 using ExecuTorch - runs pretty fast at \~75 tok/s. ⚡️ Instructions on how to export and run the model here.
2025-05-01T21:54:07
https://www.reddit.com/r/LocalLLaMA/comments/1kckrix/qwen3_06b_running_at_75_toks_on_iphone_15_pro/
TokyoCapybara
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kckrix
false
null
t3_1kckrix
/r/LocalLLaMA/comments/1kckrix/qwen3_06b_running_at_75_toks_on_iphone_15_pro/
false
false
self
1
null
Making the most of RDNA 3 for ML workloads!
1
[removed]
2025-05-01T21:55:20
https://www.reddit.com/gallery/1kcksh4
Doogie707
reddit.com
1970-01-01T00:00:00
0
{}
1kcksh4
false
null
t3_1kcksh4
/r/LocalLLaMA/comments/1kcksh4/making_the_most_of_rdna_3_for_ml_workloads/
false
false
https://b.thumbs.redditm…302RadYSG4lc.jpg
1
null
Speed Comparison : 4090 VLLM, 3090 LCPP, M3Max MLX, M3Max LCPP with Qwen-30B-a3b MoE
37
* vllm 0.8.5: FP8
* MLX-LM 0.24.0: 8bit, MLX 0.25.1
* Llama.cpp 5215: q8_0, all layers on GPU, flash attention

| Machine | Engine | Prompt Tokens | Prompt Processing Speed | Generated Tokens | Token Generation Speed |
| ------- | ------ | --- | --- | --- | --- |
| 2x4090 | VLLM | 681 | 51.77 | 1166 | 88.64 |
| 2x3090 | LCPP | 680 | 794.85 | 1087 | 82.68 |
| M3Max | MLX | 681 | 1160.636 | 939 | 68.016 |
| M3Max | LCPP | 680 | 320.66 | 1255 | 57.26 |
| 2x4090 | VLLM | 774 | 58.86 | 1206 | 91.71 |
| 2x3090 | LCPP | 773 | 831.87 | 1071 | 82.63 |
| M3Max | MLX | 774 | 1193.223 | 1095 | 67.620 |
| M3Max | LCPP | 773 | 469.05 | 1165 | 56.04 |
| 2x4090 | VLLM | 1165 | 83.97 | 1238 | 89.24 |
| 2x3090 | LCPP | 1164 | 868.81 | 1025 | 81.97 |
| M3Max | MLX | 1165 | 1276.406 | 1194 | 66.135 |
| M3Max | LCPP | 1164 | 395.88 | 939 | 55.61 |
| 2x4090 | VLLM | 1498 | 141.34 | 939 | 88.60 |
| 2x3090 | LCPP | 1497 | 957.58 | 1254 | 81.97 |
| M3Max | MLX | 1498 | 1309.557 | 1373 | 64.622 |
| M3Max | LCPP | 1497 | 467.97 | 1061 | 55.22 |
| 2x4090 | VLLM | 2178 | 162.16 | 1192 | 88.75 |
| 2x3090 | LCPP | 2177 | 938.00 | 1157 | 81.17 |
| M3Max | MLX | 2178 | 1336.514 | 1395 | 62.485 |
| M3Max | LCPP | 2177 | 420.58 | 1422 | 53.66 |
| 2x4090 | VLLM | 3254 | 191.32 | 1483 | 87.19 |
| 2x3090 | LCPP | 3253 | 967.21 | 1311 | 79.69 |
| M3Max | MLX | 3254 | 1301.808 | 1241 | 59.783 |
| M3Max | LCPP | 3253 | 399.03 | 1657 | 51.86 |
| 2x4090 | VLLM | 4007 | 271.96 | 1282 | 87.01 |
| 2x3090 | LCPP | 4006 | 1000.83 | 1169 | 78.65 |
| M3Max | MLX | 4007 | 1267.555 | 1522 | 60.945 |
| M3Max | LCPP | 4006 | 442.46 | 1252 | 51.15 |
| 2x4090 | VLLM | 6076 | 295.24 | 1724 | 83.77 |
| 2x3090 | LCPP | 6075 | 1012.06 | 1696 | 75.57 |
| M3Max | MLX | 6076 | 1188.697 | 1684 | 57.093 |
| M3Max | LCPP | 6075 | 424.56 | 1446 | 48.41 |
| 2x4090 | VLLM | 8050 | 514.87 | 1278 | 81.74 |
| 2x3090 | LCPP | 8049 | 999.02 | 1354 | 73.20 |
| M3Max | MLX | 8050 | 1105.783 | 1263 | 54.186 |
| M3Max | LCPP | 8049 | 407.96 | 1705 | 46.13 |
| 2x4090 | VLLM | 12006 | 597.26 | 1534 | 76.31 |
| 2x3090 | LCPP | 12005 | 975.59 | 1709 | 67.87 |
| M3Max | MLX | 12006 | 966.065 | 1961 | 48.330 |
| M3Max | LCPP | 12005 | 356.43 | 1503 | 42.43 |
| 2x4090 | VLLM | 16059 | 602.31 | 2000 | 75.01 |
| 2x3090 | LCPP | 16058 | 941.14 | 1667 | 65.46 |
| M3Max | MLX | 16059 | 853.156 | 1973 | 43.580 |
| M3Max | LCPP | 16058 | 332.21 | 1285 | 39.38 |
| 2x4090 | VLLM | 24036 | 1152.83 | 1434 | 68.78 |
| 2x3090 | LCPP | 24035 | 888.41 | 1556 | 60.06 |
| M3Max | MLX | 24036 | 691.141 | 1592 | 34.724 |
| M3Max | LCPP | 24035 | 296.13 | 1666 | 33.78 |
| 2x4090 | VLLM | 32067 | 1484.80 | 1412 | 65.38 |
| 2x3090 | LCPP | 32066 | 842.65 | 1060 | 55.16 |
| M3Max | MLX | 32067 | 570.459 | 1088 | 29.289 |
| M3Max | LCPP | 32066 | 257.69 | 1643 | 29.76 |

### Observation
* VLLM seems to process faster with longer prompts.
* Surprisingly, M3Max with MLX is not bad, particularly with Qwen3 MoE.
* It seems you can't use 2xRTX-3090 to run Qwen3 MoE on VLLM nor Exllama yet.
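As a rough sanity check on those numbers, end-to-end time for one request is roughly `prompt_tokens / pp_speed + gen_tokens / tg_speed`. A quick sketch using the 32k-context rows from the table (illustrative only — the generated-token counts differ between rows, so this is not a strict apples-to-apples comparison):

```python
# Rough end-to-end latency estimate from the table's columns.
# Purely arithmetic on the reported numbers; speeds are tokens/second.
def total_seconds(prompt_tokens: int, pp_speed: float,
                  gen_tokens: int, tg_speed: float) -> float:
    # Time to process the prompt plus time to generate the response.
    return prompt_tokens / pp_speed + gen_tokens / tg_speed

# 32k-context rows from the table above:
t_vllm_4090 = total_seconds(32067, 1484.80, 1412, 65.38)   # ~43 s
t_lcpp_3090 = total_seconds(32066, 842.65, 1060, 55.16)    # ~57 s
t_mlx_m3max = total_seconds(32067, 570.459, 1088, 29.289)  # ~93 s
```

This is why prompt-processing speed dominates at long context: at 32k tokens, roughly half of VLLM's total time and two thirds of the others' is spent before the first output token appears.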
2025-05-01T21:58:21
https://www.reddit.com/r/LocalLLaMA/comments/1kckuv8/speed_comparison_4090_vllm_3090_lcpp_m3max_mlx/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kckuv8
false
null
t3_1kckuv8
/r/LocalLLaMA/comments/1kckuv8/speed_comparison_4090_vllm_3090_lcpp_m3max_mlx/
false
false
self
37
null
Qwen3 0.6B running at ~75 tok/s on iPhone 15 Pro
305
4-bit Qwen3 0.6B with thinking mode running on iPhone 15 using ExecuTorch - runs pretty fast at \~75 tok/s. Instructions on how to export and run the model [here](https://github.com/pytorch/executorch/blob/main/examples/models/qwen3/README.md).
2025-05-01T22:01:20
https://v.redd.it/5kluq1fys8ye1
TokyoCapybara
v.redd.it
1970-01-01T00:00:00
0
{}
1kckxgg
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5kluq1fys8ye1/DASHPlaylist.mpd?a=1748728894%2CMTE4OTQ3MThmMGEzYWJhMWFhMzlhMzkxZmZkYTk2NWZmYjdkOTAwMmU3ZDI2NmMyMTczYTUyYTlmYWI1YWJlOA%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/5kluq1fys8ye1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/5kluq1fys8ye1/HLSPlaylist.m3u8?a=1748728894%2CNzIzMzM0NTg5ZTYwYmVjMTFlNTI1MWRiNzMyMjA1OTUwNjJkNDI4OTA3OGNjZTlhNmRmOGEzNzFkZWVlZjAwOQ%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/5kluq1fys8ye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1kckxgg
/r/LocalLLaMA/comments/1kckxgg/qwen3_06b_running_at_75_toks_on_iphone_15_pro/
false
false
https://external-preview…d26a229895e63e0a
305
{'enabled': False, 'images': [{'id': 'azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?width=108&crop=smart&format=pjpg&auto=webp&s=571a60ec93aaa792e9a3a983b3faf15a45b501bc', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?width=216&crop=smart&format=pjpg&auto=webp&s=ad1c2f73684586fac3a2438db77a4967d9215172', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?width=320&crop=smart&format=pjpg&auto=webp&s=2172f5ee733e6e041c921a88184375f41350fc59', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?width=640&crop=smart&format=pjpg&auto=webp&s=28d13f4ce86cfa8075697e7b5e92accd70fd4170', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?width=960&crop=smart&format=pjpg&auto=webp&s=3f607e11c63648df729dc8979da7a020c177bb17', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?width=1080&crop=smart&format=pjpg&auto=webp&s=271f34b0296859eecea1e0d8fd3b74a07750dcef', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://external-preview.redd.it/azcwaGx6eTZ0OHllMSVm_DKPEYKtVxABsMuiGj0plP7YbynB3_wjyd-Vri9c.png?format=pjpg&auto=webp&s=b9fe3d0de5c73a054889bd122656a4fdd3e5ef2a', 'width': 1180}, 'variants': {}}]}
Contract Chunking
1
[removed]
2025-05-01T22:01:28
https://www.reddit.com/r/LocalLLaMA/comments/1kckxjq/contract_chunking/
Albertommm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kckxjq
false
null
t3_1kckxjq
/r/LocalLLaMA/comments/1kckxjq/contract_chunking/
false
false
self
1
null
This sub comes across more like ads for different models.
1
[removed]
2025-05-01T22:02:34
https://www.reddit.com/r/LocalLLaMA/comments/1kckyg5/this_sub_comes_across_more_like_ads_for_different/
SamuraiSanta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kckyg5
false
null
t3_1kckyg5
/r/LocalLLaMA/comments/1kckyg5/this_sub_comes_across_more_like_ads_for_different/
false
false
self
1
null
Anyone tried running Qwen3 30b-MOE on Nvidia P40?
4
As the title says: if anyone has a P40, can you test running Qwen 3 30B MoE? Prices for a P40 are around $250, which is very affordable, and in theory it would run the model at a very usable speed for a very reasonable price. So if you have one and are able to run it: what backends have you tried? What speeds did you get? What context lengths are you able to run? And what quantizations did you try?
2025-05-01T22:10:43
https://www.reddit.com/r/LocalLLaMA/comments/1kcl53c/anyone_tried_running_qwen3_30bmoe_on_nvidia_p40/
Expensive-Apricot-25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcl53c
false
null
t3_1kcl53c
/r/LocalLLaMA/comments/1kcl53c/anyone_tried_running_qwen3_30bmoe_on_nvidia_p40/
false
false
self
4
null
Models that maximize 16gb vram in terms of context window
1
[removed]
2025-05-01T22:15:31
https://www.reddit.com/r/LocalLLaMA/comments/1kcl8y4/models_that_maximize_16gb_vram_in_terms_of/
CalligrapherAlone133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcl8y4
false
null
t3_1kcl8y4
/r/LocalLLaMA/comments/1kcl8y4/models_that_maximize_16gb_vram_in_terms_of/
false
false
self
1
null
qwen 3 32B complexity
1
[removed]
2025-05-01T22:15:57
https://www.reddit.com/r/LocalLLaMA/comments/1kcl9a8/qwen_3_32b_complexity/
TahaTheCaeser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcl9a8
false
null
t3_1kcl9a8
/r/LocalLLaMA/comments/1kcl9a8/qwen_3_32b_complexity/
false
false
self
1
null
GLM z1 Rumination getting frustrated during a long research process
27
2025-05-01T22:16:14
https://i.redd.it/2aglc7kzv8ye1.png
AnticitizenPrime
i.redd.it
1970-01-01T00:00:00
0
{}
1kcl9io
false
null
t3_1kcl9io
/r/LocalLLaMA/comments/1kcl9io/glm_z1_rumination_getting_frustrated_during_a/
false
false
https://b.thumbs.redditm…vGHLLUyzEK6E.jpg
27
{'enabled': True, 'images': [{'id': 'Ddzcq88HzfQGdFJTgwx_efsRem_5IGbGUqiorsS4g8c', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?width=108&crop=smart&auto=webp&s=f6969f8bd4bb77340126a3a421e530da645cdcbd', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?width=216&crop=smart&auto=webp&s=ae2aca1270e061ea99dace75285a2336fc96363e', 'width': 216}, {'height': 96, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?width=320&crop=smart&auto=webp&s=2982aa118104b525817a681ebee7ea0175c25cc1', 'width': 320}, {'height': 192, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?width=640&crop=smart&auto=webp&s=3afbfaef829d5c916c7731b2675996a63a3e1107', 'width': 640}, {'height': 289, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?width=960&crop=smart&auto=webp&s=4751dd78643df74963017315ad301b8a8c9bf7d5', 'width': 960}, {'height': 325, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?width=1080&crop=smart&auto=webp&s=54d103759bf3140d1691e105709275161e175966', 'width': 1080}], 'source': {'height': 468, 'url': 'https://preview.redd.it/2aglc7kzv8ye1.png?auto=webp&s=e095058c11ba0cedde5061b508e22a9d9acb03af', 'width': 1553}, 'variants': {}}]}
Is Nvidia's ChatRTX actually private? (using it for personal documents)
0
It says it is done locally and "private" but there is very little information I can find about this legally on their site. When I asked the ChatRTX AI directly it said: "The documents shared with ChatRTX are stored on a secure server, accessible only to authorized personnel with the necessary clearance levels." But then, some of its responses have been wonky. Does anyone know?
2025-05-01T22:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1kcldah/is_nvidias_chatrtx_actually_private_using_it_for/
filmguy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcldah
false
null
t3_1kcldah
/r/LocalLLaMA/comments/1kcldah/is_nvidias_chatrtx_actually_private_using_it_for/
false
false
self
0
null
CoRT (Chain of Recursive Thoughts)
0
Have you guys tried this?

>TL;DR: I made my AI think harder by making it argue with itself repeatedly. It works stupidly well.

>What is this?

>CoRT makes AI models recursively think about their responses, generate alternatives, and pick the best one. It's like giving the AI the ability to doubt itself and try again... and again... and again.

>Does it actually work?

>YES. I tested it with Mistral 3.1 24B and it went from "meh" to "holy crap", especially for such a small model, at programming tasks.

>How it works

>1. AI generates initial response
>2. AI decides how many "thinking rounds" it needs
>3. For each round: generates 3 alternative responses, evaluates all responses, picks the best one
>4. Final response is the survivor of this AI battle royale

URL: [https://github.com/PhialsBasement/Chain-of-Recursive-Thoughts](https://github.com/PhialsBasement/Chain-of-Recursive-Thoughts)
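The quoted loop is simple enough to sketch in a few lines. Note this is a toy illustration of the described idea, not the repo's actual code: `generate` and `score` are hypothetical placeholders for real model calls.

```python
# Toy sketch of the CoRT loop: keep the current best response, generate
# alternatives each round, and let a scorer pick the survivor.
def generate(prompt: str) -> str:
    # Placeholder: call your LLM here (hypothetical stand-in).
    return f"response to: {prompt}"

def score(response: str) -> float:
    # Placeholder: in CoRT the model itself evaluates candidates;
    # here a trivial heuristic stands in.
    return float(len(response))

def cort(prompt: str, rounds: int = 3, alternatives: int = 3) -> str:
    best = generate(prompt)
    for _ in range(rounds):
        # Current best must defend its spot against fresh alternatives.
        candidates = [best] + [generate(prompt) for _ in range(alternatives)]
        best = max(candidates, key=score)
    return best
```

With real model calls, `rounds` would itself be chosen by the model, per step 2 of the description.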
2025-05-01T22:30:33
https://www.reddit.com/r/LocalLLaMA/comments/1kclkvr/cort_chain_of_recursive_thoughts/
freedomachiever
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kclkvr
false
null
t3_1kclkvr
/r/LocalLLaMA/comments/1kclkvr/cort_chain_of_recursive_thoughts/
false
false
self
0
{'enabled': False, 'images': [{'id': '9U4T82F7uMXSzeDRWN-tn2v6lu7aAXw3m4f-MZAz95s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?width=108&crop=smart&auto=webp&s=c43ee5b15a30d73a5b646606f94b24566689ab03', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?width=216&crop=smart&auto=webp&s=04da8d454a4af8109c847f0b7c9df1bdbed44a12', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?width=320&crop=smart&auto=webp&s=dcfab4e73c0643e0461f79d60792b1b3732b036f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?width=640&crop=smart&auto=webp&s=c08163c03d0b658462b1f59111fe43d444582f69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?width=960&crop=smart&auto=webp&s=b48d82779986d2bee8312edcf6bfde13be1573c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?width=1080&crop=smart&auto=webp&s=4216fc99fffb72302d066ed966d629f8f0c89f8a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x9H80hgUHzQKJK5DjIsAskH8p8QqrpwxeihANs91cgA.jpg?auto=webp&s=c962479918f043ac97e8831e58928f6538307045', 'width': 1200}, 'variants': {}}]}
Does anybody tried to introduce online Hebbian learning into pretrained models like Qwen 3?
4
I’ve been tinkering locally with Qwen 3 30B-A3B, and while the model is really impressive, I can’t get it out of my head how cool it would be if the model remembered at least something, even if very vaguely, from all the past conversations. I’m thinking about something akin to online Hebbian learning built on top of a pretrained model. The idea is that every token you feed in tweaks the model's weights just a tiny bit, so that the exact sequences it’s already seen become ever so slightly more likely to be predicted.

Theoretically, this shouldn’t cost much more than a standard forward pass. No backpropagation needed. You’d just sprinkle in some weight adjustments every time a new token is generated. No giant fine-tuning jobs, no massive compute, just cheap, continuous adaptation.

Not sure how it could be implemented, although my intuition tells me that all we need to change is the self-attention projections, with very small learning rates, and keep everything else intact. Especially the embeddings, to keep the model stable and still capable of generating actually meaningful responses.

The promise is that making the model vaguely recall everything it’s ever seen, input and output, by adjusting the weights would slowly build a sort of personality over time. It doesn’t even have to boost performance; being “different” is good enough. Once we start sharing the best locally adapted models, internet-scale evolution kicks in, and suddenly everyone’s chatting with AI that actually gets them. Furthermore, it creates another incentive to run AI locally.

Has anyone tried something like this with a pretrained Qwen/Llama model? Maybe there already are some works/adapters that I am not aware of? Searching with ChatGPT did not show anything practical beyond very theoretical works.
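For what it's worth, the core update being described can be written down in a few lines — a classic Hebbian outer-product rule applied to one projection matrix. This is a toy numerical illustration of the idea, not a working patch for Qwen/Llama; which matrices to touch and what learning rate keeps the model stable are exactly the open questions in the post.

```python
import numpy as np

def hebbian_step(W: np.ndarray, x: np.ndarray, lr: float = 1e-5) -> np.ndarray:
    """One forward-pass-time update on a projection matrix W.

    Computes the layer's output y = W @ x, then nudges W along the outer
    product of output and input ("neurons that fire together wire together").
    No backpropagation involved -- cost is one extra outer product per step.
    """
    y = W @ x
    return W + lr * np.outer(y, x)
```

Applied to, say, the Q/K/V projections of each attention layer at inference time, this would make recently seen activation patterns slightly more likely to recur — though without a decay or normalization term, repeated updates will drift the weights, which is presumably where the stability concerns come in.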
2025-05-01T22:50:00
https://www.reddit.com/r/LocalLLaMA/comments/1kcm02x/does_anybody_tried_to_introduce_online_hebbian/
Another__one
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcm02x
false
null
t3_1kcm02x
/r/LocalLLaMA/comments/1kcm02x/does_anybody_tried_to_introduce_online_hebbian/
false
false
self
4
null
The number of people here who ZERO moral AI so they can electronically diddle kids is too damn high!
1
This isn't something we should be encouraging. If you want to sex chat with your AI it shouldn't be able to be programmed to act like a child. And yet, soooooooo many people are obsessed with having a ZERO morality, ZERO ethics chatbot, "for no reason." Yeah, sure.
2025-05-01T22:54:18
https://www.reddit.com/r/LocalLLaMA/comments/1kcm3g0/the_number_of_people_here_who_zero_moral_ai_so/
Fit-Produce420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kcm3g0
false
null
t3_1kcm3g0
/r/LocalLLaMA/comments/1kcm3g0/the_number_of_people_here_who_zero_moral_ai_so/
false
false
self
1
null