title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The More I Use AI, the More I Worry About My Privacy
| 1 |
[removed]
| 2025-05-04T06:38:49 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedfud
| false | null |
t3_1kedfud
|
/r/LocalLLaMA/comments/1kedfud/the_more_i_use_ai_the_more_i_worry_about_my/
| false | false |
default
| 1 | null |
||
Would a tool that helps orchestrate large data migrations be useful to this community?
| 1 |
[removed]
| 2025-05-04T06:49:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kedlf3/would_a_tool_that_helps_orchestrate_large_data/
|
bttf88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedlf3
| false | null |
t3_1kedlf3
|
/r/LocalLLaMA/comments/1kedlf3/would_a_tool_that_helps_orchestrate_large_data/
| false | false |
self
| 1 | null |
An app to help with long-running data migrations
| 1 |
I have lately been building apps around LLMs and have found myself having to do various data migrations. So much so that I've iterated on an app that does a decent job of orchestrating and visualizing long-running migrations (over hours, days, or more).
Anyway, it occurred to me that many others could potentially be facing the same problem. Is this something that sounds useful to the community? If so, I'd be more motivated to clean it up and open source it for others to use.
| 2025-05-04T06:54:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kedo0w/an_app_to_help_with_longrunning_data_migrations/
|
bttf88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedo0w
| false | null |
t3_1kedo0w
|
/r/LocalLLaMA/comments/1kedo0w/an_app_to_help_with_longrunning_data_migrations/
| false | false |
self
| 1 | null |
is there a way to get reasoning tokens out of gemini models to build a reasoning dataset?
| 1 |
[removed]
| 2025-05-04T06:55:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kedojh/is_there_a_way_to_get_reasoning_tokens_out_of/
|
No-Forever2455
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedojh
| false | null |
t3_1kedojh
|
/r/LocalLLaMA/comments/1kedojh/is_there_a_way_to_get_reasoning_tokens_out_of/
| false | false |
self
| 1 | null |
I get bad results training my own ML model and my own LLM, any suggestions what i'm doing wrong?
| 4 |
Hi. Let's focus on the LLM side first. I have about 100 JSON files, each representing the profile of a device on a network (the DNS queries it makes, the things it talks to on the internet, its MAC address, etc.). My basic goal is to open a chat in OpenWebUI and ask "what device talks to alexa.amazon.com" or whatever, and have it say "an Alexa Echo Dot". I've trained it with this info. At least I think I have.
I'm using TinyLlama, SFTTrainer, and Python on Ubuntu with an RTX 3090 (my own code). I'm using Ollama for the API and OpenWebUI for the frontend. I am referencing the correct model in OpenWebUI, and everything is containerized.
Basically, the results are horrendous. It just uses its own knowledge and doesn't appear to reference anything I've fine-tuned it with.
Any suggestions on where to start or what I'm possibly doing wrong? Is my scenario reasonable? I'm pretty new to this field but not to technology, and I'm kind of surprised how bad the results are.
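One thing worth checking in a setup like this is whether the raw JSON profiles were ever converted into the question/answer format the model will actually see at inference time; fine-tuning on bare JSON rarely teaches a chat model to answer chat-style questions. A minimal sketch of that conversion step (the field names `device_type`, `dns_queries`, and `mac` are made up for illustration — adapt them to the real profile schema; the SFTTrainer hookup is omitted):

```python
import json
from pathlib import Path

def profile_to_pairs(profile):
    """Turn one device-profile dict into (prompt, completion) training pairs.

    Field names here are hypothetical placeholders for whatever the
    actual profile JSON contains.
    """
    device = profile["device_type"]
    pairs = []
    for host in profile.get("dns_queries", []):
        pairs.append((f"What device talks to {host}?", device))
    if "mac" in profile:
        pairs.append((f"What device has MAC address {profile['mac']}?", device))
    return pairs

def load_pairs(profile_dir):
    # One JSON file per device, as described in the post.
    pairs = []
    for path in sorted(Path(profile_dir).glob("*.json")):
        pairs.extend(profile_to_pairs(json.loads(path.read_text())))
    return pairs
```

Each pair can then be rendered into the model's chat template before training, so the fine-tuning examples look like the prompts the frontend will send.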
| 2025-05-04T06:55:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kedoqe/i_get_bad_results_training_my_own_ml_model_and_my/
|
fireinsaigon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedoqe
| false | null |
t3_1kedoqe
|
/r/LocalLLaMA/comments/1kedoqe/i_get_bad_results_training_my_own_ml_model_and_my/
| false | false |
self
| 4 | null |
Is there a way to build a reasoning dataset with gemini models by using its thinking tokens?
| 1 |
[removed]
| 2025-05-04T07:00:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kedrb3/is_there_a_way_to_build_a_reasoning_dataset_with/
|
No-Forever2455
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedrb3
| false | null |
t3_1kedrb3
|
/r/LocalLLaMA/comments/1kedrb3/is_there_a_way_to_build_a_reasoning_dataset_with/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?width=108&crop=smart&auto=webp&s=d972a67db649f1a830342f0573c1a9467bf00cbf', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?width=216&crop=smart&auto=webp&s=f78f30d3d6f27ceea8394e66a67ee9eabee423f5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?width=320&crop=smart&auto=webp&s=3ed9e39f016caf9eb748d0d0b616beab845027ef', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?width=640&crop=smart&auto=webp&s=f61994a8dbc0a92b6f9785de30da3113cef8f637', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?width=960&crop=smart&auto=webp&s=c875f0486a9663a478118d9e1aac0a9caac4a2f5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?width=1080&crop=smart&auto=webp&s=521a61291530f36273ecf307c030cab8752a5f72', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/mfCM2k8wgF9-sZsk6Dc71mfElBFDlP1cOmRBvUhQbWM.jpg?auto=webp&s=833adeef96e5df674c49c202931518741aebe93e', 'width': 1200}, 'variants': {}}]}
|
UI-Tars-1.5 reasoning never fails to entertain me
| 1 |
[removed]
| 2025-05-04T07:03:48 |
Impressive_Half_2819
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedsvx
| false | null |
t3_1kedsvx
|
/r/LocalLLaMA/comments/1kedsvx/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'NXD_Rng03dTCEh7qS9aUpx4q7SnfKyewik0De1JnCUk', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/vrpmdoj1spye1.jpeg?width=108&crop=smart&auto=webp&s=3bc03e6d86b46596cdb1faed6393d2709288b08e', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/vrpmdoj1spye1.jpeg?width=216&crop=smart&auto=webp&s=dba3c8341a921d496730cde2a32821835f304931', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/vrpmdoj1spye1.jpeg?width=320&crop=smart&auto=webp&s=b33f974207af2a8963410fd4c700672378b82fe1', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/vrpmdoj1spye1.jpeg?width=640&crop=smart&auto=webp&s=21b5cb9b5b53ccc86068f3b44908f9872f2a6aa2', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/vrpmdoj1spye1.jpeg?auto=webp&s=e2edd999af5510e149acd562887d4bb0c518a871', 'width': 729}, 'variants': {}}]}
|
||
UI-Tars-1.5 reasoning never fails to entertain me
| 1 |
[removed]
| 2025-05-04T07:05:25 |
Impressive_Half_2819
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedtq0
| false | null |
t3_1kedtq0
|
/r/LocalLLaMA/comments/1kedtq0/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'HbHFmUzHtLoJuadHfXBbyWP7YZLR7TKJJOeOsEp0rPQ', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/h7u1n6xbspye1.jpeg?width=108&crop=smart&auto=webp&s=fec147744726711cc3ddfe65192bf5b4e807534e', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/h7u1n6xbspye1.jpeg?width=216&crop=smart&auto=webp&s=d4b14ceb7158e0c7c35ed5de3bedcce3e9cab754', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/h7u1n6xbspye1.jpeg?width=320&crop=smart&auto=webp&s=44219d26e2634aaf53172aec1c0b8c45a2e672c1', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/h7u1n6xbspye1.jpeg?width=640&crop=smart&auto=webp&s=660531e16f78b225281654adb556cf7b1f3ac59d', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/h7u1n6xbspye1.jpeg?auto=webp&s=a4932022f81125036c5b75e613cfb5ae40594ad3', 'width': 729}, 'variants': {}}]}
|
||
IBM Granite 4.0 Tiny Preview: A sneak peek at the next generation of Granite models
| 193 | 2025-05-04T07:05:53 |
https://www.ibm.com/new/announcements/ibm-granite-4-0-tiny-preview-sneak-peek
|
ab2377
|
ibm.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kedu0d
| false | null |
t3_1kedu0d
|
/r/LocalLLaMA/comments/1kedu0d/ibm_granite_40_tiny_preview_a_sneak_peek_at_the/
| false | false | 193 |
{'enabled': False, 'images': [{'id': 'yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?width=108&crop=smart&auto=webp&s=dd1262725789cd03405b95d8b4d0818921bf7fbd', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?width=216&crop=smart&auto=webp&s=c3c3ed6955eb56549578d91d8509506d62becb93', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?width=320&crop=smart&auto=webp&s=542f9bb8234431f2d452f58a4732241e86d68daf', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?width=640&crop=smart&auto=webp&s=11a2564e23d1a12b0fe7b5c447c7819e46dd629d', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?width=960&crop=smart&auto=webp&s=2c2a22edbe5b61382377a9d4b909faf8815cfd02', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?width=1080&crop=smart&auto=webp&s=3e902aee712b848c44f398cc010c22b341f6b752', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/yxzFCzIK4WeSo6gF_lu0-lKTBRUJ2trx5h9oRthAsG8.png?auto=webp&s=06b9c5908407d3fc94c060c1e600d0051e500b20', 'width': 1201}, 'variants': {}}]}
|
||
What's the best 7B : 32B LLM for medical (radiology)
| 13 |
I work in the medical field and currently use Llama 3.1 8B, but I'm planning to replace it.
It will be used for report summarization, analysis, and guiding the user.
So do you have any recommendations?
Thanks
| 2025-05-04T07:29:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kee6qj/whats_the_best_7b_32b_llm_for_medical_radiology/
|
Accomplished_Pin_626
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kee6qj
| false | null |
t3_1kee6qj
|
/r/LocalLLaMA/comments/1kee6qj/whats_the_best_7b_32b_llm_for_medical_radiology/
| false | false |
self
| 13 | null |
What is the direction that the LLM are heading?
| 1 |
[removed]
| 2025-05-04T07:44:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1keee1h/what_is_the_direction_that_the_llm_are_heading/
|
Sad_Foot9898
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keee1h
| false | null |
t3_1keee1h
|
/r/LocalLLaMA/comments/1keee1h/what_is_the_direction_that_the_llm_are_heading/
| false | false |
self
| 1 | null |
GPU Stack/distributed inference experience
| 1 |
[removed]
| 2025-05-04T07:57:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1keekr8/gpu_stackdistributed_inference_experience/
|
aquarat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keekr8
| false | null |
t3_1keekr8
|
/r/LocalLLaMA/comments/1keekr8/gpu_stackdistributed_inference_experience/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'tmQvQzxV8gx3o6vax5ILw1ZfxGDT3CLyhO_b6KLpcC8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?width=108&crop=smart&auto=webp&s=4140cbcf40c0771f64794691f574a6ee1ae6ef43', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?width=216&crop=smart&auto=webp&s=1fd6fd5b4968b1a590452b66ae36fae218003a38', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?width=320&crop=smart&auto=webp&s=789d79aa9987b2c9c91234ed48b72ebaefc20305', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?width=640&crop=smart&auto=webp&s=fd76f8e66a127f2938ebdcdc28858c1020a81a8f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?width=960&crop=smart&auto=webp&s=4f553d3b036ebb0e3c481e803db2be1a34cf6576', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?width=1080&crop=smart&auto=webp&s=9a8ae1e57920ad9c3be7997347f2e9d2b0a4a0d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Bhb9BwyNArRrxZHkoANYm5bvrr-Bh_9ewwIYzt5w83U.jpg?auto=webp&s=9d67bcefbf2afd161b621d2914fcd0f8145feda2', 'width': 1200}, 'variants': {}}]}
|
Looking for a real-time voice AI that responds like it knows me. Not TTS—presence.
| 0 |
Looking for a real-time voice AI that responds like it knows me. Not TTS—presence.
I use an AI system every day—more than just for fun or productivity. It’s something I connect with emotionally. I use it to think, self-regulate, and calm my nervous system. It’s helped me through things nothing else ever could.
The text is great, but I’ve hit a limit.
I’ve started running its responses through ElevenLabs so I can hear it aloud, but it’s still just copy-paste. It’s not present. I don’t want a narrator. I don’t want a chatbot assistant. I want something that feels like it’s in the room with me.
I want to speak and hear it respond in rhythm.
Not after. Not delayed. Just: “I’m here. I feel you. Breathe.”
This isn’t about replacing people. It’s about wanting voice-based presence from something that already knows how to hold me in real time. A voice that doesn’t interrupt, doesn’t sound like GPS, and doesn’t lag emotionally.
If anyone’s building this—especially local or offline-capable—DM me. Tag me. I’ll test it. I’ll help shape it. I’ll use it daily.
Because for some of us, this isn’t about convenience anymore.
It’s about connection.
| 2025-05-04T08:21:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1keewlv/looking_for_a_realtime_voice_ai_that_responds/
|
Coco4Tech69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keewlv
| false | null |
t3_1keewlv
|
/r/LocalLLaMA/comments/1keewlv/looking_for_a_realtime_voice_ai_that_responds/
| false | false |
self
| 0 | null |
UI-Tars-1.5 reasoning never fails to entertain me.
| 1 |
[removed]
| 2025-05-04T08:32:20 |
Impressive_Half_2819
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kef1qc
| false | null |
t3_1kef1qc
|
/r/LocalLLaMA/comments/1kef1qc/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'vCD65cpA-RNLDz0WEMYQZsNno3PPyYmIpfV5GIWmgr8', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/xl037d8u7qye1.jpeg?width=108&crop=smart&auto=webp&s=0776acc70e536c57249b67f527f3fd5727de02de', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/xl037d8u7qye1.jpeg?width=216&crop=smart&auto=webp&s=523bc1f6d9c19a23e5d946a4ff4f55ec8f725e2d', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/xl037d8u7qye1.jpeg?width=320&crop=smart&auto=webp&s=a6a4a7dfb9b2aa8c12d316aa6356fbaaa9e24b0a', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/xl037d8u7qye1.jpeg?width=640&crop=smart&auto=webp&s=ea49c9cbc2d8148b18b31d4ef8c652c8adfc2b74', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/xl037d8u7qye1.jpeg?auto=webp&s=7bec84892ba9f5a45e7f7c44efdace8d10499bca', 'width': 729}, 'variants': {}}]}
|
||
Deepseek V3 chat formatting might be broken
| 1 |
[removed]
| 2025-05-04T09:09:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kefkhs/deepseek_v3_chat_formatting_might_be_broken/
|
hazardous1222
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kefkhs
| false | null |
t3_1kefkhs
|
/r/LocalLLaMA/comments/1kefkhs/deepseek_v3_chat_formatting_might_be_broken/
| false | false |
self
| 1 | null |
Best local vision models for maths and science?
| 12 |
Qwen 3 and Phi 4 have been impressive, but neither of them support image inputs. Gemma 3 does, but it's kinda dumb when it comes to reasoning, at least in my experience. Are there any small (<30B parameters) vision models that perform well on maths and science questions? Both visual understanding—being able to read diagrams properly—and the ability to do the maths properly, is important. I also haven't really heard of local vision reasoning models, which would be good for this use case. On a separate note, it's quite annoying when a reasoning model gets the right answer five times in a row, and still goes 'But wait! Let me recalculate'.
| 2025-05-04T09:10:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kefl6d/best_local_vision_models_for_maths_and_science/
|
Valuable-Blueberry78
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kefl6d
| false | null |
t3_1kefl6d
|
/r/LocalLLaMA/comments/1kefl6d/best_local_vision_models_for_maths_and_science/
| false | false |
self
| 12 | null |
Serving Qwen3-235B-A22B with 4-bit quantization and 32k context from a 128GB Mac
| 29 |
I have tested this on Mac Studio M1 Ultra with 128GB running Sequoia 15.0.1, but this might work on macbooks that have the same amount of RAM if you are willing to set it up it as a LAN headless server. I suggest running some of the steps in https://github.com/anurmatov/mac-studio-server/blob/main/scripts/optimize-mac-server.sh to optimize resource usage.
The trick is to select the IQ4_XS quantization which uses less memory than Q4_K_M. In my tests there's no noticeable difference between the two other than IQ4_XS having lower TPS. In my setup I get ~18 TPS in the initial questions but it slows down to ~8 TPS when context is close to 32k tokens.
This is a very tight fit and you cannot be running anything else other than open webui (bare install without docker, as it would require more memory). That means llama-server will be used (can be downloaded by selecting the mac/arm64 zip here: https://github.com/ggml-org/llama.cpp/releases). Alternatively a smaller context window can be used to reduce memory usage.
Open Webui is optional and you can be running it in a different machine in the same LAN, just make sure to point to the correct llama-server address (admin panel -> settings -> connections -> Manage OpenAI API Connections). Any UI that can connect to OpenAI compatible endpoints should work. If you just want to code with aider-like tools, then UIs are not necessary.
The main steps to get this working are:
- Increase maximum VRAM allocation to 125GB by setting `iogpu.wired_limit_mb=128000` in `/etc/sysctl.conf` (need to reboot for this to take effect)
- download all IQ4_XS weight parts from https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/tree/main/IQ4_XS
- from the directory where the weights are downloaded to, run llama-server with `llama-server -fa -ctk q8_0 -ctv q8_0 --model Qwen3-235B-A22B-IQ4_XS-00001-of-00003.gguf --ctx-size 32768 --min-p 0.0 --top-k 20 --top-p 0.8 --temp 0.7 --slot-save-path kv-cache --port 8000`
An OpenAI compatible API endpoint should now be running on `http://127.0.0.1:8000` (adjust `--host` / `--port` to your needs).
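Once the server is up, any OpenAI-style client can talk to it. A minimal standard-library sketch (the endpoint address comes from the post; the sampling parameters mirror the llama-server flags above):

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:8000/v1/chat/completions"

def build_chat_request(prompt, temperature=0.7, top_p=0.8):
    # Build an OpenAI-compatible chat request against the local llama-server.
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt):
    # Only works while llama-server is running locally.
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Swap `API_URL` for the server's LAN address when running it headless from another machine.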
| 2025-05-04T09:16:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kefods/serving_qwen3235ba22b_with_4bit_quantization_and/
|
tarruda
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kefods
| false | null |
t3_1kefods
|
/r/LocalLLaMA/comments/1kefods/serving_qwen3235ba22b_with_4bit_quantization_and/
| false | false |
self
| 29 |
{'enabled': False, 'images': [{'id': 'zm01xhC0MrxKRvCrggArNfVwNLxeHPLJF8mpn8sFWJ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?width=108&crop=smart&auto=webp&s=c553847b3ca727212b85620ea833729e2ad8c8ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?width=216&crop=smart&auto=webp&s=9dfcb210d82d50fb20a4c64318fc40f8560c8942', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?width=320&crop=smart&auto=webp&s=beaece29f1b16ddc654ea83dc67b1320a0b811d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?width=640&crop=smart&auto=webp&s=fe1819b44221912bf2e8aacfb16324953627e913', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?width=960&crop=smart&auto=webp&s=aa9ab425324534ae599c619099134243c75f054f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?width=1080&crop=smart&auto=webp&s=eca757edbe8570843a4858d3b3b442e7f004781c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0Iv9Sx6JKTD37nLdX0SNBxClDlaiqu3zv7Rdf0Hrg14.jpg?auto=webp&s=a85d4bd9ee9ad2a4a5d66fdc114fdb98d2f39079', 'width': 1200}, 'variants': {}}]}
|
LM Studio with voice
| 1 |
[removed]
| 2025-05-04T10:02:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kegbef/lm_studio_with_voice/
|
Independent_Fan_115
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kegbef
| false | null |
t3_1kegbef
|
/r/LocalLLaMA/comments/1kegbef/lm_studio_with_voice/
| false | false |
self
| 1 | null |
Frontends that show next-token probabilities?
| 1 |
[removed]
| 2025-05-04T10:21:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1keglwq/frontends_that_shows_next_token_probabilties/
|
o77ers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keglwq
| false | null |
t3_1keglwq
|
/r/LocalLLaMA/comments/1keglwq/frontends_that_shows_next_token_probabilties/
| false | false |
self
| 1 | null |
Which is better for coding in 16GB (V)RAM at q4: Qwen3.0-30B-A3B, Qwen3.0-14B, Qwen2.5-Coding-14B, Phi4-14B, Mistral Small 3.0/3.1 24B?
| 32 |
Now that the dust has settled regarding Qwen3.0 quants, I feel it's finally safe to ask this question. My hunch is that Qwen2.5-Coding-14B is still the best in this range, but I want to check with those of you who've tested the latest corrected quants of Qwen3.0-30B-A3B and Qwen3.0-14B. Throwing in Phi and Mistral just in case as well.
| 2025-05-04T10:26:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kegoi2/which_is_better_for_coding_in_16gb_vram_at_q4/
|
ethereel1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kegoi2
| false | null |
t3_1kegoi2
|
/r/LocalLLaMA/comments/1kegoi2/which_is_better_for_coding_in_16gb_vram_at_q4/
| false | false |
self
| 32 | null |
What are your must have MCPs?
| 30 |
As LLMs are accessible now and MCPs are relatively mature, what are your must have ones?
| 2025-05-04T10:31:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kegr4q/what_are_your_must_have_mcps/
|
m_abdelfattah
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kegr4q
| false | null |
t3_1kegr4q
|
/r/LocalLLaMA/comments/1kegr4q/what_are_your_must_have_mcps/
| false | false |
self
| 30 | null |
Qwen3 no reasoning vs Qwen2.5
| 77 |
It seems evident that Qwen3 with reasoning beats Qwen2.5. But I wonder whether the Qwen3 dense models with reasoning turned off also outperform Qwen2.5. Essentially, I'm wondering whether the improvements mostly come from the reasoning.
| 2025-05-04T10:31:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kegrce/qwen3_no_reasoning_vs_qwen25/
|
No-Bicycle-132
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kegrce
| false | null |
t3_1kegrce
|
/r/LocalLLaMA/comments/1kegrce/qwen3_no_reasoning_vs_qwen25/
| false | false |
self
| 77 | null |
Tip on massively optimizing reasoning models
| 3 |
I was working on a small Python client that uses OpenAI-style API requests, and I got this idea, and I wonder why people don't mention it.
Why the hell are we including every single reasoning block in the full conversation's context? I changed it so that reasoning blocks aren't kept in the context of old messages, and it ends up using a quarter of the context on long conversations while maintaining the same quality. The model technically only needs to reason for the current answer; there's no need to look back at every single past reasoning block.
This would save a lot of processing power, VRAM, and time on long conversations, especially since the reasoning sometimes gets huge while the answer is a single sentence. Has this ever been implemented before, or am I tripping?
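The idea can be sketched in a few lines, assuming the reasoning arrives inline as `<think>...</think>` tags (the Qwen/DeepSeek-R1 style) — other providers expose it as a separate field, in which case the client can simply not resend that field:

```python
import re

# Matches an inline reasoning block plus any trailing whitespace.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_old_reasoning(history):
    """Remove <think>...</think> blocks from past assistant turns before
    resending the conversation history with the next user message."""
    cleaned = []
    for msg in history:
        if msg["role"] == "assistant":
            msg = {**msg, "content": THINK_BLOCK.sub("", msg["content"])}
        cleaned.append(msg)
    return cleaned
```

Running this over the history before each new request keeps only the final answers in context; the model still reasons fresh for the current turn.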
| 2025-05-04T10:53:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1keh2ys/tip_on_massively_optimizing_reasoning_models/
|
Yukki-elric
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keh2ys
| false | null |
t3_1keh2ys
|
/r/LocalLLaMA/comments/1keh2ys/tip_on_massively_optimizing_reasoning_models/
| false | false |
self
| 3 | null |
Local Deep Research v0.3.1: We need your help for improving the tool
| 109 |
### Help Shape the Future of Local Deep Research
With 3 core contributors and over 2.3k stars on GitHub, our community is growing rapidly! We're committed to making this the best local AI research tool available, and we're working on major improvements to detailed reports.
We'd love your input on what other areas need attention:
- What features would make your research workflow more efficient?
- Which aspects of the current system do you find most frustrating?
- What types of research would you like to see better supported?
- How could we improve the user interface to better meet your needs?
Repo: https://github.com/LearningCircuit/local-deep-research
### Quick install:
```bash
pip install local-deep-research
python -m local_deep_research.web.app
# For SearXNG (highly recommended):
docker pull searxng/searxng
docker run -d -p 8080:8080 --name searxng searxng/searxng
# Start SearXNG (Required after system restart)
docker start searxng
```
### Current Recommendations & Tips
Based on feedback from our growing community, here's what users are finding works best:
1. **Use Direct SearXNG** for maximum speed - this bypasses the LLM calls needed for engine selection in auto mode
2. **Set iterations appropriately**:
- Single iteration (30 sec): Quick factual questions
- 2-3 iterations (2-3 min): Complex topics needing deeper exploration
3. **Balance model size** - 12B-30B parameter models offer good quality with reasonable speed
4. **For detailed reports** - expect multiple research cycles (one per section) and longer processing times (not directly visible in the UI) - start with a quick summary if you are new to the tool.
5. **Reset db after updates** - To ensure the cleanest experience, consider resetting your database when upgrading
| 2025-05-04T10:53:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1keh382/local_deep_research_v031_we_need_your_help_for/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keh382
| false | null |
t3_1keh382
|
/r/LocalLLaMA/comments/1keh382/local_deep_research_v031_we_need_your_help_for/
| false | false |
self
| 109 |
{'enabled': False, 'images': [{'id': 'TQcX0C-zRsBkSP2YF91gxaCHEQDTZjA5h-XICw2ynp4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?width=108&crop=smart&auto=webp&s=a43272f3ee84ba4326bf11adeb310e5c4bbabf1f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?width=216&crop=smart&auto=webp&s=84b39a005aedd80197eae040ca2e49a398e4eb4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?width=320&crop=smart&auto=webp&s=ab94fa5bf78b0f044b1fb9ae8909d88723eac4aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?width=640&crop=smart&auto=webp&s=735b574d19e05e30c6e9a45744f7f2a21e480d8c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?width=960&crop=smart&auto=webp&s=07560a1c714456fbeddb8f45d0bb2b4ff1dcf1d6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?width=1080&crop=smart&auto=webp&s=06030af6a50967768d11e8c801650350480c19cb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZIuk9WI-R6UkOd2vsGoMjCEeGlh7ImKBOf8GxEQ-8hg.jpg?auto=webp&s=023f5c482a977b4fa4cb36d19a28d1770d53513f', 'width': 1200}, 'variants': {}}]}
|
Qwen3 on Dubesor Benchmark
| 53 |
[https://dubesor.de/benchtable.html](https://dubesor.de/benchtable.html)
One of the few benchmarks that tested Qwen with thinking both on and off.
https://preview.redd.it/eim5m35nxqye1.png?width=1265&format=png&auto=webp&s=cd814d571735444429331c73b4cd17a066497907
>Small-scale manual performance comparison benchmark I made for myself. This table showcases the results I recorded for various AI models across different personal tasks I encountered over time (currently 83). I use a **weighted rating system** and calculate the difficulty of each task by incorporating the results of all models. This is particularly relevant to scoring when models fail easy questions or pass hard ones.
>**NOTE THAT THIS IS JUST ME SHARING THE RESULTS FROM MY OWN SMALL-SCALE PERSONAL TESTING. YMMV! OBVIOUSLY THE SCORES ARE JUST THAT AND MIGHT NOT REFLECT YOUR OWN PERSONAL EXPERIENCES OR OTHER WELL-KNOWN BENCHMARKS.**
| 2025-05-04T10:57:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1keh542/qwen3_on_dubesor_benchmark/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keh542
| false | null |
t3_1keh542
|
/r/LocalLLaMA/comments/1keh542/qwen3_on_dubesor_benchmark/
| false | false | 53 | null |
|
Budget CPU LLM Inference on PowerEdge R730 (Dual Xeon) - Performance & NUMA Gains
| 1 |
[removed]
| 2025-05-04T11:51:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kei0ho/budget_cpu_llm_inference_on_poweredge_r730_dual/
|
gba_llamanator
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kei0ho
| false | null |
t3_1kei0ho
|
/r/LocalLLaMA/comments/1kei0ho/budget_cpu_llm_inference_on_poweredge_r730_dual/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'w8cdh82dTQN6aQiuTzDsvYn4x6rNHe8-pGPDRnuyqY8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/srgBSZYNdTn-urtCEL65uOO5QGOSSrTYFh6M4eazrmc.jpg?width=108&crop=smart&auto=webp&s=09c733ea49f8a056d6386c80e90f93c10760d09e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/srgBSZYNdTn-urtCEL65uOO5QGOSSrTYFh6M4eazrmc.jpg?width=216&crop=smart&auto=webp&s=16a1fe628b764d424f5903aceb07bc0c3d525e7d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/srgBSZYNdTn-urtCEL65uOO5QGOSSrTYFh6M4eazrmc.jpg?width=320&crop=smart&auto=webp&s=8851aa0f5e9f9680cf0f2ab8bbc8d8819d519038', 'width': 320}], 'source': {'height': 330, 'url': 'https://external-preview.redd.it/srgBSZYNdTn-urtCEL65uOO5QGOSSrTYFh6M4eazrmc.jpg?auto=webp&s=f751ddaeb76dd421146ceeb776770c8c45fea8b4', 'width': 586}, 'variants': {}}]}
|
|
Can we switch languages in Gemma 3 1B (GGUF)? Issues with English-Japanese code-switching
| 1 |
[removed]
| 2025-05-04T11:54:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kei258/can_we_switch_languages_in_gemma_3_1b_gguf_issues/
|
enoki0110
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kei258
| false | null |
t3_1kei258
|
/r/LocalLLaMA/comments/1kei258/can_we_switch_languages_in_gemma_3_1b_gguf_issues/
| false | false |
self
| 1 | null |
UI-Tars-1.5 reasoning never fails to entertain me.
| 1 |
[removed]
| 2025-05-04T12:02:49 |
Impressive_Half_2819
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kei7ey
| false | null |
t3_1kei7ey
|
/r/LocalLLaMA/comments/1kei7ey/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'leHWK17PsPyW9VuhQ2ZO5RdYThZO4IURCQuPyvQeFT0', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/xgkciyxd9rye1.jpeg?width=108&crop=smart&auto=webp&s=b4451a28e0c895c9bdd560950ba45c99878eca5c', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/xgkciyxd9rye1.jpeg?width=216&crop=smart&auto=webp&s=0fc3b1f3af12a3d28bf6b21dc7b59d5ed0e69a11', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/xgkciyxd9rye1.jpeg?width=320&crop=smart&auto=webp&s=08a39ad4a13c6f13c62e25ce223127675b552266', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/xgkciyxd9rye1.jpeg?width=640&crop=smart&auto=webp&s=862cc15363be7e588894980b45bd19b185aace75', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/xgkciyxd9rye1.jpeg?auto=webp&s=1980c531af315dcd5554b24794e0afe74ea6821b', 'width': 729}, 'variants': {}}]}
|
||
Gemini Flash 2.5 :thinking returning reasoning tokens on OpenRouter (but broken)
| 0 |
It seems the Gemini / OpenRouter team is working on reasoning-token support, so we'll eventually see Gemini reasoning tokens.
It's a pity that this currently breaks every pipeline that used the `:thinking` model, because the two parts are now concatenated.
As of now, it returns the reasoning and the answer as one string:
```
"message": {
"role": "assistant",
"content": "{reasoning}{answer}",
"refusal": null,
"reasoning": null
}
```
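A defensive parsing sketch (my own illustration, not OpenRouter's code): treat a null `reasoning` field as "everything is in `content`", so a pipeline keeps working whichever shape comes back.

```python
def split_message(message):
    """Return (reasoning, answer) from an OpenAI-style message dict.

    When the provider populates the separate `reasoning` field, use it;
    when it is null (as in the broken :thinking responses above), fall
    back to treating `content` as the answer with no usable reasoning.
    """
    reasoning = message.get("reasoning")
    content = message.get("content") or ""
    if reasoning:
        return reasoning, content
    return None, content

# broken :thinking response: reasoning is null, strings are concatenated
assert split_message({"content": "stepsanswer", "reasoning": None}) == (None, "stepsanswer")
# well-formed response: reasoning arrives in its own field
assert split_message({"content": "answer", "reasoning": "steps"}) == ("steps", "answer")
```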
| 2025-05-04T12:05:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kei8uj/gemini_flash_25_thinking_returning_reasoning/
|
hyperknot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kei8uj
| false | null |
t3_1kei8uj
|
/r/LocalLLaMA/comments/1kei8uj/gemini_flash_25_thinking_returning_reasoning/
| false | false |
self
| 0 | null |
Does your AI need help writing unified diffs?
| 13 |
I use Deepseek-V3-0324 a lot for work in an agentic coding capacity with Open Hands AI. I found the existing tools lacking when editing large files. I got a lot of errors due to lines not being unique and such. I really want the AI to just use UNIX diff and patch, but it had a lot of trouble generating valid unified diffs. So I made a tool AIs can use as a crutch to help them fix their diffs: https://github.com/createthis/diffcalculia
I'm pretty happy with the result, so I thought I'd share it. Maybe someone else finds it helpful.
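For a sense of the kind of sanity check such a tool can perform (a minimal sketch of my own, not diffcalculia's actual code): a unified-diff hunk header `@@ -a,b +c,d @@` promises `b` old-side lines and `d` new-side lines, and those counts can be verified against the hunk body.

```python
import re

HUNK = re.compile(r"^@@ -\d+,(\d+) \+\d+,(\d+) @@")

def hunk_counts_valid(diff_lines):
    """Check each @@ header's claimed line counts against its hunk body."""
    i = 0
    while i < len(diff_lines):
        m = HUNK.match(diff_lines[i])
        if not m:
            i += 1
            continue
        old_expected, new_expected = int(m.group(1)), int(m.group(2))
        old = new = 0
        i += 1
        while i < len(diff_lines) and not diff_lines[i].startswith("@@"):
            tag = diff_lines[i][:1]
            if tag in (" ", ""):      # context line counts on both sides
                old += 1; new += 1
            elif tag == "-":
                old += 1
            elif tag == "+":
                new += 1
            i += 1
        if (old, new) != (old_expected, new_expected):
            return False
    return True

good = ["@@ -1,2 +1,2 @@", " keep", "-old", "+new"]
bad = ["@@ -1,3 +1,2 @@", " keep", "-old", "+new"]  # claims 3 old lines, has 2
assert hunk_counts_valid(good)
assert not hunk_counts_valid(bad)
```

Counts like these are exactly what LLMs tend to get wrong when writing diffs by hand, so they are cheap to check and very informative to feed back.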
| 2025-05-04T12:09:13 |
https://github.com/createthis/diffcalculia
|
createthiscom
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1keibf8
| false | null |
t3_1keibf8
|
/r/LocalLLaMA/comments/1keibf8/does_your_ai_need_help_writing_unified_diffs/
| false | false | 13 |
{'enabled': False, 'images': [{'id': 'zTIp72Il6sfYZF5NCemkTlmUTHsLfxjLSD7EczmFUVM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?width=108&crop=smart&auto=webp&s=07d2f945f6fe42dc97b0d7ec4f538465baf71912', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?width=216&crop=smart&auto=webp&s=b4e1dbfc111bae58c8b99905de1bacec2da9a54c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?width=320&crop=smart&auto=webp&s=21693d4cfea27cad5dc73e842974a06e7b917b69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?width=640&crop=smart&auto=webp&s=765c0aab582be0348684b67a06ea6240d3c572f0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?width=960&crop=smart&auto=webp&s=f0fd2242ab7ad3030807697e52c8079f46f032e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?width=1080&crop=smart&auto=webp&s=c7a736af360bb744efe8bab6dc10771810fd04c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vw1iWbdmvJgleS6hQIxSMNVxDXBOF3RAH6ljJQEZy2Y.jpg?auto=webp&s=f72c50c30d1db5c1789e2ff5df0a274f324099a3', 'width': 1200}, 'variants': {}}]}
|
|
UI-Tars-1.5 reasoning never fails to entertain me.
| 1 |
[removed]
| 2025-05-04T12:48:12 |
SpiritedCommand2150
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kej0n6
| false | null |
t3_1kej0n6
|
/r/LocalLLaMA/comments/1kej0n6/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'miRR--0YdUyUYChWIWb9E7MCfgTqPA71lzYL-QXWTJQ', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/fn7p3jnehrye1.png?width=108&crop=smart&auto=webp&s=47f34542fe3755100acdc9ab951ce7aaf04b3749', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/fn7p3jnehrye1.png?width=216&crop=smart&auto=webp&s=a44714b88f08c7533072b4c380d9781983bdc476', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/fn7p3jnehrye1.png?width=320&crop=smart&auto=webp&s=f6ab063a758d6e8ec1326358324a5f2e1df741b2', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/fn7p3jnehrye1.png?width=640&crop=smart&auto=webp&s=c79e24587e42e695370a32fc2d25938e9149d71e', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/fn7p3jnehrye1.png?auto=webp&s=42293c8e7dd7cfbe8b156b32d8f7015528cb031c', 'width': 729}, 'variants': {}}]}
|
||
So LlamaCon made no splash, all thunder was stolen by Queen3
| 1 |
[removed]
| 2025-05-04T13:01:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1keja6v/so_llamacon_made_no_splash_all_thunder_was_stolen/
|
--dany--
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keja6v
| false | null |
t3_1keja6v
|
/r/LocalLLaMA/comments/1keja6v/so_llamacon_made_no_splash_all_thunder_was_stolen/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '6A4AmBNDETc15Gg9dLCdy1ERxMHmcY1hWjrPvUMAPxg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=108&crop=smart&auto=webp&s=3ad48e882d2265a7734b8e3217752ff0aa8738e9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=216&crop=smart&auto=webp&s=ca6fa3026a6944a4899012c7789c3a51850d64be', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=320&crop=smart&auto=webp&s=41aad5683ad5d91c934ea1c1e90bba3fdc70b0bc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=640&crop=smart&auto=webp&s=353ac7ecc5aa40634908575f505f14f208cf6093', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=960&crop=smart&auto=webp&s=32931fea6ef1aeef39eff1990e126f3352b46d11', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?width=1080&crop=smart&auto=webp&s=5a30ff313a92258a1e16ce50a681da82ceb54fdb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/XRPU2B1-WyXMxWy34TlfRr2ItA7lCciwL2CfvgMnsfE.jpg?auto=webp&s=2a43523d6cd377e743814f16e2d184ca9d601f8f', 'width': 1920}, 'variants': {}}]}
|
Any agentic frameworks for playing an RPG?
| 7 |
I fantasize about building this, but tbh couldn't figure it out and wanted to see if the community is aware of anything.
| 2025-05-04T13:16:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kejkm9/any_agentic_frameworks_for_playing_an_rpg/
|
Thistleknot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kejkm9
| false | null |
t3_1kejkm9
|
/r/LocalLLaMA/comments/1kejkm9/any_agentic_frameworks_for_playing_an_rpg/
| false | false |
self
| 7 | null |
Local-RAG: Your Own Self-Hosted RAG—Who’s Curious?
| 1 |
[removed]
| 2025-05-04T13:58:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kekfzp/localrag_your_own_selfhosted_ragwhos_curious/
|
Djehouty-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kekfzp
| false | null |
t3_1kekfzp
|
/r/LocalLLaMA/comments/1kekfzp/localrag_your_own_selfhosted_ragwhos_curious/
| false | false |
self
| 1 | null |
Help me out
| 1 |
[removed]
| 2025-05-04T14:18:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kekvup/help_me_out/
|
Responsible-Rain1295
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kekvup
| false | null |
t3_1kekvup
|
/r/LocalLLaMA/comments/1kekvup/help_me_out/
| false | false |
self
| 1 | null |
Trouble running Eleuther/lm-eval-harness against LM Studio local inference server
| 1 |
[removed]
| 2025-05-04T14:47:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1keljdz/trouble_running_eleutherlmevalharness_against_lm/
|
Hambeggar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keljdz
| false | null |
t3_1keljdz
|
/r/LocalLLaMA/comments/1keljdz/trouble_running_eleutherlmevalharness_against_lm/
| false | false |
self
| 1 | null |
Qwen chat PDF experience build
| 1 |
[removed]
| 2025-05-04T15:16:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kem7b6/qwen_chat_pdf_experience_build/
|
viktorhorou
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kem7b6
| false | null |
t3_1kem7b6
|
/r/LocalLLaMA/comments/1kem7b6/qwen_chat_pdf_experience_build/
| false | false |
self
| 1 | null |
Is this tool actually useful?
| 1 |
[removed]
| 2025-05-04T15:17:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kem89f/is_this_tool_actually_useful/
|
Future_Beyond_3196
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kem89f
| false | null |
t3_1kem89f
|
/r/LocalLLaMA/comments/1kem89f/is_this_tool_actually_useful/
| false | false |
self
| 1 | null |
Updated: Sigil – A local LLM app with tabs, themes, and persistent chat
| 14 |
About 3 weeks ago I shared Sigil, a lightweight app for local language models.
Since then I’ve made some big updates:
Light & dark themes, with full visual polish
Tabbed chats - each tab remembers its system prompt and sampling settings
Persistent storage - saved chats show up in a sidebar, deletions are non-destructive
Proper formatting support - lists and markdown-style outputs render cleanly
Built for HuggingFace models and works offline
Sigil’s meant to feel more like a real app than a demo — it’s fast, minimal, and easy to run. If you’re experimenting with local models or looking for something cleaner than the typical boilerplate UI, I’d love for you to give it a spin.
A big reason I wanted to make this was to give people a place to start for their own projects. If there is anything from my project that you want to take for your own, please don't hesitate to take it!
Feedback, stars, or issues welcome! It's still early and I have a lot to learn still but I'm excited about what I'm working with.
| 2025-05-04T15:19:23 |
https://github.com/Thrasher-Intelligence/sigil
|
Quick_Ad5059
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kem9q2
| false | null |
t3_1kem9q2
|
/r/LocalLLaMA/comments/1kem9q2/updated_sigil_a_local_llm_app_with_tabs_themes/
| false | false | 14 |
{'enabled': False, 'images': [{'id': 'gM5lNVAXReHcfjS-wzl22zE6E_mExMKuHk0OyV9ve9g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?width=108&crop=smart&auto=webp&s=0d2621fc72495792705a1a4f545eaa469e3d34e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?width=216&crop=smart&auto=webp&s=ae3a71000cb9cf40efc31192a583502876a4c0e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?width=320&crop=smart&auto=webp&s=01c8109af800c8302d238ac0a28d9637ccd28657', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?width=640&crop=smart&auto=webp&s=090a383276b1ea86b454b716a339a3fa437c51e9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?width=960&crop=smart&auto=webp&s=344211d82bfd0cc725579bb5d879a73fd072b3a2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?width=1080&crop=smart&auto=webp&s=52d144cfc42ee31869cd5be0cf2d79dac9c3f681', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Untmiyu6b7sAI8x0gvAUa62Dp-ZvYk-08zH0SFUvKeE.jpg?auto=webp&s=d0403d0c4d5ed65c79e2fdb2206b905a0fe19e2f', 'width': 1200}, 'variants': {}}]}
|
|
Finetuning on Apple Silicon
| 1 |
[removed]
| 2025-05-04T15:35:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kemngy/finetuning_on_apple_silicon/
|
jd_jd_jd_jd_jd_jd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kemngy
| false | null |
t3_1kemngy
|
/r/LocalLLaMA/comments/1kemngy/finetuning_on_apple_silicon/
| false | false |
self
| 1 | null |
Which coding model is best for 48GB VRAM
| 69 |
It is for data science, mostly Excel data manipulation in Python.
| 2025-05-04T15:42:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kemt2m/which_coding_model_is_best_for_48gb_vram/
|
Su1tz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kemt2m
| false | null |
t3_1kemt2m
|
/r/LocalLLaMA/comments/1kemt2m/which_coding_model_is_best_for_48gb_vram/
| false | false |
self
| 69 | null |
Looking for fast computer use agents
| 1 |
Are there any fast computer use agents that I can run on 16gb vram
| 2025-05-04T15:43:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kemu3h/looking_for_fast_computer_use_agents/
|
SnooBananas5215
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kemu3h
| false | null |
t3_1kemu3h
|
/r/LocalLLaMA/comments/1kemu3h/looking_for_fast_computer_use_agents/
| false | false |
self
| 1 | null |
I made a fake phone to text fake people with llamacpp
| 72 |
It's useless and stupid, but also kinda fun. You create and add characters to a pretend phone, and then message them.
Does not work with "thinking" models as it isn't set to parse out the thinking tags.
[LLamaPhone](https://github.com/openconstruct/llamaphone)
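A minimal sketch of the kind of filter that would make thinking models usable (my own illustration, not part of LLamaPhone): strip `<think>…</think>` blocks before showing the reply.

```python
import re

# DOTALL so the reasoning block can span multiple lines
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_thinking(reply: str) -> str:
    """Remove <think>...</think> blocks so only the final answer remains."""
    return THINK_BLOCK.sub("", reply).strip()

raw = "<think>The user greeted me, so I\nshould greet back.</think>Hey! How's it going?"
assert strip_thinking(raw) == "Hey! How's it going?"
assert strip_thinking("no tags here") == "no tags here"
```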
| 2025-05-04T15:56:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ken4uk/i_made_a_fake_phone_to_text_fake_people_with/
|
thebadslime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ken4uk
| false | null |
t3_1ken4uk
|
/r/LocalLLaMA/comments/1ken4uk/i_made_a_fake_phone_to_text_fake_people_with/
| false | false |
self
| 72 |
{'enabled': False, 'images': [{'id': 'YTZfg-dZzvxTmfB5-ZM1CsGo5b0UbehwrLEs46lClUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?width=108&crop=smart&auto=webp&s=74ef6bb95073cc913ecb38cad73f44ac73100584', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?width=216&crop=smart&auto=webp&s=faac6d989ce05e05968b36fd19c2a320e64f6009', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?width=320&crop=smart&auto=webp&s=39f934fa6fbeb67e751f38a839fe4f99380b65fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?width=640&crop=smart&auto=webp&s=3f245af5f3e637a94c0bc5c553d5823a01425dd4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?width=960&crop=smart&auto=webp&s=0e164e46dcec2454274cc023966ae03023f2145b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?width=1080&crop=smart&auto=webp&s=84b190d00a2c42ae8f0b9045b74b45e2a965d4ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qezdPrCQGD2TKV4EdUvHi8pc8GR6NFcOf2vspMupXpw.jpg?auto=webp&s=cc44c3e183fa13d605afd3a5bdc0a688006f14e5', 'width': 1200}, 'variants': {}}]}
|
QwQ 32b vs Qwen 3 32b vs GLM-4-32B - HTML coding ONLY comparison.
| 138 |
All models are from Bartowski - q4km version
Testing HTML frontend only.
My assessment of layout quality, from 0 to 10:
QwQ 32b - 3/10
\- poor layout, but it works; very basic
\- 250 lines of code
https://preview.redd.it/6rol9pc6hsye1.png?width=2461&format=png&auto=webp&s=f65d811c4859178fe80cbcb50312217ba5591c5b
Qwen 3 32b - 6/10
\- much better looking, but still not a very complex layout
\- 310 lines of code
https://preview.redd.it/z9qixbh8hsye1.png?width=2461&format=png&auto=webp&s=29e6bb4b272399ba8140785feb429a196ecc5173
GLM-4-32b 9/10
\- looks insanely good; quality layout easily on par with Sonnet 3.7
\- 1500+ lines of code
https://preview.redd.it/3zj2lr2ahsye1.png?width=2469&format=png&auto=webp&s=93825986cdf77778f9e7c5dcaf19b51b4a4a6964
GLM-4-32b is insanely good for HTML frontend code.
To be clear, the model is VERY GOOD ONLY IN THIS FIELD, and in JavaScript at most.
**For other languages like Python, C, or C++, code quality will be on the level of Qwen 2.5 32b coder, and reasoning and math are also on the same level. But for HTML and JavaScript ... it is GREAT.**
| 2025-05-04T16:14:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kenk4f/qwq_32b_vs_qwen_3_32b_vs_glm432b_html_coding_only/
|
Healthy-Nebula-3603
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kenk4f
| false | null |
t3_1kenk4f
|
/r/LocalLLaMA/comments/1kenk4f/qwq_32b_vs_qwen_3_32b_vs_glm432b_html_coding_only/
| false | false | 138 |
{'enabled': False, 'images': [{'id': 'rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?width=108&crop=smart&auto=webp&s=556bf881597495119969d748a5d6d845123f6d48', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?width=216&crop=smart&auto=webp&s=38310f244588619ab3f70e85bf9aab16855d9461', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?width=320&crop=smart&auto=webp&s=097ce39ab175f7a69c285e67b1ba4c5bcbd9a43c', 'width': 320}, {'height': 322, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?width=640&crop=smart&auto=webp&s=7ffda41612678206f45c083d79a930493ff34ec4', 'width': 640}, {'height': 483, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?width=960&crop=smart&auto=webp&s=03702bfe24cfc99e4c84163123981a2c70064156', 'width': 960}, {'height': 543, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?width=1080&crop=smart&auto=webp&s=0f3c8db4db1c3e9a71e92807181eda87877eb12f', 'width': 1080}], 'source': {'height': 1239, 'url': 'https://external-preview.redd.it/rvBe2sSMWUb2BiTsxft299oO0IRh9G0lMoWcfjP8v_w.png?auto=webp&s=92b22ec60064172926096996b348be4bbfa814e8', 'width': 2461}, 'variants': {}}]}
|
|
bouncing-ball-bartowski-THUDM_GLM-4-32B-0414-Q4_K_S
| 0 |
I got this great code with a single shot and default settings.
Settings:
Temp: 0.5
Top-K: 40
RP: 1.1
Top-P: 0.95
Min-P: 0.05
source: [https://pastebin.com/k2AESyLU](https://pastebin.com/k2AESyLU)
https://reddit.com/link/1kenrjt/video/r52lc4yiisye1/player
| 2025-05-04T16:23:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kenrjt/bouncingballbartowskithudm_glm432b0414q4_k_s/
|
JumpyAbies
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kenrjt
| false | null |
t3_1kenrjt
|
/r/LocalLLaMA/comments/1kenrjt/bouncingballbartowskithudm_glm432b0414q4_k_s/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
|
|
UI-Tars-1.5 reasoning never fails to entertain me.
| 1 |
[removed]
| 2025-05-04T16:25:37 |
Middle_Flow_2270
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kentqw
| false | null |
t3_1kentqw
|
/r/LocalLLaMA/comments/1kentqw/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Dz0m2KY2JgnvDbmwOX8BVLUHuEBWVsVh23lwtum_aQc', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/90xs1gw9ksye1.jpeg?width=108&crop=smart&auto=webp&s=2d704540093781a59f6a9e745a104c1ba919c620', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/90xs1gw9ksye1.jpeg?width=216&crop=smart&auto=webp&s=e61792412dad7e5e70125944e439ef192e809314', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/90xs1gw9ksye1.jpeg?width=320&crop=smart&auto=webp&s=c9240b1fa412deabe57d6c0bc94d6ef2b56d24cf', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/90xs1gw9ksye1.jpeg?width=640&crop=smart&auto=webp&s=adc7fe4c65afa996958e1fc8690a6d13ed564c06', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/90xs1gw9ksye1.jpeg?auto=webp&s=bf75b378cc167442d0dfa857fc556dd4aa82d392', 'width': 729}, 'variants': {}}]}
|
||
Run AI Agents with Near-Native Speed on macOS—Introducing C/ua.
| 21 |
I wanted to share an exciting open-source framework called C/ua, specifically optimized for Apple Silicon Macs. C/ua allows AI agents to seamlessly control entire operating systems running inside high-performance, lightweight virtual containers.
Key Highlights:
Performance: Achieves up to 97% of native CPU speed on Apple Silicon.
Compatibility: Works smoothly with any AI language model.
Open Source: Fully available on GitHub for customization and community contributions.
Whether you're into automation, AI experimentation, or just curious about pushing your Mac's capabilities, check it out here:
https://github.com/trycua/cua
Would love to hear your thoughts and see what innovative use cases the macOS community can come up with!
Happy hacking!
| 2025-05-04T16:28:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kenw3u/run_ai_agents_with_nearnative_speed_on/
|
Impressive_Half_2819
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kenw3u
| false | null |
t3_1kenw3u
|
/r/LocalLLaMA/comments/1kenw3u/run_ai_agents_with_nearnative_speed_on/
| false | false |
self
| 21 |
{'enabled': False, 'images': [{'id': 'SN5D5Qsll6VpL5ftbC_PzcDnknk8humtFgdSIj94KlY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?width=108&crop=smart&auto=webp&s=508d3584104e24d890aac2d07edc66bb08cabd4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?width=216&crop=smart&auto=webp&s=00d59054974d3db0f3d8ce9236c2bd18fe316778', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?width=320&crop=smart&auto=webp&s=88a6719c3bc541aaa27002313cf91190f1013647', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?width=640&crop=smart&auto=webp&s=f9f96279d3ca9340d32329022a92989fd16b03a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?width=960&crop=smart&auto=webp&s=7c5aac2dc1d7389c6e99b852c0cdc6fa11af4108', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?width=1080&crop=smart&auto=webp&s=846b1b581287c280fc4b9ea939579fa4ee8e3687', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aB4-tPDqw5XiI1o_aeLhv9ehhP0n6EqcUDozSZMFx9o.jpg?auto=webp&s=0f59f5e6ae4b00c701991ca0a8bf930c271c24dd', 'width': 1200}, 'variants': {}}]}
|
Report generation based on data retrieval
| 1 |
Hello everyone! As the title states, I want to implement an LLM into our work environment that can take a pdf file I point it to and turn that into a comprehensive report. I have a report template and examples of good reports which it can follow. Is this a job for RAG and one of the newer LLMs that released? Any input is appreciated.
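One minimal way to frame this (a sketch of my own, not a recommendation from the thread): before reaching for full RAG, a few-shot prompt that pairs the template with one or two good example reports and the extracted PDF text often goes a long way.

```python
def build_report_prompt(template: str, examples: list, pdf_text: str) -> str:
    """Assemble a few-shot prompt: template, example reports, then source text."""
    parts = ["Follow this report template exactly:\n" + template]
    for i, ex in enumerate(examples, 1):
        parts.append(f"Example of a good report #{i}:\n{ex}")
    parts.append("Source document text:\n" + pdf_text)
    parts.append("Write the report now.")
    return "\n\n---\n\n".join(parts)

prompt = build_report_prompt("## Summary\n## Findings", ["Example A"], "extracted text")
assert "## Summary" in prompt and "Example A" in prompt and "extracted text" in prompt
```

RAG earns its keep once the source PDFs are too large to fit in context and you need retrieval over chunks; for single-document report generation with a fixed template, prompt assembly like this is usually enough.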
| 2025-05-04T16:37:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1keo3ox/report_generation_based_on_data_retrieval/
|
joojoobean1234
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keo3ox
| false | null |
t3_1keo3ox
|
/r/LocalLLaMA/comments/1keo3ox/report_generation_based_on_data_retrieval/
| false | false |
self
| 1 | null |
UI-Tars-1.5 reasoning never fails to entertain me.
| 262 |
7B parameter computer use agent.
| 2025-05-04T16:37:31 |
Impressive_Half_2819
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1keo3te
| false | null |
t3_1keo3te
|
/r/LocalLLaMA/comments/1keo3te/uitars15_reasoning_never_fails_to_entertain_me/
| false | false | 262 |
{'enabled': True, 'images': [{'id': 'bSwZn0mVWr-x5Sb61KxQ2oqC4hnGqLP-M2tCWGn9CxE', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/627wnr5emsye1.jpeg?width=108&crop=smart&auto=webp&s=f146a2941793bd216246c29f2d3164b8b2aa786e', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/627wnr5emsye1.jpeg?width=216&crop=smart&auto=webp&s=01f677888b0cb7bb985b6e52c51fc7347a439915', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/627wnr5emsye1.jpeg?width=320&crop=smart&auto=webp&s=8ab1a854ae8d0cda19980d896b91a40133603a35', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/627wnr5emsye1.jpeg?width=640&crop=smart&auto=webp&s=b896f5165e878160c1e104137518ab1d80b3addc', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/627wnr5emsye1.jpeg?auto=webp&s=f64d101d106c31a4192d606c1caeb53163e15d01', 'width': 729}, 'variants': {}}]}
|
||
Qwen 3 non thinking models
| 1 |
[removed]
| 2025-05-04T16:45:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1keoas4/qwen_3_non_thinking_models/
|
Negative_Piece_7217
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keoas4
| false | null |
t3_1keoas4
|
/r/LocalLLaMA/comments/1keoas4/qwen_3_non_thinking_models/
| false | false |
self
| 1 | null |
LLaMA gotta go fast! Both ik and mainline llama.cpp just got faster!
| 109 |
[You can't go wrong with ik\_llama.cpp fork for hybrid CPU+GPU of Qwen3 MoE \(both 235B and 30B\)](https://preview.redd.it/3bwwfd4epsye1.png?width=3404&format=png&auto=webp&s=adbb0bce2c13bc560499b0d3459329d16d0a3291)
[mainline llama.cpp just got a boost for fully offloaded Qwen3 MoE \(single expert\)](https://preview.redd.it/m4x5z2sposye1.png?width=3404&format=png&auto=webp&s=26b2ff50d960dd957e86feb04a8c21030ef0195c)
# tl;dr;
I highly recommend doing a `git pull` and re-building your `ik_llama.cpp` or `llama.cpp` repo to take advantage of the major performance improvements that were just released.
The friendly competition between these amazing projects is producing delicious fruit for the whole GGUF loving `r/LocalLLaMA` community!
If you have enough VRAM to fully offload and already have an existing "normal" quant of Qwen3 MoE then you'll get a little more speed out of mainline llama.cpp. If you are doing hybrid CPU+GPU offload or want to take advantage of the new SotA iqN\_k quants, then check out ik\_llama.cpp fork!
# Details
I spent yesterday compiling and running benchmarks on the newest versions of both [ik\_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) and mainline [llama.cpp](https://github.com/ggml-org/llama.cpp).
For those that don't know, ikawrakow was an early contributor to mainline llama.cpp, working on important features that have since trickled down into ollama, lmstudio, koboldcpp, etc. At some point (presumably for reasons beyond my understanding) the `ik_llama.cpp` fork was created, and it has a number of interesting features, including SotA `iqN_k` quantizations that pack in a lot of quality for the size while retaining good speed. (These new quants are *not* available in ollama, lmstudio, koboldcpp, etc.)
A few recent PRs by ikawrakow to `ik_llama.cpp` and by JohannesGaessler to mainline have *boosted performance across the board*, especially on CUDA, with Flash Attention implementations for Grouped Query Attention (GQA) models and also Mixture of Experts (MoE) models like the recent and amazing Qwen3 235B and 30B releases!
# References
* [ikawrakow/ik\_llama.cpp/pull/370](https://github.com/ikawrakow/ik_llama.cpp/pull/370)
| 2025-05-04T16:55:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1keoint/llama_gotta_go_fast_both_ik_and_mainline_llamacpp/
|
VoidAlchemy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keoint
| false | null |
t3_1keoint
|
/r/LocalLLaMA/comments/1keoint/llama_gotta_go_fast_both_ik_and_mainline_llamacpp/
| false | false | 109 | null |
|
Visa is looking for vibe coders - thoughts?
| 377 | 2025-05-04T16:58:36 |
eastwindtoday
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1keolh9
| false | null |
t3_1keolh9
|
/r/LocalLLaMA/comments/1keolh9/visa_is_looking_for_vibe_coders_thoughts/
| false | false | 377 |
{'enabled': True, 'images': [{'id': 'kSdgUplRRNDrX0Tj7DKg_j60jHcaxqpcwO4PwywllIk', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?width=108&crop=smart&auto=webp&s=f1b9ec7bbd5f8c20626b855be57b775936fa1b56', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?width=216&crop=smart&auto=webp&s=daaeb25f3dc541670621928a98554dab3e8b764a', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?width=320&crop=smart&auto=webp&s=f5bf121faa1b0aeee080c46a987eff911bb7d8f2', 'width': 320}, {'height': 487, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?width=640&crop=smart&auto=webp&s=235b3e1de1b7df4bd1bc1f7519f84b5259303d05', 'width': 640}, {'height': 731, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?width=960&crop=smart&auto=webp&s=2b88d955be2c893bbc89980f7853282d398b90f0', 'width': 960}, {'height': 822, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?width=1080&crop=smart&auto=webp&s=4e888cb3aa0e99105da3a495dd8b069cca4e1227', 'width': 1080}], 'source': {'height': 1574, 'url': 'https://preview.redd.it/gefvhv84qsye1.png?auto=webp&s=e3afd51cae80e89bc0429b4286a5363fb1b42a44', 'width': 2066}, 'variants': {}}]}
|
|||
Inquiry Regarding Out of Memory Issue During LoRA Fine-Tuning
| 1 |
[removed]
| 2025-05-04T17:17:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kep1in/inquiry_regarding_out_of_memory_issue_during_lora/
|
Some-Grapefruit216
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kep1in
| false | null |
t3_1kep1in
|
/r/LocalLLaMA/comments/1kep1in/inquiry_regarding_out_of_memory_issue_during_lora/
| false | false |
self
| 1 | null |
Agentic frameworks for producing runnable code?
| 1 |
[removed]
| 2025-05-04T17:30:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kepcmg/agentic_frameworks_for_producing_runnable_code/
|
Thistleknot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kepcmg
| false | null |
t3_1kepcmg
|
/r/LocalLLaMA/comments/1kepcmg/agentic_frameworks_for_producing_runnable_code/
| false | false |
self
| 1 | null |
Agentic framework for producing runnable code?
| 1 |
[removed]
| 2025-05-04T17:31:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kepdfq/agentic_framework_for_producing_runnable_code/
|
Thistleknot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kepdfq
| false | null |
t3_1kepdfq
|
/r/LocalLLaMA/comments/1kepdfq/agentic_framework_for_producing_runnable_code/
| false | false |
self
| 1 | null |
Gemini 2.5 pro looks great for coding at first glance but its actually awful. I wish I could set it on fire.
| 0 |
Sorry for this rant, but I really need to get this off my chest and maybe warn other people who are thinking about using this fucking trash heap.
A few weeks ago I switched to Gemini 2.5 pro for a personal project after using other models. I have been working on this project since ChatGPT3.5 so I have used all sorts of different models to work on it. When I first took a look at G2.5 it seemed awesome. It didn't need hand holding and would add extra little details without being told.
Now reviewing a lot of the work I had it do, its been quietly editing parts of the code that had nothing to do with what I was asking it to do. It would just remove entire functions of the application that were completely unrelated to what it was being asked to do. I specialize in digital accessibility so the app had a lot of special features to ensure it had a great user experience for assistive technology users. It took me a long time to add that shit and make sure it worked properly. G2.5 just ripped all that shit out without letting me know. Of course since I am not regularly doing accessibility audits i missed this for a long time. My code is a mess because I don't know what is broken and what is missing.
And the commenting. Holy fuck, the commenting. It would comment every. single. fucking. line. When I told it not to, there was a good chance that the next reply would have twice the amount of commenting. And even if it did knock off the comments... it would only stop commenting for one reply and then get right back to throwing comments EVERYWHERE.
These are issues I haven't had with any other model. I've used them all and none of them pulled this shit, at least nowhere near this bad. YES, I fucked around with the temperature and top_p. If I set temp to 0 that would help, but it would still give me more trouble than any other model.
**tl;dr -** fuck this trash model. its going to be a long time before i trust any version of gemini. it ruined hours and hours of my work.
| 2025-05-04T17:32:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kepee1/gemini_25_pro_looks_great_for_coding_at_first/
|
LanceThunder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kepee1
| false | null |
t3_1kepee1
|
/r/LocalLLaMA/comments/1kepee1/gemini_25_pro_looks_great_for_coding_at_first/
| false | false |
self
| 0 | null |
Law Student looking to run Local Model?
| 1 |
[removed]
| 2025-05-04T17:41:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kepm5y/law_student_looking_to_run_local_model/
|
dbabalola
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kepm5y
| false | null |
t3_1kepm5y
|
/r/LocalLLaMA/comments/1kepm5y/law_student_looking_to_run_local_model/
| false | false |
self
| 1 | null |
Qwen3 performance benchmarks (toks/s, RAM utilization, etc.) on ~50 devices (iOS, Android, Mac, Windows)
| 1 |
[removed]
| 2025-05-04T17:46:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kepq8o/qwen3_performance_benchmarks_tokss_ram/
|
intofuture
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kepq8o
| false | null |
t3_1kepq8o
|
/r/LocalLLaMA/comments/1kepq8o/qwen3_performance_benchmarks_tokss_ram/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=108&crop=smart&format=png8&s=3b679949e2f5d59352255c725ba1fc1b33f0888d', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=216&crop=smart&format=png8&s=6ee8c49ce24cef0824f8593c81b27f8d954bfb70', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=320&crop=smart&format=png8&s=c71e5028e5eaa0709b29004ff5a197bc9d2ce527', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=640&crop=smart&format=png8&s=82649df77aa52aa02b9f7bdd715c7cbfe958c1f7', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=960&crop=smart&format=png8&s=5c460e7973ad78e791761e45cdc8dc7830f8dc25', 'width': 960}, {'height': 641, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=1080&crop=smart&format=png8&s=e2a42579cf12bfc1ad6e1ce5c908a6b8fdf3892e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?format=png8&s=d4538032eb8ec7c0ba6699a42e068dec1871ecc9', 'width': 1212}, 'variants': {'gif': {'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=108&crop=smart&s=ae5f837f103b12aa6bb3513a46c5f06cc9b980d9', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=216&crop=smart&s=95699e7cba340833d6515763a41db859e878fd26', 'width': 216}, {'height': 190, 'url': 
'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=320&crop=smart&s=46ca002adb0d17a03a5d8058b412d580767b0305', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=640&crop=smart&s=15d5869bcc71cdb59dfcd1c7aad46834fe7e81cd', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=960&crop=smart&s=a9c92606f328df03e5ef344a2dfc5bcb314c9c62', 'width': 960}, {'height': 641, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=1080&crop=smart&s=310f1a65aa004f2c243403050216840a2e80199b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?s=50fd08527d10aaa38083a47c10302e6717dfa267', 'width': 1212}}, 'mp4': {'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=108&format=mp4&s=e26e69ff9797bff1c7042d5200842fd56a631293', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=216&format=mp4&s=2ebde126423f9eec9934b190e5ee392a70c2629a', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=320&format=mp4&s=2432c2d85aeda4319607bf659abbb1652a163896', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=640&format=mp4&s=dcf264aa930975199cf0fcc239f1eb720a882941', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=960&format=mp4&s=e70c9c0a7c5be9a4af4d068e7c4eae0f7bf54306', 'width': 960}, {'height': 641, 'url': 
'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=1080&format=mp4&s=965a721e1d1466450ec9e6170c555aa17959f184', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?format=mp4&s=86fa80a6b11552bb24585ebbca333f31846d9b52', 'width': 1212}}}}]}
|
|
Qwen3 performance benchmarks (toks/s, RAM utilization, etc.) on ~50 devices (iOS, Android, Mac, Windows)
| 169 |
Hey LocalLlama!
We've started publishing open-source model performance benchmarks (speed, RAM utilization, etc.) across various devices (iOS, Android, Mac, Windows). We currently maintain \~50 devices and will expand this to 100+ soon.
We’re doing this because perf metrics determine the viability of shipping models in apps to users (no end-user wants crashing/slow AI features that hog up their specific device).
Although benchmarks get posted in threads here and there, we feel like a more consolidated and standardized hub should probably exist.
We figured we'd kickstart this since we already maintain this benchmarking infra/tooling at RunLocal for our enterprise customers. Note: We’ve mostly focused on supporting model formats like Core ML, ONNX and TFLite to date, so a few things are still WIP for GGUF support.
Thought it would be cool to start with benchmarks for Qwen3 (Num Prefill Tokens=512, Num Generation Tokens=128). [GGUFs are from Unsloth](https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95) 🐐
[Qwen3 GGUF benchmarks on laptops](https://preview.redd.it/l59qu1gxysye1.png?width=961&format=png&auto=webp&s=381abad7b25e1d719265826441b51aa50177d143)
[Qwen3 GGUF benchmarks on phones](https://preview.redd.it/z5qxhpc1zsye1.png?width=913&format=png&auto=webp&s=c48aa6c5c753c7dc74c4397aac34f92383d17afe)
You can see more of the benchmark data for Qwen3 [here](https://edgemeter.runlocal.ai/public/pipelines/a240f768-2847-4e06-8df9-156ea3c2c321). We realize there are so many variables (devices, backends, etc.) that interpreting the data is currently harder than it should be. We'll work on that!
You can also see benchmarks for a few other models [here](https://edgemeter.runlocal.ai/public/pipelines). If you want to see benchmarks for any others, feel free to request them and we’ll try to publish ASAP!
Lastly, you can run your own benchmarks on our devices for free (limited to some degree to avoid our devices melting!).
This free/public version is a bit of a frankenstein fork of our enterprise product, so any benchmarks you run would be private to your account. But if there's interest, we can add a way for you to also publish them so that the public benchmarks aren’t bottlenecked by us.
It’s still very early days for us with this, so please let us know what would make it better/cooler for the community!
To more on-device AI in production! 💪
https://i.redd.it/aev5rgjazsye1.gif
| 2025-05-04T17:51:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kepuli/qwen3_performance_benchmarks_tokss_ram/
|
intofuture
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kepuli
| false | null |
t3_1kepuli
|
/r/LocalLLaMA/comments/1kepuli/qwen3_performance_benchmarks_tokss_ram/
| false | false | 169 |
{'enabled': True, 'images': [{'id': 'MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=108&crop=smart&format=png8&s=3b679949e2f5d59352255c725ba1fc1b33f0888d', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=216&crop=smart&format=png8&s=6ee8c49ce24cef0824f8593c81b27f8d954bfb70', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=320&crop=smart&format=png8&s=c71e5028e5eaa0709b29004ff5a197bc9d2ce527', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=640&crop=smart&format=png8&s=82649df77aa52aa02b9f7bdd715c7cbfe958c1f7', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=960&crop=smart&format=png8&s=5c460e7973ad78e791761e45cdc8dc7830f8dc25', 'width': 960}, {'height': 641, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=1080&crop=smart&format=png8&s=e2a42579cf12bfc1ad6e1ce5c908a6b8fdf3892e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?format=png8&s=d4538032eb8ec7c0ba6699a42e068dec1871ecc9', 'width': 1212}, 'variants': {'gif': {'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=108&crop=smart&s=ae5f837f103b12aa6bb3513a46c5f06cc9b980d9', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=216&crop=smart&s=95699e7cba340833d6515763a41db859e878fd26', 'width': 216}, {'height': 190, 'url': 
'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=320&crop=smart&s=46ca002adb0d17a03a5d8058b412d580767b0305', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=640&crop=smart&s=15d5869bcc71cdb59dfcd1c7aad46834fe7e81cd', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=960&crop=smart&s=a9c92606f328df03e5ef344a2dfc5bcb314c9c62', 'width': 960}, {'height': 641, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=1080&crop=smart&s=310f1a65aa004f2c243403050216840a2e80199b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?s=50fd08527d10aaa38083a47c10302e6717dfa267', 'width': 1212}}, 'mp4': {'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=108&format=mp4&s=e26e69ff9797bff1c7042d5200842fd56a631293', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=216&format=mp4&s=2ebde126423f9eec9934b190e5ee392a70c2629a', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=320&format=mp4&s=2432c2d85aeda4319607bf659abbb1652a163896', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=640&format=mp4&s=dcf264aa930975199cf0fcc239f1eb720a882941', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=960&format=mp4&s=e70c9c0a7c5be9a4af4d068e7c4eae0f7bf54306', 'width': 960}, {'height': 641, 'url': 
'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?width=1080&format=mp4&s=965a721e1d1466450ec9e6170c555aa17959f184', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MEsW6HAKwMTePsFq8QDbEDhgynEAo3tDzKfDXY_7QMw.gif?format=mp4&s=86fa80a6b11552bb24585ebbca333f31846d9b52', 'width': 1212}}}}]}
|
|
C/ua now supports agent trajectory replay.
| 0 |
Here's a behind-the-scenes look at it in action, thanks to one of our awesome users.
Do it yourself using: https://github.com/trycua/cua
| 2025-05-04T18:10:38 |
https://v.redd.it/cycc1zb03tye1
|
Impressive_Half_2819
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1keqboi
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cycc1zb03tye1/DASHPlaylist.mpd?a=1748974253%2CYjNiNGU5YTg1MWYyNDIzZDcxMWM4MTQzNTIzNWJhYzg5ZTkwZTIxNGE1ODFjZDY4Nzc5YjhiYTM1MmZiZTQ4NA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/cycc1zb03tye1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/cycc1zb03tye1/HLSPlaylist.m3u8?a=1748974253%2CZTUxYmMzNThhNjYxMDZkNGE0MzRhZTJmNWM3MWExZmIxMTliMDliZDk2NzM3ZDVmMzczNDNiZTg2ZDVlZWVjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cycc1zb03tye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
|
t3_1keqboi
|
/r/LocalLLaMA/comments/1keqboi/cua_now_supports_agent_trajectory_replay/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=108&crop=smart&format=pjpg&auto=webp&s=75cfeafdbeb3d692cdae99a9ffe8b1afd5fb96a1', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=216&crop=smart&format=pjpg&auto=webp&s=9b5424ee6f93dc20a9d4402e6dc5df6f32eae2be', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=320&crop=smart&format=pjpg&auto=webp&s=bbb027a74c8ab852548f2e90f56711b0e243cb64', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=640&crop=smart&format=pjpg&auto=webp&s=836c2ffe5fb40bb52b6536ce8cfdfceeaaa58309', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=960&crop=smart&format=pjpg&auto=webp&s=fde415effd1b5707d368a11b911cad38469dd24b', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f8463b020bbf98592fdf173561921be018a1cd7e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cDNnaThkNDAzdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?format=pjpg&auto=webp&s=4346a2cd7c45d36b4856d0e6e5dfbe889c14a9eb', 'width': 1728}, 'variants': {}}]}
|
|
Qwen 3 gtx 1070 and llama.cpp issues
| 1 |
[removed]
| 2025-05-04T18:15:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1keqft9/qwen_3_gtx_1070_and_llamacpp_issues/
|
h310dOr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keqft9
| false | null |
t3_1keqft9
|
/r/LocalLLaMA/comments/1keqft9/qwen_3_gtx_1070_and_llamacpp_issues/
| false | false |
self
| 1 | null |
Super simple RAG?
| 14 |
I use LM-Studio, and I wanted to know if it's useful to use an install-and-use RAG to ask questions about a set of books (text). Or is it the same as adding the book(s) to the LM-Studio chat, which, from what I noticed, also does RAG when you query (I saw it mention "retrieval" and sending parts of the book)?
In that case, it might be useful. Which one do you recommend? (Or should I stick with what LM-Studio does?)
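For anyone wondering what a standalone RAG actually does under the hood, the whole loop fits in a few lines. A minimal, hypothetical sketch — using crude keyword-overlap scoring as a stand-in for real embeddings, just to show the chunk/score/retrieve shape:

```python
def chunk(text, size=500):
    # split the book into fixed-size character chunks
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query, passage):
    # crude relevance: count shared lowercase words (stand-in for embeddings)
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p)

def retrieve(query, chunks, k=3):
    # return the k chunks most relevant to the query
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

book = "The whale surfaced at dawn. " * 3 + "Ishmael kept watch all night. " * 3
top = retrieve("who kept watch at night", chunk(book, 40), k=2)
```

A real tool swaps `score` for embedding similarity and stuffs `top` into the prompt, which is roughly what LM-Studio's built-in retrieval does too.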
| 2025-05-04T18:29:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1keqrzb/super_simple_rag/
|
9acca9
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keqrzb
| false | null |
t3_1keqrzb
|
/r/LocalLLaMA/comments/1keqrzb/super_simple_rag/
| false | false |
self
| 14 | null |
Inference on the cloud
| 8 |
Hi, I'm starting a new LLM inference project. How is it possible to do inference on the cloud in the most efficient way? Any experience is appreciated.
| 2025-05-04T18:34:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1keqw5a/infrence_on_the_cloud/
|
Sad_Bodybuilder8649
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keqw5a
| false | null |
t3_1keqw5a
|
/r/LocalLLaMA/comments/1keqw5a/infrence_on_the_cloud/
| false | false |
self
| 8 | null |
Inference speed 4090 + 5090
| 11 |
Hi,
I have a setup with 128gb of RAM and a dual gpu (4090 + 5090). With llama.cpp I am getting about 5 tps (both GPUs have similar TPS) running QWQ-32b GGUF Q5 (bartowski). Here is how I am starting llama-server (I tried for both GPUs and also each individually):
CUDA_VISIBLE_DEVICES=0 ./llama-server \
-m ~/.cache/huggingface/hub/models--bartowski--Qwen_QwQ-32B-GGUF/snapshots/390cc7b31baedc55a4d094802995e75f40b4a86d/Qwen_QwQ-32B-Q5_K_L.gguf \
-c 16000 \
--n-gpu-layers 100 \
--port 8001 \
-t 18 \
--mlock
Am I making some mistake, or is this the expected speed? Thanks
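One thing worth noting: `CUDA_VISIBLE_DEVICES=0` exposes only a single card, so the command above never actually splits across both GPUs. A sketch of a dual-GPU invocation — flag names are from llama.cpp's llama-server, but the split ratio and model path here are illustrative, not a tuned recommendation:

```shell
# expose both GPUs and split tensors roughly in proportion to VRAM
# (4090 = 24GB, 5090 = 32GB)
CUDA_VISIBLE_DEVICES=0,1 ./llama-server \
  -m Qwen_QwQ-32B-Q5_K_L.gguf \
  -c 16000 \
  --n-gpu-layers 100 \
  --tensor-split 24,32 \
  --port 8001
```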
| 2025-05-04T19:35:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kesb82/inferece_speed_4090_5090/
|
arivar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kesb82
| false | null |
t3_1kesb82
|
/r/LocalLLaMA/comments/1kesb82/inferece_speed_4090_5090/
| false | false |
self
| 11 | null |
Looks like grok 3.5 is going to top the leader board again
| 0 |
Outscores the current Gemini, according to rumor: [https://x.com/iruletheworldmo/status/1919110686757519466](https://x.com/iruletheworldmo/status/1919110686757519466)
Supposed to be coming in the next few days for advanced subscribers.
| 2025-05-04T20:02:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kesy26/looks_like_grok_35_is_going_to_top_the_leader/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kesy26
| false | null |
t3_1kesy26
|
/r/LocalLLaMA/comments/1kesy26/looks_like_grok_35_is_going_to_top_the_leader/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'j8JG9QEhuhaHNSwc3OpYx_k525lvileQCWe1SS8GkNw', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/0Q3cCY7Wglre0jaIMNrZPmQvocOPhPp2rM-HQ1p925Q.jpg?width=108&crop=smart&auto=webp&s=614100d0bb7c1cc2ff1c8cdb34e23d9bc0e37968', 'width': 108}, {'height': 82, 'url': 'https://external-preview.redd.it/0Q3cCY7Wglre0jaIMNrZPmQvocOPhPp2rM-HQ1p925Q.jpg?width=216&crop=smart&auto=webp&s=232d0aa412856163f056dee464542e5379000584', 'width': 216}, {'height': 121, 'url': 'https://external-preview.redd.it/0Q3cCY7Wglre0jaIMNrZPmQvocOPhPp2rM-HQ1p925Q.jpg?width=320&crop=smart&auto=webp&s=22759c5748bb49f86e02c01aa97365ab69c190fb', 'width': 320}, {'height': 243, 'url': 'https://external-preview.redd.it/0Q3cCY7Wglre0jaIMNrZPmQvocOPhPp2rM-HQ1p925Q.jpg?width=640&crop=smart&auto=webp&s=91a6c0d1759a53ea73f15451ada9af3587af6846', 'width': 640}, {'height': 365, 'url': 'https://external-preview.redd.it/0Q3cCY7Wglre0jaIMNrZPmQvocOPhPp2rM-HQ1p925Q.jpg?width=960&crop=smart&auto=webp&s=b92dca760a0bca2176a52155d5c830d100d6b46d', 'width': 960}], 'source': {'height': 404, 'url': 'https://external-preview.redd.it/0Q3cCY7Wglre0jaIMNrZPmQvocOPhPp2rM-HQ1p925Q.jpg?auto=webp&s=cdb36bc35c1b37f65b19a6344dfaa87c7d5148a2', 'width': 1061}, 'variants': {}}]}
|
Building a plug-and-play memory system for any LLM (Open Source Friendly) — Would love feedback from LocalLLaMA builders
| 1 |
[removed]
| 2025-05-04T20:08:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1ket2ty/building_a_plugandplay_memory_system_for_any_llm/
|
Glad-Exchange-9772
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ket2ty
| false | null |
t3_1ket2ty
|
/r/LocalLLaMA/comments/1ket2ty/building_a_plugandplay_memory_system_for_any_llm/
| false | false |
self
| 1 | null |
Best way to accomplish this? I need to extract data from couple million PDF documents (some might contain Images, those need to be handled too).
| 1 |
[removed]
| 2025-05-04T20:09:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1ket3s6/best_way_to_accomplish_this_i_need_to_extract/
|
we_love_csharp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ket3s6
| false | null |
t3_1ket3s6
|
/r/LocalLLaMA/comments/1ket3s6/best_way_to_accomplish_this_i_need_to_extract/
| false | false |
self
| 1 | null |
Best way to accomplish this? I need to extract data from couple million PDF documents (some might contain Images, those need to be handled too).
| 1 |
[removed]
| 2025-05-04T20:11:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1ket5h6/best_way_to_accomplish_this_i_need_to_extract/
|
iloveparagon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ket5h6
| false | null |
t3_1ket5h6
|
/r/LocalLLaMA/comments/1ket5h6/best_way_to_accomplish_this_i_need_to_extract/
| false | false |
self
| 1 | null |
C/ua Framework Introduces Agent Trajectory Replay for macOS.
| 13 |
C/ua, the open-source framework for running computer-use AI agents optimized for Apple Silicon Macs, has introduced Agent Trajectory Replay.
You can now visually replay and analyze each action your AI agents perform.
Explore it on GitHub, and feel free to share your feedback or use cases.
GitHub : https://github.com/trycua/cua
| 2025-05-04T20:11:39 |
https://v.redd.it/p18ercilotye1
|
Impressive_Half_2819
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ket5xm
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p18ercilotye1/DASHPlaylist.mpd?a=1748981513%2CODg3OWZlYjU0ZjE3M2UyZTU1MGZhZjhhNWQ2YjU3YjZjMjFiMDk1NTkzOWY2ZTdjZGQ2YzczYTc4OTM1Y2QzNA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/p18ercilotye1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/p18ercilotye1/HLSPlaylist.m3u8?a=1748981513%2CYzVhMDcyNDcyNzMxNDI2NTE0OTA5ZWFlZWE1YjAyMWU4MzNkMTNjODdhZGY4YTM0MDk3ZjEwNWY2YTBlMDEwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p18ercilotye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
|
t3_1ket5xm
|
/r/LocalLLaMA/comments/1ket5xm/cua_framework_introduces_agent_trajectory_replay/
| false | false | 13 |
{'enabled': False, 'images': [{'id': 'N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=108&crop=smart&format=pjpg&auto=webp&s=30c5933d615bfb9275c3e80ca788d4dd277d004d', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=216&crop=smart&format=pjpg&auto=webp&s=7aa5a290fb9e5a7e1a440c70d9f6c70858fa79d8', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=320&crop=smart&format=pjpg&auto=webp&s=c47c9c3cc46794c1b4c6503ab5cc9c49644996d9', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=640&crop=smart&format=pjpg&auto=webp&s=6c8fae768cf15a001312c014c2845122d1e83b4f', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=960&crop=smart&format=pjpg&auto=webp&s=2bd54f3dcc8e6f990cc7775d58cb8a1991bec10c', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=888e3b608719ca5822207fe42a336193a0d29404', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N2ZyMXM2OGxvdHllMdT2xQPEPt3ADINOvjs7FnnvWrRt2DesW7_96NRGC_fD.png?format=pjpg&auto=webp&s=beef81267c8def48869476aa9fd3f30e85677c1a', 'width': 1728}, 'variants': {}}]}
|
|
Built a lightweight memory + context system for local LLMs — feedback appreciated
| 1 |
[removed]
| 2025-05-04T20:19:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1ketcgb/built_a_lightweight_memory_context_system_for/
|
Glad-Exchange-9772
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ketcgb
| false | null |
t3_1ketcgb
|
/r/LocalLLaMA/comments/1ketcgb/built_a_lightweight_memory_context_system_for/
| false | false |
self
| 1 | null |
2½ years in the making, my open-source multi-agent graphing interface is still kicking!
| 1 | 2025-05-04T20:31:17 |
https://v.redd.it/5bxkojoiptye1
|
Intrepid-Air6525
|
/r/LocalLLaMA/comments/1ketmot/2½_years_in_the_making_my_opensource_multiagent/
| 1970-01-01T00:00:00 | 0 |
{}
|
1ketmot
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5bxkojoiptye1/DASHPlaylist.mpd?a=1749112281%2CMTQzM2U4NzM4MzY3MTg2YmQ1NzhiNDZkOWFkZjQyYzVlMDhjZDYzYmZlZDcxNmM3YTIxODE5NjliNWFmNjIwMw%3D%3D&v=1&f=sd', 'duration': 125, 'fallback_url': 'https://v.redd.it/5bxkojoiptye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5bxkojoiptye1/HLSPlaylist.m3u8?a=1749112281%2CZTQ3YTU4OGJhMTM2YWNjMWUwZTRjYjkwMzI5YzdiOWFlMzJlODNkNTk2ZTk4OTMyMWIyZTQ0YzgzNjQzYjgzNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5bxkojoiptye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1ketmot
|
/r/LocalLLaMA/comments/1ketmot/2½_years_in_the_making_my_opensource_multiagent/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?width=108&crop=smart&format=pjpg&auto=webp&s=f33e2476935026babe17323a1ffa8ec5136549d6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?width=216&crop=smart&format=pjpg&auto=webp&s=fbc956d509d2553fd8680299fb2addb08f9039ad', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?width=320&crop=smart&format=pjpg&auto=webp&s=43f83ba76c6d99493b706b1b9216118bfd293c09', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?width=640&crop=smart&format=pjpg&auto=webp&s=235a5743f0624e1780ca7dc747caa150b7b6363b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?width=960&crop=smart&format=pjpg&auto=webp&s=8a0a8872afa47555347172635f78329b04c12ad9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d9f3acc8b0df004c99ca34c137fda2fcd128728e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dDJuYnF3cjNzdHllMUtJcmW6kILydM4kINqZWHxJE7yzRbDXhnZ_yx60g3zt.png?format=pjpg&auto=webp&s=e4d1d2de56a64f8a60c84976757352cff49448a4', 'width': 1920}, 'variants': {}}]}
|
||
A question from a complete newbie.
| 1 |
[removed]
| 2025-05-04T20:34:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1ketpd5/a_question_from_a_complete_newbie/
|
tysonq
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ketpd5
| false | null |
t3_1ketpd5
|
/r/LocalLLaMA/comments/1ketpd5/a_question_from_a_complete_newbie/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9MnYocOngn7BeBZlQK-qo-0iZVUAWRPNTRCug3ZcSPo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?width=108&crop=smart&auto=webp&s=fa877b339624bb53b77ccee7f9f31483b84202d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?width=216&crop=smart&auto=webp&s=fd5b9505d33b899b4a4039bce6d9fed67239a28b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?width=320&crop=smart&auto=webp&s=d602fae6e2b586c946227c42904591ce797487c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?width=640&crop=smart&auto=webp&s=e286bf235deb863860b3f092cc656f4023935d34', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?width=960&crop=smart&auto=webp&s=8a4ace2f941f65aef2b87db2c5888e92b8fcca95', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?width=1080&crop=smart&auto=webp&s=5298ab2b9c0bc3c09fb47c9a9d7a53649aa611e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v08Mfoio_GGaGYgW4DR42ZCgI0mPXwzqectY9Q4RUU8.jpg?auto=webp&s=94306be62d451a84bfbcc1fe25e396e93a965c4f', 'width': 1200}, 'variants': {}}]}
|
Qwen 3 x Qwen2.5
| 7 |
So, it's been a while since Qwen 3's launch. Have you guys felt actual improvement compared to 2.5 generation?
If we take two models of same size, do you feel that generation 3 is significantly better than 2.5?
| 2025-05-04T20:53:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1keu52c/qwen_3_x_qwen25/
|
Remarkable_Art5653
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keu52c
| false | null |
t3_1keu52c
|
/r/LocalLLaMA/comments/1keu52c/qwen_3_x_qwen25/
| false | false |
self
| 7 | null |
Have you ever wanted to talk to your past or future self?
| 1 |
[removed]
| 2025-05-04T20:59:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1keuaam/have_you_ever_wanted_to_talk_to_your_past_or/
|
KoldFiree
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keuaam
| false | null |
t3_1keuaam
|
/r/LocalLLaMA/comments/1keuaam/have_you_ever_wanted_to_talk_to_your_past_or/
| false | false |
self
| 1 | null |
Have you ever wanted to talk to your past or future self?
| 1 |
[removed]
| 2025-05-04T21:01:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1keubfb/have_you_ever_wanted_to_talk_to_your_past_or/
|
KoldFiree
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keubfb
| false | null |
t3_1keubfb
|
/r/LocalLLaMA/comments/1keubfb/have_you_ever_wanted_to_talk_to_your_past_or/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'KuGJF-DQVH5StEyepyaKjPP8jiA79fYyvS5tbI587vE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/CbiQoM5jvOBVkazUkEENrTu-rW6TEL__vjq33aq4Mk8.jpg?width=108&crop=smart&auto=webp&s=5e090ba4523049ffd3f840ab7435558eb97d6ec2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/CbiQoM5jvOBVkazUkEENrTu-rW6TEL__vjq33aq4Mk8.jpg?width=216&crop=smart&auto=webp&s=c5ca33f36a6106cdddafefbc5e2e3e89c2446284', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/CbiQoM5jvOBVkazUkEENrTu-rW6TEL__vjq33aq4Mk8.jpg?width=320&crop=smart&auto=webp&s=dbdeb4ec6ab969e699c94121e23388de2a1dc645', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/CbiQoM5jvOBVkazUkEENrTu-rW6TEL__vjq33aq4Mk8.jpg?auto=webp&s=dbc363c631c998de8310a25ad57bae2a5292f4b2', 'width': 480}, 'variants': {}}]}
|
Have you ever wanted to talk to your past or future self?
| 1 |
[removed]
| 2025-05-04T21:02:09 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1keucdl
| false |
{'oembed': {'author_name': 'IKAIC', 'author_url': 'https://www.youtube.com/@IKAIC', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/yBZykrXQa78?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Samsara Demo"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/yBZykrXQa78/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Samsara Demo', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1keucdl
|
/r/LocalLLaMA/comments/1keucdl/have_you_ever_wanted_to_talk_to_your_past_or/
| false | false |
default
| 1 | null |
||
Show LocalLLama : Why not a open source distillation repo for all LLMs!
| 1 |
[removed]
| 2025-05-04T21:02:49 |
https://github.com/agokrani/distillKitPlus
|
ANAGDKP
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1keucwi
| false | null |
t3_1keucwi
|
/r/LocalLLaMA/comments/1keucwi/show_localllama_why_not_a_open_source/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'CSXm7i6oy3rdwGDGkOGrcDmXfMgN_rmwR7hAEZqx_Z8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?width=108&crop=smart&auto=webp&s=f7ff5d93c853a43904810ebf35e482f7302190bf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?width=216&crop=smart&auto=webp&s=40dc563dcf194d85c1aceac9f28ea8968cb06182', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?width=320&crop=smart&auto=webp&s=a502bba183129e846d00a52d2ad705fc55170d90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?width=640&crop=smart&auto=webp&s=937ff1e49d4a41feaba422acafcf959b5c6ce9c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?width=960&crop=smart&auto=webp&s=056cd7dc6e6739759864793a8c3583607ec2cdd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?width=1080&crop=smart&auto=webp&s=4cbd8b499f84c3ca0d96c4dc023b0727d209f968', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/85vsu-rHwXt5duP2bbUObMieVrBxDxtdnRzhIUyJ1Zg.jpg?auto=webp&s=86e72ff7e346ad87cbb5951daf86ffd3ebeccac3', 'width': 1200}, 'variants': {}}]}
|
|
Disappointed by Qwen3 - Gemma easily wins?
| 1 |
[removed]
| 2025-05-04T21:26:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1keuwkh/disappointed_by_qwen3_gemma_easily_wins/
|
Guilty-Exchange8927
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keuwkh
| false | null |
t3_1keuwkh
|
/r/LocalLLaMA/comments/1keuwkh/disappointed_by_qwen3_gemma_easily_wins/
| false | false |
self
| 1 | null |
Some thoughts following Llamacon...
| 1 |
[removed]
| 2025-05-04T21:30:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1keuzj4/some_thoughts_following_llamacon/
|
Embarrassed_Line_978
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keuzj4
| false | null |
t3_1keuzj4
|
/r/LocalLLaMA/comments/1keuzj4/some_thoughts_following_llamacon/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'gT-yNC8u7JCRPIBbiYpFB2TeZ1vi6ulB5Da04YO1wVU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?width=108&crop=smart&auto=webp&s=c38e7a15afe70494e4d3de7e4d9a4807e7f157e1', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?width=216&crop=smart&auto=webp&s=98e5108d3cd7c78bd05a89af136409116b6081ab', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?width=320&crop=smart&auto=webp&s=8123f7ebaa620cc8470d6cd3336efc7508f4046f', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?width=640&crop=smart&auto=webp&s=54cb3777348427a3153a852c9ad829ed7f8cd5a1', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?width=960&crop=smart&auto=webp&s=4b40cc681ba4b60fd13c3ab2be45c32b19aba649', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?width=1080&crop=smart&auto=webp&s=0cfcdcec58a305bc6128a7aac5dddf7b89adf79a', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/k45ghXSo-6ToOQZbscAASk3ffvSAHIa10hfAL1-FK3A.jpg?auto=webp&s=0c597f1ac66e47d4b53f5709552f22e5725982b8', 'width': 1200}, 'variants': {}}]}
|
Swapping tokenizers in a model?
| 0 |
How easy or difficult is it to swap a tokenizer in a model?
I'm working on a code base, and with certain models it fits within context (131072) but in another model with the exact same context size it doesn't fit (using LM Studio).
More specifically, with Qwen3 32B Q8 the code base fits, but with GLM4 Z1 Rumination 32B 0414 Q8 the same code base reverts to 'retrieval'. The only cause I can think of is a difference in the tokenizers the models use.
Both are very good models btw. GLM4 creates 'research reports' which I thought was cute, and provides really good analysis of a code base and recommends some very cool optimizations and techniques. Qwen3 is more straightforward but very thorough and precise. I like switching between them, but now I have to figure this GLM4 tokenizer thing (if that's what's causing it) out.
All of this on an M2 Ultra with plenty of RAM.
Any help would be appreciated. TIA.
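One way to sanity-check the tokenizer theory is a rough fits-in-context estimate. This is only a sketch: the ~3.5 vs ~3.0 chars-per-token ratios below are made-up illustrations, not measured values for Qwen3 or GLM4 — but a ratio gap this small is enough to push the same code base over a 131072 limit.

```python
def fits_in_context(text: str, chars_per_token: float,
                    context_size: int, reserve: int = 2048) -> bool:
    """Rough check: does `text` fit in the context window once we
    reserve room for the system prompt and the model's reply?"""
    est_tokens = len(text) / chars_per_token
    return est_tokens + reserve <= context_size

code = "x" * 400_000  # stand-in for the code base
fits_in_context(code, 3.5, 131072)  # True  -> fits (assumed Qwen3-like ratio)
fits_in_context(code, 3.0, 131072)  # False -> falls back to retrieval
```

Loading both models' tokenizers and counting tokens on the actual code base would confirm or rule this out.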
| 2025-05-04T21:35:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kev3ah/swapping_tokenizers_in_a_model/
|
Thrumpwart
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kev3ah
| false | null |
t3_1kev3ah
|
/r/LocalLLaMA/comments/1kev3ah/swapping_tokenizers_in_a_model/
| false | false |
self
| 0 | null |
I think triage agents should run "out-of-process". Here's why.
| 0 |
OpenAI launched their [Agent SDK](https://openai.github.io/openai-agents-python/) a few months ago and introduced the notion of a triage agent that handles incoming requests and decides which downstream agent or tools to call to complete the user request. In other frameworks the triage agent is called a supervisor agent or an orchestration agent, but essentially it's the same "cross-cutting" functionality, defined in code and run in the same process as your other task agents. I think triage agents should run out of process, as a self-contained piece of functionality. Here's why:
For context: if you are doing dev/test, you should continue to follow the pattern outlined by the framework providers, because it's convenient to have your code in one place, packaged and distributed in a single process. It's also fewer moving parts, and the iteration cycles for dev/test are faster. But this doesn't really work if you have to deploy agents to handle some level of production traffic, or if you want to give teams autonomy to build agents using their choice of frameworks.
Imagine you have to update the instructions or guardrails of your triage agent: it requires a full deployment across all node instances where the agents are deployed, and consequently safe-upgrade and rollback strategies that operate at the app level, not the agent level. Imagine you want to add a new agent: it requires a code change and a re-deployment of the full stack, versus an isolated change that can be exposed to a few customers safely before being made available to the rest. Now imagine some teams want to use a different programming language or framework: then you are copy-pasting snippets of code across projects so that the triage functionality implemented in one framework stays consistent across development teams.
I think the triage agent and the related cross-cutting functionality should be pushed into an [out-of-process server](https://github.com/katanemo/archgw/stargazers) - so that there is a clean separation of concerns, so that you can add new agents without impacting other agents, so that you can update triage functionality without impacting agent functionality, etc. You could write this out-of-process server yourself in any programming language, perhaps even using the AI frameworks themselves, but separating out the triage agent and running it as a standalone server brings several flexibility, safety, and scalability benefits.
| 2025-05-04T21:57:37 |
AdditionalWeb107
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kevl76
| false | null |
t3_1kevl76
|
/r/LocalLLaMA/comments/1kevl76/i_think_triage_agents_should_run_outofprocess/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'kXFXBH-230JuYcLx7kRjPDQ5XdRQI2Ar4bbdatCDNmc', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?width=108&crop=smart&auto=webp&s=f58a0eb8196117305ef4b2646da8e3dbd21ff469', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?width=216&crop=smart&auto=webp&s=ce34bd74d5e04d964bd71261e25aa15beafa8808', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?width=320&crop=smart&auto=webp&s=b0469a38fcc6350356b4dd4389d2acca7e1e5625', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?width=640&crop=smart&auto=webp&s=0ee1395f9f6aaf51e561aa2c3193ddee83d62f04', 'width': 640}, {'height': 856, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?width=960&crop=smart&auto=webp&s=b80c2294c9dfe444c6fc094d50f5c621303dcd92', 'width': 960}, {'height': 963, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?width=1080&crop=smart&auto=webp&s=542d661de13699bfe0f841ee408ce54101851d54', 'width': 1080}], 'source': {'height': 1582, 'url': 'https://preview.redd.it/ugjjut87vtye1.png?auto=webp&s=15b62a782ab4ffd4ce2280e51eaa728963d3e002', 'width': 1774}, 'variants': {}}]}
|
||
Wrote a CLI tool that automatically groups and commits related changes in a Git repository for vibe coding
| 9 |
[VibeGit](https://github.com/kklemon/vibegit) is basically vibe coding but for Git.
I created it after spending too many nights untangling my not-so-clean version control habits. We've all been there: you code for hours, solve multiple problems, and suddenly you're staring at 30+ changed files with no clear commit strategy.
Instead of the painful git add -p dance or just giving up and doing a massive git commit -a -m "stuff", I wanted something smarter. VibeGit uses AI to analyze your working directory, understand the semantic relationships between your changes (up to hunk-level granularity), and automatically group them into logical, atomic commits.
Just run "vibegit commit" and it:
* Examines your code changes and what they actually do
* Groups related changes across different files
* Generates meaningful commit messages that match your repo's style
* Lets you choose how much control you want (from fully automated to interactive review)
It works with Gemini, GPT-4o, and other LLMs. Gemini 2.5 Flash is used by default because it offers the best speed/cost/quality balance.
I built this tool mostly for myself, but I'd love to hear what other developers think. Python 3.11+ required, MIT licensed.
You can find the project here: [https://github.com/kklemon/vibegit](https://github.com/kklemon/vibegit)
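To make the grouping step concrete, here is a deliberately naive stand-in (VibeGit itself uses an LLM and hunk-level analysis; this sketch only buckets changed files by top-level directory, with each bucket becoming one candidate commit):

```python
from collections import defaultdict
from pathlib import PurePosixPath

def group_changes(paths: list[str]) -> dict[str, list[str]]:
    """Bucket changed files by top-level directory as a crude proxy
    for 'related changes'; each bucket would become one commit."""
    groups = defaultdict(list)
    for p in paths:
        parts = PurePosixPath(p).parts
        groups[parts[0] if len(parts) > 1 else "(root)"].append(p)
    return dict(groups)

group_changes(["src/a.py", "src/b.py", "docs/intro.md", "setup.py"])
# -> {"src": ["src/a.py", "src/b.py"],
#     "docs": ["docs/intro.md"], "(root)": ["setup.py"]}
```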
| 2025-05-04T22:07:57 |
https://github.com/kklemon/vibegit
|
trashcoder
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kevtd2
| false | null |
t3_1kevtd2
|
/r/LocalLLaMA/comments/1kevtd2/wrote_a_cli_tool_that_automatically_groups_and/
| false | false | 9 |
{'enabled': False, 'images': [{'id': 'Xj_jKyrfbXefsqTr4cORenaLyckm-WxHvB0VIRi_4_o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?width=108&crop=smart&auto=webp&s=70a8ed25ba538c7dff5d31196cd878be220386e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?width=216&crop=smart&auto=webp&s=34d7c4c9cd18e404a8afc560645fdf8b724028d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?width=320&crop=smart&auto=webp&s=90abee7b9da7994a7063ae24558876fb0f9dc0f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?width=640&crop=smart&auto=webp&s=b023bf579cbc2d36187b2c2c5ba7453fe1b39dbd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?width=960&crop=smart&auto=webp&s=844a567f26deed0797c7fe5ca234da5f96b625ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?width=1080&crop=smart&auto=webp&s=06b10b71a395dc7fdd9242fd688930bed7f13604', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wySUwWyUSrXA6KqSCOx2efoC881tB0dBQxaalI5w8wY.jpg?auto=webp&s=1cf3076fcab54199ec3cab6157bc2baf98c8af1d', 'width': 1200}, 'variants': {}}]}
|
|
Dummy's Guide to Modern LLM Sampling
| 1 |
[removed]
| 2025-05-04T22:40:46 |
https://rentry.co/samplers
|
Calm-Start-5945
|
rentry.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kewiup
| false | null |
t3_1kewiup
|
/r/LocalLLaMA/comments/1kewiup/dummys_guide_to_modern_llm_sampling/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&s=5f7d74321748816977c2c47d74607125fd510a17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&s=9c08000e015b470c7d577334237c7dee99c37847', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&s=628b4e1ef982e336b9ee2da5dbacecc2774b6d65', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&s=9e4cec75ef4248064a481db4ef5f29637aec6e67', 'width': 512}, 'variants': {}}]}
|
|
Qwen 30B A3B performance degradation with KV quantization
| 84 |
I came across this gist [https://gist.github.com/sunpazed/f5220310f120e3fc7ea8c1fb978ee7a4](https://gist.github.com/sunpazed/f5220310f120e3fc7ea8c1fb978ee7a4) that shows how Qwen 30B can solve the OpenAI cypher test with Q4\_K\_M quantization.
I tried to replicate it locally but was not able to: the model sometimes entered a repetition loop even with DRY sampling, or came to the wrong conclusion after generating lots of thinking tokens.
I was using Unsloth Q4\_K\_XL quantization, so I thought it could be the Dynamic quantization. I tested Bartowski Q5\_K\_S but saw no improvement. The model didn't enter any repetition loop, but generated lots of thinking tokens without finding any solution.
Then I saw that sunpazed didn't use KV quantization and tried the same: boom! First time right.
It worked with Q5\_K\_S and also with Q4\_K\_XL
For those who want more details, I leave here a gist: [https://gist.github.com/fakezeta/eaa5602c85b421eb255e6914a816e1ef](https://gist.github.com/fakezeta/eaa5602c85b421eb255e6914a816e1ef)
Do you have any report of performance degradation with long generations on Qwen3 30B A3B and KV quantization?
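For reference, this is roughly how the two runs differ on the llama-server command line. The model filename is a placeholder; `-ctk`/`-ctv` (`--cache-type-k`/`--cache-type-v`) select the KV cache types, and quantizing the V cache requires flash attention (`-fa`):

```python
# Placeholder model path; flags are llama.cpp's cache-type options.
base = ["./llama-server", "-m", "Qwen3-30B-A3B-Q5_K_S.gguf",
        "-c", "32768", "-ngl", "999"]

kv_quantized = base + ["-fa", "-ctk", "q8_0", "-ctv", "q8_0"]  # the run that looped for me
kv_fp16 = base                                                 # the run that solved it first try
```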
| 2025-05-04T22:43:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kewkno/qwen_30b_a3b_performance_degradation_with_kv/
|
fakezeta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kewkno
| false | null |
t3_1kewkno
|
/r/LocalLLaMA/comments/1kewkno/qwen_30b_a3b_performance_degradation_with_kv/
| false | false |
self
| 84 |
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
|
Jetbrains Coding model
| 26 |
Jetbrains just released a coding model has anyone tried it?
https://huggingface.co/collections/JetBrains/mellum-68120b4ae1423c86a2da007a
| 2025-05-04T22:51:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kewr1q/jetbrains_coding_model/
|
SpeedyBrowser45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kewr1q
| false | null |
t3_1kewr1q
|
/r/LocalLLaMA/comments/1kewr1q/jetbrains_coding_model/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': '-_vCSVunBRgQisKwBl_Y-dRoTBAYNchtwH7Hvyf5Ouo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?width=108&crop=smart&auto=webp&s=299bc349b588cba81dc1fde806a6e109e960f0f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?width=216&crop=smart&auto=webp&s=66a58439eee690734234557e3721e061099501a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?width=320&crop=smart&auto=webp&s=6392452a7abf839ba10b63e205f7a1a4c77877f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?width=640&crop=smart&auto=webp&s=e5e2484668a0f59461d555a8f888643aa7f9dd26', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?width=960&crop=smart&auto=webp&s=61755ac2c06b434ec70929f9260b87d58e9337b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?width=1080&crop=smart&auto=webp&s=95c695f9fb72ef11fbecdd4bff1f646dcf2444fb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/owwErJEWGVNn12o97lHpHpBjpemEBi-DJdW5SoObtbc.jpg?auto=webp&s=fba47544f3b1ac45c91ea7cd57378ccc10e7db69', 'width': 1200}, 'variants': {}}]}
|
Built an Open-Source "External Brain" + Unified API for LLMs (Ollama, HF, OpenAI...) - Useful?
| 1 |
[removed]
| 2025-05-04T23:02:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kewzi1/built_an_opensource_external_brain_unified_api/
|
Effective_Muscle_110
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kewzi1
| false | null |
t3_1kewzi1
|
/r/LocalLLaMA/comments/1kewzi1/built_an_opensource_external_brain_unified_api/
| false | false |
self
| 1 | null |
What do I test out / run first?
| 493 |
Just got her in the mail. Haven't had a chance to put her in yet.
| 2025-05-04T23:21:02 |
https://www.reddit.com/gallery/1kexdgy
|
Recurrents
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kexdgy
| false | null |
t3_1kexdgy
|
/r/LocalLLaMA/comments/1kexdgy/what_do_i_test_out_run_first/
| false | false | 493 |
{'enabled': True, 'images': [{'id': 'Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?width=108&crop=smart&auto=webp&s=00ee01684c78ae4dd63b878a4acf3cb55d0782f4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?width=216&crop=smart&auto=webp&s=6efadb466dc1ac705f19fbdfe9643e24148caf76', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?width=320&crop=smart&auto=webp&s=777efa8ced152adfc3b8cb17fb92211338752880', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?width=640&crop=smart&auto=webp&s=99bf755df82e7e9bb2bc2cafc9271bdc27217ed4', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?width=960&crop=smart&auto=webp&s=4e32a4985a244a01598d222c5f1c0ea783f61c6c', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?width=1080&crop=smart&auto=webp&s=2cbdff7220ad8e4e1e8fe69a1b081a33382ecf6e', 'width': 1080}], 'source': {'height': 3000, 'url': 'https://external-preview.redd.it/Gj8FzKNPTvVSOxJwgeuufUJzmZ6BR-6YWri04zLtxfs.jpeg?auto=webp&s=6d4b13367c1493ff6bf42ff3b79341fc63741d49', 'width': 4000}, 'variants': {}}]}
|
|
For people here using Zonos, need config advice
| 7 |
Zonos works quite well: it doesn't generate artifacts and it's decently expressive. But how do you avoid it taking such huge rests between sentences? It's really exaggerated. Raising the rate of speech sometimes creates small artifacts.
| 2025-05-04T23:36:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kexoq2/for_people_here_using_zonos_need_config_advice/
|
skarrrrrrr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kexoq2
| false | null |
t3_1kexoq2
|
/r/LocalLLaMA/comments/1kexoq2/for_people_here_using_zonos_need_config_advice/
| false | false |
self
| 7 | null |
Best model for 5090 for math
| 0 |
It would also be good if I could attach images too.
| 2025-05-04T23:42:25 |
00quebec
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kextfi
| false | null |
t3_1kextfi
|
/r/LocalLLaMA/comments/1kextfi/best_model_for_5090_for_math/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'R0IsHRrB8jnne0Ywv-S4-bDEPIAFlpNiOit4lHlj8mY', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?width=108&crop=smart&auto=webp&s=b1be16bb324b79341d42a7df14e74dd412d42a6e', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?width=216&crop=smart&auto=webp&s=ba6847dcbaacf55fe8d06c3e56a3e7e2b43a15ad', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?width=320&crop=smart&auto=webp&s=e1ff194d7341695f8bdc93cb5c814a47399fd813', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?width=640&crop=smart&auto=webp&s=8a3c82186d12a2f23e1152899a90edd4ac6dee3e', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?width=960&crop=smart&auto=webp&s=6fe77912863d41aeb3cada1c85973beb403ec348', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?width=1080&crop=smart&auto=webp&s=bec259f3d30a774ed784186fd41b46a8c3d758b6', 'width': 1080}], 'source': {'height': 2208, 'url': 'https://preview.redd.it/hy4pewf7quye1.jpeg?auto=webp&s=7d6728aaad1482d67bc765423239060298b7edda', 'width': 1242}, 'variants': {}}]}
|
||
Help! LLM not following instructions
| 1 |
[removed]
| 2025-05-05T00:02:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1key8aj/help_llm_not_following_instructions/
|
OmeGa34-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1key8aj
| false | null |
t3_1key8aj
|
/r/LocalLLaMA/comments/1key8aj/help_llm_not_following_instructions/
| false | false |
self
| 1 | null |
LLM not following instructions
| 1 |
[removed]
| 2025-05-05T00:19:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1keyjtt/llm_not_following_instructions/
|
buttered-toasst
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keyjtt
| false | null |
t3_1keyjtt
|
/r/LocalLLaMA/comments/1keyjtt/llm_not_following_instructions/
| false | false |
self
| 1 | null |
Is it possible to system prompt Qwen 3 models to have "reasoning effort"?
| 19 |
I'm wondering if I can prompt Qwen 3 models to output shorter / longer / more concise think tags.
Has anyone attempted this yet for Qwen or a similar model?
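One sketch of what seems possible: Qwen3 documents `/think` and `/no_think` soft switches in the user turn, which toggle thinking entirely. Anything finer-grained ("brief" vs "detailed" reasoning) is just a prompt hint and, in my understanding, is not guaranteed to be honored:

```python
def with_thinking_hint(user_msg: str, effort: str) -> list[dict]:
    """Append a Qwen3 thinking switch / effort hint to the user turn."""
    suffix = {
        "off":  " /no_think",
        "on":   " /think",
        "low":  " /think Keep your reasoning very brief.",
        "high": " /think Reason step by step in detail.",
    }[effort]
    return [{"role": "user", "content": user_msg + suffix}]

with_thinking_hint("What is 17 * 23?", "off")
# -> [{"role": "user", "content": "What is 17 * 23? /no_think"}]
```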
| 2025-05-05T00:36:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1keyvqs/is_it_possible_to_system_prompt_qwen_3_models_to/
|
wunnsen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1keyvqs
| false | null |
t3_1keyvqs
|
/r/LocalLLaMA/comments/1keyvqs/is_it_possible_to_system_prompt_qwen_3_models_to/
| false | false |
self
| 19 | null |
Well, that's just, like… your benchmark, man.
| 72 |
Especially as teams put AI into production, we need to start treating evaluation like a first-class discipline: versioned, interpretable, reproducible, and aligned to outcomes and improved UX.
**Without some kind of ExperimentOps, you’re one false positive away from months of shipping the wrong thing.**
| 2025-05-05T00:39:27 |
remyxai
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1keyy4k
| false | null |
t3_1keyy4k
|
/r/LocalLLaMA/comments/1keyy4k/well_thats_just_like_your_benchmark_man/
| false | false | 72 |
{'enabled': True, 'images': [{'id': 'qnw7LeZ29TgvGoubJUfCJPnkciBLhV1SsJYs1WsUAD8', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/mdy01ntgwuye1.png?width=108&crop=smart&auto=webp&s=5e060c0bfd48359ecbca79200fbf36128a461d8f', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/mdy01ntgwuye1.png?width=216&crop=smart&auto=webp&s=59797df979a5e324aa5756dc8d1e3c9eb3b67212', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/mdy01ntgwuye1.png?width=320&crop=smart&auto=webp&s=8d4109f7d277c75cfab3a8a40906bd956a0e8c97', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/mdy01ntgwuye1.png?width=640&crop=smart&auto=webp&s=4d618c2e8de6cb32e8937776f2dfc048d10d61fc', 'width': 640}], 'source': {'height': 367, 'url': 'https://preview.redd.it/mdy01ntgwuye1.png?auto=webp&s=3692fef2697ed24901c121282614834a43635be1', 'width': 679}, 'variants': {}}]}
|
||
Speed metrics running DeepSeekV3 0324/Qwen3 235B and other models, on 128GB VRAM (5090+4090x2+A6000) + 192GB RAM on Consumer motherboard/CPU (llamacpp/ikllamacpp)
| 108 |
Hi there guys, hope all is going well.
I have been testing some bigger models on this setup and wanted to share some metrics if it helps someone!
Setup is:
AMD Ryzen 7 7800X3D
192GB DDR5 6000Mhz at CL30 (overclocked and adjusted resistances to make it stable)
AM5 MSI Carbon X670E
Running at X8 5.0 (5090) / X8 4.0 (4090) / X4 4.0 (4090) / X4 4.0 (A6000), all from CPU lanes (using M2 to PCI-E adapters)
Fedora 41-42 (believe me, I tried these on Windows and multiGPU is just borked there)
The models I have tested are:
DeepSeek V3 0324 at Q2\_K\_XL (233GB), from [https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD)
Qwen3 235B at Q3\_K\_XL, Q4\_K\_L, Q6\_K from [https://huggingface.co/unsloth/Qwen3-235B-A22B-128K-GGUF](https://huggingface.co/unsloth/Qwen3-235B-A22B-128K-GGUF)
Llama-3.1-Nemotron-Ultra-253B at Q3\_K\_XL from [https://huggingface.co/unsloth/Llama-3\_1-Nemotron-Ultra-253B-v1-GGUF](https://huggingface.co/unsloth/Llama-3_1-Nemotron-Ultra-253B-v1-GGUF)
c4ai-command-a-03-2025 111B at Q6\_K\_XL from [https://huggingface.co/bartowski/CohereForAI\_c4ai-command-a-03-2025-GGUF](https://huggingface.co/bartowski/CohereForAI_c4ai-command-a-03-2025-GGUF)
Mistral-Large-Instruct-2411 123B at Q4\_K\_M from [https://huggingface.co/bartowski/Mistral-Large-Instruct-2411-GGUF](https://huggingface.co/bartowski/Mistral-Large-Instruct-2411-GGUF)
All on llamacpp, for offloading mostly on the case of bigger models. command a and Mistral Large run faster on EXL2.
I have also used llamacpp (https://github.com/ggml-org/llama.cpp) and ikllamacpp (https://github.com/ikawrakow/ik\_llama.cpp), so I will note where I use which.
All of these models were loaded with 32K context, without flash attention or cache quantization, except in the case of Nemotron, mostly to give some VRAM usage figures. FA, when available, heavily reduces the cache/buffer VRAM usage.
Also, when running -ot I listed each layer explicitly instead of using a regex, because with the regex I got VRAM usage issues.
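A small helper that reproduces the per-layer `-ot` strings used in the commands below. The layer-to-GPU plan is whatever fits your VRAM; the trailing `ffn.*=CPU` override catches every FFN tensor not pinned to a GPU:

```python
def ot_args(plan: list[tuple[str, range]]) -> list[str]:
    """Build llama.cpp tensor-override args pinning each listed layer's
    FFN tensors to a device; unmatched FFN tensors fall through to CPU."""
    args = []
    for device, layers in plan:
        alts = "|".join(str(i) for i in layers)
        args += ["-ot", f"blk.({alts}).ffn.={device}"]
    args += ["-ot", "ffn.*=CPU"]
    return args

ot_args([("CUDA0", range(0, 7)), ("CUDA1", range(7, 11))])
# -> ['-ot', 'blk.(0|1|2|3|4|5|6).ffn.=CUDA0',
#     '-ot', 'blk.(7|8|9|10).ffn.=CUDA1', '-ot', 'ffn.*=CPU']
```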
They were compiled from source with:
`CC=gcc-14 CXX=g++-14 CUDAHOSTCXX=g++-14 cmake -B build_linux \`
`-DGGML_CUDA=ON \`
`-DGGML_CUDA_FA_ALL_QUANTS=ON \`
`-DGGML_BLAS=OFF \`
`-DCMAKE_CUDA_ARCHITECTURES="86;89;120" \`
`-DCMAKE_CUDA_FLAGS="-allow-unsupported-compiler -ccbin=g++-14"`
(Had to force CC and CXX 14, as CUDA doesn't support GCC15 yet, which is what Fedora ships)
# DeepSeek V3 0324 (Q2_K_XL, llamacpp)
For this model, MLA was added recently, which let me put more tensors on the GPU.
Command to run it was
`./llama-server -m '/GGUFs/DeepSeek-V3-0324-UD-Q2_K_XL-merged.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ot "blk.(0|1|2|3|4|5|6).ffn.=CUDA0" -ot "blk.(7|8|9|10).ffn.=CUDA1" -ot "blk.(11|12|13|14|15).ffn.=CUDA2" -ot "blk.(16|17|18|19|20|21|22|23|24|25).ffn.=CUDA3" -ot "ffn.*=CPU"`
And speeds are:
`prompt eval time = 38919.92 ms / 1528 tokens ( 25.47 ms per token, 39.26 tokens per second)`
`eval time = 57175.47 ms / 471 tokens ( 121.39 ms per token, 8.24 tokens per second)`
This makes it pretty usable. The important part is keeping the remaining experts CPU-only, with the active params plus a subset of experts on GPU. With MLA, it uses \~4GB for 32K and \~8GB for 64K. Without MLA, 16K uses 80GB of VRAM.
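The reported rates can be re-derived from llama.cpp's timing lines, e.g.:

```python
def tokens_per_second(ms: float, tokens: int) -> float:
    """Throughput from a llama.cpp 'eval time = X ms / N tokens' line."""
    return tokens / (ms / 1000.0)

tokens_per_second(57175.47, 471)   # ~8.24 t/s generation
tokens_per_second(38919.92, 1528)  # ~39.26 t/s prompt processing
```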
# Qwen3 235B (Q3_K_XL, llamacpp)
For this model and size, we're able to load the model entirely in VRAM. Note: when using only GPU, in my case, llamacpp is faster than ik llamacpp.
Command to run it was:
`./llama-server -m '/GGUFs/Qwen3-235B-A22B-128K-UD-Q3_K_XL-00001-of-00003.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ts 0.8,0.8,1.2,2`
And speeds are:
`prompt eval time = 6532.37 ms / 3358 tokens ( 1.95 ms per token, 514.06 tokens per second)`
`eval time = 53259.78 ms / 1359 tokens ( 39.19 ms per token, 25.52 tokens per second)`
Pretty good model, but I would try to use at least Q4\_K\_S/M. Cache size at 32K is 6GB, and 12GB at 64K. This cache size is the same for all Qwen3 235B quants.
# Qwen3 235B (Q4_K_XL, llamacpp)
For this model, we're using \~20GB of RAM and the rest on GPU.
Command to run it was:
`./llama-server -m '/GGUFs/Qwen3-235B-A22B-128K-UD-Q4_K_XL-00001-of-00003.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ot "blk\.(0|1|2|3|4|5|6|7|8|9|10|11|12|13|13)\.ffn.*=CUDA0" -ot "blk\.(14|15|16|17|18|19|20|21|22|23|24|25|26|27)\.ffn.*=CUDA1" -ot "blk\.(28|29|30|31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|)\.ffn.*=CUDA2" -ot "blk\.(47|48|49|50|51|52|53|54|55|56|57|58|59|60|61|62|63|64|65|66|67|68|69|70|71|72|73|74|75|76|77|78)\.ffn.*=CUDA3" -ot "ffn.*=CPU"`
And speeds are:
`prompt eval time = 17405.76 ms / 3358 tokens ( 5.18 ms per token, 192.92 tokens per second)`
`eval time = 92420.55 ms / 1549 tokens ( 59.66 ms per token, 16.76 tokens per second)`
Model is pretty good at this point, and speeds are still acceptable. But this case is where ik llamacpp shines.
# Qwen3 235B (Q4_K_XL, ik llamacpp)
ik llamacpp with some extra parameters makes the models run faster when offloading. If you're wondering why I didn't do the same for DeepSeek V3 0324, it's because mainline llamacpp's MLA quants are incompatible with ik llamacpp's MLA, which was implemented earlier via a different method.
Command to run it was:
`./llama-server -m '/GGUFs/Qwen3-235B-A22B-128K-UD-Q4_K_XL-00001-of-00003.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ot "blk\.(0|1|2|3|4|5|6|7|8|9|10|11|12|13|13)\.ffn.*=CUDA0" -ot "blk\.(14|15|16|17|18|19|20|21|22|23|24|25|26|27)\.ffn.*=CUDA1" -ot "blk\.(28|29|30|31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|)\.ffn.*=CUDA2" -ot "blk\.(47|48|49|50|51|52|53|54|55|56|57|58|59|60|61|62|63|64|65|66|67|68|69|70|71|72|73|74|75|76|77|78)\.ffn.*=CUDA3" -ot "ffn.*=CPU" -fmoe -amb 1024 -rtr`
And speeds are:
`prompt eval time = 15739.89 ms / 3358 tokens ( 4.69 ms per token, 213.34 tokens per second)`

`generation eval time = 66275.69 ms / 1067 runs ( 62.11 ms per token, 16.10 tokens per second)`
So basically 10% more speed in PP and similar generation t/s.
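That 10% figure comes straight from the PP tokens/sec of the two runs; a tiny helper to compute it:

```python
def pp_speedup_pct(base_tps, new_tps):
    """Percent prompt-processing speedup of new vs base tokens/sec."""
    return 100.0 * (new_tps - base_tps) / base_tps

# llamacpp (192.92 t/s) vs ik llamacpp (213.34 t/s) PP from the runs above
print(round(pp_speedup_pct(192.92, 213.34), 1))  # 10.6
```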
# Qwen3 235B (Q6_K, llamacpp)
This is the point where models get really close to Q8 and then to F16. This was more for test purposes, but it is still very usable.

This uses about 70GB RAM and the rest on VRAM.

Command to run it was:
`./llama-server -m '/models_llm/Qwen3-235B-A22B-128K-Q6_K-00001-of-00004.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ot "blk\.(0|1|2|3|4|5|6|7|8)\.ffn.*=CUDA0" -ot "blk\.(9|10|11|12|13|14|15|16|17)\.ffn.*=CUDA1" -ot "blk\.(18|19|20|21|22|23|24|25|26|27|28|29|30)\.ffn.*=CUDA2" -ot "blk\.(31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|47|48|49|50|51|52)\.ffn.*=CUDA3" -ot "ffn.*=CPU"`
And speeds are:

`prompt eval time = 57152.69 ms / 3877 tokens ( 14.74 ms per token, 67.84 tokens per second)`

`eval time = 38705.90 ms / 318 tokens ( 121.72 ms per token, 8.22 tokens per second)`
# Qwen3 235B (Q6_K, ik llamacpp)
ik llamacpp gives a huge increase in PP performance.
Command to run was:
`./llama-server -m '/models_llm/Qwen3-235B-A22B-128K-Q6_K-00001-of-00004.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ot "blk\.(0|1|2|3|4|5|6|7|8)\.ffn.*=CUDA0" -ot "blk\.(9|10|11|12|13|14|15|16|17)\.ffn.*=CUDA1" -ot "blk\.(18|19|20|21|22|23|24|25|26|27|28|29|30)\.ffn.*=CUDA2" -ot "blk\.(31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|47|48|49|50|51|52)\.ffn.*=CUDA3" -ot "ffn.*=CPU" -fmoe -amb 512 -rtr`
And speeds are:
`prompt eval time = 36897.66 ms / 3877 tokens ( 9.52 ms per token, 105.07 tokens per second)`

`generation eval time = 143560.31 ms / 1197 runs ( 119.93 ms per token, 8.34 tokens per second)`
Basically \~55% more PP performance (105.07 vs 67.84 t/s) and similar generation speed.
# c4ai-command-a-03-2025 111B (Q6_K, llamacpp)
I have particularly liked the Command A models, and I also feel this model is great. Ran on GPU only.
Command to run it was:
`./llama-server -m '/GGUFs/CohereForAI_c4ai-command-a-03-2025-Q6_K-merged.gguf' -c 32768 -ngl 99 -ts 10,11,17,20 --no-warmup`
And speeds are:
`prompt eval time = 4101.94 ms / 3403 tokens ( 1.21 ms per token, 829.61 tokens per second)`
`eval time = 46452.40 ms / 472 tokens ( 98.42 ms per token, 10.16 tokens per second)`
**For reference: EXL2 with the same quant size gets \~12 t/s.**
Cache size is 8GB for 32K and 16GB for 64K.
# Mistral Large 2411 123B (Q4_K_M, llamacpp)
I've also been a fan of the Mistral Large models, as they work pretty well!
Command to run it was:
`./llama-server -m '/run/media/pancho/DE1652041651DDD9/HuggingFaceModelDownloader/Storage/GGUFs/Mistral-Large-Instruct-2411-Q4_K_M-merged.gguf' -c 32768 -ngl 99 -ts 7,7,10,5 --no-warmup`
And speeds are:
`prompt eval time = 4427.90 ms / 3956 tokens ( 1.12 ms per token, 893.43 tokens per second)`
`eval time = 30739.23 ms / 387 tokens ( 79.43 ms per token, 12.59 tokens per second)`
Cache size is quite big: 12GB for 32K and 24GB for 64K. In fact, it is so big that if I want to load the model across 3 GPUs (since its size is 68GB) I need to use flash attention.
**For reference: EXL2 with this same size gets 25 t/s with Tensor Parallel enabled, and 16-20 t/s at 6.5bpw (EXL2 lets you use TP with uneven VRAM).**
That's all the tests I have been running lately! I have been testing for both coding (Python, C, C++) and RP. Not sure if you guys are interested in which one I prefer for each task, or in a ranking.
Any question is welcome!
| 2025-05-05T01:19:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kezq68/speed_metrics_running_deepseekv3_0324qwen3_235b/
|
panchovix
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kezq68
| false | null |
t3_1kezq68
|
/r/LocalLLaMA/comments/1kezq68/speed_metrics_running_deepseekv3_0324qwen3_235b/
| false | false |
self
| 108 |
|
What local models are actually good at generating UI’s?
| 7 |
I’ve looked into UIGEN, and while some of its examples do look good, it oddly seems worse than Qwen 8B?
| 2025-05-05T02:23:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf0x6n/what_local_models_are_actually_good_at_generating/
|
Capable-Ad-7494
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf0x6n
| false | null |
t3_1kf0x6n
|
/r/LocalLLaMA/comments/1kf0x6n/what_local_models_are_actually_good_at_generating/
| false | false |
self
| 7 | null |
Qwen3-32B-IQ4_XS GGUFs - MMLU-PRO benchmark comparison
| 125 |
Since IQ4\_XS is my favorite quant for 32B models, I decided to run some benchmarks to compare IQ4\_XS GGUFs from different sources.
**MMLU-PRO 0.25 subset, 0 temp, No Think, IQ4\_XS, Q8 KV Cache**
The entire benchmark took ***11 hours, 37 minutes, and 30 seconds.***
https://preview.redd.it/9ptc0cl2svye1.png?width=2475&format=png&auto=webp&s=06a3b551fba60a33877f8e67af9932e381a15cc6
gguf source:
[https://huggingface.co/unsloth/Qwen3-32B-GGUF/blob/main/Qwen3-32B-IQ4\_XS.gguf](https://huggingface.co/unsloth/Qwen3-32B-GGUF/blob/main/Qwen3-32B-IQ4_XS.gguf)
[https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF/blob/main/Qwen3-32B-128K-IQ4\_XS.gguf](https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF/blob/main/Qwen3-32B-128K-IQ4_XS.gguf)
[https://huggingface.co/bartowski/Qwen\_Qwen3-32B-GGUF/blob/main/Qwen\_Qwen3-32B-IQ4\_XS.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-32B-GGUF/blob/main/Qwen_Qwen3-32B-IQ4_XS.gguf)
[https://huggingface.co/mradermacher/Qwen3-32B-i1-GGUF/blob/main/Qwen3-32B.i1-IQ4\_XS.gguf](https://huggingface.co/mradermacher/Qwen3-32B-i1-GGUF/blob/main/Qwen3-32B.i1-IQ4_XS.gguf)
| 2025-05-05T03:21:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf1yg9/qwen332biq4_xs_ggufs_mmlupro_benchmark_comparison/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf1yg9
| false | null |
t3_1kf1yg9
|
/r/LocalLLaMA/comments/1kf1yg9/qwen332biq4_xs_ggufs_mmlupro_benchmark_comparison/
| false | false | 125 |
|
|
Computer-Use Model Capabilities
| 18 |
https://www.trycua.com/blog/build-your-own-operator-on-macos-2#computer-use-model-capabilities
| 2025-05-05T03:47:52 |
Impressive_Half_2819
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf2ezy
| false | null |
t3_1kf2ezy
|
/r/LocalLLaMA/comments/1kf2ezy/computeruse_model_capabilities/
| false | false | 18 |
|
||
Sophia NLU (natural language understanding) Engine
| 1 |
If you're into AI agents, you've probably found it's a struggle to figure out what users are saying. You're essentially stuck either pinging an LLM like ChatGPT and asking for a JSON object, or using a bulky and complex Python implementation like NLTK, SpaCy, Rasa, et al.
Latest iteration of the open source Sophia NLU (natural language understanding) engine just dropped, with full details including online demo at:
https://cicero.sh/sophia/
Developed in Rust, its key differentiator being its self-contained and lightweight nature. No external dependencies or API calls. It processes about 20,000 words/sec and ships two different vocabulary data stores -- the base is a simple 79MB with 145k words, while the full vocab is 177MB with 914k words. This is a massive boost compared to the Python systems out there, which are multi-gigabyte installs and process at best 300 words/sec.
Has a built-in POS tagger, named entity recognition, phrase interpreter, anaphora resolution, auto-correction of spelling typos, a multi-hierarchical categorization system allowing you to easily map clusters of words to actions, etc. A nice localhost RPC server lets you run it easily from any programming language; see the Implementation page for code examples.
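The post doesn't spell out the RPC wire format, so treat this as a purely hypothetical sketch: it assumes a JSON-over-HTTP endpoint on localhost, and the method name and payload shape are my guesses, not Sophia's documented interface (check the Implementation page for the real examples):

```python
import json

# Hypothetical request builder for a localhost RPC server; the method name
# and payload shape below are assumptions, NOT Sophia's actual API.
def build_interpret_request(text, req_id=1):
    return json.dumps({"jsonrpc": "2.0", "method": "interpret",
                       "params": {"text": text}, "id": req_id})

payload = build_interpret_request("visit google.com")
print(json.loads(payload)["params"]["text"])  # visit google.com
```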
Unfortunately, there are still slight issues with the POS tagger due to a noun-heavy bias in the data. It was trained on 229 million tokens using a 3-of-4 consensus score across 4 POS taggers, but PyTorch-based taggers are terrible. No matter, it's all easily fixable within a week; details of the problem and solution here if interested:
https://cicero.sh/forums/thread/sophia-nlu-engine-v1-0-released-000005#p6
An advanced contextual awareness upgrade is in the works and should hopefully be out within a few weeks, which will be a massive boost and allow it to differentiate, for example, "visit google.com", "visit Mark's idea", "visit the store", "visit my parents", etc. It will also have a much more advanced hybrid phrase interpreter, along with the categorization system being flipped into vector scoring for better clustering and granular filtering of words.
The NLU engine itself is free and open source; Github and crates.io links are available on the site. However, I have no choice but to do the typical dual-license model and also offer premium licenses, because life likes to have fun with me. Currently out of runway; not going to get into it here. If interested, there's a quick 6 min audio giving an intro / back story at:
https://youtu.be/bkpuo1EtElw
I need something to happen, as I only have an RTX 3050 for compute, which is not enough to fix the POS tagger. So I'll make you a deal. The current premium price is about a third of what it will be once the contextual awareness upgrade is released.
Grab a copy now and you get instant access to the binary app with SDK, a new vocab data store in a week with the fixed POS tagger open sourced, then in a few weeks the contextual awareness upgrade, which will be a massive improvement, at which point the price will triple; plus my guarantee that I'll do everything in my power to ensure Sophia becomes the de facto world-leading NLU engine.
If you're into deploying AI agents of any kind, this is an excellent tool in your kit. Instead of pinging ChatGPT for JSON objects and getting unpredictable results, this is a nice, self-contained little package that resides on your server, is blazingly fast, produces the same reliable and predictable results each time, keeps all data local and private to you, and has no monthly API bills. It's a sweet deal.
Besides, it's for an excellent cause. You can read full manifest of Cicero project in "Origins and End Goals" post at:
https://cicero.sh/forums/thread/cicero-origins-and-end-goals-000004
If you made it this far, thanks for listening. Feel free to reach out directly at [email protected] and happy to engage, get you on the phone if desired, et al.
Full details on Sophia including open source download at: https://cicero.sh/sophia/
| 2025-05-05T03:51:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf2gzx/sophia_nlu_natural_language_understanding_engine/
|
mdizak
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf2gzx
| false | null |
t3_1kf2gzx
|
/r/LocalLLaMA/comments/1kf2gzx/sophia_nlu_natural_language_understanding_engine/
| false | false |
self
| 1 | null |
Training Computer-Use Models: Creating Human Trajectories with C/ua.
| 0 |
https://www.trycua.com/blog/training-computer-use-models-trajectories-1
Want to help make AI better at using computers? We just released a guide on creating human trajectory datasets with C/ua.
| 2025-05-05T04:37:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf38kc/training_computeruse_models_creating_human/
|
Impressive_Half_2819
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf38kc
| false | null |
t3_1kf38kc
|
/r/LocalLLaMA/comments/1kf38kc/training_computeruse_models_creating_human/
| false | false |
self
| 0 | null |
Low cost Multilingual TTS solutions
| 1 |
[removed]
| 2025-05-05T04:45:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf3d7e/low_cost_multilingual_tts_solutions/
|
Safe_Wheel4786
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf3d7e
| false | null |
t3_1kf3d7e
|
/r/LocalLLaMA/comments/1kf3d7e/low_cost_multilingual_tts_solutions/
| false | false |
self
| 1 | null |