Dataset schema (per the Hugging Face dataset viewer column statistics):

- title — string, length 1 to 300
- score — int64, 0 to 8.54k
- selftext — string, length 0 to 40k
- created — timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url — string, length 0 to 878
- author — string, length 3 to 20
- domain — string, length 0 to 82
- edited — timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded — int64, 0 to 2
- gildings — string, 7 classes
- id — string, length 7
- locked — bool, 2 classes
- media — string, length 646 to 1.8k
- name — string, length 10
- permalink — string, length 33 to 82
- spoiler — bool, 2 classes
- stickied — bool, 2 classes
- thumbnail — string, length 4 to 213
- ups — int64, 0 to 8.54k
- preview — string, length 301 to 5.01k
I don't think we should consider Claude in the AI race anymore. Their valuation is going to drop, no doubt. There will be no legacy because it never started; they were only relevant last year, and this year they will vanish and nobody will remember their name
0
There are too many products providing better value, and they're free. Claude is just too aggressive with censorship and isn't providing any value; even open-source models are better than their top model. You know what they did? They just made their employees rich, lol. I'm sure everyone at that company is now a millionaire
2025-05-06T15:46:13
https://i.redd.it/6ls7tz2ql6ze1.png
Select_Dream634
i.redd.it
1970-01-01T00:00:00
0
{}
1kg7szt
false
null
t3_1kg7szt
/r/LocalLLaMA/comments/1kg7szt/i_dont_think_from_now_we_should_considered_the/
false
false
https://external-preview…260a3d971263d659
0
{'enabled': True, 'images': [{'id': 'OOSqTfXjoT8oz1Ma2tWRtGtikw5xxki5xSd66o8n2Yg', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?width=108&crop=smart&auto=webp&s=64ded76024e3eeae3e0228e9ff5fbbfd9ddb7442', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?width=216&crop=smart&auto=webp&s=5c9cbcc2386c76855283219e62faa98e2751ec24', 'width': 216}, {'height': 273, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?width=320&crop=smart&auto=webp&s=a75b12734ce07ed2583041993b1c4f71938f343e', 'width': 320}, {'height': 546, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?width=640&crop=smart&auto=webp&s=2a97fe9df07ee5d1266451817f056f3d5b749aa8', 'width': 640}, {'height': 819, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?width=960&crop=smart&auto=webp&s=f0b8746d94ac169475b5c00718b9525946c7509b', 'width': 960}, {'height': 921, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?width=1080&crop=smart&auto=webp&s=09c149b7165eef02eea042cf25a511c104583ff1', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/6ls7tz2ql6ze1.png?auto=webp&s=bb8bf4ea3ce32243d6f1cfe6397078ecca59d2f9', 'width': 3600}, 'variants': {}}]}
Base vs Instruct for embedding models. What's the difference?
2
For the life of me, I can't understand why an instruct variant would be needed for an embedding model. I understand and use instruct models for inference with LLMs, but now that I'm working with embeddings, I just can't wrap my head around the idea.

For example, this makes perfect sense to me: [https://huggingface.co/intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)

However, I don't understand the added benefit (if any) of prepending an instruction to the prompts, as here: [https://huggingface.co/intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct)

The context is the same: same passage, same knowledge, with or without the instruction prepended. What's the difference? When should you use which?
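One concrete way to see the difference: per the multilingual-e5 model cards, the base model expects fixed `query: `/`passage: ` prefixes, while the instruct variant expects a free-form task description prepended to queries only (documents stay bare), so one checkpoint can express different notions of relevance. A minimal sketch of the two input formats (the task string below is illustrative, not from the post):

```python
def e5_base_inputs(queries, passages):
    # Base e5: fixed role prefixes on both sides; one baked-in similarity notion.
    return ([f"query: {q}" for q in queries],
            [f"passage: {p}" for p in passages])

def e5_instruct_inputs(task, queries, passages):
    # e5-instruct: a task description conditions the query embedding;
    # passages are embedded without any prefix.
    return ([f"Instruct: {task}\nQuery: {q}" for q in queries], list(passages))

# Example: the same query text, embedded for a retrieval task.
qs, ps = e5_instruct_inputs(
    "Given a web search query, retrieve relevant passages that answer the query",
    ["what is a base model?"],
    ["A base model is a pretrained checkpoint before instruction tuning."],
)
```

The practical payoff is that swapping the instruction (e.g. retrieval vs. deduplication vs. classification) reuses the same instruct model for different tasks, which a base embedding model cannot do.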
2025-05-06T15:53:44
https://www.reddit.com/r/LocalLLaMA/comments/1kg7zsb/base_vs_instruct_for_embedding_models_whats_the/
No-Break-7922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg7zsb
false
null
t3_1kg7zsb
/r/LocalLLaMA/comments/1kg7zsb/base_vs_instruct_for_embedding_models_whats_the/
false
false
self
2
{'enabled': False, 'images': [{'id': 'yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?width=108&crop=smart&auto=webp&s=2b719b81f6ca9013fa180bc1687afc5a8eabfd9f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?width=216&crop=smart&auto=webp&s=8576ef777b96f487a73b495621467e97d8aabe5f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?width=320&crop=smart&auto=webp&s=a4e242be30cdc6a915dc59cbf9548809be132379', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?width=640&crop=smart&auto=webp&s=391a3a6bdc824e9b1cc0ca7271f5404f00a346d2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?width=960&crop=smart&auto=webp&s=a636b20d9ed21ce96747d2ba0ce028dc380b58c0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?width=1080&crop=smart&auto=webp&s=b53eed701f7823429a678a2463b8b1b46ae9020c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yk1Jhur895HBhohRWP9rWcRD7DYI0OqsKu_DyOqyZZ0.png?auto=webp&s=f8c40bc2431b6dd04f4f9b3affebf028e91daed3', 'width': 1200}, 'variants': {}}]}
Local VLM for Chart/Image Analysis and understanding on base M3 Ultra? Qwen 2.5 & Gemma 27B Not Cutting It.
1
Hi all, I'm looking for recommendations for a local Vision Language Model (VLM) that excels at chart and image understanding, specifically running on my Mac Studio M3 Ultra with 96GB of unified memory. I've tried Qwen 2.5 and Gemma 27B (8-bit MLX version), but they're struggling with accuracy on tasks like:

- Explaining tables: they often invent random values.
- Converting charts to tables: significant hallucination and incorrect structuring.

I've noticed Gemini Flash performs much better on these. Are there any local VLMs you'd suggest that can deliver more reliable and accurate results for these specific chart/image interpretation tasks? Appreciate any insights or recommendations!
2025-05-06T15:54:44
https://www.reddit.com/r/LocalLLaMA/comments/1kg80ps/local_vlm_for_chartimage_analysis_and/
Own_Editor8742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg80ps
false
null
t3_1kg80ps
/r/LocalLLaMA/comments/1kg80ps/local_vlm_for_chartimage_analysis_and/
false
false
self
1
null
LLama 4 Maverick Finetuning for OCR from Food Packaging
1
[removed]
2025-05-06T16:02:03
https://www.reddit.com/r/LocalLLaMA/comments/1kg87fs/llama_4_maverick_finetuning_for_ocr_from_food/
No-Reindeer-9968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg87fs
false
null
t3_1kg87fs
/r/LocalLLaMA/comments/1kg87fs/llama_4_maverick_finetuning_for_ocr_from_food/
false
false
self
1
null
What's the best TTS model for training a new language?
1
[removed]
2025-05-06T16:23:35
https://www.reddit.com/r/LocalLLaMA/comments/1kg8qlw/whats_the_best_tts_model_for_training_a_new/
Delt_a1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg8qlw
false
null
t3_1kg8qlw
/r/LocalLLaMA/comments/1kg8qlw/whats_the_best_tts_model_for_training_a_new/
false
false
self
1
null
What are the main use cases for smaller models?
0
I see a lot of hype around this, and many people talk about privacy and of course edge devices. I would argue that a massive use case for smaller models in multi-agent systems is actually AI safety. Curious what has others in this thread so excited about them.
2025-05-06T16:27:38
https://www.reddit.com/r/LocalLLaMA/comments/1kg8u6m/what_are_the_main_use_cases_for_smaller_models/
omnisvosscio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg8u6m
false
null
t3_1kg8u6m
/r/LocalLLaMA/comments/1kg8u6m/what_are_the_main_use_cases_for_smaller_models/
false
false
self
0
null
I have 4x3090, what is the cheapest option to create a local LLM?
1
As the title says, I have four 3090s lying around, remnants of crypto mining years ago that I kept for AI workloads like Stable Diffusion. So I thought I could build my own local LLM server. So far, my research yielded this: the cheapest option would be a used Threadripper + X399 board, which would give me enough PCIe lanes for all 4 GPUs and enough slots for at least 128GB of RAM. Is this the cheapest option, or am I missing something?
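For sizing what a 4x24 GB rig can host, a rough back-of-envelope helps (assumptions not from the post: weights dominate, plus roughly 20% overhead for KV cache and activations):

```python
def est_vram_gb(params_b, bits_per_weight, overhead=1.2):
    """Very rough VRAM estimate in GB: weight bytes padded ~20% for
    KV cache and activations (the overhead factor is an assumption)."""
    return params_b * bits_per_weight / 8 * overhead

TOTAL_GB = 4 * 24  # four 3090s

for params, bits in [(70, 4), (70, 8), (123, 4), (123, 8)]:
    need = est_vram_gb(params, bits)
    verdict = "fits" if need <= TOTAL_GB else "too big"
    print(f"{params}B @ {bits}-bit: ~{need:.0f} GB -> {verdict}")
```

By this estimate, a 70B model fits comfortably even at 8-bit, while something in the 123B class only fits quantized; the CPU/board mainly matters for PCIe lanes and offload headroom, not the math above.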
2025-05-06T16:39:20
https://www.reddit.com/r/LocalLLaMA/comments/1kg94o2/i_have_4x3090_what_is_the_cheapest_options_to/
DeMischi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg94o2
false
null
t3_1kg94o2
/r/LocalLLaMA/comments/1kg94o2/i_have_4x3090_what_is_the_cheapest_options_to/
false
false
self
1
null
New SOTA music generation model
1
ACE-Step is a multilingual 3.5B-parameter music generation model. They released training code and LoRA training code, and will release more soon. It supports 19 languages, instrumental styles, vocal techniques, and more. I'm pretty excited because it's really good; I've never heard anything like it. Project website: https://ace-step.github.io/ GitHub: https://github.com/ace-step/ACE-Step HF: https://huggingface.co/ACE-Step/ACE-Step-v1-3.5B
2025-05-06T16:52:05
https://v.redd.it/35iizpjty6ze1
PearSilicon
/r/LocalLLaMA/comments/1kg9fv4/new_sota_music_generation_model/
1970-01-01T00:00:00
0
{}
1kg9fv4
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/35iizpjty6ze1/DASHPlaylist.mpd?a=1749271929%2CYzJmNzAzNTZiZTIwMGM1OWQ1ZjEyMWM2NGUyMjhmOTMxYjcxOTU2OTgzNzAzMTVhOGZjNWM1MTg1MjIzY2M3Ng%3D%3D&v=1&f=sd', 'duration': 152, 'fallback_url': 'https://v.redd.it/35iizpjty6ze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/35iizpjty6ze1/HLSPlaylist.m3u8?a=1749271929%2CMjFlMTM1MzJjNTM3YTNhM2ExNDViYzZkMjNhZWI0N2U2NjhlMDg0NGNlYWNhN2ViOTRhM2MzNzlmMTNhMGYyNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/35iizpjty6ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1kg9fv4
/r/LocalLLaMA/comments/1kg9fv4/new_sota_music_generation_model/
false
false
https://external-preview…987300df23bc5953
1
{'enabled': False, 'images': [{'id': 'OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=108&crop=smart&format=pjpg&auto=webp&s=ce920db87288c32ea0cba9ce8c75158f826bcaa7', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=216&crop=smart&format=pjpg&auto=webp&s=4b6f057a75ea74639df94ece744f126926685a75', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=320&crop=smart&format=pjpg&auto=webp&s=eca75c0c2dcfd4ea8faa534201833b63aad73462', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=640&crop=smart&format=pjpg&auto=webp&s=44c95fa723dbe7b21084828a068825052ed44bf3', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=960&crop=smart&format=pjpg&auto=webp&s=6f518bed154c2c31017fdb531f3434eed1aca3db', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d3344989639f0b2c51046aff7616f4c765ac5a13', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OXVoMDN4Z3R5NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?format=pjpg&auto=webp&s=459fb59a1746fe884f57882f48bfe57568b4c60f', 'width': 1080}, 'variants': {}}]}
New SOTA music generation model
897
ACE-Step is a multilingual 3.5B-parameter music generation model. They released training code and LoRA training code, and will release more soon. It supports 19 languages, instrumental styles, vocal techniques, and more. I'm pretty excited because it's really good; I've never heard anything like it. Project website: https://ace-step.github.io/ GitHub: https://github.com/ace-step/ACE-Step HF: https://huggingface.co/ACE-Step/ACE-Step-v1-3.5B
2025-05-06T16:56:14
https://v.redd.it/gf0uynfhz6ze1
topiga
/r/LocalLLaMA/comments/1kg9jkq/new_sota_music_generation_model/
1970-01-01T00:00:00
0
{}
1kg9jkq
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gf0uynfhz6ze1/DASHPlaylist.mpd?a=1749272176%2CNWRjZjdiZjg4NWY2NGFlOGQwYjAyNWY3MzE1Y2JiMWRkNzkzMjNjOWZhZDQyN2FlMzQ1YTM5ZWZmZmZjMGIyNQ%3D%3D&v=1&f=sd', 'duration': 152, 'fallback_url': 'https://v.redd.it/gf0uynfhz6ze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gf0uynfhz6ze1/HLSPlaylist.m3u8?a=1749272176%2CYmIwNGUwZWU4ZmNhNjA5MDY3NmUwZmVjNzZmZmFkOTQ0YjZiNDczZGM3M2JjMWFiMDYwNzkwNDE0MWFiODc2ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gf0uynfhz6ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1kg9jkq
/r/LocalLLaMA/comments/1kg9jkq/new_sota_music_generation_model/
false
false
https://external-preview…4b49a7b01ca83a06
897
{'enabled': False, 'images': [{'id': 'N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=108&crop=smart&format=pjpg&auto=webp&s=a3eb69bffe72259329fff150c659794dd58382bf', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=216&crop=smart&format=pjpg&auto=webp&s=1b80fff3f91b0a333a983c74d3fb018f04414624', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=320&crop=smart&format=pjpg&auto=webp&s=bbab74120bfe587f940e21c195fb4b1b30587ea5', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=640&crop=smart&format=pjpg&auto=webp&s=0217c77f0ce4a9f6ac9efb576402758ad7d8f25d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=960&crop=smart&format=pjpg&auto=webp&s=cb078b9f412f662b0a6334d03476ff274bf7d323', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8ce3dfd7235433b1cac4fe26f531393906357f65', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N2dybzhkY2h6NnplMUATahysLltY5LFjwkyeKdWeoWJNo8-MZQBD68gR6Fn5.png?format=pjpg&auto=webp&s=cca2b8ef0bee5de2f5900392c2f0bb938353189e', 'width': 1080}, 'variants': {}}]}
RTX5060TI 16gb or 3080 10gb?
1
[removed]
2025-05-06T16:57:21
https://www.reddit.com/r/LocalLLaMA/comments/1kg9kl4/rtx5060ti_16gb_or_3080_10gb/
akachan1228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg9kl4
false
null
t3_1kg9kl4
/r/LocalLLaMA/comments/1kg9kl4/rtx5060ti_16gb_or_3080_10gb/
false
false
self
1
null
How long before we start seeing ads intentionally shoved into LLM training data?
79
I was watching the new season of Black Mirror the other night, the "Common People" episode specifically. The episode touched on how ridiculous subscription tiers are and how products become "enshittified" as companies try to squeeze profit out of previously good products by larding them with ads and add-ons. There's a part of the episode where the main character starts literally serving ads without being consciously aware she's doing it: she just starts blurting out ad copy in the middle of a conversation she's having with someone (think Tourette's syndrome, but with ads instead of cursing).

Anyway, the episode got me thinking about LLMs and how companies still seem to be in a we'll-figure-out-how-to-monetize-all-this-research-stuff-later mode. At some point, there will probably be an enshittification phase for local LLMs too, right? They know all of us running this stuff at home are taking advantage of the expensive compute they paid for to train these models. How long before their investors force them to recoup that investment?

Am I wrong in thinking we will likely see ads injected directly into models' training data, to be served as contextually relevant LLM answers (like in the Black Mirror episode)? I'm envisioning it going something like this:

Me: How many R's are in Strawberry?

LLM: There are 3 R's in Strawberry. Speaking of strawberries, have you tried Driscoll's Organic Strawberries? You can find them at Sprout. 🍓 😋

Do you think we will see something like this at the training-data level or as a LoRA / QLoRA, or would that completely wreck an LLM's performance?
2025-05-06T16:59:38
https://www.reddit.com/r/LocalLLaMA/comments/1kg9mjs/how_long_before_we_start_seeing_ads_intentionally/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kg9mjs
false
null
t3_1kg9mjs
/r/LocalLLaMA/comments/1kg9mjs/how_long_before_we_start_seeing_ads_intentionally/
false
false
self
79
null
Running Qwen3-235B-A22B, and LLama 4 Maverick locally at the same time on a 6x RTX 3090 Epyc system. Qwen runs at 25 tokens/second on 5x GPU. Maverick runs at 20 tokens/second on one GPU, and CPU.
65
2025-05-06T17:10:57
https://youtu.be/36pDNgBSktY
SuperChewbacca
youtu.be
1970-01-01T00:00:00
0
{}
1kg9x4d
false
{'oembed': {'author_name': 'Chris Stephens', 'author_url': 'https://www.youtube.com/@chrisstephens9460', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/36pDNgBSktY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Running Qwen3-235B-A22B and Llama-4-Maverick-17B-128E-Instruct at the same time on 6x RTX 3090 &amp; CPU"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/36pDNgBSktY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Running Qwen3-235B-A22B and Llama-4-Maverick-17B-128E-Instruct at the same time on 6x RTX 3090 & CPU', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kg9x4d
/r/LocalLLaMA/comments/1kg9x4d/running_qwen3235ba22b_and_llama_4_maverick/
false
false
https://b.thumbs.redditm…JSwfeGp01NVg.jpg
65
{'enabled': False, 'images': [{'id': 'f-VpVJ5bgifDHMwj5DRx6s9V8E4M-u8rjOZuzyX84B0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0SS75ZjmnIRJGxHj8_wMEO0mXFgif2vTYaGkpkwpErM.jpg?width=108&crop=smart&auto=webp&s=5d262cb28d7b385936519cd8ef6b370998997433', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0SS75ZjmnIRJGxHj8_wMEO0mXFgif2vTYaGkpkwpErM.jpg?width=216&crop=smart&auto=webp&s=50bf57758f98a57f02a9b7f7a8a4ff7216f5ab65', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0SS75ZjmnIRJGxHj8_wMEO0mXFgif2vTYaGkpkwpErM.jpg?width=320&crop=smart&auto=webp&s=f91671d3cb2441f2f3cc315aee8ba80f640e57e5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0SS75ZjmnIRJGxHj8_wMEO0mXFgif2vTYaGkpkwpErM.jpg?auto=webp&s=4e40c89a41dc5de683fc80fab4f30a2c7dabceda', 'width': 480}, 'variants': {}}]}
Audio transcribe options?
5
Looking for something that can transcribe DND sessions. Audio recordings are about 4 hours long. I have a 16 core CPU, 96GB of Ram, and a 5070ti.
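For recordings this long, local transcription tools (whisper.cpp or faster-whisper would be typical picks; those names are suggestions, not from the post) usually process audio in fixed windows. A minimal sketch of the chunking arithmetic, with a small overlap so words at chunk boundaries aren't cut in half:

```python
def chunk_spans(total_s, chunk_s=600.0, overlap_s=5.0):
    """Split a recording of total_s seconds into (start, end) spans,
    each at most chunk_s long, overlapping by overlap_s seconds."""
    spans, start = [], 0.0
    while start < total_s:
        end = min(start + chunk_s, total_s)
        spans.append((start, end))
        if end >= total_s:
            break
        start = end - overlap_s  # back up slightly to catch boundary words
    return spans

spans = chunk_spans(4 * 3600)  # a 4-hour D&D session in 10-minute windows
```

Each span can then be cut out with ffmpeg and fed to the transcriber; the overlapping few seconds are deduplicated when stitching transcripts back together.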
2025-05-06T17:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1kga4m9/audio_transcribe_options/
LingonberryGreen8881
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kga4m9
false
null
t3_1kga4m9
/r/LocalLLaMA/comments/1kga4m9/audio_transcribe_options/
false
false
self
5
null
The best model for writing stories
1
[removed]
2025-05-06T17:19:49
https://www.reddit.com/r/LocalLLaMA/comments/1kga55l/the_best_model_for_writing_stories/
maorui1234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kga55l
false
null
t3_1kga55l
/r/LocalLLaMA/comments/1kga55l/the_best_model_for_writing_stories/
false
false
self
1
null
Best model to run on a homelab machine on ollama
0
We can run 32B models on dev machines with a good token rate and decent output quality, but if you need a model running background jobs 24/7 on a low-spec homelab machine, what model is best as of today?
2025-05-06T17:24:18
https://www.reddit.com/r/LocalLLaMA/comments/1kga99k/best_model_to_run_on_a_homelab_machine_on_ollama/
ich3ckmat3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kga99k
false
null
t3_1kga99k
/r/LocalLLaMA/comments/1kga99k/best_model_to_run_on_a_homelab_machine_on_ollama/
false
false
self
0
null
KoboldCpp no acceleration using 3090? 3.76T/s
1
[removed]
2025-05-06T17:33:28
https://www.reddit.com/r/LocalLLaMA/comments/1kgahi9/koboldcpp_no_acceleration_using_3090_376ts/
IllustriousArtist345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgahi9
false
null
t3_1kgahi9
/r/LocalLLaMA/comments/1kgahi9/koboldcpp_no_acceleration_using_3090_376ts/
false
false
self
1
null
Recommend me a model (May 2025)?
1
[removed]
2025-05-06T17:44:29
https://www.reddit.com/r/LocalLLaMA/comments/1kgarjb/recommend_me_a_model_may_2025/
Foreskin_and_seven
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgarjb
false
null
t3_1kgarjb
/r/LocalLLaMA/comments/1kgarjb/recommend_me_a_model_may_2025/
false
false
self
1
null
Best current model for Mac Studio M2 Ultra 192GB
1
[removed]
2025-05-06T17:46:34
https://www.reddit.com/r/LocalLLaMA/comments/1kgatd9/best_current_model_for_mac_studio_m2_ultra_192gb/
Foreskin_and_seven
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgatd9
false
null
t3_1kgatd9
/r/LocalLLaMA/comments/1kgatd9/best_current_model_for_mac_studio_m2_ultra_192gb/
false
false
self
1
null
how can I run AI models with intel graphics xe?
1
[removed]
2025-05-06T17:51:41
https://www.reddit.com/r/LocalLLaMA/comments/1kgaxy4/how_can_i_run_ai_models_with_intel_graphics_xe/
No_Farmer_495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgaxy4
false
null
t3_1kgaxy4
/r/LocalLLaMA/comments/1kgaxy4/how_can_i_run_ai_models_with_intel_graphics_xe/
false
false
self
1
null
Not happy with ~32B models. What's the minimum size of an LLM to be truly useful for engineering tasks?
0
By "useful" I mean able to handle a relatively complex, multi-faceted problem, such as designing a solar system, a basic DIY drone, or even a computer system, given clear requirements, and without an endless back-and-forth to make sure it understands those requirements.
2025-05-06T18:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1kgbolo/not_happy_with_32b_models_whats_the_minimum_size/
ParaboloidalCrest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgbolo
false
null
t3_1kgbolo
/r/LocalLLaMA/comments/1kgbolo/not_happy_with_32b_models_whats_the_minimum_size/
false
false
self
0
null
AGI current progress and when it will be achieved 100%
0
2025-05-06T18:21:42
https://www.reddit.com/gallery/1kgbpes
d4z7wk
reddit.com
1970-01-01T00:00:00
0
{}
1kgbpes
false
null
t3_1kgbpes
/r/LocalLLaMA/comments/1kgbpes/agi_current_progress_and_when_it_will_be_achieved/
false
false
https://b.thumbs.redditm…mK5x0gtOCfWA.jpg
0
null
Still build your own RAG eval system in 2025?
1
I've lately been thinking about revamping a crude eval setup for a RAG system. This self-built solution is not well maintained and could use some new features. I'm generally wary of frameworks, especially in the AI engineering space; too many contenders are moving too quickly for me to want to bet on any of them.

Requirements:

- Rules out anything externally hosted; must remain fully autonomous and open source.
- Must support any kind of model, locally hosted or API providers, ideally just using litellm as a proxy.
- Full transparency and control over prompts (for the judge LLM) and metrics (generally following the ideas behind [12-factor-agents](https://github.com/humanlayer/12-factor-agents)).
- Cost-efficient LLM judge: for example, it should be able to use embeddings-based similarity against ground-truth answers and only fall back on an LLM judge when the similarity score is below a certain threshold (RAGAS is reported to spend many times as many tokens per question as the RAG LLM itself does).
- Ability to test app layers in isolation (retrieval layer and end-to-end).
- Support for evaluating multi-turn conversations (an LLM judge/agent that dynamically interacts with the system based on some kind of playbook).
- Support for different categories of questions with different assessment metrics per category (e.g. factual quality, alignment behavior, resistance to jailbreaks, etc.).
- Integrates well with Kubernetes, OpenTelemetry, GitLab CI, etc. OTel instrumentations are already in place, and it would be nice to access the OTel trace ID in eval reports or in eval metrics exported to Prometheus.

Any thoughts on that? Are you using frameworks that support all or most of what I want, and are you happy with them? Or would you recommend sticking with a custom self-made solution?
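The similarity-first, judge-on-fallback idea can be sketched in a few lines: score with embedding cosine similarity against the ground truth, and only invoke the expensive LLM judge below a threshold. Everything here is a sketch under stated assumptions (`llm_judge` is a placeholder callable, not a real API; the 0.85 threshold is illustrative):

```python
import math

def cosine(a, b):
    # Plain cosine similarity over two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def grade(answer_vec, truth_vec, threshold=0.85, llm_judge=None):
    """Cheap path first: embedding similarity against the ground-truth
    answer; fall back to the (costly) LLM judge only below the threshold."""
    sim = cosine(answer_vec, truth_vec)
    if sim >= threshold:
        return {"score": sim, "judge_used": False}
    verdict = llm_judge() if llm_judge is not None else None
    return {"score": sim, "judge_used": True, "verdict": verdict}
```

With a calibrated threshold, most questions never touch the judge, which is exactly the token saving the RAGAS comparison above is about.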
2025-05-06T18:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1kgbqbo/still_build_your_own_rag_eval_system_in_2025/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgbqbo
false
null
t3_1kgbqbo
/r/LocalLLaMA/comments/1kgbqbo/still_build_your_own_rag_eval_system_in_2025/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GQAkFe4nG8W_NK3Ul6ovhu-bghG43ssKgzPuaXbQ8FQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?width=108&crop=smart&auto=webp&s=10bd59809e6bfb499f4ffa763e75f8b5ef570198', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?width=216&crop=smart&auto=webp&s=5ac5129ab71ba4da0a46557d0181cccc51e7a6af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?width=320&crop=smart&auto=webp&s=f0a0b8f9f3db64079052d4ed16a47fa8aab2df34', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?width=640&crop=smart&auto=webp&s=20a59fcd26bfdbfad342d495f9db89c6ba27006d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?width=960&crop=smart&auto=webp&s=996592578408f3e31bd9af6a4f0a4924a02121ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?width=1080&crop=smart&auto=webp&s=9aea8e80f33b9d589b6afed70601ce1c2ec62864', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XUwWSTTjCSOL7jB5e-30YwsmXMUSDymQGlpJVY9C8CM.jpg?auto=webp&s=39311c82ead78f33b3791a254d3fa37e7cae754f', 'width': 1200}, 'variants': {}}]}
Recently saved an MSI Trident 3 from the local eWaste facility. Looking for ideas?
1
So, as the title suggests, I recently snagged an MSI Trident 3 from the local eWaste group for literal pennies. It's one of those custom-ITX "console" PCs, with the following specs. I have already securely wiped the storage and reinstalled Windows 11, but I'm willing to put Ubuntu, Arch, or another flavor of Linux on it. **System Overview** - **OS:** Windows 11 Pro 64-bit - **CPU:** Intel Core i9-10900 @ 2.80GHz - **RAM:** 64 GB DDR4 @ 1330MHz - **GPU:** NVIDIA GeForce GTX 1650 SUPER 6 GB - **Motherboard:** MSI MS-B9321 **Storage:** - **2TB Seagate SSD** - **1TB Samsung NVMe** I'm looking for ideas beyond adding yet another piece to my existing mini home lab. Are there any recent models that would fit, to turn this into an always-on LLM machine for vibe coding and general knowledge? Thanks for any suggestions in advance.
2025-05-06T18:49:30
https://www.reddit.com/r/LocalLLaMA/comments/1kgce5w/recently_saved_an_msi_trident_3_from_the_local/
NighthawkXL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgce5w
false
null
t3_1kgce5w
/r/LocalLLaMA/comments/1kgce5w/recently_saved_an_msi_trident_3_from_the_local/
false
false
self
1
null
From my local FB Marketplace...
0
https://www.facebook.com/share/1CnayYv949/
2025-05-06T18:54:12
https://www.reddit.com/r/LocalLLaMA/comments/1kgcigi/from_my_local_fb_marketplace/
synexo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgcigi
false
null
t3_1kgcigi
/r/LocalLLaMA/comments/1kgcigi/from_my_local_fb_marketplace/
false
false
self
0
null
Working on mcp-compose, inspired by docker compose.
16
2025-05-06T19:07:39
https://github.com/phildougherty/mcp-compose
RandomRobot01
github.com
1970-01-01T00:00:00
0
{}
1kgcubl
false
null
t3_1kgcubl
/r/LocalLLaMA/comments/1kgcubl/working_on_mcpcompose_inspired_by_docker_compose/
false
false
https://a.thumbs.redditm…Ne-eVlUc7in0.jpg
16
{'enabled': False, 'images': [{'id': 'oJCXcpx3lwmjT4Tk7uJ5vB8fiNt-7vIrB0FhnFEZIj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?width=108&crop=smart&auto=webp&s=ad75a7c393637aec07728611b1910a4efc88ba4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?width=216&crop=smart&auto=webp&s=4c202ee0b9446d545d62450c1419c79cea18c025', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?width=320&crop=smart&auto=webp&s=1bf88744fe1ff9d9949003757161b14e1778b40a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?width=640&crop=smart&auto=webp&s=473d6f91aa6bcf5f2e96b7bd2fc0bbd57a9b49c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?width=960&crop=smart&auto=webp&s=de67f5c2a46b631dc492a48de63712595bcb0a51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?width=1080&crop=smart&auto=webp&s=4c32bb280699611745444fd910d5d80999e3888e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hgHlf0iVM81-P-Bl7y-Cpm7xk08CYic76diQyKNjlGg.jpg?auto=webp&s=d916016b5d96d4b79f40142d45f90f1a4728850c', 'width': 1200}, 'variants': {}}]}
Trying to understand quant sizes versus inference speed
1
[removed]
2025-05-06T19:21:31
https://www.reddit.com/r/LocalLLaMA/comments/1kgd6iv/trying_to_understand_quant_sizes_versus_inference/
Primary-Wear-2460
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgd6iv
false
null
t3_1kgd6iv
/r/LocalLLaMA/comments/1kgd6iv/trying_to_understand_quant_sizes_versus_inference/
false
false
self
1
null
Kurdish TTS model Sorani - Which architecture?
0
[removed]
2025-05-06T19:24:09
https://www.reddit.com/r/LocalLLaMA/comments/1kgd8po/kurdish_tts_model_sorani_which_architecture/
The_Heaven_Dragon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgd8po
false
null
t3_1kgd8po
/r/LocalLLaMA/comments/1kgd8po/kurdish_tts_model_sorani_which_architecture/
false
false
self
0
null
Kurdish TTS model Sorani - Which architecture?
0
[removed]
2025-05-06T19:26:09
https://www.reddit.com/r/LocalLLaMA/comments/1kgdael/kurdish_tts_model_sorani_which_architecture/
The_Heaven_Dragon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgdael
false
null
t3_1kgdael
/r/LocalLLaMA/comments/1kgdael/kurdish_tts_model_sorani_which_architecture/
false
false
self
0
null
Biggest pain point when deploying local models?
1
[removed]
2025-05-06T19:30:47
https://www.reddit.com/r/LocalLLaMA/comments/1kgdefw/biggest_pain_point_when_deploying_local_models/
LiquidAI_Team
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgdefw
false
null
t3_1kgdefw
/r/LocalLLaMA/comments/1kgdefw/biggest_pain_point_when_deploying_local_models/
false
false
self
1
null
Modelos de embedding para textos largos de ollama
1
[removed]
2025-05-06T19:38:12
https://www.reddit.com/r/LocalLLaMA/comments/1kgdkw9/modelos_de_embedding_para_textos_largos_de_ollama/
Effective_Budget7594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgdkw9
false
null
t3_1kgdkw9
/r/LocalLLaMA/comments/1kgdkw9/modelos_de_embedding_para_textos_largos_de_ollama/
false
false
self
1
null
The real reason OpenAI bought WindSurf
518
For those who don’t know, today it was announced that OpenAI bought WindSurf, the AI-assisted IDE, for 3 billion USD. Previously, they tried to buy Cursor, the leading company offering an AI-assisted IDE, but didn’t agree on the details (probably the price). Therefore, they settled for the second-biggest player by market share, WindSurf. Why? A lot of people question whether this is a wise move for OpenAI, considering that these companies have limited innovation: they don’t own the models, and their IDE is just a fork of VS Code. Many argued that the reason for this purchase is the market position, the user base, since these platforms are already established with a big number of users. I disagree to some degree. It’s not about the users per se, it’s about the training data they create. It doesn’t even matter which model users choose inside the IDE; Gemini 2.5 or Sonnet 3.7, it doesn’t really matter. There is a huge market that will be created very soon, and that’s coding agents. Some rumours suggest that OpenAI would sell them for 10k USD a month! These kinds of agents/models need exactly the kind of data that these AI-assisted IDEs collect. Therefore, they paid the 3 billion to buy the training data they’d need to train their future coding agent models. What do you think?
2025-05-06T19:40:33
https://i.redd.it/knqgtodvs7ze1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1kgdmz6
false
null
t3_1kgdmz6
/r/LocalLLaMA/comments/1kgdmz6/the_real_reason_openai_bought_windsurf/
false
false
https://b.thumbs.redditm…dvT30IaoOP7o.jpg
518
{'enabled': True, 'images': [{'id': 'KaVkXuJPZfDu0D1bTLwdECl-25Yih2Lj1p7X5KMGQIU', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?width=108&crop=smart&auto=webp&s=39fe19906f9c5f6c5145aa18b4dbac1ff4288683', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?width=216&crop=smart&auto=webp&s=3799edbc4a98831732de4d3f15c538ccc859c2c9', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?width=320&crop=smart&auto=webp&s=1c9209460a5b6d56e11e883b85f3a1d02266f4cc', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?width=640&crop=smart&auto=webp&s=b31b8bf514ff9c2407608d699ee65ce7c164f986', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?width=960&crop=smart&auto=webp&s=a4d3e5c7b4993a68bd4f118fc8367c6236a675cb', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?width=1080&crop=smart&auto=webp&s=bdfe2b83cb0a7f3db13cd0cec068134476b00cc0', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/knqgtodvs7ze1.jpeg?auto=webp&s=53bdffb4db329cf852b2614b27636788d09453ca', 'width': 1536}, 'variants': {}}]}
My 3090 benchmark result (SD 1.5 Image Generation Benchmark)
0
2025-05-06T19:53:56
https://i.redd.it/emhlgm49v7ze1.jpeg
yachty66
i.redd.it
1970-01-01T00:00:00
0
{}
1kgdyp6
false
null
t3_1kgdyp6
/r/LocalLLaMA/comments/1kgdyp6/my_3090_benchmark_result_sd_15_image_generation/
false
false
https://b.thumbs.redditm…rj4BN8bRE8cY.jpg
0
{'enabled': True, 'images': [{'id': 'IJyijlvgE1d_KynN-V1eKOqS2UDmenFNcc9Itb0odWE', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?width=108&crop=smart&auto=webp&s=0718452fe56338e6a5b5eaee7122cb2a355b8132', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?width=216&crop=smart&auto=webp&s=f08f76e22f6c2ba08322fed726a16bf42c1a6537', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?width=320&crop=smart&auto=webp&s=1616390190996ef8185690d9341e215fe9f54a52', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?width=640&crop=smart&auto=webp&s=a600c487fc6c80eb2c6b0a866b5b9b700a47dd83', 'width': 640}, {'height': 774, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?width=960&crop=smart&auto=webp&s=aa4f1119c501a13e3593b2bf16d07e52b7b1b162', 'width': 960}, {'height': 871, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?width=1080&crop=smart&auto=webp&s=82c85ccee3723131a3363fa0a9a5eac224396344', 'width': 1080}], 'source': {'height': 871, 'url': 'https://preview.redd.it/emhlgm49v7ze1.jpeg?auto=webp&s=49b34a677343908d73e433b3ff52a3265796c197', 'width': 1080}, 'variants': {}}]}
i get update to my iphone 6s huh lol #apple #iPhone
1
2025-05-06T20:19:14
https://i.redd.it/scjar407z7ze1.png
Current-Gazelle-725
i.redd.it
1970-01-01T00:00:00
0
{}
1kgelp5
false
null
t3_1kgelp5
/r/LocalLLaMA/comments/1kgelp5/i_get_update_to_my_iphone_6s_huh_lol_apple_iphone/
false
false
https://b.thumbs.redditm…gozGeH7ey8rM.jpg
1
{'enabled': True, 'images': [{'id': 'TnSkQoLGEgDvw3ZRiiln1R4UzBzjhqE6sw7FRjwT3Fs', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/scjar407z7ze1.png?width=108&crop=smart&auto=webp&s=0714fe211d305f78685c8924609510ce54d43c58', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/scjar407z7ze1.png?width=216&crop=smart&auto=webp&s=0a085f527f456bfdde790bafae6f2d6d3a1cb5f6', 'width': 216}, {'height': 569, 'url': 'https://preview.redd.it/scjar407z7ze1.png?width=320&crop=smart&auto=webp&s=331428fa08aeaea000fe635cf401736f4c06614b', 'width': 320}, {'height': 1138, 'url': 'https://preview.redd.it/scjar407z7ze1.png?width=640&crop=smart&auto=webp&s=74d22e03e2c47a98487b26e8ace9b70b3d02fbb3', 'width': 640}], 'source': {'height': 1334, 'url': 'https://preview.redd.it/scjar407z7ze1.png?auto=webp&s=89ff78706f723811466e4752db973d1ab00287c4', 'width': 750}, 'variants': {}}]}
Qwen3 4b prompt format and setting s
1
I am using ChatterUI on Android (which uses llama.cpp internally). What chat format should I use, and what temperature, top-k, and other settings should I use? Also, when I increase generated tokens past 1500, the model responds as if my message is empty. Can anyone help?
2025-05-06T20:25:01
https://www.reddit.com/r/LocalLLaMA/comments/1kgeqr0/qwen3_4b_prompt_format_and_setting_s/
Killerx7c
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgeqr0
false
null
t3_1kgeqr0
/r/LocalLLaMA/comments/1kgeqr0/qwen3_4b_prompt_format_and_setting_s/
false
false
self
1
null
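Qwen3 uses the ChatML-style template, so most frontends should pick the format up automatically from the GGUF metadata. For the sampler question, the Qwen3 model cards publish recommended settings; the sketch below restates those values as a config dict, but treat the exact numbers as assumptions and double-check the card for your checkpoint. ChatterUI exposes these as temperature / top-k / top-p / min-p sliders.

```python
# Sampler settings the Qwen3 model cards recommend (values from memory;
# verify against the card for your exact checkpoint).
QWEN3_THINKING = {
    "temperature": 0.6,  # the card warns against greedy decoding in thinking mode
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
}

# Non-thinking mode reportedly uses a slightly different temperature/top_p.
QWEN3_NO_THINK = {**QWEN3_THINKING, "temperature": 0.7, "top_p": 0.8}

print(QWEN3_THINKING)
```

The empty-response symptom past 1500 tokens is often a context-length setting rather than a sampler issue, so it is also worth raising the context size in the app settings.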
Apply formatting to Jinja chat templates directly from the Hugging Face model card (+ new playground)
22
Since Jinja templates can be extremely difficult to read and edit, we decided to add formatting support to `@huggingface/jinja`, the JavaScript library we use for parsing and rendering chat templates. This also means you can format these templates directly from the model card on Hugging Face! We hope you like it and would love to hear your feedback! 🤗 You can also try it using our new Jinja playground: [https://huggingface.co/spaces/Xenova/jinja-playground](https://huggingface.co/spaces/Xenova/jinja-playground)
2025-05-06T20:25:19
https://v.redd.it/l2ajr0fnt6ze1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
1kger0a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l2ajr0fnt6ze1/DASHPlaylist.mpd?a=1749155131%2CZmUwOTgyZjI2YTExNzdhZDFkMDEwZWYyM2Y2Y2E5YTRhNWQ5MDQ5ZDJiMGFmMzEwMTA5MTNiOTFiNTMxOWVmNg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/l2ajr0fnt6ze1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1082, 'hls_url': 'https://v.redd.it/l2ajr0fnt6ze1/HLSPlaylist.m3u8?a=1749155131%2COTdjOWNhYTkzZTlkOWY5YzdlZWNmOTg3NjYzZGY4ZWRjNzZlM2M0YzI1ZjNkYmM2ZWU5YTg4ZTExOTVhMWI4OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l2ajr0fnt6ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1kger0a
/r/LocalLLaMA/comments/1kger0a/apply_formatting_to_jinja_chat_templates_directly/
false
false
https://external-preview…9e8d0e18c481579a
22
{'enabled': False, 'images': [{'id': 'MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=1effa49e580ecee45667295fb3a6c1228a501daa', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=41d81af67fc20487d91fff99aba5b562687ae469', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=f807a5d467f0fe42bc65045f87bb9610edba58bb', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=7786af7a63722288689f360a14161801722296cb', 'width': 640}, {'height': 961, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=20cde340a822ddbd666eeff48cdc27fa6f0f1316', 'width': 960}, {'height': 1081, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b420203b719ca8e23acfd15e2bd38559d62d778e', 'width': 1080}], 'source': {'height': 1602, 'url': 'https://external-preview.redd.it/MG0ycHcxZm50NnplMTkv8XYcU_q3RLacngNsWPOdQeDOAcczKW6s_baevJOJ.png?format=pjpg&auto=webp&s=35c49e6805b161432391bb3f3883e695c63da840', 'width': 1600}, 'variants': {}}]}
Homelab buying strategy
0
Hello guys, so far doing great with 2x 3090s watercooled on W790. I use both for personal and professional stuff: code, helping a friend optimise his AI workflow, translating subtitles, personal projects, and I did test and use quite a lot of models. So it works fine with 2x24GB. Now a friend of mine talks about CrewAI, another one games on his new 5090, so I feel limited. Should I go RTX Pro 6000 Blackwell? Or should I try 4x 5070Ti/5080? Or 2x 5090? I don't want to add 2 more 3090s because of power and heat... Tensor parallelism with PCIe gen 5 should play nicely, so I think multi-GPU is OK.
2025-05-06T20:45:51
https://www.reddit.com/r/LocalLLaMA/comments/1kgf97p/homelab_buying_strategy/
Opteron67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgf97p
false
null
t3_1kgf97p
/r/LocalLLaMA/comments/1kgf97p/homelab_buying_strategy/
false
false
self
0
null
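The options in the post differ mainly in total VRAM and power draw, which a quick back-of-envelope comparison makes concrete. The VRAM figures are the public specs; the TDP numbers are approximate assumptions.

```python
# VRAM and rough TDP for the configurations mentioned in the post.
# VRAM per public specs; TDP values are approximate.
options = {
    "2x RTX 3090 (current)":     {"vram_gb": 2 * 24, "tdp_w": 2 * 350},
    "4x RTX 5070 Ti":            {"vram_gb": 4 * 16, "tdp_w": 4 * 300},
    "2x RTX 5090":               {"vram_gb": 2 * 32, "tdp_w": 2 * 575},
    "1x RTX Pro 6000 Blackwell": {"vram_gb": 96,     "tdp_w": 600},
}

for name, o in sorted(options.items(), key=lambda kv: -kv[1]["vram_gb"]):
    print(f"{name:27} {o['vram_gb']:3d} GB  ~{o['tdp_w']} W")
```

At equal or greater VRAM, fewer cards generally means less heat, fewer PCIe lanes to feed, and simpler tensor parallelism, which favours the single-card option if the budget allows.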
Embedding templates for long texts.
1
[removed]
2025-05-06T20:46:04
https://www.reddit.com/r/LocalLLaMA/comments/1kgf9et/embedding_templates_for_long_texts/
Effective_Budget7594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgf9et
false
null
t3_1kgf9et
/r/LocalLLaMA/comments/1kgf9et/embedding_templates_for_long_texts/
false
false
self
1
null
Interactive Aider Dashboard
1
[removed]
2025-05-06T20:51:12
https://www.reddit.com/r/LocalLLaMA/comments/1kgfdwf/interactive_aider_dashboard/
gsurrel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgfdwf
false
null
t3_1kgfdwf
/r/LocalLLaMA/comments/1kgfdwf/interactive_aider_dashboard/
false
false
self
1
null
something I found out
0
Grok 3 has been very, very uncensored; it is willing to do some pretty nasty stuff, unlike ChatGPT/DeepSeek. Now, what I wonder is: why are there almost no models at that quality? I am not talking about a 900B model or anything, but something smaller that can be run on a 12GB VRAM card. I have looked at the UGC (or whatever it is called) benchmark, and really, the top-performing one still has stupid guardrails that Grok does not. So am I looking wrong, or do I just have a model that is too small and incapable of running uncensored and raw like Grok? Not saying I need a model locally like Grok, I am just looking for a better replacement than the ones I have now, which are not doing an amazing job. System: 32GB system RAM (already around 50% used) and 12GB VRAM, if that helps at all. Thanks in advance!
2025-05-06T20:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1kgfk4k/something_i_found_out/
Minute_Attempt3063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgfk4k
false
null
t3_1kgfk4k
/r/LocalLLaMA/comments/1kgfk4k/something_i_found_out/
false
false
self
0
null
What model to use?
1
[removed]
2025-05-06T21:15:55
https://www.reddit.com/r/LocalLLaMA/comments/1kgfzhk/what_model_to_use/
sheep_b3d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgfzhk
false
null
t3_1kgfzhk
/r/LocalLLaMA/comments/1kgfzhk/what_model_to_use/
false
false
self
1
null
How to: Setup Discord bot that connects to your LLM via ollama.
1
[removed]
2025-05-06T21:30:56
https://www.reddit.com/r/LocalLLaMA/comments/1kggcea/how_to_setup_discord_bot_that_connects_to_your/
Robots_Never_Die
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kggcea
false
null
t3_1kggcea
/r/LocalLLaMA/comments/1kggcea/how_to_setup_discord_bot_that_connects_to_your/
false
false
self
1
{'enabled': False, 'images': [{'id': '_QTobzuJkr1Zm6t-xAciOuvRRUG3sFX1cl1tVTmHCMU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_QTobzuJkr1Zm6t-xAciOuvRRUG3sFX1cl1tVTmHCMU.png?width=108&crop=smart&auto=webp&s=bdb62c26400e65a1d5708b787fc16439f7f27e51', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/_QTobzuJkr1Zm6t-xAciOuvRRUG3sFX1cl1tVTmHCMU.png?auto=webp&s=8c1b082ea5f212605d31f644c8066170e72d286f', 'width': 200}, 'variants': {}}]}
The Hugging Face Tag Mystery: How Do Datasets Get Modalities, Libraries, Formats, etc., Without YAML in README?
1
[removed]
2025-05-06T21:31:05
https://i.redd.it/ch8zd0xsa8ze1.png
MadPelmewka
i.redd.it
1970-01-01T00:00:00
0
{}
1kggcim
false
null
t3_1kggcim
/r/LocalLLaMA/comments/1kggcim/the_hugging_face_tag_mystery_how_do_datasets_get/
false
false
https://external-preview…41f23404b400495e
1
{'enabled': True, 'images': [{'id': 'mnFG-HzBlaGQMXJnOULjt8g9raDN3lj8I-dQvPHYgPM', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/ch8zd0xsa8ze1.png?width=108&crop=smart&auto=webp&s=c580c179b16cc7a9973f9312573a465ab4c9f6d4', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/ch8zd0xsa8ze1.png?width=216&crop=smart&auto=webp&s=05dd565c2fb481fce39be01716a219340f91d4ae', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/ch8zd0xsa8ze1.png?width=320&crop=smart&auto=webp&s=dbc15e9519131f7d490de15601f9a9af4255a8e3', 'width': 320}, {'height': 382, 'url': 'https://preview.redd.it/ch8zd0xsa8ze1.png?width=640&crop=smart&auto=webp&s=aa721c50c52d8016788fb456dabd8b212886d0d7', 'width': 640}, {'height': 574, 'url': 'https://preview.redd.it/ch8zd0xsa8ze1.png?width=960&crop=smart&auto=webp&s=8bb4100e165079eb08888592708ddbfe0fb5163f', 'width': 960}], 'source': {'height': 625, 'url': 'https://preview.redd.it/ch8zd0xsa8ze1.png?auto=webp&s=555deb5fd4c7939a8f9f24dc1523e2b4a9d4dfb7', 'width': 1045}, 'variants': {}}]}
I built an AI code review agent in a few hours, here's what I learned
1
2025-05-06T21:31:09
https://www.sourcebot.dev/blog/review-agent-learnings
lowpolydreaming
sourcebot.dev
1970-01-01T00:00:00
0
{}
1kggck3
false
null
t3_1kggck3
/r/LocalLLaMA/comments/1kggck3/i_built_an_ai_code_review_agent_in_a_few_hours/
false
false
https://external-preview…347f4a19b4ed4077
1
{'enabled': False, 'images': [{'id': 'ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?width=108&crop=smart&auto=webp&s=cb311ff58b75fae0f69bc23c1c6412870a75cd49', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?width=216&crop=smart&auto=webp&s=ef4e1b5294dee0f3c4cb003620fb7ff3f1eedf22', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?width=320&crop=smart&auto=webp&s=4468f9d712094a9e6cc2ea331b0b057aaf6880da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?width=640&crop=smart&auto=webp&s=82b6c8d65ef5f591bf0b64e364a5c384156ebc07', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?width=960&crop=smart&auto=webp&s=ab763dae94eb6fa3e80008a1d691242a17090a24', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?width=1080&crop=smart&auto=webp&s=d2b194217ecabdc66125af71d2e894c4ec881af5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ltXyso57yNawTJXvbaH2Yu94f0TNCBqOri7txirT5CU.png?auto=webp&s=76cbdcacdea620dcfb94c21c55777d19abb3720d', 'width': 1200}, 'variants': {}}]}
How to: Setup Discord bot that connects to your LLM via ollama.
0
[removed]
2025-05-06T21:32:38
https://github.com/RBND/Silus-Blue
Robots_Never_Die
github.com
1970-01-01T00:00:00
0
{}
1kggdsm
false
null
t3_1kggdsm
/r/LocalLLaMA/comments/1kggdsm/how_to_setup_discord_bot_that_connects_to_your/
false
false
default
0
null
We now have local computer-use! M3 Pro 18GB running both UI-TARS-1.5-7B-6bit and a macOS sequoia VM entirely locally using MLX and c/ua at ~30second/action
106
2025-05-06T21:38:03
https://v.redd.it/6okp9ioq38ze1
a6oo
v.redd.it
1970-01-01T00:00:00
0
{}
1kggif3
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6okp9ioq38ze1/DASHPlaylist.mpd?a=1749159499%2CNjJkZTEzZTVjZTRmZDRjYzExOTNkZWVjMmQ4NDFlYTg5NTE0NGFlYmY3NTY4Yjg5MTI0ZjFjOTJiZGUzZTQ3OA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/6okp9ioq38ze1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/6okp9ioq38ze1/HLSPlaylist.m3u8?a=1749159499%2CZTNhYTllNzcwMTZmNmViNWMwMWQxNDM3OTk3YzQ0MDhkNDQ0NzlkNGJiMzFiZjkyNzgwNDBkMjQ4MGRmMDJmYg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/6okp9ioq38ze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1152}}
t3_1kggif3
/r/LocalLLaMA/comments/1kggif3/we_now_have_local_computeruse_m3_pro_18gb_running/
false
false
https://external-preview…a1646dacd4d78366
106
{'enabled': False, 'images': [{'id': 'YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?width=108&crop=smart&format=pjpg&auto=webp&s=6c3a95a5def7b02465ad24ecc4a4bb41f0f3ae00', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?width=216&crop=smart&format=pjpg&auto=webp&s=dcd424d95fd4de507d8a839435dbbeff548b01ef', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?width=320&crop=smart&format=pjpg&auto=webp&s=0a331b784bb2e12ae71b36c03c618bb7ee7572c0', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?width=640&crop=smart&format=pjpg&auto=webp&s=ed1c9adc18589fcbee90b0d4ff661c66c184ba59', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?width=960&crop=smart&format=pjpg&auto=webp&s=a561a44af4a00fe5cdae6cd5e02edfd0c158ae70', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ff06acc8b4b1ebc3bf934e2794955c4d3db61904', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/YTl4dGg5ZXVkOHplMdxnn65NNKVAPJFD0pCsNWgZyolHWVTVVUjy0pasvAbK.png?format=pjpg&auto=webp&s=3982e456692d738e32078d72b6e81367c415687f', 'width': 1152}, 'variants': {}}]}
🚀 Efficient Text-to-Image with Flux.1 + DFloat11 — Lossless Compression Enables Fast Image Generation Under 24GB VRAM
1
[removed]
2025-05-06T21:52:10
https://www.reddit.com/r/LocalLLaMA/comments/1kgguek/efficient_texttoimage_with_flux1_dfloat11/
LeanModels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgguek
false
null
t3_1kgguek
/r/LocalLLaMA/comments/1kgguek/efficient_texttoimage_with_flux1_dfloat11/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=108&crop=smart&auto=webp&s=8eddf9457ad7fb2cd2c1af30dd6f15451e0b0bb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=216&crop=smart&auto=webp&s=7e6b25ae45225de0011378baff88b97279da4acf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=320&crop=smart&auto=webp&s=1aaa59a54db3d763674e177629561e7b594e0a58', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=640&crop=smart&auto=webp&s=e5ff00b02158c5cd550a647719ee31a66daaa1b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=960&crop=smart&auto=webp&s=49dfccd3e20b75f083c4e6564ceef007ee8dd510', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=1080&crop=smart&auto=webp&s=07719cc517bc10e81d39e5a698405f2d0036ba4c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?auto=webp&s=531aa38e345e574386475291f69085a202c3f771', 'width': 1200}, 'variants': {}}]}
🚀 Efficient Text-to-Image with Flux.1 + DFloat11 — Lossless Compression Enables Fast Image Generation Under 24GB VRAM
1
[removed]
2025-05-06T21:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1kggw01/efficient_texttoimage_with_flux1_dfloat11/
LeanModels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kggw01
false
null
t3_1kggw01
/r/LocalLLaMA/comments/1kggw01/efficient_texttoimage_with_flux1_dfloat11/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=108&crop=smart&auto=webp&s=8eddf9457ad7fb2cd2c1af30dd6f15451e0b0bb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=216&crop=smart&auto=webp&s=7e6b25ae45225de0011378baff88b97279da4acf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=320&crop=smart&auto=webp&s=1aaa59a54db3d763674e177629561e7b594e0a58', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=640&crop=smart&auto=webp&s=e5ff00b02158c5cd550a647719ee31a66daaa1b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=960&crop=smart&auto=webp&s=49dfccd3e20b75f083c4e6564ceef007ee8dd510', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?width=1080&crop=smart&auto=webp&s=07719cc517bc10e81d39e5a698405f2d0036ba4c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1ccvF7KXs6Bx16Z-kGTP9vs30JMK9BcX3anL8eASVPc.jpg?auto=webp&s=531aa38e345e574386475291f69085a202c3f771', 'width': 1200}, 'variants': {}}]}
I was shocked how Qwen3-235b-a22b is really good at math
50
Hello, I was searching for a “free math AI”, and I am a user of Qwen, besides DeepSeek; I haven’t used ChatGPT in a year. When I tried the strongest model from Qwen on some math questions from the 2024 Austrian state exam (Matura), I was quite shocked at how correctly it answered. I checked against the official 2024 Matura solutions PDF, and its answers were pretty much correct. I used thinking mode with the maximum thinking budget of 38,912 tokens on their website. I know that math and AI is always a topic of its own, because AI does more prediction than thinking, but I am really positive that LLMs could do almost perfect math in the future. At first I thought their claim that it excels at math was a (marketing) lie, but now I am confident in saying it can do math. So, what do you think, and do you also use this model to solve your math questions?
2025-05-06T22:09:20
https://www.reddit.com/r/LocalLLaMA/comments/1kgh8zw/i_was_shocked_how_qwen3235ba22b_is_really_good_at/
Surealistic_Sight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgh8zw
false
null
t3_1kgh8zw
/r/LocalLLaMA/comments/1kgh8zw/i_was_shocked_how_qwen3235ba22b_is_really_good_at/
false
false
self
50
null
What formats/quantization is fastest for certain CPUs or GPUs? Is this straightforward?
4
Do certain CPUs or GPUs work with certain formats faster? Or is it mainly just about accuracy trade-offs / memory / speed (as a result of using less memory due to smaller sizes, etc.), or is there more to it? I have a MacBook M1 with only 8GB, but it got me wondering if I should be choosing certain types of models on my MacBook and certain types on my i5-12600K/no-GPU PC.
2025-05-06T22:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1kghcq8/what_formatsquantization_is_fastest_for_certain/
wuu73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kghcq8
false
null
t3_1kghcq8
/r/LocalLLaMA/comments/1kghcq8/what_formatsquantization_is_fastest_for_certain/
false
false
self
4
null
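One useful rule of thumb for this question: on both CPUs and Apple Silicon, token generation is mostly memory-bandwidth-bound, so a smaller quant is faster roughly in proportion to its size, independent of which chip you run it on. A rough estimator, where the bandwidth figures and effective bits-per-weight are assumptions, not measurements:

```python
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB, ignoring KV cache and runtime overhead."""
    return params_b * bits_per_weight / 8

def decode_tps_upper_bound(params_b: float, bits_per_weight: float, bw_gbs: float) -> float:
    """Rough ceiling on tokens/s: each generated token streams every weight once."""
    return bw_gbs / weights_gb(params_b, bits_per_weight)

# Assumed bandwidths: base M1 unified memory ~68 GB/s; dual-channel DDR4 ~50 GB/s.
# Assumed effective bits: Q8_0 ~8.5 bpw, Q4_K_M ~4.85 bpw.
for label, bits in (("~8-bit", 8.5), ("~4-bit", 4.85)):
    gb = weights_gb(7, bits)
    print(f"7B {label}: {gb:.1f} GB, <= {decode_tps_upper_bound(7, bits, 68):.0f} tok/s on M1")
```

This also shows why the 8GB M1 pushes you toward 4-bit quants of small models: an 8-bit 7B barely fits before the OS and KV cache take their share.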
Lowest Possible Idle Power LLM Server
1
[removed]
2025-05-06T22:14:11
https://www.reddit.com/r/LocalLLaMA/comments/1kghd10/lowest_possible_idle_power_llm_server/
grownupusername69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kghd10
false
null
t3_1kghd10
/r/LocalLLaMA/comments/1kghd10/lowest_possible_idle_power_llm_server/
false
false
self
1
null
could i use this gpu and this pc
1
[removed]
2025-05-06T22:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1kghk7q/could_i_use_this_gpu_and_this_pc/
Anonymous_ERRORs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kghk7q
false
null
t3_1kghk7q
/r/LocalLLaMA/comments/1kghk7q/could_i_use_this_gpu_and_this_pc/
false
false
https://b.thumbs.redditm…PqUQcVo9dSKk.jpg
1
null
Can music generation models make mashups of preexisting songs?
7
I would like to replicate the website rave.dj locally, especially since its service is super unreliable at times. Would music generation models be the solution here, or should I look into something else?
2025-05-06T23:01:34
https://www.reddit.com/r/LocalLLaMA/comments/1kgifmq/can_music_generation_models_make_mashups_of/
ishtarcrab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgifmq
false
null
t3_1kgifmq
/r/LocalLLaMA/comments/1kgifmq/can_music_generation_models_make_mashups_of/
false
false
self
7
null
Using a local runtime to run models for an open source project vs. HF transformers library
7
Today, some of the models (like [Arch Guard](https://huggingface.co/katanemo/Arch-Guard)) used in our open-source project are loaded into memory and used via the transformers library from HF. The benefit of using a library to load models is that I don't require additional prerequisite steps for developers when they download and use our local [proxy server](https://github.com/katanemo/archgw) for agents. This makes packaging and deployment easy. But the downside of using a library is that I inherit unnecessary dependency bloat, and I’m not necessarily taking advantage of runtime-level optimizations for speed, memory efficiency, or parallelism. I also give up flexibility in how the model is served—for example, I can't easily scale it across processes, share it between multiple requests efficiently, or plug into optimized model serving projects like vLLM, llama.cpp, etc. As we evolve the architecture, we’re exploring moving model execution into a dedicated runtime, and I wanted to learn from the community: how do you manage this trade-off today in other open-source projects, and what runtime would you recommend for this scenario?
2025-05-06T23:15:11
https://www.reddit.com/r/LocalLLaMA/comments/1kgiq9k/using_a_local_runtime_to_run_models_for_an_open/
AdditionalWeb107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgiq9k
false
null
t3_1kgiq9k
/r/LocalLLaMA/comments/1kgiq9k/using_a_local_runtime_to_run_models_for_an_open/
false
false
self
7
{'enabled': False, 'images': [{'id': '4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?width=108&crop=smart&auto=webp&s=8c2578516d86b12933b08297ae1726ba0d2016be', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?width=216&crop=smart&auto=webp&s=3bf5c6da05fd5eeb5f21e201643fb0f24d1e16fe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?width=320&crop=smart&auto=webp&s=1da1bf49103581283207376c1f2137e4940bd326', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?width=640&crop=smart&auto=webp&s=97d8ee85e741785a3b2a7b285aaa1a0045bf394b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?width=960&crop=smart&auto=webp&s=534ca3fe2172b49821c6c070ee615153d8de7fbc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?width=1080&crop=smart&auto=webp&s=adc3e7ced038ffda50b3777de5d4d7e838457f58', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4EP5gqzAJ_-b75BfOuT-bLseckujfqYmN6sIG_C7pAo.png?auto=webp&s=0f46725c3f87dc70e57dc7e1baf147ed086e579a', 'width': 1200}, 'variants': {}}]}
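One common shape for the dedicated-runtime option is to put the model behind an OpenAI-compatible HTTP server (vLLM and llama.cpp's server both expose one) and have the proxy speak HTTP to it. A minimal stdlib sketch of building such a request follows; the base URL and model name are placeholders, not the project's actual configuration.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, messages: list) -> urllib.request.Request:
    """Build a POST against an OpenAI-compatible /v1/chat/completions endpoint,
    as served by e.g. vLLM or llama.cpp's server. base_url/model are placeholders."""
    payload = {"model": model, "messages": messages, "temperature": 0.0}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "your-model",
                         [{"role": "user", "content": "hello"}])
print(req.full_url)  # → http://localhost:8000/v1/chat/completions
```

The trade-off this buys: the proxy stays free of torch/transformers dependencies, and the serving layer can be scaled or swapped (vLLM, SGLang, llama.cpp) without touching proxy code, at the cost of a second process to deploy.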
Blazing fast ASR / STT on Apple Silicon
61
I posted about NVIDIA's updated ASR model a few days ago, hoping someone would be motivated to create an MLX version. My internet pleas were answered by: https://github.com/senstella/parakeet-mlx Even on my old M1 8GB Air, it transcribed 11 minutes of audio in 14 seconds, about 47x real-time.
2025-05-06T23:55:15
https://www.reddit.com/r/LocalLLaMA/comments/1kgjkgf/blazing_fast_asr_stt_on_apple_silicon/
bio_risk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgjkgf
false
null
t3_1kgjkgf
/r/LocalLLaMA/comments/1kgjkgf/blazing_fast_asr_stt_on_apple_silicon/
false
false
self
61
{'enabled': False, 'images': [{'id': 'iq-6fmSuDyofwVx0YkjmzInZLzNpRb_tYl2L5EMPEjo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?width=108&crop=smart&auto=webp&s=56d063b35ba2c7e8e93d6a9f3a2baaee7fd556cf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?width=216&crop=smart&auto=webp&s=743cfaae356d32cb2d9025d48ed5bd15727c5438', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?width=320&crop=smart&auto=webp&s=c5a4a761eaec34f462a7ef9deab9b330da5402b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?width=640&crop=smart&auto=webp&s=9706377c6d27d6c8f016eb9e9ce733f7182ba5e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?width=960&crop=smart&auto=webp&s=5a500f8aa9102205a6c2874cbac2f92bf53d4f61', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?width=1080&crop=smart&auto=webp&s=d116fb8ced419df44c432de441a6319e387737ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0nEvDCL0eGNq8PPOam_d1fO6xVMBAgQ-BImyhI6aj5Y.jpg?auto=webp&s=1ee6a1804cf8d4873f827bf53194b93e7f955d51', 'width': 1200}, 'variants': {}}]}
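Taking the post's own figures, the real-time factor works out as:

```python
# Real-time factor from the figures in the post: 11 minutes of audio in 14 s.
audio_s = 11 * 60
wall_s = 14
rtf = audio_s / wall_s
print(f"~{rtf:.0f}x real-time")  # → ~47x real-time
```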
Only the new MoE models are the real Qwen3.
0
From livebench and lmarena, we can see the dense Qwen3s are only slightly better than QwQ. Architecturally speaking, they are identical to QwQ except that the number of attention heads increased from 40 to 64 and intermediate\_size decreased from 27648 to 25600 for the 32B models. Essentially, dense Qwen3 is a small tweak of QwQ plus a fine tune. On the other hand, we are seeing substantial improvement for the 235B-A22B in lmarena that puts it on par with gemma 3 27b. Based on my reading on this reddit, people seem to be getting mixed feelings when comparing Qwen3 32b to QwQ 32b. So if you are not resource rich and are happy with QwQ 32b, give Qwen3 32b a try and see how it goes. If it doesn't work well for your use case, stick with the old one. Not bothering to try Qwen3 32b shouldn't hurt you much either. On the other hand, if you have the resources, you should give 235B-A22B a try.
2025-05-07T00:41:13
https://www.reddit.com/r/LocalLLaMA/comments/1kgkido/only_the_new_moe_models_are_the_real_qwen3/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgkido
false
null
t3_1kgkido
/r/LocalLLaMA/comments/1kgkido/only_the_new_moe_models_are_the_real_qwen3/
false
false
self
0
null
Best Way to Use Groq Models with Tool Calling?
1
[removed]
2025-05-07T00:41:44
https://www.reddit.com/r/LocalLLaMA/comments/1kgkir6/best_way_to_use_groq_models_with_tool_calling/
Slow-Cauliflower-374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgkir6
false
null
t3_1kgkir6
/r/LocalLLaMA/comments/1kgkir6/best_way_to_use_groq_models_with_tool_calling/
false
false
self
1
null
AWQ 4-bit outperforms GGUF 8-bit in almost every way
24
I get GGUF's convenience, especially for CPU/Mac users, which likely drives its popularity. Great tooling, too. But on GPUs? My experience is that even 8-bit GGUF often trails behind 4-bit AWQ in responsiveness, accuracy, and coherence. This isn't a small gap. It makes me wonder if GGUF's Mac/CPU accessibility is overshadowing AWQ's raw performance advantage on GPUs, especially with backends like vLLM or SGLang where AWQ shines (lower latency, better quality). If you're on a GPU and serious about performance, AWQ seems like the stronger pick, yet it feels under-discussed.
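For context, serving an AWQ checkpoint under vLLM is a one-liner; a sketch (the model id below is just an example of a published AWQ quant, not a recommendation):

```shell
# vLLM selects its AWQ kernels when told the checkpoint is AWQ-quantized;
# the server then exposes an OpenAI-compatible endpoint on port 8000.
vllm serve Qwen/Qwen2.5-7B-Instruct-AWQ --quantization awq
```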
2025-05-07T01:03:25
https://www.reddit.com/r/LocalLLaMA/comments/1kgkyap/awq_4bit_outperforms_gguf_8bit_in_almost_every_way/
Acceptable-State-271
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgkyap
false
null
t3_1kgkyap
/r/LocalLLaMA/comments/1kgkyap/awq_4bit_outperforms_gguf_8bit_in_almost_every_way/
false
false
self
24
{'enabled': False, 'images': [{'id': 'ppqa6NCzdi5G8TQO0y_V-Alrj4HL0TEao5F2DMLCsNs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?width=108&crop=smart&auto=webp&s=7ae59317ff8eca6945964bb72fef3b2d928bddc9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?width=216&crop=smart&auto=webp&s=348652bf47f00f57322af0b783cd67e238efa10e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?width=320&crop=smart&auto=webp&s=702024f8316f76d60ce4d6a7655badb39817f0c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?width=640&crop=smart&auto=webp&s=6d93a2520e9a60f7ccb31faefb1df0b89b617535', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?width=960&crop=smart&auto=webp&s=82ccda197ddceb77c5ca670057312f45ae47391e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?width=1080&crop=smart&auto=webp&s=de81eda4656290b03e0898846dd744296c1f72cb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/C-FAj3RLHZt9Ua5u26lAIUU4H6v3wzdscf5PCaqdBSc.jpg?auto=webp&s=5c720027c53cc034b0093a313cac23ce597226e0', 'width': 1200}, 'variants': {}}]}
Is the 'using memory instead of video memory' tech mature now?
0
(I'm using Stable Diffusion + LoRA.) Note that this does not include Apple Macs, which standardized on unified memory a long time ago (Mac compute speed is too slow). I use a 4090 48G for my AI work. I've seen some posts saying that the NVIDIA driver automatically supports using system memory for AI, and some posts saying that this is not normal and that it slows things down.
2025-05-07T01:20:18
https://www.reddit.com/r/LocalLLaMA/comments/1kgla26/is_the_using_memory_instead_of_video_memory_tec/
Mois_Du_sang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgla26
false
null
t3_1kgla26
/r/LocalLLaMA/comments/1kgla26/is_the_using_memory_instead_of_video_memory_tec/
false
false
self
0
null
Sometimes looking back gives a better sense of progress
22
In Chatbot Arena I was testing Qwen 4B against state-of-the-art models from a year ago. Using the side-by-side comparison in Arena, Qwen 4B blew the older model away. Asking a question about "random number generation methods", the difference was night and day. Some of Qwen's advice was excellent. Even on historical questions Qwen was miles better. All from a model that's only 4B parameters.
2025-05-07T01:32:49
https://www.reddit.com/r/LocalLLaMA/comments/1kglith/sometimes_looking_back_gives_a_better_sense_of/
Brave_Sheepherder_39
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kglith
false
null
t3_1kglith
/r/LocalLLaMA/comments/1kglith/sometimes_looking_back_gives_a_better_sense_of/
false
false
self
22
null
Huawei Atlas 300I 32GB
37
Just saw the Huawei Atlas 300I 32GB version is now about USD265 on China Taobao.

| Parameter | Atlas 300I Inference Card (Model: 3000/3010) |
|---|---|
| Form factor | Half-height half-length PCIe standard card |
| AI processor | Ascend Processor |
| Memory | LPDDR4X, 32 GB, total bandwidth 204.8 GB/s |
| Encoding/decoding | • H.264 hardware decoding, 64-channel 1080p 30 FPS (8-channel 3840 x 2160 @ 60 FPS) • H.265 hardware decoding, 64-channel 1080p 30 FPS (8-channel 3840 x 2160 @ 60 FPS) • H.264 hardware encoding, 4-channel 1080p 30 FPS • H.265 hardware encoding, 4-channel 1080p 30 FPS • JPEG decoding: 4-channel 1080p 256 FPS; encoding: 4-channel 1080p 64 FPS; maximum resolution: 8192 x 4320 • PNG decoding: 4-channel 1080p 48 FPS; maximum resolution: 4096 x 2160 |
| PCIe | PCIe x16 Gen3.0 |
| Power consumption | Maximum: 67 W |
| Operating temperature | 0°C to 55°C (32°F to +131°F) |
| Dimensions (W x D) | 169.5 mm x 68.9 mm (6.67 in. x 2.71 in.) |

Wonder how the support is. According to their website, you can run 4 of them together. Anyone has any idea?
2025-05-07T01:48:31
https://www.reddit.com/r/LocalLLaMA/comments/1kgltqs/huawei_atlas_300i_32gb/
kruzibit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgltqs
false
null
t3_1kgltqs
/r/LocalLLaMA/comments/1kgltqs/huawei_atlas_300i_32gb/
false
false
self
37
null
Best local models for code and/or summarizing text? also decent context window..
0
I don't have a real GPU, but my CPU can work for models that fit in RAM (32GB) (I read that even the iGPU on the CPU can be used for inference, with up to half the RAM accessible). I was thinking of making an overnight code summarizer: just recursively go through all the code files of a project and 'compress' it by summarizing all functions, files, directories, etc., so when needed I can substitute a summarized file to give an LLM the info without having to give it ALL the info. Anyway, I have noticed quality going up with smaller models. Curious what people have been finding useful lately? Played around with Gemma 3, Qwen 3, and Smol (360MB). Seems not too long ago that all small models seemed to just suck completely.. although they still kinda do lol. Also curious if you can fine-tune these small ones to work better for some of the tasks that the bigger ones can do as-is. Gemma 3 seems unusually great.. like damn, 1b? whaaaat
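For the overnight-summarizer idea, the recursive walk itself is the easy part; a minimal sketch, where `summarize_fn` stands in for whatever local-model call you end up wiring in:

```python
import os
from pathlib import Path

def collect_source_files(root, exts=(".py", ".js", ".ts")):
    """Walk a project tree and return the code files to summarize, sorted."""
    found = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                found.append(os.path.join(dirpath, name))
    return sorted(found)

def summarize_project(root, summarize_fn):
    """Map each file path to the short summary produced by the model call."""
    return {path: summarize_fn(Path(path).read_text(encoding="utf-8"))
            for path in collect_source_files(root)}
```

Running this overnight is just `summarize_project("/path/to/repo", my_llm_call)`, then rolling the per-file summaries up into per-directory ones in a second pass.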
2025-05-07T02:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1kgm96e/best_local_models_for_code_andor_summarizing_text/
wuu73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgm96e
false
null
t3_1kgm96e
/r/LocalLLaMA/comments/1kgm96e/best_local_models_for_code_andor_summarizing_text/
false
false
self
0
null
Parts choices?
1
[removed]
2025-05-07T02:14:18
https://www.reddit.com/r/LocalLLaMA/comments/1kgmbli/parts_choices/
ng_uhh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgmbli
false
null
t3_1kgmbli
/r/LocalLLaMA/comments/1kgmbli/parts_choices/
false
false
self
1
null
How to run Qwen3 models inference API with enable_thinking=false using llama.cpp
10
I know vLLM and SGLang can do it easily, but what about llama.cpp? I've found a PR that aims at exactly this feature: https://github.com/ggml-org/llama.cpp/pull/13196 But the llama.cpp team doesn't seem interested.
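Until something like that PR lands, the practical workaround with llama.cpp's OpenAI-style endpoint is Qwen3's documented soft switch: append `/no_think` to the prompt. A minimal payload-builder sketch (model name and message shapes are placeholders):

```python
import json

def chat_request(messages, thinking=False):
    """Build an OpenAI-style body for llama-server's /v1/chat/completions.
    Qwen3 honors a /no_think soft switch inside the latest user message."""
    msgs = [dict(m) for m in messages]
    if not thinking and msgs and msgs[-1]["role"] == "user":
        msgs[-1]["content"] += " /no_think"
    return json.dumps({"model": "qwen3", "messages": msgs})
```

The same effect can also be had by putting `/no_think` in the system prompt, as some of the request logs floating around this sub show.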
2025-05-07T02:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1kgmlrn/how_to_run_qwen3_models_inference_api_with_enable/
soulhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgmlrn
false
null
t3_1kgmlrn
/r/LocalLLaMA/comments/1kgmlrn/how_to_run_qwen3_models_inference_api_with_enable/
false
false
self
10
{'enabled': False, 'images': [{'id': 'UND9jTxH0hpuLNN3YApwSf3Y1YZugkc7ZTZWCblHUX0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?width=108&crop=smart&auto=webp&s=f54bbb8a59cdfe3d5835b0f8e71fb9cbbfca0c69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?width=216&crop=smart&auto=webp&s=e61fce38a67113353709a8f5afe6ba522879324e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?width=320&crop=smart&auto=webp&s=aabe8b8172278fc4d3f591b3601a67726e703a36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?width=640&crop=smart&auto=webp&s=cfabc2739b531f3bf2f079bb130e6804a7f33b47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?width=960&crop=smart&auto=webp&s=71ed4976d193591ec43492ecc399b18c6aa6e1dc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?width=1080&crop=smart&auto=webp&s=75d2a72067c8cdfe5891d8d0239523787983477e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Mf3t9WijTGdPJAbkyO98ONXJJerZz4ddLWNYGS9cQ0.jpg?auto=webp&s=a9767b413236976954864c22e4b3a90ab84d9738', 'width': 1200}, 'variants': {}}]}
How do your AI agents interpret user input?
0
Let's try another tack. For those who deploy AI agents, how do you interpret your user's input, then map that to an action? I'm assuming most just ping an LLM and request a JSON object? Isn't that fraught with issues though? First the latency, plus the unpredictable nature of LLMs, which will sometimes give an invalid response that your side doesn't expect. Most importantly, don't you miss a good amount of the user input, since you're essentially just pinging an LLM with an unknown block of text and asking it to select from, say, 1 of 10 possible answers? That must be causing frustration amongst your users, and loss of business on your end, no? Isn't that why things like the Rabbit R1 and Humane AI Pin were such disasters? They were both just pinging ChatGPT asking what the user said, then going from there. I'm working on an advanced NLU engine for my own Rust-based home AI assistant, coined Cicero. I did a piss poor job explaining last time, so here, this should quickly and clearly explain the current implementation with short Python / Javascript examples: https://cicero.sh/sophia/implementation Then a contextual awareness upgrade is underway, and once done, alongside the input returned in nicely interpreted phrases with their respective verb / noun clauses broken down, it will also have vectors for questions, imperatives, declaratives, and sentiments. All will be broken down in a way that can be mapped to software. All local, no APIs, blazingly fast, etc. I'm just wondering, is it even worth it to develop that out? What would you like to see in terms of mapping user input into your software, or are you happy with pinging LLMs for JSON objects? Looking for the lay of the land here...
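For the "ping an LLM for a JSON object" camp, the invalid-response failure mode described above is usually contained with a small validation layer; a minimal sketch, with a made-up action schema:

```python
import json

# Hypothetical action vocabulary for illustration; anything outside it
# (or any malformed reply) degrades to "unknown" instead of crashing.
VALID_ACTIONS = {"play_music", "set_timer", "weather", "unknown"}

def parse_intent(llm_output):
    """Validate the model's JSON reply against the schema, with a fallback."""
    try:
        data = json.loads(llm_output)
        action = data.get("action")
        if action in VALID_ACTIONS:
            return {"action": action, "args": data.get("args", {})}
    except (json.JSONDecodeError, AttributeError, TypeError):
        pass
    return {"action": "unknown", "args": {}}
```

This doesn't fix the latency or lost-nuance problems, but it does make the "invalid response your side doesn't expect" case a handled path rather than a crash.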
2025-05-07T02:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1kgmmk9/how_do_your_ai_agents_interpret_user_input/
mdizak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgmmk9
false
null
t3_1kgmmk9
/r/LocalLLaMA/comments/1kgmmk9/how_do_your_ai_agents_interpret_user_input/
false
false
self
0
null
Qwen3 doesn't like LM Studio on my gaming rig?
1
[removed]
2025-05-07T02:46:09
https://www.reddit.com/r/LocalLLaMA/comments/1kgmxky/qwen3_doesnt_like_lm_studio_on_my_gaming_rig/
Abandoned_Brain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgmxky
false
null
t3_1kgmxky
/r/LocalLLaMA/comments/1kgmxky/qwen3_doesnt_like_lm_studio_on_my_gaming_rig/
false
false
self
1
null
Journey of increasing Pre Processing T/s on DeepSeek Q2_K_XL with ~120GB VRAM and ~140GB RAM (7800X3D, 6000Mhz), from 39 t/s to 66 t/s to 100 t/s to 126 t/s, thanks to PCI-E 5.0 and MLA+FA PR.
54
Hi there guys, hope you're doing okay. I made a post some days ago about my setup and some models: [https://www.reddit.com/r/LocalLLaMA/comments/1kezq68/speed\_metrics\_running\_deepseekv3\_0324qwen3\_235b/](https://www.reddit.com/r/LocalLLaMA/comments/1kezq68/speed_metrics_running_deepseekv3_0324qwen3_235b/) Setup is: * AMD Ryzen 7 7800X3D * 192GB DDR5 6000Mhz at CL30 (overclocked and adjusted resistances to make it stable) * RTX 5090 MSI Vanguard LE SOC, flashed to Gigabyte Aorus Master VBIOS. * RTX 4090 ASUS TUF, flashed to Galax HoF VBIOS. * RTX 4090 Gigabyte Gaming OC, flashed to Galax HoF VBIOS. * RTX A6000 (Ampere) * AM5 MSI Carbon X670E * Running at X8 5.0 (5090) / X8 4.0 (4090) / X4 4.0 (4090) / X4 4.0 (A6000), all from CPU lanes (using M2 to PCI-E adapters) * Fedora 41-42 (believe me, I tried these on Windows and multiGPU is just borked there) So I noticed that GPU 0 (the 4090 at X8 4.0) was getting saturated at 13 GiB/s. As someone suggested in the issues [https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD/discussions/2](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD/discussions/2), his GPU was getting saturated at 26 GiB/s, which is the speed the 5090 reaches at X8 5.0. So the first step to increase model speed was: export CUDA\_VISIBLE\_DEVICES=2,0,1,3 — that is (5090 X8 5.0, 4090 X8 4.0, 4090 X4 4.0, A6000 X4 4.0).
So, first running with `./llama-server -m '/GGUFs/DeepSeek-V3-0324-UD-Q2_K_XL-merged.gguf' -c 32768 --no-mmap --no-warmup -ngl 999 -ot "blk.(0|1|2|3|4|5|6).ffn.=CUDA0" -ot "blk.(7|8|9|10).ffn.=CUDA1" -ot "blk.(11|12|13|14|15).ffn.=CUDA2" -ot "blk.(16|17|18|19|20|21|22|23|24|25).ffn.=CUDA3" -ot "ffn.*=CPU"` I was getting `prompt eval time = 38919.92 ms / 1528 tokens ( 25.47 ms per token, 39.26 tokens per second)` `eval time = 57175.47 ms / 471 tokens ( 121.39 ms per token, 8.24 tokens per second)` Then I did `export CUDA_VISIBLE_DEVICES=2,0,1,3`. With the same command I got `prompt eval time = 49257.75 ms / 3252 tokens ( 15.15 ms per token, 66.02 tokens per second)` `eval time = 46322.14 ms / 436 tokens ( 106.24 ms per token, 9.41 tokens per second)` So a huge increase in performance, thanks to just changing the device that does PP. Keep in mind that the 5090 now gets saturated at 26-27 GiB/s. I tried X16 5.0 but got max 28-29 GiB/s, so I think there is a limit somewhere or it can't use more. Then I was checking PRs and found this one: [https://github.com/ggml-org/llama.cpp/pull/13306](https://github.com/ggml-org/llama.cpp/pull/13306) This PR lets you use MLA (which takes 16K ctx from 80GB to 2GB), and then FA, which reduces the buffer sizes on each GPU from 4.4GB to 400 MB! So, running: `./llama-server -m '/GGUFs/DeepSeek-V3-0324-UD-Q2_K_XL-merged.gguf' -c 32768 --no-mmap --no-warmup -v -ngl 99 --override-tensor 'blk\.([0-7])\..*_exps\.=CUDA0' --override-tensor 'blk\.([8-9]|1[0-1])\..*_exps\.=CUDA1' --override-tensor 'blk\.(1[2-6])\..*_exps\.=CUDA2' --override-tensor 'blk\.(1[7-9]|2[0-6])\..*_exps\.=CUDA3' -fa --override-tensor 'blk\..*_exps\.=CPU' -mg 0 --ubatch-size 1024` I got `prompt eval time = 34965.38 ms / 3565 tokens ( 9.81 ms per token, 101.96 tokens per second)` `eval time = 45389.59 ms / 416 tokens ( 109.11 ms per token, 9.17 tokens per second)` So we have gained about 1 t/s on generation speed, and increased PP performance by 54%.
This uses a bit more VRAM but still comfortably fits 32K, 64K or even 128K context (the GPUs have about 8GB left). Then I went ahead and increased ubatch again, to 1536. Running the same command as above but changing --ubatch-size from 1024 to 1536, I got these speeds: `prompt eval time = 28097.73 ms / 3565 tokens ( 7.88 ms per token, 126.88 tokens per second)` `eval time = 43426.93 ms / 404 tokens ( 107.49 ms per token, 9.30 tokens per second)` **This is a 25.7% increase over -ub 1024, a 92.4% increase over -ub 512, and a 225% increase over -ub 512 at PCI-E X8 4.0.** This makes the model really usable! So now I'm even tempted to test Q3\_K\_XL! Q2\_K\_XL is 250GB and Q3\_K\_XL is 296GB, which should fit in the 320GB of total memory.
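Those `--override-tensor` patterns can be dry-run offline before launching; a small sketch, assuming the `-ot` patterns are applied as a regex search over tensor names with the first match winning:

```python
import re

# The override list from the command above, in order; last-resort CPU rule at the end.
overrides = [
    (r"blk\.([0-7])\..*_exps\.", "CUDA0"),
    (r"blk\.([8-9]|1[0-1])\..*_exps\.", "CUDA1"),
    (r"blk\.(1[2-6])\..*_exps\.", "CUDA2"),
    (r"blk\.(1[7-9]|2[0-6])\..*_exps\.", "CUDA3"),
]

def assign(tensor_name):
    """Return the first device whose pattern matches this tensor name, else CPU."""
    for pattern, device in overrides:
        if re.search(pattern, tensor_name):
            return device
    return "CPU"
```

Checking a few names (e.g. `blk.10.ffn_up_exps.weight` should land on CUDA1, `blk.30.ffn_gate_exps.weight` on CPU) catches alternation typos like `1[0-1]` vs `[10-11]` before a multi-minute model load.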
2025-05-07T02:46:09
https://www.reddit.com/r/LocalLLaMA/comments/1kgmxla/jorney_of_increasing_pre_processing_ts_on/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgmxla
false
null
t3_1kgmxla
/r/LocalLLaMA/comments/1kgmxla/jorney_of_increasing_pre_processing_ts_on/
false
false
self
54
{'enabled': False, 'images': [{'id': 'oWpjfIqblL-7PXkl6mL2Gm8aHkLpgCYMa00ayDkPKlc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?width=108&crop=smart&auto=webp&s=1b4695a38f0b805dde2aefa76495a0975197389e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?width=216&crop=smart&auto=webp&s=d6d38751c25ebedd96cb7056af96163315656b4c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?width=320&crop=smart&auto=webp&s=4c35d0326f6c30009adbe89642aad91d44c4aa43', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?width=640&crop=smart&auto=webp&s=21ad76bf8d3966e9fa414c3500cdcf3aa8b1fe6e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?width=960&crop=smart&auto=webp&s=b769b3f28b6f1bf4f9df3f326c25b6ca3b3bb30a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?width=1080&crop=smart&auto=webp&s=05fff342a568c325402d99325e10b1c6179d134d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ApEmdCyIm-2QftnvAYAszPSMJwbWoSRJWMEYZ07Q0MQ.jpg?auto=webp&s=b3fc217729a74e46e801aa4b432754fd26f3b20b', 'width': 1200}, 'variants': {}}]}
LM Studio and Qwen3 fighting on my PC?
1
[removed]
2025-05-07T02:53:44
https://www.reddit.com/r/LocalLLaMA/comments/1kgn2pj/lm_studio_and_qwen3_fighting_on_my_pc/
Abandoned_Brain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgn2pj
false
null
t3_1kgn2pj
/r/LocalLLaMA/comments/1kgn2pj/lm_studio_and_qwen3_fighting_on_my_pc/
false
false
self
1
null
Is this a good deal?
1
2025-05-07T02:55:20
https://i.imgur.com/EanTzY4.png
LsDmT
i.imgur.com
1970-01-01T00:00:00
0
{}
1kgn3t3
false
null
t3_1kgn3t3
/r/LocalLLaMA/comments/1kgn3t3/is_this_a_good_deal/
false
false
https://b.thumbs.redditm…SebRPE4cezNQ.jpg
1
{'enabled': True, 'images': [{'id': 'Lmagwh1RTApgRpuZ8vBb_YTlhHBXhjohzdMXjl21NZE', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?width=108&crop=smart&auto=webp&s=7f1868521e8e25960d7d563c32354d58bb1f969c', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?width=216&crop=smart&auto=webp&s=e95a211479dd7ef21bb1e6dd79148d7d835d4d7a', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?width=320&crop=smart&auto=webp&s=02543fa68d9a60b2ca6072bfee4d149249da5087', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?width=640&crop=smart&auto=webp&s=ebaf09b716f5fe92a56e66db936c80cdcbae91d3', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?width=960&crop=smart&auto=webp&s=0c373fbc72229f0ad27dbe813df43912114ec309', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?width=1080&crop=smart&auto=webp&s=6c046db7a94ae1f585806357fc37c12fec5e5576', 'width': 1080}], 'source': {'height': 2992, 'url': 'https://external-preview.redd.it/RPd7msRVbhc9OQ_FG2Tl1dryra9ssKhYSHbmRMjYxso.png?auto=webp&s=a8f28ae4d6b0fe295f4ae77ad028420c2eb46242', 'width': 1344}, 'variants': {}}]}
Machine Uprising: Skynet is here
0
2025-05-07T03:47:56
https://v.redd.it/sd3udsms7aze1
Such-Caregiver-3460
v.redd.it
1970-01-01T00:00:00
0
{}
1kgo29e
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/sd3udsms7aze1/DASHPlaylist.mpd?a=1749181690%2CM2NmNGQ2NzkyZmVmMDI2OWRiZmI2NmI4YzlkYjY1NWVhYzIxZGFlMDBhOWE5NWUyNDI2NjA3NGNkZjFjMzZhYQ%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/sd3udsms7aze1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/sd3udsms7aze1/HLSPlaylist.m3u8?a=1749181690%2CZGY4MzI0N2RhOGRhMjcwYmUyNzA4OTcxMDMxYTNiZTVkYmMxMmQ3NTI5MzI2YzQ2NjNlNWM5YTgyMTYwODMyMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sd3udsms7aze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1200}}
t3_1kgo29e
/r/LocalLLaMA/comments/1kgo29e/machine_uprising_skynet_is_here/
false
false
https://external-preview…c8a3b4274cd71b7c
0
{'enabled': False, 'images': [{'id': 'cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?width=108&crop=smart&format=pjpg&auto=webp&s=d8580123cb262516440228c565d305ede57b13f1', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?width=216&crop=smart&format=pjpg&auto=webp&s=7477f7a7ec576973054419dad20b8c219d7cc925', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?width=320&crop=smart&format=pjpg&auto=webp&s=5c56ab416224b10004ed31635d83d14bc78f8e1e', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?width=640&crop=smart&format=pjpg&auto=webp&s=3d35400b55a74b7ef5e1d0e1f1ac5a855192f605', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?width=960&crop=smart&format=pjpg&auto=webp&s=9f1707d75b2bd87d9017e84af8da1f16c0fc572a', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7b52206f7789b4498e76972c2d09f69926250a2a', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cGxvMGFwbXM3YXplMVj5nPlUE8HBIU8MixP2ICN7MA1Rpea9i4pSFfpaqLYG.png?format=pjpg&auto=webp&s=9b78cdc4a8278ebbc3cdaf69b3105c186f5c2625', 'width': 1200}, 'variants': {}}]}
Qwen3-30B-A3B GGUFs MMLU-PRO benchmark comparison - Q6_K / Q5_K_M / Q4_K_M / Q3_K_M
126
**MMLU-PRO 0.25 subset (3003 questions), 0 temp, No Think, Q8 KV Cache**

**Qwen3-30B-A3B-Q6\_K / Q5\_K\_M / Q4\_K\_M / Q3\_K\_M**

The entire benchmark took **10 hours 32 minutes 19 seconds**.

>I wanted to test unsloth dynamic ggufs as well, but ollama still can't run those ggufs properly (and yes, I downloaded v0.6.8); LM Studio can run them but doesn't support batching. So I only tested \_K\_M ggufs.

Result charts:

https://preview.redd.it/n8uisayb8aze1.png?width=445&format=png&auto=webp&s=5e2ef9b9f7f01091787bc58917ea58a7fe07d814

https://preview.redd.it/rlopilhc8aze1.png?width=1123&format=png&auto=webp&s=972522557abddeafa03ea3033ef2f3e05e396038

https://preview.redd.it/sqzkrdkd8aze1.png?width=2003&format=png&auto=webp&s=6f19a8a0d4d6ee9552209ff8da9b0f9f3d51923a

https://preview.redd.it/s35vihde8aze1.png?width=1235&format=png&auto=webp&s=9261ee5820639594e218ed77edff47a3ea4dcb8d

# Q8 KV Cache / No kv cache quant

https://preview.redd.it/te4noxve8aze1.png?width=2005&format=png&auto=webp&s=1c14e380265fe29f0805bf030dada4d452f5e86a

ggufs: [https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF)
2025-05-07T03:56:08
https://www.reddit.com/r/LocalLLaMA/comments/1kgo7d4/qwen330ba3b_ggufs_mmlupro_benchmark_comparison_q6/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgo7d4
false
null
t3_1kgo7d4
/r/LocalLLaMA/comments/1kgo7d4/qwen330ba3b_ggufs_mmlupro_benchmark_comparison_q6/
false
false
https://external-preview…be9e1b5168407e9d
126
{'enabled': False, 'images': [{'id': 'luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=108&crop=smart&auto=webp&s=c2c44c19e8827b309d5c17f1121f09f95308618c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=216&crop=smart&auto=webp&s=1ad77ec4bacf99c62c117da2f1de3d938b6669fa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=320&crop=smart&auto=webp&s=997c98802346f382ff93eccbf5d366273922b997', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=640&crop=smart&auto=webp&s=80203fde524d99b74a2b8e4185b0d45043a2a35e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=960&crop=smart&auto=webp&s=d8794698d5eef590392c6a9f2237f28ae0ac7bdf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=1080&crop=smart&auto=webp&s=53f57c98c1d61543ded3a77bad7b5395840961f7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?auto=webp&s=ac55c5ba50a777c355f89b66991849d3664544c2', 'width': 1200}, 'variants': {}}]}
How to identify whether a model would fit in my RAM?
3
Very straightforward question. I do not have a GPU machine. I usually run LLMs on CPU and have 24GB RAM. The Qwen3-30B-A3B-UD-Q4_K_XL.gguf model has been quite popular these days, with a size of ~18 GB. If we directly compare sizes, the model would fit in my RAM and I should be able to run it. I've not tried running the model yet; I will over the weekend. However, if you are aware of any other factors that should be considered to answer whether it runs smoothly or not, please let me know. Additionally, a similar question I have is about speed. Can I get an approximate tokens/sec number based on model size and CPU specs?
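Both questions have rough back-of-the-envelope answers; a sketch where the overhead numbers are assumptions, not measurements:

```python
def fits_in_ram(gguf_size_gb, ram_gb=24, kv_cache_gb=2, os_overhead_gb=4):
    """Rough fit check: weights + KV cache + headroom for the OS and apps."""
    return gguf_size_gb + kv_cache_gb + os_overhead_gb <= ram_gb

def rough_tokens_per_sec(mem_bandwidth_gbs, active_bytes_per_token_gb):
    """CPU decode is memory-bandwidth-bound: each token streams the active
    weights once, so this is an optimistic upper bound, not a promise."""
    return mem_bandwidth_gbs / active_bytes_per_token_gb
```

For the 30B-A3B at Q4, only ~3B parameters are active per token (roughly 2 GB read per token), so dual-channel DDR5 at ~60 GB/s suggests an upper bound around 30 t/s; real numbers are typically a fair bit lower once prompt processing and cache misses are counted.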
2025-05-07T04:14:08
https://www.reddit.com/r/LocalLLaMA/comments/1kgoiy6/how_to_identify_whether_a_model_would_fit_in_my/
OneCuriousBrain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgoiy6
false
null
t3_1kgoiy6
/r/LocalLLaMA/comments/1kgoiy6/how_to_identify_whether_a_model_would_fit_in_my/
false
false
self
3
null
OpenWebUI sampling settings
14
TLDR: llama.cpp is not affected by ALL OpenWebUI sampling settings. Use console arguments ADDITIONALLY.

In OpenWebUI you can set up an API connection using two options:

* Ollama
* OpenAI API

Also, you can tune model settings on the model page: system prompt, top p, top k, etc.

And I always do the same thing: run the model with llama.cpp, tune the recommended parameters from the UI, and use OpenWebUI as an OpenAI client backed by llama.cpp. And it works fine! I mean, I noticed occasional incoherence in the output here and there, sometimes Chinese and so on. But it's an LLM, it works this way, especially quantized.

But yesterday I was investigating why CUDA is slow with multi-GPU Qwen3 30B-A3B (https://github.com/ggml-org/llama.cpp/issues/13211). I enabled debug output and started playing with console arguments, batch sizes, tensor overrides and so on. And I noticed the generation parameters were different from the OpenWebUI settings.

Long story short, OpenWebUI only sends `top_p` and `temperature` to OpenAI API endpoints. No `top_k`, `min_p` or other settings will be applied to your model from the request. Here is the request body from the llama.cpp logs:

```json
{"stream": true, "model": "qwen3-4b", "messages": [{"role": "system", "content": "/no_think"}, {"role": "user", "content": "I need to invert regex `^blk\\.[0-9]*\\..*(exps).*$`. Write only inverted correct regex. Don't explain anything."}, {"role": "assistant", "content": "`^(?!blk\\.[0-9]*\\..*exps.*$).*$`"}, {"role": "user", "content": "Thanks!"}], "temperature": 0.7, "top_p": 0.8}
```

As you can see, it's TOO OpenAI compatible. This means most model settings in OpenWebUI are just for ollama and will not be applied to OpenAI-compatible providers.

So, if your setup is the same as mine, go and check your sampling parameters: maybe your model is underperforming a bit.
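Since only `top_p` and `temperature` make it through from the UI, one workaround is to bake the remaining samplers into the llama-server launch as server-side defaults; a sketch (the values below are just illustrative, not recommendations):

```shell
# Samplers OpenWebUI won't forward are set here as server defaults;
# temperature/top_p sent per-request by the UI still apply on top.
./llama-server -m qwen3-4b.gguf \
  --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0
```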
2025-05-07T04:38:22
https://www.reddit.com/r/LocalLLaMA/comments/1kgoxmo/openwebui_sampling_settings/
Nepherpitu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgoxmo
false
null
t3_1kgoxmo
/r/LocalLLaMA/comments/1kgoxmo/openwebui_sampling_settings/
false
false
self
14
{'enabled': False, 'images': [{'id': '9xIyKbGcN_7tPJjI-egvqSxoD8pYvMHAk_DpghGZCbw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?width=108&crop=smart&auto=webp&s=ecd5de53842cf0c98eab3006c1132f845b06e10d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?width=216&crop=smart&auto=webp&s=16a33eb81e833e6d43cadd8404574b198cf54074', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?width=320&crop=smart&auto=webp&s=72ed3b1cae45c0e94b4aa4c3846f532679bd9f94', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?width=640&crop=smart&auto=webp&s=2b791c39d70e3aff9d8c98e0418d2aa464e5f4ad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?width=960&crop=smart&auto=webp&s=5ae485fae4ba4309c2338c2611a1d98675885acb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?width=1080&crop=smart&auto=webp&s=33e454f89987a9277556964836df93407e321a6f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0faoc_tJdeSSitlYjEumvIeZ-n0COZD5vtjTnoQwGFk.jpg?auto=webp&s=31dfcf4802075b6b988d509b3beb85630636b62a', 'width': 1200}, 'variants': {}}]}
ik_llama and ktransformers are fast, but they completely break OpenAI style tool calling and structured responses
32
I've been testing local LLM frameworks like **ik\_llama** and **ktransformers** because they offer great performance on large MoE models like Qwen3-235B and DeepSeek-V3-0324 (685B parameters). But there's a serious issue I haven't seen enough people talk about: they break OpenAI-compatible features like tool calling and structured JSON responses.

Even though they expose a `/v1/chat/completions` endpoint and claim OpenAI compatibility, neither `ik_llama` nor `ktransformers` properly handles:

* the `tools` or `functions` field in a request
* emitting valid JSON when a structured response is expected

To work around this, I wrote a local wrapper that:

* intercepts chat completions
* enriches prompts with tool metadata
* parses and transforms the output into OpenAI-compatible responses

This lets me keep using the fast backends while preserving tool calling logic. If anyone else is hitting this issue: how are you solving it? I'm curious whether others are patching the backend, modifying prompts, or intercepting responses like I am.

Happy to share details if people are interested in the wrapper. If you want to make use of my hack, here is the repo: [https://github.com/Teachings/FastAgentAPI](https://github.com/Teachings/FastAgentAPI)

I also did a walkthrough of how to set it up: [https://www.youtube.com/watch?v=JGo9HfkzAmc](https://www.youtube.com/watch?v=JGo9HfkzAmc)
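If it helps, the prompt-enrichment part is conceptually just this (a simplified sketch, not the actual repo code; the function name is made up): move the OpenAI-style tool specs out of the ignored `tools` field and into the system prompt before forwarding the request to the backend.

```python
import json

def enrich_with_tools(request: dict) -> dict:
    """Fold OpenAI-style tool specs into the system prompt, since the
    backend ignores the `tools` field entirely."""
    tools = request.pop("tools", None)
    if not tools:
        return request
    tool_text = (
        "You can call these tools. Reply ONLY with JSON "
        '{"name": ..., "arguments": {...}} when you use one:\n'
        + json.dumps(tools, indent=2)
    )
    messages = request.get("messages", [])
    # Extend an existing system message, or prepend a new one.
    if messages and messages[0].get("role") == "system":
        messages[0]["content"] += "\n\n" + tool_text
    else:
        messages.insert(0, {"role": "system", "content": tool_text})
    request["messages"] = messages
    return request
```

The reverse direction (parsing the model's JSON reply back into an OpenAI `tool_calls` response) is the fiddlier half, but it's the same idea in reverse.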
2025-05-07T05:35:46
https://www.reddit.com/r/LocalLLaMA/comments/1kgpujo/ik_llama_and_ktransformers_are_fast_but_they/
texasdude11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgpujo
false
null
t3_1kgpujo
/r/LocalLLaMA/comments/1kgpujo/ik_llama_and_ktransformers_are_fast_but_they/
false
false
self
32
{'enabled': False, 'images': [{'id': 'JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?width=108&crop=smart&auto=webp&s=311030e02000aceb23b8ec88d4935ae15b46b783', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?width=216&crop=smart&auto=webp&s=82e91904d3da0d291e651c981fc214b842411a99', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?width=320&crop=smart&auto=webp&s=c6dc9d611114172b4b737ce09ec6519e83fef140', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?width=640&crop=smart&auto=webp&s=f868c1d15af1b7c45d3452bd6bd8bf02e6780392', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?width=960&crop=smart&auto=webp&s=c43a4de7d53a0ac93088e68cf9c4ce4c333104be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?width=1080&crop=smart&auto=webp&s=9eec8b48d5d16392df7dc8634fb3fb159e19ad50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JNqLiHVnp6uXvXevZq1VE6irCg8dwOhrAxsuPigTO5U.png?auto=webp&s=5f584edd193e09011e6cbea0dd6631287dc16888', 'width': 1200}, 'variants': {}}]}
Help needed — running mlx models with tool calling / jinja templates
0
Recently I've been experimenting with mlx models in my local environment. As a starting point, I have been using mlx_lm.server to serve HF models; however, I notice that it fails to properly format LLM responses into an OpenAI-wrapped API response (tool calls, etc). I have overridden the chat template with the model's recommended jinja format, but to no avail. Any resources you folks could point me to? Thanks in advance.
2025-05-07T06:14:29
https://www.reddit.com/r/LocalLLaMA/comments/1kgqfc2/help_needed_running_mlx_models_with_tool_calling/
sunpazed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgqfc2
false
null
t3_1kgqfc2
/r/LocalLLaMA/comments/1kgqfc2/help_needed_running_mlx_models_with_tool_calling/
false
false
self
0
null
zero phantom cloud tax, zero dollar debugging agent munchkin
13
qwen3 30B straight rizzen but i wanted it to rizz my errors, so been tweaking on building cloi - local debugging agent that runs in your terminal the setup deadass simple af, cloi catches your error tracebacks, spins up your local LLM (zero api keys, absolutely no cloud tax), and only with consent (we not crossing boundaries frfr), yeets some clean af patches straight to your files. last time i posted, y'all went absolutely unhinged and starred my project 212 times in 4 days, iykyk. got me hitting that dopamine like it's on demon time. just dropped some new patches while on this hopium; cloi now rizzes with whatever model you got on ollama - literally plug and slay. it's an open source vibe check so feel free to roast it: [https://github.com/cloi-ai/cloi](https://github.com/cloi-ai/cloi) p.s. skibidi toilet fr (not /s)
2025-05-07T06:44:13
https://i.redd.it/rmtqu32x2bze1.gif
AntelopeEntire9191
i.redd.it
1970-01-01T00:00:00
0
{}
1kgqv23
false
null
t3_1kgqv23
/r/LocalLLaMA/comments/1kgqv23/zero_phantom_cloud_tax_zero_dollar_debugging/
false
false
https://external-preview…ddbcbce7e29dd49d
13
{'enabled': True, 'images': [{'id': 'QbmErHIaoe_N3F9vrYqiLhr76Nd9BYsBwKJH_hedL_I', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=108&crop=smart&format=png8&s=dfa20cc051ba0b76cd19016cf6505da03892c8cc', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=216&crop=smart&format=png8&s=c59606415a1bd3bfcce22811ba05b509e08892c8', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=320&crop=smart&format=png8&s=fb4a1cc3e56d0244ae61322fc123571e30ca4dd5', 'width': 320}, {'height': 414, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=640&crop=smart&format=png8&s=1f3002828db7530086fe918209b02c50d71b32cd', 'width': 640}, {'height': 621, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=960&crop=smart&format=png8&s=08a254d06eeea86fcdf3d0825c4bd7e9bc16285b', 'width': 960}, {'height': 699, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=1080&crop=smart&format=png8&s=5872be49616663d36f37e63e2ae27790b1fdab44', 'width': 1080}], 'source': {'height': 912, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?format=png8&s=5c36ee8efba912f5133cd39e18c3c307b0d4934b', 'width': 1408}, 'variants': {'gif': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=108&crop=smart&s=51a7a3a5df040fee671c9c24e06aeddd00f3e0b8', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=216&crop=smart&s=629d260be7d376f0905f94bcd07fb9ddbd2d435c', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=320&crop=smart&s=2a7b235ff845e2d43633c1ce735fe5d1b9a7e6c3', 'width': 320}, {'height': 414, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=640&crop=smart&s=051f74843904faa237fc8f88717302116905bc2c', 'width': 640}, {'height': 621, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=960&crop=smart&s=b018b4fc6f2221894371451b02bb945e4853f18e', 'width': 960}, {'height': 699, 'url': 
'https://preview.redd.it/rmtqu32x2bze1.gif?width=1080&crop=smart&s=56fa3fe472781d21f3069c564d9b89ca96fb684c', 'width': 1080}], 'source': {'height': 912, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?s=26362e868432c85af4ef8ed843e8c2100bb68b7b', 'width': 1408}}, 'mp4': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=108&format=mp4&s=ef31dd2c56dee97153e847fa6794f01a5cb7139d', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=216&format=mp4&s=1eb5e95f5862e8908350003ff1e711a39597cb70', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=320&format=mp4&s=005a65051a2508832540e7876b199b6ee3a8bf8c', 'width': 320}, {'height': 414, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=640&format=mp4&s=b87a647130b59955a34a667dde47b7a48151a3ad', 'width': 640}, {'height': 621, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=960&format=mp4&s=32892b6490367b78f8d3f34c956736bde6027083', 'width': 960}, {'height': 699, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?width=1080&format=mp4&s=beb2fddac43b427518b203a204449ef69d4c5ebf', 'width': 1080}], 'source': {'height': 912, 'url': 'https://preview.redd.it/rmtqu32x2bze1.gif?format=mp4&s=d43f9d8178e792056755fa34c4fb8790457a8b74', 'width': 1408}}}}]}
Qwen3-235B-A22B and Qwen3-14B rank 2nd and 4th on Kagi’s LLM benchmark
36
2025-05-07T06:45:56
https://help.kagi.com/kagi/ai/llm-benchmark.html
Shamp0oo
help.kagi.com
1970-01-01T00:00:00
0
{}
1kgqw08
false
null
t3_1kgqw08
/r/LocalLLaMA/comments/1kgqw08/qwen3235ba22b_and_qwen314b_rank_2nd_and_4th_on/
false
false
https://b.thumbs.redditm…HzCnz-tS_B5Y.jpg
36
{'enabled': False, 'images': [{'id': '6uRWmaMEzCnEZGPfPVUw7iEewMtmhc0nN-VxRGp3fFU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?width=108&crop=smart&auto=webp&s=c3d19f8772b433096d0b6464ede9fee80d923227', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?width=216&crop=smart&auto=webp&s=ae592cde88328f5ec90f5543fe799384e4e2c0ac', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?width=320&crop=smart&auto=webp&s=f7ca78af89fc7c7e60b78c644b0c5f79d64a8a0d', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?width=640&crop=smart&auto=webp&s=0ad6cf9e97cf2006fdfe2b75e079d91ccb83a824', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?width=960&crop=smart&auto=webp&s=85e33240531820bf4b57f157b0537954519bb50d', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?width=1080&crop=smart&auto=webp&s=6ac56b9403db84335d8120158ecf77df403246dc', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/Hw7kJv_JneYm55PcK7LkdRzAPhO5o4gr4OcKc4__3Nw.jpg?auto=webp&s=f0bbdefa8aebc0e4021d7c967805255e31157aee', 'width': 1200}, 'variants': {}}]}
Self-improving AI unlocked?
233
**Absolute Zero: Reinforced Self-play Reasoning with Zero Data**

Abstract:

> Reinforcement learning with verifiable rewards (RLVR) has shown promise in enhancing the reasoning capabilities of large language models by learning directly from outcome-based rewards. **Recent RLVR works that operate under the zero setting avoid supervision in labeling the reasoning process, but still depend on manually curated collections of questions and answers for training. The scarcity of high-quality, human-produced examples raises concerns about the long-term scalability of relying on human supervision**, a challenge already evident in the domain of language model pretraining. Furthermore, in a hypothetical future where AI surpasses human intelligence, tasks provided by humans may offer limited learning potential for a superintelligent system. To address these concerns, **we propose a new RLVR paradigm called Absolute Zero, in which a single model learns to propose tasks that maximize its own learning progress and improves reasoning by solving them, without relying on any external data. Under this paradigm, we introduce the Absolute Zero Reasoner (AZR), a system that self-evolves its training curriculum and reasoning ability by using a code executor to both validate proposed code reasoning tasks and verify answers, serving as a unified source of verifiable reward to guide open-ended yet grounded learning. Despite being trained entirely without external data, AZR achieves overall SOTA performance on coding and mathematical reasoning tasks, outperforming existing zero-setting models that rely on tens of thousands of in-domain human-curated examples**. Furthermore, we demonstrate that AZR can be effectively applied across different model scales and is compatible with various model classes.
[Paper](https://arxiv.org/pdf/2505.03335) [Thread](https://x.com/AndrewZ45732491/status/1919920459748909288) [GitHub](https://github.com/LeapLabTHU/Absolute-Zero-Reasoner) [Hugging Face](https://huggingface.co/papers/2505.03335)
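To make the idea concrete, here's a toy sketch of the propose-solve-verify loop (my own simplification, not the paper's code): in the real system, both the proposer and the solver are the same model, while a code executor grounds the reward. Here they're stubs.

```python
import random

def code_executor(program: str, inp):
    """Ground-truth verifier: actually run the proposed program."""
    scope = {}
    exec(program, scope)
    return scope["f"](inp)

def propose_task(rng):
    # In AZR the proposer is the model itself; this stub just emits
    # a random small function plus an input for it.
    k = rng.randint(1, 5)
    return f"def f(x):\n    return x + {k}", rng.randint(0, 9)

def solve(program, inp):
    # In AZR the solver is also the model; this stub cheats and executes.
    return code_executor(program, inp)

rng = random.Random(0)
reward = 0
for _ in range(10):
    prog, x = propose_task(rng)
    answer = solve(prog, x)
    reward += int(answer == code_executor(prog, x))  # verifiable reward
print(reward)  # 10, since the stub solver is perfect
```

The interesting part of the paper is what replaces the stubs: the proposer is rewarded for tasks at the edge of the solver's ability, so the curriculum self-evolves.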
2025-05-07T07:13:24
https://www.reddit.com/r/LocalLLaMA/comments/1kgrab2/selfimproving_ai_unlocked/
FeathersOfTheArrow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgrab2
false
null
t3_1kgrab2
/r/LocalLLaMA/comments/1kgrab2/selfimproving_ai_unlocked/
false
false
self
233
null
GMK EVO-X2 Ryzen AI Max+ 395 Mini PC review: Qwen3 235B Unsloth Q2_K 14.71 tokens/s, 235B Q3_K_S 10.51 tokens/s
1
[removed]
2025-05-07T07:13:28
https://www.reddit.com/r/LocalLLaMA/comments/1kgracd/gmk_evox2_ryzen_ai_max_395_mini_pc_review_qwen3/
NZT33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgracd
false
null
t3_1kgracd
/r/LocalLLaMA/comments/1kgracd/gmk_evox2_ryzen_ai_max_395_mini_pc_review_qwen3/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=108&crop=smart&auto=webp&s=22b958fe376f7e1c3411beeeb6d117ae190a2ee0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=216&crop=smart&auto=webp&s=4a09b8f2a4209bdd6fd3daf8cbe00bb30e20291a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=320&crop=smart&auto=webp&s=fe66170dbcf8dbf0a94a369c7b2b8dc1edf4c0f6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?auto=webp&s=3d90d6a76784db8069aadc76c3685ebb14858a1c', 'width': 480}, 'variants': {}}]}
GMK EVO-X2 Ryzen AI Max+ 395 Mini PC review: Qwen3 235B Unsloth Q2_K 14.71 tokens/s, 235B Q3_K_S 10.51 tokens/s
1
2025-05-07T07:15:59
https://www.youtube.com/watch?v=UXjg6Iew9lg
NZT33
youtube.com
1970-01-01T00:00:00
0
{}
1kgrblq
false
{'oembed': {'author_name': 'jack stone', 'author_url': 'https://www.youtube.com/@jackstone', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/UXjg6Iew9lg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="满血Qwen3 235B本地部署15t/s!Stable Diffusion 3.5 Large文生图本地部署!128G内存8060S最强核显!极摩客EVO-X2 AI Max+ 395迷你主机评测!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/UXjg6Iew9lg/hqdefault.jpg', 'thumbnail_width': 480, 'title': '满血Qwen3 235B本地部署15t/s!Stable Diffusion 3.5 Large文生图本地部署!128G内存8060S最强核显!极摩客EVO-X2 AI Max+ 395迷你主机评测!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kgrblq
/r/LocalLLaMA/comments/1kgrblq/gmk_evox2_ryzen_ai_max_395_mini_pc_review_qwen3/
false
false
https://b.thumbs.redditm…Cqv18idP4kZM.jpg
1
{'enabled': False, 'images': [{'id': 'IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=108&crop=smart&auto=webp&s=22b958fe376f7e1c3411beeeb6d117ae190a2ee0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=216&crop=smart&auto=webp&s=4a09b8f2a4209bdd6fd3daf8cbe00bb30e20291a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=320&crop=smart&auto=webp&s=fe66170dbcf8dbf0a94a369c7b2b8dc1edf4c0f6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?auto=webp&s=3d90d6a76784db8069aadc76c3685ebb14858a1c', 'width': 480}, 'variants': {}}]}
Reduced GenAI Backend Dev Time by 30-40% with Strapi: Sharing Our Initial Findings
0
We've been developing AI solutions and wanted to share a significant efficiency gain we've experienced using Strapi for our backend infrastructure, specifically for Generative AI projects. The key outcome has been a **reduction in admin and backend development/management time by an estimated 30%.** This has allowed us to allocate more resources towards core AI development and accelerate our project timelines. We found this quite impactful and thought it might be a useful insight for others in the community. Strapi offers a really solid foundation for GenAI platforms, though you might need to tweak some of the logic depending on your specific use case. It's definitely proven to be a powerful accelerator for us.
2025-05-07T07:16:02
https://www.reddit.com/r/LocalLLaMA/comments/1kgrbmy/reduced_genai_backend_dev_time_by_3040_with/
No-Reindeer-9968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgrbmy
false
null
t3_1kgrbmy
/r/LocalLLaMA/comments/1kgrbmy/reduced_genai_backend_dev_time_by_3040_with/
false
false
self
0
null
Best Open-Source Model for Summarizing SQL Query Results – Currently Trying Qwen3 30B A3B
1
2025-05-07T07:19:05
https://ollama.com/library/qwen3
Appropriate_Bus_989
ollama.com
1970-01-01T00:00:00
0
{}
1kgrd4r
false
null
t3_1kgrd4r
/r/LocalLLaMA/comments/1kgrd4r/best_opensource_model_for_summarizing_sql_query/
false
false
https://external-preview…e75bc00f189acaf8
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]}
Best Open-Source Model for Summarizing SQL Query Results – Currently Trying Qwen3 30B A3B
1
[removed]
2025-05-07T07:20:19
https://www.reddit.com/r/LocalLLaMA/comments/1kgrdq0/best_opensource_model_for_summarizing_sql_query/
Appropriate_Bus_989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgrdq0
false
null
t3_1kgrdq0
/r/LocalLLaMA/comments/1kgrdq0/best_opensource_model_for_summarizing_sql_query/
false
false
self
1
null
How far away is it from LLM empowering various industries?
0
Now we see LLMs getting progressively stronger relative to people, but if you go out and experience the world, you can't seem to find LLMs anywhere. **What do you all think LLMs' biggest impact on the world will be?**
2025-05-07T07:27:42
https://www.reddit.com/r/LocalLLaMA/comments/1kgrhe3/how_far_away_is_it_from_llm_empowering_various/
EducationalOwl6246
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgrhe3
false
null
t3_1kgrhe3
/r/LocalLLaMA/comments/1kgrhe3/how_far_away_is_it_from_llm_empowering_various/
false
false
self
0
null
New ""Open-Source"" Video generation model
702
LTX-Video is the first DiT-based video generation model that can generate high-quality videos in *real time*. It can generate 30 FPS videos at 1216×704 resolution, faster than it takes to watch them. The model is trained on a large-scale dataset of diverse videos and can generate high-resolution videos with realistic and diverse content.

The model supports text-to-image, image-to-video, keyframe-based animation, video extension (both forward and backward), video-to-video transformations, and any combination of these features.

To be honest, I don't view it as open-source, not even open-weight. The license is weird, not a license we know of, and there are "Use Restrictions". Because of that, it is NOT open-source. Yes, the restrictions are honest, and I invite you to read them ([here is an example](https://static.lightricks.com/legal/LTXV-13b-0.9.7-dev.pdf)), but I think they're just doing this to protect themselves.

GitHub: [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video)

HF: [https://huggingface.co/Lightricks/LTX-Video](https://huggingface.co/Lightricks/LTX-Video) (FP8 coming soon)

Documentation: [https://www.lightricks.com/ltxv-documentation](https://www.lightricks.com/ltxv-documentation)

Tweet: [https://x.com/LTXStudio/status/1919751150888239374](https://x.com/LTXStudio/status/1919751150888239374)
2025-05-07T07:32:32
https://v.redd.it/i4ioviud9bze1
topiga
v.redd.it
1970-01-01T00:00:00
0
{}
1kgrjor
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/i4ioviud9bze1/DASHPlaylist.mpd?a=1749195168%2CYzc4MmZmNWRmNWVlNTc4NjhlMzU4MTk3ODkxYjE5ZDA4NTliZjI1NzM0NWYxM2UyNGUxMzE1YTZkYTQ3NDcxMg%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/i4ioviud9bze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/i4ioviud9bze1/HLSPlaylist.m3u8?a=1749195168%2CYjhiZWUwNzI4OThhODNiNjQwODgyZTY1NzkxMjYxNTE4M2I4YTZmMGQwZDVkNzY5NTFhOTU5Zjg5YzQzOGM3ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/i4ioviud9bze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kgrjor
/r/LocalLLaMA/comments/1kgrjor/new_opensource_video_generation_model/
false
false
https://external-preview…0fe3e2f6283bdbad
702
{'enabled': False, 'images': [{'id': 'ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=108&crop=smart&format=pjpg&auto=webp&s=e8a9d285663aa023b3d8290430bcd36c8e28ca34', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=216&crop=smart&format=pjpg&auto=webp&s=c9c2aba9bab35df60cacc8c27dfcd5163d469cef', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=320&crop=smart&format=pjpg&auto=webp&s=e859e05ea2d6c7d3c88572dd350b956298a76ddb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=640&crop=smart&format=pjpg&auto=webp&s=b42dc5c42831896deeef51e12ac9e19e4221790d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=960&crop=smart&format=pjpg&auto=webp&s=82156b671b7370a75591129e8f7c93c772f487e3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2bc21d54ffc8ac5963847a981215a8160b3fd354', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZHdlOHlodmQ5YnplMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?format=pjpg&auto=webp&s=4a3fd18e7081b905129dc10c621bbae15d0aa39e', 'width': 1920}, 'variants': {}}]}
تتن
1
[removed]
2025-05-07T07:33:24
https://www.reddit.com/r/LocalLLaMA/comments/1kgrk36/تتن/
bilaljomaa07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgrk36
false
null
t3_1kgrk36
/r/LocalLLaMA/comments/1kgrk36/تتن/
false
false
self
1
null
I wrote a basic multimodal (Image and Text) agentic layer for my custom finetuned model
0
I had trouble integrating LangGraph with my custom fine-tuned Llama-3.2-11B-Vision-Instruct model, so I wrote a simple (and probably buggy) multimodal agentic layer from scratch. Here is a link to my Kaggle notebook - [link](https://www.kaggle.com/code/pranavupadhyaya/notebook9de6b64a65). Currently it runs agents only serially, as I don't know if I can use multiprocessing on Kaggle. Please give your feedback and any changes I can implement; it is appreciated.
2025-05-07T08:05:20
https://www.reddit.com/r/LocalLLaMA/comments/1kgrzjo/i_wrote_a_basic_multimodal_image_and_text_agentic/
ConfectionAfter2366
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgrzjo
false
null
t3_1kgrzjo
/r/LocalLLaMA/comments/1kgrzjo/i_wrote_a_basic_multimodal_image_and_text_agentic/
false
false
self
0
null
3090+3060+3060 llama.cpp benchmarks / tips
41
**Building LocalLlama Machine – Episode 3: Performance Optimizations**

In the previous episode, I had all three GPUs mounted directly in the motherboard slots. Now, I've **moved one 3090 onto a riser** to make it a bit happier. Let's use this setup for benchmarking.

Some people ask whether it's OK to mix different GPUs; in this tutorial, I'll explain how to handle that.

First, let's try some smaller models. In the first screenshot, you can see the results for Qwen3 8B and Qwen3 14B. These models are small enough to fit entirely inside a 3090, so the 3060s are not needed. If we disable them, we see a performance boost: from **48 to 82** tokens per second, and from **28 to 48**.

Next, we switch to **Qwen3 32B**. This model is larger, and to run it in Q8 you need more than a single 3090. However, in `llama.cpp` we can control how the tensors are split. For example, we can allocate more memory on the first card and less on the second and third. These values are discovered experimentally for each model, so your optimal settings may vary. If the values are incorrect, the model won't load; for instance, it might try to allocate 26GB on a 24GB GPU. We can improve performance from the default **13.0** tokens per second to **15.6** by adjusting the tensor split. Furthermore, we can go even higher, to **16.4 tokens per second**, by using the "row" split mode. This mode was broken in `llama.cpp` until recently, so make sure you're using the latest version of the code.

Now let's try **Nemotron 49B**. I really like this model, though I can't run it fully in Q8 yet; that's a good excuse to buy another 3090! For now, let's use Q6. With some tuning, we can go **from 12.4 to 14.1 tokens per second**. Not bad.

Then we move on to a 70B model. I'm using **DeepSeek-R1-Distill-Llama-70B** in Q4. We start at **10.3** tokens per second and improve to **12.1**.

**Gemma3 27B** is a different case. With optimized tensor-split values, we boost performance from 14.9 to **18.9 tokens per second**. However, using `-sm row` mode slightly decreases the speed, to 18.5.

Finally, we see similar behavior with **Mistral Small 24B** (why is it called Llama 13B?). Performance goes from 18.8 to **28.2 tokens per second** with tensor split, but again, `-sm row` mode reduces it slightly, to 26.1.

So, you'll need to experiment with your favorite models and your specific setup, but now you know the direction to take on your journey. Good luck!
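For anyone who wants to try: the relevant llama.cpp flags look roughly like this (flag names from recent llama.cpp builds; check `--help` on yours). The model file name and split proportions are examples only, and the proportions are exactly the part you tune experimentally per model:

```shell
# Bias more of the model onto the 3090 (device 0) than the two 3060s.
# --tensor-split takes per-GPU proportions; --split-mode row enables row split.
./llama-server -m Qwen3-32B-Q8_0.gguf \
    --n-gpu-layers 99 \
    --tensor-split 24,12,12 \
    --split-mode row
```

If the model fails to load with "out of memory"-style errors, shift the proportions until every allocation fits.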
2025-05-07T08:10:14
https://www.reddit.com/gallery/1kgs1z7
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1kgs1z7
false
null
t3_1kgs1z7
/r/LocalLLaMA/comments/1kgs1z7/309030603060_llamacpp_benchmarks_tips/
false
false
default
41
null
super micro 7048
0
Quick question about the Supermicro 7048 setup with 2 RTX 3090 cards. Do you think it’ll handle AI tasks well? my use case is family of 8 and have a small business (no image generation). I’m also curious about the CPU support, cooling needs, and if you think the performance of 40-70 tokens/s up to 1000 tokens/s is realistic for this setup. Thanks!
2025-05-07T08:15:07
https://www.reddit.com/r/LocalLLaMA/comments/1kgs4dj/super_micro_7048/
AfraidScheme433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgs4dj
false
null
t3_1kgs4dj
/r/LocalLLaMA/comments/1kgs4dj/super_micro_7048/
false
false
self
0
null
Best small ollama model for big context?
1
[removed]
2025-05-07T08:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1kgs5gh/best_small_ollama_model_for_big_context/
LabEnvironmental4874
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgs5gh
false
null
t3_1kgs5gh
/r/LocalLLaMA/comments/1kgs5gh/best_small_ollama_model_for_big_context/
false
false
self
1
null
AI Model Training: Where Does Your Data Really Come From? (And Why It Matters)
1
[removed]
2025-05-07T08:30:16
https://www.reddit.com/r/LocalLLaMA/comments/1kgsbll/ai_model_training_where_does_your_data_really/
crm_path_finder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgsbll
false
null
t3_1kgsbll
/r/LocalLLaMA/comments/1kgsbll/ai_model_training_where_does_your_data_really/
false
false
self
1
null
AI Workstation for €15,000
1
[removed]
2025-05-07T08:59:43
https://www.reddit.com/r/LocalLLaMA/comments/1kgspm6/ai_workstation_for_15000/
LilJockel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgspm6
false
null
t3_1kgspm6
/r/LocalLLaMA/comments/1kgspm6/ai_workstation_for_15000/
false
false
self
1
null
What's the landscape for 70B-ish sized LLMs right now? Any notable ones?
1
[removed]
2025-05-07T08:59:47
https://www.reddit.com/r/LocalLLaMA/comments/1kgspn5/whats_the_landscape_for_70bish_sized_llms_right/
NewspaperFormal5330
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgspn5
false
null
t3_1kgspn5
/r/LocalLLaMA/comments/1kgspn5/whats_the_landscape_for_70bish_sized_llms_right/
false
false
self
1
null
Any one test Suna? Seems awesome and its open source!
0
About to set this up locally. It seems to work with local LLMs too, via [LiteLLM](https://github.com/BerriAI/litellm): [https://github.com/kortix-ai/suna](https://github.com/kortix-ai/suna)
2025-05-07T09:09:04
https://www.reddit.com/r/LocalLLaMA/comments/1kgsuci/any_one_test_suna_seems_awesome_and_its_open/
Fit_Voice_3842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgsuci
false
null
t3_1kgsuci
/r/LocalLLaMA/comments/1kgsuci/any_one_test_suna_seems_awesome_and_its_open/
false
false
self
0
{'enabled': False, 'images': [{'id': 'K4JIjhsxEolnTdfKbrVPm4EDuWSKS4KGEczgQ_cEopk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?width=108&crop=smart&auto=webp&s=162917fa1fedb6348dbf1203d52ff775ca90317f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?width=216&crop=smart&auto=webp&s=124b3a81872a6e3c195be828ee9e6a708d1df7b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?width=320&crop=smart&auto=webp&s=9409e9551134c991a858599b51be2594c0aec918', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?width=640&crop=smart&auto=webp&s=c18ddc0a0fa35add192bf575591d236afd4febc9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?width=960&crop=smart&auto=webp&s=0e21b32040c6d7d0993ddb42f6a8deb7e02824a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?width=1080&crop=smart&auto=webp&s=335012c144b185219e7f66648c4b9092facf548a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sjB6G69LXpk7_CX3dQobOqLHDVjVI0wsXa2FwYrmbuk.jpg?auto=webp&s=f9e4d96cf6caa3f6c3b61cc59571666b2caccf11', 'width': 1200}, 'variants': {}}]}
Local LLM Build – €15,000 Budget for Max AI Power (Multi-GPU or Single Rig?)
1
[removed]
2025-05-07T09:13:59
https://www.reddit.com/r/LocalLLaMA/comments/1kgswrr/local_llm_build_15000_budget_for_max_ai_power/
LilJockel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgswrr
false
null
t3_1kgswrr
/r/LocalLLaMA/comments/1kgswrr/local_llm_build_15000_budget_for_max_ai_power/
false
false
self
1
null
nanoVLM: A minimal Vision-Language Model with a LLaMA-style decoder — now open source
164
Hey all — we just open-sourced **nanoVLM**, a lightweight Vision-Language Model (VLM) built from scratch in **pure PyTorch**, with a **LLaMA-style decoder**. It's designed to be simple, hackable, and easy to train — the full model is just \~750 lines of code. Why it's interesting: * Achieves **35.3% on MMStar** with only **6 hours of training on a single H100,** matching SmolVLM-256M performance — but using 100x fewer GPU hours. * Can be trained in a **free Google Colab notebook** * Great for learning, prototyping, or building your own VLMs Architecture: * Vision encoder: **SigLiP-ViT** * Language decoder: **LLaMA-style** * Modality projector connecting the two Inspired by nanoGPT, this is like the VLM version — compact and easy to understand. Would love to see someone try running this on local hardware or mixing it with other projects. Repo: [https://github.com/huggingface/nanoVLM](https://github.com/huggingface/nanoVLM)
2025-05-07T09:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1kgt8m5/nanovlm_a_minimal_visionlanguage_model_with_a/
zKingFrist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgt8m5
false
null
t3_1kgt8m5
/r/LocalLLaMA/comments/1kgt8m5/nanovlm_a_minimal_visionlanguage_model_with_a/
false
false
self
164
{'enabled': False, 'images': [{'id': '3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?width=108&crop=smart&auto=webp&s=ec8443e3076da522bf23469fe699c314ad3e46de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?width=216&crop=smart&auto=webp&s=f8001f48b64ca317e0a7933a7bd0387593e6d6d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?width=320&crop=smart&auto=webp&s=d5097656e27d8f1d09a9aeb3df3b86bb93e347a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?width=640&crop=smart&auto=webp&s=3ead1dddda54d05c5c1809f1201a9efc2d0e90c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?width=960&crop=smart&auto=webp&s=76264b71d346094269a095a472c26e75fc126298', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?width=1080&crop=smart&auto=webp&s=01ebfeecfae969b279e2d90e3053e9a346263f10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3tG83a7uuOanZQddx3Krr8lDkwg82IGw7kMOnfVVgO4.png?auto=webp&s=66f251921627c917d8c13fa560d3c368915f250f', 'width': 1200}, 'variants': {}}]}
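The nanoVLM post above describes a ViT-style vision encoder feeding a LLaMA-style decoder through a modality projector. A rough back-of-the-envelope sketch of that bridge (all numbers are illustrative assumptions, not taken from the repo):

```python
# Illustrative sketch (not the nanoVLM code): how many image tokens a
# ViT-style encoder hands to the modality projector, and how large a
# single linear projector between the two widths would be.

def num_patch_tokens(image_size: int, patch_size: int) -> int:
    """Non-overlapping patches a ViT produces per square image."""
    assert image_size % patch_size == 0
    return (image_size // patch_size) ** 2

def projector_params(vision_dim: int, decoder_dim: int) -> int:
    """Parameter count of one linear modality projector (weights + bias)."""
    return vision_dim * decoder_dim + decoder_dim

tokens = num_patch_tokens(224, 16)    # 196 patch tokens per 224x224 image
params = projector_params(768, 576)   # hypothetical SigLiP width -> decoder width
print(tokens, params)                 # -> 196 442944
```

Each of those 196 patch embeddings is projected into the decoder's hidden size and prepended to the text tokens, which is why the projector can stay this small relative to the encoder and decoder.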
lmarena..
1
[removed]
2025-05-07T09:41:03
https://www.reddit.com/r/LocalLLaMA/comments/1kgta4t/lmarena/
Nors1k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgta4t
false
null
t3_1kgta4t
/r/LocalLLaMA/comments/1kgta4t/lmarena/
false
false
self
1
null
lmarena?
1
[removed]
2025-05-07T09:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1kgtb49/lmarena/
Nors1k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgtb49
false
null
t3_1kgtb49
/r/LocalLLaMA/comments/1kgtb49/lmarena/
false
false
self
1
null
lmarena..
1
[removed]
2025-05-07T10:01:57
https://www.reddit.com/r/LocalLLaMA/comments/1kgtkyg/lmarena/
Nors1k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgtkyg
false
null
t3_1kgtkyg
/r/LocalLLaMA/comments/1kgtkyg/lmarena/
false
false
self
1
null
FreedomAI
1
[removed]
2025-05-07T10:21:35
https://www.reddit.com/r/LocalLLaMA/comments/1kgtvrj/freedomai/
Vxrtu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kgtvrj
false
null
t3_1kgtvrj
/r/LocalLLaMA/comments/1kgtvrj/freedomai/
false
false
self
1
null
Show LocalLLaMA: Why not an open-source distillation repo for all LLMs?
1
[removed]
2025-05-07T10:29:01
https://github.com/agokrani/distillKitPlus
ANAGDKP
github.com
1970-01-01T00:00:00
0
{}
1kgtzy7
false
null
t3_1kgtzy7
/r/LocalLLaMA/comments/1kgtzy7/show_localllama_why_not_a_open_source/
false
false
https://a.thumbs.redditm…EUju3KQwmO28.jpg
1
{'enabled': False, 'images': [{'id': 'PYymGJLuCsA51MDdPGHSVguJeZ189kTNcHhLb5eqCkg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?width=108&crop=smart&auto=webp&s=e7b646d4cc9cc08a89188d546bb89364146f90e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?width=216&crop=smart&auto=webp&s=ea4057af6f194b2d9d038b77bcf05893ab3cdb87', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?width=320&crop=smart&auto=webp&s=b472410b0eeea508ed17c0707d537be232b2a463', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?width=640&crop=smart&auto=webp&s=ab83efd794a7b3a58b9feec1a9918b502a0aaf7b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?width=960&crop=smart&auto=webp&s=a9dc4261a88b95a50d15a59809c3c7fb653f941b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?width=1080&crop=smart&auto=webp&s=9d2c292820ddb59e1faf368b2c45a20b6782a81d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Nh8KfD6B2KeowptcM1hogK7xfkKHYA8XR7xPF4AfKis.jpg?auto=webp&s=ef933870a8e14526f8032dc4658271d51f9ce662', 'width': 1200}, 'variants': {}}]}