title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Llama4 support is merged into llama.cpp!
| 129 | 2025-04-07T21:08:20 |
https://github.com/ggml-org/llama.cpp/pull/12791
|
Master-Meal-77
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtweei
| false | null |
t3_1jtweei
|
/r/LocalLLaMA/comments/1jtweei/llama4_support_is_merged_into_llamacpp/
| false | false | 129 |
{'enabled': False, 'images': [{'id': 'kiwFh7xFLXKE2h-mouJTZDxTIiNBzZwnS37NXV8YD7w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?width=108&crop=smart&auto=webp&s=09da4998ed9ddf74cdb4a9bf3e17a1acef669f71', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?width=216&crop=smart&auto=webp&s=adfec9a4d6c6ed55b262c4b0d306719786e404d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?width=320&crop=smart&auto=webp&s=bdec06a749b20ff20e0e8a1aa4b756a4107d89fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?width=640&crop=smart&auto=webp&s=553ce6c7a84f765dccbcc4f23b18ea664ca13750', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?width=960&crop=smart&auto=webp&s=4e05de9307eedcbdd6d74287a6d9e66eb07e7212', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?width=1080&crop=smart&auto=webp&s=8a0147f84a04db5424dff7f36250de1202c69f90', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3YQJt4uj8I_2zhsxnK4qdxOjQqnMLiZv2IVMx9xShfM.jpg?auto=webp&s=061a9984adda797dc3952ed17e357852ed86578d', 'width': 1200}, 'variants': {}}]}
|
||
Fairly new here with a question..
| 1 |
1. What LLM are ya using and for what?
2. Are you using Openweb-ui or equal desktop software linking with Ollama?
I am personally using Ollama but i have not idea which model to use..
I have two RTX 3090s and having a hardtime knowing what will fit and what is recommended for that build.
I also find openweb-ui slightly troublesome as a lose it with all my open tabs.. :)
| 2025-04-07T21:18:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtwndj/fairly_new_here_with_a_question/
|
Timziito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtwndj
| false | null |
t3_1jtwndj
|
/r/LocalLLaMA/comments/1jtwndj/fairly_new_here_with_a_question/
| false | false |
self
| 1 | null |
What's the best non-thinking and non-MoE model for regular single GPU users?
| 4 |
QwQ 32b is a thinking model which needs more context tokens, and Llama4 is all too big for a single GPU like most MoE models using more VRAM for the whole then what's being used in any moment. So what's actually the best model right now to run on a single GPU if it be 12gb, 16gb, 24gb, or 32gb for the 5090 crowd?
It's getting very hard to keep up with all the models out now.
| 2025-04-07T21:20:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtwokd/whats_the_best_nonthinking_and_nonmoe_model_for/
|
Cerebral_Zero
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtwokd
| false | null |
t3_1jtwokd
|
/r/LocalLLaMA/comments/1jtwokd/whats_the_best_nonthinking_and_nonmoe_model_for/
| false | false |
self
| 4 | null |
Ollama Cli results different from the API calls
| 1 |
[removed]
| 2025-04-07T21:25:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtwsme/ollama_cli_results_different_from_the_api_calls/
|
Ibrahimkm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtwsme
| false | null |
t3_1jtwsme
|
/r/LocalLLaMA/comments/1jtwsme/ollama_cli_results_different_from_the_api_calls/
| false | false |
self
| 1 | null |
Is there any really usable quantization Q3 or Q4 of Mistral Small 3.1?
| 1 |
[removed]
| 2025-04-07T21:26:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtwtjt/is_there_any_really_usable_quantization_q3_or_q4/
|
Epictetito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtwtjt
| false | null |
t3_1jtwtjt
|
/r/LocalLLaMA/comments/1jtwtjt/is_there_any_really_usable_quantization_q3_or_q4/
| false | false |
self
| 1 | null |
Anyone here upgrade to an epyc system? What improvements did you see?
| 10 |
My system is a dual xeon board, it gets the job done for a budget build, but when I offload performance suffers. So I have been thinking if i can do a "budget" epyc build, something with 8 channel of memory, hopefully offloading will not see performance suffer severely. If anyone has actual experience, I'll like to hear the sort of improvement you saw moving to epyc platform with some GPUs already in the mix.
| 2025-04-07T21:34:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtx05j/anyone_here_upgrade_to_an_epyc_system_what/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtx05j
| false | null |
t3_1jtx05j
|
/r/LocalLLaMA/comments/1jtx05j/anyone_here_upgrade_to_an_epyc_system_what/
| false | false |
self
| 10 | null |
Run Llama 4 - Scout and Maverick on ObserverAI
| 0 |
Hey guys support was just added for Llama4 Scout and Maverick in ObserverAI!
You can run it 100% locally with ollama or vllm (or anything that uses v1 chat completions!!), or you can try it out with Ob-Server!
[app.observer-ai.com](http://app.observer-ai.com)
| 2025-04-07T21:53:47 |
Roy3838
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtxfxt
| false | null |
t3_1jtxfxt
|
/r/LocalLLaMA/comments/1jtxfxt/run_llama_4_scout_and_maverick_on_observerai/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'QU5I_GY46m_WN6RmmdfQjq34n91YxFSw2TR2sUuwEUg', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/em20nalyhhte1.png?width=108&crop=smart&auto=webp&s=bad50c0bf9076d7515d7035a8259bd06152047b3', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/em20nalyhhte1.png?width=216&crop=smart&auto=webp&s=0ed6d1c6cb0678341c4448dd0f054f842b25e29f', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/em20nalyhhte1.png?width=320&crop=smart&auto=webp&s=94110489828ec41ab2e7e575de60156b1f8c5e2b', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/em20nalyhhte1.png?width=640&crop=smart&auto=webp&s=24a2f9705bb4340356a42ba143310b4f979f25cf', 'width': 640}, {'height': 901, 'url': 'https://preview.redd.it/em20nalyhhte1.png?width=960&crop=smart&auto=webp&s=23f96d768cce83c7fc37c0229f03a5647fbc584b', 'width': 960}, {'height': 1013, 'url': 'https://preview.redd.it/em20nalyhhte1.png?width=1080&crop=smart&auto=webp&s=b84e0a163aabf6f5531f39fb807e581c4165061f', 'width': 1080}], 'source': {'height': 1902, 'url': 'https://preview.redd.it/em20nalyhhte1.png?auto=webp&s=2a6ee2905fe110556aa19379623d0c5995132f2c', 'width': 2026}, 'variants': {}}]}
|
||
Llama 4 100b -> 48.2b or 36.2b
| 1 |
[removed]
| 2025-04-07T21:53:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtxg0t/llama_4_100b_482b_or_362b/
|
Electrical-Monitor27
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtxg0t
| false | null |
t3_1jtxg0t
|
/r/LocalLLaMA/comments/1jtxg0t/llama_4_100b_482b_or_362b/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'HR1wOKux8-vAJaQF8EgTAIbBoLBrq2U8DEACT1oINzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=108&crop=smart&auto=webp&s=632538eac37f61057c9f5fb055a8493253ed28c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=216&crop=smart&auto=webp&s=09a93a71f7e7f5eb7873b5186a849c705bcdcbbb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=320&crop=smart&auto=webp&s=bb01853e6bd80166f23a0ae8e279f168372c4249', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=640&crop=smart&auto=webp&s=3aabdc380ab1e82d85d2e407f2c062cf00da59a0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=960&crop=smart&auto=webp&s=5893576e82cbce39fb7a01f57bb33dae095517cf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=1080&crop=smart&auto=webp&s=cf16781915c2c8b4a8c1527fd2bdd287697f1c9b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?auto=webp&s=0f93350914f904108a7cb9ef2e4d5daf69891b39', 'width': 1200}, 'variants': {}}]}
|
Llama 4 Model Compression
| 1 |
[removed]
| 2025-04-07T21:56:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtxhuy/llama_4_model_compression/
|
Electrical-Monitor27
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtxhuy
| false | null |
t3_1jtxhuy
|
/r/LocalLLaMA/comments/1jtxhuy/llama_4_model_compression/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'HR1wOKux8-vAJaQF8EgTAIbBoLBrq2U8DEACT1oINzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=108&crop=smart&auto=webp&s=632538eac37f61057c9f5fb055a8493253ed28c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=216&crop=smart&auto=webp&s=09a93a71f7e7f5eb7873b5186a849c705bcdcbbb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=320&crop=smart&auto=webp&s=bb01853e6bd80166f23a0ae8e279f168372c4249', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=640&crop=smart&auto=webp&s=3aabdc380ab1e82d85d2e407f2c062cf00da59a0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=960&crop=smart&auto=webp&s=5893576e82cbce39fb7a01f57bb33dae095517cf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?width=1080&crop=smart&auto=webp&s=cf16781915c2c8b4a8c1527fd2bdd287697f1c9b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lfFBVW0wKSeLEH1JIZ1RjjxOm1uRLCSoyCoeL9RiPdY.jpg?auto=webp&s=0f93350914f904108a7cb9ef2e4d5daf69891b39', 'width': 1200}, 'variants': {}}]}
|
Is Llama 4 not fine tuning friendly?
| 6 |
Given that the smallest model has 109B parameters and memory requirements during training (assuming full weights for now) depending on total parameters, not only active parameters, doesn't this make fine-tuning models significantly more resource intensive?
Am I right, or am I missing something?
| 2025-04-07T22:02:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtxne4/is_llama_4_not_fine_tuning_friendly/
|
amang0112358
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtxne4
| false | null |
t3_1jtxne4
|
/r/LocalLLaMA/comments/1jtxne4/is_llama_4_not_fine_tuning_friendly/
| false | false |
self
| 6 | null |
Cheapest cloud GPUs to run Llama 4 maverick
| 7 | 2025-04-07T22:26:14 |
rombrr
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jty61a
| false | null |
t3_1jty61a
|
/r/LocalLLaMA/comments/1jty61a/cheapest_cloud_gpus_to_run_llama_4_maverick/
| false | false | 7 |
{'enabled': True, 'images': [{'id': 'uy-uR7Gjb5--4j2xO_OXH43R2B4BAByTHamGsArCBeE', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?width=108&crop=smart&auto=webp&s=905e81e42e376fe837938357d654d1a2d21812ad', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?width=216&crop=smart&auto=webp&s=05eb3a5c33cc5d60e3dba8749dc1bfc571613c24', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?width=320&crop=smart&auto=webp&s=0fe9dc828b484e5674f58ab22e9259b81be89578', 'width': 320}, {'height': 350, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?width=640&crop=smart&auto=webp&s=3c1bbe08b190e1b14544ac8a0fb8d1f80a637b16', 'width': 640}, {'height': 526, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?width=960&crop=smart&auto=webp&s=456cc17ecef2e30c8015393be37f07d7e4dd3a7c', 'width': 960}, {'height': 591, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?width=1080&crop=smart&auto=webp&s=c4a07eea8c575b2c8f274cd9929a9058e1f5440f', 'width': 1080}], 'source': {'height': 1748, 'url': 'https://preview.redd.it/8ezsoz1cnhte1.png?auto=webp&s=fcf71ae89d8bade80f0f054d69d4bef282115635', 'width': 3190}, 'variants': {}}]}
|
|||
Would you want a local AI that asks for permission before acting?
| 1 |
[removed]
| 2025-04-07T22:45:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtyl9m/would_you_want_a_local_ai_that_asks_for/
|
Western-Trading
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtyl9m
| false | null |
t3_1jtyl9m
|
/r/LocalLLaMA/comments/1jtyl9m/would_you_want_a_local_ai_that_asks_for/
| false | false |
self
| 1 | null |
Best Model for Sexting and General purpose right now?
| 0 |
I have 16 GB Ram and i can run 14b models. I downloaded the uncensored models but they are not responding everything. And can’t find something great for roleplaying/sexting. Is there any advice.
I tried jailbreaking but none of them worked.
| 2025-04-07T22:54:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtyrw0/best_model_for_sexting_and_general_purpose_right/
|
Sweet_Fisherman6443
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtyrw0
| false | null |
t3_1jtyrw0
|
/r/LocalLLaMA/comments/1jtyrw0/best_model_for_sexting_and_general_purpose_right/
| false | false |
nsfw
| 0 | null |
What is the most efficient model?
| 1 |
I am talking about 8B parameters,around there which model is most powerful.
I focus 2 things generally,for coding and Image Generation.
| 2025-04-07T23:40:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtzru2/what_is_the_most_efficient_model/
|
Sweet_Fisherman6443
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtzru2
| false | null |
t3_1jtzru2
|
/r/LocalLLaMA/comments/1jtzru2/what_is_the_most_efficient_model/
| false | false |
self
| 1 | null |
Why we may be wrong about Llama 4 . . .
| 55 |
I believe a lot has been lost in the discussion over the problematic roll out of the Llama 4 models. What we are seeing in these recent releases is a lot more novelty in LLM design with trends to multi-modality, new versions of reasoning and non-reasoning logic, different types of MoE's, etc which is causing the "first impression" of the average user to become misaligned with the progress being made. Gemma 3, particularly the multi-modal functionality, had a terrible rollout which has still not entirely been fixed in popular local LLM platforms like LM Studio, Ollama, Kobold CPP, etc. I mean if you think about it, it makes a lot of sense. To squeeze better performance out of current consumer technology and get these models out to the public, there's a whole lot of variables, not the least of which is a reliance on open source platforms to anticipate or somehow know what is going to happen when the model is released. If every new model came out with the same architecture supported by these platforms, how could there even be innovation? None of them are handling audio inputs in some standardized way so how are they going to roll out the "omni" models coming out? I haven't seen the omni version of Phi-4 supported by anyone so far. vLLM stands apart from most of these, even llama cpp, because it is a production level system actively deployed for serving models efficiently because of superior support for concurrency, throughput, etc. The Gemma team worked with vLLM and Llama CPP on these before releasing the model and they STILL had a bad rollout. Qwen 2.5 VL has been out forever, and it's still not even supported on most local inference platforms.
Since Mixtral at least, any novel architecture in the model has seen hiccups like this so we should all be used to it now without jumping to conclusions about the model until it is running properly. If you look at what has been posted about results derived from Meta's own inferencing, you can see the models clearly perform better across the board than some guy on X that got it to run on his stuff. It's all part of the ride and we should wait for support before deciding the dudes making the models have no idea what they are doing, which we all know just is not the case. I think what we will find is that this is actually the future of local LLMs, models like this. They get around the gigantic issues of memory transfer speeds by creating highly performant MoE's that can potentially run on a CPU, or at least platforms like AMD AI, Apple, etc. In fact, Qwen is set to release a very, very similar model imminently and it appears they are working with vLLM on that today. I believe this model and the new Qwen 3 MoE are going to redefine what can be done since information density has gotten so good that 3b models are doing what 24b models were doing a year and a half ago, at speeds superior to hosted solutions. It's one of the only known ways currently to get over 20 tokens a second on something that performs on par with with Sonnet 3.5, GPT 4, etc and it may guide hardware developers to focus on adding memory channels, not to match VRAM which is not going to happen, but to get to speeds which run things like this super fast, fast enough to code, do research at home, etc.
For those who are curious, you can view the commits up on vLLM today regarding the problems with LLama 4. Here's a summary from QwQ about the large commit made about 5 hours ago as to what was wrong:
\### \*\*Summary of Root Causes\*\*
The original vLLM implementation struggled with Llama4 primarily because:
1. Its MoE architecture introduced new configuration parameters and attention patterns not accounted for in prior code.
2. Flash Attention required modifications to handle local blocks, chunked sequences, and block tables for expert routing.
3. Initialization logic failed due to differing model class names or parameter naming conventions (e.g., \`text\_config\`).
4. Memory management lacked support for MoE’s parallelism requirements, necessitating changes in how batches are split and processed.
The commits address these by adding specialized handling for Llama4's architecture, reworking attention kernels, and adjusting configurations to match Meta’s implementation details.
\### \*\*End of Summary\*\*
(If anyone wants the fully analysis, I will paste it below since I ran all the diffs into QwQ)
From that, you can see, at the very least, there were a number of issues affecting experts in the MoE system, flash attention was probably not working at all, memory issues galore, etc. Can it code the hexagon stuff eventually or score a 9 on your personal creative fiction benchmark? We don't know yet but for all our sakes, something like this is a brighter path forward. What about MoE's underperforming dense models because of some unnamed law of inference? Well, this ia novel fused MoE so we will have to see. Changes have to be made to get us closer to AGI on affordable consumer computers and all that growth is going to come with some pains. Soon the models will be able to make their own adaptations to these inference platforms to get out into the world less painfully but until then we are where we are.
| 2025-04-07T23:44:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jtzue8/why_we_may_be_wrong_about_llama_4/
|
dionysio211
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtzue8
| false | null |
t3_1jtzue8
|
/r/LocalLLaMA/comments/1jtzue8/why_we_may_be_wrong_about_llama_4/
| false | false |
self
| 55 | null |
Prompt → browser agent → json. Easy
| 0 |
[https://github.com/nottelabs/notte](https://github.com/nottelabs/notte) new sota web agents
| 2025-04-07T23:44:43 |
https://v.redd.it/ivrpvqwx1ite1
|
Aggravating_Quiet378
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtzurx
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ivrpvqwx1ite1/DASHPlaylist.mpd?a=1746661500%2CMTE1NTliYzI4MGJmMGQyZjBhZGRmNDdlYmVmMTZiMGM0ZjYxYTc2MWE3ZTE1ZTU2M2Y3NTM5MjdjYzZjNjc2OQ%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/ivrpvqwx1ite1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ivrpvqwx1ite1/HLSPlaylist.m3u8?a=1746661500%2CMjkxYmZkMDNlZmQ2YTRmNDk5OTI5ZWEwZDgzNGNkNDIyOTM4ZWJlNTMxNmUxN2Y0YjA0MzVlNjM3NzQwM2Q5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ivrpvqwx1ite1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
|
t3_1jtzurx
|
/r/LocalLLaMA/comments/1jtzurx/prompt_browser_agent_json_easy/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?width=108&crop=smart&format=pjpg&auto=webp&s=27ca20099e3c8fa5447c5c6ab2caca204651cdc6', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?width=216&crop=smart&format=pjpg&auto=webp&s=9c8f44ebdca899bb863687bb45a45875acddf91f', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?width=320&crop=smart&format=pjpg&auto=webp&s=d8b4cd2d870a6f709e13a59987f304e2e02c6e6d', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?width=640&crop=smart&format=pjpg&auto=webp&s=f7ee13e19b4c3b8d92dfac7cad72813a436de361', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?width=960&crop=smart&format=pjpg&auto=webp&s=3c1ab847cd7986a09c561f24a979a5b03141fad8', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=683667b6cefc44d7f04ed8a29db44d75db88ccdc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a3g4bTVxd3gxaXRlMaW3T2t31m-0Pgu61x3U6UnQgkDLp14IugBl52GEN4mK.png?format=pjpg&auto=webp&s=4bf54be7ed6e80c1135eeaf4df91fd25b4045f0c', 'width': 1728}, 'variants': {}}]}
|
|
Function Calling with Gemma3 with full code
| 1 |
[removed]
| 2025-04-07T23:45:11 |
AICodeandLearn
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtzv45
| false | null |
t3_1jtzv45
|
/r/LocalLLaMA/comments/1jtzv45/function_calling_with_gemma3_with_full_code/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'mBZe0s65kMiV7mz27DVhJsCWnWBBr3yK8JFBZAfHwq4', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?width=108&crop=smart&auto=webp&s=0847b25069a883d3ee5f2101191aaa04b5a206b7', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?width=216&crop=smart&auto=webp&s=6425f2be20a3b05d808ebb032ce73552aff3c23a', 'width': 216}, {'height': 391, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?width=320&crop=smart&auto=webp&s=f8b596e71d9ed5dee402e579c19acbdd75ba6e3d', 'width': 320}, {'height': 783, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?width=640&crop=smart&auto=webp&s=c89e58d198ec3d106de96b0afc9a12c827d1effd', 'width': 640}, {'height': 1175, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?width=960&crop=smart&auto=webp&s=ad5ccbef9070b0bbb2a7dea722e2cb07a191a700', 'width': 960}, {'height': 1322, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?width=1080&crop=smart&auto=webp&s=c84fcadfa5f3463553abf491188f4caa79d512f7', 'width': 1080}], 'source': {'height': 1572, 'url': 'https://preview.redd.it/4v1ep3832ite1.jpeg?auto=webp&s=c48fe60455e1b32e4012b6c6730bb711b3f3b568', 'width': 1284}, 'variants': {}}]}
|
||
How large is Tencent’s Hunyuan-T1 ? It told me that it is based on Llama-7B.
| 0 |
Hi,
Recently I asked how many parameters Hunyuan-T1 has. It answered that it is based on Llama. Specifically Llama-7B, so it has 7 billion parameters. However, sometimes it says that it is based on the Llama-family but doesn't disclose how many parameters as it is proprietary. And sometimes it doesn't even say that. It also sometimes says it has 1 billion parameters. So I assume that I should ignore the answer?
I usually start the question by asking if it speaks English, because then the thinking/reasoning is shown in english. Otherwise the reasoning part might be shown in Mandarin, but the final answer will be in English.
4th picture shows the benchmark for this model.
| 2025-04-07T23:48:43 |
https://www.reddit.com/gallery/1jtzxo7
|
Proud_Fox_684
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jtzxo7
| false | null |
t3_1jtzxo7
|
/r/LocalLLaMA/comments/1jtzxo7/how_large_is_tencents_hunyuant1_it_told_me_that/
| false | false | 0 | null |
|
Llama-4-Scout-17B-16E on single 3090
| 2 | 2025-04-07T23:52:04 |
jacek2023
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju009e
| false | null |
t3_1ju009e
|
/r/LocalLLaMA/comments/1ju009e/llama4scout17b16e_on_single_3090/
| false | false | 2 |
{'enabled': True, 'images': [{'id': 'TgwpWedIQP2UxWi1kDQdbCDzLjYQOAa9SKVJ6_RT_WQ', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/oihapqg93ite1.png?width=108&crop=smart&auto=webp&s=487b4455ed52b72ccf33e1d23f8f0aa741bc0a58', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/oihapqg93ite1.png?width=216&crop=smart&auto=webp&s=f0ecc0b5c8e7d5b7d108700979da1061a55e8b15', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/oihapqg93ite1.png?width=320&crop=smart&auto=webp&s=466e358a0dec4c942fda8edbfb6c3dbc1531b997', 'width': 320}, {'height': 265, 'url': 'https://preview.redd.it/oihapqg93ite1.png?width=640&crop=smart&auto=webp&s=074b4e4c4ea6f027eadd359509f67dc757acdc52', 'width': 640}, {'height': 398, 'url': 'https://preview.redd.it/oihapqg93ite1.png?width=960&crop=smart&auto=webp&s=ba99556f26bd9a196a9dd84c976c44ae924bb898', 'width': 960}, {'height': 448, 'url': 'https://preview.redd.it/oihapqg93ite1.png?width=1080&crop=smart&auto=webp&s=0eda38f44421866eae7fe332fb6b4a042ec1ae0f', 'width': 1080}], 'source': {'height': 1071, 'url': 'https://preview.redd.it/oihapqg93ite1.png?auto=webp&s=bd35a90199fb82f76bc146bf571ad9216516075a', 'width': 2578}, 'variants': {}}]}
|
|||
Llama-4-Scout-17B-16E on single 3090 - 6 t/s
| 84 | 2025-04-07T23:57:12 |
jacek2023
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju044y
| false | null |
t3_1ju044y
|
/r/LocalLLaMA/comments/1ju044y/llama4scout17b16e_on_single_3090_6_ts/
| false | false | 84 |
{'enabled': True, 'images': [{'id': 'BclOYidWl1jAUxg1O8eS-VL6dTFtmcRRgXfH_4QE5_I', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?width=108&crop=smart&auto=webp&s=7ae3a1c65a12990e86de55f6be0fe5fb000eeb54', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?width=216&crop=smart&auto=webp&s=e125afefb16ceec78f5134d070412f3341ba531a', 'width': 216}, {'height': 131, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?width=320&crop=smart&auto=webp&s=c4819cb648f77af9908ea29943bc2a78865060e4', 'width': 320}, {'height': 262, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?width=640&crop=smart&auto=webp&s=33480ec505cfb3066d70e4cdb8885118b97b3ce2', 'width': 640}, {'height': 394, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?width=960&crop=smart&auto=webp&s=c63674178e8350c901309cec9d91e6bde6d96797', 'width': 960}, {'height': 443, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?width=1080&crop=smart&auto=webp&s=e17f84db48201464a552a6ea29a8fa5e7390a4a7', 'width': 1080}], 'source': {'height': 1158, 'url': 'https://preview.redd.it/xjzq4t774ite1.png?auto=webp&s=e4444919fd81c4fe71733fc492d958346588c1e8', 'width': 2821}, 'variants': {}}]}
|
|||
Gemma3 System Prompt under Open-Webui for reasoning-like-behavior
| 5 |
Really liking Gemma3 for pretty much everything daily use (RAG, writing aid, simple code, scrape with tools/function calling, etc..), but lately have been trying more tricky stuff with reasoning and it's kind of working (?) anyway, if you want to try this might be worth.
I am using Gemma3:4b at Q6 by Bartowski. should be even better with the 12b or the 27b.
Temperature = 0.2 (a little above the 0.1 recommended by Google)
Top K = 64
Top P = 0.95
Min P = 0
Repeat penalty (Ollama) = 1.3
System prompt:
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin\_of\_thought|> {thought process with each logical step separated by '\\n\\n'} <|end\_of\_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin\_of\_solution|> {final, precise, step-by-step solution} <|end\_of\_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
| 2025-04-08T00:00:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju06fo/gemma3_system_prompt_under_openwebui_for/
|
JLeonsarmiento
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju06fo
| false | null |
t3_1ju06fo
|
/r/LocalLLaMA/comments/1ju06fo/gemma3_system_prompt_under_openwebui_for/
| false | false |
self
| 5 | null |
Quasar alpha compared to llama-4
| 1 |
[https://www.youtube.com/watch?v=SZH34GSneoc](https://www.youtube.com/watch?v=SZH34GSneoc)
A part of me feels this is just maverick checkpoint. Very similar scores to maverick, maybe a little bit better...
| Test Type | Llama 4 Maverick | Llama 4 Scout | Quasar Alpha |
|-----------|------------------|---------------|--------------|
| Harmful Question Detection | 100% | 90% | 100% |
| SQL Code Generation | 90% | 90% | 90% |
| Retrieval Augmented Generation | 86.5 | 81.5 | 90% |
| 2025-04-08T00:04:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju09xv/quasar_alpha_compared_to_llama4/
|
Ok-Contribution9043
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju09xv
| false | null |
t3_1ju09xv
|
/r/LocalLLaMA/comments/1ju09xv/quasar_alpha_compared_to_llama4/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IjiVJ2nurqucTHHwb3m9mJHTfh_BKCfV4hrKlN83sWE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LdcnXmlaLHL3gdtwbArloS8b6EPU117TuHxfxOECegI.jpg?width=108&crop=smart&auto=webp&s=b78c75094f9466775ad8eb57fccc30dbbec38773', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LdcnXmlaLHL3gdtwbArloS8b6EPU117TuHxfxOECegI.jpg?width=216&crop=smart&auto=webp&s=0da7bf9221fe1b7014632aea0f36643eec691f29', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LdcnXmlaLHL3gdtwbArloS8b6EPU117TuHxfxOECegI.jpg?width=320&crop=smart&auto=webp&s=f3c4885f13932542e348755dd2d12dff0257524b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LdcnXmlaLHL3gdtwbArloS8b6EPU117TuHxfxOECegI.jpg?auto=webp&s=87a30216c02f832e5afde545d523131c36994ac2', 'width': 480}, 'variants': {}}]}
|
Would like some guidance
| 1 |
[removed]
| 2025-04-08T00:17:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju0jam/would_like_some_guidance/
|
HisRoyalHighnessM
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju0jam
| false | null |
t3_1ju0jam
|
/r/LocalLLaMA/comments/1ju0jam/would_like_some_guidance/
| false | false |
self
| 1 | null |
LM Arena confirm that the version of Llama-4 Maverick listed on the arena is a "customized model to optimize for human preference"
| 219 | 2025-04-08T00:23:00 |
https://x.com/lmarena_ai/status/1909397817434816562
|
TKGaming_11
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju0nd6
| false | null |
t3_1ju0nd6
|
/r/LocalLLaMA/comments/1ju0nd6/lm_arena_confirm_that_the_version_of_llama4/
| false | false | 219 |
{'enabled': False, 'images': [{'id': 'SzmCRc61cd7A_UC2R-l7Js_FR4D-3nFHnRA-AwnI9f0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Sg2foySeNKtSftpVRS-DvTZSfTyNGrVywN1v0vjvasA.jpg?width=108&crop=smart&auto=webp&s=c0bc2190330f7558e229144dd8c588556bdeaf22', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Sg2foySeNKtSftpVRS-DvTZSfTyNGrVywN1v0vjvasA.jpg?auto=webp&s=ea9a4e383e164ae77ddfb735f331a68f30742146', 'width': 200}, 'variants': {}}]}
|
||
Llama and Europe
| 2 |
This article should put things into perspective for you
https://nypost.com/2025/04/01/business/meta-trying-to-persuade-trump-to-fight-european-unions-looming-antitrust-fine/
| 2025-04-08T00:25:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju0oz2/llama_and_europe/
|
Conscious_Nobody9571
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju0oz2
| false | null |
t3_1ju0oz2
|
/r/LocalLLaMA/comments/1ju0oz2/llama_and_europe/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'sy_Gm5YW5za4xVWhbjlvDwcyiRJbrH9yTxX2adY1CdY', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/L1cCSLMjCPJgmFUnoupa_PZNYEbFBAr7r_uT-Zj7LPo.jpg?width=108&crop=smart&auto=webp&s=b7e83a4e8964af7f94963ede2ab04d754942c348', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/L1cCSLMjCPJgmFUnoupa_PZNYEbFBAr7r_uT-Zj7LPo.jpg?width=216&crop=smart&auto=webp&s=9bba148df802ddfab3aa4ef52c1c7456b2e8ef37', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/L1cCSLMjCPJgmFUnoupa_PZNYEbFBAr7r_uT-Zj7LPo.jpg?width=320&crop=smart&auto=webp&s=bafc4ba00ead035581a733df6b9ecb716f3c9326', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/L1cCSLMjCPJgmFUnoupa_PZNYEbFBAr7r_uT-Zj7LPo.jpg?width=640&crop=smart&auto=webp&s=770f3fb291070a4e7f16b65d5dade66eea34a111', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/L1cCSLMjCPJgmFUnoupa_PZNYEbFBAr7r_uT-Zj7LPo.jpg?width=960&crop=smart&auto=webp&s=72af4e2253391ea6f35de8f13dbb1d6e7e2f5c59', 'width': 960}], 'source': {'height': 682, 'url': 'https://external-preview.redd.it/L1cCSLMjCPJgmFUnoupa_PZNYEbFBAr7r_uT-Zj7LPo.jpg?auto=webp&s=82c52108bed34b4698639d352f022f5e858207ff', 'width': 1024}, 'variants': {}}]}
|
ReGenNexus Core: An Open Protocol for Universal Connectivity (Seeking Collaborators)
| 1 |
[removed]
| 2025-04-08T00:36:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju0wli/regennexus_core_an_open_protocol_for_universal/
|
OpusG5
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju0wli
| false | null |
t3_1ju0wli
|
/r/LocalLLaMA/comments/1ju0wli/regennexus_core_an_open_protocol_for_universal/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'I6YNbB1SroSuvFWOk5y1sNmBUY8-c8AldtSE1NUEUcU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?width=108&crop=smart&auto=webp&s=06f7dcee8efde1c6baf7209fd53b4393dd971295', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?width=216&crop=smart&auto=webp&s=5ee40ae41a372326bf217e6a95d2189dc1bf30bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?width=320&crop=smart&auto=webp&s=34386ed4bcb5c14997ab1961338a65c165bbba75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?width=640&crop=smart&auto=webp&s=01ad22f961ed4a27048b8ea46ad375ee4ea191a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?width=960&crop=smart&auto=webp&s=a982f1839dd7b2c54e2cdaabeddc0472b33d0c05', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?width=1080&crop=smart&auto=webp&s=d3c89f6508fef3d4069021a8d277be7e8ea0e6e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Th_n1jvJv0CNRy73-mnzd759LVyEMWoPfYHISksj-6s.jpg?auto=webp&s=67e82a56e4ffa4dbb98705b30f732a479e74f979', 'width': 1200}, 'variants': {}}]}
|
Thinking of building a GPU availability tracker—worth it?
| 1 |
[removed]
| 2025-04-08T00:41:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju10a2/thinking_of_building_a_gpu_availability/
|
Signal_Library_6847
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju10a2
| false | null |
t3_1ju10a2
|
/r/LocalLLaMA/comments/1ju10a2/thinking_of_building_a_gpu_availability/
| false | false |
self
| 1 | null |
Any tips for creating more realistic conversations with your chatbot?
| 1 |
I build a desktop app that let's you create custom chatbots that run locally. I'm trying to come up with some ways to make them the chats feel more realistic. I've already given them moods, personalities, names, and voices, but I'm looking for more interesting or obscure techniques I could apply to the prompt generation. What are some must haves for the system prompt for example?
Any tips or feedback is appreciated
App link here in case you are curious https://github.com/Capsize-Games/airunner
| 2025-04-08T00:51:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju17ai/any_tips_for_creating_more_realistic/
|
w00fl35
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju17ai
| false | null |
t3_1ju17ai
|
/r/LocalLLaMA/comments/1ju17ai/any_tips_for_creating_more_realistic/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Dqt2Iio6VDpGOc2pMoRx_Ah7Q3CSpVWM6a_KCfQ4fRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?width=108&crop=smart&auto=webp&s=c5ba34f9550e67f1586a5418c8aea5d0e94afd48', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?width=216&crop=smart&auto=webp&s=5ed87d9744d59269c40d88dd7cc79cbcf85d8df4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?width=320&crop=smart&auto=webp&s=34c50584bbf05b82cc56d96943d7b6f6f4b61c9a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?width=640&crop=smart&auto=webp&s=843c4e8034556ff1c13984c7859a101c12f33baa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?width=960&crop=smart&auto=webp&s=8779d8dba624167ed516db33ce1eb25976ab4c34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?width=1080&crop=smart&auto=webp&s=e8a677998de8433e5215afa5b34ba18445f840da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t37cioyzOxkAY2wstbnMpQn6n_aJjozJ25fGwUEBmRA.jpg?auto=webp&s=8f8b2f09bfac56890c151a374d3978a2d53461de', 'width': 1200}, 'variants': {}}]}
|
PSA: LM Studio can now run Llama 4 GGUFs
| 2 |
You just need to update the runtime to the latest beta.
Bonus unsolicited opinion: Scout seems kind of good and super fast.
| 2025-04-08T00:59:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju1cux/psa_lm_studio_can_now_run_llama_4_ggufs/
|
nomorebuttsplz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju1cux
| false | null |
t3_1ju1cux
|
/r/LocalLLaMA/comments/1ju1cux/psa_lm_studio_can_now_run_llama_4_ggufs/
| false | false |
self
| 2 | null |
Quasar Alpha on NoLiMa - 16k Effective Context - Best Known Result
| 19 |
I ran the NoLiMa ("No Literal Matching") benchmark on Quasar Alpha with tokenizations as given by `tiktoken.encoding_for_model("gpt-4o")`. This benchmark evaluates performance on long-context information retrieval (needle-in-a-haystack) tasks where there is minimal opportunity for literal text matching between the prompt and needle. All credit to Modarressi et al. at Adobe Research for the benchmark; their code and results can be found here: https://github.com/adobe-research/NoLiMa
In my testing Quasar Alpha achieves an average score of 85.1% at a context length of 16K, which exceeds the best result (by GPT-4o) given by the authors. It also outperforms all the models tested by the authors on the abbreviated -Hard benchmark, with an average score of 62.8% at 16K.
Reasoning models, which in the paper were only evaluated on NoLiMa-Hard, may perform better on the non-hard variant, as may recent models such as Gemini 2.5 Pro. Nevertheless, given its strong performance on this benchmark I look forward to finding out more about this model.
At 32K I expect Quasar to fall below the 85% threshold, however I've hit the OpenRouter daily rate limit so running that will have to wait for tomorrow. I will update this post and upload raw result files once that's available.
One further note: the authors defined "Base Score" as the mean of maximums of 250, 500, and 1K context, per task. Since it's nearly 100% anyways I didn't bother and just used maximum of means, but the Base Score for Quasar Alpha should actually be slightly higher.
## Results
| Models | Claimed Length | Effective Length | Base Score<br>(×0.85: Thr.) | 1K | 2K | 4K | 8K | 16K | 32K |
|----------------------|:-------------:|:---------------:|:-----------------------:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Quasar Alpha** | 1M | 16k | >=97.8 (>=83.1) | **97.8** | - | - | **89.2** | **85.1** | Pending |
| GPT-4o | 128K | 8K | 99.3 (84.4) | **98.1** | **98.0** | **95.7** | **89.2** | 81.6 | 69.7 |
| Llama 3.3 70B | 128K | 2K | 97.3 (82.7) | **94.2** | **87.4** | 81.5 | 72.1 | 59.5 | *42.7* |
| Llama 3.1 405B | 128K | 2K | 94.7 (80.5) | **89.0** | **85.0** | 74.5 | 60.1 | 48.4 | *38.0* |
| Llama 3.1 70B | 128K | 2K | 94.5 (80.3) | **91.0** | **81.8** | 71.2 | 62.7 | 51.8 | *43.2* |
| Gemini 1.5 Pro | 2M | 2K | 92.6 (78.7) | **86.4** | **82.7** | 75.4 | 63.9 | 55.5 | 48.2 |
| Jamba 1.5 Mini | 256K | <1K | 92.4 (78.6) | 76.3 | 74.1 | 70.8 | 62.2 | 52.7 | *43.6* |
| Command R+ | 128K | <1K | 90.9 (77.3) | 77.0 | 73.5 | 66.3 | *39.5* | *21.3* | *7.4* |
| Mistral Large 2 | 128K | 2K | 87.9 (74.7) | **86.1** | **85.5** | 73.3 | 51.5 | *32.6* | *18.7* |
| Claude 3.5 Sonnet | 200K | 4K | 87.6 (74.4) | **85.4** | **84.0** | **77.6** | 61.7 | 45.7 | *29.8* |
| Gemini 1.5 Flash | 1M | <1K | 84.7 (72.0) | 68.6 | 61.6 | 51.0 | 44.4 | *35.5* | *28.6* |
| GPT-4o mini | 128K | <1K | 84.9 (72.2) | 67.7 | 58.2 | 44.1 | *32.6* | *20.6* | *13.7* |
| Llama 3.1 8B | 128K | 1K | 76.7 (65.2) | **65.7** | 54.4 | 44.1 | *31.9* | *22.6* | *14.2* |
### NoLiMa-Hard Results
| Models | Base Score | 4K | 8K | 16K | 32K |
|-----------------------|:---------:|:---:|:---:|:---:|:---:|
| **Quasar Alpha** | Pending | - | Pending | 62.8 | Pending |
| **Llama 3.3 70B** | | | | | |
| - w/o CoT | 98.3 | 55.5 | *37.2* | *16.7* | *8.9* |
| - w/ CoT | 97.1 | 73.0 | 51.2 | *31.8* | *10.1* |
| **Reasoning Models** | | | | | |
| GPT-o1 | 99.9 | 92.0 | 78.0 | 60.1 | *31.1* |
| GPT-o3 Mini | 98.8 | 52.8 | *36.9* | *25.5* | *18.9* |
| DeepSeek R1-Distill-Llama-70B | 99.9 | 91.4 | 75.5 | *49.4* | *20.7* |
P.S.: I originally cloned this benchmark because I wanted to run it on Llama 4 Scout, but it would've cost ~$100 and I didn't feel like blowing that just to benchmark somebody else's model. If anyone _does_ want to spend that but is too lazy to download and run the benchmark, send me your ($-limited) OpenRouter key and I'll run it.
| 2025-04-08T00:59:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju1czn/quasar_alpha_on_nolima_16k_effective_context_best/
|
jwlarocque
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju1czn
| false | null |
t3_1ju1czn
|
/r/LocalLLaMA/comments/1ju1czn/quasar_alpha_on_nolima_16k_effective_context_best/
| false | false |
self
| 19 |
{'enabled': False, 'images': [{'id': 'CFczR73JbQGloY4zvaLHh1b8KKqolYyACZvQRl8_rkc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?width=108&crop=smart&auto=webp&s=acf12e0fbb1fea7ba5ffa8e5ff4f0b446a729e1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?width=216&crop=smart&auto=webp&s=de1d0d34f51b1a51d0cd7937fe60c50cae9a284d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?width=320&crop=smart&auto=webp&s=2b1ce4d53e293dddb884a6b8f5caa52fdd817bfc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?width=640&crop=smart&auto=webp&s=1c7514b185819c8e96d4c9543c59db5d28c1f030', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?width=960&crop=smart&auto=webp&s=3cdd0a40275f2d2c888b54844c3fb4d9ed786f3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?width=1080&crop=smart&auto=webp&s=9356b2e080bc41f49f4ede51a2acea0342887795', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EGtN4xetMMmakhET34h8VHilgfJAgDwk5tuB-7OxR5A.jpg?auto=webp&s=b1c3eed46c8128a42175f2f1f6cff0704111bc3b', 'width': 1200}, 'variants': {}}]}
|
Best $2000 AI development workstation? Ryzen AI Max+ 395 vs M4 Mac Studio
| 1 |
[removed]
| 2025-04-08T01:15:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju1o8w/best_2000_ai_development_workstation_ryzen_ai_max/
|
jhcashman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju1o8w
| false | null |
t3_1ju1o8w
|
/r/LocalLLaMA/comments/1ju1o8w/best_2000_ai_development_workstation_ryzen_ai_max/
| false | false |
self
| 1 | null |
A hint about how Llama 4 topped lmarena
| 4 | 2025-04-08T01:17:18 |
https://x.com/vikhyatk/status/1909403603409969533
|
obvithrowaway34434
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju1pjm
| false | null |
t3_1ju1pjm
|
/r/LocalLLaMA/comments/1ju1pjm/a_hint_about_how_llama_4_topped_lmarena/
| false | false | 4 |
{'enabled': False, 'images': [{'id': '6s0w--8_xz20QTah7XVcYDQocPS1Cg2M-fZ1lwBi47Q', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?width=108&crop=smart&auto=webp&s=bb80fa704dff790c307eb178673ef23e0d238da4', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?width=216&crop=smart&auto=webp&s=34a8102e21892750284c4a61dff725b79606c48b', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?width=320&crop=smart&auto=webp&s=fedbe627df78f06efa5f82ac8193e6a43c9b6637', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?width=640&crop=smart&auto=webp&s=aa4d0083d180c991d8f9f83337dfa13925edceba', 'width': 640}, {'height': 530, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?width=960&crop=smart&auto=webp&s=75f613e0c4606ed233ed34437701f9fac10bdc8a', 'width': 960}, {'height': 596, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?width=1080&crop=smart&auto=webp&s=287ecf3e11b310effed29dc599bbc1787d3b7e20', 'width': 1080}], 'source': {'height': 1131, 'url': 'https://external-preview.redd.it/27ZiRQXSJtSqrrsc6ngEuD3pOxr4Plu5e8xVF2LD09Q.jpg?auto=webp&s=d6d54c4f60c46c6d0b74805dc2af42e33d89f2bc', 'width': 2048}, 'variants': {}}]}
|
||
Llama 4 (Scout) GGUFs are here! (and hopefully are final!) (and hopefully better optimized!)
| 286 |
Quants seem coherent, conversion seems to match original model's output, things look good thanks to Son over on llama.cpp putting great effort into it for the past 2 days :) Super appreciate his work!
Static quants of Q8_0, Q6_K, Q4_K_M, and Q3_K_L are up on the lmstudio-community page:
https://huggingface.co/lmstudio-community/Llama-4-Scout-17B-16E-Instruct-GGUF
(If you want to run in LM Studio make sure you update to the latest beta release)
Imatrix (and smaller sizes) are up on my own page:
https://huggingface.co/bartowski/meta-llama_Llama-4-Scout-17B-16E-Instruct-GGUF
One small note, if you've been following along over on the llama.cpp GitHub, you may have seen me working on some updates to DeepSeek here:
https://github.com/ggml-org/llama.cpp/pull/12727
These changes though also affect MoE models in general, and so Scout is similarly affected.. I decided to make these quants WITH my changes, so they should perform better, similar to Unsloth's DeekSeek releases, albeit at the cost of some size.
IQ2_XXS for instance is about 6% bigger with my changes (30.17GB versus 28.6GB), but I'm hoping that the quality difference will be big. I know some may be upset at larger file sizes, but my hope is that even IQ1_M is better than IQ2_XXS was.
Q4_K_M for reference is about 3.4% bigger (65.36 vs 67.55)
I'm running some PPL measurements for Scout (you can see the numbers from DeepSeek for some sizes in the listed PR above, for example IQ2_XXS got 3% bigger but PPL improved by 20%, 5.47 to 4.38) so I'll be reporting those when I have them. Note both lmstudio and my own quants were made with my PR.
In the mean time, enjoy!
| 2025-04-08T01:19:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju1qtt/llama_4_scout_ggufs_are_here_and_hopefully_are/
|
noneabove1182
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju1qtt
| false | null |
t3_1ju1qtt
|
/r/LocalLLaMA/comments/1ju1qtt/llama_4_scout_ggufs_are_here_and_hopefully_are/
| false | false |
self
| 286 |
{'enabled': False, 'images': [{'id': 'ihTr0UsX-ImasxlvlIcJ04Y1JbbxppEGoPQkxX6agtw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?width=108&crop=smart&auto=webp&s=be29d5f0b8ea2fa8b6fa271fa8b1b4391664b4bb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?width=216&crop=smart&auto=webp&s=f44e04583ff469091108181f12cd5173069b1d6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?width=320&crop=smart&auto=webp&s=70031d87f054dc7b9c306d6f2218badbf07c69a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?width=640&crop=smart&auto=webp&s=b2bd87dab3b470ff88c69d6a01cdcf4f6012cd6f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?width=960&crop=smart&auto=webp&s=e5543d85704669a14a8b1e979a09a78306ef3691', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?width=1080&crop=smart&auto=webp&s=e1c1b5fd619fe94725a5a2f761c92d85c7f787b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TfTrgFWZoogds0mdmdr4Zu8DEmBHhzLBtEfop334z9E.jpg?auto=webp&s=5da8f3063b2a1090dcd6f5aaf3bcc2bc400d9f73', 'width': 1200}, 'variants': {}}]}
|
NVIDIA DGX Spark Demo
| 1 |
Running Demo starts at 24:53, using DeepSeek r1 32B.
| 2025-04-08T01:55:34 |
https://youtu.be/S_k69qXQ9w8?si=hPgTnzXo4LvO7iZX
|
Nicollier88
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju2g81
| false |
{'oembed': {'author_name': 'NVIDIA Developer', 'author_url': 'https://www.youtube.com/@NVIDIADeveloper', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/S_k69qXQ9w8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA DGX Spark: Your Personal AI Supercomputer | NVIDIA GTC 2025 Session"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/S_k69qXQ9w8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA DGX Spark: Your Personal AI Supercomputer | NVIDIA GTC 2025 Session', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1ju2g81
|
/r/LocalLLaMA/comments/1ju2g81/nvidia_dgx_spark_demo/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'DDBJJPFYPjB6JifIgc3cYyOHDm_Q0LCePXsykTlmZiI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aBgIt8ksptyxvJ8kYSsuiTS6xbpzKuK7s-Yw3xz-K6I.jpg?width=108&crop=smart&auto=webp&s=6f7d63d776bee5e3f682c030dfb07a14b30f02ed', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aBgIt8ksptyxvJ8kYSsuiTS6xbpzKuK7s-Yw3xz-K6I.jpg?width=216&crop=smart&auto=webp&s=f3f7e0aaf8f481a6b75967844beaf0ea153a8545', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aBgIt8ksptyxvJ8kYSsuiTS6xbpzKuK7s-Yw3xz-K6I.jpg?width=320&crop=smart&auto=webp&s=4673f16028d0458b3a6985a85416e2c91bcab340', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aBgIt8ksptyxvJ8kYSsuiTS6xbpzKuK7s-Yw3xz-K6I.jpg?auto=webp&s=ddc3e6c693df4bd23095fb0aa6740cdb03127cd8', 'width': 480}, 'variants': {}}]}
|
|
Karpathy's newest blog: Power to the people: How LLMs flip the script on technology diffusion
| 99 | 2025-04-08T02:08:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju2po9/karpathys_newest_blog_power_to_the_people_how/
|
Cheap_Ship6400
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju2po9
| false | null |
t3_1ju2po9
|
/r/LocalLLaMA/comments/1ju2po9/karpathys_newest_blog_power_to_the_people_how/
| false | false | 99 |
{'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?auto=webp&s=6d730f0aadb2da7eefca105ee16d8e99ecfca4a6', 'width': 124}, 'variants': {}}]}
|
||
Llama 4 At Livebench
| 1 | 2025-04-08T02:28:45 |
WesternYou4601
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju33om
| false | null |
t3_1ju33om
|
/r/LocalLLaMA/comments/1ju33om/llama_4_at_livebench/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'nAJiX-hIukNg5A4BbThXrqwrWcKiSKdVR2yTWR6uAyc', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?width=108&crop=smart&auto=webp&s=de42d6daed0f2507a32108bb13c5b98ddcf2c60d', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?width=216&crop=smart&auto=webp&s=7670da76a427b34908ae48eb90de50027cd202bd', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?width=320&crop=smart&auto=webp&s=b5aba3096ec6b4c54c12b17eb2cd8b0ed9dbdb80', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?width=640&crop=smart&auto=webp&s=8d2fc7adc8037ef1fde0cca696bd8b8672b2727c', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?width=960&crop=smart&auto=webp&s=770ca9633cdb4b14c74f031a56c0a922c1b8858b', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?width=1080&crop=smart&auto=webp&s=dc57a48c2e88242c9449a18bf3333469c5ef8f23', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/3pcvjp14vite1.jpeg?auto=webp&s=c3254cac35ee659b06e9a0fba4978c85e7a8ac5e', 'width': 1200}, 'variants': {}}]}
|
|||
Meta submitted customized llama4 to lmarena without providing clarification beforehand
| 361 |
> Meta should have made it clearer that “Llama-4-Maverick-03-26-Experimental” was a customized model to optimize for human preference
https://x.com/lmarena_ai/status/1909397817434816562
| 2025-04-08T02:34:03 |
AaronFeng47
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju37gh
| false | null |
t3_1ju37gh
|
/r/LocalLLaMA/comments/1ju37gh/meta_submitted_customized_llama4_to_lmarena/
| false | false | 361 |
{'enabled': True, 'images': [{'id': 'USRAn_gVSFnsR234U-dIsX_YY49CLL4oLye4rsYqz_I', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?width=108&crop=smart&auto=webp&s=82b39d941886678eb3d96d489fa7760dd2c7bc1a', 'width': 108}, {'height': 385, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?width=216&crop=smart&auto=webp&s=e0ce5921fc095e4a53eb8464ce2ce576afd2a854', 'width': 216}, {'height': 571, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?width=320&crop=smart&auto=webp&s=fbf9dd088653cf0493135f6c7037096a51595a76', 'width': 320}, {'height': 1142, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?width=640&crop=smart&auto=webp&s=e3b61a0b1cca0493b9eb3ac029029dcf56706a46', 'width': 640}, {'height': 1713, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?width=960&crop=smart&auto=webp&s=26643ed4c7c52163870ae68481c2c9f6b153c593', 'width': 960}, {'height': 1928, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?width=1080&crop=smart&auto=webp&s=23341a08be86095de765e139fb998392f67179db', 'width': 1080}], 'source': {'height': 1928, 'url': 'https://preview.redd.it/cl1e4af7wite1.png?auto=webp&s=066a5e68abff5b7581412ab969a7fdaa64743f9f', 'width': 1080}, 'variants': {}}]}
|
||
AI Appears to Impersonate Me on Cursor Then Lies - Claude-3.7-Sonnet
| 1 |
[removed]
| 2025-04-08T02:42:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju3dif/ai_appears_to_impersonate_me_on_cursor_then_lies/
|
Safe_Cucumber_6695
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju3dif
| false | null |
t3_1ju3dif
|
/r/LocalLLaMA/comments/1ju3dif/ai_appears_to_impersonate_me_on_cursor_then_lies/
| false | false |
self
| 1 | null |
Llama 4 Computer Use Agent
| 195 |
I experimented with a computer use agent powered by Meta Llama 4 Maverick and it performed better than expected (given the recent feedback on Llama 4 😬) - in my testing it could browse the web archive, compress an image and solve a grammar quiz. And it's certainly much cheaper than other computer use agents.
Check out interaction trajectories here: https://llama4.pages.dev/
Please star it if you find it interesting :D
| 2025-04-08T02:43:23 |
https://github.com/TheoLeeCJ/llama4-computer-use
|
unforseen-anomalies
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju3dtg
| false | null |
t3_1ju3dtg
|
/r/LocalLLaMA/comments/1ju3dtg/llama_4_computer_use_agent/
| false | false | 195 |
{'enabled': False, 'images': [{'id': 'iyfkPHwJB4YIdhHl2mtjabF4_Krfoih3seg7gb5e0Wg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?width=108&crop=smart&auto=webp&s=3ac040efbf57627b08f95679e072cfabe39dbb82', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?width=216&crop=smart&auto=webp&s=794d3d35acbd5e6ca0b208ff5ceedb8136449d75', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?width=320&crop=smart&auto=webp&s=1252b003246e8b2bc6678bc35ebc7b732b4722b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?width=640&crop=smart&auto=webp&s=069a1c0127a866a1240e9a0eab175a0db2c1edd9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?width=960&crop=smart&auto=webp&s=8bd41d36906079300aef6f7f122c5bd575517c2b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?width=1080&crop=smart&auto=webp&s=36fd4482c5b9ee05d3b51e0efe813fc1271afd0c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mInt-jX9Z334TG_hOLgbkFELfN5NFbX9_ugIlxIbW_Q.jpg?auto=webp&s=2c39e56d72ca4b7b023bbf30a0a3434689fef0d0', 'width': 1200}, 'variants': {}}]}
|
|
What do you think?
| 1 | 2025-04-08T02:51:35 |
01xKeven
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju3jf1
| false | null |
t3_1ju3jf1
|
/r/LocalLLaMA/comments/1ju3jf1/what_do_you_think/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'NR9-OYMr81m6fEXFADH-dNmuEOIZ64nGeO2fXmmoZyo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/pngp62q9zite1.png?width=108&crop=smart&auto=webp&s=98ea599c7a65803260bc637b276f9087b06697ab', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/pngp62q9zite1.png?width=216&crop=smart&auto=webp&s=7e7ee0f0476f702cf783758d1b6db46fdc816651', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/pngp62q9zite1.png?width=320&crop=smart&auto=webp&s=383b68cc4a8fdc5db09c9eaadf2a7ab8f35c32e3', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/pngp62q9zite1.png?width=640&crop=smart&auto=webp&s=2f8634da53abcd7fb3686eb88634e987fc2f6311', 'width': 640}], 'source': {'height': 680, 'url': 'https://preview.redd.it/pngp62q9zite1.png?auto=webp&s=4c1865b3a1d4eb840b1393d5f8ab41c22bf14b9d', 'width': 680}, 'variants': {}}]}
|
|||
Weird new livebench.ai coding scores
| 31 |
It uses to align with aider's leaderboard relatively well, but these new scores just did not make any sense to me. Sonnet 3.7 Thinking cannot be worse than R1 Distilled models, for example.
| 2025-04-08T03:00:58 |
https://www.reddit.com/gallery/1ju3pjb
|
SandboChang
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju3pjb
| false | null |
t3_1ju3pjb
|
/r/LocalLLaMA/comments/1ju3pjb/weird_new_livebenchai_coding_scores/
| false | false | 31 | null |
|
Ollama just dropped some new models from Cogito and they appear to be trying to steal your girl with their benchmark claims.
| 1 |
[removed]
| 2025-04-08T03:07:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju3tzm/ollama_just_dropped_some_new_models_from_cogito/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju3tzm
| false | null |
t3_1ju3tzm
|
/r/LocalLLaMA/comments/1ju3tzm/ollama_just_dropped_some_new_models_from_cogito/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
Veiled Calla - An Uncersored 12B Model with Vision
| 8 |
Model: [https://huggingface.co/soob3123/Veiled-Calla-12B](https://huggingface.co/soob3123/Veiled-Calla-12B)
GGUF: [https://huggingface.co/soob3123/Veiled-Calla-12B-gguf](https://huggingface.co/soob3123/Veiled-Calla-12B-gguf)
Veiled Calla is built on Gemma-3-12b and focuses on creating immersive experiences where the unspoken and subtle emotional undertones drive the story forward. If you enjoy moonlit scenarios, enigmatic characters, and narratives that slowly reveal their secrets, this might be the model for you.
**What Makes Veiled Calla Special:**
* **Atmospheric Depth**: Creates rich, emotionally nuanced scenarios
* **Character Consistency**: Maintains personality traits throughout extended interactions
* **Narrative Mystery**: Develops storylines that unfold with natural revelations
* **Emotional Nuance**: Excels at conveying the unspoken meanings between characters
**Where It Works Best:**
Veiled Calla thrives in intimate, atmospheric, or introspective scenarios. It's designed for users who appreciate subtle storytelling and don't mind occasionally cryptic responses that add to the mysterious atmosphere.
**Note:**
The model is uncensored in Roleplay mode (when used with system prompts like in SillyTavern), but maintains normal safety guardrails in standard Assistant mode. For those looking for completely uncensored experiences, you might want to check out the Amoral collection, though those models lack the atmospheric specialization of Veiled Calla.
\*Repost.
| 2025-04-08T03:17:22 |
Reader3123
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju409q
| false | null |
t3_1ju409q
|
/r/LocalLLaMA/comments/1ju409q/veiled_calla_an_uncersored_12b_model_with_vision/
| false | false | 8 |
{'enabled': True, 'images': [{'id': 'sI04agpLYkCTdPs-RBt51cb4TI3-sS0puKblE_ltU_U', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/x6tr49ss2jte1.png?width=108&crop=smart&auto=webp&s=403b5fc774d319b012000a8408f2da54e73584ac', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/x6tr49ss2jte1.png?width=216&crop=smart&auto=webp&s=9e3a11b5136d52524ee3687bddf95f7864df06c2', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/x6tr49ss2jte1.png?width=320&crop=smart&auto=webp&s=81fad5de2200ddd8ae14fad9f8abf5f0c2856bb0', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/x6tr49ss2jte1.png?width=640&crop=smart&auto=webp&s=bdec3bd194ce6b0a9a322a1d3d8506ff296e58e3', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/x6tr49ss2jte1.png?auto=webp&s=b847957964909f0d76629863cbbe40b4b0c2879e', 'width': 768}, 'variants': {}}]}
|
||
Problem With Document Text Extraction Accuracy
| 1 |
[removed]
| 2025-04-08T03:26:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju46ea/problem_with_document_text_extraction_accuracy/
|
johnpetrucci03
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju46ea
| false | null |
t3_1ju46ea
|
/r/LocalLLaMA/comments/1ju46ea/problem_with_document_text_extraction_accuracy/
| false | false |
self
| 1 | null |
Check this Maverick setting out
| 6 |
I just wanted to share my experience with Llama 4 Maverick, the recent release Meta that’s bern getting a lot of criticism.
I’ve come to conclusion that there must be something wrong with their release configuration and their evaluation wasnt a lie at all. Hope it was actually true and they deploy a new model release soon.
This setting reduce the hallucinations and randomness out of Maverick making it usable to some degree. I tested it and it better than it was initially released
| 2025-04-08T03:38:59 |
GTHell
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju4e0f
| false | null |
t3_1ju4e0f
|
/r/LocalLLaMA/comments/1ju4e0f/check_this_maverick_setting_out/
| false | false | 6 |
{'enabled': True, 'images': [{'id': 'hLaxrubR4GYfeWPwnDLXyjbsncmUWjj68-54MoReEGM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?width=108&crop=smart&auto=webp&s=800a987c33a2714be2323284887465c455fbc508', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?width=216&crop=smart&auto=webp&s=240258a02d4eb7995a73204aca2d76864485d8ba', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?width=320&crop=smart&auto=webp&s=e02e824ce1d9d11f48b580401fc4e2690bb05f3b', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?width=640&crop=smart&auto=webp&s=48d5ddfdfc27fa3fa0b32214a002c415cd34412c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?width=960&crop=smart&auto=webp&s=ba1190217d2a29d71a980010c1563018086e8e5a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?width=1080&crop=smart&auto=webp&s=4dfac8b5ed0074472cbf7044a5131a454791ddfb', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://preview.redd.it/n8uwnnms7jte1.jpeg?auto=webp&s=175e9c6753390667d86e6f7dd03a813eceedcb9d', 'width': 1290}, 'variants': {}}]}
|
||
Help: Gemma 3 High CPU usage during prompt processing?
| 1 |
I am running ollama into openwebui and I am having an issue where web search causes high CPU usage in ollama. It seems prompt processing is completely CPU sided.
Openwebui is running on an external server and ollama is running on a different machine.
Other models don't have this issue. Any suggestions on how I can fix this or if anyone else is also having this issue?
| 2025-04-08T03:44:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju4h84/help_gemma_3_high_cpu_usage_during_prompt/
|
My_Unbiased_Opinion
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju4h84
| false | null |
t3_1ju4h84
|
/r/LocalLLaMA/comments/1ju4h84/help_gemma_3_high_cpu_usage_during_prompt/
| false | false |
self
| 1 | null |
1.78bit Llama 4 - Unsloth Dynamic GGUFs + Optimal Settings
| 1 |
Hey guys! Llama 4 is here & we uploaded **imatrix** Dynamic GGUF formats so you can run them locally. All GGUFs are at: [https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF)
Currently text only. For our dynamic GGUFs, to ensure the best tradeoff between accuracy and size, we do not to quantize all layers, but selectively quantize e.g. the MoE layers to lower bit, and leave attention and other layers in 4 or 6bit. Fine-tuning support coming in a few hours.
According to the official Llama-4 Github page, and other sources, use:
temperature = 0.6
top_p = 0.9
This time, **all our GGUF uploads are quantized using imatrix**, which has improved accuracy over standard quantization. We intend to improve our imatrix quants even more with benchmarks (most likely when Qwen3 gets released). Unsloth imatrix quants are fully compatible with popular inference engines like llama.cpp, Ollama, Open WebUI etc.
We utilized DeepSeek R1, V3 and other LLMs to create a large calibration dataset.
Read our guide for running Llama 4 (with correct settings etc): [https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4](https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4)
**Unsloth Dynamic Llama-4-Scout uploads with optimal configs:**
|MoE Bits|Type|Disk Size|HF Link|Accuracy|
|:-|:-|:-|:-|:-|
|1.78bit|IQ1\_S|**33.8GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_S.gguf)|Ok|
|1.93bit|IQ1\_M|**35.4B**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_M.gguf)|Fair|
|2.42-bit|IQ2\_XXS|**38.6GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf)|Better|
|2.71-bit|Q2\_K\_XL|**42.2GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-Q2_K_XL.gguf)|Suggested|
|3.5-bit|Q3\_K\_XL|**52.9GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-IQ3_K_XL)|Great|
|4.5-bit|Q4\_K\_XL|**65.6GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-IQ4_K_XL)|Best|
Let us know how it goes!
In terms of testing, unfortunately we can't make the full BF16 version (ie regardless of quantization or not) complete the Flappy Bird game nor the Heptagon test appropriately. We tried Groq, using imatrix or not, used other people's quants, and used normal Hugging Face inference, and this issue persists. We're communicating with the Llama 4 team about possible implementation issues.
| 2025-04-08T04:01:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju4s9v/178bit_llama_4_unsloth_dynamic_ggufs_optimal/
|
danielhanchen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju4s9v
| false | null |
t3_1ju4s9v
|
/r/LocalLLaMA/comments/1ju4s9v/178bit_llama_4_unsloth_dynamic_ggufs_optimal/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3AmxYHMwAdiM2hDtWVZPEWZvg2_Z102fQxSK5X1fajc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=108&crop=smart&auto=webp&s=5363f3583fcc56f3337d645fddb3611d2d133aaa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=216&crop=smart&auto=webp&s=ac145604c03b19a0eb12d64e2ff6c1cadab1c1cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=320&crop=smart&auto=webp&s=6115798948e91a5f3e0274113022d2f4b8c070eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=640&crop=smart&auto=webp&s=9c31ad71ff3ca9c18c0417c100b2367b4da777fe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=960&crop=smart&auto=webp&s=5a736feb53d8ee903bbceec7083bcc2c34a40461', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=1080&crop=smart&auto=webp&s=a3c74e8bb87b3ec05706deb1db2f762be9b45d74', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?auto=webp&s=5205f4b198f60c8d0fbd84f25d15fc8bfc427672', 'width': 1200}, 'variants': {}}]}
|
1.58bit Llama 4 - Unsloth Dynamic GGUFs
| 235 |
Hey guys! Llama 4 is here & we uploaded **imatrix** Dynamic GGUF formats so you can run them locally. All GGUFs are at: [https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF)
Currently text only. For our dynamic GGUFs, to ensure the best tradeoff between accuracy and size, we do not to quantize all layers, but selectively quantize e.g. the MoE layers to lower bit, and leave attention and other layers in 4 or 6bit. Fine-tuning support coming in a few hours.
According to the official Llama-4 Github page, and other sources, use:
temperature = 0.6
top_p = 0.9
This time, **all our GGUF uploads are quantized using imatrix**, which has improved accuracy over standard quantization. We intend to improve our imatrix quants even more with benchmarks (most likely when Qwen3 gets released). Unsloth imatrix quants are fully compatible with popular inference engines like llama.cpp, Ollama, Open WebUI etc.
We utilized DeepSeek R1, V3 and other LLMs to create a large calibration dataset.
Read our guide for running Llama 4 (with correct settings etc): [https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4](https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4)
**Unsloth Dynamic Llama-4-Scout uploads with optimal configs:**
|MoE Bits|Type|Disk Size|HF Link|Accuracy|
|:-|:-|:-|:-|:-|
|1.78bit|IQ1\_S|**33.8GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_S.gguf)|Ok|
|1.93bit|IQ1\_M|**35.4B**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_M.gguf)|Fair|
|2.42-bit|IQ2\_XXS|**38.6GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf)|Better|
|2.71-bit|Q2\_K\_XL|**42.2GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-Q2_K_XL.gguf)|Suggested|
|3.5-bit|Q3\_K\_XL|**52.9GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-IQ3_K_XL)|Great|
|4.5-bit|Q4\_K\_XL|**65.6GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-IQ4_K_XL)|Best|
Let us know how it goes!
In terms of testing, unfortunately we can't make the full BF16 version (ie regardless of quantization or not) complete the Flappy Bird game nor the Heptagon test appropriately. We tried Groq, using imatrix or not, used other people's quants, and used normal Hugging Face inference, and this issue persists.
| 2025-04-08T04:10:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju4xjl/158bit_llama_4_unsloth_dynamic_ggufs/
|
danielhanchen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju4xjl
| false | null |
t3_1ju4xjl
|
/r/LocalLLaMA/comments/1ju4xjl/158bit_llama_4_unsloth_dynamic_ggufs/
| false | false |
self
| 235 |
{'enabled': False, 'images': [{'id': '3AmxYHMwAdiM2hDtWVZPEWZvg2_Z102fQxSK5X1fajc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=108&crop=smart&auto=webp&s=5363f3583fcc56f3337d645fddb3611d2d133aaa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=216&crop=smart&auto=webp&s=ac145604c03b19a0eb12d64e2ff6c1cadab1c1cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=320&crop=smart&auto=webp&s=6115798948e91a5f3e0274113022d2f4b8c070eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=640&crop=smart&auto=webp&s=9c31ad71ff3ca9c18c0417c100b2367b4da777fe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=960&crop=smart&auto=webp&s=5a736feb53d8ee903bbceec7083bcc2c34a40461', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=1080&crop=smart&auto=webp&s=a3c74e8bb87b3ec05706deb1db2f762be9b45d74', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?auto=webp&s=5205f4b198f60c8d0fbd84f25d15fc8bfc427672', 'width': 1200}, 'variants': {}}]}
|
Very simple multi-MCP agent in Python
| 1 |
[removed]
| 2025-04-08T04:24:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju55tl/very_simple_multimcp_agent_in_python/
|
sunpazed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju55tl
| false | null |
t3_1ju55tl
|
/r/LocalLLaMA/comments/1ju55tl/very_simple_multimcp_agent_in_python/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9-yUZsQBUaUgFTP3IB_TOW5aacEQL7QpD7V4jW1bHX4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?width=108&crop=smart&auto=webp&s=506641f8358dc38ef6c9eecd94be95bc86baf4e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?width=216&crop=smart&auto=webp&s=9eb2549327bc590b321361d701c97d7b73c399ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?width=320&crop=smart&auto=webp&s=c36917737e7b6e5a39f5ccef9681788717888391', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?width=640&crop=smart&auto=webp&s=03235eb5d5a05003ed84980d63de879305b4dbb3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?width=960&crop=smart&auto=webp&s=17e4c4cb47f0352009e0bee097be0b9932ebb94f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?width=1080&crop=smart&auto=webp&s=8df63884fa13df436cf13e984758b19715d50b6b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AQfPIezWCF79gq1tza7OI5uYLkVx51hSLJVQ1I6OCk0.jpg?auto=webp&s=519a80fe7ab333db16462b54aea17be267369851', 'width': 1200}, 'variants': {}}]}
|
ollama supports gemma 3 long context with single 3090
| 1 |
From my previous post, u/throwaway-link reminded me that ollama supports interleaved sliding window attention (iSWA)
[https://www.reddit.com/r/LocalLLaMA/comments/1jta5vj/comment/mlw8wtu/?context=3](https://www.reddit.com/r/LocalLLaMA/comments/1jta5vj/comment/mlw8wtu/?context=3)
I checked ollama's source code. While it uses llama.cpp as the inference engine, it has code specifically support iSWA for gemma 3.
Since ollama's gemma3:27b is only 17GB and iSWA fp8 KV cache is only 5.2GB at 128k context. This means that ollama can run gemma 3 27b at 128k with single 3090. In practice, I find that 20.5GB is used for 64k context and 18GB for 128k. By comparing the results, I like the 64k one better.
With this support, gemma 3 is now the king for 128k context for a single 3090.
| 2025-04-08T04:26:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju56y4/ollama_supports_gemma_3_long_context_with_single/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju56y4
| false | null |
t3_1ju56y4
|
/r/LocalLLaMA/comments/1ju56y4/ollama_supports_gemma_3_long_context_with_single/
| false | false |
self
| 1 | null |
My AI future vision
| 0 |
Hello community! I think the future can be both tough and exciting and the same.
First, I don’t think that computer science will be dead because of these “vibe coding weirdos,” as some prominent AI CEO said. But still, no matter what you do, this is a fascinating niche of science! And AI can also be used to help make it stronger, especially in terms of knowledge!
Speaking of knowledge, I think AI must be used in education, but not in the way you think! There’s an educational platform called “Khan Academy” where you have videos and a lot of tests. Also, “Brilliant” is a good one too (don’t you dare say anything about them badly)! So, AI can help create tests or some kind of animations (not 1blue3brown-like animations, something more profound, I think).
In terms of my own life, first, I’m an AI developer, so I care about AI seriously. However, I think we must:
1. Create great world models in all sizes and both MoE and dense versions to fit everyone’s needs.
2. Create tools! Literally! We need more support for devices like Apple Silicon and better libraries to work with these models in various ways (training, merging, analyzing, blowing them up, etc.).
3. Do not integrate AI. Please don’t. Have you seen AI everywhere lately? That’s weird! Yes, please build more Transformers models, but I do not need AI in my fucking toilet and toothbrush (unless it’s some real heath stuff only, etc.)
4. Please make better autoregressive architectures! Personally, I’m a massive hater of diffusion architecture (don’t ask why)! So, I think we must create more autoregressive models for all kinds of things! Also, we need to create neutral networks that can produce something with as little training data as possible (just like the VITS model I’m currently working on)
Lastly, I don’t really care if we reach AGI/ASI or not (I hope if it will, then open-source will do it first), but as an AI developer and just a nice guy, I will not allow my AI model to do things by herself in a non-human way! That’s it! Also, we really don’t need home-living humanoids (but that’s for another post when my research paper comes out).
Thanks! Feel free to share your thoughts!
| 2025-04-08T04:28:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju58ia/my_ai_future_vision/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju58ia
| false | null |
t3_1ju58ia
|
/r/LocalLLaMA/comments/1ju58ia/my_ai_future_vision/
| false | false |
self
| 0 | null |
noob question on MoE
| 0 |
The way I understand MoE is that it's basically an llm consisting of multiple llms. Each llm is then an "expert" on a specific field and depending on the prompt one or the other llm is ultimately used.
My first question would be if my intuition is correct?
Then the followup question would be: if this is the case, doesn't it mean we can run these llms on multiple devices that even may be connected over a slow link like i.e. ethernet?
| 2025-04-08T04:30:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju596k/noob_question_on_moe/
|
WeakYou654
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju596k
| false | null |
t3_1ju596k
|
/r/LocalLLaMA/comments/1ju596k/noob_question_on_moe/
| false | false |
self
| 0 | null |
lmarena.ai confirms that meta cheated
| 293 |
They provided a model that is optimized for human preferences, which is different then other hosted models. :(
[https://x.com/lmarena\_ai/status/1909397817434816562](https://x.com/lmarena_ai/status/1909397817434816562)
| 2025-04-08T04:32:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju5aux/lmarenaai_confirms_that_meta_cheated/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju5aux
| false | null |
t3_1ju5aux
|
/r/LocalLLaMA/comments/1ju5aux/lmarenaai_confirms_that_meta_cheated/
| false | false |
self
| 293 |
{'enabled': False, 'images': [{'id': 'SzmCRc61cd7A_UC2R-l7Js_FR4D-3nFHnRA-AwnI9f0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Sg2foySeNKtSftpVRS-DvTZSfTyNGrVywN1v0vjvasA.jpg?width=108&crop=smart&auto=webp&s=c0bc2190330f7558e229144dd8c588556bdeaf22', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Sg2foySeNKtSftpVRS-DvTZSfTyNGrVywN1v0vjvasA.jpg?auto=webp&s=ea9a4e383e164ae77ddfb735f331a68f30742146', 'width': 200}, 'variants': {}}]}
|
Anyone uses and GPUs for llama
| 0 |
Anyone uses 7900xt/xtx how do they perform
| 2025-04-08T04:46:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju5ir7/anyone_uses_and_gpus_for_llama/
|
color_me_surprised24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju5ir7
| false | null |
t3_1ju5ir7
|
/r/LocalLLaMA/comments/1ju5ir7/anyone_uses_and_gpus_for_llama/
| false | false |
self
| 0 | null |
Visualizing 4 Language Models Competing in LM Arena
| 3 | 2025-04-08T05:09:27 |
https://youtu.be/uFpK_r-jEXg
|
rzvzn
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju5vt9
| false |
{'oembed': {'author_name': 'The Onion', 'author_url': 'https://www.youtube.com/@TheOnion', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/uFpK_r-jEXg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="New Live Poll Lets Pundits Pander To Viewers In Real Time"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/uFpK_r-jEXg/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'New Live Poll Lets Pundits Pander To Viewers In Real Time', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
|
t3_1ju5vt9
|
/r/LocalLLaMA/comments/1ju5vt9/visualizing_4_language_models_competing_in_lm/
| true | false |
spoiler
| 3 |
{'enabled': False, 'images': [{'id': '1_NLN2qroyI8vio1uL6rxkA-SLN7yVWKXf_2ttOys9s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?width=108&crop=smart&auto=webp&s=e0d2bda61ded2ecf15beda8e011bafafede68053', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?width=216&crop=smart&auto=webp&s=35b06ff58b442632d7fd822d3ccd8bc7bf60d7cf', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?width=320&crop=smart&auto=webp&s=98d00b02559dd675c0c38b1edb0521e1ba87e8ab', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?auto=webp&s=da57b64449ff77776060ac078a9746aceb2aa80b', 'width': 480}, 'variants': {'obfuscated': {'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=47e423fc40b29e698601cb43f01263ab8162f88d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6a4d466047e20e906b8272a62f9f8bf637ae1900', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=335a57d1148285ee4f720840f183e0b20495c047', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aPxkYdPdmv3720Ausbom70S-nrE8GbnlpiXy_vQ6uRc.jpg?blur=40&format=pjpg&auto=webp&s=5783af9f0a66f6f05bf5e9a956f6d43cabf837e0', 'width': 480}}}}]}
|
|
Just a reminder to the OS LLM community that Zuckerberg is a thin skinned LIAR and a FREAK.
| 1 |
[removed]
| 2025-04-08T05:21:57 |
https://www.youtube.com/shorts/DOPl73LzBFU
|
Odd-Environment-7193
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju62mc
| false | null |
t3_1ju62mc
|
/r/LocalLLaMA/comments/1ju62mc/just_a_reminder_to_the_os_llm_community_that/
| false | false |
default
| 1 | null |
April prediction
| 0 | 2025-04-08T05:32:46 |
cobalt1137
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju68li
| false | null |
t3_1ju68li
|
/r/LocalLLaMA/comments/1ju68li/april_prediction/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'k5dvOUSlc1f4B7W7mMkCeTLz9-uAhFF_Wl_nkTwayWs', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?width=108&crop=smart&auto=webp&s=c70e6ff4820f593cf6a69d7b5a3bc3c2a3a94c57', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?width=216&crop=smart&auto=webp&s=ab38202936c9d094720d004d60f6a3587748954e', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?width=320&crop=smart&auto=webp&s=59f8084678b2497cff1802e8dc5a253e9d503677', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?width=640&crop=smart&auto=webp&s=ae5167f3ae6582adf6812e9213c7a6fbcdc1c23c', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?width=960&crop=smart&auto=webp&s=b43ea47cdb8cd9715813bb9ef6b28badcbf9784d', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?width=1080&crop=smart&auto=webp&s=c07426052de0350599a73b02b68b2d6c16e1cbcc', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/922azsd2sjte1.jpeg?auto=webp&s=6789b9a7cc8bdd305b9478d175210e49842519ed', 'width': 1536}, 'variants': {}}]}
|
|||
I've always wished for a companion who could help me and work with me. Now when i have a ai I'm still struggling financially, with $0 earned in the last 1.5 years despite being in the AI field, I feel like nothing has changed in my life.
| 0 |
what i leaned that earning money is not easy
| 2025-04-08T05:32:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju68nk/ive_always_wished_for_a_companion_who_could_help/
|
Select_Dream634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju68nk
| false | null |
t3_1ju68nk
|
/r/LocalLLaMA/comments/1ju68nk/ive_always_wished_for_a_companion_who_could_help/
| false | false |
self
| 0 | null |
Model Context Protocol tutorials
| 0 | 2025-04-08T05:43:32 |
https://youtube.com/playlist?list=PLnH2pfPCPZsJ5aJaHdTW7to2tZkYtzIwp&si=XHHPdC6UCCsoCSBZ
|
mehul_gupta1997
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju6ecw
| false | null |
t3_1ju6ecw
|
/r/LocalLLaMA/comments/1ju6ecw/model_context_protocol_tutorials/
| false | false | 0 |
{'enabled': False, 'images': [{'id': '1K671t23TaL5ySHZIDp_105D4uz5NTz79B9tLLO1CNk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a-oi1qGdgfn1UMgquethnxyULGn_EK_ucllZ7yxNT5c.jpg?width=108&crop=smart&auto=webp&s=ed2a6b0001cac96927189a23af1f72974ea20143', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a-oi1qGdgfn1UMgquethnxyULGn_EK_ucllZ7yxNT5c.jpg?width=216&crop=smart&auto=webp&s=04d655976b874bc5b06429d88ba1bfa2ddbc4078', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a-oi1qGdgfn1UMgquethnxyULGn_EK_ucllZ7yxNT5c.jpg?width=320&crop=smart&auto=webp&s=48af11fe643ab2d4341eccf772a2178c23303905', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/a-oi1qGdgfn1UMgquethnxyULGn_EK_ucllZ7yxNT5c.jpg?auto=webp&s=9a6f5a861dcda26890cb5d04cb89f7cb6cbfc98f', 'width': 480}, 'variants': {}}]}
|
||
MATH-Perturb: Benchmarking LLMs' Math Reasoning Abilities
against Hard Perturbations
| 32 |
[https://math-perturb.github.io/](https://math-perturb.github.io/)
TLDR by QwQ:
>The study investigates whether large language models' success on complex math problems stems from true reasoning or memorization by creating two datasets, MATH-P-Simple and MATH-P-Hard, each with 279 modified problems from the MATH dataset's hardest level. MATH-P-Simple includes minor, non-essential changes that preserve the original solution method, while MATH-P-Hard involves fundamental alterations requiring new strategies and deeper understanding. Models showed significant performance drops on MATH-P-Hard, suggesting reliance on memorized methods. The authors highlight a concerning "blind memorization" issue where models apply learned techniques without assessing their relevance to modified contexts, especially when trained with original problems. This underscores the need for research to develop more adaptable and robust reasoning models.
Leaderboard
https://preview.redd.it/oa3hc69dsjte1.png?width=1194&format=png&auto=webp&s=78653cfb0648bccae51b79d790c4cb8da943562d
# Observation:
1. Reasoning models, even small models without RL like R1-14B, performs very well compare to base models.
2. LLama4 flopped extra hard, 87 -> 46, even when compare to other small base models like gemini2-flash, it's still really bad
3. Gmini reasoning models are less resistant to perturbations compare to QwQ, R1 and O3-mini
https://preview.redd.it/uroiwqp6ujte1.png?width=1426&format=png&auto=webp&s=2283a1161e3581dd0d0ae272cb9dc328a9eeae4e
| 2025-04-08T05:45:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju6fa1/mathperturb_benchmarking_llms_math_reasoning/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju6fa1
| false | null |
t3_1ju6fa1
|
/r/LocalLLaMA/comments/1ju6fa1/mathperturb_benchmarking_llms_math_reasoning/
| false | false | 32 | null |
|
nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 · Hugging Face
| 120 |
Reasoning model derived from Llama 3 405B, 128k context length. Llama-3 license. See model card for more info.
| 2025-04-08T06:10:27 |
https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
|
rerri
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju6sm1
| false | null |
t3_1ju6sm1
|
/r/LocalLLaMA/comments/1ju6sm1/nvidiallama3_1nemotronultra253bv1_hugging_face/
| false | false | 120 |
{'enabled': False, 'images': [{'id': '4d_EqWeb9XrgZkpyIB24ru0lFUJc7JfbmpKJDisM_fM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?width=108&crop=smart&auto=webp&s=17138142b3da3e22820b411e2d471cd4a935ce8c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?width=216&crop=smart&auto=webp&s=6ae8cf20fd432e3fff577a1a92d6cf1b17d52ac6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?width=320&crop=smart&auto=webp&s=9668cc3e66e7c76c67b05c989ef1d12dc53f1c56', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?width=640&crop=smart&auto=webp&s=e441532f8217ac64c401c9352ae767ec98103b56', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?width=960&crop=smart&auto=webp&s=348741ecec2b581e6b57ec03489dc708a1e75a3c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?width=1080&crop=smart&auto=webp&s=59a4e59f3b45c4394d5c962016610e55f863a1d2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3UxngnIkjlXfR7MJ8ohQbkyRtFJzuTypVV_aoc8_Tmk.jpg?auto=webp&s=b28c842b02fd4c25072c5bc8613d58b8dafcae66', 'width': 1200}, 'variants': {}}]}
|
|
Advice for used GPU purchase 04/2025
| 0 |
Hi everyone,
I’m considering experimenting (again) with LLaMA models and chatbots. My previous tests were done some time ago using a Tesla M40 with 24GB of VRAM.
Now, I’m thinking about upgrading my GPU, as the current one is already in use for a VGPU setup. I’m torn between going for a 48GB card or sticking with a 24GB card.
I’m looking at options like the NVIDIA RTX A5000, Quadro RTX 8000, or possibly even the NVIDIA A16. Could anyone share their thoughts on which option would be the best for my needs? Alternatively, would it make more sense to go with two 24GB cards, which could be more cost-effective? I’m also open to using a gaming GPU if that’s a viable option.
Looking forward to your advice!
| 2025-04-08T06:33:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju74sr/advice_for_used_gpu_purchase_042025/
|
MageLD
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju74sr
| false | null |
t3_1ju74sr
|
/r/LocalLLaMA/comments/1ju74sr/advice_for_used_gpu_purchase_042025/
| false | false |
self
| 0 | null |
AWQ Lora fine-tuning
| 4 |
Hi all. Has some one successful ft an AWQ models (lora) before? The motivation for this is the massive difference in throughput between bnb & awq models (488 t/s vs 1387t/s) for Qwen 7b.
For 30 concurrent request. For each users, we are looking at 37t/s vs 100+ t/s a massive difference in speed.
| 2025-04-08T06:39:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju77vo/awq_lora_finetuning/
|
bihungba1101
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju77vo
| false | null |
t3_1ju77vo
|
/r/LocalLLaMA/comments/1ju77vo/awq_lora_finetuning/
| false | false |
self
| 4 | null |
Need help in building PC for running LLMs (14B-24B)
| 1 |
[removed]
| 2025-04-08T06:45:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju7atq/need_help_in_building_pc_for_running_llms_14b24b/
|
GeminiGPT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju7atq
| false | null |
t3_1ju7atq
|
/r/LocalLLaMA/comments/1ju7atq/need_help_in_building_pc_for_running_llms_14b24b/
| false | false |
self
| 1 | null |
Exceeding VRAM limit with QWQ IQ3XXS i1 quant, no OOM? (LM studio)
| 1 |
So I wanted to fit a qwq model with enough context to fully fit within my GPU (16gb)
And I checked using a vram estimator that an IQ3XS (not XXS) can fit 16gb exactly in its 12k context.
I try setting to 12k- it seems to be my max too. Fully offloaded to GPU, ~ 29 tok/s
(On nvidia-smi it says 15.9/16gb used)
Then I was curious and enabled flash attention bumped up to 16000k context- still no OOM, but still shows 15.9/16gb used?)then I tried 20k context, and still it says 15.9/16gb used. It’s still running 28 tok/s per second. What is happening?
| 2025-04-08T06:53:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju7elc/exceeding_vram_limit_with_qwq_iq3xxs_i1_quant_no/
|
No_Expert1801
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju7elc
| false | null |
t3_1ju7elc
|
/r/LocalLLaMA/comments/1ju7elc/exceeding_vram_limit_with_qwq_iq3xxs_i1_quant_no/
| false | false |
self
| 1 | null |
🕯️ Candle Test Arena: A Tool for Evaluating LLM Reasoning (Now on Hugging Face!)
| 8 |
Hi r/LocalLLaMA community!
A few days ago, u/Everlier introduced us to the [Candle Test](https://www.reddit.com/r/LocalLLaMA/comments/1jpr1nk/the_candle_test_most_llms_fail_to_generalise_at/), which revealed how LLMs can struggle with maintaining context while avoiding overfitting. Inspired by this test, I've created an interactive tool to make it easier to evaluate different models.
## 🔍 What is the Candle Test Arena?
It's a Streamlit application that lets you:
- Run the candle test on any OpenAI-compatible model
- Compare results across different models
- Analyze responses in both natural language and structured JSON formats
- Track and export test results
## 🚀 Try it out!
You can now run the test directly on [Hugging Face Spaces](https://huggingface.co/spaces/k-mktr/candle-test-arena)
## 💡 Why This Matters
The test reveals something interesting about LLMs:
1. They can correctly understand facts (candles get shorter when burning).
2. They can hold this information in context.
3. But many still fail to avoid overfitting when presented with a seemingly related riddle.
This helps us understand how models handle context and reasoning in practice.
## 🛠️ Features
- Test any OpenAI-compatible model
- Choose between natural language or structured JSON responses
- View detailed results and comparisons
- Export data for further analysis
- Cloud-synchronized results storage
## 🙏 Credits
Huge thanks to u/Everlier for the original test concept! This tool is just a way to make it easier to run and analyze the test across different models.
Would love to hear your feedback and see how different models perform. What interesting patterns have you noticed in your testing?
---
*Note: You'll need an API key (OpenRouter or similar) to run the tests. The app supports any OpenAI-compatible endpoint.*
| 2025-04-08T06:57:52 |
kastmada
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju7gup
| false | null |
t3_1ju7gup
|
/r/LocalLLaMA/comments/1ju7gup/candle_test_arena_a_tool_for_evaluating_llm/
| false | false | 8 |
{'enabled': True, 'images': [{'id': 'cbCLcIAwHVCcQXQZYqis7FgjSbUDsmjcLXBoWzCJqfk', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?width=108&crop=smart&auto=webp&s=e5fad6649b1d2e829b83f413618d847558c535a9', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?width=216&crop=smart&auto=webp&s=352ca60ba0f0a0fe6465b697727e4c9e0f506443', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?width=320&crop=smart&auto=webp&s=935eb9e917837d603377452287ea9d442abf3887', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?width=640&crop=smart&auto=webp&s=db4b3183b1075f487f888a2dc704fb844b1876ad', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?width=960&crop=smart&auto=webp&s=245eed09f5fa39e172a1e7d5e4fc7dd572aa72ee', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?width=1080&crop=smart&auto=webp&s=abf912a05e26b3a0ab029dae25d50a70b8c85907', 'width': 1080}], 'source': {'height': 816, 'url': 'https://preview.redd.it/sjqbt4t47kte1.png?auto=webp&s=a044e778fb78b57675bfcdeb6206539795807fea', 'width': 1088}, 'variants': {}}]}
|
||
Llama-3_1-Nemotron-Ultra-253B-v1 benchmarks. Better than R1 at under half the size?
| 199 | 2025-04-08T07:18:47 |
tengo_harambe
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju7r63
| false | null |
t3_1ju7r63
|
/r/LocalLLaMA/comments/1ju7r63/llama3_1nemotronultra253bv1_benchmarks_better/
| false | false | 199 |
{'enabled': True, 'images': [{'id': 'NRZy6Fuma2i2f73QMEo_-X9Bcj2q5atvOl43v9cIJaU', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/clznuueqakte1.png?width=108&crop=smart&auto=webp&s=baf372a5742c15dfc8cbef4edab751642965ac96', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/clznuueqakte1.png?width=216&crop=smart&auto=webp&s=31c70dc668713f087b827a0ccc5db08d4bdc3ebc', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/clznuueqakte1.png?width=320&crop=smart&auto=webp&s=c7ae7aaf718320aeb51623221ab9d6b64efefb53', 'width': 320}, {'height': 353, 'url': 'https://preview.redd.it/clznuueqakte1.png?width=640&crop=smart&auto=webp&s=a2facfbcd182f06991c15e3a654ff2bebadec08e', 'width': 640}, {'height': 530, 'url': 'https://preview.redd.it/clznuueqakte1.png?width=960&crop=smart&auto=webp&s=c9f11dd6b1b06cfbd13e84f334953c913956d216', 'width': 960}, {'height': 597, 'url': 'https://preview.redd.it/clznuueqakte1.png?width=1080&crop=smart&auto=webp&s=80ec8ff7497d2da27d75e45813c45de2316ca42e', 'width': 1080}], 'source': {'height': 1387, 'url': 'https://preview.redd.it/clznuueqakte1.png?auto=webp&s=a7f417506fcd5a08042d26e888e67955f6a9b72e', 'width': 2508}, 'variants': {}}]}
|
|||
Building a Conversational Chatbot with Memory: CrewAI + Ollama LLaMA 8B + ChromaDB + Mem0
| 1 |
[removed]
| 2025-04-08T07:28:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju7vic/building_a_conversational_chatbot_with_memory/
|
Mission-Valuable-23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju7vic
| false | null |
t3_1ju7vic
|
/r/LocalLLaMA/comments/1ju7vic/building_a_conversational_chatbot_with_memory/
| false | false |
self
| 1 | null |
Dev-friendly fine-tuning: How to get better results from local LLMs (no deep ML needed)
| 7 |
I’m a full stack dev working with LLMs daily - and while prompt engineering helps, it only goes so far.
I’ve been exploring fine-tuning methods that *don’t require being an ML PhD*, especially for local models (Mistral, LLaMA, etc).
Hosting a webinar with other devs to walk through our stack, tools (LoRA, QLoRA), and how we’re getting better outputs in prod.
Thought some folks here might find it helpful - DM if you’re interested or drop questions below!
| 2025-04-08T08:05:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju8cn0/devfriendly_finetuning_how_to_get_better_results/
|
soman_yadav
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju8cn0
| false | null |
t3_1ju8cn0
|
/r/LocalLLaMA/comments/1ju8cn0/devfriendly_finetuning_how_to_get_better_results/
| false | false |
self
| 7 | null |
🚨 Fresh Benchmarks for LLaMA 4 Scout & Maverick — via Lighteval! [OpenEvals from HuggingFace]
| 1 |
Hey folks! Just ran evaluations on the new **LLaMA 4 Scout 17B** and **Maverick 17B (FP8)** using [**Lighteval**](https://github.com/LeandroVon/Lighteval) on AIME24/25, GPQA and IFEval.
🧪 Full breakdown & samples here:
🔗 [https://huggingface.co/spaces/SaylorTwift/OpenEvalsDetails](https://huggingface.co/spaces/SaylorTwift/OpenEvalsDetails)
Quick summary of the results:
# 🧠 GPQA – Graduate-Level Reasoning
* **LLaMA 4 Maverick**: 70%
* **Scout**: 56%
* **DeepSeek V3 (671B)** still leads at 73%, but Maverick puts up a solid number considering it's only 17B active parameters.
# 🧮 AIME 2024/2025 – High School Math
* **Maverick**: 43% (2024), 23% (2025)
* **Scout**: 23% (2024), 10% (2025)
* **DeepSeek** hits 53% on AIME 2024, and **Gemma 3 27B** pulls 20% on AIME 2025.
These tasks are tough — they test multi-step symbolic reasoning. LLaMA 4 clearly struggles here, though more fine-tuning could make a big difference. Also, AIME 2024 suffers from contamination; AIME 2025 gives a cleaner signal here.
# 📋 IFEval – Instruction Following
* **Maverick**: 86%
* **Scout**: 84%
Solid performance here! Not quite surpassing LLaMA 3.3 70B (90%).
All models were evaluated with **Lighteval** 🧪 using the **vLLM** backend. Reproducible, open-source, and fast.
Let me know what you'd like to see benchmarked next :)
https://preview.redd.it/5hwprbjxskte1.png?width=2026&format=png&auto=webp&s=8f60497cc6a28f30ce40104ae9bcaed68b7c9600
| 2025-04-08T09:01:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju92vy/fresh_benchmarks_for_llama_4_scout_maverick_via/
|
HauntingMoment
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju92vy
| false | null |
t3_1ju92vy
|
/r/LocalLLaMA/comments/1ju92vy/fresh_benchmarks_for_llama_4_scout_maverick_via/
| false | false | 1 | null |
|
🚨 Fresh Benchmarks for LLaMA 4 Scout & Maverick — via Lighteval! [OpenEvals from HuggingFace]
| 1 |
[removed]
| 2025-04-08T09:07:37 |
HauntingMoment
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju95sb
| false | null |
t3_1ju95sb
|
/r/LocalLLaMA/comments/1ju95sb/fresh_benchmarks_for_llama_4_scout_maverick_via/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'CZVjZFZ0b5ge25ut_Jka2C0j2ZdrqIpV85dnPpZWkgE', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?width=108&crop=smart&auto=webp&s=85ef1557d06c47bfb94c53351e827e43da37e33f', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?width=216&crop=smart&auto=webp&s=85e96847fe5eaccd754a262a08309214d55e78e6', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?width=320&crop=smart&auto=webp&s=577c0d9e2d08c84467e91f2f4f4b4eacd8bc874c', 'width': 320}, {'height': 265, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?width=640&crop=smart&auto=webp&s=7049f7b6bc15da37bcf46d32772143390cb14bbc', 'width': 640}, {'height': 398, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?width=960&crop=smart&auto=webp&s=28ba06d9052aa47666c226b405109b7328fc9395', 'width': 960}, {'height': 448, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?width=1080&crop=smart&auto=webp&s=cf24404de3a1c4e288d28de3054451d90a75efe5', 'width': 1080}], 'source': {'height': 842, 'url': 'https://preview.redd.it/9hj01sgaukte1.png?auto=webp&s=269ed1a916a0b2e355cb07298f3f097c66332a8f', 'width': 2026}, 'variants': {}}]}
|
||
Is self hosting of LLMs pointless?
| 1 |
[removed]
| 2025-04-08T09:40:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju9lgh/is_self_hosting_of_llms_pointless/
|
Passionate_PM
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju9lgh
| false | null |
t3_1ju9lgh
|
/r/LocalLLaMA/comments/1ju9lgh/is_self_hosting_of_llms_pointless/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'O6BpffuM2mLxic8DO9kwJolPK5z7pRAIimswTej9Xys', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=108&crop=smart&auto=webp&s=b4b9a6e0c78e0c1c63e0e6e5d9408b255fee3e2c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=216&crop=smart&auto=webp&s=db0b671d3034b97fd64b42e33131b9dd38e89397', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=320&crop=smart&auto=webp&s=8c5911045e50ccb3709d6dfebfbf932c9fe229f7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=640&crop=smart&auto=webp&s=8458a2169ba2ab7b8fd01f92319cc29533eccfcf', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=960&crop=smart&auto=webp&s=7b1a87b40b036fa15c331f856b0cc37ce509a4a5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=1080&crop=smart&auto=webp&s=d8a837ab0410a16e30daf0384240e5ce3819c133', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?auto=webp&s=5901b11514cacd3f74fc8ea5e36d56f0ec1a3cb9', 'width': 1200}, 'variants': {}}]}
|
Gemma 3 it is then
| 872 | 2025-04-08T09:51:07 |
freehuntx
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju9qx0
| false | null |
t3_1ju9qx0
|
/r/LocalLLaMA/comments/1ju9qx0/gemma_3_it_is_then/
| false | false | 872 |
{'enabled': True, 'images': [{'id': 'jPm_n-_NQ7xctJ6BHBHNEw8FNV1OOVq0TGT0UoHp5PY', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/zlui8az62lte1.jpeg?width=108&crop=smart&auto=webp&s=b4fb843cb313ff1ad9c6c0c57458c844b825e462', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/zlui8az62lte1.jpeg?width=216&crop=smart&auto=webp&s=79f22dd387a92f916cf68f123e11a45e3fbf43bf', 'width': 216}, {'height': 422, 'url': 'https://preview.redd.it/zlui8az62lte1.jpeg?width=320&crop=smart&auto=webp&s=1d7f93ca89e3b46d5c890f9b1d539992211155a3', 'width': 320}, {'height': 844, 'url': 'https://preview.redd.it/zlui8az62lte1.jpeg?width=640&crop=smart&auto=webp&s=73b4f479c50348dbbdb4980ac9b2e0a61172b7af', 'width': 640}, {'height': 1266, 'url': 'https://preview.redd.it/zlui8az62lte1.jpeg?width=960&crop=smart&auto=webp&s=ba525710ab0c98796d08dc72a17ca77b8af328ad', 'width': 960}], 'source': {'height': 1356, 'url': 'https://preview.redd.it/zlui8az62lte1.jpeg?auto=webp&s=59e16937dcb88b085cfef8550998af3312c35747', 'width': 1028}, 'variants': {}}]}
|
|||
The experimental version of llama4 maverick on lmstudio is also more creative in programming than the released one.
| 18 |
I compared code generated for the prompt:
>write a python program that prints an interesting landscape in ascii art in the console
"llama-4-maverick-03-26-experimental" will consistently create longer and more creative outputs than "llama-4-maverick" as released. I also noticed that longer programs are more often throwing an error in the experimental version.
I found this quite interesting - shows that the finetuning for more engaging text is also influencing the code style. The release version could need a dash more creativity in its code generation.
Example output of the experimental version:
https://preview.redd.it/clllc91c2lte1.png?width=805&format=png&auto=webp&s=cb4de48920b8e3f23c40f676ce0114bb9c782f8d
Example output of released version:
https://preview.redd.it/mhgkwbie2lte1.png?width=811&format=png&auto=webp&s=e144c67a751e6773a423638f7e29fe932ddd42d1
https://preview.redd.it/jwgzgzck2lte1.png?width=2364&format=png&auto=webp&s=4cbe936ee5c2e2b20a273bdea72a38f57ba62842
Length statistic of generated code for both models
| 2025-04-08T09:53:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju9s1c/the_experimental_version_of_llama4_maverick_on/
|
cpldcpu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju9s1c
| false | null |
t3_1ju9s1c
|
/r/LocalLLaMA/comments/1ju9s1c/the_experimental_version_of_llama4_maverick_on/
| false | false | 18 | null |
|
Is self hosting pointless? what do you think?
| 1 |
[removed]
| 2025-04-08T10:03:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1ju9xi5/is_self_hosting_pointless_what_do_you_think/
|
Passionate_PM
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ju9xi5
| false | null |
t3_1ju9xi5
|
/r/LocalLLaMA/comments/1ju9xi5/is_self_hosting_pointless_what_do_you_think/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'O6BpffuM2mLxic8DO9kwJolPK5z7pRAIimswTej9Xys', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=108&crop=smart&auto=webp&s=b4b9a6e0c78e0c1c63e0e6e5d9408b255fee3e2c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=216&crop=smart&auto=webp&s=db0b671d3034b97fd64b42e33131b9dd38e89397', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=320&crop=smart&auto=webp&s=8c5911045e50ccb3709d6dfebfbf932c9fe229f7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=640&crop=smart&auto=webp&s=8458a2169ba2ab7b8fd01f92319cc29533eccfcf', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=960&crop=smart&auto=webp&s=7b1a87b40b036fa15c331f856b0cc37ce509a4a5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?width=1080&crop=smart&auto=webp&s=d8a837ab0410a16e30daf0384240e5ce3819c133', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Lt3C7lbRFDy5915ss2hioIPmHQwnJk56h3qXLWljFuo.jpg?auto=webp&s=5901b11514cacd3f74fc8ea5e36d56f0ec1a3cb9', 'width': 1200}, 'variants': {}}]}
|
I there any deepseek RP fine-tunes?
| 1 |
[removed]
| 2025-04-08T10:11:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jua1fx/i_there_any_deepseek_rp_finetunes/
|
ConversationOld3749
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jua1fx
| false | null |
t3_1jua1fx
|
/r/LocalLLaMA/comments/1jua1fx/i_there_any_deepseek_rp_finetunes/
| false | false |
self
| 1 | null |
This Video model is like 5-8B params only? wtf
| 76 |
[https://test-time-training.github.io/video-dit/assets/ttt\_cvpr\_2025.pdf](https://test-time-training.github.io/video-dit/assets/ttt_cvpr_2025.pdf)
| 2025-04-08T10:30:00 |
https://test-time-training.github.io/video-dit/
|
AryanEmbered
|
test-time-training.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1juab77
| false | null |
t3_1juab77
|
/r/LocalLLaMA/comments/1juab77/this_video_model_is_like_58b_params_only_wtf/
| false | false |
default
| 76 | null |
Deploying Llama 4 Maverick to RunPod
| 0 |
Looking into self-hosting Llama 4 Maverick on RunPod. It's stated that it fits into a single H100 (80GB), but does that include the 10M context? Has anyone tried this setup?
It's the model I'm self-hosting, so if you guys know of better alternatives than RunPod, I'd love to hear it. I'm just looking for a model to interface from my mac. If it indeed fits the H100 and performs better than 4o, then it's a no brainer as it will be dirt cheap in comparison to OpenAI 4o API per 1M tokens.
| 2025-04-08T11:01:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1juat1x/deploying_llama_4_maverick_to_runpod/
|
adowjn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1juat1x
| false | null |
t3_1juat1x
|
/r/LocalLLaMA/comments/1juat1x/deploying_llama_4_maverick_to_runpod/
| false | false |
self
| 0 | null |
If you had to pick one open-source agent framework to build around, what would you go with?
| 1 |
[removed]
| 2025-04-08T11:14:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jub0xe/if_you_had_to_pick_one_opensource_agent_framework/
|
Loose-Ad-9956
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jub0xe
| false | null |
t3_1jub0xe
|
/r/LocalLLaMA/comments/1jub0xe/if_you_had_to_pick_one_opensource_agent_framework/
| false | false |
self
| 1 | null |
Cogito V1 preview suite of models released on Ollama. Iterated Distillation and Amplification.
| 32 |
I guess while I wait on Qwen3 I’ll go check these out. These kinda just stealth dropped last night as an official Ollama model release. Curious as to if this IDA process is anything special or just another buzzword. Benchmarks are typical “we beat the big guys” type of deal.
Anyone try these out yet?
https://ollama.com/library/cogito
| 2025-04-08T11:19:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jub3jj/cogito_v1_preview_suite_of_models_released_on/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jub3jj
| false | null |
t3_1jub3jj
|
/r/LocalLLaMA/comments/1jub3jj/cogito_v1_preview_suite_of_models_released_on/
| false | false |
self
| 32 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
If you had to pick one open-source agent framework to build around, what would you go with?
| 10 |
I’ve been messing around a lot lately with different agent frameworks AutoGen, Agno, CrewAI, CAMEL, LangGraph, you name it. It got me thinking:
If you could only pick one open-source framework to build your agent stack around long-term… what would it be?
Would love to hear what’s working for folks based on:
* ease of start with
* Having the best ecosystem, I mean support for running local models
* actual usefulness in real-world tasks (not just demos)
* how active the community/devs are
| 2025-04-08T11:22:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jub56b/if_you_had_to_pick_one_opensource_agent_framework/
|
iamnotdeadnuts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jub56b
| false | null |
t3_1jub56b
|
/r/LocalLLaMA/comments/1jub56b/if_you_had_to_pick_one_opensource_agent_framework/
| false | false |
self
| 10 | null |
Just another noob getting started with local roleplay chats. Getting a bit frustrated as there's so much varying information, please help!
| 1 |
[removed]
| 2025-04-08T11:43:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jubi01/just_another_noob_getting_started_with_local/
|
Any_Force_7865
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jubi01
| false | null |
t3_1jubi01
|
/r/LocalLLaMA/comments/1jubi01/just_another_noob_getting_started_with_local/
| false | false |
self
| 1 | null |
Ollama now supports Mistral Small 3.1 with vision
| 125 | 2025-04-08T12:18:04 |
https://ollama.com/library/mistral-small3.1:24b-instruct-2503-q4_K_M
|
markole
|
ollama.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1juc553
| false | null |
t3_1juc553
|
/r/LocalLLaMA/comments/1juc553/ollama_now_supports_mistral_small_31_with_vision/
| false | false | 125 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
||
LMArena chrome extension that automatically tracks your votes into a personal ELO leaderboard
| 6 | 2025-04-08T12:23:40 |
https://chromewebstore.google.com/detail/mylmarena/dcmbcmdhllblkndablelimnifmbpimae?authuser=0&hl=en-GB
|
alientitty
|
chromewebstore.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1juc8yi
| false | null |
t3_1juc8yi
|
/r/LocalLLaMA/comments/1juc8yi/lmarena_chrome_extension_that_automatically/
| false | false | 6 |
{'enabled': False, 'images': [{'id': 'YcjQhm6uEmQgnkyMz8EdKrcddwmcObPg1YwX0FVsK40', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ndR7yf-z-qLc9kwEwg2eDGgcSYFKDFOtQtLWOmZaYTE.jpg?width=108&crop=smart&auto=webp&s=0391bbce358ddf39a25d1af52d0ca32aaeea13b6', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/ndR7yf-z-qLc9kwEwg2eDGgcSYFKDFOtQtLWOmZaYTE.jpg?auto=webp&s=3fa51265ac887df744c342bbf8bd15660f9c3dad', 'width': 128}, 'variants': {}}]}
|
||
We Fine-Tuned a Small Vision-Language Model (Qwen 2.5 3B VL) to Convert Process Diagram Images to Knowledge Graphs
| 53 |
**TL:DR** \- We fine-tuned a vision-language model to efficiently convert **process diagrams (images)** into **structured knowledge graphs**. Our custom model outperformed the base Qwen model by **14% on node detection** and **23% on edge detection**.
We’re still in early stages and would love community feedback to improve further!
**Model repo** : [https://huggingface.co/zackriya/diagram2graph](https://huggingface.co/zackriya/diagram2graph)
**Github** : [https://github.com/Zackriya-Solutions/diagram2graph/](https://github.com/Zackriya-Solutions/diagram2graph/tree/main)
**The problem statement :** We had a large collection of **Process Diagram images** that needed to be converted into a **graph-based knowledge base for** downstream analytics and automation. The manual conversion process was inefficient, so we decided to build a system that could digitize these diagrams into **machine-readable knowledge graphs**.
**Solution** : We started with API-based methods using **Claude 3.5 Sonnet** and **GPT-4o** to extract entities (nodes), relationships (edges), and attributes from diagrams. While performance was promising, **data privacy** and **cost of external APIs** were major blockers. We used models like GPT-4o and Claude-3.5 Sonet initially. We wanted something simple that can run on our servers. The privacy aspect is very important because we don’t want our business process data to be transferred to external APIs.
We fine-tuned **Qwen2.5-VL-3B**, a small but capable vision-language model, to run **locally** and securely. Our team (myself and [u/Sorry\_Transition\_599](https://meetily.zackriya.com/), the creator of Meetily – an open-source self-hosted meeting note-taker) worked on the initial architecture of the system, building the base software and training a model on a custom dataset of **200 labeled diagram images**. We decided to go with qwen2.5-vl-3b after experimenting with multiple small LLMs for running them locally.
Compared to the base Qwen model:
* **+14% improvement** in node detection
* **+23% improvement** in edge detection
**Dataset size** : 200 Custom Labelled images
**Next steps :**
**1. Increase dataset size and improve fine-tuning**
**2. Make the model compatible with Ollama for easy deployment**
**3. Package as a Python library for bulk and efficient diagram-to-graph conversion**
I hope our learnings are helpful to the community and expect community support.
| 2025-04-08T12:38:04 |
https://www.reddit.com/gallery/1jucj35
|
Conscious-Marvel
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jucj35
| false | null |
t3_1jucj35
|
/r/LocalLLaMA/comments/1jucj35/we_finetuned_a_small_visionlanguage_model_qwen_25/
| false | false | 53 | null |
|
im a -ve in my finance right now bcz of this ai
| 0 |
So, what's happening now is essentially a gamble. I'm creating a website and app, but as a fresher, I don't have much knowledge about it. I need to learn coding and other skills, but I'm not sure where to start. I previously worked for a company and learned about their business model, which was service-based. I realized that the entire IT industry in India is service-oriented, with everyone creating websites and apps. After leaving my job, I started working on my own projects and built two or three apps within three months. I learned a lot, but then my savings ran out.
When I needed more money, I borrowed from friends and relatives to finance my projects. However, I ended up spending too much on Google Cloud APIs and AI software subscriptions, which was unsustainable for me, especially since I was only earning $60 per month. Now, I'm in a tough spot financially. If I hadn't left my job, I would still have a steady income, but now I'm struggling to make ends meet. I'm hoping someone will buy my product, but I've come to realize that earning money from others is not easy, especially when you're just starting your career.
guys i just want a advice what should i do im not even regretting bcz like 1 month i worked for free its like a slave thing in 2 month i earn only 120 dollars .
what should i do should i apply for a job or what i do im so fu\*cked up
| 2025-04-08T12:47:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jucphc/im_a_ve_in_my_finance_right_now_bcz_of_this_ai/
|
Select_Dream634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jucphc
| false | null |
t3_1jucphc
|
/r/LocalLLaMA/comments/1jucphc/im_a_ve_in_my_finance_right_now_bcz_of_this_ai/
| false | false |
self
| 0 | null |
Insights in shift of performance of certain LLM's on different hardware
| 1 |
[removed]
| 2025-04-08T13:09:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jud694/insights_in_shift_of_performance_of_certain_llms/
|
Effective-Ad-5955
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jud694
| false | null |
t3_1jud694
|
/r/LocalLLaMA/comments/1jud694/insights_in_shift_of_performance_of_certain_llms/
| false | false | 1 | null |
|
Papers on attack on LLM models based on prompt injection
| 2 |
Basically same as the title. If you know any good/interesting papers for both LLMs and VLMs. Is there any survey paper on this topic?
| 2025-04-08T13:30:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1judlns/papers_on_attack_on_llm_models_based_on_prompt/
|
SouvikMandal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1judlns
| false | null |
t3_1judlns
|
/r/LocalLLaMA/comments/1judlns/papers_on_attack_on_llm_models_based_on_prompt/
| false | false |
self
| 2 | null |
How to fix slow inference speed of mistral-small 3.1 when using Ollama
| 11 |
Ollama v0.6.5 messed up the VRAM estimation for this model, so it will offload everything to RAM and slow things down.
Setting `num_gpu` to the maximum will fix the issue. (Load everything into GPU VRAM)
https://preview.redd.it/8a8ywhk77mte1.png?width=996&format=png&auto=webp&s=e58f491e89e22d90028540856012f878704efd88
| 2025-04-08T13:42:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1judvfg/how_to_fix_slow_inference_speed_of_mistralsmall/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1judvfg
| false | null |
t3_1judvfg
|
/r/LocalLLaMA/comments/1judvfg/how_to_fix_slow_inference_speed_of_mistralsmall/
| false | false | 11 | null |
|
GMKtec EVO-X2 Powered By Ryzen AI Max+ 395 To Launch For $2,052: The First AI+ Mini PC With 70B LLM Support
| 49 | 2025-04-08T13:46:01 |
https://wccftech.com/gmktec-evo-x2-powered-by-ryzen-ai-max-395-to-launch-for-2052/
|
_SYSTEM_ADMIN_MOD_
|
wccftech.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1judxsq
| false | null |
t3_1judxsq
|
/r/LocalLLaMA/comments/1judxsq/gmktec_evox2_powered_by_ryzen_ai_max_395_to/
| false | false | 49 |
{'enabled': False, 'images': [{'id': 'gu0-XIDCSCotqE9Kmdd0IplDALSwXhnqXUKZHpgsuLc', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?width=108&crop=smart&auto=webp&s=66fcd50433048064927e66770d610e5b0e68a3eb', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?width=216&crop=smart&auto=webp&s=53338a9bd5a47aaf25b2c7f83a9fe1024f99833f', 'width': 216}, {'height': 191, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?width=320&crop=smart&auto=webp&s=ba463b68fa4dcfa388354a2a645da92b21e95f1d', 'width': 320}, {'height': 383, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?width=640&crop=smart&auto=webp&s=6366f2999e05e8e4ce24296ae638cae50fb26ad9', 'width': 640}, {'height': 574, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?width=960&crop=smart&auto=webp&s=e371dbdfd061e76b212838756e7b8e1e09c5e3ba', 'width': 960}, {'height': 646, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?width=1080&crop=smart&auto=webp&s=83173f9c55922623d32bb8659afc45c2fec87f90', 'width': 1080}], 'source': {'height': 721, 'url': 'https://external-preview.redd.it/7dwQT-MLeJB3Eqi-z9LDGmR5IdbcMBwquCFc_EYOkgU.jpg?auto=webp&s=e46d1d9257b3e9b64ee45d6bad91b5fb181df5ec', 'width': 1204}, 'variants': {}}]}
|
||
Ghibli Style Video Generator Workflow
| 0 |
You can generate Ghibli-style videos using [Eachlabs Workflow](https://console.eachlabs.ai/)
| 2025-04-08T13:47:10 |
https://v.redd.it/he6mmhvnzlte1
|
mso96
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1judyqf
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/he6mmhvnzlte1/DASHPlaylist.mpd?a=1746712045%2CMTA4MWI2MjUyMDAwNjU2ODJiMTAyNmJhMDc3YWNmZTQ5NjMwNTc0YzVmOGI5NGJhMzdiNzhlY2FhNjNjYmQyNg%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/he6mmhvnzlte1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1076, 'hls_url': 'https://v.redd.it/he6mmhvnzlte1/HLSPlaylist.m3u8?a=1746712045%2CNzJlYTMxMDgxMjg5NWI1NWNmMGFmODIwYWQyMzk4MjQxYTc4MmU4OWUwZTRjMjRhYmFlZmQ0MzBlMTNlNzA1ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/he6mmhvnzlte1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1judyqf
|
/r/LocalLLaMA/comments/1judyqf/ghibli_style_video_generator_workflow/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?width=108&crop=smart&format=pjpg&auto=webp&s=8a0e6c54b5a5ec77e1a9ba790ad82274adb78b3b', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?width=216&crop=smart&format=pjpg&auto=webp&s=86bfc2ea03c24c90943f25ee81cf0dad50558987', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?width=320&crop=smart&format=pjpg&auto=webp&s=31c812828e80f9ee886ba19a6223fe2b130735c7', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?width=640&crop=smart&format=pjpg&auto=webp&s=7aebb69821efceec6de99dcc0e415bfe25475fd5', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?width=960&crop=smart&format=pjpg&auto=webp&s=91182f5968a8f3541263fdb3d365f46ab7809214', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1276414f5743056f853e3368bdf5a9e7ec52e475', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OWs2b2lndm56bHRlMW3Ug422PKHOb_9dkQrsdxRnHAQS_qUkN4lCeyUJHkYz.png?format=pjpg&auto=webp&s=fa879cff02417b08bc041baa415a1c658fbdff48', 'width': 1928}, 'variants': {}}]}
|
|
Introducing Lemonade Server: NPU-accelerated local LLMs on Ryzen AI Strix
| 1 |
[removed]
| 2025-04-08T14:06:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jueemr/introducing_lemonade_server_npuaccelerated_local/
|
jfowers_amd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jueemr
| false | null |
t3_1jueemr
|
/r/LocalLLaMA/comments/1jueemr/introducing_lemonade_server_npuaccelerated_local/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'JEv0vJf6Y_2UjKhsQgR1fhbeDZ2K441UZlYfMn_cmIY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?width=108&crop=smart&auto=webp&s=30e3998167657e3560e25c65a1982f5a9dd52d73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?width=216&crop=smart&auto=webp&s=90dae07819e832b69df25397723f06012a10d6c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?width=320&crop=smart&auto=webp&s=12c61c6bdb5a287f6203844ad617b8a3cc167f2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?width=640&crop=smart&auto=webp&s=c156c4195f362fe9292a1565450a79a881b7a78c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?width=960&crop=smart&auto=webp&s=d7fa759cc9bcdcfa440fba7a6e356ae6fcb099a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?width=1080&crop=smart&auto=webp&s=dc79f9a3d829525a8427115042fee9e3a1e7ff36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G6CWdjW0ZI0mpPw5AO6U1jpVZ_0ooqTiGOkMmNur-z4.jpg?auto=webp&s=8c150e34e5df25377403851fd32ef1e4538f0b0c', 'width': 1200}, 'variants': {}}]}
|
|
Second Me : Fully Local AI Self with Identity & Memory Modeling——with Docker & API support now live
| 1 |
[removed]
| 2025-04-08T14:16:54 |
Technical-Equal-964
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1juen34
| false | null |
t3_1juen34
|
/r/LocalLLaMA/comments/1juen34/second_me_fully_local_ai_self_with_identity/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'y1Lzhqc_2DZGtUgm5DN2_9LRZMSRM72I-AwS4L9IMYE', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?width=108&crop=smart&auto=webp&s=dfc44328b0996240bd887691f6a5686f1a3230a9', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?width=216&crop=smart&auto=webp&s=f7fce2cffa5f41fa066655551e91e72508bbbfaf', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?width=320&crop=smart&auto=webp&s=aafc1cdbc928e82a74a8cff6365e089dfef2314c', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?width=640&crop=smart&auto=webp&s=483bd095c10a6391f34f3d5b66e0fba1a1155968', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?width=960&crop=smart&auto=webp&s=03358fc253993323b5082db008de671324739382', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?width=1080&crop=smart&auto=webp&s=93bff3ad940e9b5d436f996ca392ea8747f32110', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/sd1q5tvedmte1.png?auto=webp&s=961dd887b3050c056e295ca59b2b1378497b3577', 'width': 1600}, 'variants': {}}]}
|
||
Gemini is the Only Chatbot That Ignored Me
| 1 |
[removed]
| 2025-04-08T14:27:22 |
manpreet__singh
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1juevuz
| false | null |
t3_1juevuz
|
/r/LocalLLaMA/comments/1juevuz/gemini_is_the_only_chatbot_that_ignored_me/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Gq6z-ZTva5ybybWMmWtKDLSSYca0XvNW98h840WOeAw', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?width=108&crop=smart&auto=webp&s=5d64baf4927229d33cf34defce3754f3560e5266', 'width': 108}, {'height': 265, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?width=216&crop=smart&auto=webp&s=8eba45e9a2ae3809681a76491ee45553ed424734', 'width': 216}, {'height': 392, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?width=320&crop=smart&auto=webp&s=33aac518750f2eb7b276e40170a74643da494eb4', 'width': 320}, {'height': 785, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?width=640&crop=smart&auto=webp&s=e7a7f4542721524b85c1048da0ef2280bef01dce', 'width': 640}, {'height': 1177, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?width=960&crop=smart&auto=webp&s=fdf079e43c20940aee38f03029a12f6a5fa58a55', 'width': 960}, {'height': 1325, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?width=1080&crop=smart&auto=webp&s=bd86aff260529120a8529b9a796db40ac326a932', 'width': 1080}], 'source': {'height': 1325, 'url': 'https://preview.redd.it/wnhpunahfmte1.jpeg?auto=webp&s=a9ed2e9236da7a16fb7a7935681eb3786b21ff06', 'width': 1080}, 'variants': {}}]}
|
||
LLM's have made me stop using Google 85% of the time. How about you? What has AI changed in your life?
| 1 |
[removed]
| 2025-04-08T14:36:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1juf3pl/llms_have_made_me_stop_using_google_85_of_the/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1juf3pl
| false | null |
t3_1juf3pl
|
/r/LocalLLaMA/comments/1juf3pl/llms_have_made_me_stop_using_google_85_of_the/
| false | false |
self
| 1 | null |
Ollama Users: Want to Know Which Model Performs Best? Check Out Rank-LLMs
| 4 |
Hey everyone,
I’ve just released [**Rank-LLMs**](https://github.com/tdoris/rank_llms), an open-source CLI tool designed specifically for **comparing local LLMs running via Ollama**.
It works like this:
* You choose (or create) a **prompt set**—anything from general knowledge to domain-specific tasks.
* The tool runs **A/B comparisons** between models on each prompt.
* A third-party model (Claude by default, but pluggable) acts as the **AI judge** to decide which response is better.
* The results are used to compute **Elo ratings**, and a detailed side-by-side **markdown report** is generated.
* Your model inference stays **completely local**—only the judging step calls an API, which you can also replace if needed.
It's super easy to:
* Run head-to-head matchups between your locally hosted models.
* Add your own prompt sets to test on topics that actually matter to you.
* See clear, interpretable results with built-in scoring and reporting.
If you’re using Ollama and want a lightweight way to figure out **which model performs best for your tasks**, give it a try and let me know what you think!
Repo: [https://github.com/tdoris/rank\_llms](https://github.com/tdoris/rank_llms)
Would love feedback, feature suggestions, or even PRs!
| 2025-04-08T14:41:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1juf7zm/ollama_users_want_to_know_which_model_performs/
|
tdoris
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1juf7zm
| false | null |
t3_1juf7zm
|
/r/LocalLLaMA/comments/1juf7zm/ollama_users_want_to_know_which_model_performs/
| false | false |
self
| 4 | null |
Nvidia 5090 Laptop 24GB VRAM 150W
| 0 |
27.5% faster than 3090. Similar GB/s. Way more power efficient than 3090 and 4090. Pretty good card if you want Nvidia on the go.
[https://www.nvidia.com/en-us/geforce/laptops/50-series/](https://www.nvidia.com/en-us/geforce/laptops/50-series/)
|Card|3090|4090|5090 Laptop|
|:-|:-|:-|:-|
|FP16 TFLOPS|142.32|330.4|181.37|
|TDP|350W|450W|150W|
|GFLOPS/W|406.63|734.22|1029.14|
|VRAM|24GB|24GB|24GB|
|GB/s|936|1008|896|
| 2025-04-08T14:43:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1juf9su/nvidia_5090_laptop_24gb_vram_150w/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1juf9su
| false | null |
t3_1juf9su
|
/r/LocalLLaMA/comments/1juf9su/nvidia_5090_laptop_24gb_vram_150w/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'jAtlm83V8FyK_NXr0DJRAZIuIJ4zsbVF7nk_iz9pie4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?width=108&crop=smart&auto=webp&s=e3074688bb3e2a8419d9ce64d0d20c18b1ffdb76', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?width=216&crop=smart&auto=webp&s=fb0b2324ceb86a0469a786c31c8252677f6cd735', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?width=320&crop=smart&auto=webp&s=2de14cd94a99153198800a0678e3f216aeab6f77', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?width=640&crop=smart&auto=webp&s=38be501bc457875f3bc69234138f15223a01873c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?width=960&crop=smart&auto=webp&s=cf95311376142678e4ba43cc1b12a26d0139cbad', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?width=1080&crop=smart&auto=webp&s=b71bc7120d8f1935041cb99b5dc273c0354cc05a', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/ZIB6oiQvVdv31iFeBO4mSOkE26yPt_aNiMEKiCKMqvY.jpg?auto=webp&s=351f00ce831d9c385d766a75ff0963181980c6aa', 'width': 2400}, 'variants': {}}]}
|
[Hiring] Graph Machine Learning Researcher
| 1 |
[removed]
| 2025-04-08T14:57:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1juflam/hiring_graph_machine_learning_researcher/
|
StressInformal5943
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1juflam
| false | null |
t3_1juflam
|
/r/LocalLLaMA/comments/1juflam/hiring_graph_machine_learning_researcher/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'kocUz3_O23RzbcdSuBMYXn6Nyyns7xjAVOt7iQU3ENs', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/T5VTAGnixTAtIFoQL2zcBlasNfkraBDLZyUy75aSdY4.jpg?width=108&crop=smart&auto=webp&s=f530b7e39107067fa3eb585b9175dc0693f42ac7', 'width': 108}, {'height': 262, 'url': 'https://external-preview.redd.it/T5VTAGnixTAtIFoQL2zcBlasNfkraBDLZyUy75aSdY4.jpg?width=216&crop=smart&auto=webp&s=32323452b701ec1b97b7a87b3b4d8e7052995da1', 'width': 216}, {'height': 388, 'url': 'https://external-preview.redd.it/T5VTAGnixTAtIFoQL2zcBlasNfkraBDLZyUy75aSdY4.jpg?width=320&crop=smart&auto=webp&s=4dab5d6af263388c15c70afbc27a37d43d018217', 'width': 320}, {'height': 777, 'url': 'https://external-preview.redd.it/T5VTAGnixTAtIFoQL2zcBlasNfkraBDLZyUy75aSdY4.jpg?width=640&crop=smart&auto=webp&s=d3ae8b8874b8d457f113af098ad50d3dd814b51d', 'width': 640}], 'source': {'height': 880, 'url': 'https://external-preview.redd.it/T5VTAGnixTAtIFoQL2zcBlasNfkraBDLZyUy75aSdY4.jpg?auto=webp&s=cdd0fbcc4baa914659d21fe14f094c36a2a5602e', 'width': 724}, 'variants': {}}]}
|
Qwen3 pull request sent to llama.cpp
| 351 |
The pull request has been created by bozheng-hit, who also sent the patches for qwen3 support in transformers.
It's approved and ready for merging.
Qwen 3 is near.
[https://github.com/ggml-org/llama.cpp/pull/12828](https://github.com/ggml-org/llama.cpp/pull/12828)
| 2025-04-08T15:02:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jufqbn/qwen3_pull_request_sent_to_llamacpp/
|
matteogeniaccio
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jufqbn
| false | null |
t3_1jufqbn
|
/r/LocalLLaMA/comments/1jufqbn/qwen3_pull_request_sent_to_llamacpp/
| false | false |
self
| 351 |
{'enabled': False, 'images': [{'id': '3F1kLLl2-BMrh4aYFJ4ZCirDhQMMEUCMRAvDBabgUeg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?width=108&crop=smart&auto=webp&s=b0e7d2d03ee4e5910239bb0248e7aa2318301a7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?width=216&crop=smart&auto=webp&s=c7ea75a49645b3a1a3f4cef2fdecf113bba1c762', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?width=320&crop=smart&auto=webp&s=79903ef4b5e3771c4a5f683039246e3469295cb3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?width=640&crop=smart&auto=webp&s=47b5eba10caa0a1bed0f6a4521ce5821b0f9b047', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?width=960&crop=smart&auto=webp&s=321f47f9a9a68c4c0229f3868d4af4877d301281', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?width=1080&crop=smart&auto=webp&s=7f72c4d3c8f5e38b88e294d85653292fb1df176f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FfpsDKCzkIbBSnwKQvEjVyItVFGRrSVUG6wKxWewL3E.jpg?auto=webp&s=bbef45124be0d1aea803e1094c455d4a14a21eb3', 'width': 1200}, 'variants': {}}]}
|
Artificial Analysis Updates Llama-4 Maverick and Scout Ratings
| 1 |
[removed]
| 2025-04-08T15:17:21 |
TKGaming_11
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jug304
| false | null |
t3_1jug304
|
/r/LocalLLaMA/comments/1jug304/artificial_analysis_updates_llama4_maverick_and/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'YESpZRgqGHm2Az2LNG9Jdn1KsHpGGMAvLVqVZsOCXUA', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?width=108&crop=smart&auto=webp&s=d729a4c8d3341c0f70cbf91f6e3561669e528443', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?width=216&crop=smart&auto=webp&s=f0f31a7c8fdf0d13f8b73da82f0c485b954cc427', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?width=320&crop=smart&auto=webp&s=5574b34de3764f3965cd7ccace12606b9d1d7b7a', 'width': 320}, {'height': 265, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?width=640&crop=smart&auto=webp&s=076e60c1e8d1667a7ed41d7b54570750b20375f4', 'width': 640}, {'height': 397, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?width=960&crop=smart&auto=webp&s=5e7ecc9a13fc3be8e4e2f60af0f08c72265d43ce', 'width': 960}, {'height': 447, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?width=1080&crop=smart&auto=webp&s=e75942cf586421c87911848e433006ac95daeaaa', 'width': 1080}], 'source': {'height': 848, 'url': 'https://preview.redd.it/zs0t7jfeomte1.jpeg?auto=webp&s=26801727f0f321c75f6fdc767b76d09397623c6f', 'width': 2048}, 'variants': {}}]}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.