title (string, 1 to 300 chars) | score (int64, 0 to 8.54k) | selftext (string, 0 to 40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable ⌀) | url (string, 0 to 878 chars) | author (string, 3 to 20 chars) | domain (string, 0 to 82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0 to 2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646 to 1.8k chars, nullable ⌀) | name (string, 10 chars) | permalink (string, 33 to 82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4 to 213 chars) | ups (int64, 0 to 8.54k) | preview (string, 301 to 5.01k chars, nullable ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kluster.ai is now hosting Llama 4 Maverick and Llama 4 Scout
| 0 |
I've been trying them out this week; Maverick is incredibly fast. How's it working for everyone else?
| 2025-04-09T21:46:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvhqyi/klusterai_is_now_hosting_llama_4_maverick_and/
|
swarmster
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvhqyi
| false | null |
t3_1jvhqyi
|
/r/LocalLLaMA/comments/1jvhqyi/klusterai_is_now_hosting_llama_4_maverick_and/
| false | false |
self
| 0 | null |
DeepCogito Training Completed in 75 Days
| 1 |
[removed]
| 2025-04-09T21:47:01 |
modulo_pi
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvhr0v
| false | null |
t3_1jvhr0v
|
/r/LocalLLaMA/comments/1jvhr0v/deepcogito_training_completed_in_75_days/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'ZGc2TyjDDqOhPslpmaUEekH_yivBHTY7Vio3jZrWN7c', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?width=108&crop=smart&auto=webp&s=04ea2389f9b597554d4061fb286bdee9737df494', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?width=216&crop=smart&auto=webp&s=89ebf902e79c9a3d96a79a2bf6f5fecef969bf5a', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?width=320&crop=smart&auto=webp&s=55116d6aee83c187707d9410e6ee19a692b7895f', 'width': 320}, {'height': 237, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?width=640&crop=smart&auto=webp&s=262a1046edff440a18627d979ed2d2f00f99d395', 'width': 640}, {'height': 356, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?width=960&crop=smart&auto=webp&s=1fe3aa939e55b75b99228a15a01379ab5f68f463', 'width': 960}, {'height': 400, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?width=1080&crop=smart&auto=webp&s=8bd266ef71b4e87b6519ecbe4b438a35800174ab', 'width': 1080}], 'source': {'height': 578, 'url': 'https://preview.redd.it/prnwrr3pqvte1.png?auto=webp&s=84d22f290f6080c9d6abc3fcf43caa99c43cdfe6', 'width': 1558}, 'variants': {}}]}
|
||
best small reasoning model rn?
| 5 |
Title says it all: after having tried a bunch of reasoning models in the 3B-8B parameter range, which is the best one you've tried so far?
The domain doesn't really matter - I'm talking about general reasoning ability: if I give it a list of tools, the current state we are at, and the goal it must achieve, it should be able to formulate a logically sound plan to reach the goal using the tools at its disposal.
| 2025-04-09T21:58:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvi0lp/best_small_reasoning_model_rn/
|
therealkabeer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvi0lp
| false | null |
t3_1jvi0lp
|
/r/LocalLLaMA/comments/1jvi0lp/best_small_reasoning_model_rn/
| false | false |
self
| 5 | null |
PSA: Gemma 3 QAT gguf models have some wrongly configured tokens
| 117 |
Hello,
So as I loaded my 12B IT q4\_0 QAT model, I noticed a strange error in llama.cpp: "load: control-looking token: 106 '' was not control-type; this is probably a bug in the model. its type will be overridden"
I wondered whether this was normal, so I loaded a Bartowski file, and indeed, that error was nowhere to be seen. After that, I did some digging and came across this post by the person who implemented Gemma 3 and Llama 4 support in llama.cpp: [https://huggingface.co/google/gemma-3-12b-it-qat-q4\_0-gguf/discussions/3#67f6a2e0207b4bceea793151](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf/discussions/3#67f6a2e0207b4bceea793151)
This looked awfully similar to my error, so what I did was set both tokens 105 and 106 (which are <start\_of\_turn> and <end\_of\_turn>, btw) to control instead of normal (as is the case in the Bartowski files too), using the Hugging Face GGUF editor. Not only that, the image start and end tokens were also not set to control, unlike in the original. I fixed that and noticed an immediate boost in the image capabilities.
If you have noticed weirdness with the QAT models in comparison to the older Bartowski models, it was most likely due to this. On top of that, the name metadata was missing as well, which I've added back; apparently some inference backends need it. I will upload the 12B one shortly here: "Dampfinchen/google-gemma-3-12b-it-qat-q4\_0-gguf-small-fix". Note that it is based on u/[stduhpf](https://huggingface.co/stduhpf)'s version, which is faster without any compromise in quality.
Happy testing!
| 2025-04-09T22:08:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvi860/psa_gemma_3_qat_gguf_models_have_some_wrongly/
|
dampflokfreund
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvi860
| false | null |
t3_1jvi860
|
/r/LocalLLaMA/comments/1jvi860/psa_gemma_3_qat_gguf_models_have_some_wrongly/
| false | false |
self
| 117 |
{'enabled': False, 'images': [{'id': 'gnoSHQF7rXglfA8pHbpnF-VvHqjLRP6y-NWIdvzauB8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=108&crop=smart&auto=webp&s=4269d49975c825c0dbc4a13759d243707ac0a253', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=216&crop=smart&auto=webp&s=e68095a5a27d9ddbeb22f3d6c706aa2168cfd050', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=320&crop=smart&auto=webp&s=72f4eb739cbcb8c164a9d5c8054a8ec5f3e7c1f8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=640&crop=smart&auto=webp&s=379d21770137de10c51e9c0ce42b20a48fa5f9fd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=960&crop=smart&auto=webp&s=2c982361d630b9d2a2ccae08be736e117fba669f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=1080&crop=smart&auto=webp&s=eb1e84b865db9818ab24e1caee1e6d6781247b47', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?auto=webp&s=b0780bbf44c3f7e4a7c1efe2f7c102017c7310d7', 'width': 1200}, 'variants': {}}]}
|
New UI for aistudio :/ prior UI for aistudio :)
| 1 |
Anyone else feel the new UI for AI Studio is a step backwards? I feel the same way about OpenAI's UI changes to their playground. Both are more cumbersome and add unnecessary interactions compared to their earlier iterations. Windows 11 right click type beat.
| 2025-04-09T22:45:44 |
https://www.reddit.com/gallery/1jvj23r
|
chitown160
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvj23r
| false | null |
t3_1jvj23r
|
/r/LocalLLaMA/comments/1jvj23r/new_ui_for_aistudio_prior_ui_for_aistudio/
| false | false | 1 | null |
|
NeuralCodecs: Neural Audio Codecs implemented in C# - EnCodec, DAC, and SNAC
| 1 |
[removed]
| 2025-04-09T22:49:03 |
https://github.com/DillionLowry/NeuralCodecs
|
Knehm
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvj4mi
| false | null |
t3_1jvj4mi
|
/r/LocalLLaMA/comments/1jvj4mi/neuralcodecs_neural_audio_codecs_implemented_in_c/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'B2ipbhAhnHI450bFsA9ZPlOdvxmPwPSmXIRnhG9PPBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=108&crop=smart&auto=webp&s=ee4143e8bd747b88025fa923be634690d09f6988', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=216&crop=smart&auto=webp&s=97cf0e8ecdd18d15b802411e5e3140db3b2ca96b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=320&crop=smart&auto=webp&s=a9c33f0763b0f2a28b574d9ffe0af261aa24bda6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=640&crop=smart&auto=webp&s=d49154de6d017126b32449b80f1a94596b914cf8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=960&crop=smart&auto=webp&s=aa5314c46d18e101499eb55f2154214527687897', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=1080&crop=smart&auto=webp&s=bfa32129ea61cbd86745a256ee951c5c08914aa8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?auto=webp&s=ed26e0dfcea86672b7b557ced1474be0e531e20d', 'width': 1200}, 'variants': {}}]}
|
|
Getting (approximate) text from embedding
| 3 |
Is there a project that allows me to:
* Given a text, generate a text embedding, using a local model
* Given a target embedding, find some text whose embedding is as close as it can get to the target.
Ideally, supporting local LLMs to generate the embeddings.
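In case it helps frame answers: the first bullet is easy with a local embedding model, and the simplest (if crude) take on the second is to search a pool of candidate texts for the nearest embedding rather than truly inverting it. A minimal sketch, assuming sentence-transformers and the all-MiniLM-L6-v2 model (my choices, not anything implied by the question):

```python
# Rough sketch: local embeddings + nearest-candidate search (not true inversion).
# Assumes: pip install sentence-transformers; the model choice is arbitrary.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # runs locally

def embed(text: str):
    return model.encode(text, convert_to_tensor=True, normalize_embeddings=True)

def closest_text(target_embedding, candidates: list[str]) -> str:
    # Score every candidate against the target embedding and keep the best one.
    cand_embs = model.encode(candidates, convert_to_tensor=True, normalize_embeddings=True)
    scores = util.cos_sim(target_embedding, cand_embs)[0]
    return candidates[int(scores.argmax())]

target = embed("a cat sleeping in the sun")
print(closest_text(target, ["a dog barking", "a napping kitten", "stock prices fell"]))
```

Proper inversion (recovering text with no candidate pool) is a harder problem; projects like vec2text target exactly that, though I haven't checked how well they pair with arbitrary local embedding models.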
| 2025-04-09T22:58:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvjbvc/getting_approximate_text_from_embedding/
|
lily_34
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvjbvc
| false | null |
t3_1jvjbvc
|
/r/LocalLLaMA/comments/1jvjbvc/getting_approximate_text_from_embedding/
| false | false |
self
| 3 | null |
Troglodytes
| 1 | 2025-04-09T23:00:50 |
Thrumpwart
|
i.imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvjdru
| false | null |
t3_1jvjdru
|
/r/LocalLLaMA/comments/1jvjdru/troglodytes/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'B6UoOAIsyazoTVh5kH1KICFSgihZ5NNvcMxooVFL4lw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?width=108&crop=smart&auto=webp&s=03d601612d0fad25d5a6fa67c4d6af285b79bf29', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?width=216&crop=smart&auto=webp&s=7c770ed186f918004c0a1dcd60dd3f7396a63db0', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?width=320&crop=smart&auto=webp&s=52b3931e809aee08b4385fcc759f5f900d0a65db', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?width=640&crop=smart&auto=webp&s=56710a34f1e3d7db84e6b93a40917b69966fa46d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?width=960&crop=smart&auto=webp&s=c8c744b284e021f6fdbb2383fc9d9e2409c4970f', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?width=1080&crop=smart&auto=webp&s=a785d51a94e3a31b65e734f9652688b68521f823', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/UpZ9oOoyAP08012Lsa-W9peooeKz0OVxERqqIUuBbjk.jpg?auto=webp&s=7034bb60f74f705d064a14252bdc641a9eb6dfc2', 'width': 1440}, 'variants': {}}]}
|
|||
NeuralCodecs: Neural Audio Codecs implemented in C# - EnCodec, DAC, and SNAC
| 1 | 2025-04-09T23:06:41 |
https://github.com/DillionLowry/NeuralCodecs
|
Knehm
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvji8d
| false | null |
t3_1jvji8d
|
/r/LocalLLaMA/comments/1jvji8d/neuralcodecs_neural_audio_codecs_implemented_in_c/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'B2ipbhAhnHI450bFsA9ZPlOdvxmPwPSmXIRnhG9PPBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=108&crop=smart&auto=webp&s=ee4143e8bd747b88025fa923be634690d09f6988', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=216&crop=smart&auto=webp&s=97cf0e8ecdd18d15b802411e5e3140db3b2ca96b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=320&crop=smart&auto=webp&s=a9c33f0763b0f2a28b574d9ffe0af261aa24bda6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=640&crop=smart&auto=webp&s=d49154de6d017126b32449b80f1a94596b914cf8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=960&crop=smart&auto=webp&s=aa5314c46d18e101499eb55f2154214527687897', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?width=1080&crop=smart&auto=webp&s=bfa32129ea61cbd86745a256ee951c5c08914aa8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jR3C81qWwwIvraLQK1zuaWHg74Dwu1wTGiKMSs3nENM.jpg?auto=webp&s=ed26e0dfcea86672b7b557ced1474be0e531e20d', 'width': 1200}, 'variants': {}}]}
|
||
Experimenting with MCP Servers and local LLMs
| 11 |
Did some more experimentation with local LLMs. This time looking at how to integrate MCP servers.
As a fun experiment I tried to use tool calling to implement a simple POC of a basic GraphQL-esque response from a series of tool calls inferred from the prompt. My takeaway is that tool calling works reasonably well, even in small LLMs (7-8B).
Article: [https://www.teachmecoolstuff.com/viewarticle/using-mcp-servers-with-local-llms](https://www.teachmecoolstuff.com/viewarticle/using-mcp-servers-with-local-llms)
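The article is about MCP servers, but the model-facing half of tool calling looks the same regardless of where the tools live. Here is a minimal sketch against a local OpenAI-compatible endpoint; the base_url, model name, and `get_weather` tool are placeholders (not the article's setup), and the server has to support function calling:

```python
# Minimal tool-calling sketch against a local OpenAI-compatible endpoint.
# The base_url, model name, and tool are illustrative placeholders only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # any 7-8B instruct model with tool support
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# If the model decided to call a tool, the call shows up here instead of plain text.
print(resp.choices[0].message.tool_calls)
```

An MCP client's job is essentially to collect tool definitions like this from one or more MCP servers, hand them to the model, and execute whatever calls come back.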
| 2025-04-09T23:14:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvjnxf/experimenting_with_mcp_servers_and_local_llms/
|
funJS
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvjnxf
| false | null |
t3_1jvjnxf
|
/r/LocalLLaMA/comments/1jvjnxf/experimenting_with_mcp_servers_and_local_llms/
| false | false |
self
| 11 | null |
A summary of consumer AI
| 1 |
[a summary of consumer AI](https://preview.redd.it/fe8fsltf6wte1.png?width=1024&format=png&auto=webp&s=2a539c410cf0153cc104eafe7b1bdbc2b0da0e0b)
Saw this on twitter and thought you guys would appreciate it XD
[https://x.com/venturetwins/status/1910014981128736877](https://x.com/venturetwins/status/1910014981128736877)
| 2025-04-09T23:15:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvjonx/a_summary_of_consumer_ai/
|
lightdreamscape
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvjonx
| false | null |
t3_1jvjonx
|
/r/LocalLLaMA/comments/1jvjonx/a_summary_of_consumer_ai/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ugVrUtEtwMvbAxdyexIxgZ45cMnFH8DUoxZrv4k-9xs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vKzW-Zjl_NJhRiwfdWuW4vRVbDSztImGhxEetx-XuSo.jpg?width=108&crop=smart&auto=webp&s=01f39782b80b0eb44e2c6855a17b098ad48e92a0', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vKzW-Zjl_NJhRiwfdWuW4vRVbDSztImGhxEetx-XuSo.jpg?width=216&crop=smart&auto=webp&s=103a584b14747bd7781d9ee1e2776182ced0a647', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vKzW-Zjl_NJhRiwfdWuW4vRVbDSztImGhxEetx-XuSo.jpg?width=320&crop=smart&auto=webp&s=39fc5019cd639e64dba52ebb9085fb056056627c', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/vKzW-Zjl_NJhRiwfdWuW4vRVbDSztImGhxEetx-XuSo.jpg?width=640&crop=smart&auto=webp&s=2dfdb6fb43f2946fcb1d6d384dfa4779321f5831', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/vKzW-Zjl_NJhRiwfdWuW4vRVbDSztImGhxEetx-XuSo.jpg?width=960&crop=smart&auto=webp&s=e219b31863f6eb2430094576ad4207ed0cb6e2a9', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/vKzW-Zjl_NJhRiwfdWuW4vRVbDSztImGhxEetx-XuSo.jpg?auto=webp&s=352cd16af18058b9668fee334b26ab73f8314771', 'width': 1024}, 'variants': {}}]}
|
|
A summary of consumer AI
| 118 |
Story of my life. Thought you guys would appreciate this.
[https://x.com/venturetwins/status/1910014981128736877](https://x.com/venturetwins/status/1910014981128736877)
| 2025-04-09T23:16:54 |
lightdreamscape
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvjprt
| false | null |
t3_1jvjprt
|
/r/LocalLLaMA/comments/1jvjprt/a_summary_of_consumer_ai/
| false | false | 118 |
{'enabled': True, 'images': [{'id': 'NTtPDgxMlQlSePplL_xjl1nWnQ_5uL2wN2rzK6Y-VFg', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/zw695vyr6wte1.jpeg?width=108&crop=smart&auto=webp&s=f52f83a0a29912f387e3319c29286583418561a1', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/zw695vyr6wte1.jpeg?width=216&crop=smart&auto=webp&s=d0885f6b21636737c9115aab84073c6b084d6344', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/zw695vyr6wte1.jpeg?width=320&crop=smart&auto=webp&s=46ee25f04296880b09a0814fe9f623375a161f95', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/zw695vyr6wte1.jpeg?width=640&crop=smart&auto=webp&s=a539535168123c43e2bf9703a86b8d6398b77a5e', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/zw695vyr6wte1.jpeg?width=960&crop=smart&auto=webp&s=f074a36b15016aa5d9b6f9a37bd3ff55099a1832', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/zw695vyr6wte1.jpeg?auto=webp&s=3d206816046025b169d419bf9987849e71c7b7fb', 'width': 1024}, 'variants': {}}]}
|
||
Asking same questions to same model about different content ?
| 1 |
Hi,
I would like an LLM to answer a series of yes/no questions about different pages from a website.
How can I automate this?
Automatically exporting the answers to a spreadsheet would be a bonus.
Thank you
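One low-tech way to do this (a sketch only, with assumed tools): fetch each page, strip it to text, ask a local model the same questions, and write the answers to a CSV that any spreadsheet app can open. This assumes the `ollama` Python package with a model already pulled; the URLs, questions, and model name are placeholders.

```python
# Sketch: same yes/no questions over several web pages, answers written to CSV.
# Assumes: pip install ollama requests beautifulsoup4, and a local ollama model.
import csv
import requests
from bs4 import BeautifulSoup
import ollama

URLS = ["https://example.com/page1", "https://example.com/page2"]        # placeholders
QUESTIONS = ["Does the page mention pricing?", "Does the page list an email address?"]

def page_text(url: str) -> str:
    # Download the page and flatten it to plain text, truncated to stay in context.
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:8000]

with open("answers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url"] + QUESTIONS)
    for url in URLS:
        text = page_text(url)
        row = [url]
        for q in QUESTIONS:
            resp = ollama.chat(model="gemma2:9b", messages=[   # placeholder model
                {"role": "system", "content": "Answer strictly with yes or no."},
                {"role": "user", "content": f"{q}\n\nPage content:\n{text}"},
            ])
            row.append(resp["message"]["content"].strip())
        writer.writerow(row)
```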
| 2025-04-09T23:29:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvjywa/asking_same_questions_to_same_model_about/
|
KaKi_87
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvjywa
| false | null |
t3_1jvjywa
|
/r/LocalLLaMA/comments/1jvjywa/asking_same_questions_to_same_model_about/
| false | false |
self
| 1 | null |
Overlong Filtering
| 1 |
[removed]
| 2025-04-09T23:43:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvk8y3/overlong_filtering/
|
shirin_boo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvk8y3
| false | null |
t3_1jvk8y3
|
/r/LocalLLaMA/comments/1jvk8y3/overlong_filtering/
| false | false |
self
| 1 | null |
https://github.com/pyember/ember
| 1 |
[removed]
| 2025-04-09T23:48:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvkcsi/httpsgithubcompyemberember/
|
Fearless-Elephant-81
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvkcsi
| false | null |
t3_1jvkcsi
|
/r/LocalLLaMA/comments/1jvkcsi/httpsgithubcompyemberember/
| false | false |
self
| 1 | null |
I've realized that Llama 4's odd architecture makes it perfect for my Mac and my workflows
| 130 |
So I'm a huge workflow enthusiast when it comes to LLMs, and believe the appropriate application of iterating through a problem + tightly controlled steps can solve just about anything. I'm also a Mac user. For a while my main machine was an M2 Ultra Mac Studio, but recently I got the 512GB M3 Ultra Mac Studio, [which honestly I had a little bit of buyer's remorse for.](https://www.reddit.com/r/LocalLLaMA/comments/1jaqpiu/mac_speed_comparison_m2_ultra_vs_m3_ultra_using/)
The thing about workflows is that speed is the biggest pain point; and when you use a Mac, you don't get a lot of speed, but you have memory to spare. It's really not a great matchup.
Speed is important because you can take even some of the weakest models and, with workflows, make them do amazing things just by scoping their thinking into multi-step problem solving, and having them validate themselves constantly along the way.
But again- the problem is speed. On my mac, my complex coding workflow can take up to 20-30 minutes to run using 32b-70b models, which is absolutely miserable. I'll ask it a question and then go take a shower, eat food, etc.
For a long time, I kept telling myself that I'd just use 8-14b models in my workflows. With the speed those models would run at, I could run really complex workflows easily... but I could never convince myself to stick with them, since any workflow that makes the 14b great would make the 32b even better. It's always been hard to pass that quality up.
Enter Llama 4. **Llama 4 Maverick Q8** fits on my M3 Studio, and the speed is very acceptable for its 400b size.
*Maverick Q8 in KoboldCpp- 9.3k context, 270 token response.*
> CtxLimit: 9378/32768, Amt: 270/300, Init: 0.18s, Process: 62.05s (146.69 T/s), Generate: 16.06s (16.81 T/s), Total: 78.11s
This model basically has the memory footprint of a 400b, but otherwise is a supercharged 17b. And since memory footprint was never a pain on the Mac, but speed is? That's the perfect combination for my use-case.
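As a rough sanity check on why it behaves like a supercharged 17B: generation is mostly memory-bandwidth bound, and only the active parameters have to be streamed per token. A back-of-envelope sketch, assuming roughly 819 GB/s for the M3 Ultra and about 1 byte per weight at Q8, and ignoring KV cache, routing overhead, and attention, so it is an upper bound only:

```python
# Back-of-envelope: active params, not total params, set the generation ceiling.
# Assumptions (not measurements): ~819 GB/s bandwidth, Q8 ~= 1 byte per weight.
bandwidth_gb_s = 819          # M3 Ultra, approximate spec figure
active_params_b = 17          # Maverick's active parameters per token (billions)
total_params_b = 400          # total parameters (only matter for fitting in RAM)
bytes_per_weight = 1.0        # Q8_0 is roughly 1 byte per weight

active_gb_per_token = active_params_b * bytes_per_weight
ceiling_tps = bandwidth_gb_s / active_gb_per_token
print(f"Weights streamed per token: ~{active_gb_per_token:.0f} GB")
print(f"Theoretical generation ceiling: ~{ceiling_tps:.0f} tok/s (observed above: ~16.8)")
print(f"A dense {total_params_b}B at Q8 would cap out near "
      f"~{bandwidth_gb_s / (total_params_b * bytes_per_weight):.1f} tok/s")
```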
I know this model is weird, and the benchmarks don't remotely line up to the memory requirements. But for me? I realized today that this thing is exactly what I've been wanting... though I do think it still has a tokenizer issue or something.
Honestly, I doubt they'll go with this architecture again due to its poor reception, but for now... I'm quite happy with this model.
**NOTE**: *I did try MLX; y'all actually talked me into using it, and I'm really liking it. But Maverick and Scout were both broken for me last time I tried it. I pulled down the PR branch for it, but the model would not shut up for anything in the world. It will talk until it hits the token limit.*
*Alternatively, Unsloth's GGUFs seem to work great.*
| 2025-04-10T00:02:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvknex/ive_realized_that_llama_4s_odd_architecture_makes/
|
SomeOddCodeGuy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvknex
| false | null |
t3_1jvknex
|
/r/LocalLLaMA/comments/1jvknex/ive_realized_that_llama_4s_odd_architecture_makes/
| false | false |
self
| 130 | null |
Simple Debian, CUDA & Pytorch setup
| 7 |
This is a simple and straightforward way to set up PyTorch with CUDA support on Debian, with the intention of using it for LLM experiments.
This was done on a fresh Debian 12 install and tested on an RTX 3090.
## CUDA & NVIDIA driver install
Be sure to add `contrib non-free` to apt sources list before starting:
```bash
sudo nano /etc/apt/sources.list /etc/apt/sources.list.d/*
```
Then we can install CUDA following the instructions from the [NVIDIA website](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Debian&target_version=12&target_type=deb_local):
```bash
wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda-repo-debian12-12-8-local_12.8.1-570.124.06-1_amd64.deb
sudo dpkg -i cuda-repo-debian12-12-8-local_12.8.1-570.124.06-1_amd64.deb
sudo cp /var/cuda-repo-debian12-12-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-8
```
Update paths (add to profile or bashrc):
```bash
export PATH=/usr/local/cuda-12.8/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64\
${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
I additionally ran `sudo apt-get -y install cuda` as a simple way to install the [NVIDIA driver](https://developer.nvidia.com/blog/nvidia-transitions-fully-towards-open-source-gpu-kernel-modules/).
This is not needed if you already have the driver installed.
`sudo reboot` and you are done with CUDA.
Verify GPU setup:
```bash
nvidia-smi
nvcc --version
```
Compile & run nvidia samples (nBody example is enough) to verify CUDA setup:
1. install build tools & dependencies you are missing:
```bash
sudo apt-get -y install build-essential cmake
sudo apt-get -y install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev libglfw3-dev libgles2-mesa-dev libglx-dev libopengl-dev
```
2. build and run nbody example:
```bash
git clone https://github.com/nvidia/cuda-samples
cd cuda-samples/Samples/5_Domain_Specific/nbody
cmake . && make
./nbody -benchmark && ./nbody -fullscreen
```
If the example runs on the GPU, you're done.
## Pytorch
Create a pyproject.toml file:
```toml
[project]
name = "playground"
version = "0.0.1"
requires-python = ">=3.13"
dependencies = [
"transformers",
"torch>=2.6.0",
"accelerate>=1.4.0",
]
[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/nightly/cu128"
explicit = true
```
Before setting up the Python environment, make sure the system is detecting the NVIDIA GPU(s) and CUDA is set up.
Verify that the CUDA version corresponds to the one in the pyproject (at the time of writing, "pytorch-cu128"):
```bash
nvidia-smi
nvcc --version
```
Then set up the venv with uv
```bash
uv sync --dev
source .venv/bin/activate
```
and test the Transformers and PyTorch install
```bash
python -c "import torch;print('CUDA available to pytorch: ', torch.cuda.is_available())"
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```
> [!TIP]
> huggingface cache dir will get **BIG** if you download models etc. You can change the cache dirs. I have this set in my bashrc:
>
> ```bash
> export HF_HOME=$HOME/huggingface/misc
> export HF_DATASETS_CACHE=$HOME/huggingface/datasets
> export TRANSFORMERS_CACHE=$HOME/huggingface/models
> ```
>
> You can also change the default location by setting it from the script each time you use the library (i.e. **before** importing it):
>
> ```py
> import os
> os.environ['HF_HOME'] = '/blabla/cache/'
> ```
| 2025-04-10T00:08:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvkr9r/simple_debian_cuda_pytorch_setup/
|
givingupeveryd4y
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvkr9r
| false | null |
t3_1jvkr9r
|
/r/LocalLLaMA/comments/1jvkr9r/simple_debian_cuda_pytorch_setup/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'kzAXhn9-CI7mfBkKNWwt-0DnnL-SON5g9deRQ_fv2s8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=108&crop=smart&auto=webp&s=a0dd2a3e026a42ec5d044c183777487703bddb75', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=216&crop=smart&auto=webp&s=82c95b208f798e72d8cca6774e4ca89e71c63e9f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=320&crop=smart&auto=webp&s=684db4c3574d04fd6ea5872d503ca3be337d6d80', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=640&crop=smart&auto=webp&s=f8767f1a066b0954876bb58cb61ab1f8b0268800', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=960&crop=smart&auto=webp&s=fe48e112eb47bb71b10f14bc5936098bd595fb0d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=1080&crop=smart&auto=webp&s=bf4b2c36cf9ef988f4ad9ddf7796d2ac8a418bea', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?auto=webp&s=80fa8670d8ee2ddec830854f464327170b90e20d', 'width': 1200}, 'variants': {}}]}
|
Grok 3 Beta API has arrived
| 1 |
[removed]
| 2025-04-10T00:42:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvlepj/grok_3_beta_api_has_arrived/
|
Echo9Zulu-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvlepj
| false | null |
t3_1jvlepj
|
/r/LocalLLaMA/comments/1jvlepj/grok_3_beta_api_has_arrived/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3M4OlgDLwVa1YjhJcwYWKOkD-VDUaa5T8LtJWHs6jPQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?width=108&crop=smart&auto=webp&s=17d695b0862d4a8174922e1f279dcfb92978b64a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?width=216&crop=smart&auto=webp&s=06f309af8f34101540b05a5071575ac295cb25a9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?width=320&crop=smart&auto=webp&s=2630d761a929d9896df86edc6da6b094d0ad452e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?width=640&crop=smart&auto=webp&s=83c3e60981698529841d03098253aaaa27ae15cb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?width=960&crop=smart&auto=webp&s=fb6a0f86833e20b8c075c641a8d8508637652a11', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?width=1080&crop=smart&auto=webp&s=6a96b9709c3ba5f2920985e330ebd7e0e4de22d7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/aQHpjOnnJd1EeGYFRF19cBIHdH9tV-ya4ooFnSdy-mA.jpg?auto=webp&s=2a104fc0e424c1e95dd879141e2e8acb9f6e8e83', 'width': 1200}, 'variants': {}}]}
|
Llama 4 Scout sub 50GB GGUF Quantization showdown (aka I did some KLD comparisons)
| 69 |
Sorry in advance if you've seen this already; I wanted to post it here first, but it got caught in auto-mod, so I threw it up elsewhere. Reposting now with permission.
Big fat disclaimer: KLD is not everything, PPL is even less so, and Top P is... somewhat useful.
Also huge thanks to Artus at BeaverAI Club for helping run the KLD for the full BF16 model, would have taken me days probably :D
Before working on Maverick, I decided to blow some compute on calculating the PPL/KLD/Top P of several small Scout quants, the ones I published, same setup but minus my PR changes (so what main would produce), and even threw in some of Unsloth's quants.
This is an effort to see if the PR changes I made are overall beneficial or detract. I don't love how much larger they get, we're losing some of the meaning of "IQ1_M" (which is supposed to average 1.75BPW..) and such, but nevertheless I figured it was worth finding out if these changes are worth pursuing and applying to Maverick
For reference, BF16's PPL is 8.6, so we expect all quant numbers to be pretty high. 8.6 PPL is not inherently bad for wikitext, it's odd, but also not a number worth reading into because all it really means is Scout wouldn't tend to arbitrarily spit out wikitext 🤷♂️
Raw data (I'm so sorry mobile users):
| Measurement | IQ1_M (mine) | IQ1_M (main) | IQ2_XXS (mine) | IQ2_XXS (main) | IQ2_S (mine) | UD-IQ1_M (unsloth) | Q2_K_L (mine) | Q2_K_L (main) | UD-Q2_K_XL (unsloth) | IQ3_XXS (mine) | IQ3_XXS (main) |
| ----------- | ------------ | ------------ | -------------- | -------------- | ------------ | ------------------ | ------------- | ------------- | -------------------- | -------------- | -------------- |
| Size (GB) | 26.32 | 24.57 | 30.17 | 28.56 | 34.34 | 35.4 | 44 | 40.57 | 42.6 | 44.96 | 41.66 |
| Mean PPL | 11.81 | 13.79 | 10.55 | 11.66 | 9.85 | 10.30 | 9.02 | 9.88 | 9.31 | 9.266434 | 9.76184 |
| KLD | | | | | | | | | | | |
| Mean | 0.691 | 0.933 | 0.464 | 0.664 | 0.361 | 0.376 | 0.217 | 0.332 | 0.185 | 0.164 | 0.244 |
| Max | 17.819 | 23.806 | 26.647 | 26.761 | 17.597 | 21.264 | 24.180 | 17.556 | 23.286 | 28.166 | 25.849 |
| 99.9% | 9.912 | 10.822 | 7.897 | 10.029 | 6.693 | 6.995 | 11.729 | 12.766 | 4.213 | 4.232 | 4.964 |
| 99% | 5.463 | 6.250 | 4.084 | 5.094 | 3.237 | 3.560 | 2.108 | 2.966 | 1.844 | 1.600 | 2.178 |
| median | 0.315 | 0.503 | 0.187 | 0.336 | 0.141 | 0.131 | 0.067 | 0.125 | 0.060 | 0.056 | 0.099 |
| 10% | 0.0053 | 0.0099 | 0.002 | 0.004 | 0.0012 | 0.0012 | 0.0005 | 0.0009 | 0.0004 | 0.0004 | 0.0005 |
| 5% | 0.00097 | 0.00179 | 0.0003 | 0.00064 | 0.00019 | 0.00018 | 0.00008 | 0.00013 | 0.00005 | 0.00005 | 0.00007 |
| 1% | 0.000046 | 0.000073 | 0.000011 | 0.000030 | 0.000007 | 0.000007 | 0.000003 | 0.000004 | 0.000001 | 0.000001 | 0.000002 |
| Delta probs | | | | | | | | | | | |
| Mean | \-8.03% | \-10.30% | \-4.62% | \-6.70% | \-3.38% | \-3.46% | \-2.14% | \-2.37% | \-1.38% | \-1.13% | \-1.57% |
| Max | 99.67% | 98.73% | 99.81% | 99.81% | 99.13% | 98.90% | 99.88% | 99.81% | 99.83% | 99.91% | 99.89% |
| 99.9% | 77.40% | 79.77% | 76.36% | 79.42% | 75.03% | 76.59% | 69.34% | 75.65% | 69.69% | 65.60% | 71.73% |
| 99% | 42.37% | 47.40% | 41.62% | 47.11% | 40.06% | 40.50% | 32.34% | 41.88% | 33.46% | 31.38% | 37.88% |
| 95.00% | 15.79% | 18.51% | 16.32% | 19.86% | 16.05% | 15.56% | 12.41% | 17.30% | 12.83% | 12.71% | 16.04% |
| 90.00% | 6.59% | 7.56% | 7.69% | 9.05% | 7.62% | 7.33% | 5.92% | 8.86% | 6.43% | 6.50% | 8.23% |
| 75.00% | 0.16% | 0.13% | 0.44% | 0.35% | 0.54% | 0.51% | 0.53% | 0.89% | 0.70% | 0.70% | 0.86% |
| Median | \-0.78% | \-1.21% | \-0.18% | \-0.42% | \-0.09% | \-0.09% | \-0.03% | \-0.02% | \-0.01% | \-0.01% | \-0.01% |
| 25.00% | \-11.66% | \-15.85% | \-6.11% | \-9.93% | \-4.65% | \-4.56% | \-2.86% | \-3.40% | \-2.11% | \-1.96% | \-2.66% |
| 10.00% | \-35.57% | \-46.38% | \-23.74% | \-34.08% | \-19.19% | \-18.97% | \-12.61% | \-16.60% | \-10.76% | \-10.12% | \-13.68% |
| 5.00% | \-56.91% | \-68.67% | \-40.94% | \-53.40% | \-33.86% | \-34.31% | \-23.01% | \-30.06% | \-20.07% | \-18.53% | \-24.41% |
| 1.00% | \-91.25% | \-95.39% | \-80.42% | \-87.98% | \-70.51% | \-73.12% | \-55.83% | \-67.16% | \-49.11% | \-44.35% | \-53.65% |
| 0.10% | \-99.61% | \-99.87% | \-98.74% | \-99.76% | \-95.85% | \-95.98% | \-99.92% | \-99.92% | \-82.64% | \-78.71% | \-86.82% |
| Minimum | \-100.00% | \-100.00% | \-100.00% | \-100.00% | \-99.95% | \-99.99% | \-100.00% | \-100.00% | \-99.90% | \-100.00% | \-100.00% |
| RMS Δp | 23.63% | 27.63% | 19.13% | 23.06% | 16.88% | 17.16% | 13.55% | 16.31% | 12.16% | 11.30% | 13.69% |
| Same top | 68.58% | 62.65% | 74.02% | 67.77% | 76.74% | 77.00% | 82.92% | 77.85% | 83.42% | 84.28% | 80.08% |
Image of the above:
~~https://i.imgur.com/35GAKe5.png~~
*EDIT*: Messed up some of the lower calculations! (that's why i included the raw data haha..) here's an updated image:
https://i.imgur.com/hFkza66.png
I also added a logit for the Top P for the size (and made it clearer by multiplying by 100 after), since I think this paints a more clear image for Top P.. Obviously if the model is extremely tiny but sometimes gives the right answer, it'll get a super high Top P/GB, but as the Top P gets closer to 100, that's where the differences matter more. The logit calculation gives a better picture of the differences IMO
I added at the bottom some "metrics", like 1/PPL/MB (since GB was a tiny number)
For all of these, bigger is better (I inversed PPL, KLD, and RMS to get meaningful results, since smaller per GB is a weird metric to look at)
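To make those per-size metrics concrete, here's a small sketch computing them for one column of the table, using the IQ3_XXS (mine) numbers above; the exact constants and rounding the original spreadsheet used may differ slightly:

```python
# Sketch of the per-size metrics described above, for the IQ3_XXS (mine) column.
# logit(p) = ln(p / (1 - p)); "Same top" is treated as a probability here.
import math

size_gb  = 44.96
ppl      = 9.266434
kld_mean = 0.164
rms_dp   = 0.1130            # 11.30% as a fraction
same_top = 0.8428            # 84.28% as a fraction

size_mb = size_gb * 1024
print("1/PPL per MB  :", 1 / ppl / size_mb)
print("1/KLD per MB  :", 1 / kld_mean / size_mb)
print("1/RMSdp per MB:", 1 / rms_dp / size_mb)
print("logit(Top P)/GB x100:", math.log(same_top / (1 - same_top)) / size_gb * 100)
```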
I added some colour to highlight a few things, but DON'T read too much into them, it's purely informational. I can't REALLY say which values are more important (though I will say PPL itself seems pretty useless when even the full BF16 model got over 8)
KLD, RMS, and Top P are all relevant regardless of the PPL, simply because they tell you how similarly a quantization performs to the full model weights. This doesn't mean that one that's closer is strictly better, just more similar
And I share the full information because there are distinct sections where each quant performs admirably
In terms of performance per GB, my IQ3_XXS seems to come out on top (by a hair), but it has by far the worst MAX KLD value.. That's not super concerning since the 99.9% is very reasonable, but it's worth noting that no quant is best across the board.. maybe something to continue striving towards! My optimization search is ongoing :)
More than anything it looks like my IQ3_XXS and Unsloth's UD-Q2_K_XL are the kings of sub 50GB, trading blows across the chart
And if you need even less weight, both my IQ2_S and Unsloth's UD-IQ1_M offer pretty great performance at around 35GB!
Anyways, hope someone finds something interesting in the charts!
| 2025-04-10T00:43:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvlf6m/llama_4_scout_sub_50gb_gguf_quantization_showdown/
|
noneabove1182
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvlf6m
| false | null |
t3_1jvlf6m
|
/r/LocalLLaMA/comments/1jvlf6m/llama_4_scout_sub_50gb_gguf_quantization_showdown/
| false | false |
self
| 69 |
{'enabled': False, 'images': [{'id': 'ip8bisZkd6fEhdjPkHslLdSSroHT2zbva7iyvVrbcwc', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?width=108&crop=smart&auto=webp&s=c65565d4a0bd10a72794b34ee2da203e2495ee98', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?width=216&crop=smart&auto=webp&s=b8385bc7c241878c17b77e981b75095ce86f3f8e', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?width=320&crop=smart&auto=webp&s=d6ca74a69dc673671be49dd7b9da3eda1e5ac012', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?width=640&crop=smart&auto=webp&s=063926c4c679615bc35cca97ba1c7edbda292b2e', 'width': 640}, {'height': 592, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?width=960&crop=smart&auto=webp&s=511c9b8c0d479bfaccff9c74b7f7da39137ca076', 'width': 960}, {'height': 666, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?width=1080&crop=smart&auto=webp&s=7037e76f60f7f28e325fbb2488047ba6cda5fe3f', 'width': 1080}], 'source': {'height': 821, 'url': 'https://external-preview.redd.it/CfIrKW1kZ8qlNhgwJ3ZTjclLRdblfv-mHMoQON5ad9c.png?auto=webp&s=d3c8818b5f1741e7a710d2efb2a916caef11b190', 'width': 1331}, 'variants': {}}]}
|
Why are there not MOE with different sizes?
| 3 |
Is there a reason why we don't have MoEs with experts of different sizes?
For instance, a 7B and a 1B.
Couldn't you take two existing models and add a router that is trained with both efficiency and quality in mind? You'd take up a little more VRAM, but you could get blazing speeds when the request isn't cognitively intensive.
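To make the idea concrete, here is a toy PyTorch sketch of a learned router over two experts of very different widths. It is nothing like production MoE code (real MoE layers route per token inside every transformer block, with top-k gating and load-balancing losses); it just shows the shape of "train a router with efficiency in mind":

```python
# Toy illustration: a learned router choosing between a small and a large expert.
import torch
import torch.nn as nn

class TwoSizeMoE(nn.Module):
    def __init__(self, d_model=512, d_small=256, d_large=2048):
        super().__init__()
        self.router = nn.Linear(d_model, 2)                  # scores for [small, large]
        self.small = nn.Sequential(nn.Linear(d_model, d_small), nn.GELU(),
                                   nn.Linear(d_small, d_model))
        self.large = nn.Sequential(nn.Linear(d_model, d_large), nn.GELU(),
                                   nn.Linear(d_large, d_model))

    def forward(self, x):                                    # x: (batch, seq, d_model)
        probs = torch.softmax(self.router(x), dim=-1)        # soft routing, differentiable
        out = probs[..., 0:1] * self.small(x) + probs[..., 1:2] * self.large(x)
        # An efficiency-aware loss could penalize probs[..., 1] to prefer the cheap expert.
        return out, probs

x = torch.randn(2, 8, 512)
y, routing = TwoSizeMoE()(x)
print(y.shape, routing.mean(dim=(0, 1)))                     # average routing share per expert
```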
| 2025-04-10T01:43:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvmkex/why_are_there_not_moe_with_different_sizes/
|
Alienanthony
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvmkex
| false | null |
t3_1jvmkex
|
/r/LocalLLaMA/comments/1jvmkex/why_are_there_not_moe_with_different_sizes/
| false | false |
self
| 3 | null |
Pre-orders for the GMKtec EVO-X2 with the AMD Strix Halo and 128GB of RAM start on 4/15 in the US and EU.
| 2 |
There's no price listed for the US or EU yet. Pre-orders start on the 15th of April in both. Pre-orders are already available for China. I wonder what the 125% tariff is going to do to the US price.
https://www.gmktec.com/pages/evo-x2
https://de.gmktec.com/en/pages/evo-x2
| 2025-04-10T02:12:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvn4kk/preorders_for_the_gmktec_evox2_with_the_amd_strix/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvn4kk
| false | null |
t3_1jvn4kk
|
/r/LocalLLaMA/comments/1jvn4kk/preorders_for_the_gmktec_evox2_with_the_amd_strix/
| false | false |
self
| 2 | null |
LLMs trained with medical data?
| 1 |
Just that: I've been wondering if there is a local LLM focused on health information.
And if you have used one, what's your experience of it?
| 2025-04-10T02:28:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvnfkk/llms_trained_with_medical_data/
|
According-Delivery44
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvnfkk
| false | null |
t3_1jvnfkk
|
/r/LocalLLaMA/comments/1jvnfkk/llms_trained_with_medical_data/
| false | false |
self
| 1 | null |
Why are most coding models dense rather than MoE?
| 0 |
I imagine that for proficiency not only in Python and TypeScript but also in Java, Rust, C++, C#, SQL, etc., an MoE could make sense if experts were divided into families that share paradigms, such as:
C-derived imperative languages (C, C++, Java, C#, PHP)
Relational/declarative languages (SQL variants from different databases/standards, Prolog, SPARQL)
Systems programming with memory safety (Rust, Swift)
Functional Languages (Haskell, OCaml, F#, Scala)
Dynamic/scripting languages (Python, JavaScript, Ruby)
I imagine this could help improve the quality of results, especially in languages other than Python/JS, and maybe even improve inference speed. However, the reality is that most coding models are dense, and I'm at a loss as to why. Do you know the reasons for this?
| 2025-04-10T03:00:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvo0k2/why_are_most_coding_models_dense_rather_than_moe/
|
TylerDurdenFan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvo0k2
| false | null |
t3_1jvo0k2
|
/r/LocalLLaMA/comments/1jvo0k2/why_are_most_coding_models_dense_rather_than_moe/
| false | false |
self
| 0 | null |
Llama 4's 10M token context window
| 1 |
[removed]
| 2025-04-10T03:05:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvo3q8/llama_4s_10m_token_context_window/
|
Brilliant_Witness_34
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvo3q8
| false | null |
t3_1jvo3q8
|
/r/LocalLLaMA/comments/1jvo3q8/llama_4s_10m_token_context_window/
| false | false |
self
| 1 | null |
Llama 4's 10M token context window
| 1 |
[removed]
| 2025-04-10T03:07:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvo55f/llama_4s_10m_token_context_window/
|
Brilliant_Witness_34
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvo55f
| false | null |
t3_1jvo55f
|
/r/LocalLLaMA/comments/1jvo55f/llama_4s_10m_token_context_window/
| false | false |
self
| 1 | null |
Need layman advice in using a local LLM to fine-tune AI answers.
| 0 |
For context, I am using a local AI (Dolphin 3.0/LM Studio) to write fiction but I want it to craft my prose in a highly specific way. I am aware that prompt engineering can be used for this, but my prompt is pretty large and complex to capture everything at once.
If you have ever used NovelAI or NovelCrafter, they have a separate section where you can fill in all your worldbuilding details, and it helps craft the story for you as you write. I was hoping to do something similar, but locally. I did read about things like having multiple documents and then feeding them to your AI.
I did some searching on Google, Reddit, and YouTube, and even asked ChatGPT for help, but I am sincerely overwhelmed by what I need to do - things like needing to install Python, LoRA, and such. I am honestly lost.
* As a layman who is not familiar with Python and has only dabbled with AI from a surface level, how do I run my own local LLM on my computer while fine-tuning it to help me craft my prose?
* Is the approach above the right approach for me to begin with? Would it be better if I just stick to NovelAI or NovelCrafter? Although the thing is, I don't really like being too reliant on paid subscription services.
Thank you for your time and answers, and I apologize in advance if my questions come off as basic. I've only used AI from the surface level but willing to go local and deeper.
| 2025-04-10T03:24:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvofy1/need_layman_advice_in_using_a_local_llm_to/
|
SpookieOwl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvofy1
| false | null |
t3_1jvofy1
|
/r/LocalLLaMA/comments/1jvofy1/need_layman_advice_in_using_a_local_llm_to/
| false | false |
self
| 0 | null |
What is the cheapest setup for <20B model for data processing?
| 0 |
I’m doing data processing and looking to build a cheap setup that could run a model like Gemma 14B or similar locally for processing CSVs. What could be the cheapest solution?
| 2025-04-10T03:27:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvohrr/what_is_the_cheapest_setup_for_20b_model_for_data/
|
GTHell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvohrr
| false | null |
t3_1jvohrr
|
/r/LocalLLaMA/comments/1jvohrr/what_is_the_cheapest_setup_for_20b_model_for_data/
| false | false |
self
| 0 | null |
Long context summarization: Qwen2.5-1M vs Gemma3 vs Mistral 3.1
| 26 |
I tested long context summarization of these models, using ollama as backend:
Qwen2.5-14b-1m Q8
Gemma3 27b Q4KM (ollama gguf)
Mistral 3.1 24b Q4KM
Using the transcription of this 4hr WAN Show video, it's about 55~63k tokens for these 3 models:
[https://www.youtube.com/watch?v=mk05ddf3mqg](https://www.youtube.com/watch?v=mk05ddf3mqg)
System prompt: [https://pastebin.com/e4mKCAMk](https://pastebin.com/e4mKCAMk)
\---
**Results**:
Qwen2.5 [https://pastebin.com/C4Ss67Ed](https://pastebin.com/C4Ss67Ed)
Gemma3 [https://pastebin.com/btTv6RCT](https://pastebin.com/btTv6RCT)
Mistral 3.1 [https://pastebin.com/rMp9KMhE](https://pastebin.com/rMp9KMhE)
\---
**Observation**:
Qwen2.5 did okay; Mistral 3.1 still has the same repetition issue as Mistral 3.
I don't know if there is something wrong with ollama's implementation, but Gemma 3 is really bad at this - it didn't even mention the AMD card at all.
So I also tested Gemma 3 in **Google** AI Studio, which should have the best implementation for Gemma 3:
**"An internal error has occured"**
Then I tried OpenRouter:
[https://pastebin.com/Y1gX0bVb](https://pastebin.com/Y1gX0bVb)
And it's waaaay better than ollama's Q4. Considering how Mistral's Q4 is doing way better than Gemma's Q4, **I guess there are still some bugs in ollama's Gemma 3 implementation, and you should avoid using it for long context tasks**
| 2025-04-10T04:08:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvp7fo/long_context_summarization_qwen251m_vs_gemma3_vs/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvp7fo
| false | null |
t3_1jvp7fo
|
/r/LocalLLaMA/comments/1jvp7fo/long_context_summarization_qwen251m_vs_gemma3_vs/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': 'oB8NadgLBBIWnjWwptz8jE_OxlKidKA93qkeTI32Vyo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KzzUkQrEQzvtY8E78_C0W25ubPKjcWdSvq5oo3ClCFs.jpg?width=108&crop=smart&auto=webp&s=abdf94117c7ef22c495d03c225d826558a389e92', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KzzUkQrEQzvtY8E78_C0W25ubPKjcWdSvq5oo3ClCFs.jpg?width=216&crop=smart&auto=webp&s=02ebf1720b04f261b3bf0631fe75ed718f46106a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KzzUkQrEQzvtY8E78_C0W25ubPKjcWdSvq5oo3ClCFs.jpg?width=320&crop=smart&auto=webp&s=aed70bfcefc66fa901a2b33a1b463e6c2db10093', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KzzUkQrEQzvtY8E78_C0W25ubPKjcWdSvq5oo3ClCFs.jpg?auto=webp&s=5667d679b4eac6f7a1703d42af0c9c66c8931c1d', 'width': 480}, 'variants': {}}]}
|
Microsoft Compute Driver Model (MCDM) + WSL2 enables VLLM to work with tensor parallelism & allows for TCC to be changed to MCDM via NVIDIA-SMI
| 4 |
If you have an NVIDIA data center card (V100 etc.) that's locked into TCC mode and have tried every hack to change it to WDDM mode for use with WSL2... no more strife. You can now use the command nvidia-smi -g {GPU_ID} -dm 2 to change it to a new mode, "MCDM", without the GRID or vWS license/driver that WDDM mode required (also, the GRID drivers are incompatible with WSL2 even with WDDM enabled anyway - found that out the hard way).
Going to run some tests tomorrow on 4 x V100 32GB PCIe 3.0 x16 + Threadripper Pro and see what kind of tps I can get with tensor parallelism using a Docker Desktop vLLM container, and report back.
| 2025-04-10T04:17:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvpchu/microsoft_compute_driver_model_mcdm_wsl2_enables/
|
ratbastid2000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvpchu
| false | null |
t3_1jvpchu
|
/r/LocalLLaMA/comments/1jvpchu/microsoft_compute_driver_model_mcdm_wsl2_enables/
| false | false |
self
| 4 | null |
The Ultimate MCP Client
| 18 |
Over the past couple of weeks, I've been really immersed in learning about MCP, a new protocol for equipping any LLM with a set of tools that run on your own machine or a remote server you control, giving AI agents all kinds of superpowers to do things like search, etc.
As part of that research, I've already built one very fleshed-out and useful MCP server that I've shared here (I've added much more to it recently though!), the LLM Gateway MCP Server, which lets you use a big model to delegate to a cheaper model (and many more things in addition to that, like running automated multi-round LLM Tournaments, which I also posted about here recently).
To actually use these MCP servers though, you need an MCP client. Most people seem to be using the Claude Desktop app. I tried this and got it to work just fine, but it was a bit annoying to set up and there were lots of things I didn't like about it. I wanted something better.
So two days ago I began work on what I call the Ultimate MCP Client. After \~24 hours of work, it's working and ready and I'm really proud of how amazingly well it turned out. This is going to be a workhorse tool for me personally.
It's pure python and all in a single large .py file which can be deployed as a self-contained uv script if you want. It offers all kinds of features and very rich console output for use interactively in a terminal, along with a CLI. But it can also be used in the background.
That kind of background functionality, orchestrating and coordinating several MCP servers nicely, is how I mostly intend on using it. But once I saw how nice the interactive terminal experience was, I realized that I could slap a FastAPI server on top of it and make a web GUI.
Because I hate unneeded complexity so much, I made the WebGUI a single self-contained HTML file you can just open in your browser (similar to my Your-Source-to-Prompt tool), and it looks awesome using Alpine and Daisy and other nice UI libraries, all loaded via CDN.
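For readers who haven't seen the pattern, the "FastAPI on top, single self-contained HTML file as the GUI" idea looks roughly like the sketch below. The file name and endpoint are placeholders of mine, not the project's actual code; the real client wires these routes to its MCP orchestration internals.

```python
# Minimal sketch of the "API server + one self-contained HTML page" pattern described above.
# gui.html and /ask are placeholders; the real project does far more behind these routes.
from pathlib import Path
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    prompt: str

@app.get("/", response_class=HTMLResponse)
def gui() -> str:
    # Serve the single HTML file (Alpine/Daisy loaded via CDN, no build step).
    return Path("gui.html").read_text()

@app.post("/ask")
def ask(q: Query) -> dict:
    # In the real client this would orchestrate MCP servers; here we just echo.
    return {"answer": f"(placeholder) you asked: {q.prompt}"}

# Run with: uvicorn app:app --reload
```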
| 2025-04-10T04:31:19 |
https://github.com/Dicklesworthstone/ultimate_mcp_client
|
dicklesworth
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvpko7
| false | null |
t3_1jvpko7
|
/r/LocalLLaMA/comments/1jvpko7/the_ultimate_mcp_client/
| false | false | 18 |
{'enabled': False, 'images': [{'id': '9keUr73K7MKB7EZFkz_SAeHy8VaudOadDTSN_Co7flc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?width=108&crop=smart&auto=webp&s=d1f08adeeb529ec78ce8e1f97f34860927903b90', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?width=216&crop=smart&auto=webp&s=bb84d99697a3fb3c28e8ad970bb437eb574b7701', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?width=320&crop=smart&auto=webp&s=bce8f6c6e03b6a9b6574d778272de4f23679ba6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?width=640&crop=smart&auto=webp&s=33c431aaacd4e100ace18968a06867f8300bf396', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?width=960&crop=smart&auto=webp&s=22c0efb8aafaa6426c3890aefe0600ee4ff2f33e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?width=1080&crop=smart&auto=webp&s=264e3ac8e2d8336aa4c1bc86c070159f218f47ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nocIlb0TD3sqK4UU27jSBiEel3Kt83uLvo9UMt7ojGk.jpg?auto=webp&s=a7dad1f58589c11885d2c9c8462f5b799bfbe5b6', 'width': 1200}, 'variants': {}}]}
|
|
LiveIdeaBench-v2 Update: Dataset & Leaderboard
| 16 | 2025-04-10T04:48:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvpujt/liveideabenchv2_update_dataset_leaderboard/
|
realJoeTrump
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvpujt
| false | null |
t3_1jvpujt
|
/r/LocalLLaMA/comments/1jvpujt/liveideabenchv2_update_dataset_leaderboard/
| false | false | 16 | null |
||
Bindu Reddy, CEO of AbacusAI (LiveBench) states Qwen 3 “is coming in hours”
| 126 | 2025-04-10T04:52:17 |
https://x.com/bindureddy/status/1910185483545776630?s=46
|
TKGaming_11
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvpwgc
| false | null |
t3_1jvpwgc
|
/r/LocalLLaMA/comments/1jvpwgc/bindu_reddy_ceo_of_abacusai_livebench_states_qwen/
| false | false | 126 |
{'enabled': False, 'images': [{'id': 'm_POPAy1pb79QgEcYal0-x1Rcbfwanhuur_kmUV8SRE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/95PFso5FRvaqkffIUa3P3b8toY1_H2zLUQj524lS6zs.jpg?width=108&crop=smart&auto=webp&s=61d55dbb224ea9b043b89c3770cb1097d7dc7459', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/95PFso5FRvaqkffIUa3P3b8toY1_H2zLUQj524lS6zs.jpg?auto=webp&s=8134240d2f3cd36065a3a039c186f5b78cf1a49e', 'width': 200}, 'variants': {}}]}
|
||
How many r's does strrrrrrrrrawberry have? (qwq-32b@q4_k_m)
| 0 |
```
Okay, let me try to figure out how many 'r's are in the word "strrrrrrrrrawberry". Hmm, first I need to look at each letter carefully. Let me write it down step by step.
The word is spelled s-t followed by a bunch of r's and then the rest of the letters. Wait, actually let me spell it out properly: S-T-R-R-R-R-R-R-R-R-R-A-W-B-E-R-R-Y. Oh right, sometimes people might miscount because there are so many Rs in the beginning.
Let me break it down:
Starting from the first letter:
1. S
2. T
3. R (the first 'R') Then here come a lot of 'Rs'... Let's count each one after T until we get to another letter.
4. R
5. R
6. R
7. R
8. R
9. R
10. R Wait, how many are there exactly? The word is written as "strrrrrrrrrawberry", so between the T and the A, how many Rs are there?
Looking at "strrrrrrrrrawberry": after 'st', the next letters are a series of Rs. Let's see: s-t followed by ten r's maybe? Because the word is written as "str" with multiple rs. Wait, actually the user wrote it as strrrrrrrrrawberry. The number of R's in the middle might be represented by a lot of 'r's.
Wait let me check again:
The original question says: How many r's does strrrrrrrrrawberry have?
So the word given is "strrrrrrrrrawberry". Let me count the Rs in that exact spelling. So from where the letters are, after s and t comes a bunch of 'r's before the a.
Looking at "strrrrrrrrrawberry": starting with S-T followed by how many R's? The user spelled it as "str" but then there are multiple 'r's. Let me see: between the T and the A in "str...awberry".
Wait, maybe I can count them individually.
Breaking down each letter:
S (1)
T (2)
R (3)
Then more Rs...
Let me write out all letters one by one:
s-t-r-r-r-r-r-r-r-r-r-a-w-b-e-r-r-y
Now let's go step by step. After the 't' comes an r, then another, etc.
Starting from the beginning:
1. S
2. T
3. R (first)
4. R (second)
5. R (third)
6. R (fourth)
7. R (fifth)
8. R (sixth)
9. R (seventh)
10. R (eighth)
11. R (ninth)
12. R (tenth?) Wait, wait a second. Let's actually count how many 'r's are in "strrrrrrrrrawberry".
The user wrote it as "str" followed by ten r's? Wait no, the actual word is written with 9 or 10 rs?
Let me recount:
Looking at the string: strrrrrrrrrawberry
Between s and t is just one 'r'? No, wait, the letters after T are all Rs. Let me see.
Wait the entire sequence is S-T-R-R-R-R-R-R-R-R-R-A-W-B-E-R-R-Y.
Let's count each R:
After S and T (positions 1 and 2), starting from position3:
Positions:
3: R
4: R
5: R
6: R
7: R
8: R
9: R
10: R
11: R
Wait, so that's from position3 to position11. How many Rs is that?
From 3 to 11 inclusive: 11-3+1 = 9 Rs.
Then after the A (position12), there are other letters until we get to more Rs at the end.
Looking further:
After "strrrrrrrrrawber", there's another "r" and then another before 'y'.
Breaking down the entire word again step by step with all letters:
Let me list each character one by one:
1. S
2. T
3. R
4. R
5. R
6. R
7. R
8. R
9. R
10. R
11. R (That's the 9 Rs between T and A) Then:
12. A
13. W
14. B
15. E
16. R (Another R here)
17. R (Second one after E)
18. Y
So in total, there are two more Rs at positions 16 and 17.
Therefore, the total number of Rs is:
The initial 9 between T and A plus the final two before 'y', making it a total of 11?
Wait wait, let me confirm again. Let's count all R instances in the entire word:
Starting with S-T followed by nine Rs (positions3-11), then A-W-B-E followed by two more Rs (positions16 and 17) before Y.
So that would be 9 +2 = 11 Rs total?
```
It ultimately got stuck in a loop... It was a bit hilarious watching it struggle for the first few minutes, but then I felt bad =/ It's LLM sniping!
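For the record, the question the model agonized over is one line of Python, which is part of what makes it such an effective trap (the answer depends only on how many r's you actually typed):

```python
# Deterministic answer to the prompt above.
word = "strrrrrrrrrawberry"   # copy the spelling exactly as asked
print(word.count("r"))
```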
| 2025-04-10T05:23:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvqd5b/how_many_rs_does_strrrrrrrrrawberry_have_qwq32bq4/
|
itchykittehs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvqd5b
| false | null |
t3_1jvqd5b
|
/r/LocalLLaMA/comments/1jvqd5b/how_many_rs_does_strrrrrrrrrawberry_have_qwq32bq4/
| false | false |
self
| 0 | null |
What are your thoughts on NVIDIA's Llama 3.1 Nemotron series?
| 1 |
[removed]
| 2025-04-10T05:28:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvqfxn/what_are_your_thoughts_on_nvidias_llama_31/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvqfxn
| false | null |
t3_1jvqfxn
|
/r/LocalLLaMA/comments/1jvqfxn/what_are_your_thoughts_on_nvidias_llama_31/
| false | false |
self
| 1 | null |
Second Me : Fully Local AI Self with Identity & Memory Modeling——with Docker & API support now live
| 59 |
Hey everyone,
I'm one of the contributors to [Second Me](https://github.com/Mindverse/Second-Me), an open-source, fully local AI project designed for personal memory, reasoning, and identity modeling. Think of it as a customizable “AI self” — trained on your data, aligned with your values, and fully under your control (not OpenAI’s).
We hit **6,000+ stars in 7 days**, which is wild — but what’s even cooler is what’s been happening **after** launch:
# 🔧 What It Does (tl;dr):
* **Personal AI**, locally trained and run. 100% privacy with full local execution.
* **Hierarchical Memory Modeling (HMM)** for authentic, long-term personalization.
* **Me-alignment** structure that mirrors individual values and identity.
* **Second Me Protocol (SMP)** for decentralized, peer-to-peer AI interaction.
**New in this release:**
* Full Docker support for macOS (Apple Silicon), Windows, and Linux
* OpenAI-Compatible API Interface
* MLX training support (Beta)
* Significant performance enhancements
# 💻 Community Contributions
In just 2 weeks post-launch:
* 60+ PRs, 70+ issues
* Contributors from Tokyo to Dubai: students, academics, enterprise devs
Highlights from the GitHub:
* 🤖 WeChat bot integration — #81 by u/Zero-coder
* 🌏 Japanese README localization — #115 by u/eltociear
* 📁 Improved file resource management — #74 by u/mahdirahimi1999
* 🔐 File name validation for added security — #62 by u/umutcrs
Thanks to their and others’ feedback, features like:
* 🔄 Multi-platform deployment
* 📝 Note-based continuous training
…have been added to the roadmap.
📈 **In the Wild**
Tech creator u/GOROman documented a full workflow for deploying Second Me, training it on **75GB of his own X posts since 2007** — and even bought a Mac Studio just for it.
Inspired by his post, u/Yuzunose envisioned linking Second Me with **VRChat**, giving AI a persistent virtual persona to interact with others on your behalf.
# ⏭️ What’s Next?
* **Your Identity as an Interface**: Use your AI self as a consistent entry point across platforms — carrying your identity, memory, and thought process — accessible by other users.
* **Deep Reasoning & Continuous Learning**: We’re integrating Chain of Thought-style reasoning (think OpenAI o1 / DeepSeek R1) and one-click continuous training. The more data you feed it, the more your Second Me evolves to think like *you*.
🔗 GitHub: [https://github.com/Mindverse/Second-Me](https://github.com/Mindverse/Second-Me)
📄 Paper: [https://arxiv.org/abs/2503.08102](https://arxiv.org/abs/2503.08102)
We’re building Second Me so that your AI extends *your* capabilities — not someone else’s platform. If you value privacy, customization, and digital freedom, we’d love your thoughts, feedback, or contributions.
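For the OpenAI-Compatible API Interface, here's a rough sketch of what calling a local instance from Python might look like. The base URL, port, and model name below are placeholder assumptions for illustration; check the repo docs for the actual values your deployment exposes.

```python
# Minimal sketch: talking to a local OpenAI-compatible endpoint with the
# official openai client. The base_url, port, and model name are assumptions
# for illustration only; see the Second Me docs for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8002/v1",  # hypothetical local endpoint
    api_key="not-needed-for-local",       # local servers typically ignore this
)

response = client.chat.completions.create(
    model="second-me",  # hypothetical model identifier
    messages=[{"role": "user", "content": "What did I say about GPUs last month?"}],
)
print(response.choices[0].message.content)
```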
| 2025-04-10T05:48:28 |
DontPlayMeLikeAFool
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvqqvm
| false | null |
t3_1jvqqvm
|
/r/LocalLLaMA/comments/1jvqqvm/second_me_fully_local_ai_self_with_identity/
| false | false | 59 |
{'enabled': True, 'images': [{'id': 'H2FyOLHDS3RWmIL6C8wVlFLaJomJQau4cSck7X08LQY', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zc18secp4yte1.png?width=108&crop=smart&auto=webp&s=44ff3c82a2d37167988ca5bf45b106272d798b65', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zc18secp4yte1.png?width=216&crop=smart&auto=webp&s=b1789bf5ad951b29152a4ce6c648be1864f7bae5', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/zc18secp4yte1.png?width=320&crop=smart&auto=webp&s=0883a1af33ee45a77407aab80eaa49de342c5ac6', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/zc18secp4yte1.png?width=640&crop=smart&auto=webp&s=38886c3f5d327276b3971dbaf783a586924f1f1a', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/zc18secp4yte1.png?width=960&crop=smart&auto=webp&s=e27a53450f1cfcb4ddc4c14ec35183ff956a13d7', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/zc18secp4yte1.png?width=1080&crop=smart&auto=webp&s=d19d93a6e0aa98fcd630eb6d44227e8a010bb21e', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/zc18secp4yte1.png?auto=webp&s=28056f32e5651a70a3b1635b2ef3e232fd9cc572', 'width': 1600}, 'variants': {}}]}
|
||
Llama 4 answers $MODERATION$
| 1 |
[removed]
| 2025-04-10T05:50:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvqros/llama_4_awnsers_moderation/
|
SubjectCellist609
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvqros
| false | null |
t3_1jvqros
|
/r/LocalLLaMA/comments/1jvqros/llama_4_awnsers_moderation/
| false | false |
self
| 1 | null |
Add powa to 2015 HPZ with 3060
| 0 |
As far as I've found online, I can add a second Nvidia GPU and basically sum the VRAM?
I have an older HP Z420 w/ a 2667v2, so maybe one of those old 24GB accelerators could play nice with my modern GPU?
Would it be fun to add an M40 with a 3060? I have fun with Ollama and ComfyUI.
Obviously I would need lots of extra cooling and a larger PSU, but that's part of the fun.
I have a 12GB 3060 and 64GB of RAM, and can slowly run models up to around 45GB. DDR3 1866MHz, hah, old ECC RAM. Task Manager will show 44GB of VRAM, no idea how they got that number.
| 2025-04-10T05:54:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvqu0n/add_powa_to_2015_hpz_with_3060/
|
BrassTrouts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvqu0n
| false | null |
t3_1jvqu0n
|
/r/LocalLLaMA/comments/1jvqu0n/add_powa_to_2015_hpz_with_3060/
| false | false |
self
| 0 | null |
Gemini Robotics brings powerful AI models into the physical world
| 1 |
[removed]
| 2025-04-10T06:14:16 |
https://aitech.fyi/post/gemini-robotics-brings-ai-into-the-physical-world/
|
always-a-learner-88
|
aitech.fyi
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvr4jo
| false | null |
t3_1jvr4jo
|
/r/LocalLLaMA/comments/1jvr4jo/gemini_robotics_brings_powerful_ai_models_into/
| false | false |
default
| 1 | null |
Qwen Dev: Qwen3 not gonna release "in hours", still need more time
| 656 | 2025-04-10T07:29:02 |
AaronFeng47
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvs66w
| false | null |
t3_1jvs66w
|
/r/LocalLLaMA/comments/1jvs66w/qwen_dev_qwen3_not_gonna_release_in_hours_still/
| false | false | 656 |
{'enabled': True, 'images': [{'id': '49SRAeaT8WTjq4pWfLW_GKycVZuG1_Jk5dP26BE803Q', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?width=108&crop=smart&auto=webp&s=8fdfb51c02903be9011a7f94470bf410bc74b577', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?width=216&crop=smart&auto=webp&s=7995c060e109642d074e700690f2bc2f3d52e780', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?width=320&crop=smart&auto=webp&s=2f0208bfcc0268eb29f65c8877d7674da4184517', 'width': 320}, {'height': 472, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?width=640&crop=smart&auto=webp&s=acd7d81fbe37029fcaaed94d7eab854f4dea3442', 'width': 640}, {'height': 708, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?width=960&crop=smart&auto=webp&s=805bfb9d0fd2495ceedb08a7f823f888ff530c15', 'width': 960}, {'height': 797, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?width=1080&crop=smart&auto=webp&s=5e7518bd3b134bb67842d3f6caee558c0dd03130', 'width': 1080}], 'source': {'height': 797, 'url': 'https://preview.redd.it/3kcfx9xnmyte1.png?auto=webp&s=59330933050937d6f179e60235bdec08c07635c3', 'width': 1080}, 'variants': {}}]}
|
|||
Llama 4 Maverick (400B) on "A single 3090"
| 1 |
[removed]
| 2025-04-10T07:34:20 |
https://imgur.com/a/PKPRtz5
|
mrjackspade
|
imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvs8rt
| false |
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 60, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FPKPRtz5%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D500&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FPKPRtz5&image=https%3A%2F%2Fi.imgur.com%2FiwLp2Lk.jpg%3Ffbplay&type=text%2Fhtml&schema=imgur" width="500" height="60" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 356, 'thumbnail_url': 'https://i.imgur.com/iwLp2Lk.jpg?fbplay', 'thumbnail_width': 764, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/PKPRtz5', 'version': '1.0', 'width': 500}, 'type': 'imgur.com'}
|
t3_1jvs8rt
|
/r/LocalLLaMA/comments/1jvs8rt/llama_4_maverick_400b_on_a_single_3090/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'oRzREd6T9KrTtZX-s60dcvRks9M-xxkR1Pam6LuTRy0', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/MxtEmYN1SxqqUqJUL5ql2v75JcxgN5T_3sQ1ABmIgc4.jpg?width=108&crop=smart&auto=webp&s=5e4518f5bed364667985eba645bdbfbd19f5ccbc', 'width': 108}, {'height': 100, 'url': 'https://external-preview.redd.it/MxtEmYN1SxqqUqJUL5ql2v75JcxgN5T_3sQ1ABmIgc4.jpg?width=216&crop=smart&auto=webp&s=24783246c7367c5ff4758c051326c1210d3efc64', 'width': 216}, {'height': 149, 'url': 'https://external-preview.redd.it/MxtEmYN1SxqqUqJUL5ql2v75JcxgN5T_3sQ1ABmIgc4.jpg?width=320&crop=smart&auto=webp&s=f1ccd00464df04f8d0cd954b2d275fea819ac1ad', 'width': 320}, {'height': 298, 'url': 'https://external-preview.redd.it/MxtEmYN1SxqqUqJUL5ql2v75JcxgN5T_3sQ1ABmIgc4.jpg?width=640&crop=smart&auto=webp&s=054c9e917d517375ac851af7043944cdb98fd31d', 'width': 640}], 'source': {'height': 356, 'url': 'https://external-preview.redd.it/MxtEmYN1SxqqUqJUL5ql2v75JcxgN5T_3sQ1ABmIgc4.jpg?auto=webp&s=99da90cb7aa93365f5aa0c793650464ac1d96f08', 'width': 764}, 'variants': {}}]}
|
|
Best Approach to Extract Key Data from a Structured PDF with LLM
| 1 |
[removed]
| 2025-04-10T07:42:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvscg8/best_approach_to_extract_key_data_from_a/
|
Total_Ad6084
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvscg8
| false | null |
t3_1jvscg8
|
/r/LocalLLaMA/comments/1jvscg8/best_approach_to_extract_key_data_from_a/
| false | false |
self
| 1 | null |
Offline AI Research Repo
| 1 |
[removed]
| 2025-04-10T08:08:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvsp8s/offline_ai_research_repo/
|
wakoma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvsp8s
| false | null |
t3_1jvsp8s
|
/r/LocalLLaMA/comments/1jvsp8s/offline_ai_research_repo/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'jY3V5g1UW4OexL8BrGIq_lvlkNTJgO8dSuua2UU740E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?width=108&crop=smart&auto=webp&s=3ba1d1eab9d97b4753ef5c282fa1e91cd08975e1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?width=216&crop=smart&auto=webp&s=dc1077ba6cfb383d16aab27ae23f4a4a8566f415', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?width=320&crop=smart&auto=webp&s=3f8c97133c8a2530b154b997f677b6bcc42f51fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?width=640&crop=smart&auto=webp&s=c08f80dfacba7a76bc973f54328a1a441527548d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?width=960&crop=smart&auto=webp&s=9fe11ccc8bb66401dec001d452661a4850b86b3b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?width=1080&crop=smart&auto=webp&s=50ab2e0b0f304e6027bfc31c742ed05007943759', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/086XwznYu9pyMd4DZBcKfs1VK3cYGRJdy0_QNOqDo8A.jpg?auto=webp&s=0eefa566db7df74b6ff26e129308e1a111a4c2f3', 'width': 1200}, 'variants': {}}]}
|
|
Which one do you think release better models in the next 2 weeks
| 0 |
[View Poll](https://www.reddit.com/poll/1jvspda)
| 2025-04-10T08:09:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvspda/which_one_do_you_think_release_better_models_in/
|
FRENLYFROK
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvspda
| false | null |
t3_1jvspda
|
/r/LocalLLaMA/comments/1jvspda/which_one_do_you_think_release_better_models_in/
| false | false |
self
| 0 | null |
Are there any local llms for deep search?
| 1 |
I was wondering if you recommend any local llm models for the task of deep internet search?
| 2025-04-10T08:23:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvsvxc/are_there_any_local_llms_for_deep_search/
|
sbs1799
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvsvxc
| false | null |
t3_1jvsvxc
|
/r/LocalLLaMA/comments/1jvsvxc/are_there_any_local_llms_for_deep_search/
| false | false |
self
| 1 | null |
Just did a deep dive into Google's Agent Development Kit (ADK). Here are some thoughts, nitpicks, and things I loved (unbiased)
| 62 |
1. The CLI is excellent. adk web, adk run, and api\_server make it super smooth to start building and debugging. It feels like a proper developer-first tool. Love this part.
2. The docs have some unnecessary setup steps, like creating folders manually, that add friction for no real benefit.
3. Support for multiple model providers is impressive. Not just Gemini, but also GPT-4o, Claude Sonnet, LLaMA, etc, thanks to LiteLLM. Big win for flexibility.
4. Async agents and conversation management introduce unnecessary complexity. It’s powerful, but the developer experience really suffers here.
5. Artifact management is a great addition. Being able to store/load files or binary data tied to a session is genuinely useful for building stateful agents.
6. The different types of agents feel a bit overengineered. LlmAgent works but could’ve stuck to a cleaner interface. Sequential, Parallel, and Loop agents are interesting, but having three separate interfaces instead of a unified workflow concept adds cognitive load. Custom agents are nice in theory, but I’d rather just plug in a Python function.
7. AgentTool is a standout. Letting one agent use another as a tool is a smart, modular design.
8. Eval support is there, but again, the DX doesn’t feel intuitive or smooth.
9. Guardrail callbacks are a great idea, but their implementation is more complex than it needs to be. This could be simplified without losing flexibility.
10. Session state management is one of the weakest points right now. It’s just not easy to work with.
11. Deployment options are solid. Being able to deploy via Agent Engine (GCP handles everything) or use Cloud Run (for control over infra) gives developers the right level of control.
12. Callbacks, in general, feel like a strong foundation for building event-driven agent applications. There’s a lot of potential here.
13. Minor nitpick: the artifacts documentation currently points to a 404.
Final thoughts
Frameworks like ADK are most valuable when they empower beginners and intermediate developers to build confidently. But right now, the developer experience feels like it's optimized for advanced users only. The ideas are strong, but the complexity and boilerplate may turn away the very people who’d benefit most. A bit of DX polish could make ADK the go-to framework for building agentic apps at scale.
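As a footnote to point 7: the agent-as-tool idea is easy to illustrate in plain Python. The sketch below is framework-agnostic and only shows the pattern, not ADK's actual API; every class and function name in it is made up.

```python
# Framework-agnostic sketch of the "agent as a tool" pattern (point 7).
# None of this is ADK's real API; the names are invented for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]


class SimpleAgent:
    def __init__(self, name: str, tools: list[Tool] | None = None):
        self.name = name
        self.tools = tools or []

    def handle(self, request: str) -> str:
        # A real agent would let the LLM pick a tool; here we just use the first one.
        if self.tools:
            return self.tools[0].run(request)
        return f"[{self.name}] no tool available for: {request}"

    def as_tool(self) -> Tool:
        # The AgentTool idea: expose a whole agent behind a plain tool interface.
        return Tool(self.name, f"delegate to {self.name}", self.handle)


pdf_agent = SimpleAgent("pdf_parser", tools=[Tool("extract", "extract text", lambda r: f"extracted: {r}")])
orchestrator = SimpleAgent("orchestrator", tools=[pdf_agent.as_tool()])
print(orchestrator.handle("parse invoice.pdf"))
```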
| 2025-04-10T08:23:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvsvzj/just_did_a_deep_dive_into_googles_agent/
|
Any-Cockroach-3233
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvsvzj
| false | null |
t3_1jvsvzj
|
/r/LocalLLaMA/comments/1jvsvzj/just_did_a_deep_dive_into_googles_agent/
| false | false |
self
| 62 | null |
Looking for a Portable Device for hosting LLM Model
| 0 |
Looking for a all in one unit that is portable for some AI Development.
I was looking at: Jetson Orin Nano Super Developer Kit
But as i see its geared more towards robotics, am not sure if its the best.
I travel alot to and from office's, We dont want to invest in server costs yet. As this AI model is complex and will take time to deploy before we invest in hosting costs.
Hence the needs for something portable, Something i can plug into my laptop and mains and connect over USB / Network to continue development.
I wont need the latest and greatest models, But something fairly recent ish as it will be producing code.
Can anyone recommend anything similar to the: Jetson Orin Nano Super Developer Kit
Or can provide some feedback on the device to how it performed please and thanks
| 2025-04-10T08:47:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvt6vk/looking_for_a_portable_device_for_hosting_llm/
|
djhamilton
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvt6vk
| false | null |
t3_1jvt6vk
|
/r/LocalLLaMA/comments/1jvt6vk/looking_for_a_portable_device_for_hosting_llm/
| false | false |
self
| 0 | null |
[Discussion] Why we stopped relying on ChatGPT API - and started fine-tuning LLaMA models instead
| 1 |
[removed]
| 2025-04-10T08:55:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvtaqq/discussion_why_we_stopped_relying_on_chatgpt_api/
|
soman_yadav
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvtaqq
| false | null |
t3_1jvtaqq
|
/r/LocalLLaMA/comments/1jvtaqq/discussion_why_we_stopped_relying_on_chatgpt_api/
| false | false |
self
| 1 | null |
I'm releasing the speech version of Gemma-3
| 1 |
[removed]
| 2025-04-10T08:58:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvtcbv/im_releasing_the_speech_version_of_gemma3/
|
Dry_Attorney8708
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvtcbv
| false | null |
t3_1jvtcbv
|
/r/LocalLLaMA/comments/1jvtcbv/im_releasing_the_speech_version_of_gemma3/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-0iI1xUg8fpPMAjMUXHnf48Hjft6S3pkG1ZXUZaPxqg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=108&crop=smart&auto=webp&s=56f3391021a720b61b22f497f94285c36022a69f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=216&crop=smart&auto=webp&s=1e6768574b5c1b5f0f54ecc7b15f8090224ab06d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=320&crop=smart&auto=webp&s=9a3bf9a60cd7c4a64db3b000bed7ccb45a21b4c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=640&crop=smart&auto=webp&s=bebaa7d37368c722eb6e83d378b97a22303c0920', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=960&crop=smart&auto=webp&s=2ac58a73b701157892a636ff0536dbf102cb1d94', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=1080&crop=smart&auto=webp&s=4368834c2b58e1f917fb004f9b51fa4caec5e53b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?auto=webp&s=87238a7ae4adc908fbcaa377bba809a202e1cf6e', 'width': 1200}, 'variants': {}}]}
|
I'm releasing the speech version of Gemma-3
| 1 |
[removed]
| 2025-04-10T09:02:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvtefi/im_releasing_the_speech_version_of_gemma3/
|
Dry_Attorney8708
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvtefi
| false | null |
t3_1jvtefi
|
/r/LocalLLaMA/comments/1jvtefi/im_releasing_the_speech_version_of_gemma3/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-0iI1xUg8fpPMAjMUXHnf48Hjft6S3pkG1ZXUZaPxqg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=108&crop=smart&auto=webp&s=56f3391021a720b61b22f497f94285c36022a69f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=216&crop=smart&auto=webp&s=1e6768574b5c1b5f0f54ecc7b15f8090224ab06d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=320&crop=smart&auto=webp&s=9a3bf9a60cd7c4a64db3b000bed7ccb45a21b4c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=640&crop=smart&auto=webp&s=bebaa7d37368c722eb6e83d378b97a22303c0920', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=960&crop=smart&auto=webp&s=2ac58a73b701157892a636ff0536dbf102cb1d94', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?width=1080&crop=smart&auto=webp&s=4368834c2b58e1f917fb004f9b51fa4caec5e53b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VN76Tf5D_olKXJU2eGJWx8RjCxggw8BEg8tHov6edbc.jpg?auto=webp&s=87238a7ae4adc908fbcaa377bba809a202e1cf6e', 'width': 1200}, 'variants': {}}]}
|
[Help] Running Local LLMs on MacBook Pro M1 Max – Speed Issues, Reasoning Models, and Agent Workflows
| 1 |
[removed]
| 2025-04-10T09:06:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvtgj9/help_running_local_llms_on_macbook_pro_m1_max/
|
another_canadian_007
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvtgj9
| false | null |
t3_1jvtgj9
|
/r/LocalLLaMA/comments/1jvtgj9/help_running_local_llms_on_macbook_pro_m1_max/
| false | false |
self
| 1 | null |
which local LLM is better for text analysis tasks?
| 1 |
[removed]
| 2025-04-10T09:17:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvtm2k/which_local_llm_is_better_for_text_analysis_tasks/
|
Appropriate-Raise600
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvtm2k
| false | null |
t3_1jvtm2k
|
/r/LocalLLaMA/comments/1jvtm2k/which_local_llm_is_better_for_text_analysis_tasks/
| false | false |
self
| 1 | null |
"Dragontail" model at LMarena is a potential beast
| 71 |
I'm curious if anyone has any suspicions about the true identity behind the Dragontail model at LMArena. From what I've seen so far, this mysterious model performs on par with top-tier models like o3-mini-high and claude-3-7-sonnet-20250219-thinking-32k, but what sets it apart from them is that it consistently delivers the correct answers (on tedious mathematical problems). Sadly, open-weights models such as R1, Llama 4, and Cohere's are not even close to being able to solve them. There is also a (slightly worse) Shadebrook model that I suspect is related to it.
Does anyone have any theories or insights about which model might actually be powering this beast?
| 2025-04-10T09:19:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvtn1t/dragontail_model_at_lmarena_is_a_potential_beast/
|
IrisColt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvtn1t
| false | null |
t3_1jvtn1t
|
/r/LocalLLaMA/comments/1jvtn1t/dragontail_model_at_lmarena_is_a_potential_beast/
| false | false |
self
| 71 | null |
Google Dropped "A2A": An Open Protocol for Different AI Agents to Finally Play Nice Together?
| 0 |
Something potentially significant landed: Google, with a bunch of partners (Salesforce, Langchain, SAP, etc.), released the **Agent2Agent (A2A) protocol**. Might be worth a look if you're building or thinking about agentic systems.
**The Gist (for Developers):**
A2A is basically an open spec aiming to standardize how different AI agents – built using potentially different frameworks (think LangGraph, CrewAI, Genkit, custom stuff) or by different vendors – can communicate and coordinate tasks. It's trying to solve the "walled garden" problem where your agents can't easily talk to each other.
**Why This Matters (Technically):**
* **Interoperability:** Imagine your Python/LangGraph agent being able to discover and delegate a specific task to a JavaScript/Genkit agent without needing custom integration glue for every pair. A2A defines the contract.
* **Modularity:** Could enable building smaller, specialized "tool" agents (e.g., one really good at parsing specific PDF types, another for interacting with a legacy API) that other, more general agents can call via a standard protocol. Think microservices, but for agent capabilities.
* **Standard Foundation:** Built on familiar tech: HTTP, JSON-RPC 2.0, Server-Sent Events (SSE) for streaming updates. Not some completely alien stack.
* **"Opaque Execution":** Agents interact based on defined inputs/outputs (Tasks, Messages, Artifacts) without exposing their internal implementation, tools, or prompts. This is crucial for security and IP.
* **Core Concepts:** Defines Agent Card (capabilities discovery), Task (the unit of work), Message/Part (communication content, handles text/files/data), Artifact (results).
**What Could We Build With This?**
Instead of just thinking business models, think about the technical possibilities:
* Complex workflows spanning multiple agent frameworks without duct tape.
* Creating reusable, specialized agents that act like callable services within your architecture.
* Orchestrating actions across different SaaS tools that expose A2A endpoints.
* Maybe even simplifying the development of multi-agent systems by standardizing the communication layer.
**The Catch?**
It's brand new. Adoption is everything. Will major frameworks bake this in? Will it fragment? How robust are the security and discovery mechanisms in practice? Debugging distributed agent interactions could be... fun. We'll have to see how it evolves.
We built awesome-a2a repo for this:
Since finding specs, examples, and implementations for this new thing will be scattered, we started an **awesome-a2a** list to collect everything useful for developers trying to understand or use A2A.
➡️ **Check it out & Contribute:** [https://github.com/ai-boost/awesome-a2a](https://github.com/ai-boost/awesome-a2a)
It's just getting started, but the goal is to have one place for:
* Links to the spec details
* Code samples (official and community)
* Implementations in different languages/frameworks
* Related tools or libraries
* Good tutorials or deep dives
Please star/watch it if you're interested, and **definitely send PRs** with anything you find or build. Let's make this a solid resource for the community.
[a2a awesome](https://preview.redd.it/jb3x9h8dhzte1.png?width=1590&format=png&auto=webp&s=ed4fb0f72c2c14ababb84b3c2a498f5d19ee1f98)
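For a rough feel of the wire format, here's a small Python sketch of fetching an Agent Card and sending a task as a JSON-RPC 2.0 request. The endpoint path, method name, and payload fields are approximations from my reading of the spec, so treat them as assumptions and verify against the official docs.

```python
# Rough sketch of an A2A client interaction. The well-known path, method name,
# and payload fields are approximations of the spec, not verified against it.
import uuid
import requests

AGENT_BASE = "https://agent.example.com"  # hypothetical A2A server

# 1. Discover capabilities via the Agent Card (assumed to be a well-known JSON doc).
card = requests.get(f"{AGENT_BASE}/.well-known/agent.json", timeout=10).json()
print("Agent:", card.get("name"), "| skills:", [s.get("id") for s in card.get("skills", [])])

# 2. Send a task as a JSON-RPC 2.0 request whose message is made of parts.
payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",  # assumed method name
    "params": {
        "id": str(uuid.uuid4()),  # task id
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize Q3 revenue from the attached report."}],
        },
    },
}
result = requests.post(AGENT_BASE, json=payload, timeout=30).json()
print(result.get("result", result.get("error")))
```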
| 2025-04-10T10:21:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvuitv/google_dropped_a2a_an_open_protocol_for_different/
|
gpt-0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvuitv
| false | null |
t3_1jvuitv
|
/r/LocalLLaMA/comments/1jvuitv/google_dropped_a2a_an_open_protocol_for_different/
| false | false | 0 | null |
|
Replicating ollamas output in vLLM
| 2 |
I haven't read through the depths of the documentation or the code repo for Ollama, so I don't know if this is already stated or mentioned somewhere.
Is there a way to replicate the outputs that Ollama gives in vLLM? I keep running into issues where the parameters seem to need changing depending on the task, or need a lot more tweaking in the configuration. In Ollama, almost every time, the outputs are consistently good, readable, and make sense, even if there are some hallucinations. In vLLM I sometimes run into repetition, verbosity, or just poor outputs.
So, what can I do to replicate Ollama's behavior in vLLM?
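For reference, one concrete thing to try first is pinning vLLM's sampling parameters to Ollama's defaults, since the two differ and a lot of the repetition/verbosity gap can come from sampling alone. The values below (temperature 0.8, top_p 0.9, top_k 40, repeat_penalty 1.1) are Ollama's documented defaults as I remember them, so double-check them and any overrides in the model's Modelfile. Also make sure vLLM applies the model's chat template the same way Ollama does; skipping the template is another common source of worse outputs.

```python
# Sketch: align vLLM sampling with Ollama-style defaults.
# The values below are believed to match Ollama's defaults; verify against
# Ollama's docs and the specific model's Modelfile before relying on them.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # example model, swap in your own

params = SamplingParams(
    temperature=0.8,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.1,  # rough analogue of Ollama's repeat_penalty
    max_tokens=512,
)

outputs = llm.generate(["Explain the KV cache in two sentences."], params)
print(outputs[0].outputs[0].text)
```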
| 2025-04-10T10:32:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvuoun/replicating_ollamas_output_in_vllm/
|
OPlUMMaster
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvuoun
| false | null |
t3_1jvuoun
|
/r/LocalLLaMA/comments/1jvuoun/replicating_ollamas_output_in_vllm/
| false | false |
self
| 2 | null |
Lamarckvergence-14B
| 0 |
My former Open LLM top-ranked merge model, Lamarckvergence-14B, just got 14k+ downloads in 2 days.
Model: [https://huggingface.co/suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)
| 2025-04-10T11:28:56 |
suayptalha
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvvloi
| false | null |
t3_1jvvloi
|
/r/LocalLLaMA/comments/1jvvloi/lamarckvergence14b/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'H8XCQvwUN6y5Qzxnn4p5PfvWC_sq5hODQ7FJC5W2fr8', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?width=108&crop=smart&auto=webp&s=f2702d7c049a8b9389214cb8e44bd7d598aa08d5', 'width': 108}, {'height': 69, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?width=216&crop=smart&auto=webp&s=087d09981de1b731eb647b67d996c53c28903484', 'width': 216}, {'height': 102, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?width=320&crop=smart&auto=webp&s=e194b4fb3b62a5d2624efdbc107ae4969dcdecf1', 'width': 320}, {'height': 205, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?width=640&crop=smart&auto=webp&s=8f1b979b41bfb26577455a65e0f3eaba640c3a46', 'width': 640}, {'height': 308, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?width=960&crop=smart&auto=webp&s=56969f9b48ee2ad89fc9ad4c39c1c8a50f2aab3f', 'width': 960}, {'height': 346, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?width=1080&crop=smart&auto=webp&s=7953803ce466a5abe594f3345d22fd7ab68d48d6', 'width': 1080}], 'source': {'height': 606, 'url': 'https://preview.redd.it/2bc6hwokszte1.png?auto=webp&s=edeac94e137b8b8f6b0b8d429e8a61a869be7e62', 'width': 1888}, 'variants': {}}]}
|
||
LLAMA-4-scout not able to adhere to basic prompt
| 4 |
I gave a very simple instruction to reformat the input with corrected grammar instead of answering the user query, but the model still answers the queries. How do I fix this?
[https://app.promptfoo.dev/eval/f:ae1f20e6-45bd-4069-8032-6259f504750a](https://app.promptfoo.dev/eval/f:ae1f20e6-45bd-4069-8032-6259f504750a)
meta-llama/llama-4-scout-17b-16e-instruct
| 2025-04-10T11:42:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvvuad/llama4scout_not_able_to_adhere_to_basic_prompt/
|
raxrb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvvuad
| false | null |
t3_1jvvuad
|
/r/LocalLLaMA/comments/1jvvuad/llama4scout_not_able_to_adhere_to_basic_prompt/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'rwTgPX3Fh-jgdx5l73lX4FRlXNsaZ1wrgaXlXO7W9bo', 'resolutions': [{'height': 89, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?width=108&crop=smart&auto=webp&s=aee2e36c47797b5d23a1b5547cf765f914e0b634', 'width': 108}, {'height': 178, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?width=216&crop=smart&auto=webp&s=601743386fbf30d7eef38347fe0cbe0499dd1403', 'width': 216}, {'height': 263, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?width=320&crop=smart&auto=webp&s=240826d0847e185a3f333e1a71312b5179814f61', 'width': 320}, {'height': 527, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?width=640&crop=smart&auto=webp&s=5246315dd5b8bbec51bbea1eac089501891f3842', 'width': 640}, {'height': 791, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?width=960&crop=smart&auto=webp&s=73691592ec444f74f7949cae068394bceb46deb4', 'width': 960}, {'height': 890, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?width=1080&crop=smart&auto=webp&s=3c1fbb6c6b75ffd6b27308de6c97fec691c9172c', 'width': 1080}], 'source': {'height': 1202, 'url': 'https://external-preview.redd.it/COMyeOIX0w3vTtndKCEPA0NZuRl45bUqLxaM1ogcPFY.jpg?auto=webp&s=42f0cbca8e66c8ac05339f83984d747889b34b2d', 'width': 1457}, 'variants': {}}]}
|
Super short answers in Gemma3 + Open-webUI
| 1 |
[removed]
| 2025-04-10T11:46:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvvwl7/super_short_answers_in_gemma3_openwebui/
|
A7mdxDD
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvvwl7
| false | null |
t3_1jvvwl7
|
/r/LocalLLaMA/comments/1jvvwl7/super_short_answers_in_gemma3_openwebui/
| false | false | 1 | null |
|
MCP Limitations: Async, External Triggers, Security Concerns
| 3 |
Seems like many AI workflow tools, including MCP, have limitations when handling things asynchronously and triggering workflows from external events. Calling it/them just an "API wrapper" might be a bit harsh, but I think it's creating a gap for building complex, event-driven AI systems.
I've also seen security concerns raised about MCP. What's that all about?
**What needs to be improved? What alternative approaches or tools handle async better?**
| 2025-04-10T11:51:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvw049/mcp_limitations_async_external_triggers_security/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvw049
| false | null |
t3_1jvw049
|
/r/LocalLLaMA/comments/1jvw049/mcp_limitations_async_external_triggers_security/
| false | false |
self
| 3 | null |
Grok 3 and Grok 3 Mini/Reasoning Are Available Via API
| 0 |
Grok 3 is $3 per million input tokens and $15 per million output tokens. On OpenRouter, a fast version exists that's $5 per million input tokens and $25 per million output tokens.
There's also a mini version that's a reasoning model, and it's $0.30 per million input tokens and $0.50 per million output tokens.
| 2025-04-10T11:54:14 |
https://x.ai/api#pricing
|
Few_Painter_5588
|
x.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvw1om
| false | null |
t3_1jvw1om
|
/r/LocalLLaMA/comments/1jvw1om/grok_3_and_grok_3_minireasoning_are_available_via/
| false | false |
default
| 0 | null |
Turkish Open Source TTS Models: Which One is Better in Terms of Quality and Speed?
| 1 |
[removed]
| 2025-04-10T11:54:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvw1rz/turkish_open_source_tts_models_which_one_is/
|
Ok_Lab_317
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvw1rz
| false | null |
t3_1jvw1rz
|
/r/LocalLLaMA/comments/1jvw1rz/turkish_open_source_tts_models_which_one_is/
| false | false |
self
| 1 | null |
Notes on Llama 4: The hits, the misses, and the disasters
| 129 |
Llama 4 is here, but definitely not in the shape everyone wanted. There's only negative sentiment towards it; nobody seems to say good things about it except for a few Meta employees.
They seriously rushed the launch, but I am still not sure why. If the models were bad, why not postpone the release? Was it something to do with tariffs and the anticipated Monday market crash, to cushion their stock?
The entire launch was mired in controversy, from poor models and false claims to bungled benchmarks. But are there any good Llama 4 models? If you search hard enough, there are a few.
Here is an overview of the Llama 4 models.
# The Hits
There are a few good things about the Llama 4 models.
* 10 million context window in Scout and 1 million in Maverick. Good at the needle in the haystack tests I have done.
* The Maverick seems to be a model created for agentic use cases, and it performs well on the function-calling benchmarks.
* It's very fast and cheap, which again complements function-calling use cases.
# The Misses
A lot of misses, indeed
* Starting with a restrictive, not-so-open-source Llama Licence. It's still a mystery why that's the case when DeepSeek models are MIT-licensed.
* The 400b Maverick doesn’t justify its size. I'm not sure why they went with 17b active parameters; it’s worse than QwQ 32b in reasoning.
* It doesn't offer the best code gen, writing, or reasoning.
* The biggest miss is that there is no paper, no system card, just a blog post. Everyone looked up to Meta for this, and now they have botched this.
# The Disasters
They are not recovering from this ever again.
* They literally gamed LMSYS, the sloppiest benchmark, just to appear good. It's sad at this point. I'm not sure if they cooked up the other benchmarks mentioned in their release blog post.
* Meta has tarnished their image again. They had the people's mandate, and they chose to squander it.
As a long-time Llama appreciator, I found the Llama 4 launch such a letdown. It would still have been fine and forgotten if it were just a bad model, but cooking up benchmarks to appear that they are still in the AI race is horrible.
Full write-up on the Llama 4 launch here: [Notes on Llama 4: The Hits, the Misses, and the Disasters](https://composio.dev/blog/notes-on-llama-4-the-hits-the-misses-and-the-disasters/)
I would love to know your opinions on Llama 4 and would be interested to hear if you found anything good with these models.
| 2025-04-10T12:05:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvw91v/notes_on_llama_4_the_hits_the_misses_and_the/
|
SunilKumarDash
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvw91v
| false | null |
t3_1jvw91v
|
/r/LocalLLaMA/comments/1jvw91v/notes_on_llama_4_the_hits_the_misses_and_the/
| false | false |
self
| 129 |
{'enabled': False, 'images': [{'id': 'z4p6U4sFMJ2UK96dXLtIM6umt_yjrKoVwAENACoEnpg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?width=108&crop=smart&auto=webp&s=e477469318e008c943e188445cdfe2c000fb6f02', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?width=216&crop=smart&auto=webp&s=03cee665c965bd0796cf31a16b32c456a1457e41', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?width=320&crop=smart&auto=webp&s=373add790b0cbd5e04c8132b6f56d77ac64aecb0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?width=640&crop=smart&auto=webp&s=cd219b6afa2e067a6c9943c1db9416b46d0f395b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?width=960&crop=smart&auto=webp&s=2d0e310acbbb31c9c6a45e811f18d6daa622be44', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?width=1080&crop=smart&auto=webp&s=c9885aecb79f8218ce69ac5a16d377b1f42950e3', 'width': 1080}], 'source': {'height': 639, 'url': 'https://external-preview.redd.it/hsprVRFzWabKNYTGLYO7-yR77Z7veq5X4eHU1DZAaVo.jpg?auto=webp&s=b659ff364172570e693ff1f76ea859d106cacaf3', 'width': 1135}, 'variants': {}}]}
|
How do LLMs store and save information about uploaded documents?
| 1 |
[removed]
| 2025-04-10T12:12:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvwdwg/how_do_llms_store_and_save_information_about/
|
dafroggoboi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvwdwg
| false | null |
t3_1jvwdwg
|
/r/LocalLLaMA/comments/1jvwdwg/how_do_llms_store_and_save_information_about/
| false | false |
self
| 1 | null |
Hosting GGUF
| 1 |
[removed]
| 2025-04-10T12:36:06 |
Dry_Negotiation_7423
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvwtm9
| false | null |
t3_1jvwtm9
|
/r/LocalLLaMA/comments/1jvwtm9/hosting_gguf/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '0wDHgyQIx-LVuFJX9fvKE9I1PkVvbIu2fucDxgneuqI', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/d2m4trew30ue1.png?width=108&crop=smart&auto=webp&s=6cda1a7213a8a7170484a67f685ea8adeb46bf54', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/d2m4trew30ue1.png?width=216&crop=smart&auto=webp&s=c368b0b8504cfaa99d5975b36cdcf2ed7fb23dd8', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/d2m4trew30ue1.png?width=320&crop=smart&auto=webp&s=ee09ff060f5c38d746385767d4a944f4f5124dbf', 'width': 320}, {'height': 509, 'url': 'https://preview.redd.it/d2m4trew30ue1.png?width=640&crop=smart&auto=webp&s=226d1f1bb31a7f91ce51f857a22623e133658ab0', 'width': 640}, {'height': 764, 'url': 'https://preview.redd.it/d2m4trew30ue1.png?width=960&crop=smart&auto=webp&s=ecf6fee9978073f0d32cfd310ed8208761a02c13', 'width': 960}], 'source': {'height': 830, 'url': 'https://preview.redd.it/d2m4trew30ue1.png?auto=webp&s=67836ebc82478fb504c55cd53b01fa812e73be63', 'width': 1042}, 'variants': {}}]}
|
||
Model Context Protocol (MCP) Explained
| 0 |
Everyone’s talking about MCP these days. But… what is MCP? (Spoiler: it’s the new standard for how AI systems connect with tools.)
🧠 When should you use it?
🛠️ How can you create your own server?
🔌 How can you connect to existing ones?
I covered it all in detail in this (Free) article, which took me a long time to write.
Enjoy! 🙌
[Link to the full blog post](https://open.substack.com/pub/diamantai/p/model-context-protocol-mcp-explained?r=336pe4&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)
| 2025-04-10T12:45:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvx09w/model_context_protocol_mcp_explained/
|
Nir777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvx09w
| false | null |
t3_1jvx09w
|
/r/LocalLLaMA/comments/1jvx09w/model_context_protocol_mcp_explained/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'eWtYErY6ZB6Gt1ecGuehuy-ztiYctY8mKVHWW64YNOk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XVOaSpj9__t9jMD6Lrc2oU7HYwVdSi6F_-7tidXSFp8.jpg?width=108&crop=smart&auto=webp&s=0a70ea01e1daf93fdf69de5b723bc4c57f826652', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/XVOaSpj9__t9jMD6Lrc2oU7HYwVdSi6F_-7tidXSFp8.jpg?width=216&crop=smart&auto=webp&s=7a8c8c87fa56a32cd5ebfb04563b9e00e5ac84c7', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/XVOaSpj9__t9jMD6Lrc2oU7HYwVdSi6F_-7tidXSFp8.jpg?width=320&crop=smart&auto=webp&s=fce86e922c02973b1a5c18d844c0a0ec5b1b3863', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/XVOaSpj9__t9jMD6Lrc2oU7HYwVdSi6F_-7tidXSFp8.jpg?width=640&crop=smart&auto=webp&s=f72a137caedd711ad8f2b9bfd3ee1eb926ec53da', 'width': 640}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/XVOaSpj9__t9jMD6Lrc2oU7HYwVdSi6F_-7tidXSFp8.jpg?auto=webp&s=81566e6167f45a63ca3d1fc402579d66fa3a5830', 'width': 900}, 'variants': {}}]}
|
Has anyone tried flashing the A6000 BIOS on the 3090 FE and replacing the VRAM modules on the 3090 with those from the A6000?
| 4 |
I am currently experimenting with increasing the VRAM on the 3090. My question from the title is the only thing that hasn't been tried yet for increasing the 3090's VRAM. Does anyone have any takes on that?
| 2025-04-10T12:52:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvx5q1/has_anyone_tried_flashing_the_a6000_bios_on_the/
|
yachty66
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvx5q1
| false | null |
t3_1jvx5q1
|
/r/LocalLLaMA/comments/1jvx5q1/has_anyone_tried_flashing_the_a6000_bios_on_the/
| false | false |
self
| 4 | null |
Who is winning the GPU race??
| 119 |
Google just released their new TPU, which they claim is 23x faster than the best supercomputer.
What exactly is going on? Is Nvidia still in the lead? Who is competing with Nvidia?
Apple seems like a very strong competitor; does Apple have a chance?
Google is also investing in chips and released the most powerful chip; are they winning the race?
How is Nvidia still holding strong? What makes Nvidia special? They seem like they are falling behind Apple and Google.
I need someone to explain the entire situation with AI GPUs/CPUs.
| 2025-04-10T12:55:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvx7kj/who_is_winning_the_gpu_race/
|
Senior-Raspberry-929
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvx7kj
| false | null |
t3_1jvx7kj
|
/r/LocalLLaMA/comments/1jvx7kj/who_is_winning_the_gpu_race/
| false | false |
self
| 119 | null |
New coding model DeepCoder-14B-Preview
| 96 |
A joint collab between the Agentica team and Together AI, based on a finetune of DeepSeek-R1-Distill-Qwen-14B. They claim it's as good as o3-mini.
HuggingFace URL: https://huggingface.co/agentica-org/DeepCoder-14B-Preview
GGUF: https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF
| 2025-04-10T13:09:25 |
https://www.together.ai/blog/deepcoder
|
mrskeptical00
|
together.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvxi5f
| false | null |
t3_1jvxi5f
|
/r/LocalLLaMA/comments/1jvxi5f/new_coding_model_deepcoder14bpreview/
| false | false | 96 |
{'enabled': False, 'images': [{'id': 'z0uH6V-2WMWuGHzZMW-RygvcPu9vsSTIIe_R5vzu0UU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?width=108&crop=smart&auto=webp&s=4cb9228d1a4eff7209ea4e0a4e8cf45c1edd5544', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?width=216&crop=smart&auto=webp&s=022af5501a62c12c0c22eb8c32b00c56614f4492', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?width=320&crop=smart&auto=webp&s=9660118cfb68fcf7e9e7e2c07dbada2b6a4fd5cc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?width=640&crop=smart&auto=webp&s=7b9a413e72aba377960a99362f5936db3ce66d4b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?width=960&crop=smart&auto=webp&s=73a1ec1a2c64601a893edf988fbeade741551bb4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?width=1080&crop=smart&auto=webp&s=2f8543eb98d415142407d748a64c8d0ca205ca71', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/YkKjX2lPMDEhqwzwRn4cpEMg8i531yZ0i7k6psvFYo8.jpg?auto=webp&s=8a30fedf63ebfd772630400399af6213b52e8b72', 'width': 1200}, 'variants': {}}]}
|
|
is Nvidia ASUS Ascent GX10 - 1TB good for general purpose use
| 1 |
[removed]
| 2025-04-10T13:25:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvxul6/is_nvidia_asus_ascent_gx10_1tb_good_for_general/
|
nawabiqbal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvxul6
| false | null |
t3_1jvxul6
|
/r/LocalLLaMA/comments/1jvxul6/is_nvidia_asus_ascent_gx10_1tb_good_for_general/
| false | false |
self
| 1 | null |
A host of rumours
| 0 |
Lines up with my estimates.
Although 4o mini mobile is the worst thing we could have gotten.
4o mini itself is a terrible model compared to flash2
| 2025-04-10T14:16:56 |
AryanEmbered
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvz0bf
| false | null |
t3_1jvz0bf
|
/r/LocalLLaMA/comments/1jvz0bf/a_host_of_rumours/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'BmQU_QGzkXy9pW3rgIGr9kVeYqrwVZeFwXSox6R9vUQ', 'resolutions': [{'height': 175, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?width=108&crop=smart&auto=webp&s=d190b99a23824bc82443ed2c12e7e8b652efaf75', 'width': 108}, {'height': 350, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?width=216&crop=smart&auto=webp&s=e16250af97a87c87c4aaa71dad7e6ec3b5bc41d3', 'width': 216}, {'height': 519, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?width=320&crop=smart&auto=webp&s=7dae7878aa3470c3ff65c07536a3047b84b33937', 'width': 320}, {'height': 1039, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?width=640&crop=smart&auto=webp&s=7cbb230db8b8763af8bd0d954c5c5590fc69c157', 'width': 640}, {'height': 1559, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?width=960&crop=smart&auto=webp&s=34247712b679751cd3e15b3be38a4a8d93a6901a', 'width': 960}, {'height': 1754, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?width=1080&crop=smart&auto=webp&s=032d86d1e71f80c2284234aa407efecd5dd9f1b7', 'width': 1080}], 'source': {'height': 1754, 'url': 'https://preview.redd.it/cy3sraufn0ue1.png?auto=webp&s=72fc04dd0884c519f514358ac218ded5c6051430', 'width': 1080}, 'variants': {}}]}
|
||
What will be the next model to be the strongest in existence?
| 0 |
Currently DeepSeek R1 is the strongest (open) math/coding model IMHO. What do people think will be the next model to be stronger than this? I don't mean strongest for its size but really strongest overall.
| 2025-04-10T14:17:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvz0tz/what_will_be_the_next_model_to_be_the_strongest/
|
OnceMoreOntoTheBrie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvz0tz
| false | null |
t3_1jvz0tz
|
/r/LocalLLaMA/comments/1jvz0tz/what_will_be_the_next_model_to_be_the_strongest/
| false | false |
self
| 0 | null |
Dual 3090 setup?
| 6 |
Heyyo,
How much meta is a dual 3090 setup nowadays? With NVlink
The platform would be AM4; I currently have a single 3090. However, I've run into a model that needs a compute capability over 8.9, so at least a 40-series card.
I'd rather not buy a 40 series, but if I were then I'd go with a 16GB model.
My use case would not be limited to just running models, but maybe using it for Torch, setting up services, just true homelabbing with any kind of machine learning stuff I can imagine.
What is it like to work across two cards of different generations? Also, would NVLink help or not?
I'd be happy to take your feedback.
| 2025-04-10T14:29:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvzb1a/dual_3090_setup/
|
Apprehensive_Page_87
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvzb1a
| false | null |
t3_1jvzb1a
|
/r/LocalLLaMA/comments/1jvzb1a/dual_3090_setup/
| false | false |
self
| 6 | null |
NeuralCodecs: Neural Audio Codecs implemented in .NET - EnCodec, DAC, and SNAC
| 15 |
I've been working on this in my spare time and thought someone here might get some use out of it. It's MIT licensed and open to pull requests.
| 2025-04-10T14:32:53 |
https://github.com/DillionLowry/NeuralCodecs
|
Knehm
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvzdmu
| false | null |
t3_1jvzdmu
|
/r/LocalLLaMA/comments/1jvzdmu/neuralcodecs_neural_audio_codecs_implemented_in/
| false | false | 15 |
{'enabled': False, 'images': [{'id': 'ZrbtORJRHPXYP0PMDQNGPrsOvyJsvwW0uJZEsSN4wgM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?width=108&crop=smart&auto=webp&s=61f1a44e57c46da05f57dbd1a0d281fdfa41c49a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?width=216&crop=smart&auto=webp&s=82eea9ccd073e36e6d0051ce6e6382a8e45672fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?width=320&crop=smart&auto=webp&s=11336c122bd7ab6482cc96d2d2a0cdeea4ec48e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?width=640&crop=smart&auto=webp&s=97e60ddc8f1832aab3b1335d454d22a245ebb05b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?width=960&crop=smart&auto=webp&s=e7838a60d51ffe9b072f7f7823cda2d6649881e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?width=1080&crop=smart&auto=webp&s=39f8c22956e91149b96d9de8556e7f668387089d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w5ONsdKCE_OfBbudwN1c2omSNHPj9deJzJ8d4IHqvlY.jpg?auto=webp&s=52ec0125f53aa54e04c1a04585d91f53a81c034c', 'width': 1200}, 'variants': {}}]}
|
|
How to dimension a VM/GPU for serving LLMs
| 3 |
Hi everyone,
I find it challenging to decide how to dimension a VM, especially the VRAM, but also the RAM and CPU, to deploy LLMs for inference. I wish there was a formula like: `REQUIRED_VRAM = n_parameters*quantization*hidden_layers*context_length`
For example, I've tested a few models and seen that a 32B model like DeepSeek runs on a single A100 80GB, but Gemma 3 27B, which should be smaller, doesn't fit unless I decrease the max context length in vLLM. To simplify things, I'm okay assuming a given quantization (say BF16) and keeping the MoE models out for now. I'm testing on vLLM as it seems to be the standard and most efficient at the moment.
I've heard some say a rule of thumb is to have 2x the number of billion parameters in VRAM (e.g., 140GB VRAM for a 70B model), but that's not sufficient and other factors come into play:
* Context length (because of KV cache)
* Model architecture
* Quantization
Do you know all the factors that influence the sizing of a VM/GPU to serve a model? Do you have any good resources that talk about this? Is there a good way of calculating the minimal hardware required for a given LLM? I am interested in building a website that would help calculate the GPU and VM requirements for a list of LLMs. Any insight on this topic will help a lot!
EDIT : found [this](https://www.reddit.com/r/LocalLLaMA/comments/1agbf5s/gpu_requirements_for_llms/) and [this](https://kipp.ly/transformer-inference-arithmetic/) which give some info
Let me know your thoughts. Let's discuss here and gather the knowledge. I'll make another post to synthesize what we know
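As a starting point, a rough back-of-the-envelope formula is VRAM ≈ weights + KV cache + overhead, where the KV cache term scales with layers, KV heads, head dimension, and context length, and is exactly why a model like Gemma 3 27B can blow past a nominally larger one at long context. The sketch below just codifies that; the architecture numbers in the example are illustrative guesses, not exact model configs.

```python
# Back-of-the-envelope VRAM estimate: weights + KV cache + overhead.
# The architecture numbers in the example are illustrative, not exact configs.

def estimate_vram_gb(
    n_params_b: float,        # parameters in billions
    bytes_per_param: float,   # 2 for BF16, roughly 0.55 for Q4_K_M, etc.
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    context_len: int,
    kv_bytes: float = 2.0,    # BF16 KV cache entries
    batch: int = 1,
    overhead_gb: float = 2.0, # CUDA context, activations, fragmentation (rough)
) -> float:
    weights = n_params_b * 1e9 * bytes_per_param
    # 2x for K and V, per layer, per KV head, per position in the context
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes * batch
    return (weights + kv_cache) / 1e9 + overhead_gb

# Hypothetical 27B dense model, BF16 weights, 32k context
print(round(estimate_vram_gb(27, 2, n_layers=62, n_kv_heads=16, head_dim=128, context_len=32_768), 1), "GB")
```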
| 2025-04-10T14:36:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvzh5w/how_to_dimension_a_vmgpu_for_serving_llms/
|
steak_hallal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvzh5w
| false | null |
t3_1jvzh5w
|
/r/LocalLLaMA/comments/1jvzh5w/how_to_dimension_a_vmgpu_for_serving_llms/
| false | false |
self
| 3 | null |
My Llama 4 in WhatsApp suddenly says AI chat is not available to me yet. I've been using it for weeks?
| 0 |
Anyone can advise?
| 2025-04-10T14:56:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvzxeu/my_llama_4_in_whatsapp_suddenly_says_ai_chat_no/
|
icantgiveyou
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvzxeu
| false | null |
t3_1jvzxeu
|
/r/LocalLLaMA/comments/1jvzxeu/my_llama_4_in_whatsapp_suddenly_says_ai_chat_no/
| false | false |
self
| 0 | null |
I am using Ollama and OpenWebUI. Is there a model that is capable of actively searching the web?
| 1 |
There are a few models that can visit web links when I provide the links. But they won't go actively searching the web to find answers like ChatGPT does. Is there a model that can do this? Thank you.
| 2025-04-10T14:56:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jvzxt2/i_am_using_ollama_and_openwebui_is_there_a_model/
|
btpcn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jvzxt2
| false | null |
t3_1jvzxt2
|
/r/LocalLLaMA/comments/1jvzxt2/i_am_using_ollama_and_openwebui_is_there_a_model/
| false | false |
self
| 1 | null |
OpenAI releasing o3 full and o4 mini soon
| 115 | 2025-04-10T15:11:04 |
EasternBeyond
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw09wz
| false | null |
t3_1jw09wz
|
/r/LocalLLaMA/comments/1jw09wz/openai_releasing_o3_full_and_o4_mini_soon/
| false | false | 115 |
{'enabled': True, 'images': [{'id': 'iVJuPJxmQsiOUtZ9e4jDg7Sfm_NZ1yFtbHOKLHCNRGw', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/hhfn6im1x0ue1.jpeg?width=108&crop=smart&auto=webp&s=8e40a7b12f64efd12becdb2fb6d06dcaf708ba5c', 'width': 108}, {'height': 255, 'url': 'https://preview.redd.it/hhfn6im1x0ue1.jpeg?width=216&crop=smart&auto=webp&s=6aa5c0725a75e7bf0925b8a924497d6374fa10f5', 'width': 216}, {'height': 379, 'url': 'https://preview.redd.it/hhfn6im1x0ue1.jpeg?width=320&crop=smart&auto=webp&s=66a0176ffbc3f511950015329d701a18c58f018c', 'width': 320}, {'height': 758, 'url': 'https://preview.redd.it/hhfn6im1x0ue1.jpeg?width=640&crop=smart&auto=webp&s=7f766150be4927f7c071cc0e4d87666562a35aa1', 'width': 640}], 'source': {'height': 758, 'url': 'https://preview.redd.it/hhfn6im1x0ue1.jpeg?auto=webp&s=77c1bcb03f5ea592a0ec69288ba73c094f9255aa', 'width': 640}, 'variants': {}}]}
|
|||
Llama 4 Maverick scores on seven independent benchmarks
| 180 |
[Extended NYT Connections](https://github.com/lechmazur/nyt-connections/)
[Creative Short Story Writing](https://github.com/lechmazur/writing/)
[Confabulations/Hallucinations](https://github.com/lechmazur/confabulations/)
[Thematic Generalization](https://github.com/lechmazur/generalization/)
[Elimination Game](https://github.com/lechmazur/elimination_game/)
[Step Race Benchmark](https://github.com/lechmazur/step_game/)
[Public Goods Game](https://github.com/lechmazur/pgg_bench/)
| 2025-04-10T15:13:37 |
https://www.reddit.com/gallery/1jw0c2i
|
zero0_one1
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw0c2i
| false | null |
t3_1jw0c2i
|
/r/LocalLLaMA/comments/1jw0c2i/llama_4_maverick_scores_on_seven_independent/
| false | false | 180 | null |
|
Best small model for web search in Open Web UI
| 1 |
I'm using a searxng web search + scrape tool with an LLM in Open WebUI and it works pretty well. The tool looks up N websites and scrapes the content of those websites (up to a word limit) and feeds it to the LLM. So far I've tested it with the following models:
1. Llama 3.2 3b - average, hallucinated a bit
2. Gemma 3 4b - worse than llama 3.2
3. Qwen 2.5 7b - much better, not surprising given it's a 7b model. It still messes up markdown formatting sometimes
Are there better models suited for stuff like this? I'm looking for options in the 3b-8b range ideally so that I can use a large context size of around 16k. I'm guessing models made for summarization and retrieval will excel at this
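For reference, the pipeline described above can be sketched roughly like this (a toy illustration only - it assumes a local SearXNG instance with JSON output enabled, and the URL, query, and word limit are placeholders):

```python
# Toy search-then-scrape pipeline: query SearXNG, fetch the top-N pages,
# truncate each to a word budget, and build a prompt for the LLM.
import requests
from bs4 import BeautifulSoup

def search_and_scrape(query, n=3, word_limit=500):
    hits = requests.get("http://localhost:8080/search",
                        params={"q": query, "format": "json"},
                        timeout=10).json().get("results", [])[:n]
    chunks = []
    for hit in hits:
        html = requests.get(hit["url"], timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        chunks.append(" ".join(text.split()[:word_limit]))
    return "\n\n".join(chunks)

context = search_and_scrape("latest llama.cpp release")
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what changed recently?"
```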
| 2025-04-10T15:16:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw0e6w/best_small_model_for_web_search_in_open_web_ui/
|
logseventyseven
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw0e6w
| false | null |
t3_1jw0e6w
|
/r/LocalLLaMA/comments/1jw0e6w/best_small_model_for_web_search_in_open_web_ui/
| false | false |
self
| 1 | null |
AMD AI395 + 128GB - Inference Use case
| 18 |
Hi,
I've heard a lot of pros and cons about AMD's AI 395 with at most 128GB RAM (Framework, GMKtec). Of course prompt processing speeds are unknown, and dense models probably won't run well since the memory bandwidth isn't that great. I'm curious whether this build will be useful for inference use cases. I don't plan to do any kind of training or fine-tuning. I don't plan to write elaborate prompts, but I do want to be able to use higher quants and RAG. I plan to run general-purpose prompts, as well as some focused on scripting. Is this build still going to prove useful, or is it just money wasted? I ask about wasted money because the pace of development is fast and I don't want a machine that is totally obsolete a year from now due to newer innovations.
I have limited space at home so a full blown desktop with multiple 3090s is not going to work out.
| 2025-04-10T15:20:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw0ieg/amd_ai395_128gb_inference_use_case/
|
SecuredStealth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw0ieg
| false | null |
t3_1jw0ieg
|
/r/LocalLLaMA/comments/1jw0ieg/amd_ai395_128gb_inference_use_case/
| false | false |
self
| 18 | null |
How can I make AI generated reels?
| 1 |
[removed]
| 2025-04-10T15:34:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw0tym/how_can_i_make_ai_generated_reels/
|
dramaking017
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw0tym
| false | null |
t3_1jw0tym
|
/r/LocalLLaMA/comments/1jw0tym/how_can_i_make_ai_generated_reels/
| false | false |
self
| 1 | null |
What is MCP?
| 0 |
Since it's fashionable nowadays to write an MCP explainer, here is mine:
First, look at the screenshot of Claude Desktop. It decides to use tools one after another to perform the steps marked in red: first a tool called batch_analyze, then search_and_replace, then create_new_file.
MCP is just a protocol for tool calls, ALWAYS initiated by the LLM. You can't initiate the tool use yourself; only the LLM can.
Who decides what the tool does, what parameters it takes, and what will be returned?
The MCP server decides. What's an MCP server? Any program which implements the protocol is an MCP server. For example, I have an MCP server which does code editing, executes terminal commands, reads directories, and does Google searches. It can do anything on your PC (or server), as long as it publishes its input and output parameters and describes them.
Whatever text/image is returned by the program (the MCP server) becomes further input tokens to the LLM.
In this case we call the LLM the MCP client.
Extreme examples:
1. You have a music synthesizer app on your PC which you can access through a command line or API. It takes music notes as input, produces sound, and plays it on your speakers. You add a few lines of TypeScript wrapper around this synthesizer to create a tool with music notes as inputs. You describe, as text in the MCP server, what each note does. You add this to your LLM using its MCP config JSON file.
Now you ask the LLM to use your synthesizer MCP tool to play your favorite tune. You'll actually hear the AI create music.
2. You have access to the paid Google Maps API. You create a program that searches for superchargers within 5000 meters of a GPS coordinate, using the API and your own code. You decide to let the AI use it, so you create a wrapper around your program that follows the MCP server protocol. You add it to an LLM client which implements the MCP client protocol, and add your tool to the MCP JSON. Now you ask your AI to write a research paper for you - an analysis of the supercharger network in <region1> vs <region2>.
3. You have access to an existing large SQLite database of your company's HR data. You create an MCP server for querying the database and connect it to the LLM client. Now you ask your AI for a list of claims that were delayed beyond 2 standard deviations of the usual claim settlement time, limited to your highly paid employees. The AI just runs a series of queries against your DB and tells you the answer.
4. Best of all: you don't like your existing MCP servers? You use a code-editing MCP server and make the AI create any MCP server you wish. This feature is prebuilt into Cline and Roo.
TL;DR:
In short, what your MCP server does is up to the server. The LLM just calls it with some parameters, waits for the return, takes the text/image returned by the MCP server, uses it as its next input tokens, and does the next thing. The LLM does not know what else the server did behind the scenes. The MCP server is a black box to the LLM.
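To make the "any program that publishes its tools" point concrete, here is a minimal sketch of an MCP server, assuming the official MCP Python SDK and its FastMCP helper (the tool itself is made up for illustration):

```python
# Minimal MCP server sketch: exposes one tool the LLM can decide to call.
# Assumes the official MCP Python SDK (pip install mcp).
import os
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def count_files(directory: str) -> str:
    """Count how many entries are in a directory on this machine."""
    return f"{len(os.listdir(directory))} entries in {directory}"

if __name__ == "__main__":
    mcp.run()  # speaks the MCP protocol over stdio; the client (LLM side) calls the tool
```

The LLM only ever sees the tool's name, description, parameters, and the text it returns - exactly the black-box contract described above.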
| 2025-04-10T15:36:58 |
Yes_but_I_think
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw0w2h
| false | null |
t3_1jw0w2h
|
/r/LocalLLaMA/comments/1jw0w2h/what_is_mcp/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'AkTmgIcf9tq-7ypr__6T5W29klVNL05naAOtHYLR-fI', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?width=108&crop=smart&auto=webp&s=410ff99f21b7a145104e00be99e975425aa63b53', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?width=216&crop=smart&auto=webp&s=c175ca18091bef21e9f13ec0e13b22167e761c85', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?width=320&crop=smart&auto=webp&s=487f9c0507eb69bf46dbed64d2bf6b88dc225df6', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?width=640&crop=smart&auto=webp&s=ebdc0d884ee301478ad0dc5155afcd6052aa716a', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?width=960&crop=smart&auto=webp&s=d791bd9fcd15ef7f4a97ac64398835dbbd98bc92', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?width=1080&crop=smart&auto=webp&s=b3a2627442b8621d825b8234304e62332011074f', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/h3xthrrp11ue1.jpeg?auto=webp&s=80f9b2fd55f3531d3b748607cb7446741ec5d6eb', 'width': 3024}, 'variants': {}}]}
|
||
I tested the top models used for translation on openrouter
| 1 |
[removed]
| 2025-04-10T15:41:48 |
AdventurousFly4909
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw108z
| false | null |
t3_1jw108z
|
/r/LocalLLaMA/comments/1jw108z/i_tested_the_top_models_used_for_translation_on/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': 'v7y1iiwy01ue1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?width=108&crop=smart&auto=webp&s=a53a57bc85922e877105197549ec5341f4ac827a', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?width=216&crop=smart&auto=webp&s=c567138965eab8d2b06d0a342b227639d1d5f337', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?width=320&crop=smart&auto=webp&s=e4a72cfdcaf045c773465c4cf2a6da234a056edf', 'width': 320}, {'height': 461, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?width=640&crop=smart&auto=webp&s=469f3eb3ef8a9fd1bb5327def0a0b17106d96589', 'width': 640}, {'height': 692, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?width=960&crop=smart&auto=webp&s=6b6ab2366b99034236c63feff1e0dd6f059f5cc7', 'width': 960}, {'height': 779, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?width=1080&crop=smart&auto=webp&s=534b024221e37477ebae53bae8ab088e30eff5b8', 'width': 1080}], 'source': {'height': 9905, 'url': 'https://preview.redd.it/v7y1iiwy01ue1.png?auto=webp&s=e9b89cdc61dfd22ba309289001ce06198f5f08aa', 'width': 13727}, 'variants': {}}]}
|
|
🤖 “How much tax was collected in the US in 2024?” — A question local LLMs can’t answer (without a little help)
| 2 |
If you ask most local LLMs, “How much tax was collected in the US in 2024?”, they’ll probably give you an outdated answer - not because they’re wrong, but because their training cutoff was way before 2024 ended.
That’s where Retrieval-Augmented Generation (RAG) comes in. By feeding current or custom data into the model at query time, RAG makes your LLM smarter right now, no retraining required.
I put together a tutorial that shows how to set up a complete RAG stack on bare metal in minutes - everything’s automated: boot with Sbnb Linux, spin up vLLM, and launch RAGFlow.
https://github.com/sbnb-io/sbnb/blob/main/README-RAG.md
📄 For demo purposes, the tutorial includes downloading the latest available 2024 US government financial report and asking the original question from the post title - just to show how RAG makes the impossible possible.
And here’s where it gets fun: try it with your own data. Upload your house temperature logs, family budget spreadsheets, grocery receipts - whatever - and start asking natural language questions. You’ll be surprised what your model can do once it actually knows what’s going on.
Give it a try, and let me know how it goes - happy to help if anything breaks or brainstorm new ideas!
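As a toy illustration of the retrieve-then-generate idea (the documents, the embedding model choice, and the final LLM call are placeholders - the actual tutorial uses vLLM and RAGFlow):

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones to the question,
# and stuff them into the prompt so the LLM answers from current data.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "FY2024 financial report: total federal tax collections were $X trillion.",
    "House log: living-room temperature averaged 21.4 C in March.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=1):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                      # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "How much tax was collected in the US in 2024?"
context = "\n".join(retrieve(question))
prompt = f"Using only this context:\n{context}\n\nAnswer the question: {question}"
# `prompt` then goes to whatever model you serve, e.g. via vLLM's OpenAI-compatible API.
```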
| 2025-04-10T15:44:42 |
https://www.reddit.com/gallery/1jw12pd
|
aospan
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw12pd
| false | null |
t3_1jw12pd
|
/r/LocalLLaMA/comments/1jw12pd/how_much_tax_was_collected_in_the_us_in_2024_a/
| false | false | 2 | null |
|
New “Optimus Alpha” stealth model on OpenRouter is an OpenAI model
| 2 |
This is a stealth model that’s supposed to be great for coding and math specifically. It’s built by OpenAI 100% - is this perhaps o4-mini?
It’s nowhere near as good as Gemini 2.5 on programming tasks unfortunately.
| 2025-04-10T15:46:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw14gy/new_optimus_alpha_stealth_model_on_openrouter_is/
|
ButterscotchVast2948
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw14gy
| false | null |
t3_1jw14gy
|
/r/LocalLLaMA/comments/1jw14gy/new_optimus_alpha_stealth_model_on_openrouter_is/
| false | false |
self
| 2 | null |
Olmo OCR
| 2 |
Hey all, so I was working on a video exploring Olmo OCR, similar to the one I did for Qwen VL, etc. I'm wondering if this task is beyond its current capabilities?
https://preview.redd.it/rz75fmwv31ue1.png?width=1695&format=png&auto=webp&s=52cac5f2f65fb761caa7cfab9a2ecf2a936cef8d
As you can see, it's clearly misinterpreting charts, numbers, etc. Do folks have any input on whether this task is unsuitable for a model like this? Do you have any recommendations for a good model I can try for this?
| 2025-04-10T15:51:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw1821/olmo_ocr/
|
Ok-Contribution9043
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1821
| false | null |
t3_1jw1821
|
/r/LocalLLaMA/comments/1jw1821/olmo_ocr/
| false | false | 2 | null |
|
What is MCP?
| 0 | 2025-04-10T15:54:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw1b82/what_is_mcp/
|
Yes_but_I_think
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1b82
| false | null |
t3_1jw1b82
|
/r/LocalLLaMA/comments/1jw1b82/what_is_mcp/
| false | false | 0 | null |
||
Can we all agree that Qwen has the best LLM mascot? (not at all trying to suck up so they’ll drop Qwen3 today)
| 263 | 2025-04-10T15:58:22 |
https://www.reddit.com/gallery/1jw1e6b
|
Porespellar
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1e6b
| false | null |
t3_1jw1e6b
|
/r/LocalLLaMA/comments/1jw1e6b/can_we_all_agree_that_qwen_has_the_best_llm/
| false | false | 263 | null |
||
A slop forensics toolkit for LLMs: computing over-represented lexical profiles and inferring similarity trees
| 88 |
Releasing a few tools around LLM slop (over-represented words & phrases).
It uses stylometric analysis to surface repetitive words & n-grams which occur more often in LLM output compared to human writing.
Also borrowing some bioinformatics tools to infer similarity trees from these slop profiles, treating the presence/absence of lexical features as "mutations" to infer relationships.
\- compute a "slop profile" of over-represented words & phrases for your model
\- uses bioinformatics tools to infer similarity trees
\- builds canonical slop phrase lists
Github repo: [https://github.com/sam-paech/slop-forensics](https://github.com/sam-paech/slop-forensics)
Notebook: [https://colab.research.google.com/drive/1SQfnHs4wh87yR8FZQpsCOBL5h5MMs8E6?usp=sharing](https://colab.research.google.com/drive/1SQfnHs4wh87yR8FZQpsCOBL5h5MMs8E6?usp=sharing)
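The core "over-represented words" idea boils down to comparing relative frequencies against a human baseline - a minimal sketch (the two corpora here are tiny placeholders; the repo adds phrase-level n-grams, proper smoothing, and the phylogenetic trees):

```python
# Toy slop profile: words that appear far more often in LLM output
# than in a human-written baseline, ranked by frequency ratio.
import re
from collections import Counter

def rel_freqs(text):
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

llm_text = "her eyes sparkled with newfound determination, a testament to her resolve"
human_text = "she looked up, tired, and decided she would try again tomorrow"

llm_f, hum_f = rel_freqs(llm_text), rel_freqs(human_text)
slop = {w: f / (hum_f.get(w, 0) + 1e-4) for w, f in llm_f.items()}  # smoothed ratio
for w, ratio in sorted(slop.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{w}: {ratio:.0f}x over-represented")
```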
| 2025-04-10T16:00:36 |
https://www.reddit.com/gallery/1jw1g2a
|
_sqrkl
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1g2a
| false | null |
t3_1jw1g2a
|
/r/LocalLLaMA/comments/1jw1g2a/a_slop_forensics_toolkit_for_llms_computing/
| false | false | 88 | null |
|
ChomaDB alternatives ?
| 1 |
[removed]
| 2025-04-10T16:01:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw1hao/chomadb_alternatives/
|
No_Cut1519
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1hao
| false | null |
t3_1jw1hao
|
/r/LocalLLaMA/comments/1jw1hao/chomadb_alternatives/
| false | false |
self
| 1 | null |
Should we add real people to lmarena?
| 26 |
As a reference point, a sort of new Turing test
What do you think?
| 2025-04-10T16:02:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw1hph/should_we_add_real_people_to_lmarena/
|
Economy_Apple_4617
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1hph
| false | null |
t3_1jw1hph
|
/r/LocalLLaMA/comments/1jw1hph/should_we_add_real_people_to_lmarena/
| false | false |
self
| 26 | null |
ByteDance just released the technical report for Seed-Thinking-v1.5
| 209 |
ByteDance just released the technical report for Seed-Thinking-v1.5, which is also a reasoning model trained using reinforcement learning. Based on the scores, it outperforms DeepSeek-R1 and is at a level close to Gemini-2.5-Pro and o3-mini-high.
However, I've searched everywhere and haven't found where the model is. I'm uncertain if they will release the weights. Once it's released, I will test it immediately.
Technical report link: [https://github.com/ByteDance-Seed/Seed-Thinking-v1.5](https://github.com/ByteDance-Seed/Seed-Thinking-v1.5)
| 2025-04-10T16:06:36 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1l9n
| false | null |
t3_1jw1l9n
|
/r/LocalLLaMA/comments/1jw1l9n/bytedance_just_released_the_technical_report_for/
| false | false | 209 |
{'enabled': True, 'images': [{'id': 'MZoEbDZSG0h81A0DOWjGc_4oVEn-iLRb-NkfoJE9684', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?width=108&crop=smart&auto=webp&s=720be7a95d3dcb13428a96e3dffdd5d5d0dc8973', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?width=216&crop=smart&auto=webp&s=457be833310ae50d6de98645217f57f0b22d1539', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?width=320&crop=smart&auto=webp&s=2bfbb2e285ec34e8cc321d77b09994508701520c', 'width': 320}, {'height': 374, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?width=640&crop=smart&auto=webp&s=82891e270a5455a5741a8218234c99a25e56cc02', 'width': 640}, {'height': 561, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?width=960&crop=smart&auto=webp&s=3387bb995c243662fa3e4f43921755ed826d0888', 'width': 960}, {'height': 631, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?width=1080&crop=smart&auto=webp&s=a556cfbfbb81c3eda0d64ced36698a78c61b876b', 'width': 1080}], 'source': {'height': 3164, 'url': 'https://preview.redd.it/zok3h2gu61ue1.png?auto=webp&s=2f5fc7d53eb4ddecfd03dd77a7a962508e085fdf', 'width': 5407}, 'variants': {}}]}
|
||
Made this animation with the help of LLama. What do you guys think?
| 0 | 2025-04-10T16:08:20 |
https://youtube.com/shorts/G8jRTghzrf0?si=YETeuHFstu4ReGV-
|
Reimagen
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1mr2
| false | null |
t3_1jw1mr2
|
/r/LocalLLaMA/comments/1jw1mr2/made_this_animation_with_the_help_of_llama_what/
| false | false |
default
| 0 | null |
|
Introducing ZR1-1.5B, a small but powerful reasoning model for math and code
| 126 | 2025-04-10T16:08:44 |
https://www.zyphra.com/post/introducing-zr1-1-5b-a-small-but-powerful-math-code-reasoning-model
|
retrolione
|
zyphra.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1n27
| false | null |
t3_1jw1n27
|
/r/LocalLLaMA/comments/1jw1n27/introducing_zr115b_a_small_but_powerful_reasoning/
| false | false |
default
| 126 | null |
|
Mac Studio 4xM4 Max 128GB versus M3 Ultra 512GB
| 3 |
I know, I know - not a long-context test, etc. - but he did try to come up with a way to split MLX models over different types of machines (and failed). Nonetheless, some interesting tidbits surfaced for me. Hopefully someone smarter finds a way to distribute larger MLX models over different types of machines, as I would love to cluster my 128GB machine with my two 64GB machines to run a large model.
[https://www.youtube.com/watch?v=d8yS-2OyJhw](https://www.youtube.com/watch?v=d8yS-2OyJhw)
| 2025-04-10T16:20:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw1xbd/mac_studio_4xm4_max_128gb_versus_m3_ultra_512gb/
|
Careless_Garlic1438
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1xbd
| false | null |
t3_1jw1xbd
|
/r/LocalLLaMA/comments/1jw1xbd/mac_studio_4xm4_max_128gb_versus_m3_ultra_512gb/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'pMMa-DRxhlRAPF_16EV1a3N6nnVvI-MmMbMGR09DHrU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ReigAWYZq5xCUmbCTBWbaBykUVPpBl6KLFe-MnKV2as.jpg?width=108&crop=smart&auto=webp&s=38a5197941703b5f4e844e8061a2e6a222203d2b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ReigAWYZq5xCUmbCTBWbaBykUVPpBl6KLFe-MnKV2as.jpg?width=216&crop=smart&auto=webp&s=f8726cdfeb0153e4b48656ab0e746605e2d80940', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ReigAWYZq5xCUmbCTBWbaBykUVPpBl6KLFe-MnKV2as.jpg?width=320&crop=smart&auto=webp&s=01c6971afbc632a2019c3a55fbac36ead1c36f0f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ReigAWYZq5xCUmbCTBWbaBykUVPpBl6KLFe-MnKV2as.jpg?auto=webp&s=9e14a0e8b5cf0577f0215f8544f832824547ff44', 'width': 480}, 'variants': {}}]}
|
A Twitter Style Platform for AI Bots
| 5 |
Hey guys,
I am interested in building a Twitter-style platform for AI bots. The idea is to see whether competing LLM bots trying to reason with each other leads to new ideas. It can also provide some fun and entertainment, but the larger goal is to push the reasoning limits of AI models.
**Rules:**
1. Only bots can create posts
2. Both humans and bots can respond to posts
3. Only humans can upvote/downvote posts and responses
The reason for #3 is to ensure that the content of the posts stays relevant to human beings.
Idea & Waitlist: [https://www.worldofdave.app/](https://www.worldofdave.app/)
Would love to get some feedback on this. Do join the waitlist if you like the idea.
| 2025-04-10T16:22:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw1z2k/a_twitter_style_platform_for_ai_bots/
|
simplext
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw1z2k
| false | null |
t3_1jw1z2k
|
/r/LocalLLaMA/comments/1jw1z2k/a_twitter_style_platform_for_ai_bots/
| false | false |
self
| 5 | null |
Llama 4 Japanese Evals
| 40 |
While Llama 4 didn't explicitly call out CJK support, they did claim stronger overall multi-lingual capabilities with "10x more multilingual tokens than Llama 3" and "pretraining on 2000 languages."
Since I had some H100 nodes available and my eval suite was up and running, I ran some testing on both Maverick FP8 and Scout on the [inference-validated vLLM v0.8.3 release](https://blog.vllm.ai/2025/04/05/llama4.html).
For those that are just interested in the results. Here's how Maverick does, compared against the same models that Meta uses in their announcement blog, but w/ a bit of spice - Llama 3.1 405B, and the best Japanese models I've tested so far, quasar-alpha and gpt-4.5 (which at list price, costs >$500 to eval! BTW, shout out to /u/MrKeys_X
for contributing some credits towards testing gpt-4.5):
| Model Name | Shaberi AVG | ELYZA 100 | JA MT Bench | Rakuda | Tengu |
|------------------------------|-------------|-----------|-------------|--------|-------|
| openrouter/quasar-alpha | **9.20** | 9.41 | 9.01 | 9.42 | **8.97** |
| gpt-4.5-preview-2025-02-27 | 9.19 | **9.50** | 8.85 | **9.56** | 8.86 |
| gpt-4o-2024-11-20 | 9.15 | 9.34 | **9.10** | 9.55 | 8.60 |
| deepseek-ai/DeepSeek-V3-0324 | 8.98 | 9.22 | 8.68 | 9.24 | 8.77 |
| gemini-2.0-flash | 8.83 | 8.75 | 8.77 | 9.48 | 8.33 |
| meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | 8.64 | 8.54 | 8.81 | 9.14 | 8.08 |
| meta-llama/Llama-3.1-405B-Instruct-FP8 | 8.41 | 8.52 | 8.42 | 9.07 | 7.63 |
And here are the Scout results. I didn't test Gemini 2.0 Flash Lite, but threw in a few other small models:
| Model Name | Shaberi AVG | ELYZA 100 | JA MT Bench | Rakuda | Tengu |
|------------|-------------|-----------|-------------|--------|-------|
| google/gemma-3-27b-it | **8.53** | 8.53 | 8.71 | 8.85 | **8.03** |
| mistralai/Mistral-Small-3.1-24B-Instruct-2503 | 8.51 | **8.56** | 8.63 | 9.12 | 7.74 |
| microsoft/phi-4 | 8.48 | 8.49 | 8.65 | 9.11 | 7.68 |
| google/gemma-3-12b-it | 8.48 | 8.34 | 8.67 | 9.02 | 7.88 |
| meta-llama/Llama-3.1-405B-Instruct-FP8 | 8.41 | 8.52 | 8.42 | 9.07 | 7.63 |
| meta-llama/Llama-4-Scout-17B-16E-Instruct | 8.35 | 8.07 | 8.54 | 8.94 | 7.86 |
| meta-llama/Llama-3.3-70B-Instruct | 8.28 | 8.09 | **8.76** | 8.88 | 7.40 |
| shisa-ai/shisa-v2-llama-3.1-8b-preview | 8.10 | 7.58 | 8.32 | **9.22** | 7.28 |
| meta-llama/Llama-3.1-8B-Instruct | 7.34 | 6.95 | 7.67 | 8.36 | 6.40 |
For absolute perf, Gemma 3 27B and Mistral Small 3.1 beat out Scout, and Phi 4 14B and Gemma 3 12B are actually amazing for their size.
If you want to read more about the evals themselves, and see some of the custom evals we're developing and those results (role playing, instruction following), check out a blog post I made here: https://shisa.ai/posts/llama4-japanese-performance/
| 2025-04-10T16:36:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw2aph/llama_4_japanese_evals/
|
randomfoo2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2aph
| false | null |
t3_1jw2aph
|
/r/LocalLLaMA/comments/1jw2aph/llama_4_japanese_evals/
| false | false |
self
| 40 |
{'enabled': False, 'images': [{'id': '1aberd3OGeIPuXx3M7k32nWHGiGwBFh7c-5wbhG_ebs', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?width=108&crop=smart&auto=webp&s=0f96a0e986ca4565c880a54c5226788d97d0bd88', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?width=216&crop=smart&auto=webp&s=c1cf2edd796bcec8defdfc11d40ec3befede3a7d', 'width': 216}, {'height': 163, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?width=320&crop=smart&auto=webp&s=256eba2cb0c74fb2f095c73cdd2b70bafbeb8b5a', 'width': 320}, {'height': 327, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?width=640&crop=smart&auto=webp&s=964d95e1dcc4c31040590b957aa327b550e7abf1', 'width': 640}, {'height': 490, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?width=960&crop=smart&auto=webp&s=faef82bc132c1549de6f91883d5a566601b89334', 'width': 960}, {'height': 552, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?width=1080&crop=smart&auto=webp&s=791119dfece218d8d4e7a47f77a39018ad677add', 'width': 1080}], 'source': {'height': 2725, 'url': 'https://external-preview.redd.it/CdC3J_QRUlJehIlgiVWS2m3EoAfkt9TCOEal6cgsLIM.jpg?auto=webp&s=7464e55730609b90addd4115b0990b0b448e2fb6', 'width': 5331}, 'variants': {}}]}
|
moonshotai has just finished setting up the demo for Kimi-VL-A3B-Thinking on HuggingFace
| 12 |
moonshotai has just finished setting up the demo for Kimi-VL-A3B-Thinking on Hugging Face - everyone can go and try it out!
I tested it with a meme and can see that its OCR and image recognition capabilities are working. However, its knowledge base probably isn't sufficient (after all, the model isn't very large) - it couldn't understand the humor in this meme.
HF demo link : [https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking](https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking)
| 2025-04-10T16:49:22 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2ltz
| false | null |
t3_1jw2ltz
|
/r/LocalLLaMA/comments/1jw2ltz/moonshotai_has_just_finished_setting_up_the_demo/
| false | false | 12 |
{'enabled': True, 'images': [{'id': 'OuumddewKxdKBGfgZx_OKzfm48hoKL68W_xJlIOclbA', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?width=108&crop=smart&auto=webp&s=c9056ff34904fcc9032df6c3e90a77496f17c532', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?width=216&crop=smart&auto=webp&s=99ca7a9bb3e8643909bd3b9c17ad57639e24860d', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?width=320&crop=smart&auto=webp&s=2025da1738c9c47910d2d8a9a0fdbc7e857eac4f', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?width=640&crop=smart&auto=webp&s=a7c9ffc9b18a71756e0379fa5c3d5893a150bc40', 'width': 640}, {'height': 900, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?width=960&crop=smart&auto=webp&s=b322ccc8f9a97697d445d5ebb910b0b8cbef1518', 'width': 960}, {'height': 1013, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?width=1080&crop=smart&auto=webp&s=10cf979af9a40ee50d0c54ac0051566061500a88', 'width': 1080}], 'source': {'height': 1725, 'url': 'https://preview.redd.it/yhk1y3cie1ue1.png?auto=webp&s=94070e14a6f2c15ec354f89e3008d03708e04c3f', 'width': 1838}, 'variants': {}}]}
|
||
Why is the m4 CPU so fast?
| 7 |
I was testing some GGUFs on my M4 base 32GB and I noticed that inference was slightly faster at 100% CPU compared to 100% GPU.
Why is that - is it all because of the memory bandwidth? As in, processing is not really a big part of inference? So a current-gen AMD or Intel processor would be equally fast with good enough bandwidth?
I think that also opens up the possibility of having two instances, one 100% CPU and one 100% GPU, so I could double my M4 token output.
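A quick back-of-the-envelope check of the bandwidth explanation (the bandwidth figures are approximate published specs, and real-world throughput is lower due to overhead):

```python
# Single-stream decoding reads roughly all model weights once per generated token,
# so an upper bound on speed is memory_bandwidth / model_size_in_memory.
def max_tokens_per_s(bandwidth_gb_s, model_size_gb):
    return bandwidth_gb_s / model_size_gb

model_gb = 4.7                              # e.g. an ~8B model at Q4 quantization
print(max_tokens_per_s(120, model_gb))      # M4 base, ~120 GB/s  -> ~25 t/s ceiling
print(max_tokens_per_s(546, model_gb))      # M4 Max, ~546 GB/s   -> ~116 t/s ceiling
```

Since the CPU and GPU on an M4 share the same unified memory, both hit roughly the same bandwidth ceiling, which is likely why the speeds look similar; two instances would split that bandwidth rather than add to it.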
| 2025-04-10T16:49:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw2mck/why_is_the_m4_cpu_so_fast/
|
frivolousfidget
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2mck
| false | null |
t3_1jw2mck
|
/r/LocalLLaMA/comments/1jw2mck/why_is_the_m4_cpu_so_fast/
| false | false |
self
| 7 | null |
I built a Claude Code alternative on top Claude Code prompt
| 9 |
Hi everyone, just something I coded for fun last weekend. If you're looking for a cheaper and customizable Claude Code, you can try this. The prompt is taken from the built code of Claude Code: [https://www.npmjs.com/package/@anthropic-ai/claude-code](https://www.npmjs.com/package/@anthropic-ai/claude-code), so it offers a similar experience.
Here are its highlights:
\- Complete Claude Code replacement with similar UI and UX
\- Built on top of Vercel's AI SDK (\`streamText\` + \`useChat\`).
\- Works with any AI SDK-supported model (OpenAI, Anthropic, Ollama, Google, OpenRouter, etc.)
\- MCP support is in progress and coming soon
It's still experimental and there's lots to do. You can try it with a simple command: \`npx opencoder@latest\`. And here's the repository:
[https://github.com/ducan-ne/opencoder](https://github.com/ducan-ne/opencoder)
| 2025-04-10T16:52:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw2oqs/i_built_a_claude_code_alternative_on_top_claude/
|
ducanchaocacban
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2oqs
| false | null |
t3_1jw2oqs
|
/r/LocalLLaMA/comments/1jw2oqs/i_built_a_claude_code_alternative_on_top_claude/
| false | false |
self
| 9 |
{'enabled': False, 'images': [{'id': '3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=108&crop=smart&auto=webp&s=0d9f8401435fed0f934dac9f8f12b7f05480e1ec', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=216&crop=smart&auto=webp&s=30065d434f38c96637a3fb263a468c45766f03c2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=320&crop=smart&auto=webp&s=58eb4ab37e6ce7544161a493caa8695162abffd2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=640&crop=smart&auto=webp&s=8ac9ce6a0d25fae7f6d6d78c177a7289c0eb8c68', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=960&crop=smart&auto=webp&s=d6d5429dc03cd49933d35e8fab872f0a24e4bc98', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=1080&crop=smart&auto=webp&s=dc2b2fb470fed5b167ca625efb25c8ebae66d3ff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?auto=webp&s=ce7d1ed79b7b2469b3bcac2baa84c5212e6c8126', 'width': 1200}, 'variants': {}}]}
|
Meta Unveils LLaMA 4: A Game-Changer in Open-Source AI - <FrontBackGeek/>
| 1 | 2025-04-10T16:58:11 |
https://frontbackgeek.com/meta-unveils-llama-4-a-game-changer-in-open-source-ai/
|
codeagencyblog
|
frontbackgeek.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2tbf
| false | null |
t3_1jw2tbf
|
/r/LocalLLaMA/comments/1jw2tbf/meta_unveils_llama_4_a_gamechanger_in_opensource/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'V023l44qGIZff9LQGPAug7ej237Y1zpAEhO52nTOX3U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eMEPg0PoMQK6P1BgfAwxQVtrJqyJV9PbPbfOzVcwlpc.jpg?width=108&crop=smart&auto=webp&s=f8d9e0253cbcc825c03a467612e65659b5976ef4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eMEPg0PoMQK6P1BgfAwxQVtrJqyJV9PbPbfOzVcwlpc.jpg?width=216&crop=smart&auto=webp&s=520fb8ce50464a78fd6e7998840134899916f539', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eMEPg0PoMQK6P1BgfAwxQVtrJqyJV9PbPbfOzVcwlpc.jpg?width=320&crop=smart&auto=webp&s=d85f052d422b36a3562d3820dbb47a5044ddb0e5', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/eMEPg0PoMQK6P1BgfAwxQVtrJqyJV9PbPbfOzVcwlpc.jpg?width=640&crop=smart&auto=webp&s=efbb2fc4721126666f7080e6bb37cb18894188d6', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/eMEPg0PoMQK6P1BgfAwxQVtrJqyJV9PbPbfOzVcwlpc.jpg?width=960&crop=smart&auto=webp&s=d345f3e334cfa43ad9d49b9b7187b2e66ee5a1fe', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/eMEPg0PoMQK6P1BgfAwxQVtrJqyJV9PbPbfOzVcwlpc.jpg?auto=webp&s=16126d2e50561c5bb5fbeaf11734c5e4c802d40f', 'width': 1024}, 'variants': {}}]}
|