Column schema (reconstructed from the dataset viewer header):
title: string, 1–300 chars
score: int64, 0–8.54k
selftext: string, 0–40k chars
created: timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29 (nullable)
url: string, 0–878 chars
author: string, 3–20 chars
domain: string, 0–82 chars
edited: timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18
gilded: int64, 0–2
gildings: string, 7 classes
id: string, 7 chars
locked: bool, 2 classes
media: string, 646–1.8k chars (nullable)
name: string, 10 chars
permalink: string, 33–82 chars
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, 4–213 chars
ups: int64, 0–8.54k
preview: string, 301–5.01k chars (nullable)

title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to run llama 3.3 70b locally.
| 3 |
My 5090 is coming tomorrow, and I want to run Llama 3.3 70B locally. I also have 128 GB of system RAM at 6400 MT/s. Could this setup run the model, and with which settings for vLLM?
| 2025-04-23T16:20:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k63bkb/how_to_run_llama_33_70b_locally/
|
anedisi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k63bkb
| false | null |
t3_1k63bkb
|
/r/LocalLLaMA/comments/1k63bkb/how_to_run_llama_33_70b_locally/
| false | false |
self
| 3 | null |
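A minimal sketch of one way to approach the post above, assuming a Q4 GGUF of Llama 3.3 70B (at roughly 40 GB it cannot fit entirely in the 5090's 32 GB of VRAM, so part of the model has to live in system RAM). vLLM is less suited to this kind of CPU offload on a single consumer card, so the sketch uses llama-cpp-python instead; the model path and layer count are placeholders to tune, not a verified configuration.

```python
# Sketch: partial GPU offload of a 70B GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-Instruct-Q4_K_M.gguf",  # hypothetical local path
    n_gpu_layers=40,   # offload as many layers as fit in 32 GB VRAM; lower this on OOM
    n_ctx=8192,        # larger contexts grow the KV cache and need more memory
    n_threads=16,      # CPU threads for the layers left in system RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```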
A summary of the progress AMD has made to improve its AI capabilities in the past 4 months, from SemiAnalysis
| 155 |
In this report, we will discuss the many positive changes AMD has made. They are on the right track but need to increase the R&D budget for GPU hours and make further investments in AI talent. We will provide additional recommendations and elaborate on AMD management’s blind spot: how they are uncompetitive in the race for AI Software Engineers due to compensation structure benchmarking to the wrong set of companies.
| 2025-04-23T16:30:32 |
https://semianalysis.com/2025/04/23/amd-2-0-new-sense-of-urgency-mi450x-chance-to-beat-nvidia-nvidias-new-moat/?access_token=eyJhbGciOiJFUzI1NiIsImtpZCI6InNlbWlhbmFseXNpcy5wYXNzcG9ydC5vbmxpbmUiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJzZW1pYW5hbHlzaXMucGFzc3BvcnQub25saW5lIiwiYXpwIjoiS1NncVhBaGFmZmtwVjQzbmt0UU1INSIsImVudCI6eyJhdWQiOlsiNThZNVhua2U4U1ZnTkFRRm5GZUVIQiJdLCJ1cmkiOlsiaHR0cHM6Ly9zZW1pYW5hbHlzaXMuY29tLzIwMjUvMDQvMjMvYW1kLTItMC1uZXctc2Vuc2Utb2YtdXJnZW5jeS1taTQ1MHgtY2hhbmNlLXRvLWJlYXQtbnZpZGlhLW52aWRpYXMtbmV3LW1vYXQvIl19LCJleHAiOjE3NDgwMDM1MTgsImlhdCI6MTc0NTQxMTUxOCwiaXNzIjoiaHR0cHM6Ly9zZW1pYW5hbHlzaXMucGFzc3BvcnQub25saW5lL29hdXRoIiwic2NvcGUiOiJmZWVkOnJlYWQgYXJ0aWNsZTpyZWFkIGFzc2V0OnJlYWQgY2F0ZWdvcnk6cmVhZCBlbnRpdGxlbWVudHMiLCJzdWIiOiIyaUFXTUs0U0F2RFU3WkpaTGdzR2NYIiwidXNlIjoiYWNjZXNzIn0.K4tPYV6TgV6HszD-hFW0Vql1f9IXKrEx9ZjL2SxfSXAqHYkdk4uCxhwq_Iu4oWCjSyXPCveZLaNDQ19GD3ua9Q
|
takuonline
|
semianalysis.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k63kpq
| false | null |
t3_1k63kpq
|
/r/LocalLLaMA/comments/1k63kpq/a_summary_of_the_progress_amd_has_made_to_improve/
| false | false | 155 |
{'enabled': False, 'images': [{'id': '8bFAF4VphRv-Y90qMsaV2WJPp89zTRQdY9l4kVgqbMg', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?width=108&crop=smart&auto=webp&s=7ccd266831565f607b0603f7296bfbd3de93a733', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?width=216&crop=smart&auto=webp&s=5877a4e8d49686e8d34ec64b6a76cf5efbf0ed38', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?width=320&crop=smart&auto=webp&s=e0417ab156e6ba1e20aa9b48a2d7a657f71658cf', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?width=640&crop=smart&auto=webp&s=0a3b94039246ff5688098f9a39ede8e0e6a75a64', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?width=960&crop=smart&auto=webp&s=3de4ec607ca43b5a9ca0f3da21b2e4c9d2f42c9d', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?width=1080&crop=smart&auto=webp&s=a68e698ab4a4b59adeb31419dbc91b8098e7718d', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/mWvSuKRH-R24cuYFUnmlYRdyhyET4x6NAvj4TSYw978.jpg?auto=webp&s=17d0fe21373376f4d2b0334cd006880841ad7a67', 'width': 1200}, 'variants': {}}]}
|
|
Aider appreciation post
| 38 |
Aider-chat just hits too right for me.
It is powerful, yet light and clean.
It lives in terminal, yet is simply approachable.
It can do all the work, yet encourages you to bring your own context.
It's free, yet it just works.
What more is needed, for one who can code, yet cannot code.
(Disclaimer: No chatgpt was used to write this. Only heart.)
| 2025-04-23T16:34:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k63o9h/aider_appreciation_post/
|
myoddity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k63o9h
| false | null |
t3_1k63o9h
|
/r/LocalLLaMA/comments/1k63o9h/aider_appreciation_post/
| false | false |
self
| 38 | null |
Any LLM backends that auto-unload models like Ollama?
| 6 |
So I've been playing with lots of LLMs over the past couple of years, but now I'm looking to move some of my GPUs to my homelab server and set up a whole-house, multi-purpose AI server. The intent is to run ComfyUI for image generation plus some form of LLM backend.
Currently I run Open WebUI + LiteLLM on my server to hit my gaming rig (which might be running Ollama, Oobabooga, or Koboldcpp), plus 5 separate instances of SillyTavern (one for each person in the house), mostly so we can keep all of our data separate (as in OWUI, everyone uses a different login via passkeys). I'd also like to give the others the ability to do image generation (likely by just attaching it to OWUI, to keep the data separate).
Though I really like the tweakability of Ooba and Kobold, it's really convenient that Ollama has a configurable unload so I don't have to think about it, especially knowing that image/video generation will eat VRAM too.
Are there any other alternatives? As I type this I'm looking at llama-swap which has a TTL function which may do the job. Based on my use case, is that the right way to go?
Hardware is an Epyc 7713 (64-core Zen3) / 512 GB ECC-R DDR4-3200 / 2x 3090
| 2025-04-23T16:37:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1k63qy6/any_llm_backends_that_autounload_models_like/
|
sepffuzzball
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k63qy6
| false | null |
t3_1k63qy6
|
/r/LocalLLaMA/comments/1k63qy6/any_llm_backends_that_autounload_models_like/
| false | false |
self
| 6 | null |
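For reference, the TTL idea mentioned in the post above (llama-swap's time-to-live) boils down to an idle timeout: keep the model resident while requests keep arriving and unload it once it has been quiet for a while. A minimal, generic Python sketch of that pattern follows; it is an illustration of the concept, not llama-swap's actual implementation, and the loader/unloader callables are placeholders.

```python
# Conceptual sketch of TTL-based auto-unload: load on demand, free VRAM after idling.
import threading
import time

class TTLModelSlot:
    def __init__(self, loader, unloader, ttl_seconds=300):
        self.loader, self.unloader, self.ttl = loader, unloader, ttl_seconds
        self.model, self.last_used = None, 0.0
        self.lock = threading.Lock()
        threading.Thread(target=self._reaper, daemon=True).start()

    def get(self):
        with self.lock:
            if self.model is None:
                self.model = self.loader()      # cold start happens here
            self.last_used = time.monotonic()
            return self.model

    def _reaper(self):
        while True:
            time.sleep(5)
            with self.lock:
                idle = time.monotonic() - self.last_used
                if self.model is not None and idle > self.ttl:
                    self.unloader(self.model)   # free VRAM for image/video generation
                    self.model = None
```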
How have you actually implemented LLMs at work or as a consultant?
| 4 |
Hey everyone :)
I’m curious how people here have practically brought LLMs into work settings.
Did you set up a cloud environment and fine-tune an open-source model? Did you buy enterprise access for your whole department? Set up a quantized model behind an API? Distill something yourself? Maybe even buy some sort of Nvidia DGX Pod???
How did you handle infrastructure (MCP? GCP? Hugging Face endpoints?), cost calculations, and version churn? Like, how do you avoid building something that feels outdated 3 months later?
Also: how did you explain LLM limitations to stakeholders who don’t get why hallucinations happen? (Like, “yes, it sounds confident, but it’s sampling from a probability distribution where the tails aren’t well learned due to sparse data.” You know.)
Would love to hear anything ranging from MVP hacks to enterprise-scale rollouts. How did you explain things in front of management?
| 2025-04-23T16:38:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k63ri6/how_have_you_actually_implemented_llms_at_work_or/
|
Proud_Fox_684
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k63ri6
| false | null |
t3_1k63ri6
|
/r/LocalLLaMA/comments/1k63ri6/how_have_you_actually_implemented_llms_at_work_or/
| false | false |
self
| 4 | null |
Longer context for bitnet-b1.58-2B-4T?
| 4 |
I noticed that bitnet-b1.58-2B-4T states "Context Length: Maximum sequence length of 4096 tokens." Has anyone found whether this model can do extended context (eg. 32000) or do we need to stick with other models like Gemma 3 4b for now?
| 2025-04-23T16:45:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k63y5s/longer_context_for_bitnetb1582b4t/
|
pneuny
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k63y5s
| false | null |
t3_1k63y5s
|
/r/LocalLLaMA/comments/1k63y5s/longer_context_for_bitnetb1582b4t/
| false | false |
self
| 4 | null |
Llama 4 - Scout: best quantization resource and comparison to Llama 3.3
| 8 |
The two primary resources I’ve seen for Scout (GGUF for us GPU poor) seem to be Unsloth and Bartowski, both of which seem to do something non-traditional compared to dense models like Llama 3.3 70B. So which one is the best, or am I missing one? At first blush Bartowski seems to perform better, but then again my first attempt with Unsloth was a smaller quant, so I’m curious what others think.
As for Llama 3.3 vs Scout, they seem comparable, with Llama 3.3 maybe having better performance and Scout definitely being far faster at the same performance level.
| 2025-04-23T16:53:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k644of/llama_4_scout_best_quantization_resource_and/
|
silenceimpaired
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k644of
| false | null |
t3_1k644of
|
/r/LocalLLaMA/comments/1k644of/llama_4_scout_best_quantization_resource_and/
| false | false |
self
| 8 |
{'enabled': False, 'images': [{'id': 'ZIUXvi_Ejkf39qQ2uDPz1Ttn_Rs2V1-Zpnt68OBVmMg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?width=108&crop=smart&auto=webp&s=62b31b91a7514f635718f66fc7f8e61280b4e6a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?width=216&crop=smart&auto=webp&s=7796fdea07851b4bbc20ac3d59eae6d27ac6a16f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?width=320&crop=smart&auto=webp&s=21532154e1a123c743e5d1985e2252162e5738a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?width=640&crop=smart&auto=webp&s=44b1adeaa57e15fcbef93bb883728f656779bb85', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?width=960&crop=smart&auto=webp&s=a25bf438028d32ef95e07a106a51932121687d81', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?width=1080&crop=smart&auto=webp&s=f52fcee5324631f071a6a664dced41e108684924', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ILqVJoH0HY0rlgCtiW3ZEKFuq3-bAC_0h2Y6tOPnhf0.jpg?auto=webp&s=5ae031cd92f0392f743da604c6c4151d921fe194', 'width': 1200}, 'variants': {}}]}
|
How do current open weights / local LLMs stack up according to lmarena?
| 0 |
Top: at rank 5 is DeepSeek-V3-0324 with an ELO score of 1402.
Rank 11, Gemma 3, 1372.
Rank 15, QWQ-32B, 1316 ELO score.
Rank 18, Command-A, 1303
Rank 35, Llama-4 , ELO score of 1271.
lmarena dot ai/?leaderboard
| 2025-04-23T17:21:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k64ung/how_do_current_open_weights_local_llms_stack_up/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k64ung
| false | null |
t3_1k64ung
|
/r/LocalLLaMA/comments/1k64ung/how_do_current_open_weights_local_llms_stack_up/
| false | false |
self
| 0 | null |
Example representing 95% of the AI-related content YouTube wants me to watch right now:
| 1 |
[removed]
| 2025-04-23T17:25:11 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k64xon
| false | null |
t3_1k64xon
|
/r/LocalLLaMA/comments/1k64xon/example_representing_95_of_the_airelated_content/
| false | false |
default
| 1 | null |
||
The best translator is a hybrid translator - combining a corpus of LLMs
| 84 | 2025-04-23T17:28:49 |
https://nuenki.app/blog/the_best_translator_is_a_hybrid_translator
|
Nuenki
|
nuenki.app
| 1970-01-01T00:00:00 | 0 |
{}
|
1k650xj
| false | null |
t3_1k650xj
|
/r/LocalLLaMA/comments/1k650xj/the_best_translator_is_a_hybrid_translator/
| false | false | 84 |
{'enabled': False, 'images': [{'id': 'sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=108&crop=smart&auto=webp&s=79a054dd227c6f5432f86d0aad2f733d56deb387', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=216&crop=smart&auto=webp&s=36d6fa0f550c1aa87b8842476a42ab5e7983d775', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=320&crop=smart&auto=webp&s=a9e59b0b9832d1d263060216c6712ab86736cf73', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=640&crop=smart&auto=webp&s=33bb3dd09e1348f194cfb304ced2dd662da82a0f', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=960&crop=smart&auto=webp&s=b61c5111ebbc99dd0da8775eb45acd9ee039349d', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=1080&crop=smart&auto=webp&s=c4871bfcc51572f134a18d7c42ca6e7ba566fac5', 'width': 1080}], 'source': {'height': 2096, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?auto=webp&s=068fb20ca78df0694ec410b05a2982f47c0ae5d0', 'width': 3811}, 'variants': {}}]}
|
||
Why do some models suck at following basic tasks?
| 5 |
I've been working on a RAG web chat application for a couple of weeks. I am using Llama-3.1-Nemotron-Nano-8B to summarise the first question of a user in a chat history (as we all know it from ChatGPT). My prompt basically says to summarise the text into 4 words, no punctuation, no special characters. Unfortunately, the model adds a period to the sentence quite often. I am also working with a lot of abbreviations, sometimes the model just makes up a meaning of an abbreviation that is just wrong and uses it as a summary. Why is that?
I've also been using Llama 3.3 Nemotron to figure out if two chunks of text share a similar meaning. The prompt was to reply "YES" if the chunks are similar, otherwise "NO". Most of the time the model generated an explanation of why they are or aren't similar, sometimes forgetting the YES or NO, sometimes writing it in lowercase. Why is it so hard for models to follow instructions and not imagine something that wasn't asked for?
| 2025-04-23T17:33:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k655l3/why_do_some_models_suck_at_following_basic_tasks/
|
LM1117
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k655l3
| false | null |
t3_1k655l3
|
/r/LocalLLaMA/comments/1k655l3/why_do_some_models_suck_at_following_basic_tasks/
| false | false |
self
| 5 | null |
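One pragmatic workaround for the formatting problems described above, assuming post-processing is acceptable in the pipeline: normalize whatever the model returns instead of relying on it to obey "4 words, no punctuation" or "YES/NO only". A small sketch:

```python
# Sketch: enforce the output contract in code rather than in the prompt alone.
import re

def clean_title(raw: str, max_words: int = 4) -> str:
    words = re.sub(r"[^\w\s-]", "", raw).split()   # drop periods and other punctuation
    return " ".join(words[:max_words])             # hard-cap at four words

def to_yes_no(raw: str) -> str:
    # Collapse verbose "they are similar because ..." answers to a single label.
    return "YES" if re.search(r"\byes\b", raw, re.IGNORECASE) else "NO"

print(clean_title("Summary of RAG chat setup."))                    # -> Summary of RAG chat
print(to_yes_no("yes, both chunks describe the same abbreviation")) # -> YES
```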
LlamaCon is in 6 days
| 102 |
[Zuck, Ghodsi, Nadella](https://preview.redd.it/kcvsj160emwe1.png?width=597&format=png&auto=webp&s=c2f3a091dd458f203a46e49bc23ef13ce69aeeda)
🦙 **LlamaCon – April 29, 2025**
Meta's first-ever developer conference dedicated to their open-source AI, held **in person** at Meta HQ in Menlo Park, CA — with **select sessions live-streamed online**.
Agenda:
**10:00 AM PST – LlamaCon Keynote**
Celebrating the open-source community and showcasing the latest in the Llama model ecosystem.
**Speakers:**
• Chris Cox – Chief Product Officer, Meta
• Manohar Paluri – VP of AI, Meta
• Angela Fan – Research Scientist in Generative AI, Meta
**10:45 AM PST – A Conversation with Mark Zuckerberg & Ali Ghodsi**
Open source AI, building with LLMs, and advice for founders.
**Speakers:**
• Mark Zuckerberg – Founder & CEO, Meta
• Ali Ghodsi – Co-founder & CEO, Databricks
**4:00 PM PST – A Conversation with Mark Zuckerberg & Satya Nadella**
AI trends, real-world applications, and future outlooks.
**Speakers:**
• Mark Zuckerberg – Founder & CEO, Meta
• Satya Nadella – Chairman & CEO, Microsoft
🔗 [Link](https://www.llama.com/events/llamacon/2025/?utm_source=llama-home&utm_medium=llama-referral&utm_campaign=llama-utm&utm_offering=llamacon-learnmore&utm_product=llama)
| 2025-04-23T17:34:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k655wa/llamacon_is_in_6_days/
|
iamn0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k655wa
| false | null |
t3_1k655wa
|
/r/LocalLLaMA/comments/1k655wa/llamacon_is_in_6_days/
| false | false | 102 |
{'enabled': False, 'images': [{'id': 'fdY2l2O4c1HrQvjZVUwfHHMLM4_7v6ierEj4As4rSfA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?width=108&crop=smart&auto=webp&s=f20a5a41af33b03f8b27c649abc17c6ddd62358b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?width=216&crop=smart&auto=webp&s=dabc4cc65be7573303a9bb20ddf13c38ea8bd56c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?width=320&crop=smart&auto=webp&s=bfb62209f8ac5ec1ae7fb3155905fcdbb97f1f72', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?width=640&crop=smart&auto=webp&s=953181b2ca7b3814274602f6fa358f0bc7519113', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?width=960&crop=smart&auto=webp&s=176419541281c23cac0a79d71e414d06b9ee6877', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?width=1080&crop=smart&auto=webp&s=4a3ed0c515e9c8b212889483552a0f5941878f67', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/__nAvfZl_lg7YJNwP-IInXWe8ebatQ8ExlHyPqG5yUM.jpg?auto=webp&s=cd4beb85df8518eb93f029337bd6015a03cf3aef', 'width': 1800}, 'variants': {}}]}
|
|
Unpopular Opinion: I'm Actually Loving Llama-4-Scout
| 52 |
I've seen a lot of negativity surrounding the new Llama-4-Scout, and I wanted to share that my experience has been completely different. I especially love the natural tone and the large-context understanding.
I'm curious to hear if anyone else is having a positive experience with Llama-4-Scout, or if there are specific use cases where it shines. What are your thoughts?
| 2025-04-23T17:41:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k65cmy/unpopular_opinion_im_actually_loving_llama4scout/
|
Far_Buyer_7281
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k65cmy
| false | null |
t3_1k65cmy
|
/r/LocalLLaMA/comments/1k65cmy/unpopular_opinion_im_actually_loving_llama4scout/
| false | false |
self
| 52 | null |
Macbook pro m4 vs windows laptop with rtx4060
| 0 |
So I’ll be getting a new laptop and am torn between Mac and Windows. I’m still new and unsure which would work for me.
| 2025-04-23T17:41:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k65cwd/macbook_pro_m4_vs_windows_laptop_with_rtx4060/
|
One_Pirate_1720
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k65cwd
| false | null |
t3_1k65cwd
|
/r/LocalLLaMA/comments/1k65cwd/macbook_pro_m4_vs_windows_laptop_with_rtx4060/
| false | false |
self
| 0 | null |
Ecne AI Report Builder
| 1 |
I've just finished reworking part of my podcasting script into a standalone little project that searches Google/Brave (using their APIs) with given keywords for website articles on a given topic.
It then processes everything and sends each article to your choice of OpenAI-API-compatible LLM to summarize it with the key information and score how relevant it is to the topic.
It then collects all the summaries scored as highly relevant, plus any additional resources you provide (txt, PDF, docx files), and creates a report paper from this information.
I'm still tweaking and testing different models for the summaries and report generation, but so far Google Gemini 2.0 Flash works well and is free to use with their API. I've also tested QwQ-32B and added some logic to ignore <think> </think> tags so the process only keeps the information requested.
I wanted to make this a separate project from my all-in-one podcast project because of the possibility of using it with a wrapper: asking my local AI to research a topic, setting some guidance (for instance, that I only want information from the past year), and then having the LLM in the backend call the project with those parameters and let it run in the background until the answer is ready.
| 2025-04-23T17:50:56 |
https://github.com/ETomberg391/Ecne-AI-Report-Builder
|
Dundell
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k65l1f
| false | null |
t3_1k65l1f
|
/r/LocalLLaMA/comments/1k65l1f/ecne_ai_report_builder/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?width=108&crop=smart&auto=webp&s=0c7e65c26c355e086ca6eaf6c35e082deb2455ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?width=216&crop=smart&auto=webp&s=96ce943d5df8c47abd4a09b360ce16031d8f1fe5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?width=320&crop=smart&auto=webp&s=aaeff7fc4be39c7ee6aa6b62f2535653fa43c5f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?width=640&crop=smart&auto=webp&s=649a9e037d7fe740daf6f128020e370654f83978', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?width=960&crop=smart&auto=webp&s=b893ccd3d0183e7b9199cba540374da7735f7dda', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?width=1080&crop=smart&auto=webp&s=f36f29044d0828d350b17fe84a98516f0a63b914', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rfSlUIbQtQaIcNKzJHCwDbaotB788eGDy3HPXMsfj4s.png?auto=webp&s=f59e04ccee44914ba1965d2f5e7edfcdc0fe0cd7', 'width': 1200}, 'variants': {}}]}
|
|
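As a rough illustration of the "logic to ignore <think> </think> tags" mentioned in the post above (this is not the repository's actual code), stripping QwQ-style reasoning blocks before using a summary can be done with a single regex:

```python
# Drop <think>...</think> reasoning spans, including multi-line ones.
import re

def strip_think(text: str) -> str:
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>The article mostly covers pricing...</think>Summary: prices rose 12% in Q1."
print(strip_think(raw))   # -> Summary: prices rose 12% in Q1.
```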
Experiment: Can determinism of LLM output be predicted with output probabilities? TL;DR Not that I could find
| 6 |
Graph of probability distributions: the mean over parsed-out answer tokens (blue/left) and the mean over all response tokens (red/right), at varied levels of determinism. "2/5" means the maximum exact-same-response count was 2 out of 5 runs; "5/5" means all 5 runs gave the exact same response.
I was unable to find any connection between probability and determinism.
Data was 100 multiple choice questions from MMLU college math task. More details and experiments at: [https://github.com/breckbaldwin/llm-stability/blob/main/experiments/logprob/analysis.ipynb](https://github.com/breckbaldwin/llm-stability/blob/main/experiments/logprob/analysis.ipynb)
This was in response to a comment from u/randomfoo2 in the thread: [https://github.com/breckbaldwin/llm-stability/blob/main/experiments/logprob/analysis.ipynb](https://github.com/breckbaldwin/llm-stability/blob/main/experiments/logprob/analysis.ipynb)
| 2025-04-23T17:51:44 |
Skiata
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k65lqd
| false | null |
t3_1k65lqd
|
/r/LocalLLaMA/comments/1k65lqd/experiment_can_determinism_of_llm_output_be/
| false | false | 6 |
{'enabled': True, 'images': [{'id': 'qs0Eza5sJEfRpUTMLyui1lmy_SGx3zVoEtmSc-7VCYk', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/bjsr17hvfmwe1.png?width=108&crop=smart&auto=webp&s=305c0fa9719dab8cb8a6c04153fe48f3fbb5dcdd', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/bjsr17hvfmwe1.png?width=216&crop=smart&auto=webp&s=3affaaf5aae4f6cb0ec49d83d43d401cd697bc1f', 'width': 216}, {'height': 224, 'url': 'https://preview.redd.it/bjsr17hvfmwe1.png?width=320&crop=smart&auto=webp&s=85561b087508429cc8d9123454fedbb43b75e920', 'width': 320}, {'height': 448, 'url': 'https://preview.redd.it/bjsr17hvfmwe1.png?width=640&crop=smart&auto=webp&s=646aadffd7a16676db783ad2334f50cf8c1722b7', 'width': 640}, {'height': 672, 'url': 'https://preview.redd.it/bjsr17hvfmwe1.png?width=960&crop=smart&auto=webp&s=7d9bb03153855c7ea94aa25c4ecc592d021f414c', 'width': 960}], 'source': {'height': 700, 'url': 'https://preview.redd.it/bjsr17hvfmwe1.png?auto=webp&s=777bdb864479d3f7bfcd9d56b210205b0ddc2c79', 'width': 1000}, 'variants': {}}]}
|
||
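To make the metric in the post above concrete, here is a toy recreation of how "k/5 determinism" and a mean log-probability could be computed from five repeated runs (the real analysis lives in the linked notebook; the numbers below are made up):

```python
# "k/5" = count of the most frequent exact response across 5 runs.
from collections import Counter
from statistics import mean

def determinism_and_logprob(runs):
    # runs: list of (response_text, [token_logprobs]) from 5 repeated calls
    texts = [text for text, _ in runs]
    most_common_count = Counter(texts).most_common(1)[0][1]   # e.g. 3 -> "3/5"
    mean_logprob = mean(mean(lps) for _, lps in runs)
    return most_common_count, mean_logprob

runs = [("B", [-0.02, -0.4]), ("B", [-0.03, -0.5]), ("C", [-0.9, -1.2]),
        ("B", [-0.05, -0.3]), ("D", [-1.1, -0.8])]
print(determinism_and_logprob(runs))   # -> (3, ...) i.e. 3/5 determinism
```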
AnythingLLM and its ability to control the computer?
| 1 |
[removed]
| 2025-04-23T18:00:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k65tfu/anythingllm_and_its_ability_to_control_the/
|
mayyasayd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k65tfu
| false | null |
t3_1k65tfu
|
/r/LocalLLaMA/comments/1k65tfu/anythingllm_and_its_ability_to_control_the/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'eyOa-uQZ_D7hJ1V2VwxzSaBoMTGHcdX7cnefeUU-8I4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6OBI6ALBeCywOHUFYlFmu8uoUmVc8u-e5ag_mQ-eKp8.jpg?width=108&crop=smart&auto=webp&s=1a2d5f288cefbb765426f48ddc28c1ec314c5cf3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6OBI6ALBeCywOHUFYlFmu8uoUmVc8u-e5ag_mQ-eKp8.jpg?width=216&crop=smart&auto=webp&s=eacccb791530ad5d5e51b04c0e09253d38b2e924', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6OBI6ALBeCywOHUFYlFmu8uoUmVc8u-e5ag_mQ-eKp8.jpg?width=320&crop=smart&auto=webp&s=0f192852fc684743e9dead1604eacd7b9573df0a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6OBI6ALBeCywOHUFYlFmu8uoUmVc8u-e5ag_mQ-eKp8.jpg?auto=webp&s=564f48b6aad52fecca26e3582da86dad4ec921a8', 'width': 480}, 'variants': {}}]}
|
Anyone else dealing with cold start issues when juggling multiple LLMs locally?
| 0 |
I've been experimenting with running multiple LLMs on a single GPU, switching between TinyLlama, Qwen, Mistral, etc. One thing that keeps popping up is cold-start lag when a model hasn’t been used for a bit and needs to be reloaded into VRAM.
Curious how others here are handling this. Are you running into the same thing? Any tricks for speeding up model switching or avoiding reloads altogether?
Just trying to understand if this is a common bottleneck or if I’m overthinking it. Would love to hear how the rest of you are juggling multiple models locally.
Appreciate it.
| 2025-04-23T18:10:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6630c/anyone_else_dealing_with_cold_start_issues_when/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6630c
| false | null |
t3_1k6630c
|
/r/LocalLLaMA/comments/1k6630c/anyone_else_dealing_with_cold_start_issues_when/
| false | false |
self
| 0 | null |
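For the Ollama side of the question above: Ollama's generate/chat API accepts a keep_alive field that controls how long a model stays in VRAM after a request (a duration string, or 0 to unload immediately). A minimal sketch against a default local install; the model tag is a placeholder:

```python
# Sketch: pin a model in VRAM for 30 minutes after this request (or pass 0 to evict it).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:7b",     # hypothetical model tag
        "prompt": "Warm-up ping.",
        "stream": False,
        "keep_alive": "30m",
    },
    timeout=300,
)
print(resp.json()["response"])
```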
Best local open source voice cloning software that supposts Intel ARC B580?
| 1 |
[removed]
| 2025-04-23T18:11:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k663ff/best_local_open_source_voice_cloning_software/
|
Mourek369
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k663ff
| false | null |
t3_1k663ff
|
/r/LocalLLaMA/comments/1k663ff/best_local_open_source_voice_cloning_software/
| false | false |
self
| 1 | null |
Anyone try UI-TARS-1.5-7B new model from ByteDance
| 59 |
In summary, it allows an AI to use your computer or web browser.
source: [https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B)
I tried to use it with Ollama and connected it to UI-TARS Desktop, but it failed to follow the prompt. It just took multiple screenshots. What's your experience with it?
[UI TARS Desktop](https://preview.redd.it/8sfb6fc8lmwe1.png?width=1737&format=png&auto=webp&s=38228461bcca820366ff63549025975f0070f5ec)
| 2025-04-23T18:13:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k665cg/anyone_try_uitars157b_new_model_from_bytedance/
|
Muted-Celebration-47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k665cg
| false | null |
t3_1k665cg
|
/r/LocalLLaMA/comments/1k665cg/anyone_try_uitars157b_new_model_from_bytedance/
| false | false | 59 |
{'enabled': False, 'images': [{'id': 'K-nuCwSqiI4KBASrJ7e6URTfZXINugdkgV3aR0dsGB4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?width=108&crop=smart&auto=webp&s=19a8ec3d013a10f1fbf7923e12c8a06a3b782300', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?width=216&crop=smart&auto=webp&s=6ef1c201d86be18e5fe8b0e020a106e4edc601d1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?width=320&crop=smart&auto=webp&s=f2d490996442de65734f78ba03c4aae07de97a26', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?width=640&crop=smart&auto=webp&s=c79e173fc5596ae79fab0463a41a31cb51d11923', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?width=960&crop=smart&auto=webp&s=ca00f1c20632ba0aab01b98ba5c6552552068451', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?width=1080&crop=smart&auto=webp&s=8d5444fc6aac2f0e65b02b7e348a5b1929de9897', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MEb00d1gLWxp1-4OYIxzng3fr7CjMC7BtYqeV0pZ5Zc.jpg?auto=webp&s=e40f6bbb01ae7a7bd55e99beb176ad796a9bf442', 'width': 1200}, 'variants': {}}]}
|
|
Built a runtime system that stabilizes LLM behavior — no memory, no retrain. Just sharing the trace.
| 1 | 2025-04-23T18:32:00 |
Robin898989
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k66m54
| false | null |
t3_1k66m54
|
/r/LocalLLaMA/comments/1k66m54/built_a_runtime_system_that_stabilizes_llm/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': 'lt01i4diomwe1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/lt01i4diomwe1.png?width=108&crop=smart&auto=webp&s=7b438f282cbff0ede3899af2e8faf360f806bc65', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/lt01i4diomwe1.png?width=216&crop=smart&auto=webp&s=768af8c14a7f96c4ef41c6909ef833a5770a51bf', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/lt01i4diomwe1.png?width=320&crop=smart&auto=webp&s=03f35ef163b8ff386d04734334a4522e14eee87a', 'width': 320}, {'height': 435, 'url': 'https://preview.redd.it/lt01i4diomwe1.png?width=640&crop=smart&auto=webp&s=d87dfc6c8e611fbc79c8f6b819a84f7eb404a84b', 'width': 640}, {'height': 653, 'url': 'https://preview.redd.it/lt01i4diomwe1.png?width=960&crop=smart&auto=webp&s=c1ccdf65248675e1e2e99f26136f8d98fc860dee', 'width': 960}], 'source': {'height': 715, 'url': 'https://preview.redd.it/lt01i4diomwe1.png?auto=webp&s=d9b867997e9d02c260bf7ab123aa7328cb9f7feb', 'width': 1051}, 'variants': {}}]}
|
||
Is this a good PC for MoE models on CPU?
| 4 |
I was thinking about:
- SUPERMICRO X10SRA
- Intel Xeon E5-2699 V4 2,20GHZ
- 4x RAM DIMM ECC REG 64GB
It's pretty cheap and I could connect multiple 3090s to it, but I was wondering: is this a good base for Llama 4 models like Scout and Maverick? The idea is to put a Q4 quant into RAM and then quickly access two 17B experts.
Can I expect 10 t/s?
Modern server motherboards are like 10x more expensive.
| 2025-04-23T18:45:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k66y3b/is_this_a_good_pc_for_moe_models_on_cpu/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k66y3b
| false | null |
t3_1k66y3b
|
/r/LocalLLaMA/comments/1k66y3b/is_this_a_good_pc_for_moe_models_on_cpu/
| false | false |
self
| 4 | null |
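A back-of-envelope sanity check on the 10 t/s question above, under stated assumptions (quad-channel DDR4-2400 on that Xeon, roughly 17B active parameters per token at about 4.5 bits per weight, and decode speed limited by how fast those weights can be read from RAM). Real-world throughput usually lands well below this theoretical ceiling:

```python
# Rough ceiling: memory bandwidth / bytes of active weights read per token.
channels, mts, bytes_per_transfer = 4, 2400, 8                 # quad-channel DDR4-2400
bandwidth_gbs = channels * mts * bytes_per_transfer / 1000     # ~76.8 GB/s theoretical

active_params = 17e9                                           # Scout's active parameters
bits_per_weight = 4.5                                          # rough Q4_K-style average
active_bytes_gb = active_params * bits_per_weight / 8 / 1e9    # ~9.6 GB per token

print(f"{bandwidth_gbs:.1f} GB/s -> at most {bandwidth_gbs / active_bytes_gb:.1f} t/s")
```

Under these assumptions the ceiling is around 8 t/s, so a sustained 10 t/s from CPU RAM alone looks optimistic; offloading shared layers to the 3090s would help.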
Motherboard for Local Server
| 1 |
I'm not familiar with server hardware so I was wondering if anyone in the community had any favorites. Also no preference on CPU support. But was curious if anyone found that one brand works better than another.
| 2025-04-23T19:39:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k68a82/motherboard_for_local_server/
|
Regarded-Trader
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k68a82
| false | null |
t3_1k68a82
|
/r/LocalLLaMA/comments/1k68a82/motherboard_for_local_server/
| false | false |
self
| 1 | null |
Claude web client optimization be like:
| 1 |
[removed]
| 2025-04-23T19:47:34 |
deadb3
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k68hll
| false | null |
t3_1k68hll
|
/r/LocalLLaMA/comments/1k68hll/claude_web_client_optimization_be_like/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'igVob8pEc8yZuX6gIReWl9vYLNhVuKfHemWYdZMaj-s', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?width=108&crop=smart&auto=webp&s=cc2fe8888abfb38481757100baf2645337fce5b2', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?width=216&crop=smart&auto=webp&s=30d13af74dcf24156272607fe67e9c7d24620d82', 'width': 216}, {'height': 133, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?width=320&crop=smart&auto=webp&s=4d4c2e516f9f1c8840054146e2f42a62e09be27f', 'width': 320}, {'height': 267, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?width=640&crop=smart&auto=webp&s=c5252f435794b6a2fc518096a29d8e204024427b', 'width': 640}, {'height': 401, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?width=960&crop=smart&auto=webp&s=88f112d7f5a317a1bd64916f99eb18a39b80fad7', 'width': 960}, {'height': 451, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?width=1080&crop=smart&auto=webp&s=61c93036de4a3966868aa433df4bcade6d17d9c5', 'width': 1080}], 'source': {'height': 1230, 'url': 'https://preview.redd.it/uih09m2a2nwe1.jpeg?auto=webp&s=440e608d903769046ec391206b94be5a80bf9d06', 'width': 2940}, 'variants': {}}]}
|
||
Make a simple AI agent locally using Cogito v1
| 1 | 2025-04-23T21:01:00 |
https://youtu.be/JkoDPJFuE9w?si=3k-34tLWv0rh_ihD
|
planged
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6aa2p
| false |
{'oembed': {'author_name': 'Pendar Hadinezhad', 'author_url': 'https://www.youtube.com/@pendarhadinezhad6084', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/JkoDPJFuE9w?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="معرفی مدل Cogito v1 Preview | مسیر نوین به سوی AGI با IDA"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/JkoDPJFuE9w/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'معرفی مدل Cogito v1 Preview | مسیر نوین به سوی AGI با IDA', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1k6aa2p
|
/r/LocalLLaMA/comments/1k6aa2p/make_a_simple_ai_agent_locally_using_cogito_v1/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'D_3UcH31hoOd-MlAakEEK1zFILIgZGsh4_AAkoDBs2I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7vClrCcgKl-AEVbrPd5nbHva9AEMnCxoeLtkTkGZMJE.jpg?width=108&crop=smart&auto=webp&s=0d64771411859bdb485ef275b00ddd77b2ea3e6b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7vClrCcgKl-AEVbrPd5nbHva9AEMnCxoeLtkTkGZMJE.jpg?width=216&crop=smart&auto=webp&s=87a291ced12cba058178fcd2faab32e58b0b7e17', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7vClrCcgKl-AEVbrPd5nbHva9AEMnCxoeLtkTkGZMJE.jpg?width=320&crop=smart&auto=webp&s=f2ec5edf9266954ea829ff9f4b82b38211e29e7d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7vClrCcgKl-AEVbrPd5nbHva9AEMnCxoeLtkTkGZMJE.jpg?auto=webp&s=ac96d7db2d77889a1866c9b937928457301cafef', 'width': 480}, 'variants': {}}]}
|
||
Bartowski just updated his glm-4-32B quants. working in lmstudio soon?
| 236 | 2025-04-23T21:02:39 |
https://huggingface.co/bartowski/THUDM_GLM-4-32B-0414-GGUF/tree/main
|
ieatrox
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ably
| false | null |
t3_1k6ably
|
/r/LocalLLaMA/comments/1k6ably/bartowski_just_updated_his_glm432b_quants_working/
| false | false | 236 |
{'enabled': False, 'images': [{'id': '1dsrduASN30gTvRnqUSwbFZ2_-PQSB60TEoileTChzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=108&crop=smart&auto=webp&s=07a178b85d55ec32d797f982626a2bff5c10ae0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=216&crop=smart&auto=webp&s=395aa09ceb3ad8f59346f588b7110cf3dcb5bff8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=320&crop=smart&auto=webp&s=ac067de4bd178d306d759828861bf2ed8439a049', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=640&crop=smart&auto=webp&s=e09f35ea9f5809bb0108aaeb81cfcd9b214c0a72', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=960&crop=smart&auto=webp&s=de5afc9d441b131caf9ac1287900180316ea80a3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=1080&crop=smart&auto=webp&s=9cc76b3e21f9f89bca962a8fd7448bcc0f094b5e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?auto=webp&s=c817bd1dcc21c6e7ba9455e6121c7ab1fa45f493', 'width': 1200}, 'variants': {}}]}
|
||
Open Source multi-user event-driven asynchronous in-browser speech-enabled crowd-sourced AI orchestration for Llama, Llava and SD 1.5 supports CLAUDE API and HUGGINGFACE API
| 0 | https://github.com/jimpames/RENTAHAL-FOUNDATION Open Source multi-user event-driven asynchronous in-browser speech-enabled crowd-sourced AI orchestration. It took me almost a year to develop. v1 and v2 are there - I'm not quite finished with the refactor in v2 - almost. No kernel - 100% event driven. | 2025-04-23T21:21:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6art2/open_source_multiuser_eventdriven_asynchronous/
|
CHEVISION
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6art2
| false | null |
t3_1k6art2
|
/r/LocalLLaMA/comments/1k6art2/open_source_multiuser_eventdriven_asynchronous/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'CelucPmnU7i9AKM2viGXWqGa0BlpziPXjTPn03eJewA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?width=108&crop=smart&auto=webp&s=22c216549989573f30079f6c49f571a80e12d1c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?width=216&crop=smart&auto=webp&s=270d8cdf956d0f727a7e9627c56ebe65776fe0cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?width=320&crop=smart&auto=webp&s=abbcfb0fe9e9d22567ea9e5bdc4422e128caeca5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?width=640&crop=smart&auto=webp&s=de25bf42a410b9bc8072f89f12c3213d543a905a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?width=960&crop=smart&auto=webp&s=7803d78b397aecdbcd3311b8041d6c8792626078', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?width=1080&crop=smart&auto=webp&s=c05150e8d6d60f18b9ab76cc7a33aa98ae4c8275', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TaZ-RS1IybOyCkd12zKv039x3EdNF-XZRu-msC6S9aY.jpg?auto=webp&s=9b06191c587dee1ce4aece30a5f84e4ce5000e1d', 'width': 1200}, 'variants': {}}]}
|
Is the performance loss when allocating layers from the GPU to the CPU linear?
| 1 |
[removed]
| 2025-04-23T21:55:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6blql/is_the_performance_loss_when_allocating_layers/
|
Roubbes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6blql
| false | null |
t3_1k6blql
|
/r/LocalLLaMA/comments/1k6blql/is_the_performance_loss_when_allocating_layers/
| false | false |
self
| 1 | null |
LlamaCon is less than a week away. Anyone want to put down some concrete predictions?
| 4 |
I think we'll see:
* Maverick and Scout reasoning models
* Behemoth open-source release. This could be the SOTA open-source non-reasoning model so I really hope they release it.
Things we probably won't see but I'd really want:
* An even smaller llama4 model (maybe even a dense 50B model distilled from Behemoth)
* 2-bit post-trained [ParetoQ](https://www.reddit.com/r/LocalLLaMA/comments/1jig5re/meta_released_a_paper_last_month_that_seems_to/) Maverick and Scout models
| 2025-04-23T22:02:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6brsq/llamacon_is_less_than_a_week_away_anyone_want_to/
|
jd_3d
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6brsq
| false | null |
t3_1k6brsq
|
/r/LocalLLaMA/comments/1k6brsq/llamacon_is_less_than_a_week_away_anyone_want_to/
| false | false |
self
| 4 | null |
Possible to integrate cloud n8n with local LLM?
| 0 |
Working on an internal use AI bot for my job, and currently I have a workflow setup through n8n that contains an AI agent who uses Pinecone as a vector store for RAG within the bot. Everything works great, and I’m currently running Claude 3.7 Sonnet on there, but obviously that requires a paid API key. One of the things my managers would like to move towards is more local hosting to reduce costs over time, starting with the LLM.
Would it be possible to integrate a locally hosted LLM with cloud n8n? Essentially I could swap the LLM model node in my workflow for something that connects to my locally hosted LLM.
If this isn't possible, is my best bet to host both the LLM and n8n locally? Then some vector store like Qdrant locally as well? (I don't believe Pinecone has the best locally hosted options, which is a bummer.)
I greatly appreciate any advice, thanks
| 2025-04-23T22:05:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6bttv/possible_to_integrate_cloud_n8n_with_local_llm/
|
Spartan098
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6bttv
| false | null |
t3_1k6bttv
|
/r/LocalLLaMA/comments/1k6bttv/possible_to_integrate_cloud_n8n_with_local_llm/
| false | false |
self
| 0 | null |
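The usual pattern for the question above is to expose the local model behind an OpenAI-compatible endpoint (Ollama, llama.cpp's server, and vLLM all provide one) and point anything that expects the OpenAI API at that base URL instead. A hedged sketch using the openai Python client and Ollama's default local endpoint; a cloud-hosted n8n would additionally need the endpoint reachable from the cloud (tunnel, VPN, or reverse proxy), and the model tag is a placeholder:

```python
# Sketch: talk to a local model through an OpenAI-compatible base URL.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",   # would need a tunnel/VPN for cloud n8n
    api_key="not-needed-locally",           # placeholder; local servers often ignore it
)

reply = client.chat.completions.create(
    model="llama3.1:8b",                    # hypothetical local model tag
    messages=[{"role": "user", "content": "Answer from the internal knowledge base."}],
)
print(reply.choices[0].message.content)
```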
Science Fair Agents run locally
| 4 |
Corporate AI ML LLM Agent Science Fair Open-Source Framework Development In Progress
We have successfully achieved the main goals of Phase 1 and the initial steps of Phase 2:
✅ Architectural Skeleton Built (Interfaces, Agent Service Components)
✅ Redis Services Implemented and Integrated
✅ Core Task Flow Operational and Resource Monitoring Service (Orchestrator -> Queue -> Worker -> Agent -> State)
✅ Optimistic Locking (Task Assignment & Agent State)
✅ Basic Science Fair Agents and Dynamic Simulation Workflow Modules (OrganicChemistryAgent, MolecularBiologyAgent, FractalAgent, HopfieldAgent, DataScienceAgent, ChaosTheoryAgent, EntropyAgent, AstrophysicsAgent, RoboticsAgent, EnvironmentalScienceAgent, MachineLearningAgent, MemoryAgent, CreativeAgent, ValidationAgent, InformationTheoryAgent, HypothesisAgent, ContextAwareAgent, MultiModalAgent, CollaborativeAgent, TemporalPrimeAgent, CuriosityQRLAgent, LLMAgent, LLaDATaskAgent, Physics, Quantum Qiskit circuit creation/simulation, Generic)
✅ LLMAgent With Interactive NLP/Command Parsing: Prompt console with API calls to Ollama and multi-step commands. (Phase 2 will integrate a local transformers pipeline.)
Now we can confidently move deeper into Phase 2:
1. Refine Performance Metrics: Enhance perf_score with deep and meaningful insight extraction for each agent.
2. Monitoring: Implement the comprehensive metric collection in NodeProbe and aggregation in ResourceMonitoringService.
3. Reinforcement Learning.
Here is one example
[https://github.com/CorporateStereotype/ScienceFair/](https://github.com/CorporateStereotype/ScienceFair/)
| 2025-04-23T23:08:39 |
https://v.redd.it/pasdcmxv1owe1
|
Financial_Pick8394
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6d8zt
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/pasdcmxv1owe1/DASHPlaylist.mpd?a=1748041733%2CYWUzNjA0YzI1MWYwNjg1MDQ5ODc2OWY1MmIyNThhZTNiY2QzYWQ5YzM1NmViZTU0YjhiY2RiOTE4YWQxZDk5Yg%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/pasdcmxv1owe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/pasdcmxv1owe1/HLSPlaylist.m3u8?a=1748041733%2CNWVhYmJiMzRhN2M4OWM0ZGE0Yjc3OWU3NjZkYWIzMGVjMDFlOThhZWY5NjdkNWE0Njg5ODE0NDgzNDllZDM5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pasdcmxv1owe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1094}}
|
t3_1k6d8zt
|
/r/LocalLLaMA/comments/1k6d8zt/science_fair_agents_run_locally/
| false | false | 4 |
{'enabled': False, 'images': [{'id': 'cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?width=108&crop=smart&format=pjpg&auto=webp&s=e34192f8359ce488333781825be6c48f8510c364', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?width=216&crop=smart&format=pjpg&auto=webp&s=bb33e214fdcb8a384035b1b085900fb6656c880f', 'width': 216}, {'height': 210, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?width=320&crop=smart&format=pjpg&auto=webp&s=d1af612f0ab499846a3b79f5c437a19a0ce0d853', 'width': 320}, {'height': 421, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?width=640&crop=smart&format=pjpg&auto=webp&s=740b11959717ac2410aeac0d06432edd95de01d6', 'width': 640}, {'height': 631, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?width=960&crop=smart&format=pjpg&auto=webp&s=d6a3820eadd86983b3051e145ed53a8a50008d07', 'width': 960}, {'height': 710, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1d157e37be41d47975d9506b769a6ada2e745fe5', 'width': 1080}], 'source': {'height': 974, 'url': 'https://external-preview.redd.it/cmR5bHRteHYxb3dlMZcDW0ukQe6jBnQ3FptNd_RPfnjJWOo6z3EdN2Pnmdo9.png?format=pjpg&auto=webp&s=0ac509d8c991df0ba7ae326f9472bbfeac1e3223', 'width': 1480}, 'variants': {}}]}
|
|
Fastest model for some demo slop gen?
| 0 |
Using deepcoder:1.5b, I need to generate a few thousand pages with some roughly believable content. The quality is good enough; the speed, not so much. I don't have TPM figures, but I'm getting about a pageful every 5 seconds. Is it the way I drive it? 2x3090, both GPU/CPU busy... thoughts appreciated.
| 2025-04-23T23:13:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6dczd/fastest_model_for_some_demo_slop_gen/
|
Otherwise-Tiger3359
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6dczd
| false | null |
t3_1k6dczd
|
/r/LocalLLaMA/comments/1k6dczd/fastest_model_for_some_demo_slop_gen/
| false | false |
self
| 0 | null |
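One likely lever for the throughput question above, assuming the pages are currently generated one request at a time: a server with continuous batching (vLLM, or llama.cpp's server with parallel slots) plus many concurrent requests tends to raise pages per minute far more than tuning a single stream. A sketch with placeholder endpoint and model names:

```python
# Sketch: generate demo pages concurrently against an OpenAI-compatible local server.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")   # assumed endpoint

def gen_page(i: int) -> str:
    r = client.chat.completions.create(
        model="deepcoder-1.5b",             # hypothetical served model name
        messages=[{"role": "user", "content": f"Write demo page {i} of filler content."}],
        max_tokens=800,
    )
    return r.choices[0].message.content

with ThreadPoolExecutor(max_workers=32) as pool:
    pages = list(pool.map(gen_page, range(100)))
print(len(pages), "pages generated")
```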
Is there a voice cloning model that's good enough to run with 16GB RAM?
| 1 |
[removed]
| 2025-04-23T23:15:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6dej9/is_there_a_voice_cloning_model_thats_good_enough/
|
idiotbandwidth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6dej9
| false | null |
t3_1k6dej9
|
/r/LocalLLaMA/comments/1k6dej9/is_there_a_voice_cloning_model_thats_good_enough/
| false | false |
self
| 1 | null |
Welcome everyone, let's discuss this question.
| 1 |
[removed]
| 2025-04-23T23:44:14 |
https://bitly.cx/U99u
|
Lonely-Pirate3823
|
bitly.cx
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6e0lj
| false | null |
t3_1k6e0lj
|
/r/LocalLLaMA/comments/1k6e0lj/welcome_everyone_lets_discuss_this_question/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'qc64yiVrVFnLyASLyPIXSXhOTe6ILQqTkPGVv3aTUtQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W9C8a9s1nDE4CI84Lyec7JNTk8Bm8e2QfmM-hIqqaSQ.jpg?width=108&crop=smart&auto=webp&s=6a8b7190cb585fbaf4ead33e2d128b4576a12ef8', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/W9C8a9s1nDE4CI84Lyec7JNTk8Bm8e2QfmM-hIqqaSQ.jpg?width=216&crop=smart&auto=webp&s=35a31c8f99f73cd1847a16141e78e5f249064ee3', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/W9C8a9s1nDE4CI84Lyec7JNTk8Bm8e2QfmM-hIqqaSQ.jpg?width=320&crop=smart&auto=webp&s=795cec0ea949a267c019522f07fbfea145e64cde', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/W9C8a9s1nDE4CI84Lyec7JNTk8Bm8e2QfmM-hIqqaSQ.jpg?width=640&crop=smart&auto=webp&s=34b7d510d9b2941790edaf3777417f42ee8ab432', 'width': 640}], 'source': {'height': 431, 'url': 'https://external-preview.redd.it/W9C8a9s1nDE4CI84Lyec7JNTk8Bm8e2QfmM-hIqqaSQ.jpg?auto=webp&s=93cd3f7611f99238cdf903fbd104fea16d4f5f44', 'width': 828}, 'variants': {}}]}
|
|
Open-source LLM for generating system prompts
| 1 |
[removed]
| 2025-04-23T23:47:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6e3gw/opensource_llm_for_generating_system_prompts/
|
rishabhbajpai24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6e3gw
| false | null |
t3_1k6e3gw
|
/r/LocalLLaMA/comments/1k6e3gw/opensource_llm_for_generating_system_prompts/
| false | false |
self
| 1 | null |
Dual RTX 5060 Ti: The Ultimate Budget Solution for 32GB VRAM LLM Inference at $858 | Hardware Corner
| 0 |
Bandwidth is low compared to top tier cards, but interesting idea.
| 2025-04-24T00:01:59 |
https://www.hardware-corner.net/dual-rtx-5060-ti-price-for-llm-build-20250415/
|
dylan_dev
|
hardware-corner.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ee3u
| false | null |
t3_1k6ee3u
|
/r/LocalLLaMA/comments/1k6ee3u/dual_rtx_5060_ti_the_ultimate_budget_solution_for/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'MNunZFLFh71P6OtzCNtQXG4EHwR90e850fFqcyi58vY', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/ccJDwffzv34hZTE0pZW-Q2ic_BTN9lKdlCc8zYPpZIo.jpg?width=108&crop=smart&auto=webp&s=7d4a31175bdd4b704246e3368af7bf7c8ae45eb0', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/ccJDwffzv34hZTE0pZW-Q2ic_BTN9lKdlCc8zYPpZIo.jpg?width=216&crop=smart&auto=webp&s=9a5988cd27df3a6f603039aa42708df060063d82', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/ccJDwffzv34hZTE0pZW-Q2ic_BTN9lKdlCc8zYPpZIo.jpg?width=320&crop=smart&auto=webp&s=88a2bb2d9d003b4cd2746908db47f64cec7ab7f8', 'width': 320}, {'height': 379, 'url': 'https://external-preview.redd.it/ccJDwffzv34hZTE0pZW-Q2ic_BTN9lKdlCc8zYPpZIo.jpg?width=640&crop=smart&auto=webp&s=5a222ff33790bf805e7b4ef9b49ab55f818fd0d3', 'width': 640}, {'height': 569, 'url': 'https://external-preview.redd.it/ccJDwffzv34hZTE0pZW-Q2ic_BTN9lKdlCc8zYPpZIo.jpg?width=960&crop=smart&auto=webp&s=d080fa9ba6ea5be9b7d887ba6293b947e836a3f3', 'width': 960}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/ccJDwffzv34hZTE0pZW-Q2ic_BTN9lKdlCc8zYPpZIo.jpg?auto=webp&s=b66679c863eabe43c0956676af1c7b7d252703a2', 'width': 1024}, 'variants': {}}]}
|
|
Charlie Mnemonic
| 7 |
Hello. So I became super interested in the open source LLM overlay called Charlie Mnemonic. It was designed as an AI assistant, but what really interests me is the custom, robust, long term memory system. The design is super intriguing, including two layers of long term memory, a layer of episodic memory, a layer of recent memory, the ability to write and read a notes.txt file for even more memory and context, and a really slick memory management and prioritization system.
The best part is that it's all done without actually touching the AI model, mostly via specialized prompt injection.
Anyway, the project was designed for ChatGPT models or Claude, both over the cloud. It keeps track of API costs and all. They also claimed to support local offline LLM models, but never actually finished implementing that functionality.
I spent the last week studying all the code related to forming and sending prompts to figure out why it wouldn't work with a local LLM even though it claims it can. I found several areas that I had to rewrite or add to in order to support local LLM, and even fixed a couple generic bugs along the way (for example, if you set timezone to UTC within the settings, prompts stop working).
I'm making this post in case anyone finds themselves in a similar situation and wants help making the charlie mnemonic overlay work with a locally hosted Ollama LLM, so they can ask for help and I can help, as I'm quite familiar with it at this point.
| 2025-04-24T00:20:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6er8t/charlie_mnemonic/
|
kor34l
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6er8t
| false | null |
t3_1k6er8t
|
/r/LocalLLaMA/comments/1k6er8t/charlie_mnemonic/
| false | false |
self
| 7 | null |
Need model recommendations to parse html
| 4 |
Must run on 8GB VRAM cards... What model can go beyond newspaper3k for this task? The smaller the better!
Thanks
| 2025-04-24T00:21:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6esb4/need_model_recommendations_to_parse_html/
|
skarrrrrrr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6esb4
| false | null |
t3_1k6esb4
|
/r/LocalLLaMA/comments/1k6esb4/need_model_recommendations_to_parse_html/
| false | false |
self
| 4 | null |
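A common pattern for the request above, sketched under assumptions (a local Ollama install and a small instruct model tag that fits in 8 GB): strip the HTML down to visible text first so the prompt stays short, then let the model pull out the fields newspaper3k misses. This is an illustration, not a model recommendation:

```python
# Sketch: crude HTML-to-text, then a small local model extracts structured fields.
import re
import requests

def html_to_text(html: str) -> str:
    html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)    # drop scripts/styles
    return re.sub(r"\s+", " ", re.sub(r"(?s)<[^>]+>", " ", html)).strip()

def extract(html: str) -> str:
    prompt = ("Extract the headline, author, date and body text from this page as JSON:\n\n"
              + html_to_text(html)[:8000])
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "qwen2.5:3b", "prompt": prompt, "stream": False})
    return r.json()["response"]

print(extract("<html><body><h1>Title</h1><p>By A. Writer. Body text...</p></body></html>"))
```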
LMM for LLMs. A practical guide to building AI apps
| 1 |
[removed]
| 2025-04-24T00:46:07 |
https://dev.to/salman_paracha_ea278514b4/an-l-mm-for-llm-agents-254b
|
Necessary_Reveal1460
|
dev.to
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6f9vi
| false | null |
t3_1k6f9vi
|
/r/LocalLLaMA/comments/1k6f9vi/lmm_for_llms_a_practical_guide_to_building_ai_apps/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Ke8l5dm3FqXn8UNxcwtey6VnYADHvVvelf1CM1NjKeE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BsTWCnrWciD8q-MtCeyuNFBL4TBlFmDtH2WOO0oSmG8.jpg?width=108&crop=smart&auto=webp&s=4b993da8f731461e94f1dc38f8ff27b746ff847c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BsTWCnrWciD8q-MtCeyuNFBL4TBlFmDtH2WOO0oSmG8.jpg?width=216&crop=smart&auto=webp&s=ab708cb50126cf179bc4c0c9c1fb88eea7f2b6f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BsTWCnrWciD8q-MtCeyuNFBL4TBlFmDtH2WOO0oSmG8.jpg?width=320&crop=smart&auto=webp&s=1120acc835b09bf4a348496a43706e4349055dd0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BsTWCnrWciD8q-MtCeyuNFBL4TBlFmDtH2WOO0oSmG8.jpg?width=640&crop=smart&auto=webp&s=98ac54c51de521ad9808dcd8067ca78155e53483', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BsTWCnrWciD8q-MtCeyuNFBL4TBlFmDtH2WOO0oSmG8.jpg?width=960&crop=smart&auto=webp&s=ef550b666b5664b1029236c6def6fb39c41f3578', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/BsTWCnrWciD8q-MtCeyuNFBL4TBlFmDtH2WOO0oSmG8.jpg?auto=webp&s=acc2b5e3bf23e6fe733227442a0ba5069658efc2', 'width': 1000}, 'variants': {}}]}
|
|
Native tool calling
| 2 |
Hi folks,
I'm wondering if the community has agreed on what makes a model support "native" tool calling. I will start by ruling out training a model to use a _specific_ tool like was done with llama 3.2 and what OpenAI provides, because I believe those are called built-in tools. Other than that, what criteria should be met?
- Tool use incorporated during training?
- Special tokens dedicated to tool calling? (eg Hermes' <tool_call>)?
- Tool call support in provided default chat template?
- Something else?
Also, I'm wondering if there is any work comparing performance of tool calling between native and non-native models. Or maybe between base non-native models and native fine-tunes.
| 2025-04-24T00:58:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6fikx/native_tool_calling/
|
V0dros
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6fikx
| false | null |
t3_1k6fikx
|
/r/LocalLLaMA/comments/1k6fikx/native_tool_calling/
| false | false |
self
| 2 | null |
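To make the "<tool_call> special tokens" bullet above concrete, here is a rough illustration of the Hermes-style convention the post mentions; templates differ between models, so treat the strings as an example rather than a spec:

```python
# The model wraps a JSON object in <tool_call> tags; the runtime parses and dispatches it.
import json
import re

model_output = (
    'I will check that for you. <tool_call>'
    '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
    '</tool_call>'
)

match = re.search(r"<tool_call>(.*?)</tool_call>", model_output, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    print(call["name"], call["arguments"])   # -> get_weather {'city': 'Berlin'}
```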
Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning
| 9 |
Abstract
>Autoregressive language models, despite their impressive capabilities, struggle with complex reasoning and long-term planning tasks. We introduce discrete diffusion models as a novel solution to these challenges. Through the lens of subgoal imbalance, we demonstrate how diffusion models effectively learn difficult subgoals that elude autoregressive approaches. We propose Multi-Granularity Diffusion Modeling (MGDM), which prioritizes subgoals based on difficulty during learning. On complex tasks like Countdown, Sudoku, and Boolean Satisfiability Problems, MGDM significantly outperforms autoregressive models without using search techniques. For instance, MGDM achieves 91.5\\% and 100\\% accuracy on Countdown and Sudoku, respectively, compared to 45.8\\% and 20.7\\% for autoregressive models. Our work highlights the potential of diffusion-based approaches in advancing AI capabilities for sophisticated language understanding and problem-solving tasks. All associated codes are available at [https://github.com/HKUNLP/diffusion-vs-ar](https://github.com/HKUNLP/diffusion-vs-ar)
| 2025-04-24T00:59:16 |
https://arxiv.org/abs/2410.14157v3
|
ninjasaid13
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6fj84
| false | null |
t3_1k6fj84
|
/r/LocalLLaMA/comments/1k6fj84/beyond_autoregression_discrete_diffusion_for/
| false | false |
default
| 9 | null |
SurveyGO:Open DeepResearch. Automated AI-generated surveys
| 9 |
By TsinghuaNLP team, great job guys !
SurveyGO can turn massive paper piles into high-quality, concise, citation-rich surveys.
👍 Under the hood lies **LLM×MapReduce‑V2**, a novel test-time scaling strategy designed to enhance LLMs' ability to process extremely long inputs.
🌐 Demo: [https://surveygo.thunlp.org/](https://surveygo.thunlp.org/)
📄 Paper: [https://arxiv.org/abs/2504.05732](https://arxiv.org/abs/2504.05732)
💻 Code: [GitHub - thunlp/LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce/)
| 2025-04-24T01:20:06 |
https://surveygo.thunlp.org/
|
Lynncc6
|
surveygo.thunlp.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6fy0j
| false | null |
t3_1k6fy0j
|
/r/LocalLLaMA/comments/1k6fy0j/surveygoopen_deepresearch_automated_aigenerated/
| false | false |
default
| 9 | null |
SEO for AI LLM-based Search Engines | AI Visibility Tracking
| 1 |
[removed]
| 2025-04-24T01:28:43 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6g45u
| false |
{'oembed': {'author_name': 'Jasper Hallamore', 'author_url': 'https://www.youtube.com/@jasper.studio', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/XlhxICYdySE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="SEO for AI LLM-based Search Engines | AI Visibility Tracking"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/XlhxICYdySE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'SEO for AI LLM-based Search Engines | AI Visibility Tracking', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
|
t3_1k6g45u
|
/r/LocalLLaMA/comments/1k6g45u/seo_for_ai_llmbased_search_engines_ai_visibility/
| false | false |
default
| 1 | null |
||
Creating a fine-tuned model for News Evaluations
| 2 |
I'm trying to build a news significance evaluation model. So basically, I have an annotated dataset, it looks a little something like this
title,url,category,final_score,impact,scale,potential,legacy,novelty,credibility,positivity
Top NIH Ebola Specialist Says Quarantines Will Jeopardize Americans,https://www.huffingtonpost.com/entry/ebola-quarantine_n_6049936.html,POLITICS,5.1,5,6,5,4,5,8,3
Longtime Gun Owner Ashton Kutcher Says 'Enough Is Enough' After Vegas Massacre,https://www.huffingtonpost.com/entry/ashton-kutcher-las-vegas-massacre_us_59d3378fe4b048a44324bd09,POLITICS,4.5,5,4,6,4,3,7,4
Basically, each row has a news article's headline and a set of scores ChatGPT generated for how impactful the article is.
The dataset was generated by asking ChatGPT to score each article. I then attempt to fine-tune a Llama 1B using QLoRA so that I have a mini model that generates news significance scores, ideally matching the ChatGPT-annotated dataset. But at inference time I'm getting a variety of issues, like the quantised model just churning out examples from my prompt. For example, the prompt was to produce a structured response of significance values for this news article:
More than 50,000 killed in Gaza since Israel offensive began, Hamas-run ministry says
It then returned
"scale": 2,
"impact": 2.1,
"potential": 3,
"legacy": 1,
"novelty": 2,
"credibility": 8,
"positivity": 8
Which was a calibration example I used in the prompt.
So my prompt was
[https://pastebin.com/ehJ84kS0](https://pastebin.com/ehJ84kS0)
(I attached it as a pastebin because it's too long.)
I asked it for reasoning but it won't provide any.
If someone could point out where I'm going wrong, I'd really appreciate it. I've attached my Google Colab here:
[https://colab.research.google.com/drive/1l-JBypqf-Fh93uKWRAp42mtOy6bgV3nL#scrollTo=81ls3m8Hp4K6](https://colab.research.google.com/drive/1l-JBypqf-Fh93uKWRAp42mtOy6bgV3nL#scrollTo=81ls3m8Hp4K6)
Please let me know if any extra details are needed.
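In case it helps, here's a minimal sketch (my own assumptions, not taken from the notebook above) of how I'd serialize the rows so the completion is only the JSON scores; keeping the target short and strictly JSON makes it harder for the model to parrot calibration examples from the prompt:

```python
import json

def build_example(title: str, scores: dict) -> dict:
    """Turn one annotated row into a prompt/completion pair for SFT."""
    prompt = (
        "Rate the significance of this news headline. Reply with JSON only.\n"
        f"Headline: {title}"
    )
    # The completion contains nothing but the scores, so the fine-tuned model
    # learns to emit fresh JSON instead of copying few-shot examples.
    return {"prompt": prompt, "completion": json.dumps(scores)}

row = build_example(
    "Top NIH Ebola Specialist Says Quarantines Will Jeopardize Americans",
    {"impact": 5, "scale": 6, "potential": 5, "legacy": 4,
     "novelty": 5, "credibility": 8, "positivity": 3},
)
print(row["prompt"])
print(row["completion"])
```

With a format like this you can also drop the in-prompt calibration examples at inference time and rely on the fine-tune itself, which usually reduces the copying behaviour you're describing.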
| 2025-04-24T01:43:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6gf26/creating_a_finetuned_model_for_news_evaluations/
|
mayodoctur
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6gf26
| false | null |
t3_1k6gf26
|
/r/LocalLLaMA/comments/1k6gf26/creating_a_finetuned_model_for_news_evaluations/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
|
Just upgraded from an M1 MacBook Pro to an m4 MacBook Pro... Anyone else get load coil whine with LLMs?
| 2 |
(load = loud .. but honestly it's not loud, relatively speaking :) )
My M1 was dead silent; my new M4 MacBook Pro makes a very noticeable fast chirping sound when running a model in Ollama (it's very faint, but noticeable, and not something the M1 Pro had). Anyone else experience this, or is there something wrong with this thing?
| 2025-04-24T02:13:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6h0au/just_upgraded_from_an_m1_macbook_pro_to_an_m4/
|
cmndr_spanky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6h0au
| false | null |
t3_1k6h0au
|
/r/LocalLLaMA/comments/1k6h0au/just_upgraded_from_an_m1_macbook_pro_to_an_m4/
| false | false |
self
| 2 | null |
SmolBoi: watercooled 3x RTX 3090 FE & EPYC 7642 in O11D (with build pics)
| 66 |
Hi all,
The initial idea for the build started with a single RTX 3090 FE I bought about a year and a half ago, right after the crypto crash. Over the next few months, I bought two more 3090 FEs.
From the beginning, my criteria for this build were:
* Buy components based on good deals I find in local classifieds, ebay, or tech forums.
* Everything that can be bought 2nd hand, shall be bought 2nd hand.
* I already had a Lian Li O11D case (not XL, not Evo), so everything shall fit there.
* Watercooled to keep noise and temps low despite the size.
* ATX motherboard to give myself a bit more space inside the case.
* Xeon Scalable or Epyc: I want plenty of PCIe lanes, U.2 for storage, lots of RAM, plenty of bandwidth, and I want it cheap.
* U.2 SSDs because they're cheaper and more reliable.
Took a couple more months to source all components, but in the end, here is what ended in this rig, along with purchase price:
* Supermicro H12SSL-i: 300€.
* AMD EPYC 7642: 220€ (bought a few of those together)
* 512GB 8x64GB Samsung DDR4-2666 ECCRDIMM: 350€
* 3x RTX 3090 FE: 1550€
* 2x Samsung PM1735 1.6TB U.2 Gen 4 SSD: 125€
* 256GB M.2 Gen 3 NVME: 15€
* 4x Bykski waterblocks: 60€/block
* Bykski waterblock GPU bridge: 24€
* Alphacool Eisblock XPX Pro 1U: 65€
* EVGA 1600W PSU: 100€
* 3x RTX 3090 FE 21-pin power adapter cable: 45€
* 3x PCIe Gen 4 x16 risers: 70€
* EK 360mm 45mm + 2x alphacool 360mm 30mm: 100€
* EK Quantum Kinetic 120mm reservoir: 35€
* Xylem D5 pump: 35€
* 10x Arctic P12 Max: 70€ (9 used)
* Arctic P8 Max: 5€
* tons of fittings from Aliexpress: 50-70€
* Lian Li X11 upright GPU mount: 15€
* Anti-sagging GPU brace: 8€
* 5M fishtank 10x13mm PVC tube: 10€
* Custom Aluminum plate for upright GPU mount: 45€
Total: \~3400€
I'm excluding the Mellanox ConnectX-3 56Gb InfiniBand. It's not technically needed, and it was like 13€.
As you can see in the pictures, it's a pretty tight fit. Took a lot of planning and redesign to make everything fit in.
My initial plan was to just plug the watercooled cards into the motherboard with a triple bridge (Bykski sells those, and they'll even make you a custom bridge if you ask nicely, which is why I went for their blocks). Unbeknownst to me, the FE cards I went with because they're shorter (I thought easier fit) are also quite a bit taller than reference cards. This made it impossible to fit the cards in the case, as even a low-profile fitting adapter (the piece that converts the ports on the block to G1/4 fittings) was too high to fit in my case. I explored other case options that could fit three 360mm radiators but couldn't find any that would also have enough height for the blocks.
This height issue necessitated a radical rethinking of how I'd fit the GPUs. I started playing with one GPU with the block attached inside the case to see how I could fit them, and the idea of dangling two from the top of the case was born. I knew Lian Li sold the upright GPU mount, but that was for the EVO. I didn't want to buy the EVO because that would mean reducing the top radiator to 240mm, and I wanted that to be 45mm to do the heavy lifting of removing most heat.
I used my rudimentary OpenSCAD skills to design a plate that would screw to a 120mm fan and provide mounting holes for the upright GPU bracket. With that, I could hang two GPUs. I used JLCPCB to make 2 of them. With two out of the way, finding a place for the 3rd GPU was much easier. The 2nd plate ended up having the perfect hole spacing for mounting the PCIe riser connector, providing a base for the 3rd GPU. An anti-sagging GPU brace provided the last bit of support needed to keep the 3rd GPU safe.
As you can see in the pictures, the aluminum (2mm 7075) plate is bent. This was because the case was left on its side with the two GPUs dangling for well over a month. It was supposed to be a few hours, but health issues stopped the build abruptly. The motherboard also died on me (a common issue with the H12SSL; cost 50€ to fix at Supermicro, including shipping. The motherboard price includes the repair cost), which delayed things further. The pictures are from reassembling after I got it back.
The loop (from the coldest side) runs out of the bottom radiator, into the two GPUs, on to the 3rd GPU, then the pump, into the CPU, onwards to the top radiator, leading to the side radiator, and back to the bottom radiator. Temps on the GPUs peak ~51C so far. Though the board's BMC monitors GPU temps directly (I didn't know it could), having the warmest water go to the CPU means the fans will ramp up even if there's no CPU load. The pump PWM is not connected, keeping it at max rpm on purpose for high circulation. Cooling is provided by distilled water with a few drops of Iodine. Been running that on my quad P40 rig for months now without issue.
At idle, the rig is very quiet. Fans idle at 1-1.1k rpm. Haven't checked RPM under load.
Model storage is provided by the two Gen4 PM1735s in RAID0 configuration. Haven't benchmarked them yet, but I saw 13GB/s on nvtop while loading Qwen 32B and Nemotron 49B. The GPUs report Gen4 x16 in nvtop, but I haven't checked for errors. I am blown away by the speed with which models load from disk, even when I tested with --no-mmap.
DeepSeek V3 is still downloading...
And now, for some LLM inference numbers using llama.cpp (b5172). I filled the loop yesterday and got Ubuntu installed today, so I haven't gotten to try vLLM yet. GPU power is the default 350W. Apart from Gemma 3 QAT, all models are Q8.
### Mistral-Small-3.1-24B-Instruct-2503 with Draft
```bash
/models/llama.cpp/llama-server -m /models/Mistral-Small-3.1-24B-Instruct-2503-Q8_0.gguf -md /models/Mistral-Small-3.1-DRAFT-0.5B.Q8_0.gguf -fa -sm row --no-mmap -ngl 99 -ngld 99 --port 9009 -c 65536 --draft-max 16 --draft-min 5 --draft-p-min 0.5 --device CUDA2,CUDA1 --device-draft CUDA1 --tensor-split 0,1,1 --slots --metrics --numa distribute -t 40 --no-warmup
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 187.35 | 1044 | 30.92 | 34347.16 | 1154 |
| draft acceptance rate = 0.29055 ( 446 accepted / 1535 generated) | | | | |
### Mistral-Small-3.1-24B no-Draft
```bash
/models/llama.cpp/llama-server -m /models/Mistral-Small-3.1-24B-Instruct-2503-Q8_0.gguf -fa -sm row --no-mmap -ngl 99 --port 9009 -c 65536 --draft-max 16 --draft-min 5 --draft-p-min 0.5 --device CUDA2,CUDA1 --tensor-split 0,1,1 --slots --metrics --numa distribute -t 40 --no-warmup
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 187.06 | 992 | 30.41 | 33205.86 | 1102 |
### Gemma-3-27B with Draft
```bash
/models/llama.cpp/llama-server -m /models/gemma-3-27b-it-Q8_0.gguf -md /models/gemma-3-1b-it-Q8_0.gguf -fa --temp 1.0 --top-k 64 --min-p 0.0 --top-p 0.95 -sm row --no-mmap -ngl 99 -ngld 99 --port 9005 -c 20000 --cache-type-k q8_0 --cache-type-v q8_0 --draft-max 16 --draft-min 5 --draft-p-min 0.5 --device CUDA0,CUDA1 --device-draft CUDA0 --tensor-split 1,1,0 --slots --metrics --numa distribute -t 40 --no-warmup
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 151.36 | 1806 | 14.87 | 122161.81 | 1913 |
| draft acceptance rate = 0.23570 ( 787 accepted / 3339 generated) | | | | |
### Gemma-3-27b no-Draft
```bash
/models/llama.cpp/llama-server -m /models/gemma-3-27b-it-Q8_0.gguf -fa --temp 1.0 --top-k 64 --min-p 0.0 --top-p 0.95 -sm row --no-mmap -ngl 99 --port 9005 -c 20000 --cache-type-k q8_0 --cache-type-v q8_0 --device CUDA0,CUDA1 --tensor-split 1,1,0 --slots --metrics --numa distribute -t 40 --no-warmup
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 152.85 | 1957 | 20.96 | 94078.01 | 2064 |
### QwQ-32B.Q8
```bash
/models/llama.cpp/llama-server -m /models/QwQ-32B.Q8_0.gguf --temp 0.6 --top-k 40 --repeat-penalty 1.1 --min-p 0.0 --dry-multiplier 0.5 -fa -sm row --no-mmap -ngl 99 --port 9008 -c 80000 --samplers "top_k;dry;min_p;temperature;typ_p;xtc" --cache-type-k q8_0 --cache-type-v q8_0 --device CUDA0,CUDA1 --tensor-split 1,1,0 --slots --metrics --numa distribute -t 40 --no-warmup
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 132.51 | 2313 | 19.50 | 119326.49 | 2406 |
### Gemma-3-27B QAT Q4
```bash
/models/llama.cpp/llama-server -m /models/gemma-3-27b-it-q4_0.gguf -fa --temp 1.0 --top-k 64 --min-p 0.0 --top-p 0.95 -sm row -ngl 99 -c 65536 --cache-type-k q8_0 --cache-type-v q8_0 --device CUDA0 --tensor-split 1,0,0 --slots --metrics --numa distribute -t 40 --no-warmup --no-mmap --port 9004
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 1042.04 | 2411 | 36.13 | 2673.49 | 2424 |
| 634.28 | 14505 | 24.58 | 385537.97 | 23418 |
### Qwen2.5-Coder-32B
```bash
/models/llama.cpp/llama-server -m /models/Qwen2.5-Coder-32B-Instruct-Q8_0.gguf --top-k 20 -fa --top-p 0.9 --min-p 0.1 --temp 0.7 --repeat-penalty 1.05 -sm row -ngl 99 -c 65535 --samplers "top_k;dry;min_p;temperature;typ_p;xtc" --cache-type-k q8_0 --cache-type-v q8_0 --device CUDA0,CUDA1 --tensor-split 1,1,0 --slots --metrics --numa distribute -t 40 --no-warmup --no-mmap --port 9005
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 187.50 | 11709 | 15.48 | 558661.10 | 19390 |
### Llama-3_3-Nemotron-Super-49B
```bash
/models/llama.cpp/llama-server -m /models/Llama-3_3-Nemotron-Super-49B/nvidia_Llama-3_3-Nemotron-Super-49B-v1-Q8_0-00001-of-00002.gguf -fa -sm row -ngl 99 -c 32768 --device CUDA0,CUDA1,CUDA2 --tensor-split 1,1,1 --slots --metrics --numa distribute -t 40 --no-mmap --port 9001
```
| prompt eval tk/s | prompt tokens | eval tk/s | total time | total tokens |
|------------------|---------------|-----------|------------|--------------|
| 120.56 | 1164 | 17.21 | 68414.89 | 1259 |
| 70.11 | 11644 | 14.58 | 274099.28 | 13219 |
| 2025-04-24T02:25:43 |
https://www.reddit.com/gallery/1k6hah2
|
FullstackSensei
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6hah2
| false | null |
t3_1k6hah2
|
/r/LocalLLaMA/comments/1k6hah2/smolboi_watercooled_3x_rtx_3090_fe_epyc_7642_in/
| false | false | 66 | null |
|
Calorie Tracking with Llama3.2 Vision and Ollama
| 1 |
[removed]
| 2025-04-24T03:21:59 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ienj
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/amh0jnf7bpwe1/DASHPlaylist.mpd?a=1748061506%2CYmM0MzNjNmEzNzRiYmE0ZmE5ODcxNGQyNmRjZjUxNjlkMWI5NTI4ODYwNjU5ZDFhNWRlMjFhOTEyNjIzYmZlYg%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/amh0jnf7bpwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/amh0jnf7bpwe1/HLSPlaylist.m3u8?a=1748061506%2CMjI1MzdhNDJlOTlmMDk3NGE5ZmMzMmRlZDNlNjY2MzdmNDQ0NzQzM2VkNjlmNmJkMTdhMzI4MWNiNWQ5YjVkZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/amh0jnf7bpwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
|
t3_1k6ienj
|
/r/LocalLLaMA/comments/1k6ienj/calorie_tracking_with_llama32_vision_and_ollama/
| false | false |
default
| 1 | null |
||
Calorie Tracking with Llama3.2 Vision and Ollama
| 1 |
[removed]
| 2025-04-24T03:26:14 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ihdp
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hrd6exv1cpwe1/DASHPlaylist.mpd?a=1748061568%2CY2RkMWFiMGJhYTZjNGU5OTcxMTcyMDdkNmQ5OTJiOGYzNTE3NWU0Y2Y1NzkzNmRkYWNkZjYyNDM5ZDZiNGE5OA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/hrd6exv1cpwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/hrd6exv1cpwe1/HLSPlaylist.m3u8?a=1748061568%2CMDJhMDdmNjI0ODI5ZDg2ODIwMGEwZWI1NGQ1YWFhZGJkODhhNjVkZmY3MTM2OGNkYWRjMWIxZmVkZDg2Yzc2NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hrd6exv1cpwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
|
t3_1k6ihdp
|
/r/LocalLLaMA/comments/1k6ihdp/calorie_tracking_with_llama32_vision_and_ollama/
| false | false |
default
| 1 | null |
||
Calorie Tracking with Llama3.2 Vision and Ollama
| 1 |
[removed]
| 2025-04-24T03:48:03 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ivvc
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xx8a7hizfpwe1/DASHPlaylist.mpd?a=1748061867%2CMjViZGZiMTZmYzU0N2Q4ZTE4MDU1MTY3N2M3MTU0NjExMWRjOTFjNjkxYmE5YjkyZjE0MWZhODQwYWU4MmRkMw%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/xx8a7hizfpwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xx8a7hizfpwe1/HLSPlaylist.m3u8?a=1748061867%2COTcxNzEwMmNkYjgxMjA3NGQ1MmU1MWY5OTg3NGNkY2RiODNhZDQwZDYxZjNmZjM0NThhNWFkNmFlMjNhNjY0NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xx8a7hizfpwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
|
t3_1k6ivvc
|
/r/LocalLLaMA/comments/1k6ivvc/calorie_tracking_with_llama32_vision_and_ollama/
| false | false |
default
| 1 | null |
||
Calorie Tracking with Llama3.2 Vision and Ollama!
| 1 |
[removed]
| 2025-04-24T03:51:26 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6iy2q
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/omrsbpkkgpwe1/DASHPlaylist.mpd?a=1748061915%2COWRjMzU1ZDIwYzk2YzUxOThhMDIzMTU1Y2FmM2Q4M2Y5MjQ0MzllYTY1OGJhMTdmM2NhMTUzMzFiNzhmMDE4ZA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/omrsbpkkgpwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/omrsbpkkgpwe1/HLSPlaylist.m3u8?a=1748061915%2CMTc0YzlmNTVhOTU5YTNhZTBiNDczNTI4NzZkYmIyZWYyNjNlYjY3MjE0YjkxYTk3OWY2NDY2ZDJlZjAxNzJjYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/omrsbpkkgpwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
|
t3_1k6iy2q
|
/r/LocalLLaMA/comments/1k6iy2q/calorie_tracking_with_llama32_vision_and_ollama/
| false | false |
default
| 1 | null |
||
Skywork-R1V2-38B - New SOTA open-source multimodal reasoning model
| 180 | 2025-04-24T04:16:54 |
https://huggingface.co/Skywork/Skywork-R1V2-38B
|
ninjasaid13
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6je2v
| false | null |
t3_1k6je2v
|
/r/LocalLLaMA/comments/1k6je2v/skyworkr1v238b_new_sota_opensource_multimodal/
| false | false | 180 |
{'enabled': False, 'images': [{'id': 'RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?width=108&crop=smart&auto=webp&s=70a0ba4d7cce54fe987802e25b81bd5b2b64fe86', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?width=216&crop=smart&auto=webp&s=c8d75b989bd3a50d903912c4ab1d2884096b164e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?width=320&crop=smart&auto=webp&s=9f7bfd8cb7ea282377496346c8298850a193149e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?width=640&crop=smart&auto=webp&s=df7083b743c70efd512caf939d946bd65171d252', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?width=960&crop=smart&auto=webp&s=bbc2768290f8f3f1d937e31d5fe50d2fbaacf579', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?width=1080&crop=smart&auto=webp&s=dd0d4a39944c67ba5415396606ad8ce65934d63c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RTiZ46sO11nfXyNlVM8vyr9cqgUVM4y93u2zm8v-5Bg.png?auto=webp&s=cd2875a6aabcc99fe3c95d847074adf23e265c69', 'width': 1200}, 'variants': {}}]}
|
||
How good is QwQ 32B's OCR?
| 1 |
[removed]
| 2025-04-24T04:40:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6jse5/how_good_is_qwq_32bs_ocr/
|
Due-Employee4744
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6jse5
| false | null |
t3_1k6jse5
|
/r/LocalLLaMA/comments/1k6jse5/how_good_is_qwq_32bs_ocr/
| false | false |
self
| 1 | null |
LLM content on YT becoming repetitive
| 44 |
I've been following the discussion and content around LLMs on YouTube very closely since the beginning of the AI craze and am subscribed to most LLM-related channels. In the beginning, and throughout most of the last one or two years, there was a ton of new content every day, covering all aspects, and it felt very diverse: from RAG to inference, to evals and frameworks like DSPy, chunking strategies and ingestion pipelines, fine-tuning libraries like Unsloth, and agentic frameworks like crewAI and AutoGen. The AI IDEs like Cursor and Windsurf and things like LiteLLM need to be mentioned as well, and there are many more that don't come to mind right now.
Fast forward to today and the channels are still around, but they seem to cover only specific topics like MCP and then all at once. Clearly, once something new has been talked about you can't keep bringing it up. But at the same time I have a hard time believing that even in those established projects there's nothing new to talk about.
There would be so much room to speak about the awesome stuff you could do with all these tools, but to me it seems content creators have fallen into a routine. Do you share the same impression? What are channels you are watching that keep bringing innovative and inspiring content still at this stage of where the space has gotten to?
| 2025-04-24T04:40:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6jslj/llm_content_on_yt_becoming_repetitive/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6jslj
| false | null |
t3_1k6jslj
|
/r/LocalLLaMA/comments/1k6jslj/llm_content_on_yt_becoming_repetitive/
| false | false |
self
| 44 | null |
How good is QwQ 32B's OCR?
| 5 |
Is it the same as Qwen2.5 VL? I need a model to analyse Mathematics and Physics textbooks, and QwQ seems to be the best in reasoning at its size, but I don't know if it could handle the complex images in them. The Kaggle page for QwQ doesn't mention images.
| 2025-04-24T04:49:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6jxpe/how_good_is_qwq_32bs_ocr/
|
Impressive_Chicken_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6jxpe
| false | null |
t3_1k6jxpe
|
/r/LocalLLaMA/comments/1k6jxpe/how_good_is_qwq_32bs_ocr/
| false | false |
self
| 5 | null |
What GPU do you use?
| 2 |
Hey everyone, I’m doing some research for my local inference engine project. I’ll follow up with more polls. Thanks for participating!
[View Poll](https://www.reddit.com/poll/1k6k0jr)
| 2025-04-24T04:54:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6k0jr/what_gpu_do_you_use/
|
okaris
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6k0jr
| false | null |
t3_1k6k0jr
|
/r/LocalLLaMA/comments/1k6k0jr/what_gpu_do_you_use/
| false | false |
self
| 2 | null |
How much vram do you have?
| 14 |
Hey everyone, I’m doing some research for my local inference engine project. I’ll follow up with more polls. Thanks for participating!
[View Poll](https://www.reddit.com/poll/1k6k1df)
| 2025-04-24T04:55:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6k1df/how_much_vram_do_you_have/
|
okaris
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6k1df
| false | null |
t3_1k6k1df
|
/r/LocalLLaMA/comments/1k6k1df/how_much_vram_do_you_have/
| false | false |
self
| 14 | null |
What OS do you use?
| 37 |
Hey everyone, I’m doing some research for my local inference engine project. I’ll follow up with more polls. Thanks for participating!
[View Poll](https://www.reddit.com/poll/1k6k1pq)
| 2025-04-24T04:56:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6k1pq/what_os_do_you_use/
|
okaris
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6k1pq
| false | null |
t3_1k6k1pq
|
/r/LocalLLaMA/comments/1k6k1pq/what_os_do_you_use/
| false | false |
self
| 37 | null |
Time to get into LLM's in a big way this next Monday
| 0 |
My new system is finally being built and should be ready by Monday.
285K + 96GB of DDR5-6600 + 5090 + uber-fast SSD, all on Ubuntu.
If the build shop could have gotten me to 6600 MHz on the AMD, I would have gone with the better (for gamers) 9950X3D.
I certainly wouldn't want to run a large LLM totally in system RAM, as the dual-channel nature of consumer CPUs is a bottleneck. But I do see running something like a 40B model at Q8 with 28GB on the 5090 and 12GB in system RAM. Squeeze a little more and perhaps the 70B class of models becomes workable.
So, I'm looking for suggestions as to what possibilities this'll open up in terms of "local quality" and training possibilities. I do Python programming to make Stable Diffusion super fast (294 images per second at 512x512 on my 4090), so I can get into the low-level stuff quite readily. I like to experiment and wonder what interesting things I could try on the new box.
NOTE: The more I think about it, instead of refurbishing my current system and selling it, I'll likely move my 4090 to the new system as a little brother. Today I told the guy building it to upgrade the PSU from 1200 watts to 1600, just in case.
| 2025-04-24T05:27:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6kjmo/time_to_get_into_llms_in_a_big_way_this_next/
|
Guilty-History-9249
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6kjmo
| false | null |
t3_1k6kjmo
|
/r/LocalLLaMA/comments/1k6kjmo/time_to_get_into_llms_in_a_big_way_this_next/
| false | false |
self
| 0 | null |
DEEPSEEK UNMASKED: EXPOSING THE CCP'S LATEST TOOL FOR SPYING, STEALING, AND SUBVERTING U.S. EXPORT CONTROL RESTRICTIONS
| 0 | 2025-04-24T06:31:35 |
https://selectcommitteeontheccp.house.gov/sites/evo-subsites/selectcommitteeontheccp.house.gov/files/evo-media-document/DeepSeek%20Final.pdf
|
NunyaBuzor
|
selectcommitteeontheccp.house.gov
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6linn
| false | null |
t3_1k6linn
|
/r/LocalLLaMA/comments/1k6linn/deepseek_unmasked_exposing_the_ccps_latest_tool/
| false | false |
default
| 0 | null |
|
Finetuning or RL on Llama4
| 2 |
Who has successfully fine-tuned Llama 4, and what's your setup?
| 2025-04-24T06:59:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6lxle/finetuning_or_rl_on_llama4/
|
MutedSwimming3347
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6lxle
| false | null |
t3_1k6lxle
|
/r/LocalLLaMA/comments/1k6lxle/finetuning_or_rl_on_llama4/
| false | false |
self
| 2 | null |
Someone found my open AI server and used it to process disturbing amounts of personal data, for over a month
| 1 |
[removed]
| 2025-04-24T07:02:12 |
ufaruq
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6lz5m
| false | null |
t3_1k6lz5m
|
/r/LocalLLaMA/comments/1k6lz5m/someone_found_my_open_ai_server_and_used_it_to/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'w7jiUAHLpahDlIk1F1kwR6--Zft_wChJqGGFJIRgCnI', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?width=108&crop=smart&auto=webp&s=b820248325bd03486ad6c3759a300ed0861321e6', 'width': 108}, {'height': 36, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?width=216&crop=smart&auto=webp&s=48b4219bc1426e85c68eb581cd91ea524dab15a9', 'width': 216}, {'height': 54, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?width=320&crop=smart&auto=webp&s=7ec97501721d0a462e59e4e7ba535d784bc535f6', 'width': 320}, {'height': 109, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?width=640&crop=smart&auto=webp&s=c71bac9d12a715964febade5d55570f2986807bd', 'width': 640}, {'height': 164, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?width=960&crop=smart&auto=webp&s=86572fe464be25821b6c382fb5b8b21224037ce9', 'width': 960}, {'height': 184, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?width=1080&crop=smart&auto=webp&s=3c40e9af75562346d63c3d1709db1481c45d8824', 'width': 1080}], 'source': {'height': 484, 'url': 'https://preview.redd.it/r38ea60eeqwe1.png?auto=webp&s=d5bfa8c4eeab2d6b6e459ab4eb53568c9aeed1f5', 'width': 2829}, 'variants': {}}]}
|
||
Details on OpenAI's upcoming 'open' AI model
| 286 |
- In very early stages, targeting an early summer launch
- Will be a reasoning model, aiming to be the top open reasoning model when it launches
- Exploring a highly permissive license, perhaps unlike Llama and Gemma
- Text in text out, reasoning can be tuned on and off
- Runs on "high-end consumer hardware"
| 2025-04-24T07:53:04 |
https://techcrunch.com/2025/04/23/openai-seeks-to-make-its-upcoming-open-ai-model-best-in-class/
|
ayyndrew
|
techcrunch.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6mols
| false | null |
t3_1k6mols
|
/r/LocalLLaMA/comments/1k6mols/details_on_openais_upcoming_open_ai_model/
| false | false | 286 |
{'enabled': False, 'images': [{'id': 'jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?width=108&crop=smart&auto=webp&s=08f7b09e521df7a12f237a2f3ef2eb93990f11c9', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?width=216&crop=smart&auto=webp&s=a1b5d8f2aee34326ede75c9b839ff8d2524ee300', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?width=320&crop=smart&auto=webp&s=13442e065911e05f4c31ced72770d8f1efce557e', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?width=640&crop=smart&auto=webp&s=462c8e493352f840f1bcce92fb0555e2b79db252', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?width=960&crop=smart&auto=webp&s=059546cfb3cb36a198376574873cdceeff1ec15f', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?width=1080&crop=smart&auto=webp&s=845073e7a9ef63fd09596cb4742367cacb2bcb55', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/jBLPKrE-sNiDaxe0zsX1DO2Ghuda8KNpR6LvSh4IYoc.jpeg?auto=webp&s=634941914287517efb5bd433401bb58c09185a68', 'width': 1200}, 'variants': {}}]}
|
|
Serving new models with vLLM with efficient quantization
| 20 |
Hey folks,
I'd love to hear from vLLM users what you guys' playbooks for serving recently supported models are.
I'm running the [vLLM openai compatiable docker container](https://hub.docker.com/r/vllm/vllm-openai/tags) on an inferencing server.
Up until now, i've taken the easy path of using pre-quantized AWQ checkpoints from the huggingface hub. But this often excludes a lot of recent models. Conversely, GUUFs are readily available pretty much on day 1. I'm left with a few options:
1. Quantize the target model to AWQ myself either in the vllm container or in a separate env then inject it into the container
2. Try the experimental GGUF support in vLLM (would love to hear people's experiences with this)
3. Experiment with the [other supported quantization formats](https://docs.vllm.ai/en/stable/features/quantization/index.html) like BnB when such checkpoints are available on HF hub.
There are also the new Unsloth dynamic 4-bit quants, which sound like very good bang for buck in terms of VRAM. They seem to be based on BnB with new features. Has anyone managed to get models in this format working in vLLM?
Thanks for any inputs!
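For option 1, here's a minimal sketch of self-quantizing to AWQ with the AutoAWQ library before mounting the result into the vLLM container. The model name, output path, and quant settings are placeholders, and the exact API may differ between AutoAWQ versions:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2.5-7B-Instruct"   # placeholder target model
quant_path = "./qwen2.5-7b-instruct-awq"  # directory you mount into the container
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 checkpoint, run calibration, and write an AWQ checkpoint
# that vLLM can then serve with --quantization awq.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```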
| 2025-04-24T08:07:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6mvoi/serving_new_models_with_vllm_with_efficient/
|
Swedgetarian
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6mvoi
| false | null |
t3_1k6mvoi
|
/r/LocalLLaMA/comments/1k6mvoi/serving_new_models_with_vllm_with_efficient/
| false | false |
self
| 20 | null |
Code Agents course on DeepLearning AI with Hugging Face smolagents
| 6 |
Most AI agents use large language models to generate one tool call at a time. Code Agents take a different approach.
Unlike tool-calling agents, which follow a step-by-step process (call a function, observe the result, decide what to do next, and repeat), Code Agents generate an entire block of code that performs a sequence of actions, then execute that code in one go.
In our new course with HuggingFace, Thom Wolf and Aymeric Roucher teach you how to build code agents.
This approach can make agents more efficient, more reliable, and better suited for complex tasks.
You’ll learn how to build code agents using the smolagents framework, run LLM-generated code safely with sandboxing and constrained execution, and evaluate your agents in both single and multi-agent systems.
https://preview.redd.it/paowhikcqqwe1.png?width=2461&format=png&auto=webp&s=92af4907ff1af4eb3f2bc137a7fcd81786cf9311
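For anyone wondering what a code agent looks like in practice, here's a minimal sketch using smolagents; the tool and model classes are the ones from the quickstart as I remember them, so double-check the current docs before copying:

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# A CodeAgent writes and executes a Python block per step instead of
# emitting one JSON tool call at a time.
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=HfApiModel(),  # defaults to a hosted model via the HF Inference API
)

agent.run("How many seconds would it take a leopard at full speed to run the length of Pont des Arts?")
```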
| 2025-04-24T08:08:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6mw32/code_agents_course_on_deeplearning_ai_with/
|
Zealousideal-Cut590
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6mw32
| false | null |
t3_1k6mw32
|
/r/LocalLLaMA/comments/1k6mw32/code_agents_course_on_deeplearning_ai_with/
| false | false | 6 |
{'enabled': False, 'images': [{'id': 'VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?width=108&crop=smart&auto=webp&s=9aee748d4d2fcec0135a64cf6cd4e26b02766f67', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?width=216&crop=smart&auto=webp&s=5162e71298930d3ba8ad81c02d9167366f1b5702', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?width=320&crop=smart&auto=webp&s=7d1220b9d6387f89a1433ea0c4307402310e2912', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?width=640&crop=smart&auto=webp&s=3eaac387de4da1713c88d8a590e1d10c30e53f17', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?width=960&crop=smart&auto=webp&s=6211d0ad5e34294ad774fa1a63b1fae8114a5271', 'width': 960}, {'height': 622, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?width=1080&crop=smart&auto=webp&s=0efe3aaae78fa1916b26798173e5b5da0e8d36d4', 'width': 1080}], 'source': {'height': 1418, 'url': 'https://external-preview.redd.it/VFYD-TVqj8_HS9vNz2i3eLKmDoCzNMmhGLlTrlOdaCY.png?auto=webp&s=e894cebb68fb2c3ced999879ca5dfa6c2089ca1b', 'width': 2461}, 'variants': {}}]}
|
|
Looking for better alternatives to Ollama - need faster model updates and easier tool usage
| 20 |
I've been using Ollama because it's super straightforward - just check the model list on their site, find one with tool support, download it, and you're good to go. But I'm getting frustrated with how slow they are at adding support for new models like Llama 4 and other recent releases.
What alternatives to Ollama would you recommend that:
1. Can run in Docker
2. Add support for new models more quickly
3. Have built-in tool/function calling support without needing to hunt for templates
4. Are relatively easy to set up (similar to Ollama's simplicity)
I'm looking for something that gives me access to newer models faster while still maintaining the convenience factor. Any suggestions would be appreciated!
*Edit: I'm specifically looking for self-hosted options that I can run locally, not cloud services.*
| 2025-04-24T08:10:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6mx40/looking_for_better_alternatives_to_ollama_need/
|
netixc1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6mx40
| false | null |
t3_1k6mx40
|
/r/LocalLLaMA/comments/1k6mx40/looking_for_better_alternatives_to_ollama_need/
| false | false |
self
| 20 | null |
o4-mini ranks less than DeepSeek V3 | o3 ranks inferior to Gemini 2.5 | freemium > premium at this point!ℹ️
| 76 | 2025-04-24T08:35:52 |
https://www.reddit.com/gallery/1k6n9t6
|
BidHot8598
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6n9t6
| false | null |
t3_1k6n9t6
|
/r/LocalLLaMA/comments/1k6n9t6/o4mini_ranks_less_than_deepseek_v3_o3_ranks/
| false | false | 76 | null |
||
Best free question answering model for PC parts and smartphone questions?
| 1 |
[removed]
| 2025-04-24T08:38:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6naue/best_free_question_answering_model_for_pc_parts/
|
AlexGSquadron
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6naue
| false | null |
t3_1k6naue
|
/r/LocalLLaMA/comments/1k6naue/best_free_question_answering_model_for_pc_parts/
| false | false |
self
| 1 | null |
Easy RAG for business data?
| 0 |
Hi All.
I'm fairly new to LLM's, so be gentle with me :)
I'm looking for the best approach and tooling to create a RAG application that can analyze and use business data for a larger corporation. I've tried to create a simple test with Ollama & Open WebUI, but I'm struggling to get good results.
The end-goal would be to have a LLM that can be prompted like "How many facilities of type x do we have in Asia?" or "How much of product X is being shipped from Europe to USA total in 2025"? Or "Create a barchart showing the product production in Europe by country" etc.
Here's some more info: I can structure the data any way I want, since I own the application that contains the data. The data represents the corporation's many facilities around the globe (their name, address, capacities, etc.) plus the amount of goods produced and their types. It also contains a bunch of data about the amount of goods shipped between facilities per year, etc.
My initial idea was to upload a bunch of .json files to the "knowledge", where each json file contains the basic data for each facility + their annual shipments.
So far, I've just uploaded a bunch of JSON files for one type of facility to test the model's analysis and understanding of them, e.g. a bunch of files named ID\_facilityname.json. One could look something like this:
```json
{
  "ActualProduction": 24.0,
  "Sale": "3rd Party Sales",
  "ProductionFacilitySize": 100.0,
  "Routes": [],
  "Relations": [],
  "VolumesTotal": {
    "Total": 0.0,
    "Product A": 0.0,
    "Product B": 0.0,
    "Product C": 0.0
  },
  "VolumesPerPeriod": {},
  "Commodity": "CommodityType",
  "Icon": "Producer",
  "Classification": "Not working with us",
  "Id": 7278,
  "Name": "Facility Name"
}
```
But I'm struggling to get the LLM to understand: even if I tell the model in the system prompt that each JSON file represents a facility and ask it "how many facilities are there", it just counts to 7 even though there are 232 files.
So, here goes the questions;
1) How should the system prompt be structured to make ollama understand the data better?
2) Do I need to use other tools to make this work better, e.g langchain or similar?
3) Are there any parameters that I need to adjust to make it work better?
Sorry for the NOOB questions, any ideas will be greatly appreciated!
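Not an answer to my own prompt questions, but one pattern that reportedly works well for aggregate questions like "how many facilities": don't ask the LLM to count uploaded files at all. Pre-compute the aggregates with a small script and feed the summary into the prompt (or expose it as a tool). A minimal sketch, assuming the JSON files sit in a `facilities/` folder:

```python
import json
from collections import Counter
from pathlib import Path

facilities = [json.loads(p.read_text()) for p in Path("facilities").glob("*.json")]

summary = {
    "facility_count": len(facilities),
    "by_icon": dict(Counter(f.get("Icon", "Unknown") for f in facilities)),
    "total_actual_production": sum(f.get("ActualProduction", 0.0) for f in facilities),
}

# Paste (or template) this summary into the system prompt so the model answers
# from exact numbers instead of guessing from a handful of retrieved chunks.
print(json.dumps(summary, indent=2))
```

RAG retrieval only ever hands the model a few chunks, which is likely why it "counts to 7": it genuinely only sees a handful of the documents.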
| 2025-04-24T09:05:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6nnz9/easy_rag_for_business_data/
|
SugarEnough9457
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6nnz9
| false | null |
t3_1k6nnz9
|
/r/LocalLLaMA/comments/1k6nnz9/easy_rag_for_business_data/
| false | false |
self
| 0 | null |
I benchmarked the Gemma 3 27b QAT models
| 146 |
I wanted to know what models performed the best, and it seemed like nobody had actual numbers for this information... so I ran the numbers myself.
I am running on llama.cpp v1.27.1 for the GGUFs, and LM Studio MLX v0.13.2 for the MLX model.
At first, I tried calculating perplexity. However, the PPL numbers kept on yielding really weird values from the PTB/wiki.test.raw corpus. The QAT models would generate numbers higher than the original BF16, and Bartowski's quant scored higher than the original QAT from google. I think the model is overfitting there, so it's not really a good metric.
So I decided to just use GPQA-main instead. It's a more biased benchmark in terms of topic, but I suspect that actually doesn't matter too much. We're comparing different quants of the same model, not different finetunes/models. In the latter case, we might expect different finetunes/models to perform better at, say, math but worse at coding/writing, to have more biology questions than physics in the training data, or to show other skewed performance. However, quantization is not so fine-grained; it simply truncates the lowest-value bits for each parameter, so the quality reduction/noise introduced should be more generalizable.
Here are the GPQA-main scores for the quants I tested:
| Model name | Score |
|---------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|
| mlx-community/gemma-3-27b-it-qat-4bit | 0.333 |
| bartowski/google_gemma-3-27b-it-qat-GGUF (Q4_0) | 0.352 |
| stduhpf/google-gemma-3-27b-it-qat-q4_0-gguf-small | 0.346 |
| Unquantized Gemma 3 27b (via Huggingface api) | 0.375 |
| 2025-04-24T09:12:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6nrl1/i_benchmarked_the_gemma_3_27b_qat_models/
|
jaxchang
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6nrl1
| false | null |
t3_1k6nrl1
|
/r/LocalLLaMA/comments/1k6nrl1/i_benchmarked_the_gemma_3_27b_qat_models/
| false | false |
self
| 146 | null |
GLM-4-32B Missile Command
| 30 |
I've tried telling GLM-4-32B to make a couple of games for me, Missile Command and a Dungeons game.
It doesn't work very well with Bartowski's quants, but it does with Matteogeniaccio's; I don't know if it makes any difference.
- GLM-4-32B-0414-F16-Q6_K.gguf (Matteogeniaccio)
[https://jsfiddle.net/dkaL7vh3/](https://jsfiddle.net/dkaL7vh3/)
- Bartowski Q6_K
[https://jsfiddle.net/5r1hztyx/](https://jsfiddle.net/5r1hztyx/)
With several tests, always with a single instruction (Make me a missile command game using html, css and javascript), Matteogeniaccio's quant always gets it right.
| 2025-04-24T09:19:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6nuo3/glm432b_missile_command/
|
Jarlsvanoid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6nuo3
| false | null |
t3_1k6nuo3
|
/r/LocalLLaMA/comments/1k6nuo3/glm432b_missile_command/
| false | false |
self
| 30 | null |
Vanished Details in Long Context
| 2 |
Hey folks,
Trying to get my local Gemma 3-27B (running on vLLM, got that sweet 61k context) to churn out really detailed meeting minutes from long call transcripts.
The structure and flow of the text are solid, but the model just loses details or summarizes stuff, even with prompts explicitly saying "get EVERYTHING, do NOT summarize!".
Weird part: It's great with details for topics discussed early in the transcript, but as the transcript goes on, details for later topics just vanish. Feels like "Lost in the Middle", but specifically for the level of detail.
Tried strong negative constraints and few-shot examples. Helps the format stick, but details still fade towards the end. Any prompt magic or local hacks to force consistent detail retention throughout the whole document? Really hoping to avoid chunking if possible.
Appreciate any advice!
| 2025-04-24T09:28:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6nzcy/vanished_details_in_long_context/
|
Schakuun
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6nzcy
| false | null |
t3_1k6nzcy
|
/r/LocalLLaMA/comments/1k6nzcy/vanished_details_in_long_context/
| false | false |
self
| 2 | null |
MCP, an easy explanation
| 48 |
When I tried looking up what an MCP is, I could only find tweets like “omg how do people not know what MCP is?!?”
So, in the spirit of not gatekeeping, here’s my understanding:
MCP stands for Model Context Protocol. The purpose of this protocol is to define a standardized and flexible way for people to build AI agents with.
MCP has two main parts:
The MCP Server & The MCP Client
The MCP Server is just a normal API that does whatever it is you want to do. The MCP client is just an LLM that knows your MCP server very well and can execute requests.
Let’s say you want to build an AI agent that gets data insights using natural language.
With MCP, your MCP server exposes different capabilities as endpoints… maybe /users to access user information and /transactions to get sales data.
Now, imagine a user asks the AI agent: "What was our total revenue last month?"
The LLM from the MCP client receives this natural language request. Based on its understanding of the available endpoints on your MCP server, it determines that "total revenue" relates to "transactions."
It then decides to call the /transactions endpoint on your MCP server to get the necessary data to answer the user's question.
If the user asked "How many new users did we get?", the LLM would instead decide to call the /users endpoint.
Let me know if I got that right or if you have any questions!
I’ve been learning more about agent protocols and post my takeaways on X @joshycodes. Happy to talk more if anyone’s curious!
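If my understanding above is roughly right, a minimal MCP server sketch with the official Python SDK could look like this; I'm assuming the FastMCP helper and a made-up `total_revenue` tool, so treat the names as illustrative:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics")

@mcp.tool()
def total_revenue(month: str) -> float:
    """Return total revenue for a month like '2025-03'."""
    # Placeholder: a real server would query your transactions store here.
    fake_db = {"2025-03": 125_000.0, "2025-04": 98_500.0}
    return fake_db.get(month, 0.0)

if __name__ == "__main__":
    mcp.run()  # an MCP client (the LLM side) connects and can call total_revenue
```

One nuance to my /transactions framing above: as I understand it, the server advertises typed tools over the protocol rather than plain REST endpoints, and the client LLM picks which tool to call based on the tool descriptions.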
| 2025-04-24T09:39:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6o4wj/mcp_an_easy_explanation/
|
SimplifyExtension
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6o4wj
| false | null |
t3_1k6o4wj
|
/r/LocalLLaMA/comments/1k6o4wj/mcp_an_easy_explanation/
| false | false |
self
| 48 | null |
ClosedAI
| 1 | 2025-04-24T09:40:22 |
Several-System1535
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6o5fr
| false | null |
t3_1k6o5fr
|
/r/LocalLLaMA/comments/1k6o5fr/closedai/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'E9SyjUcWVhaSRQw3PwfmRh5HymGXl8Dd_5Mvcn8tSQ0', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?width=108&crop=smart&auto=webp&s=7047d81a07f44068072f6c3cf518782719f353df', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?width=216&crop=smart&auto=webp&s=47797d91ab4713a534dde92948ebf71a760bd8a9', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?width=320&crop=smart&auto=webp&s=57bb1a4f27d8af4a1a15929ede1e73417e2c689e', 'width': 320}, {'height': 588, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?width=640&crop=smart&auto=webp&s=427242cd981dff353481a70d5e37395049ba1889', 'width': 640}, {'height': 882, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?width=960&crop=smart&auto=webp&s=42b2c3870d60c58e38d095c07db88ed7947e7147', 'width': 960}, {'height': 993, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?width=1080&crop=smart&auto=webp&s=e372d9aa119b4371c44cb4981d7db8a3e182ddb8', 'width': 1080}], 'source': {'height': 1177, 'url': 'https://preview.redd.it/dvudeo7p6rwe1.jpeg?auto=webp&s=e4a69960e80f1c506aeaeb894bc164e2e2e8c8da', 'width': 1280}, 'variants': {}}]}
|
|||
Are there any benchmarks for "Deep Research"-type solutions?
| 1 |
[removed]
| 2025-04-24T10:03:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6ohcj/are_there_any_benchmarks_for_deep_researchtype/
|
GptGptovich
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ohcj
| false | null |
t3_1k6ohcj
|
/r/LocalLLaMA/comments/1k6ohcj/are_there_any_benchmarks_for_deep_researchtype/
| false | false |
self
| 1 | null |
how to use o4-mini high in the new codex
| 1 |
I have been struggling to find the configuration setting needed to use o4-mini high in the codex terminal agent. Has anyone used o4-mini high successfully in codex? Launching starts with `codex -m o4-mini`, and nowhere does it mention "high".
| 2025-04-24T10:24:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6osl4/how_to_use_o4mini_high_in_the_new_codex/
|
manber571
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6osl4
| false | null |
t3_1k6osl4
|
/r/LocalLLaMA/comments/1k6osl4/how_to_use_o4mini_high_in_the_new_codex/
| false | false |
self
| 1 | null |
4x64 DDR5 - 256GB consumer grade build for LLMs?
| 31 |
Hi, I have recently discovered that there are 64GB single sticks of DDR5 available - unregistered, unbuffered, no ECC - so they should in theory be compatible with our consumer-grade gaming PCs.
I believe that's fairly new; I hadn't seen 64GB single sticks just a few months ago.
Both the AMD 7950X specs and most motherboards (with 4 DDR slots) only list 128GB as their max supported memory. I know for a fact that it's possible to go above this, as there are some Ryzen 7950X dedicated servers with 192GB (4x48GB) available.
Has anyone tried to run an LLM on something like this? It's only two memory channels, so bandwidth would be pretty bad compared to enterprise-grade builds with more channels, but still interesting.
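For a rough sense of what two channels buy you, here's the back-of-envelope math I'd use; the numbers are approximate assumptions (and 4 DIMMs often won't run at the rated speed), not measurements:

```python
# Dual-channel DDR5: 2 channels x 8 bytes per transfer x transfer rate (MT/s).
transfer_rate_mt_s = 6400
bandwidth_gb_s = 2 * 8 * transfer_rate_mt_s / 1000   # ~102 GB/s theoretical peak

# Token generation is roughly bound by (bytes of weights read per token) / bandwidth.
model_size_gb = 40   # e.g. a ~70B model at ~4.5 bits per weight
tokens_per_s = bandwidth_gb_s / model_size_gb

print(f"~{bandwidth_gb_s:.0f} GB/s peak, ~{tokens_per_s:.1f} tok/s upper bound")
```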
| 2025-04-24T10:41:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6p20z/4x64_ddr5_256gb_consumer_grade_build_for_llms/
|
scammer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6p20z
| false | null |
t3_1k6p20z
|
/r/LocalLLaMA/comments/1k6p20z/4x64_ddr5_256gb_consumer_grade_build_for_llms/
| false | false |
self
| 31 | null |
Local uncensored AI
| 1 |
[removed]
| 2025-04-24T10:44:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6p3qd/local_uncensored_ai/
|
FireWeener
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6p3qd
| false | null |
t3_1k6p3qd
|
/r/LocalLLaMA/comments/1k6p3qd/local_uncensored_ai/
| false | false |
self
| 1 | null |
GLM-4-32B Q5_K_S can fit in 24GB cards with decent context length
| 103 |
30K context, Q8 KV Cache, all layers in GPU, no offload, ollama 0.6.6
The "context efficiency" of this model is significantly better than that of Qwen2.5-32B. I can only get 8k context for Qwen when using the 32B-Q5\_K\_S gguf.
https://preview.redd.it/ix21gs9fnrwe1.png?width=1423&format=png&auto=webp&s=223f520b5bca53f0c5a171c1fbc03739ace47877
[https://huggingface.co/bartowski/THUDM\_GLM-4-32B-0414-GGUF/blob/main/THUDM\_GLM-4-32B-0414-Q5\_K\_S.gguf](https://huggingface.co/bartowski/THUDM_GLM-4-32B-0414-GGUF/blob/main/THUDM_GLM-4-32B-0414-Q5_K_S.gguf)
`set OLLAMA_FLASH_ATTENTION=1 && set OLLAMA_KV_CACHE_TYPE=q8_0 && ollama serve`
| 2025-04-24T11:20:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6pplv/glm432b_q5_k_s_can_fit_in_24gb_cards_with_decent/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6pplv
| false | null |
t3_1k6pplv
|
/r/LocalLLaMA/comments/1k6pplv/glm432b_q5_k_s_can_fit_in_24gb_cards_with_decent/
| false | false | 103 |
{'enabled': False, 'images': [{'id': '1dsrduASN30gTvRnqUSwbFZ2_-PQSB60TEoileTChzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=108&crop=smart&auto=webp&s=07a178b85d55ec32d797f982626a2bff5c10ae0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=216&crop=smart&auto=webp&s=395aa09ceb3ad8f59346f588b7110cf3dcb5bff8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=320&crop=smart&auto=webp&s=ac067de4bd178d306d759828861bf2ed8439a049', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=640&crop=smart&auto=webp&s=e09f35ea9f5809bb0108aaeb81cfcd9b214c0a72', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=960&crop=smart&auto=webp&s=de5afc9d441b131caf9ac1287900180316ea80a3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?width=1080&crop=smart&auto=webp&s=9cc76b3e21f9f89bca962a8fd7448bcc0f094b5e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3NYpVgamx1NXpydfb32BxQDBSawDgIlUbaanFyS12QE.jpg?auto=webp&s=c817bd1dcc21c6e7ba9455e6121c7ab1fa45f493', 'width': 1200}, 'variants': {}}]}
|
|
Best small model
| 5 |
My setup is a bit dated; I'm looking to run small models on a 6GB VRAM laptop. Is text-gen-UI still the best UI? Is Qwen a good way to go? Thanks!
| 2025-04-24T11:45:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6q54d/best_small_model/
|
Jshap623
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6q54d
| false | null |
t3_1k6q54d
|
/r/LocalLLaMA/comments/1k6q54d/best_small_model/
| false | false |
self
| 5 | null |
I don't like Cursor.
| 0 |
I tried using Cursor expecting it to be fundamentally different from just using ChatGPT, Claude, or any other LLM directly, but honestly, it feels exactly the same. Maybe my expectations were too high because of all the hype, but I had to see it for myself.
One thing that's really starting to annoy me is the constant push for subscriptions. Why can’t these tools let us use our own API keys instead? A lot of us already have credits topped up with these platforms, and it just feels unnecessary to pay for another subscription on top.
In fact, you know what works better? Just use something like [repo2txt.com](http://repo2txt.com) along with your preferred chatbot that you already pay for. This lets you feed your entire codebase, or just the parts you care about, directly into the LLM through the prompt. That way, you don’t have to babysit the prompt, and it gets all the context automatically. To me, it’s basically what Cursor is doing anyway.
And like any other LLM-based tool, Cursor makes the same mistakes. It doesn’t always get the job done. For example, I asked it to update the class on each paragraph tag in an HTML file (a simple copy-paste job I could have done myself). It still missed most of the `<p>` tags, so I had to go back and do it manually :(
| 2025-04-24T12:11:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6qmi5/i_dont_like_cursor/
|
faragbanda
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6qmi5
| false | null |
t3_1k6qmi5
|
/r/LocalLLaMA/comments/1k6qmi5/i_dont_like_cursor/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '52l9Atdti--m9H8uCWMS1DtvnvnCbykq0acWL-sxnrw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?width=108&crop=smart&auto=webp&s=c7d34bc7baf4b57a8452c6185f66714a2db7a37e', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?width=216&crop=smart&auto=webp&s=9abfa5cf3865066e851715359c53d64439365535', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?width=320&crop=smart&auto=webp&s=268cfebf7d115d5064f73a24101da75fa605c263', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?width=640&crop=smart&auto=webp&s=1d069ad89e034cd7255e69cb6df8ce7ccff8d10c', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?width=960&crop=smart&auto=webp&s=489ac21e2690a1148d84e0bc52992a8f223891b3', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?width=1080&crop=smart&auto=webp&s=72a6b9450da07fe499447c0ca621e137985741e3', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/c99M4lFJDzThz4tGafVJGfDWDT6QGWZw8ygkve-BXOI.jpg?auto=webp&s=2376fe0e8570cd80eb251690957d71c08f3e1b99', 'width': 1536}, 'variants': {}}]}
|
Forgive me Ollama, for I have sinned.
| 1 |
[removed]
| 2025-04-24T12:23:37 |
Immediate_Song4279
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6qus3
| false | null |
t3_1k6qus3
|
/r/LocalLLaMA/comments/1k6qus3/forgive_me_ollama_for_i_have_sinned/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'K6iI9S2jA4WpgK1eWGuYaGGvQe56vrIz09-SLGSAg5w', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/qmdq1werzrwe1.png?width=108&crop=smart&auto=webp&s=cfe2ec7c56f772c607160a5bbea211df2eb4be20', 'width': 108}, {'height': 19, 'url': 'https://preview.redd.it/qmdq1werzrwe1.png?width=216&crop=smart&auto=webp&s=ba0f926d45474ab17b54d0247ccfd4fd96561a4e', 'width': 216}, {'height': 28, 'url': 'https://preview.redd.it/qmdq1werzrwe1.png?width=320&crop=smart&auto=webp&s=25c729b401f3b19ffae8a661ab7b78256fc22842', 'width': 320}, {'height': 56, 'url': 'https://preview.redd.it/qmdq1werzrwe1.png?width=640&crop=smart&auto=webp&s=6ac91767e771baffaab093c3d73cfe5e9fc17f6c', 'width': 640}], 'source': {'height': 67, 'url': 'https://preview.redd.it/qmdq1werzrwe1.png?auto=webp&s=aaf6929f337475c9a04b0ab14ae7398123ba0585', 'width': 759}, 'variants': {}}]}
|
||
My first time using TinyLLaMa but there is one problem
| 1 |
[removed]
| 2025-04-24T12:38:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6r5sg/my_first_time_using_tinyllama_but_there_is_one/
|
the_stargazing_boy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6r5sg
| false | null |
t3_1k6r5sg
|
/r/LocalLLaMA/comments/1k6r5sg/my_first_time_using_tinyllama_but_there_is_one/
| false | false |
self
| 1 | null |
Is the future of coding agents self-learning LLMs using KGs to shape their reward functions?
| 3 |
Current coding agents (Copilot, etc.) are smart context-fetchers, but they don't really learn from our specific codebases; they always end up acting like junior devs.
**But what if they did?**
Imagine an LLM agent using Reinforcement Learning (RL). It tries tasks, gets feedback (tests pass/fail, etc.), and improves.
**The hard part? Rewarding "good" code.**
This is where Knowledge Graphs (KGs) could play a fascinating role, specifically in shaping the RL reward signal. Instead of just using KGs to retrieve context before generation, what if we used them afterward to evaluate the output?
* Example: The KG contains project standards, known anti-patterns, desired architectural principles, or even common bug categories specific to the codebase.
* Reward Shaping: The agent gets:
* Positive Reward: If its generated code passes tests AND adheres to architectural patterns defined in the KG.
* Negative Reward: If its code introduces anti-patterns listed in the KG, violates dependency rules, or uses deprecated functions documented there.
Basically, the agent learns to write code that not only works but also fits a project's specific rules and best practices.
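A rough sketch of what that reward shaping could look like, with the test harness and KG client left as hypothetical stubs and the weights picked arbitrarily:

```python
# Rough sketch of KG-shaped reward for an RL coding agent.
# `run_tests` and `kg` are hypothetical stand-ins for a real test harness
# and a real knowledge-graph client; the weights are arbitrary.
from dataclasses import dataclass

@dataclass
class KGFindings:
    anti_patterns: int       # known bad idioms for this codebase
    deprecated_calls: int    # functions the KG marks as deprecated
    follows_architecture: bool

def reward(generated_code: str, run_tests, kg) -> float:
    score = 0.0
    if run_tests(generated_code):            # hard requirement: the code works
        score += 1.0
    findings: KGFindings = kg.evaluate(generated_code)
    if findings.follows_architecture:        # positive shaping from the KG
        score += 0.5
    score -= 0.3 * findings.anti_patterns    # negative shaping from the KG
    score -= 0.2 * findings.deprecated_calls
    return score
```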
**Is this the path forward?**
* Is KG-driven reward the key to truly adaptive coding agents?
* Is it worth the massive complexity (KG building, RL tuning)?
* Better ways to achieve self-learning in code? What's most practical?
Thoughts? Is self-learning the next big thing, and if so, how are we achieving it?
| 2025-04-24T12:47:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6rbxe/is_the_future_of_coding_agents_selflearning_llms/
|
juanviera23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6rbxe
| false | null |
t3_1k6rbxe
|
/r/LocalLLaMA/comments/1k6rbxe/is_the_future_of_coding_agents_selflearning_llms/
| false | false |
self
| 3 | null |
Just vibe coded a fully functional Flappy Bird style game that you can play on Reddit. The era of LLMs is truly here
| 0 |
[https://www.reddit.com/r/RedditGames/](https://www.reddit.com/r/RedditGames/)
| 2025-04-24T12:56:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6rix8/just_vibe_coded_a_fully_functional_flappy_bird/
|
thejohnnyr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6rix8
| false | null |
t3_1k6rix8
|
/r/LocalLLaMA/comments/1k6rix8/just_vibe_coded_a_fully_functional_flappy_bird/
| false | false |
self
| 0 | null |
Curious about O3/O4 architecture: Are they LLMs with orchestration layers?
| 0 |
I've been observing OpenAI's O3 and O4 models, particularly how they handle tool usage, and I have an interesting theory I'd like to get the community's thoughts on.
When these models use tools, they seem to pause, process the tool's response, and then continue - behavior that's quite different from traditional LLMs that generate text in a single pass. This makes me wonder if what we're seeing is actually a sophisticated orchestration system built around a base LLM (perhaps similar to GPT-4o).
My theory is that rather than a single model doing everything in one pass, the system might work something like:
1. Base LLM determines a tool is needed and generates the call
2. System pauses generation, executes the tool call
3. Tool results are fed back into the LLM with context about the original question
4. LLM continues generating based on this new input
5. This all appears seamless to the user
This would be similar to agent frameworks like LangChain or AutoGPT but more tightly integrated and refined.
This isn't meant as criticism - it would actually be an impressive engineering feat to make this work so smoothly. It just seems like a more practical approach than solving the "reasoning with tools" problem entirely within a single forward pass of a model.
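If that theory is right, the core of it could be as simple as a loop like the sketch below; `call_model` and `run_tool` are hypothetical stubs for illustration, not OpenAI's actual internals:

```python
# Illustrative orchestration loop around a base LLM with tool use.
# `call_model` returns either plain text or a structured tool request;
# both it and `run_tool` are hypothetical stubs, not a real API.
def answer(question: str, call_model, run_tool, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_model(messages)           # step 1: model decides
        if reply.get("tool_call") is None:
            return reply["content"]            # no tool needed: done
        result = run_tool(reply["tool_call"])  # steps 2-3: pause, execute
        messages.append({"role": "assistant", "content": "", "tool_call": reply["tool_call"]})
        messages.append({"role": "tool", "content": result})  # step 4: feed back
    return "Gave up after too many tool calls."
```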
Has anyone with more technical knowledge looked into this? Are there telltale signs that would confirm or disprove this architecture?
| 2025-04-24T13:08:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6rrvj/curious_about_o3o4_architecture_are_they_llms/
|
PacketRacket
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6rrvj
| false | null |
t3_1k6rrvj
|
/r/LocalLLaMA/comments/1k6rrvj/curious_about_o3o4_architecture_are_they_llms/
| false | false |
self
| 0 | null |
Gemma 3 27B: What is the maximum context window you have been able to reach?
| 1 |
[removed]
| 2025-04-24T13:43:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6sjyr/gemma_3_27b_quelle_est_la_fenêtre_contextuelle/
|
sablier12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6sjyr
| false | null |
t3_1k6sjyr
|
/r/LocalLLaMA/comments/1k6sjyr/gemma_3_27b_quelle_est_la_fenêtre_contextuelle/
| false | false |
self
| 1 | null |
Behavior Without Memory: Stabilizing LLM Identity at Runtime
| 1 |
[removed]
| 2025-04-24T13:50:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6spjs/behavior_without_memory_stabilizing_llm_identity/
|
Robin898989
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6spjs
| false | null |
t3_1k6spjs
|
/r/LocalLLaMA/comments/1k6spjs/behavior_without_memory_stabilizing_llm_identity/
| false | false |
self
| 1 | null |
Experiences with open deep research and local LLMs
| 2 |
Has anyone had good results with open deep research implementations using local LLMs?
I am aware of at least several open deep research implementations:
* [https://github.com/langchain-ai/local-deep-researcher](https://github.com/langchain-ai/local-deep-researcher) This is the only one I am aware of that seems to have been tested on local LLMs at all. My experience has been hit or miss, with some queries unexpectedly returning an empty string as the running summary using deepseek-r1:8b.
* [https://github.com/langchain-ai/open\_deep\_research](https://github.com/langchain-ai/open_deep_research) Yes, this seems to be a different but very similar project from langchain. It does not seem to be intended for local LLMs.
* [https://github.com/huggingface/smolagents/tree/main/examples/open\_deep\_research](https://github.com/huggingface/smolagents/tree/main/examples/open_deep_research) I also haven't tried this, but smolagents seems like it is mostly geared towards commercial LLMs.
| 2025-04-24T13:58:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6swa7/experiences_with_open_deep_research_and_local_llms/
|
edmcman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6swa7
| false | null |
t3_1k6swa7
|
/r/LocalLLaMA/comments/1k6swa7/experiences_with_open_deep_research_and_local_llms/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?width=108&crop=smart&auto=webp&s=09bd8e3b6d25cc8c9fcf0f95d4af953b6bdb45f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?width=216&crop=smart&auto=webp&s=bdc134c220c7272b34c025896b306bf00ecb768c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?width=320&crop=smart&auto=webp&s=f6c7d1bce409eb65bf5aaf310c18d5b0193e3dab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?width=640&crop=smart&auto=webp&s=c36f6e98b6889f7a62c294ecdbcc4a2af01df7b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?width=960&crop=smart&auto=webp&s=6d8cdb9a8f7cec02d7edc4fea242b6183acee476', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?width=1080&crop=smart&auto=webp&s=42de16f5738fb134428591d1e4d2c621210c9c8f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EKgKSjI1K9a7uYjT9dta19cHE7fud4Zz-oj0XrQgdKg.png?auto=webp&s=5527561e6b251a61a50384956a9b432a2cc41b0f', 'width': 1200}, 'variants': {}}]}
|
No memory in LLMs hosted via LocalAI
| 0 |
Hello again. I posted yesterday looking for a model to help me with the tasks associated with writing a story like worldbuilding and such, but now I have a different problem: the model I found and spent about 3 hours feeding info to about my story, characters, setting, etc, now thinks my main character and the city where the story is set are planets from Brandon Sanderson's Stormlight Archive. 🤦♂️
I'm running what are as far as I know (since I just downloaded/installed them yesterday) the latest versions of LocalAI and DockerDesktop in Win10, and it doesn't even appear to have memory between sessions if I don't shut down the PC. I've tried with the models that it calls gpt4/gpt-4o (I forget which actual models they are, but I'm using the cuda 12 versions), and also a version of gemma that I installed myself through LocalAI. In my testing this morning I clicked on chat next to the model name, gave it a short blurb about my story, asked it to remember, went back to the localhost homepage, clicked chat again, and asked it who <character's name> is, and all 3 text-generation models failed to remember, even though I hadn't shut down the container, DD, or my PC or even closed the localhost webpage between sessions.
I dunno if I'm doing something wrong, if there's some configuration option I need to set, or if LocalAI/these models just aren't designed to remember when self-hosted (I have DD/LocalAI installed on a 2TB SSD with more than half its space free, so it's not a disk space issue). But if I can't figure out how to make them remember stuff between sessions, this is all wasted effort and I'll just have to go pay for ChatGPT or something, which I'd really rather not do.
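From what I've read, these chat endpoints are stateless and the client is supposed to resend the whole history on every turn, so maybe the web chat just isn't doing that across sessions? A minimal sketch of what I mean, against LocalAI's OpenAI-compatible endpoint (the port and model name are placeholders for whatever is actually being served):

```python
# Minimal sketch: keep the "memory" client-side by resending the whole
# conversation to LocalAI's OpenAI-compatible endpoint on each turn.
# The URL and model name are placeholders for whatever LocalAI is serving.
import requests

history = [{"role": "system", "content": "You are helping me plan a story."}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    r = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={"model": "gemma", "messages": history},
        timeout=120,
    )
    reply = r.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# Saving `history` to a file between sessions would give the cross-session
# memory that the web chat doesn't seem to provide.
```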
| 2025-04-24T14:14:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6ta4s/no_memory_in_llms_hosted_via_localai/
|
libra00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ta4s
| false | null |
t3_1k6ta4s
|
/r/LocalLLaMA/comments/1k6ta4s/no_memory_in_llms_hosted_via_localai/
| false | false |
self
| 0 | null |
Does GLM have vision?
| 3 |
I noticed on the GitHub page they claim GLM is multimodal, but couldn't find anything on its vision capabilities
| 2025-04-24T14:20:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6tf5n/does_glm_have_vision/
|
maxwell321
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6tf5n
| false | null |
t3_1k6tf5n
|
/r/LocalLLaMA/comments/1k6tf5n/does_glm_have_vision/
| false | false |
self
| 3 | null |
images-text-to-image model with example code
| 1 |
I'm looking for a small local model (~8B or smaller) that accepts a handful of small photos and a textual instruction on how to transform them into an output image. Basically, finding a common shape across the inputs and "drawing" that pattern on the input. I need multiple input images because there's some variation to capture, but also to help the model discern the shape from the background (as it's not always obvious).
Does that exist? Is that task even feasible with current models?
I know it's possible to generate an image from another with a prompt.
But what's a good method and model for this? I was thinking about:
a. an image-to-image model, but they usually accept only one input image, so I'd have to create a composite input image from my samples (a quick compositing sketch is below, after this list). And I'm not sure the model is able to understand it's a composite image.
b. a multimodal model that accepts multiple images. I've used VLMs before, including those that take multiple images (or video), but I couldn't find a model with an example of code that accept n images + text and returns an image. Is that possible with something like Janus-Pro? I couldn't find example code for that use case. Moreover I have the impression that the visual properties are projected to embeddings during the encoding so the decoding into an image may not preserve them.
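For option (a), a minimal compositing sketch with Pillow, assuming the samples sit in a local folder (paths and tile size are placeholders):

```python
# Minimal sketch for option (a): tile a handful of sample photos into one
# composite image that a single-image img2img model can accept.
# Paths and tile size are placeholders.
from pathlib import Path
from PIL import Image

paths = sorted(Path("samples").glob("*.png"))
tile = 256
canvas = Image.new("RGB", (tile * len(paths), tile), "white")

for i, p in enumerate(paths):
    img = Image.open(p).convert("RGB").resize((tile, tile))
    canvas.paste(img, (i * tile, 0))

canvas.save("composite.png")
```

Whether a single-image model actually treats the tiles as separate examples rather than one scene is exactly the open question, so this only covers the plumbing.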
| 2025-04-24T14:24:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6tile/imagestexttoimage_model_with_example_code/
|
gnddh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6tile
| false | null |
t3_1k6tile
|
/r/LocalLLaMA/comments/1k6tile/imagestexttoimage_model_with_example_code/
| false | false |
self
| 1 | null |
Why are the best models from the benchmarks not recommended here?
| 0 |
Hi!
Since I've been here, when someone asks which model is best for their configuration (x GB of GPU VRAM), the answer is usually one of the classic current models like Llama or Qwen.
Personally, when I was looking at the beginning, I referred to this ranking of the best open source models available on hugging face: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/
I have the impression that we can find the best state-of-the-art open-source model that meets the demand there, right? So why aren't this link, and the models on it, suggested more often?
Please enlighten me on this subject, because as everyone here knows, choosing the appropriate model is what 90% of the requests here are about lol
| 2025-04-24T14:36:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6tst0/why_do_best_models_from_benchmark_are_not/
|
ugo-7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6tst0
| false | null |
t3_1k6tst0
|
/r/LocalLLaMA/comments/1k6tst0/why_do_best_models_from_benchmark_are_not/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=216&crop=smart&auto=webp&s=a7ed77a5bcb5c05a85158f3a1b571f42fd279b54', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=320&crop=smart&auto=webp&s=e1aad0a62a8df048c4a69c52fb7d8827e86eb72d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=640&crop=smart&auto=webp&s=a0102f481e5865cd18aca9fa189cd8ebdbdf4cb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=960&crop=smart&auto=webp&s=3c3aecd129519b5fe239051fb85f3d4f19afb870', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=1080&crop=smart&auto=webp&s=50690e3e1beedbfa3861a5267ca4b23bcb1615b2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?auto=webp&s=cfa54375edccef47455e0c730bb4ee0851104070', 'width': 1200}, 'variants': {}}]}
|
Odd Results with Llama-4 Scout Based on Prompt Structure
| 1 |
I pulled and rebuilt the llama.cpp repo this morning, and I downloaded unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF, which is less than a day old.
I have a technical document that is only about 8K tokens. What I notice is that when I do:
List all the acronyms in this document:
<pasted document>
I get terrible results. But if I do:
<pasted document>
List all the acronyms in this document.
I get perfect results. Why would this be? The behavior is the same with temp=0.8 or 0.2, and adding some hints in the system prompt makes no difference.
| 2025-04-24T14:54:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6u91y/odd_results_with_llama4_scout_based_on_prompt/
|
Simusid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6u91y
| false | null |
t3_1k6u91y
|
/r/LocalLLaMA/comments/1k6u91y/odd_results_with_llama4_scout_based_on_prompt/
| false | false |
self
| 1 | null |
GitHub - Abyss-c0re/deepshell: Your self-hosted AI assistant. Interactive Linux Shell, Files and Folders analysis. Powered by Ollama.
| 1 | 2025-04-24T14:55:05 |
https://github.com/Abyss-c0re/deepshell
|
Agreeable_Net6716
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6u96o
| false | null |
t3_1k6u96o
|
/r/LocalLLaMA/comments/1k6u96o/github_abyssc0redeepshell_your_selfhosted_ai/
| false | false |
default
| 1 | null |
|
Best Model for my Project
| 0 |
Hi community,
My team and I are developing a project where we feed in a description of a crime and the model predicts its nature.
E.g.:
Input - His jewelry was taken by thieves in the early hours of Monday
Output - Robbery
How can I build this model just by feeding it definitions of crimes like robbery, forgery, or murder?
Please help me with this
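One cheap baseline to compare against is zero-shot classification, where the crime types are just candidate labels. A minimal sketch with the Hugging Face `transformers` pipeline (the model choice is only an example):

```python
# Minimal sketch of a zero-shot baseline: treat the crime types as
# candidate labels and let an NLI model score them.
# The model name is just an example; no task-specific training is needed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "His jewelry was taken by thieves in the early hours of Monday."
labels = ["robbery", "forgery", "murder", "assault"]

result = classifier(text, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring label, e.g. "robbery"
```

Feeding the actual definitions to an instruction-tuned LLM in the prompt is the other obvious route; the zero-shot pipeline is mostly useful as a baseline to beat.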
| 2025-04-24T14:59:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6ucpe/best_model_for_my_project/
|
Turbulent-Rip3896
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ucpe
| false | null |
t3_1k6ucpe
|
/r/LocalLLaMA/comments/1k6ucpe/best_model_for_my_project/
| false | false |
self
| 0 | null |
My future depends on this project ???
| 0 |
Need advice.
I want to check the quality of written feedback/comments given by managers. (Can't use ChatGPT - the company doesn't want that.)
I have all the feedback for all the employees from the past 2 years.
1. How do I choose the data or parameters on which the LLM should be trained? (Example: length - employees who got higher ratings generally receive longer, more detailed feedback.) Similarly, I want other parameters I can check and then quantify if possible.
2. What kind of frameworks/libraries do these text-analysis tools use? (I want to create my own libraries around certain themes and then train an LLM on them.)
Anyone who has worked on something similar.
Any source to read.
Any software I can use.
Any approach to quantify the quality of comments. It would mean a lot if you guys could give some good ideas.
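For the quantification part, a minimal sketch of the kind of features point 1 describes, using only the standard library (the feature list and the regex are arbitrary starting points, not a vetted rubric):

```python
# Minimal sketch: turn a free-text manager comment into a few numeric
# quality features. The feature set and the "specifics" regex are arbitrary
# examples, meant as a starting point before involving an LLM at all.
import re

def comment_features(comment: str) -> dict:
    words = re.findall(r"[A-Za-z']+", comment)
    sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
    specifics = re.findall(r"\b(project|deadline|client|q[1-4]|\d{4})\b", comment.lower())
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "mentions_specifics": len(specifics),  # crude proxy for concreteness
    }

print(comment_features("Great year. Delivered the Q3 client project ahead of deadline."))
```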
| 2025-04-24T15:05:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6uii4/my_future_depends_on_this_project/
|
Sandwichboy2002
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6uii4
| false | null |
t3_1k6uii4
|
/r/LocalLLaMA/comments/1k6uii4/my_future_depends_on_this_project/
| false | false |
self
| 0 | null |
Has anyone used Prolog as a reasoning engine to guide retrieval in a RAG system, similar to how knowledge graphs are used ?
| 2 |
[removed]
| 2025-04-24T15:06:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6ujui/has_anyone_used_prolog_as_a_reasoning_engine_to/
|
Lost_Sleep9587
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ujui
| false | null |
t3_1k6ujui
|
/r/LocalLLaMA/comments/1k6ujui/has_anyone_used_prolog_as_a_reasoning_engine_to/
| false | false |
self
| 2 | null |
Updates for FreeOllama, also updates for the FreeLeak series
| 20 |
Previously, we discovered that some Ollama servers were password-protected. To address this, we enhanced our server scanner to confirm the actual availability of all accessible servers. Additionally, we developed FreeChat as a quick verification tool for this purpose.
[https://chat.freeleakhub.com/](https://chat.freeleakhub.com/)
[https://ollama.freeleakhub.com/](https://ollama.freeleakhub.com/)
[https://www.freeleakhub.com/](https://www.freeleakhub.com/)
| 2025-04-24T15:07:12 |
https://www.freeollama.com/
|
zxbsmk
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6uk5n
| false | null |
t3_1k6uk5n
|
/r/LocalLLaMA/comments/1k6uk5n/updates_for_freeollama_also_updates_for_the/
| false | false | 20 | null |
|
Deepseek breach leaks sensitive data
| 0 |
An interesting read about the recent deepseek breach.
> The vulnerabilities discovered in DeepSeek reveal a disturbing pattern in how organizations approach AI security. Wiz Research uncovered a publicly accessible ClickHouse database belonging to DeepSeek, containing more than a million lines of log streams with highly sensitive information. This exposed data included chat history, API keys and secrets, back-end details, and operational metadata.
| 2025-04-24T15:20:58 |
https://www.darkreading.com/cyberattacks-data-breaches/deepseek-breach-opens-floodgates-dark-web
|
throwawayacc201711
|
darkreading.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6uw2q
| false | null |
t3_1k6uw2q
|
/r/LocalLLaMA/comments/1k6uw2q/deepseek_breach_leaks_sensitive_data/
| false | false |
default
| 0 | null |
quiz yourself with llamatest
| 12 |
Made this to help myself study.
Type in a topic, or paste in text, and llamatest will generate questions and answers.
It tends to get a little wordy in the answers, but I am working on better prompting.
Just a single HTML page; it requires a running llama-server from llama.cpp.
I find it useful, hope you do too.
[https://github.com/openconstruct/llamatest](https://github.com/openconstruct/llamatest)
| 2025-04-24T15:43:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6vg1e/quiz_yourself_with_llamatest/
|
thebadslime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6vg1e
| false | null |
t3_1k6vg1e
|
/r/LocalLLaMA/comments/1k6vg1e/quiz_yourself_with_llamatest/
| false | false |
self
| 12 |
{'enabled': False, 'images': [{'id': 'T7w_6q3U9yxilqLmSivrwfop_4xsFF7O5BsHyaAadWc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?width=108&crop=smart&auto=webp&s=222e05e1080d68904d87becb01c5dae721314cd8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?width=216&crop=smart&auto=webp&s=c0cfb2b9d84543b812750a8b1be7ed121aa69dc9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?width=320&crop=smart&auto=webp&s=ecfa32316beaf655bb9cf0d6e0d2bb85afa66f26', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?width=640&crop=smart&auto=webp&s=933465621fad694bd8166a94a1a45591cde12b23', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?width=960&crop=smart&auto=webp&s=5a7f8c74bc3b714973eb2f19f96813e01970cdce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?width=1080&crop=smart&auto=webp&s=1ab93b85725f967e13ac8097a14cd8fc3cdcc039', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JhR-wa4KlORvRtXwQEab8OSEqwX9PUOvw_50X40QFjM.jpg?auto=webp&s=f015b230614f9cefda4ea36d783b49a7e96f8f0d', 'width': 1200}, 'variants': {}}]}
|