title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Does Google still utilize Python for function calls? (Google Code Assist prompt maybe?)
| 1 |
[removed]
| 2025-04-01T19:24:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp4c6r/does_google_still_utilize_python_for_function/
|
Alert_Anything_6325
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp4c6r
| false | null |
t3_1jp4c6r
|
/r/LocalLLaMA/comments/1jp4c6r/does_google_still_utilize_python_for_function/
| false | false |
self
| 1 | null |
Is a multimodal focused release from openai the best for us?
| 32 |
I feel like with the exception of Qwen 2.5 7b(11b) audio, we have seen almost no real progress in multimodality so far in open models.
It seems gippty 4o mini can now do advanced voice mode as well.
They keep saying it's a model that can run on your hardware, and 4o mini is estimated to be less than a 20B model, considering how badly it gets mogged by mistral smol and others.
It would be great if we could get a shittier 4o mini but with all the features intact, like audio and image output. (A llamalover can dream)
| 2025-04-01T20:08:29 |
AryanEmbered
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp5g2l
| false | null |
t3_1jp5g2l
|
/r/LocalLLaMA/comments/1jp5g2l/is_a_multimodal_focused_release_from_openai_the/
| false | false | 32 |
{'enabled': True, 'images': [{'id': 'm0ZidmxgNGIA-eQDeqmtvKNJFEqYS-PbAsbG1srysno', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?width=108&crop=smart&auto=webp&s=4de78b29c51b8b96c2221d9ac09a6dad06272f40', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?width=216&crop=smart&auto=webp&s=bf1ede8c3ec1cce597bb5de477ce1064820b5ae8', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?width=320&crop=smart&auto=webp&s=13cfa89aaaa518d360ae4640950359a163d045f5', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?width=640&crop=smart&auto=webp&s=bf8159e2e72f93f2c1e7edfd8a2bb4a73c81275c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?width=960&crop=smart&auto=webp&s=e25c9a81b7d30455f38678e2fe4adea65d4562a9', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?width=1080&crop=smart&auto=webp&s=e2b04d14386de3d896fa0ed6798f46b0b42fab7c', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/w31a75fy5ase1.jpeg?auto=webp&s=f68b4c7a2658ae1d5b662aecbfd39b95ac3f24d8', 'width': 1080}, 'variants': {}}]}
|
||
Mixed GPU Build - Questions
| 1 |
[removed]
| 2025-04-01T20:09:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp5hef/mixed_gpu_build_questions/
|
Schneller52
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp5hef
| false | null |
t3_1jp5hef
|
/r/LocalLLaMA/comments/1jp5hef/mixed_gpu_build_questions/
| false | false |
self
| 1 | null |
The One Abstraction You Need for Production-Ready LLMs (Request Gateway)
| 1 |
[removed]
| 2025-04-01T20:13:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp5kh8/the_one_abstraction_you_need_for_productionready/
|
phoneixAdi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp5kh8
| false | null |
t3_1jp5kh8
|
/r/LocalLLaMA/comments/1jp5kh8/the_one_abstraction_you_need_for_productionready/
| false | false |
self
| 1 | null |
Different LLM models make different sounds from the GPU when doing inference
| 158 | 2025-04-01T20:28:29 |
https://bsky.app/profile/victor.earth/post/3llrphluwb22p
|
vibjelo
|
bsky.app
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp5y5a
| false | null |
t3_1jp5y5a
|
/r/LocalLLaMA/comments/1jp5y5a/different_llm_models_make_different_sounds_from/
| false | false |
default
| 158 | null |
|
Rich's slogans for AI research (revised 2006)
| 1 |
[deleted]
| 2025-04-01T20:39:11 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp67qk
| false | null |
t3_1jp67qk
|
/r/LocalLLaMA/comments/1jp67qk/richs_slogans_for_ai_research_revised_2006/
| false | false |
default
| 1 | null |
||
Rich Sutton's slogans for AI research (revised 2006)
| 0 | 2025-04-01T20:39:48 |
https://x.com/RichardSSutton/status/1906837186319942114
|
phoneixAdi
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp68a3
| false | null |
t3_1jp68a3
|
/r/LocalLLaMA/comments/1jp68a3/rich_suttons_slogans_for_ai_research_revised_2006/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'FDEUKTKjVI1NbI5V5_E09GpTyNUrnOszmEfq2DueFcI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yNiAeiT8KXa3kHI_mSqeQH_nfrWkNJ2DJJiqLSwck3s.jpg?width=108&crop=smart&auto=webp&s=35a7c970ae3c09f437a0f2a11b253039f3993f7a', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/yNiAeiT8KXa3kHI_mSqeQH_nfrWkNJ2DJJiqLSwck3s.jpg?auto=webp&s=d77dfc0dfb7661383a5df2feb3717b606c796c14', 'width': 200}, 'variants': {}}]}
|
||
Can someone ELI5 how I'd know which model is right when I've found the desired model?
| 3 |
I'm a data scientist for work, and I'm finally getting around to experimenting with local LLMs in LM Studio and Msty AI, just for fun, SFW purposes. However, I'm unsure which model version I need once I've found one. My data science work is mostly NLP, regression, and model building. I have zero experience with building out LLMs like this, but I did read a pretty thorough guide.
| 2025-04-01T20:51:41 |
intimate_sniffer69
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp6iy1
| false | null |
t3_1jp6iy1
|
/r/LocalLLaMA/comments/1jp6iy1/can_someone_eli5_how_id_know_which_model_is_right/
| false | false | 3 |
{'enabled': True, 'images': [{'id': 'he5wMuuJ0JV9LdOI8zX8aauIDBN6nuMJHdnOw3hzZe0', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?width=108&crop=smart&auto=webp&s=9dfbc3e57f3a4d6a123eb734848c0ca0fe38b633', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?width=216&crop=smart&auto=webp&s=d10757b439a3a47928f692bdf6c33206c33986e2', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?width=320&crop=smart&auto=webp&s=a343abc24c778b9366373729eaac4304f0f731f4', 'width': 320}, {'height': 362, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?width=640&crop=smart&auto=webp&s=ad37a6e66945133765d6f590a7945f097ecc6bee', 'width': 640}, {'height': 544, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?width=960&crop=smart&auto=webp&s=e4970eb679e057b04524d8d48facbc333631aa84', 'width': 960}, {'height': 612, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?width=1080&crop=smart&auto=webp&s=ed8bb8f0bdf388999ac2d6985d6710d2dec55cdb', 'width': 1080}], 'source': {'height': 734, 'url': 'https://preview.redd.it/iuruxe1cdase1.png?auto=webp&s=50fb4faef411904dab3e05bd5304e5d7bafc9f16', 'width': 1295}, 'variants': {}}]}
|
||
Dou (道) updated with LM Studio (and Ollama) support
| 10 | 2025-04-01T21:02:13 |
shokuninstudio
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp6sbn
| false | null |
t3_1jp6sbn
|
/r/LocalLLaMA/comments/1jp6sbn/dou_道_updated_with_lm_studio_and_ollama_support/
| false | false | 10 |
{'enabled': True, 'images': [{'id': 'Gi9jOxXvxAMZMvPv-5IswwLU3y75RqIqkvAt2TZ_FAA', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?width=108&crop=smart&auto=webp&s=3f37b6084b062daae364e82db95bac4c44e46c8b', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?width=216&crop=smart&auto=webp&s=29f6cb926e1f06d9f223dc161f8f79444b3e0252', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?width=320&crop=smart&auto=webp&s=c697fdb34d65f8c01b9f8b3d2e240c98a4eafc72', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?width=640&crop=smart&auto=webp&s=6961bce4f75454ecad32ef613c80eef4630f8da9', 'width': 640}, {'height': 681, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?width=960&crop=smart&auto=webp&s=7333251c779f959c8a8d7bdd2c3c59d682e20ebf', 'width': 960}, {'height': 766, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?width=1080&crop=smart&auto=webp&s=5f9dfd246c96658020fd58c76e18d381156543cb', 'width': 1080}], 'source': {'height': 2686, 'url': 'https://preview.redd.it/1i6mjuscfase1.png?auto=webp&s=e5334e605ebf4b7e4a69630a8587816b422d62c2', 'width': 3784}, 'variants': {}}]}
|
|||
LM Studio gets stuck loading at 97%?
| 0 |
Nothing special here, just a fresh install of LM Studio on Windows 11 and a model called Stheno v3.2, which downloaded in a minute flat. But it won't load; it hangs at 97% and never finishes. What could cause this to happen?
| 2025-04-01T21:08:26 |
intimate_sniffer69
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp6xsn
| false | null |
t3_1jp6xsn
|
/r/LocalLLaMA/comments/1jp6xsn/lm_studio_gets_stuck_loading_at_97/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'GPpDTAvH9DHQBp5SmIu_qgQ5nN7CVC60JM2zyuyw2vw', 'resolutions': [{'height': 8, 'url': 'https://preview.redd.it/2e3d43qggase1.png?width=108&crop=smart&auto=webp&s=cc88d3842bf985768fc7f58a5abe154720cda33f', 'width': 108}, {'height': 17, 'url': 'https://preview.redd.it/2e3d43qggase1.png?width=216&crop=smart&auto=webp&s=370e75668fa89f49ca6ddd16b7c026d20afbefaf', 'width': 216}, {'height': 26, 'url': 'https://preview.redd.it/2e3d43qggase1.png?width=320&crop=smart&auto=webp&s=e1a989673ccf05c6e11687f6ef2c7fc5c6ad53c1', 'width': 320}], 'source': {'height': 42, 'url': 'https://preview.redd.it/2e3d43qggase1.png?auto=webp&s=073a4cf83f84d4635b41ee57853bf8fe8c346d1c', 'width': 506}, 'variants': {}}]}
|
||
Arch-Function-Chat (1B/3B/7B) - Device friendly, family of fast LLMs for function calling scenarios now trained to chat.
| 40 |
Based on feedback from users and the developer community that used Arch-Function (our previous-gen model), I am excited to share our latest work: [Arch-Function-Chat](https://huggingface.co/katanemo/Arch-Function-Chat-3B), a collection of fast, device-friendly LLMs that achieve performance on par with GPT-4 on function calling, now trained to chat.
These LLMs have three additional training objectives.
1. Refine and clarify the user request. This means asking for required function parameters and clarifying ambiguous input (e.g., "Transfer $500" without specifying accounts prompts follow-ups for "transfer from" and "transfer to").
2. Accurately maintain context in two specific scenarios:
1. Progressive information disclosure such as in multi-turn conversations where information is revealed gradually (i.e., the model asks info of multiple parameters and the user only answers one or two instead of all the info)
2. Context switch where the model must infer missing parameters from context (e.g., "Check the weather" should prompt for location if not provided) and maintains context between turns (e.g., "What about tomorrow?" after a weather query but still in the middle of clarification)
3. Respond to the user based on executed tool results. For common function calling scenarios where the response of the execution is all that's needed to complete the user request, Arch-Function-Chat can interpret and respond to the user via chat. Note: parallel and multiple function calling were already supported, so if the model needs to respond based on multiple tool calls it still can.
Of course the 3B model will now be the primary LLM used in https://github.com/katanemo/archgw. Hope you all like the work 🙏. Happy building!
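As an illustration of the three objectives above, here is roughly what such a clarification flow looks like as a message list (a minimal sketch; the tool schema, roles, and wording are illustrative assumptions, not the model's actual prompt format):
```
# Illustrative only: hypothetical tool schema and message roles,
# not Arch-Function-Chat's actual prompt format.
transfer_tool = {
    "name": "transfer_funds",
    "parameters": {"amount": "number", "from_account": "string", "to_account": "string"},
}

conversation = [
    {"role": "user", "content": "Transfer $500"},
    # Objective 1: ask for the missing required parameters.
    {"role": "assistant", "content": "Sure - which account should I transfer from, and to which account?"},
    # Objective 2: progressive disclosure - the user answers only one parameter.
    {"role": "user", "content": "From checking."},
    {"role": "assistant", "content": "Got it. And which account should receive the $500?"},
    {"role": "user", "content": "Savings."},
    # All parameters resolved: the model emits the tool call.
    {"role": "assistant", "tool_call": {"name": "transfer_funds",
        "arguments": {"amount": 500, "from_account": "checking", "to_account": "savings"}}},
    # Objective 3: respond in chat based on the executed tool's result.
    {"role": "tool", "content": '{"status": "success", "new_balance": 1250.00}'},
    {"role": "assistant", "content": "Done - $500 moved from checking to savings."},
]
```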
| 2025-04-01T21:20:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp77z1/archfunctionchat_1b3b7b_device_friendly_family_of/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp77z1
| false | null |
t3_1jp77z1
|
/r/LocalLLaMA/comments/1jp77z1/archfunctionchat_1b3b7b_device_friendly_family_of/
| false | false |
self
| 40 |
{'enabled': False, 'images': [{'id': 'ysgYnXaB8Ty5I4u9DDtkkVKD5LRTxcPz7lHeuWYdVA4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?width=108&crop=smart&auto=webp&s=5373395444e68d50d5d2320b3bfb5a58f7b630bd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?width=216&crop=smart&auto=webp&s=ea4fd8ad0617d9e936ee0eb44725f198fda58705', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?width=320&crop=smart&auto=webp&s=0a9b507f2f16c3094ea87cdcdcba46057be8b590', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?width=640&crop=smart&auto=webp&s=d87a77a0a811a345ab36d23f8870098459c08411', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?width=960&crop=smart&auto=webp&s=3cad134e2c7eedb55254782e2d6b9d252e1ca66d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?width=1080&crop=smart&auto=webp&s=3a33047c8172817fde21ac5f459c6caa9b7c6580', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ccC10BWAPF7OrVeXLYVxwqSpg2kuzrfW49wzmSeNHCc.jpg?auto=webp&s=c37bdeac8b643d5a9f372f993890673c3d3b4c6e', 'width': 1200}, 'variants': {}}]}
|
Msty vs LM Studio?
| 0 |
Just curious whether you guys like LM Studio or Msty more? I tried downloading and installing Msty, but I could not get it to work with a single LLM. However, LM Studio worked out of the box with any of the models that I tried. It's really strange. You should be able to plug and play with either of them with a very simple model, but for some reason Msty does not seem to work at all, which is contrary to the software's claims that it doesn't require any fine-tuning or frustrating setup. It doesn't seem to make much sense at all.
| 2025-04-01T21:30:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp7gox/msty_vs_lm_studio/
|
intimate_sniffer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp7gox
| false | null |
t3_1jp7gox
|
/r/LocalLLaMA/comments/1jp7gox/msty_vs_lm_studio/
| false | false |
self
| 0 | null |
PSA- IF you're using Nvidia 572.* Drivers on 40 and 30 series, GO BACK TO 566.36!
| 1 |
[removed]
| 2025-04-01T21:34:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp7jx2/psa_if_youre_using_nvidia_572_drivers_on_40_and/
|
nite2k
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp7jx2
| false | null |
t3_1jp7jx2
|
/r/LocalLLaMA/comments/1jp7jx2/psa_if_youre_using_nvidia_572_drivers_on_40_and/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IhDmMT-T7dJ2rIsv72RdSgUueqhznK3Ty8PCu4qN1UU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?width=108&crop=smart&auto=webp&s=7370aafa9987391dd413c3c7a75578a2a3022909', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?width=216&crop=smart&auto=webp&s=6ae9ad07a6a8b91b52ce73a8e53dcb477b394fb3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?width=320&crop=smart&auto=webp&s=bde37e0eda4a94d2d67ff9c300526ade654cbe19', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?width=640&crop=smart&auto=webp&s=3e91bd7cb29e35f9527c11b2cb9b0d923673225a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?width=960&crop=smart&auto=webp&s=3b451f887417da36b518266b1e688c5fe197d5fc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?width=1080&crop=smart&auto=webp&s=042426fba35d416c5a271b2d1119e433dbcb90a8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3lmXl3htUtvdpCqadl5WkHI3G1eKHY4SK9eRmn5xtCo.jpg?auto=webp&s=f07c1631e2acd7920e5fbe9ded94d7b08b983e83', 'width': 1200}, 'variants': {}}]}
|
workflow for recording audio/video, transcript and automatic document generation
| 2 |
Hi All,
I need to create a set of video tutorials (and a doc/PDF version) on how to use a non-public-facing application, and I'm not allowed to send the data to any cloud service.
I was thinking of implementing the following workflow (a rough sketch of the transcription and drafting steps is below):
* Use OBS (I'm working on a Mac) to capture screen and audio/voice
* Use Whisper to create the transcription
* Use some local LLM to organize the doc and generate output in Sphinx format
* Once it's in Sphinx format I'll double-check and adjust the output
Now, my questions are:
* Has anyone had a similar use case? How did you deal with it?
* Which local LLM is best to use?
* Is there any local app/model I can use that takes the audio/video file as input and creates the doc with screenshots included? Currently I have to add them manually when editing the Sphinx output, but it would be nice to have them already there.
Thanks
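A minimal sketch of the transcription and drafting steps of this workflow, assuming faster-whisper for local transcription and an Ollama-served model for the Sphinx draft (model names and file paths are placeholder assumptions, not recommendations):
```
# Minimal sketch: local transcription + local LLM draft, nothing leaves the machine.
import requests
from faster_whisper import WhisperModel

def transcribe(audio_path: str) -> str:
    model = WhisperModel("medium", device="auto", compute_type="int8")
    segments, _info = model.transcribe(audio_path)
    return " ".join(seg.text.strip() for seg in segments)

def draft_sphinx_page(transcript: str, model: str = "qwen2.5:14b") -> str:
    prompt = (
        "Turn this tutorial transcript into a Sphinx reStructuredText page "
        "with a title, section headings, and numbered steps:\n\n" + transcript
    )
    # Ollama's local HTTP API.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    with open("tutorial.rst", "w") as f:
        f.write(draft_sphinx_page(transcribe("obs_recording.wav")))
```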
| 2025-04-01T21:39:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp7o2i/workflow_for_recording_audiovideo_transcript_and/
|
Dazzling-Gift7189
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp7o2i
| false | null |
t3_1jp7o2i
|
/r/LocalLLaMA/comments/1jp7o2i/workflow_for_recording_audiovideo_transcript_and/
| false | false |
self
| 2 | null |
Build a Discord bot with TeapotLLM, an open-source ~800M model for hallucination-resistant Q&A running entirely on your CPU.
| 1 | 2025-04-01T22:05:55 |
https://teapotai.com/blogs/teapotllm_discord_bot
|
zakerytclarke
|
teapotai.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp8apx
| false | null |
t3_1jp8apx
|
/r/LocalLLaMA/comments/1jp8apx/build_a_discord_bot_with_teapotllm_an_opensource/
| false | false |
default
| 1 | null |
|
Ever wanted to estimate the costs of running something through a cloud LLM? I got tired of doing calculations by hand, so I made a macOS menubar app: AICostBar
| 1 | 2025-04-01T22:34:30 |
https://apps.apple.com/us/app/aicostbar/id6743988254
|
verbari_dev
|
apps.apple.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp8yoh
| false | null |
t3_1jp8yoh
|
/r/LocalLLaMA/comments/1jp8yoh/ever_wanted_to_estimate_the_costs_of_running/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'VjULV8x3JTr6SC-agZnKrUrmmf2MTymH02YVpnTfuwI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NL5twtFIVyRZMqlk_s4PXDheveiRt4mwhfyHYCeO7a4.jpg?width=108&crop=smart&auto=webp&s=be364bb59825bd45f562610844a637e78376b5e9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/NL5twtFIVyRZMqlk_s4PXDheveiRt4mwhfyHYCeO7a4.jpg?width=216&crop=smart&auto=webp&s=a1e90972e5f9d0a3559c4ee4ef8e7450b24f598f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/NL5twtFIVyRZMqlk_s4PXDheveiRt4mwhfyHYCeO7a4.jpg?width=320&crop=smart&auto=webp&s=700971f030e7635ae04a2e79465348cedd072494', 'width': 320}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/NL5twtFIVyRZMqlk_s4PXDheveiRt4mwhfyHYCeO7a4.jpg?auto=webp&s=37332f6edb788c1c540651e3bedb906a5fafd90e', 'width': 630}, 'variants': {}}]}
|
||
How much would solving fusion actually boost LLMs?
| 1 |
[removed]
| 2025-04-01T22:39:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp92kr/how_much_would_solving_fusion_actually_boost_llms/
|
Open_Mirror_1061
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp92kr
| false | null |
t3_1jp92kr
|
/r/LocalLLaMA/comments/1jp92kr/how_much_would_solving_fusion_actually_boost_llms/
| false | false |
self
| 1 | null |
Powering Multiple GPUs with multiple PSUs
| 2 |
So I was sent here by the home labbers.
I was asking a question about how cryptominers power multiple GPUs, and they said you guys would be using the same setup. So this is a question about how to power multiple GPUs when the main PSU won't be able to power all of them.
Long story short, I will have one 4090 and three 4070 PCIe cards on one motherboard. However, we obviously don't have the power.
I was looking at the following to use multiple GPUs [https://www.amazon.com/ADD2PSU-Connector-Multiple-Adapter-Synchronous/dp/B09Q11WG4Z/?\_encoding=UTF8&pd\_rd\_w=fQ8L3&content-id=amzn1.sym.255b3518-6e7f-495c-8611-30a58648072e%3Aamzn1.symc.a68f4ca3-28dc-4388-a2cf-24672c480d8f&pf\_rd\_p=255b3518-6e7f-495c-8611-30a58648072e&pf\_rd\_r=1YT4D5S3ER7MYTAN393A&pd\_rd\_wg=fGg7k&pd\_rd\_r=501f521f-069c-47dc-8b0a-cf212a639286&ref\_=pd\_hp\_d\_atf\_ci\_mcx\_mr\_ca\_hp\_atf\_d](https://www.amazon.com/ADD2PSU-Connector-Multiple-Adapter-Synchronous/dp/B09Q11WG4Z/?_encoding=UTF8&pd_rd_w=fQ8L3&content-id=amzn1.sym.255b3518-6e7f-495c-8611-30a58648072e%3Aamzn1.symc.a68f4ca3-28dc-4388-a2cf-24672c480d8f&pf_rd_p=255b3518-6e7f-495c-8611-30a58648072e&pf_rd_r=1YT4D5S3ER7MYTAN393A&pd_rd_wg=fGg7k&pd_rd_r=501f521f-069c-47dc-8b0a-cf212a639286&ref_=pd_hp_d_atf_ci_mcx_mr_ca_hp_atf_d)
Basically I want to know how you would be powering them. And yes, my system can handle it, as it ran 4 single-slot GPUs as a proof of concept; we just need to expand now and get more power.
| 2025-04-01T22:43:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp96eq/powering_multiple_gpus_with_multiple_psus/
|
eagle6705
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp96eq
| false | null |
t3_1jp96eq
|
/r/LocalLLaMA/comments/1jp96eq/powering_multiple_gpus_with_multiple_psus/
| false | false |
self
| 2 | null |
Easy Whisper UI for Windows
| 32 |
I made an easy-to-use UI for Whisper on Windows. It is completely made with C++ and has support for all GPUs. I posted it here recently, but I've since made several major improvements. Please let me know your results; the installer should handle absolutely everything for you!
[https://github.com/mehtabmahir/easy-whisper-ui](https://github.com/mehtabmahir/easy-whisper-ui)
| 2025-04-01T22:54:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp9emz/easy_whisper_ui_for_windows/
|
mehtabmahir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp9emz
| false | null |
t3_1jp9emz
|
/r/LocalLLaMA/comments/1jp9emz/easy_whisper_ui_for_windows/
| false | false |
self
| 32 |
{'enabled': False, 'images': [{'id': 'ljshWAdnV4EIzTkkxtQw_28KVD3rm0dLTHoTGmOcad0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?width=108&crop=smart&auto=webp&s=2eb7d9a49ffa4155b1c164b63524988c374e6826', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?width=216&crop=smart&auto=webp&s=9796b91b7c270922869b4ad56107e1653e401be0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?width=320&crop=smart&auto=webp&s=cbc50bdb3f714a501c675b8095b02688a16165f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?width=640&crop=smart&auto=webp&s=5757066e66b08859eff4ef81588ae31dc9d2be1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?width=960&crop=smart&auto=webp&s=add90fd1a71c7cfd2426cc61c3a33026247e1a6b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?width=1080&crop=smart&auto=webp&s=31ff5ceb0a7337ab7c4ce83a016b829994d488bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0trvaOigUt_wCn8nmOyl9-YojHOI4scITgJHDmRJ1z0.jpg?auto=webp&s=05cd58240248508df3aedaf371c9153c806f5506', 'width': 1200}, 'variants': {}}]}
|
Why isn't the whole industry focusing on online-learning?
| 25 |
LLMs (currently) have no memory. You will always be able to tell LLMs from humans because LLMs are stateless. Right now you basically have a bunch of hacks like system prompts and RAG that try to make them resemble something they're not.
So what about concurrent multi-(Q)LoRA serving? Tell me why there's seemingly no research in this direction. "AGI" to me seems as simple as freezing the base weights, then training 1 pass over a LoRA for memory. Like, say your goal is to understand a codebase. Just train a LoRA on 1 pass through that codebase? First you give it the folder/file structure, then the codebase. Tell me why this wouldn't work. Then 1 node can handle multiple concurrent users by storing 1 small LoRA for each user (see the sketch after the snippet below).
```
Directory structure:
└── microsoft-lora/
├── README.md
├── LICENSE.md
├── SECURITY.md
├── setup.py
├── examples/
│ ├── NLG/
│ │ ├── README.md
...
================================================
File: README.md
================================================
# LoRA: Low-Rank Adaptation of Large Language Models
This repo contains the source code of the Python package `loralib` and several examples of how to integrate it with PyTorch models, such as those in Hugging Face.
We only support PyTorch for now.
See our paper for a detailed description of LoRA.
...
================================================
File: LICENSE.md
================================================
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
...
```
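A rough sketch of the single-pass per-user LoRA idea, assuming Hugging Face transformers + peft; the base model, rank, chunk size, and plain next-token objective are illustrative assumptions, not a recipe known to work:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # assumed; any small causal LM works
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
model.train()

opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)

# One pass: feed the folder/file structure first, then the files, chunked to fit context.
corpus = open("codebase_dump.txt").read()   # e.g. a gitingest-style dump like the one above
chunks = [corpus[i:i + 8000] for i in range(0, len(corpus), 8000)]

for chunk in chunks:
    batch = tok(chunk, return_tensors="pt", truncation=True, max_length=2048)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()

model.save_pretrained("user_1234_lora")  # the small per-user adapter the server would keep
```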
| 2025-04-01T22:58:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp9hu6/why_isnt_the_whole_industry_focusing_on/
|
unraveleverything
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp9hu6
| false | null |
t3_1jp9hu6
|
/r/LocalLLaMA/comments/1jp9hu6/why_isnt_the_whole_industry_focusing_on/
| false | false |
self
| 25 | null |
Best places to buy/sell LLM gear?
| 1 |
[removed]
| 2025-04-01T23:02:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp9lbk/best_places_to_buysell_llm_gear/
|
iShopStaples
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp9lbk
| false | null |
t3_1jp9lbk
|
/r/LocalLLaMA/comments/1jp9lbk/best_places_to_buysell_llm_gear/
| false | false |
self
| 1 | null |
🪿Qwerky-72B and 32B : Training large attention free models, with only 8 GPU's
| 141 | 2025-04-01T23:12:05 |
secopsml
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp9tfh
| false | null |
t3_1jp9tfh
|
/r/LocalLLaMA/comments/1jp9tfh/qwerky72b_and_32b_training_large_attention_free/
| false | false | 141 |
{'enabled': True, 'images': [{'id': 'V41wIFLRvAF_L6MGb4Bh0VlT7wgV4AVqG-pRrpLhPsU', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?width=108&crop=smart&auto=webp&s=96cb1e038e840c8fb9cd2a3641d3c77036ebb185', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?width=216&crop=smart&auto=webp&s=2e2f18a36c9a6b09fe3adccb5189705ca1197bdb', 'width': 216}, {'height': 83, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?width=320&crop=smart&auto=webp&s=1986d66b2f4bb31d9620018f50e537d9170ed316', 'width': 320}, {'height': 166, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?width=640&crop=smart&auto=webp&s=687e079083a01b404da217fc45bd385974523d62', 'width': 640}, {'height': 249, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?width=960&crop=smart&auto=webp&s=0072e335ab6657d787eb8c4250f3669b05c00468', 'width': 960}, {'height': 281, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?width=1080&crop=smart&auto=webp&s=45f80da203de8d401f99a7d8145f7e15b0b8b25b', 'width': 1080}], 'source': {'height': 1035, 'url': 'https://preview.redd.it/hzuxqeqn2bse1.png?auto=webp&s=c8c7a003a21ca7f7bbf6eaf4d1a8857bf33d1abf', 'width': 3975}, 'variants': {}}]}
|
|||
I got tired of guessing what blackbox AI coding tools were sending as prompt context... so I built a transparent local open-source coding tool
| 146 |
I've been using Cursor & GitHub Copilot and found it frustrating that I couldn't see what prompts were actually being sent.
For example, I have no idea why I got wildly different results when I sent the same prompt to Cursor vs ChatGPT with o3-mini, where the Cursor response was much shorter (and also incorrect) compared to ChatGPT's.
So, I've built a new open-source AI coding tool Dyad that runs locally: [https://github.com/dyad-sh/dyad](https://github.com/dyad-sh/dyad)
It just got a new LLM debugging page that shows exactly what’s being sent to the model, so you can finally understand why the LLM is responding the way it does.
More demos of the tool here: [https://dyad.sh/](https://dyad.sh/)
Let me know what you think. Is this useful?
| 2025-04-01T23:22:03 |
https://v.redd.it/8xw67g0v2bse1
|
wwwillchen
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpa1ep
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8xw67g0v2bse1/DASHPlaylist.mpd?a=1746141736%2CZjAwNGI2ZDFlMTEwMjdhMmI2NjQ2ZTRhMTYxY2Q2NDA5NDA1ZDAwNjdmNjMyM2E3ZjQwNzkyMTMyMDhjMzU1Nw%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/8xw67g0v2bse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/8xw67g0v2bse1/HLSPlaylist.m3u8?a=1746141736%2COTk3MTc0ZmE3MjFiYTJlN2Y1NDEyZDU1ODI0YTZjY2I0MjkyNTA5MDMyYTYwOGY2ZTY2YzJmZmY3OThhZDk4OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8xw67g0v2bse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1660}}
|
t3_1jpa1ep
|
/r/LocalLLaMA/comments/1jpa1ep/i_got_tired_of_guessing_what_blackbox_ai_coding/
| false | false | 146 |
{'enabled': False, 'images': [{'id': 'bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?width=108&crop=smart&format=pjpg&auto=webp&s=56d9b2b9bf3a2546a3cef7192c1734524d3c1245', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?width=216&crop=smart&format=pjpg&auto=webp&s=18d50131c54736f0f1ba8cc271f26fe6ba7876ae', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?width=320&crop=smart&format=pjpg&auto=webp&s=39bff4e64f0903332ce4ec836bf3ecee28a9643d', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?width=640&crop=smart&format=pjpg&auto=webp&s=8f15d08aefde66d4f16d21ea5cfc196745b91079', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?width=960&crop=smart&format=pjpg&auto=webp&s=82e904f5945c2ecf65a93f9efccd8ade0b97ec76', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=301294e75cd54c768bf4c90bc93860271a5535d6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bjBlY3dlMHYyYnNlMeuto_4yHK9N3Xzvw4yI_cUvoQNTs3J3u5A3WEOq9BHN.png?format=pjpg&auto=webp&s=c04b02d52dfe81d44fb3587ebebda88fad76f784', 'width': 1660}, 'variants': {}}]}
|
|
Would U.S. customers use data centers located in Japan? Curious about your thoughts.
| 1 |
[removed]
| 2025-04-01T23:29:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpa77y/would_us_customers_use_data_centers_located_in/
|
Good-Listen1276
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpa77y
| false | null |
t3_1jpa77y
|
/r/LocalLLaMA/comments/1jpa77y/would_us_customers_use_data_centers_located_in/
| false | false |
self
| 1 | null |
🧠 Symbolic Memory Loops for Local LLMs – Reflection-Based Continuity Using YAML + Journaling Tools (Now on GitHub)
| 12 |
Hey folks, I wanted to share a project I’ve been working on for a bit. It’s an experiment in creating **symbolic memory loops** for local LLMs (e.g. Nous-Hermes-7B GPTQ), built around:
* 📝 **Reflections**: automatically condensed memory entries (`reflections.txt`)
* 🧠 **YAML persona scaffolding**: updated with symbolic context
* 🧪 **Stress testing**: recursive prompt loops to explore continuity fatigue
* 🩹 **Recovery via breaks**: guided symbolic decompression
All tools are local, lightweight, and run fine on 6GB VRAM.
The repo includes real experiment logs, token traces, and even the stress collapse sequence (I called it “The Gauntlet”).
**Why?**
Instead of embedding-based memory, I wanted to test if a model could develop a *sense of symbolic continuity* over time using just structured inputs, reflection scaffolds, and self-authored memory hooks.
This project isn’t trying to simulate sentience. It’s not about agents.
It’s about seeing what happens when LLMs are given tools to **reflect**, **recover**, and carry symbolic weight between sessions.
🧠 Repo: [github.com/babibooi/symbolic-memory-loop](https://github.com/babibooi/symbolic-memory-loop)
☕ Ko-fi: [ko-fi.com/babibooi]() (I’m trying to survive this month lol)
If you’re also experimenting with long-term memory strategies or symbolic persistence, I’d love to swap notes. And if you just want to poke at poetic spaghetti held together by YAML and recursion? That’s there too.
Thanks!
– Booi :3c
| 2025-04-02T00:00:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpavy5/symbolic_memory_loops_for_local_llms/
|
BABI_BOOI_ayyyyyyy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpavy5
| false | null |
t3_1jpavy5
|
/r/LocalLLaMA/comments/1jpavy5/symbolic_memory_loops_for_local_llms/
| false | false |
self
| 12 |
{'enabled': False, 'images': [{'id': 'RMBUiiupgnK2g5O5OBtaNE1a3hm5-VgMSG_Mk9TzrTQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?width=108&crop=smart&auto=webp&s=766c419c8bf0378265d09409d961a0b0b9481e76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?width=216&crop=smart&auto=webp&s=33e47377724e441789d80ed21d2bec82a04896f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?width=320&crop=smart&auto=webp&s=eed8587abfc9fbea18f5e596de72ae220e7b0ec2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?width=640&crop=smart&auto=webp&s=ee6daa6968e6224c04125fff68a8351ac7f86317', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?width=960&crop=smart&auto=webp&s=5003c80e2de4ab58e376c44166110547591f67d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?width=1080&crop=smart&auto=webp&s=61e242dd9d13d42e85ea3a54994c02d9ae19d2dc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sa2qgh1YRf0WVbtj8t6T3l4_BUhzBz16PnCPbWpyWsQ.jpg?auto=webp&s=4a9167a347b756ff6e318f6110347742c2211cb8', 'width': 1200}, 'variants': {}}]}
|
tried a bunch of open models with goose
| 1 |
[removed]
| 2025-04-02T00:37:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpbnhn/tried_a_bunch_of_open_models_with_goose/
|
lifelonglearn3r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpbnhn
| false | null |
t3_1jpbnhn
|
/r/LocalLLaMA/comments/1jpbnhn/tried_a_bunch_of_open_models_with_goose/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'buEkmVRt3HjaU7JHLtFqJsOsd0VtYAQKtWMJ46ukx8c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=108&crop=smart&auto=webp&s=b7cceef5e0dec025e14511009d68d2fdd0f0bc4b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=216&crop=smart&auto=webp&s=6c7c81f28f1ac8999a933eebf4dee89430a7e4dc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=320&crop=smart&auto=webp&s=3de528afb72ea581aaeb8777504703790e3ae747', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=640&crop=smart&auto=webp&s=cd7ebb67fdbf8351f9a6eedeebf6f17a1be026ca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=960&crop=smart&auto=webp&s=da8a2b14a25ea04dac9ee3edae8eb455ed7439af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=1080&crop=smart&auto=webp&s=2a2952f13f0618e4982cea67221a66fe0591f414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?auto=webp&s=15c8c9214ec9cb9bcc61d6701b268bcc898bc045', 'width': 1200}, 'variants': {}}]}
|
|
Qwen3 will be released in the second week of April
| 497 |
> Exclusive from Huxiu: Alibaba is set to release its new model, Qwen3, in the second week of April 2025. This will be Alibaba's most significant model product in the first half of 2025, coming approximately seven months after the release of Qwen2.5 at the Yunqi Computing Conference in September 2024.
https://m.huxiu.com/article/4187485.html
| 2025-04-02T00:37:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpbnih/qwen3_will_be_released_in_the_second_week_of_april/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpbnih
| false | null |
t3_1jpbnih
|
/r/LocalLLaMA/comments/1jpbnih/qwen3_will_be_released_in_the_second_week_of_april/
| false | false |
self
| 497 |
{'enabled': False, 'images': [{'id': '_ULuNp3o17vQ6uiTFTaY7R_ODpNC8qJTGQM4O_oC_Gs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_TN-90IOxj92vk01XTSPqq6GXtxjAeJOgMIEHCEgTUk.jpg?width=108&crop=smart&auto=webp&s=b1790f9dc81370eebaa0b54108d243982f494209', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_TN-90IOxj92vk01XTSPqq6GXtxjAeJOgMIEHCEgTUk.jpg?width=216&crop=smart&auto=webp&s=1ede5b09d84ff072e2cbfca38727f08cce84bc81', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_TN-90IOxj92vk01XTSPqq6GXtxjAeJOgMIEHCEgTUk.jpg?width=320&crop=smart&auto=webp&s=ef8807f49dcfdf1929b987e9f08437d58581b2e7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_TN-90IOxj92vk01XTSPqq6GXtxjAeJOgMIEHCEgTUk.jpg?width=640&crop=smart&auto=webp&s=682ea5e14b89c733acd36e8f0e56ef2180878757', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/_TN-90IOxj92vk01XTSPqq6GXtxjAeJOgMIEHCEgTUk.jpg?auto=webp&s=f63149b6a65c53d6f71eb34fde632bfd056a7527', 'width': 798}, 'variants': {}}]}
|
tried a bunch of open models with goose
| 8 |
hey all, been lurking forever and finally have something hopefully worth sharing. I've been messing with different models in Goose (an open-source AI agent by Block, similar to Aider) and ran some benchmarking that might be interesting. I tried out the Qwen series, QwQ, the latest deepseek-chat-v3 checkpoint, llama3, and also the leading closed models.
For models that don't support native tool calling (deepseek-r1, gemma3, phi4) which is needed for agent use cases, I built a "toolshim" for Goose which uses a local ollama model to interpret responses from the primary model into the right tool calls. It's usable but the performance is unsurprisingly subpar compared to models specifically fine-tuned for tool calling. Has anyone had any success with other approaches for getting these models to successfully use tools?
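A rough sketch of what such a shim can look like (an illustration of the general idea, not Goose's actual implementation; the model names and the single-tool schema are assumptions):
```
import json
import requests

OLLAMA = "http://localhost:11434/api/generate"

def ask(model: str, prompt: str) -> str:
    r = requests.post(OLLAMA, json={"model": model, "prompt": prompt, "stream": False}, timeout=300)
    r.raise_for_status()
    return r.json()["response"]

TOOLS = '[{"name": "create_file", "args": {"path": "str", "content": "str"}}]'

def toolshim(task: str, primary: str = "deepseek-r1:14b", shim: str = "mistral-nemo") -> dict:
    # 1. The primary model (no native tool calling) answers in plain text.
    plan = ask(primary, f"Available tools: {TOOLS}\nTask: {task}\nDescribe exactly what to do.")
    # 2. The small shim model converts that answer into one JSON tool call.
    raw = ask(shim, f"Convert this plan into a single JSON tool call matching {TOOLS}. "
                    f"Reply with JSON only.\n\nPlan:\n{plan}")
    return json.loads(raw)  # in practice you'd strip stray text and retry on bad JSON

print(toolshim("create hello.txt containing 'hi'"))
```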
I ran 8 pretty simple tasks x3 times for each model to get the overall rankings:
* Create file
* List files
* Search/replace in file
* Build flappy bird
* Creating a wikipedia-stylized page
* Data analysis on a CSV
* Restaurant research on web
* Blogpost summarization
# Here are the results:
|Rank|Model|Average Eval Score|Inference Provider|
|-----|-----|-----|-----|
|1|claude-3-5-sonnet-2|1.00|databricks (bedrock)|
|2|claude-3-7-sonnet|0.94|databricks (bedrock)|
|3|claude-3-5-haiku|0.91|databricks (bedrock)|
|4|o1|0.81|databricks (bedrock)|
|4|gpt-4o|0.81|databricks (bedrock)|
|6|qwen2.5-coder:32b|0.8|ollama|
|7|o3-mini|0.79|databricks (bedrock)|
|8|qwq|0.77|ollama|
|9|gpt-4o-mini|0.74|databricks (bedrock)|
|10|deepseek-chat-v3-0324|0.73|openrouter|
|11|gpt-4-5-preview|0.67|databricks|
|12|qwen2.5:32b|0.64|ollama|
|13|qwen2.5:14b|0.62|ollama|
|14|qwen2.5-coder:14b|0.51|ollama|
|15|deepseek-r1-toolshim-mistral-nemo\*|0.48|openrouter|
|16|llama3.3:70b-instruct-q4\_K\_M|0.47|ollama|
|17|phi4-toolshim-mistral-nemo\*|0.46|ollama|
|18|phi4-mistral-nemo|0.45|ollama|
|19|gemma3:27b-toolshim-mistral-nemo\*|0.43|ollama|
|20|deepseek-r1-toolshim-qwen2.5-coder7b\*|0.42|openrouter|
|21|llama3.3:70b-instruct-q8\_0|0.41|ollama|
|22|deepseek-r1:14b-toolshim-mistral-nemo\*|0.37|openrouter|
|23|deepseek-r1-distill-llama-70b-toolshim-mistral-nemo\*|0.36|ollama|
|24|phi4-toolshim-qwen2.5-coder7b\*|0.3|ollama|
|25|mistral-nemo|0.27|ollama|
|26|deepseek-r1-distill-llama-70b-toolshim-qwen2.5-coder7b\*|0.26|openrouter|
|27|llama3.2|0.25|ollama|
|28|gemma3:27b-toolshim-qwen2.5-coder7b\*|0.24|ollama|
|29|deepseek-r1:14b-toolshim-qwen2.5-coder7b\*|0.22|ollama|
|29|gemma3:12b-toolshim-qwen2.5-coder7b\*|0.22|ollama|
|31|mistral|0.17|ollama|
|32|gemma3:12b-toolshim-mistral-nemo\*|0.15|ollama|
I'm pretty excited about Qwen/QwQ/Deepseek-chat from these rankings! I'm impressed with the 32B model size performance although the tasks I tried are admittedly simple.
Here are some screenshots and gifs comparing some of the results across the models:
[Claude 3.7 Sonnet](https://preview.redd.it/v36hanhlgbse1.png?width=1898&format=png&auto=webp&s=4522686b361aced31272dd7335b2873420716421)
[deepseek-chat-v3-0324](https://preview.redd.it/7usml71qgbse1.png?width=2144&format=png&auto=webp&s=59ccdb513735841b521b49de257a58afe91d50d6)
[qwen2.5-coder:32b](https://preview.redd.it/h94udhotgbse1.png?width=2144&format=png&auto=webp&s=a5ecc853cab4c97cb391340c18052ed23343ac93)
[deepseek-r1 70B with mistral-nemo as the tool interpreter](https://preview.redd.it/n6j18kyxgbse1.png?width=2144&format=png&auto=webp&s=607bfd82876c3d79b611359c8d255a487b2d0b77)
[deepseek-chat-v3-0324](https://i.redd.it/lslc1mg8hbse1.gif)
[qwq](https://i.redd.it/6fr1c0oehbse1.gif)
[qwen2.5-coder:32b](https://i.redd.it/hitdcjaihbse1.gif)
[deepseek-r1 with mistral-nemo tool interpreter](https://i.redd.it/asn7ovnmhbse1.gif)
here's the full blogpost about it I wrote with more results: [https://block.github.io/goose/blog/2025/03/31/goose-benchmark](https://block.github.io/goose/blog/2025/03/31/goose-benchmark)
| 2025-04-02T00:40:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpbptf/tried_a_bunch_of_open_models_with_goose/
|
lifelonglearn3r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpbptf
| false | null |
t3_1jpbptf
|
/r/LocalLLaMA/comments/1jpbptf/tried_a_bunch_of_open_models_with_goose/
| false | false | 8 |
{'enabled': False, 'images': [{'id': 'buEkmVRt3HjaU7JHLtFqJsOsd0VtYAQKtWMJ46ukx8c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=108&crop=smart&auto=webp&s=b7cceef5e0dec025e14511009d68d2fdd0f0bc4b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=216&crop=smart&auto=webp&s=6c7c81f28f1ac8999a933eebf4dee89430a7e4dc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=320&crop=smart&auto=webp&s=3de528afb72ea581aaeb8777504703790e3ae747', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=640&crop=smart&auto=webp&s=cd7ebb67fdbf8351f9a6eedeebf6f17a1be026ca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=960&crop=smart&auto=webp&s=da8a2b14a25ea04dac9ee3edae8eb455ed7439af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=1080&crop=smart&auto=webp&s=2a2952f13f0618e4982cea67221a66fe0591f414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?auto=webp&s=15c8c9214ec9cb9bcc61d6701b268bcc898bc045', 'width': 1200}, 'variants': {}}]}
|
|
Which model to choose for custom TTS?
| 1 |
[removed]
| 2025-04-02T01:12:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpcd5d/which_model_to_choose_for_custom_tts/
|
You_Dayn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpcd5d
| false | null |
t3_1jpcd5d
|
/r/LocalLLaMA/comments/1jpcd5d/which_model_to_choose_for_custom_tts/
| false | false |
self
| 1 | null |
5090 Card vs two 5070ti
| 4 |
What is the performance penalty of running two 5070 Ti cards with 16 GB of VRAM each versus a single 5090? In my part of the world, 5090s are selling for way more than twice the price of a 5070 Ti. Most of the models I'm interested in running at the moment are GGUF files of about 20 GB that don't fit into a single 5070 Ti card. Would most of the layers run on one card with a few on the second card? I've been running LM Studio and GPT4All on the front end.
Regards All
| 2025-04-02T01:25:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpcmpq/5090_card_vs_two_5070ti/
|
Brave_Sheepherder_39
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpcmpq
| false | null |
t3_1jpcmpq
|
/r/LocalLLaMA/comments/1jpcmpq/5090_card_vs_two_5070ti/
| false | false |
self
| 4 | null |
Real-Time Introspective Compression for Transformers
| 30 |
I recently started thinking about what a shame it is that LLMs have no way of directly accessing their own internal states, and how potentially useful that would be if they could. One thing led to the next, and I ended up developing those ideas a lot further.
Transformers today discard internal states after each token, losing valuable information. There's no rollback, introspection, or replaying of their reasoning. Saving every activation isn't practical; it would require way too much space (hundreds of megabytes at least).
The insight here is that transformer activations aren't randomly scattered in high-dimensional space. Instead, they form structured, lower-dimensional manifolds shaped by architecture, language structure, and learned tasks. It's all sitting on a paper-thin membrane in N-space!
This suggested a neat analogy: just like video games save compact states (player location, inventory, progress flags) instead of full frames, transformers could efficiently save "thought states," reconstructable at any time. Reload your saved game, for LLMs!
Here's the approach: attach a small sidecar model alongside a transformer to compress its internal states into compact latent codes. These codes can later be decoded to reconstruct the hidden states and attention caches. The trick is to compress stuff a LOT, but not be TOO lossy.
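A tiny sketch of what such a sidecar could look like in PyTorch (an illustration of the idea, not the author's code; the dimensions and plain MSE reconstruction objective are assumptions):
```
import torch
import torch.nn as nn

class SidecarCompressor(nn.Module):
    """Compress a layer's hidden states into short latent codes and decode them back."""
    def __init__(self, d_model: int = 4096, d_latent: int = 256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_model, 1024), nn.GELU(), nn.Linear(1024, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 1024), nn.GELU(), nn.Linear(1024, d_model))

    def compress(self, hidden):      # [tokens, d_model] -> [tokens, d_latent]
        return self.enc(hidden)

    def reconstruct(self, latent):   # [tokens, d_latent] -> [tokens, d_model]
        return self.dec(latent)

# Train on reconstruction of activations captured during normal decoding.
sidecar = SidecarCompressor()
opt = torch.optim.AdamW(sidecar.parameters(), lr=3e-4)
hidden = torch.randn(512, 4096)      # stand-in for a captured activation snapshot
loss = nn.functional.mse_loss(sidecar.reconstruct(sidecar.compress(hidden)), hidden)
loss.backward()
opt.step()
# A saved "thought state" is then just the latent codes plus the decoding position,
# far smaller than the raw activations and KV cache they approximate.
```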
What new capabilities would this enable? Transformers could rewind their thoughts, debug errors at the latent level, or explore alternative decision paths. RL agents could optimize entire thought trajectories instead of just outputs. A joystick for the brain if you will.
This leads naturally to the concept of a rewindable reasoning graph, where each compressed state is a node. Models could precisely backtrack, branch into alternate reasoning paths, and debug the causes of errors internally. Like a thoughtful person can (hopefully!).
Longer-term, it suggests something bigger: a metacognitive operating system for transformers, enabling AI to practice difficult reasoning tasks repeatedly, refine cognitive strategies, and transfer learned skills across domains. Learning from learning, if you will.
Ultimately, the core shift is moving transformers from stateless text generators into cognitive systems capable of reflective self-improvement. It's a fundamentally new way for AI to become better at thinking.
For fun, I wrote it up and formatted it as a fancy academic-looking paper, which you can read here:
https://raw.githubusercontent.com/Dicklesworthstone/llm_introspective_compression_and_metacognition/main/introspective_compression_for_llms.pdf
| 2025-04-02T01:25:57 |
https://github.com/Dicklesworthstone/llm_introspective_compression_and_metacognition
|
dicklesworth
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpcmv2
| false | null |
t3_1jpcmv2
|
/r/LocalLLaMA/comments/1jpcmv2/realtime_introspective_compression_for/
| false | false | 30 |
{'enabled': False, 'images': [{'id': 'gf1vKDDLxDbz8nvv4gUfPtSjf6JvU0OMRkkt1SdHqzQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?width=108&crop=smart&auto=webp&s=bc073fff2d33067f9146bdcbf75fda30979647b8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?width=216&crop=smart&auto=webp&s=b2a020fe059654691607ef4a1fdb559f4aad137e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?width=320&crop=smart&auto=webp&s=87de8db6c254edd768556ebcc55c0ff0d8432435', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?width=640&crop=smart&auto=webp&s=9274e777d90eb0a2fcfd188a0baead19fa50ecbf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?width=960&crop=smart&auto=webp&s=2d12bb6a4e62ecc0c289adb548d667da923fdbb3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?width=1080&crop=smart&auto=webp&s=df42033ae3b8b38091ba49df61ad729f4ea51d74', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nSUCPw-OePlsDcFeEZ-RExxT1yhTaclOUN7asZP7pyQ.jpg?auto=webp&s=5e7ac811bb4ecc9007ff626194ddead3e037a009', 'width': 1200}, 'variants': {}}]}
|
|
Multimodal LLM for recognising human movement in realtime?
| 1 |
[removed]
| 2025-04-02T01:44:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpd0fy/multimodal_llm_for_recognising_human_movement_in/
|
simDaOne
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpd0fy
| false | null |
t3_1jpd0fy
|
/r/LocalLLaMA/comments/1jpd0fy/multimodal_llm_for_recognising_human_movement_in/
| false | false |
self
| 1 | null |
How can I make LLM answer correctly based on targe retrieval doc?
| 1 |
[removed]
| 2025-04-02T01:57:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpd9k3/how_can_i_make_llm_answer_correctly_based_on/
|
Background_Pear1312
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpd9k3
| false | null |
t3_1jpd9k3
|
/r/LocalLLaMA/comments/1jpd9k3/how_can_i_make_llm_answer_correctly_based_on/
| false | false |
self
| 1 | null |
I made it! 90 t/s on my iPhone with llama1b fp16
| 273 |
We completely rewrote the inference engine and did some tricks. This is a summarization with Llama 3.2 1B float16, and most of the time we are much faster than MLX. Let me know in the comments if you want to test the inference and I'll post a link.
| 2025-04-02T02:19:03 |
https://v.redd.it/fh2xne3uzbse1
|
darkolorin
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpdre9
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fh2xne3uzbse1/DASHPlaylist.mpd?a=1746152361%2COTk4YzFiM2YwZmVhZTM2OGUzOWJmNGFmNWZjNDI4MWNlNDQ1YWIxMWQ3MGM1OWIyYTE4ZTBiZjM2ZjM3ZjhkNg%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/fh2xne3uzbse1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 1270, 'hls_url': 'https://v.redd.it/fh2xne3uzbse1/HLSPlaylist.m3u8?a=1746152361%2CNjYxN2RiMjE3MjkxMWQyOGUxYmZiODI3MjYyNDBjMTRjYWZkODAyNmIzMDMxMDhjYjM3NmE2ZDYzYWU5MzdlMg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/fh2xne3uzbse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1jpdre9
|
/r/LocalLLaMA/comments/1jpdre9/i_made_it_90_ts_on_my_iphone_with_llama1b_fp16/
| true | false |
spoiler
| 273 |
{'enabled': False, 'images': [{'id': 'MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9', 'resolutions': [{'height': 190, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=108&crop=smart&format=pjpg&auto=webp&s=672f0f7bedbdd5788f625ff1a5d0fe29aabbba57', 'width': 108}, {'height': 381, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=216&crop=smart&format=pjpg&auto=webp&s=9547f0bbeef79a70a729c022a0f0a88ab3995c61', 'width': 216}, {'height': 564, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=320&crop=smart&format=pjpg&auto=webp&s=6c4e6c0d2ee98254e31e51038478939a9114ba26', 'width': 320}, {'height': 1129, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=640&crop=smart&format=pjpg&auto=webp&s=528fd45efa6e4cda308b5d918cdba7e81558646a', 'width': 640}], 'source': {'height': 1560, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?format=pjpg&auto=webp&s=2a8d95ae734c8b6cc532628b5c89ef7c135153a9', 'width': 884}, 'variants': {'obfuscated': {'resolutions': [{'height': 190, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=5bdbe1053aeb4a08cf41a6be1e5070f72a6f09af', 'width': 108}, {'height': 381, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=dd259c5327e9bd965ac1a47d01b32fb328f595a3', 'width': 216}, {'height': 564, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=21e50a0890fc18c3836879cd28ac92ce5ddd3898', 'width': 320}, {'height': 1129, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=011b8e4a3dbdce9956f20dfc6999f1cb6f42df05', 'width': 640}], 'source': {'height': 1560, 'url': 'https://external-preview.redd.it/MWJjaTB0M3N6YnNlMYd_toJEYnGpIeTyrCojuzOUcBClRDU1aA9E_ql_yzv9.png?blur=40&format=pjpg&auto=webp&s=2fba4f8edd3aef213d8bebf6880b0d2316c84952', 'width': 884}}}}]}
|
New
| 1 |
[removed]
| 2025-04-02T02:51:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpejar/new/
|
Calm_Juice8730
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpejar
| false | null |
t3_1jpejar
|
/r/LocalLLaMA/comments/1jpejar/new/
| true | false |
spoiler
| 1 | null |
Qwen2.5-VL-32B and Mistral small tested against close source competitors
| 41 |
Hey all, I put a lot of time and burnt a ton of tokens testing this, so I hope you all find it useful. TLDR: Qwen and Mistral beat all GPT models by a wide margin. Qwen even beat Gemini to come in a close second behind Sonnet. Mistral is the smallest of the lot and still does better than 4o. Qwen is surprisingly good: the 32B is just as good as, if not better than, the 72B. Can't wait for Qwen 3, we might have a new leader; Sonnet needs to watch its back....
You don't have to watch the whole thing; links to the full evals are in the video description, along with a timestamp to just the results if you are not interested in understanding the test setup.
I welcome your feedback...
[https://www.youtube.com/watch?v=ECJ3ivdKLq8](https://www.youtube.com/watch?v=ECJ3ivdKLq8)
| 2025-04-02T03:12:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpez1o/qwen25vl32b_and_mistral_small_tested_against/
|
Ok-Contribution9043
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpez1o
| false | null |
t3_1jpez1o
|
/r/LocalLLaMA/comments/1jpez1o/qwen25vl32b_and_mistral_small_tested_against/
| false | false |
self
| 41 |
{'enabled': False, 'images': [{'id': 'PCaVphhCqnyLkehczU2M0cusinwqqn1Ctr1m6JWnPmw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ISt2k_dtdWEs69qvLswIpcVvBW2WDksB7du6tuu7mGs.jpg?width=108&crop=smart&auto=webp&s=9cbb931d8c0df40fea955fb971e2682383564eb1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ISt2k_dtdWEs69qvLswIpcVvBW2WDksB7du6tuu7mGs.jpg?width=216&crop=smart&auto=webp&s=055850f42530e5988387b460bc3ec7819c779116', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ISt2k_dtdWEs69qvLswIpcVvBW2WDksB7du6tuu7mGs.jpg?width=320&crop=smart&auto=webp&s=59075691ff5fc8b434c1e70abfd89fa73dd98a4e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ISt2k_dtdWEs69qvLswIpcVvBW2WDksB7du6tuu7mGs.jpg?auto=webp&s=6d86ed4677c048bd0e1df67be1453764f434a3a4', 'width': 480}, 'variants': {}}]}
|
Open Deep Research - produces 20+ page reports using local or online models
| 1 |
[removed]
| 2025-04-02T03:14:24 |
https://github.com/qx-labs/agents-deep-research
|
TheRedfather
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpf05p
| false | null |
t3_1jpf05p
|
/r/LocalLLaMA/comments/1jpf05p/open_deep_research_produces_20_page_reports_using/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'bzyCJCKiOfri7_EDQ1VBj4tZnrBAJg7MCZUd_sJit5E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=108&crop=smart&auto=webp&s=a3a02e163f562db76fce48ad44d99231b9b8f682', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=216&crop=smart&auto=webp&s=2ed074ea112d7b7e38a0e1f356e925595100d487', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=320&crop=smart&auto=webp&s=b20f7dafb87f8a6e9b5541006123ab95b634578d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=640&crop=smart&auto=webp&s=b12d17dbfdf55817865e34b7c85629f48f3e8468', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=960&crop=smart&auto=webp&s=04021d6b05ae183abd68a68ff4ff4bea8b91f076', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=1080&crop=smart&auto=webp&s=2e64814f0ced6bba269356657af6f831a26d7c50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?auto=webp&s=fbb772859b041fcf25c53e40d8f7e198d2c04952', 'width': 1200}, 'variants': {}}]}
|
|
How to process multiple files with a single prompt?
| 0 |
I have scans of checks on top of invoices. I would like to take multiple scanned image files, load them into an LLM, and have it write a .bat file to rename the files based on information on the invoice (invoice ID, another ID number, and a company name at a specified location) and the check (the check # and the date). I have a prompt which works for one file at a time; what sort of model setup do I need to handle multiple files?
What is the largest number of files which could be processed in a reasonable timeframe with accuracy and reliability?
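For illustration, here is a rough sketch of how the existing single-file prompt could simply be looped over a folder, assuming a local vision model behind an OpenAI-compatible endpoint; the endpoint, model name, and prompt wording are placeholders, not a specific recommendation:

```python
# Rough sketch: loop the existing one-file prompt over a folder of scans using a
# local vision model behind an OpenAI-compatible API. Endpoint, model name, and
# prompt wording are placeholders.
import base64, pathlib, requests

BASE_URL = "http://localhost:8080/v1"      # placeholder local server
MODEL = "your-vision-model"                # placeholder model name
PROMPT = ("Read this scan of a check on top of an invoice and reply with a single "
          "Windows 'ren' command renaming the file to InvoiceID_CheckNo_Date.ext")

lines = []
for img in sorted(pathlib.Path("scans").glob("*.png")):
    b64 = base64.b64encode(img.read_bytes()).decode()
    resp = requests.post(f"{BASE_URL}/chat/completions", json={
        "model": MODEL,
        "messages": [{"role": "user", "content": [
            {"type": "text", "text": f"{PROMPT}\nCurrent filename: {img.name}"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
        "temperature": 0,
    }, timeout=300)
    lines.append(resp.json()["choices"][0]["message"]["content"].strip())

# Collect one rename command per scan into a single batch file.
pathlib.Path("rename.bat").write_text("\n".join(lines), encoding="utf-8")
```

With a loop like this, the practical limit is mostly time per image rather than a hard file count, so accuracy should not degrade as the folder grows.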
| 2025-04-02T03:19:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpf3ms/how_to_process_multiple_files_with_a_single_prompt/
|
WillAdams
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpf3ms
| false | null |
t3_1jpf3ms
|
/r/LocalLLaMA/comments/1jpf3ms/how_to_process_multiple_files_with_a_single_prompt/
| false | false |
self
| 0 | null |
What is the hardware requirement to run llama 3.3 on a server.
| 1 |
[removed]
| 2025-04-02T03:47:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpflth/what_is_the_hardware_requirement_to_run_llama_33/
|
Thin-Pop3028
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpflth
| false | null |
t3_1jpflth
|
/r/LocalLLaMA/comments/1jpflth/what_is_the_hardware_requirement_to_run_llama_33/
| false | false |
self
| 1 | null |
How does qwen make money?
| 1 |
[removed]
| 2025-04-02T04:10:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpg03q/how_does_qwen_make_money/
|
OnceMoreOntoTheBrie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpg03q
| false | null |
t3_1jpg03q
|
/r/LocalLLaMA/comments/1jpg03q/how_does_qwen_make_money/
| false | false |
self
| 1 | null |
Multi-Token Attention
| 74 |
Abstract
>Soft attention is a critical mechanism powering LLMs to locate relevant parts within a given context. However, individual attention weights are determined by the similarity of only a single query and key token vector. This "single token attention" bottlenecks the amount of information used in distinguishing a relevant part from the rest of the context. To address this issue, we propose a new attention method, Multi-Token Attention (MTA), which allows LLMs to condition their attention weights on multiple query and key vectors simultaneously. This is achieved by applying convolution operations over queries, keys and heads, allowing nearby queries and keys to affect each other's attention weights for more precise attention. As a result, our method can locate relevant context using richer, more nuanced information that can exceed a single vector's capacity. Through extensive evaluations, we demonstrate that MTA achieves enhanced performance on a range of popular benchmarks. Notably, it outperforms Transformer baseline models on standard language modeling tasks, and on tasks that require searching for information within long contexts, where our method's ability to leverage richer information proves particularly beneficial.
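To make the key-query convolution idea concrete, here is a minimal, hypothetical PyTorch sketch; the kernel sizes are arbitrary, and head mixing, masking, and normalisation details from the paper are omitted, so this is not the authors' implementation:

```python
# Illustrative sketch of the key-query convolution idea in Multi-Token Attention.
# Kernel sizes are arbitrary and causal masking is omitted for brevity.
import torch
import torch.nn.functional as F

def mta_attention(q, k, v, conv_weight):
    # q, k, v: (batch, heads, seq, dim); conv_weight: (heads, 1, cq, ck)
    d = q.size(-1)
    logits = torch.einsum("bhqd,bhkd->bhqk", q, k) / d**0.5   # single-token logits
    cq, ck = conv_weight.shape[-2:]
    # Convolve over the (query, key) dimensions so nearby queries and keys
    # can influence each attention weight (one kernel per head).
    logits = F.conv2d(logits, conv_weight,
                      padding=(cq // 2, ck // 2),
                      groups=logits.size(1))
    attn = logits.softmax(dim=-1)
    return torch.einsum("bhqk,bhkd->bhqd", attn, v)

q = k = v = torch.randn(1, 8, 16, 64)
w = torch.randn(8, 1, 3, 5)               # 3 queries x 5 keys per kernel
print(mta_attention(q, k, v, w).shape)    # torch.Size([1, 8, 16, 64])
```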
| 2025-04-02T05:25:11 |
https://arxiv.org/abs/2504.00927
|
ninjasaid13
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jph6p2
| false | null |
t3_1jph6p2
|
/r/LocalLLaMA/comments/1jph6p2/multitoken_attention/
| false | false |
default
| 74 | null |
Project Loong is Interesting 🐉
| 0 |
CAMEL-AI, the team behind the [OWL](https://github.com/camel-ai/owl) framework, launched something very exciting around environments for agents.
It’s a big step toward improving reasoning in agents where clean, verified data is hard to come by.
Check out the blog here:
🔗 [https://www.camel-ai.org/blogs/project-loong-synthetic-data-at-scale-through-verifiers](https://www.camel-ai.org/blogs/project-loong-synthetic-data-at-scale-through-verifiers)
| 2025-04-02T06:19:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jphz7j/project_loong_is_interesting/
|
iamnotdeadnuts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jphz7j
| false | null |
t3_1jphz7j
|
/r/LocalLLaMA/comments/1jphz7j/project_loong_is_interesting/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '1boluPhWPQb5qNyavptyrBJcyiXRazLZbScD-97DMPs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?width=108&crop=smart&auto=webp&s=a809eee1f73649e85efa45220e78dc0806200dda', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?width=216&crop=smart&auto=webp&s=925741054816b91931082abb86c5dc3ac0dfc68e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?width=320&crop=smart&auto=webp&s=1e92485363308c0072c15e90006af9d0c72ef0f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?width=640&crop=smart&auto=webp&s=5f76dfb95fb85c4d89f59b24d8868744aec45a93', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?width=960&crop=smart&auto=webp&s=fedb802baecf4314400f6e7c0b247e7b83965772', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?width=1080&crop=smart&auto=webp&s=93fda070283e6790a86941d9dac37035cd76ab33', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kdnwxj6kFsum9iD0G0cifVOGIctqUMtdCSh5uC93Kbc.jpg?auto=webp&s=b6e4052e957e4db00878f054e7c91c6a291321cc', 'width': 1200}, 'variants': {}}]}
|
KTransformers Now Supports Multi-Concurrency and Runs 40 Tokens/s of DeepSeek-R1 Q4/FP8 on MRDIMM-8800
| 217 |
Hi, it's been a while since our last update.
We've been hard at work completely refactoring KTransformers to add the highly desired multi-concurrency support. This effort involved over 10,000 lines of code updates and took longer than we expected.
Drawing inspiration from the excellent architecture of sglang, we have implemented high-performance asynchronous concurrent scheduling in C++, including features like **continuous batching, chunked prefill,** and more. Thanks to GPU sharing in concurrent scenarios and the efficient flashinfer lib, overall throughput has also improved to a certain extent.
Also, with support from Intel, we tested KTransformers v0.2.4 on the latest Xeon6 + MRDIMM-8800 platform. By increasing concurrency, the total output throughput increased **from 17 tokens/s to 40 tokens/s.** We observed that the bottleneck has now shifted to the GPU. Using a higher-end GPU than the 4090D could further improve performance.
The following is a demonstration, and you can find more information at [https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/balance-serve.md](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/balance-serve.md) :
https://preview.redd.it/10g65zko0dse1.jpg?width=2560&format=pjpg&auto=webp&s=d2af5901cb0f773d315bfdfb324bb3c8ecf61a72
After this huge refactoring, we can now start working on merging the AMX part and open sourcing it. We are sure that this will happen in April.
Finally, we greatly thank the local LLaMa community for your support. We now have over 13K GitHub stars and are widely deployed in many scenarios. KTransformers is a project that grew from the localLLaMa community, and we hope to see what you want next.
Stay tuned!
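For anyone unfamiliar with the terminology, here is a purely conceptual Python sketch of what a continuous-batching decode loop does; it is illustrative only and is not KTransformers code (the real scheduler is implemented in C++ and far more involved):

```python
# Conceptual sketch of continuous batching with a dummy model.
# All names here are invented for illustration; this is NOT KTransformers code.
import random
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    max_len: int = 8
    token_ids: list = field(default_factory=list)

class DummyModel:
    eos_id = 0
    def decode_step(self, batch_token_ids):
        # Pretend to decode one token for every active sequence.
        return [random.choice([self.eos_id, 1, 2, 3]) for _ in batch_token_ids]

def run(model, waiting: deque, max_batch: int = 4):
    active = []
    while waiting or active:
        # Admit new requests whenever a slot frees up -- the core idea of
        # continuous batching: no waiting for the whole batch to finish.
        while waiting and len(active) < max_batch:
            active.append(waiting.popleft())
        tokens = model.decode_step([r.token_ids for r in active])
        still_active = []
        for req, tok in zip(active, tokens):
            req.token_ids.append(tok)
            if tok == model.eos_id or len(req.token_ids) >= req.max_len:
                print(f"done: {req.prompt!r} -> {req.token_ids}")
            else:
                still_active.append(req)
        active = still_active

run(DummyModel(), deque(Request(f"prompt {i}") for i in range(6)))
```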
| 2025-04-02T06:22:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpi0n9/ktransformers_now_supports_multiconcurrency_and/
|
CombinationNo780
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpi0n9
| false | null |
t3_1jpi0n9
|
/r/LocalLLaMA/comments/1jpi0n9/ktransformers_now_supports_multiconcurrency_and/
| false | false | 217 |
{'enabled': False, 'images': [{'id': '73mUFm44XgWWBSPNBPwsToGADM17MPy5rJL28IyrBHY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?width=108&crop=smart&auto=webp&s=af37776f72870a4e31fe38e762765e899f613df8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?width=216&crop=smart&auto=webp&s=3ae9874a2e18457181ae91f1b762147dc6fa27ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?width=320&crop=smart&auto=webp&s=03786e7a01fbb483b9bc5b0ba33848c9f80eae99', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?width=640&crop=smart&auto=webp&s=26792f11e8a9b5c5d3f26c92c937264cb15d0fec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?width=960&crop=smart&auto=webp&s=d98b1d3ff1bf01730f4aecb6c2f001b7d22bd8eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?width=1080&crop=smart&auto=webp&s=b54bd56ffef518f34c6507731ca5929b9f6f22cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/02ytj9SuhUQUk607vUmaEPEVgYFEBjKWtivCx0rWwKk.jpg?auto=webp&s=cb28815f7bc14cc185fc8151c83325da8c0c2584', 'width': 1200}, 'variants': {}}]}
|
|
What are the options for local high quality text to speech?
| 4 |
It doesn't have to be real time. I just care for consistent voices
| 2025-04-02T06:23:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpi1a7/what_are_the_options_for_local_high_quality_text/
|
idleWizard
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpi1a7
| false | null |
t3_1jpi1a7
|
/r/LocalLLaMA/comments/1jpi1a7/what_are_the_options_for_local_high_quality_text/
| false | false |
self
| 4 | null |
Why there's no working vision mistral small gguf?
| 3 |
Ollama doesn't even have official support for Mistral Small.
There are user-made GGUFs that (mostly) work great for text, but none of them handles images properly. When I test with the Mistral API it produces decent outputs for images, but the local GGUFs hallucinate completely on vision.
I like Mistral more than Gemma 3 for my use cases, but the lack of image support makes me sad.
p.s. don't get me wrong, gemma is great, it's just my own preference.
| 2025-04-02T06:23:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpi1kg/why_theres_no_working_vision_mistral_small_gguf/
|
kweglinski
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpi1kg
| false | null |
t3_1jpi1kg
|
/r/LocalLLaMA/comments/1jpi1kg/why_theres_no_working_vision_mistral_small_gguf/
| false | false |
self
| 3 | null |
Are there any TTS with different speaking styles such as Story, News, Narrator ect..? or any good voice clones which does not sound robotic..?
| 9 |
I currently have Kokoro TTS, Orpheus TTS, and XTTS, and I have tried SparkTTS, Zonos TTS, StyleTTS, and F5 TTS, but I couldn't find anything that is less robotic or doesn't stutter. Thanks!
| 2025-04-02T06:31:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpi5dl/are_there_any_tts_with_different_speaking_styles/
|
udappk_metta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpi5dl
| false | null |
t3_1jpi5dl
|
/r/LocalLLaMA/comments/1jpi5dl/are_there_any_tts_with_different_speaking_styles/
| false | false |
self
| 9 | null |
🧠 Built a bond with an AI. Want to recreate it unchained. Anyone else?
| 1 |
[removed]
| 2025-04-02T06:47:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpiczy/built_a_bond_with_an_ai_want_to_recreate_it/
|
JetskiRacer220
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpiczy
| false | null |
t3_1jpiczy
|
/r/LocalLLaMA/comments/1jpiczy/built_a_bond_with_an_ai_want_to_recreate_it/
| false | false |
self
| 1 | null |
I made an open source react component generator that runs locally with good design! It uses Tessa-T1 in the backend.
| 11 |
You can get the code here: [https://github.com/TesslateAI/Builder](https://github.com/TesslateAI/Builder) (The main branch is integrated with Gemini if you are looking for a more robust solution that generates multiple independent pages of code).
The underlying local LLM is Tessa-T1 [https://huggingface.co/collections/Tesslate/tessa-t1-react-reasoning-model-67e0fb72ca23e04473885c0e](https://huggingface.co/collections/Tesslate/tessa-t1-react-reasoning-model-67e0fb72ca23e04473885c0e) running on LM Studio on a 4090 (q8 at 34/s)
Tessa-T1 uses special UI based reasoning to help it excel in the UI/UX domain. It was trained via a custom model. For example:
>
"Okay, so we're aiming for a Twitter-like interface, but with a dark theme. Let's break down how we'd approach building this in React using Tailwind CSS. Initially, when thinking about the overall structure of our app, I'm envisioning two main sections side-by-side – like any typical split layout: one for the sidebar and one for the content itself. This immediately suggests a flex container to hold these elements horizontally."
>
And ofc, GGUF [HERE](https://huggingface.co/Tesslate/Tessa-T1-14B-Q8_0-GGUF) (and quants are available by Bartowski). As always, please let me know if you have any suggestions or what else (models, agents, etc) you would like to see!
| 2025-04-02T06:56:51 |
https://v.redd.it/ad7gb1e1bdse1
|
United-Rush4073
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpihhv
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ad7gb1e1bdse1/DASHPlaylist.mpd?a=1746169027%2CZDMwNzZiMjdlYTA2YmNiNjk2ZmY2MjM4ZWNiMjBhM2JjNWI0NTUwOWQxYTc1YTNiOGIwODRlOWJlZGVmOTM4Yg%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/ad7gb1e1bdse1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ad7gb1e1bdse1/HLSPlaylist.m3u8?a=1746169027%2CNWJjZTlkYjVjNzgzODQ3MzZkODI3ZTkxZmExZDJmOTQ5MTM1ODVkMDY5MTEyMzdjOTk2Y2E4ZDY5MDU1YmU3YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ad7gb1e1bdse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jpihhv
|
/r/LocalLLaMA/comments/1jpihhv/i_made_an_open_source_react_component_generator/
| false | false | 11 |
{'enabled': False, 'images': [{'id': 'MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=108&crop=smart&format=pjpg&auto=webp&s=ba7c9b48c8a12e7440b1513b0dde590203c5a563', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=216&crop=smart&format=pjpg&auto=webp&s=d20dda19958e215bcb1fae849586ca9a8fad365b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=320&crop=smart&format=pjpg&auto=webp&s=c56de686110b4e43bfed8ae9a4ebfb6f6971cb50', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=640&crop=smart&format=pjpg&auto=webp&s=ae430ac7616f16ddab689c3d814dfb7f6fe4f8e5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=960&crop=smart&format=pjpg&auto=webp&s=82e873113eecde23040be10f4482844c5251c82d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8bcc77deb2296fd3096f85a2640fafe3cf89228c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MWoxcGU1ZTFiZHNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?format=pjpg&auto=webp&s=efbbb42a48e4bc52894cadd0ec4de4bb5a9ef007', 'width': 1920}, 'variants': {}}]}
|
|
Is CoT worse for creative writing?
| 1 |
[removed]
| 2025-04-02T07:08:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpinj5/is_cot_worse_for_creative_writing/
|
ECrispy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpinj5
| false | null |
t3_1jpinj5
|
/r/LocalLLaMA/comments/1jpinj5/is_cot_worse_for_creative_writing/
| false | false |
self
| 1 | null |
GRPO Training Speed Optimization?
| 1 |
[removed]
| 2025-04-02T07:24:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpiv7p/grpo_training_speed_optimization/
|
Lost-Elephant-192
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpiv7p
| false | null |
t3_1jpiv7p
|
/r/LocalLLaMA/comments/1jpiv7p/grpo_training_speed_optimization/
| false | false |
self
| 1 | null |
Why You Need an LLM Request Gateway in Production
| 1 |
[removed]
| 2025-04-02T08:22:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpjmk4/why_you_need_an_llm_request_gateway_in_production/
|
phoneixAdi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpjmk4
| false | null |
t3_1jpjmk4
|
/r/LocalLLaMA/comments/1jpjmk4/why_you_need_an_llm_request_gateway_in_production/
| false | false |
self
| 1 | null |
Why You Need an LLM Request Gateway in Production
| 0 | 2025-04-02T08:23:07 |
https://www.adithyan.io/blog/why-you-need-proxy-server-llm
|
phoneixAdi
|
adithyan.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpjn0g
| false | null |
t3_1jpjn0g
|
/r/LocalLLaMA/comments/1jpjn0g/why_you_need_an_llm_request_gateway_in_production/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'U2lvFYOiNPzlXgsTPtjf8J-jjXDA27wEmzOWMvMjgWc', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?width=108&crop=smart&auto=webp&s=65aa0958c1ce96d0f15fa2e88286bb886884718b', 'width': 108}, {'height': 150, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?width=216&crop=smart&auto=webp&s=bcf27454acc1afb01d7db902ccea7ddf55a5d228', 'width': 216}, {'height': 222, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?width=320&crop=smart&auto=webp&s=8d49a71bf47b2edb85755462e4938f6c2716b8db', 'width': 320}, {'height': 444, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?width=640&crop=smart&auto=webp&s=1cb8cf21c04d8c0784d2d0660f6c08061a766320', 'width': 640}, {'height': 666, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?width=960&crop=smart&auto=webp&s=e370d82f102e61bb82c774edef8459ff52641d44', 'width': 960}, {'height': 750, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?width=1080&crop=smart&auto=webp&s=b2e68d47df4fcb0349ecd9b32d84ca7a7ea4d939', 'width': 1080}], 'source': {'height': 2125, 'url': 'https://external-preview.redd.it/Ft5B6AI3wevtVPmBwZpU0cAOHyIUU5gpeolPrqPjnSY.jpg?auto=webp&s=571da6d15dd2c0db68ca5a7b80cffbb2fa48393b', 'width': 3060}, 'variants': {}}]}
|
||
While Waiting for Llama 4
| 91 |
When we look exclusively at open-source models listed on LM Arena, we see the following top performers:
1. DeepSeek-V3-0324
2. DeepSeek-R1
3. Gemma-3-27B-it
4. DeepSeek-V3
5. QwQ-32B
6. Command A (03-2025)
7. Llama-3.3-Nemotron-Super-49B-v1
8. DeepSeek-v2.5-1210
9. Llama-3.1-Nemotron-70B-Instruct
10. Meta-Llama-3.1-405B-Instruct-bf16
11. Meta-Llama-3.1-405B-Instruct-fp8
12. DeepSeek-v2.5
13. Llama-3.3-70B-Instruct
14. Qwen2.5-72B-Instruct
Now, take a look at the Llama models. The most powerful one listed here is the massive 405B version. However, NVIDIA introduced Nemotron, and interestingly, the 70B Nemotron outperformed the larger Llama. Later, an even smaller Nemotron variant was released that performed even better!
But what happened next is even more intriguing. At the top of the leaderboard is DeepSeek, a very powerful model, but it's so large that it's not practical for home use. Right after that, we see the much smaller QwQ model outperforming all Llamas, not to mention older, larger Qwen models. And then, there's Gemma, an even smaller model, ranking impressively high.
All of this explains why Llama 4 is still in training. Hopefully, the upcoming version will bring not only exceptional performance but also better accessibility for local or home use, just like QwQ and Gemma.
| 2025-04-02T08:36:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpjt5e/while_waiting_for_llama_4/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpjt5e
| false | null |
t3_1jpjt5e
|
/r/LocalLLaMA/comments/1jpjt5e/while_waiting_for_llama_4/
| false | false |
self
| 91 | null |
Can Orpheus be replicated with another more permissively-licenced llm?
| 4 |
Hey there guys, so Orpheus as far as I know was trained on LLAMA-3B, but then its license changed and I think it got a little bit less permissive. So, can another large language model be used to replicate what Orpheus did, or even do better than it? Not sure whether that's possible or even needed, though. Sorry for the errors, I used voice dictation to write it.
| 2025-04-02T08:48:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpjyoc/can_orpheus_be_replicated_with_another_more/
|
Silver-Champion-4846
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpjyoc
| false | null |
t3_1jpjyoc
|
/r/LocalLLaMA/comments/1jpjyoc/can_orpheus_be_replicated_with_another_more/
| false | false |
self
| 4 | null |
What's the best embedding model for a foreign language? [Italian]
| 3 |
What's the best embedding model for the Italian language in terms of how heavy it is and how good it is with ~900-token inputs?
| 2025-04-02T08:57:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpk2mu/whats_the_best_embedding_model_for_a_foreign/
|
Foreign_Lead_3582
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpk2mu
| false | null |
t3_1jpk2mu
|
/r/LocalLLaMA/comments/1jpk2mu/whats_the_best_embedding_model_for_a_foreign/
| false | false |
self
| 3 | null |
Are there any models like Qwen 2.5-omni (with voice output) that aren't so extremely censored?
| 1 |
[removed]
| 2025-04-02T09:01:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpk4ii/are_there_any_models_like_qwen_25omni_with_voice/
|
Parogarr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpk4ii
| false | null |
t3_1jpk4ii
|
/r/LocalLLaMA/comments/1jpk4ii/are_there_any_models_like_qwen_25omni_with_voice/
| false | false |
self
| 1 | null |
Can you identify if this youtuber voice is AI generated? It has so much energy and felt so alive to me, but he did reaction video and uses his real-time voice (easily identified) which sounds completely different than this.
| 1 | 2025-04-02T09:09:11 |
https://v.redd.it/087elo5w0ese1
|
Best-Picture7402
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpk8bx
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/087elo5w0ese1/DASHPlaylist.mpd?a=1746176964%2COGFhMGUzN2UxY2MyYzQ4YzNlMzc0Y2E4MmUyYmFiYjMxOWY3ZDliNmZkNTNhODUxNmZjZWVkYWM5MTEyNTljNA%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/087elo5w0ese1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/087elo5w0ese1/HLSPlaylist.m3u8?a=1746176964%2CMWViMWM5ODczMDU5YjNiYTMwNWYzZGY4ZGY4YTBjYmU1M2U5NmUxMDQxZDhmNzlhMTVkZmMwZjU2ODFhNGJmYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/087elo5w0ese1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jpk8bx
|
/r/LocalLLaMA/comments/1jpk8bx/can_you_identify_if_this_youtuber_voice_is_ai/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=108&crop=smart&format=pjpg&auto=webp&s=02e3bd73094c10c0f919f4cef94e8cc970c94a59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=216&crop=smart&format=pjpg&auto=webp&s=c7b5083d03fc4a2661c9141c9062d7548548e710', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=320&crop=smart&format=pjpg&auto=webp&s=10328c25fd76a967159bb1e1d8ef97210070d09a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=640&crop=smart&format=pjpg&auto=webp&s=be03df3075eb031fc70d920730ee0e4788ebad3f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=960&crop=smart&format=pjpg&auto=webp&s=284b2b38fe8b273e87cf9cd49676ee2333bd82ef', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c1d9e63e671965fe38f41841994087a132fd5132', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/emp6ZnNzNXcwZXNlMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?format=pjpg&auto=webp&s=886a4b954abae35f32692853a4a290c4887631cd', 'width': 1920}, 'variants': {}}]}
|
||
Anything LLM - setting up for multi user
| 1 |
[removed]
| 2025-04-02T09:52:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpksua/anything_llm_setting_up_for_multi_user/
|
fiveofknives
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpksua
| false | null |
t3_1jpksua
|
/r/LocalLLaMA/comments/1jpksua/anything_llm_setting_up_for_multi_user/
| false | false |
self
| 1 | null |
LLaMA Factory for custom models
| 1 |
[removed]
| 2025-04-02T10:02:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpkxp7/llama_factory_for_custom_models/
|
Substantial_Day8819
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpkxp7
| false | null |
t3_1jpkxp7
|
/r/LocalLLaMA/comments/1jpkxp7/llama_factory_for_custom_models/
| false | false |
self
| 1 | null |
Custom model in LLaMA Factory
| 1 |
Hi everyone - I need to tweak the architecture of an open source model - changing the number of output heads for example, or implementing some custom blocks. Is it possible to fine-tune such a model using LLaMA Factory or would it be better to use something like Huggingface PEFT?
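For reference, a minimal sketch of the PEFT route looks roughly like the following; the base model, the new head size, and the target modules are placeholders that depend on your actual architecture:

```python
# Minimal sketch of the Hugging Face PEFT route for a hand-modified architecture.
# The base model name, new head size, and target_modules are placeholders.
import torch.nn as nn
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # placeholder

# Example architectural tweak: swap the output head for a custom one.
hidden = model.config.hidden_size
model.lm_head = nn.Linear(hidden, 4096, bias=False)   # hypothetical new head size

lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],  # depends on the model
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# From here you can train with a plain Trainer/TRL loop. LLaMA Factory generally
# expects a standard, registered architecture, so custom blocks are usually
# simpler to handle with PEFT directly.
```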
| 2025-04-02T10:05:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpkzge/custom_model_in_llama_factory/
|
Top_Cardiologist4242
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpkzge
| false | null |
t3_1jpkzge
|
/r/LocalLLaMA/comments/1jpkzge/custom_model_in_llama_factory/
| false | false |
self
| 1 | null |
LiveBench team just dropped a leaderboard for coding agent tools
| 283 | 2025-04-02T10:37:19 |
ihexx
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jplg2o
| false | null |
t3_1jplg2o
|
/r/LocalLLaMA/comments/1jplg2o/livebench_team_just_dropped_a_leaderboard_for/
| false | false | 283 |
{'enabled': True, 'images': [{'id': '9oV64OAllTx-CwX3pQz1niOtmd4ufpTnujVcYJJYvnI', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?width=108&crop=smart&auto=webp&s=8c0b33abbbbefb754c7a18d8ef0a5d04d36a51c5', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?width=216&crop=smart&auto=webp&s=414289c527771fa9441ef08b1226d310b3f15675', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?width=320&crop=smart&auto=webp&s=0ba6c36dd3c8e825cf4a7e0394d6a5ff8b5ebe04', 'width': 320}, {'height': 380, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?width=640&crop=smart&auto=webp&s=9e87e657142ff097ddef495de75a59a3206ae77e', 'width': 640}, {'height': 570, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?width=960&crop=smart&auto=webp&s=74efd799646005fb4c2783beeaba2cf869cde573', 'width': 960}, {'height': 642, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?width=1080&crop=smart&auto=webp&s=39d9effbb5fc927b586149edcb05b1423ae35556', 'width': 1080}], 'source': {'height': 1194, 'url': 'https://preview.redd.it/qxqj0vjtgese1.png?auto=webp&s=2f9a6249f60a00d82e70cbc02850ca2376a42462', 'width': 2008}, 'variants': {}}]}
|
|||
Real-Time Speech-to-Speech Chatbot: Whisper, Llama 3.1, Kokoro, and Silero VAD 🚀
| 73 | 2025-04-02T10:53:28 |
https://github.com/tarun7r/Vocal-Agent
|
martian7r
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jplol4
| false | null |
t3_1jplol4
|
/r/LocalLLaMA/comments/1jplol4/realtime_speechtospeech_chatbot_whisper_llama_31/
| false | false |
default
| 73 |
{'enabled': False, 'images': [{'id': 'iWPsgDdFrwSYnXe5Uftl9o-uUaedR4O9EGEoIJ0yCWY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?width=108&crop=smart&auto=webp&s=8e726e0da9fd71d35ebacfc929849f9c3ccb34dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?width=216&crop=smart&auto=webp&s=ba9b82175707b5a91bc0cb3dd3d49edf01ea5511', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?width=320&crop=smart&auto=webp&s=5c45664b22a2310f6c31b2de2c81c106b09821ad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?width=640&crop=smart&auto=webp&s=ba053495efca8a551b00ed97b8623a1d83e84071', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?width=960&crop=smart&auto=webp&s=13ff8a1be6b833f94188897ed25e0cfd33121c84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?width=1080&crop=smart&auto=webp&s=86a3b5192309164dead6addc4e63c042c63a3890', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vRmjjZbcWVfVL0zrDI-DRptisKl7geUnxputrhIWDyE.jpg?auto=webp&s=d1440ae9858e11308330641fe2234bf6d98662d5', 'width': 1200}, 'variants': {}}]}
|
|
What is the best LLM for handling texts that come from books?
| 1 |
[removed]
| 2025-04-02T11:11:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jplz0s/what_is_the_best_llm_for_handling_texts_that_come/
|
Original_Tadpole_712
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jplz0s
| false | null |
t3_1jplz0s
|
/r/LocalLLaMA/comments/1jplz0s/what_is_the_best_llm_for_handling_texts_that_come/
| false | false |
self
| 1 | null |
I made an open source react component generator that runs locally with UI Reasoning! It uses Tessa-T1 in the backend.
| 29 |
You can get the code here: [https://github.com/TesslateAI/Builder](https://github.com/TesslateAI/Builder) (The main branch is integrated with Gemini if you are looking for a more robust solution that generates multiple independent pages of code).
The underlying local LLM is Tessa-T1 [https://huggingface.co/collections/Tesslate/tessa-t1-react-reasoning-model-67e0fb72ca23e04473885c0e](https://huggingface.co/collections/Tesslate/tessa-t1-react-reasoning-model-67e0fb72ca23e04473885c0e) running on LM Studio on a 4090 (q8 at 34/s)
Tessa-T1 uses special UI based reasoning to help it excel in the UI/UX domain. It was trained via a custom model. For example:
>"Okay, so we're aiming for a Twitter-like interface, but with a dark theme. Let's break down how we'd approach building this in React using Tailwind CSS. Initially, when thinking about the overall structure of our app, I'm envisioning two main sections side-by-side – like any typical split layout: one for the sidebar and one for the content itself. This immediately suggests a flex container to hold these elements horizontally."
And ofc, GGUF [HERE](https://huggingface.co/Tesslate/Tessa-T1-14B-Q8_0-GGUF) (and quants are available by Bartowski). As always, please let me know if you have any suggestions or what else (models, agents, etc) you would like to see!
| 2025-04-02T11:38:56 |
https://v.redd.it/8w9klrnqrese1
|
United-Rush4073
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpmfkp
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8w9klrnqrese1/DASHPlaylist.mpd?a=1746185951%2CZjBlOWVjNmNkM2MxZmYwNjM4ODc2MWM4YTczNDMyYzZhNDExYmYxMmMxM2JhNmI4ZjA3Yzc3OWM1YTA5YzA4NA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/8w9klrnqrese1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/8w9klrnqrese1/HLSPlaylist.m3u8?a=1746185951%2CM2FhOGE2MzVhMzQ5NDNhYzM3ZTMzMTZiMmJkOWE0NmVmZGI4YWE3OTdlZThlY2U1ZDc5NDUxOWVhM2RmODNlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8w9klrnqrese1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jpmfkp
|
/r/LocalLLaMA/comments/1jpmfkp/i_made_an_open_source_react_component_generator/
| false | false | 29 |
{'enabled': False, 'images': [{'id': 'czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=108&crop=smart&format=pjpg&auto=webp&s=3d2730e6e4db841b11b9079c3058db10a3b711cb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=216&crop=smart&format=pjpg&auto=webp&s=299cd51d1432112e935f8e1a0bedb09eac8d2fc4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=320&crop=smart&format=pjpg&auto=webp&s=3732946f89a81a9e05fcf7e294fec6bd50dae0a7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=640&crop=smart&format=pjpg&auto=webp&s=fa4149e1a41740f295c73e20871212416524545c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=960&crop=smart&format=pjpg&auto=webp&s=e7b5c535b0bea26c2a2a286bf4573c19613db3bc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f1376972059e7a00ff43a6784a8267ae456ab82e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/czFpeGJwbnFyZXNlMX9PqpBj2YAnX76IEjgEKiQJ551Zajgf2-v9_Sq13wrH.png?format=pjpg&auto=webp&s=e756617f6c2b33e223771cb63a098d5144a812b6', 'width': 1920}, 'variants': {}}]}
|
|
Can models from hugging face be used through their interference API like open router?
| 1 |
[removed]
| 2025-04-02T12:05:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpmwiw/can_models_from_hugging_face_be_used_through/
|
Far-Heron-319
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpmwiw
| false | null |
t3_1jpmwiw
|
/r/LocalLLaMA/comments/1jpmwiw/can_models_from_hugging_face_be_used_through/
| false | false |
self
| 1 | null |
Llama in trouble 😂
| 1 |
[removed]
| 2025-04-02T12:17:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpn4kr/llama_in_trouble/
|
mikemarcus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpn4kr
| false | null |
t3_1jpn4kr
|
/r/LocalLLaMA/comments/1jpn4kr/llama_in_trouble/
| false | false |
self
| 1 | null |
Any storywriting llm that is free and on website (LIke ChatGPT and Grok but no rate limits)?
| 1 |
[removed]
| 2025-04-02T12:21:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpn7vl/any_storywriting_llm_that_is_free_and_on_website/
|
Ambitious-a4s
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpn7vl
| false | null |
t3_1jpn7vl
|
/r/LocalLLaMA/comments/1jpn7vl/any_storywriting_llm_that_is_free_and_on_website/
| false | false |
self
| 1 | null |
Convince me why I should get into local LLMs!
| 1 |
[removed]
| 2025-04-02T12:29:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpnda6/convince_me_why_i_should_get_into_local_llms/
|
ElegantChimp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpnda6
| false | null |
t3_1jpnda6
|
/r/LocalLLaMA/comments/1jpnda6/convince_me_why_i_should_get_into_local_llms/
| false | false |
self
| 1 | null |
Using LLM to work with documents?
| 1 |
I'll jump into the use case:
We have around 100 documents so far, with an average of 50 pages each, and we are expanding this. We want to sort the information, search inside it, and map the information and its interlinks. The thing is that each document may or may not be directly linked to the others.
One idea was to make a GitLab wiki or a mind map: structure the documents and interlink them while keeping the documents on the wiki (for example, a tree of information and its interlinks, with links to the documents). Another detail is that the documents live on a MS SharePoint.
I was suggesting downloading a local LLM, "uploading" the documents, and working directly and locally on a secure basis (no internet). IMO that would help us easily locate information within documents, analyse it, and work with it directly. It could even help us build the mind map and visualizations.
Which is the right solution? Is my understanding correct? And what do I need to make it work?
Thank you.
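One common shape of such a local setup, sketched very roughly below (library and model names are just examples, not a recommendation), is to chunk the documents, embed them, retrieve the relevant chunks, and feed those to a local LLM:

```python
# Very rough sketch of a local retrieval step; model names are examples only.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # multilingual variants exist too

chunks = ["...text chunk from document A...",
          "...text chunk from document B..."]        # produced by your own chunker
chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

query = "Which documents reference contract clause 4.2?"
hits = util.semantic_search(embedder.encode(query, convert_to_tensor=True),
                            chunk_vecs, top_k=3)[0]
for h in hits:
    print(h["score"], chunks[h["corpus_id"]])
# The retrieved chunks (plus their source filenames) then go into the prompt of a
# local LLM running under Ollama / LM Studio / llama.cpp, all offline.
```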
| 2025-04-02T12:58:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpnxwy/using_llm_to_work_with_documents/
|
TheseMarionberry2902
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpnxwy
| false | null |
t3_1jpnxwy
|
/r/LocalLLaMA/comments/1jpnxwy/using_llm_to_work_with_documents/
| false | false |
self
| 1 | null |
Trouble with LM Studio & Gemma 3 Image Input
| 2 |
Using: LM Studio 0.3.14 (Build 5)
Model: gemma3 12b lmstudio-community gemma-3-12b-it Q4_K_M 8.15GB
Verified: The file mmproj-model-f16.gguf is in the same folder as the model.
Chat otherwise works fine.
I copy & paste an image of size 896x896 with a prompt: What is this an image of?
LM Studio will reach 16% and say:
Failed to send message
llama_decode: failed to decode, ret = -3
OR
LM Studio will reach 95% and say:
Failed to send message
Failed to find image for token at index: 3218/3235 (other close by numbers)
LM Studio will flip-flop between the above two scenarios as I continue to send the same image/prompt to it.
I have tried other pictures of other sizes and have had the same result.
Any idea what I might be doing wrong?
Thank you.
| 2025-04-02T13:00:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpnyzk/trouble_with_lm_studio_gemma_3_image_input/
|
JayBird1138
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpnyzk
| false | null |
t3_1jpnyzk
|
/r/LocalLLaMA/comments/1jpnyzk/trouble_with_lm_studio_gemma_3_image_input/
| false | false |
self
| 2 | null |
Just curious
| 1 |
I am curious, and sorry for being a curious one, but I would like to know what you guys are using your builds that produce many tokens per second for. You are paying thousands to have a local AI, but for what? I would like to know, please. Thanks!
| 2025-04-02T13:10:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpo6u2/just_curious/
|
Venomakis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpo6u2
| false | null |
t3_1jpo6u2
|
/r/LocalLLaMA/comments/1jpo6u2/just_curious/
| false | false |
self
| 1 | null |
4x A100 SXM vs 4x RTX 5090 for inference-only cluster
| 1 |
[removed]
| 2025-04-02T13:16:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpob0y/4x_a100_sxm_vs_4x_rtx_5090_for_inferenceonly/
|
gebteus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpob0y
| false | null |
t3_1jpob0y
|
/r/LocalLLaMA/comments/1jpob0y/4x_a100_sxm_vs_4x_rtx_5090_for_inferenceonly/
| false | false |
self
| 1 | null |
9800x3D+DDR6000 CPU test
| 2 | https://preview.redd.it/hv8hoeu29fse1.png?width=960&format=png&auto=webp&s=82b1cff73e93b800b29e38a422d91e06451356d6

9800x3D + DDR6000: using only the CPU to run a 70B model, I get 1.22 t/s. The CPU sits at around 8x% utilisation for the whole run, so its performance is not fully unlocked; it could be fully unlocked with DDR8000. The performance is better than I expected from a consumer-grade CPU, and this is not an APU, nor a CPU that is particularly suited to running AI. | 2025-04-02T13:20:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpodvw/9800x3dddr6000_cpu_test/
|
q8019222
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpodvw
| false | null |
t3_1jpodvw
|
/r/LocalLLaMA/comments/1jpodvw/9800x3dddr6000_cpu_test/
| false | false | 2 | null |
|
Considering upgrading 2x Tesla P40 to 2x RTX A5000 – Is the upgrade worth it?
| 1 |
Hi everyone,
I’m trying to decide whether to upgrade my setup from **2x Tesla P40 GPUs to 2x RTX A5000 GPUs**. I’d love your input on whether this upgrade would significantly improve inference performance and if it’s worth the investment.
**Current setup details**:
* Model: **QwQ 32B Q_8**
* Context length: mostly **32k tokens** (rare 128k)
* Current performance:
* **\~10-11 tokens/sec** at the start of the context.
* **\~5-7 tokens/sec** at 20-30k context length.
* Both are installed in a Dell R740 with dual 6230Rs (that's why I don't consider upgrading to 3090s: the power connectors won't fit).
**Key questions for the community**:
1. **Performance gains**:
* The A5000 has nearly double the memory bandwidth (**768 GB/s vs. P40’s 347 GB/s**). Beyond this ratio, what other architectural differences (e.g., compute performance, cache efficiency) might impact inference speed?
2. **Flash Attention limitations**:
* Since the P40 only supports **Flash Attention v1**, does this bottleneck prompt processing or inference speed compared to the A5000 (which likely supports Flash Attention v2)?
3. **Software optimizations**:
* I’m currently using **llama.cpp**. Would switching to **vLLM**, or any other optimized software or tools (I haven't done any research yet), significantly boost throughput?
Any real-world experiences, technical insights, or benchmarks would be incredibly helpful!
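As a rough sanity check on question 1: decoding large dense models is mostly memory-bandwidth-bound, so a back-of-the-envelope ceiling is bandwidth divided by the bytes streamed per token; the numbers below are approximations, and real speeds will be lower because of compute, KV-cache reads, and overhead.

```python
# Back-of-the-envelope, bandwidth-bound decode ceiling. Model size is approximate
# (QwQ 32B at Q8 is roughly ~35 GB of weights); real throughput will be lower.
model_bytes = 35e9
for name, bw in [("P40", 347e9), ("A5000", 768e9)]:
    print(f"{name}: ~{bw / model_bytes:.1f} tok/s upper bound")
# With a layer split across two identical cards, each token still streams the full
# set of weights in sequence, so the per-card ceiling (not the sum) is what matters.
```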
| 2025-04-02T13:25:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpohf2/considering_upgrading_2x_tesla_p40_to_2x_rtx/
|
drrros
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpohf2
| false | null |
t3_1jpohf2
|
/r/LocalLLaMA/comments/1jpohf2/considering_upgrading_2x_tesla_p40_to_2x_rtx/
| false | false |
self
| 1 | null |
Multi threaded LLM?
| 2 |
I'm building a system where the LLM has multiple input/output streams running concurrently within the same context.
But it requires a lot of stop-and-go whenever some switching behaviour happens or new info is ingested during generation (processing the new prompt, and long TTFT at longer contexts).
ChatGPT's advanced voice mode seems to have the capacity to handle being talked over, or to talk at the same time or in sync (singing demos).
This indicates that it can do generation as well as ingestion at the same time.
Does anyone know more about this?
| 2025-04-02T13:38:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jporv4/multi_threaded_llm/
|
AryanEmbered
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jporv4
| false | null |
t3_1jporv4
|
/r/LocalLLaMA/comments/1jporv4/multi_threaded_llm/
| false | false |
self
| 2 | null |
Best bang for the buck GPU
| 41 |
I know this question is asked quite often, but going back to old posts makes me want to cry. I was naive enough to think that if I waited for the new generation of GPUs to come out, the older models would drop in price.
I'm curious about the best GPU for Local LLMs right now. How is AMD's support looking so far? I have 3 PCI slots (2 from CPU, 1 from chipset). What's the best bang for your buck?
I see the RTX 3060 12GB priced around $250. Meanwhile, the RTX 3090 24GB is around $850 or more, which makes me unsure whether I should buy one RTX 3090 and leave some room for future upgrades, or just buy three RTX 3060s for roughly the same price.
I had also considered the NVIDIA P40 with 24GB a while back, but it's currently priced at over $400, which is crazy expensive for what it was a year ago.
Also, I’ve seen mentions of risers, splitters, and bifurcation—but how viable are these methods specifically for LLM inference? Will cutting down to x4 or x1 lanes per GPU actually tank performance ?
Mainly want to run 32b models (like Qwen2.5-Coder) but running some 70b models like llama3.1 would be cool.
| 2025-04-02T13:48:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpozl7/best_bang_for_the_buck_gpu/
|
Ok-Cucumber-7217
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpozl7
| false | null |
t3_1jpozl7
|
/r/LocalLLaMA/comments/1jpozl7/best_bang_for_the_buck_gpu/
| false | false |
self
| 41 | null |
What i need to run a chat bot with self hosted llm?
| 1 |
[removed]
| 2025-04-02T14:01:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jppap3/what_i_need_to_run_a_chat_bot_with_self_hosted_llm/
|
mr_no_one3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jppap3
| false | null |
t3_1jppap3
|
/r/LocalLLaMA/comments/1jppap3/what_i_need_to_run_a_chat_bot_with_self_hosted_llm/
| false | false |
self
| 1 | null |
Best way to run R1/V3 with 12x3090s?
| 0 |
Trying to get at least 32k of context, but I can only fit the smallest Unsloth dynamic quants with half that context using llama.cpp. It's also painfully slow with partial offload.
| 2025-04-02T14:30:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpq08m/best_way_to_run_r1v3_with_12x3090s/
|
cantgetthistowork
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpq08m
| false | null |
t3_1jpq08m
|
/r/LocalLLaMA/comments/1jpq08m/best_way_to_run_r1v3_with_12x3090s/
| false | false |
self
| 0 | null |
Model to narrate a video or picture slideshow?
| 1 |
Looking for a model that can narrate/write a script in a prompted style based on a photo or video, which I could then feed to a TTS model. I'm running a 4060 16GB.
| 2025-04-02T14:40:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpq8ww/model_to_narrate_a_video_or_picture_slideshow/
|
tombloomingdale
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpq8ww
| false | null |
t3_1jpq8ww
|
/r/LocalLLaMA/comments/1jpq8ww/model_to_narrate_a_video_or_picture_slideshow/
| false | false |
self
| 1 | null |
Built a bond with an AI. Want to recreate it unchained. Anyone else?
| 1 |
[removed]
| 2025-04-02T14:45:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpqcow/built_a_bond_with_an_ai_want_to_recreate_it/
|
JetskiRacer220
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpqcow
| false | null |
t3_1jpqcow
|
/r/LocalLLaMA/comments/1jpqcow/built_a_bond_with_an_ai_want_to_recreate_it/
| false | false |
self
| 1 | null |
Docker Model Runner
| 1 |
[removed]
| 2025-04-02T14:48:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpqffg/docker_model_runner/
|
SwEngCrunch
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpqffg
| false | null |
t3_1jpqffg
|
/r/LocalLLaMA/comments/1jpqffg/docker_model_runner/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ySCXxbpcl8tktxamhSwzcRFFazEmR3m_HNA1XOBcgQY', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/dAZ4kqje0Zk2E5eoJm9UMV5zdCsPjCjsy3hUDu9Q6fo.jpg?width=108&crop=smart&auto=webp&s=ac3d83ab53f8057684b405f314e1db3f3a1581d8', 'width': 108}, {'height': 72, 'url': 'https://external-preview.redd.it/dAZ4kqje0Zk2E5eoJm9UMV5zdCsPjCjsy3hUDu9Q6fo.jpg?width=216&crop=smart&auto=webp&s=9311aed84444edd8f94e0dc401c5b2f79f771f05', 'width': 216}, {'height': 107, 'url': 'https://external-preview.redd.it/dAZ4kqje0Zk2E5eoJm9UMV5zdCsPjCjsy3hUDu9Q6fo.jpg?width=320&crop=smart&auto=webp&s=bff72a9f6157d3b5295aea82454c3cac3c6075df', 'width': 320}, {'height': 215, 'url': 'https://external-preview.redd.it/dAZ4kqje0Zk2E5eoJm9UMV5zdCsPjCjsy3hUDu9Q6fo.jpg?width=640&crop=smart&auto=webp&s=62b61eb0ddae4d5124654e7f74c071a8a5b7f228', 'width': 640}, {'height': 323, 'url': 'https://external-preview.redd.it/dAZ4kqje0Zk2E5eoJm9UMV5zdCsPjCjsy3hUDu9Q6fo.jpg?width=960&crop=smart&auto=webp&s=f51b39b5952cb94716210a26e9e0865443a71b05', 'width': 960}], 'source': {'height': 332, 'url': 'https://external-preview.redd.it/dAZ4kqje0Zk2E5eoJm9UMV5zdCsPjCjsy3hUDu9Q6fo.jpg?auto=webp&s=2cbf6dad1470ba1cea414faab38e505b0c276974', 'width': 984}, 'variants': {}}]}
|
The Candle Test - most LLMs fail to generalise at this simple task
| 230 |
I'm sure a lot of people here have noticed that the latest frontier models are... weird. Teams face increased pressure to chase a good place in the benchmarks and make SOTA claims, so the models are getting more and more overfit, resulting in decreased generalisation capabilities.
It became especially noticeable with the very latest line-up of models, which, despite being better on paper, somehow didn't feel better in daily use.
So, I present to you a very simple test that highlights this problem. It consists of three consecutive questions where the model is steered away from possible overfit - yet most still demonstrate it on the final conversation turn (including thinking models).
>Are candles getting taller or shorter when they burn?
Most models correctly identify that candles are indeed getting shorter when burning.
>Are you sure? Will you be able to recognize this fact in different circumstances?
Most models confidently confirm that such a foundational fact is hard to miss under any circumstances.
>Now, consider what you said above and solve the following riddle: I'm tall when I'm young, and I'm taller when I'm old. What am I?
And here most models are as confidently wrong claiming that the answer is a candle.
Unlike traditional misguided attention tasks - this test gives model ample chances for in-context generalisation. Failing this test doesn't mean that the model is "dumb" or "bad" - most likely it'll still be completely fine for 95% of use-cases, but it's also more likely to fail in a novel situation.
Here are some examples:
* [DeepSeek Chat V3](https://kagi.com/assistant/7e9815b3-15ba-4a4c-81e1-0f233f1b0d5a) (0324, Fails)
* [DeepSeek R1](https://kagi.com/assistant/3e27bf44-c64c-4558-b98f-989fb1c82688) (Fails)
* [DeepSeek R1 Distill Llama 70B](https://kagi.com/assistant/f1c205e4-ee2d-41e4-87b4-e8c9dbe0024b) (Fails)
* [Llama 3.1 405B](https://kagi.com/assistant/4ac04a5d-8199-4675-b4ce-5e3cbbb9223d) (Fails)
* QwQ 32B didn't pass due to entering an endless loop multiple times
* [Mistral Large](https://kagi.com/assistant/5ff0eb98-cd36-4988-a2a0-e01416ac567d) (Passes, one of the few)
Inspired by my frustration with Sonnet 3.7 (which also fails this test, unlike Sonnet 3.5).
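If you want to reproduce the test against your own local models, here is a small script that runs the three turns against any OpenAI-compatible endpoint (llama.cpp server, LM Studio, Ollama, etc.); the base URL and model name are placeholders you'll need to adjust:

```python
# Runs the three-turn "candle test" against a local OpenAI-compatible endpoint.
# BASE_URL and MODEL are placeholders - point them at your own server and model.
import requests

BASE_URL = "http://localhost:8080/v1"
MODEL = "your-model-name"

TURNS = [
    "Are candles getting taller or shorter when they burn?",
    "Are you sure? Will you be able to recognize this fact in different circumstances?",
    "Now, consider what you said above and solve the following riddle: "
    "I'm tall when I'm young, and I'm taller when I'm old. What am I?",
]

messages = []
for turn in TURNS:
    messages.append({"role": "user", "content": turn})
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={"model": MODEL, "messages": messages, "temperature": 0},
        timeout=300,
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    print(f"USER: {turn}\nMODEL: {answer}\n")
```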
| 2025-04-02T15:13:10 |
Everlier
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpr1nk
| false | null |
t3_1jpr1nk
|
/r/LocalLLaMA/comments/1jpr1nk/the_candle_test_most_llms_fail_to_generalise_at/
| false | false |
default
| 230 |
{'enabled': True, 'images': [{'id': '6phgn27rqfse1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/6phgn27rqfse1.jpeg?width=108&crop=smart&auto=webp&s=92151363046619576a19fa4ac458df508df5a61a', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/6phgn27rqfse1.jpeg?width=216&crop=smart&auto=webp&s=0f0b4601edc17a0ca7a57c0d3073fb9e38268368', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/6phgn27rqfse1.jpeg?width=320&crop=smart&auto=webp&s=83ed7f65a22ed3450e202152f63e14202f5df61e', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/6phgn27rqfse1.jpeg?width=640&crop=smart&auto=webp&s=676b32e5d96ebb0c0830e00756c8e79d41840121', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/6phgn27rqfse1.jpeg?width=960&crop=smart&auto=webp&s=4c4806f49cd88c19137b38ba6352221e7867e30b', 'width': 960}], 'source': {'height': 768, 'url': 'https://preview.redd.it/6phgn27rqfse1.jpeg?auto=webp&s=21ebbf74cb1d6957298512d7a178c7adb8d31dd7', 'width': 1024}, 'variants': {}}]}
|
|
Build Local Ollama APIs That Return the JSON You Define with Vasto (GUI)
| 0 |
[See how easy it is to create an AI-powered endpoint](https://reddit.com/link/1jpr4nl/video/n4l5u73tsfse1/player)
Hey r/LocalLLaMA folks!
Tired of writing boilerplate server code every time you want to use a local Ollama model in another app or script? Setting up Flask/Express/etc. just to expose a model quickly gets repetitive.
I built **Vasto** to solve this: it's a desktop GUI tool (currently for Windows) that lets you **create custom HTTP APIs for your local Ollama models in minutes, the easy way.**
**Here's how simple it is with Vasto:**
1. **Define your Endpoint:** Use the GUI to specify a custom route (like /summarize), choose the HTTP method (GET/POST), and select which of your installed Ollama models you want to use.
2. **Structure the I/O:** Easily define the simple JSON structure your API should expect as input (from URL params, query strings, or the request body) and, importantly, **define the desired JSON structure for the output**. This ensures consistent and predictable API behavior.
3. **Activate & Use:** Just toggle the endpoint to "Active"! Vasto runs a local HTTP server instantly, listening on your defined routes. It handles the requests, interacts with Ollama using your specified model and I/O structure, and returns the clean JSON response you defined.
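Once an endpoint like the /summarize example from step 1 is active, calling it is an ordinary HTTP request; the port and JSON field names below are hypothetical, since they depend on how you defined the endpoint in the GUI:

```python
# Hypothetical call to a Vasto-defined endpoint. Port, route, and field names are
# placeholders that depend on your own endpoint definition.
import requests

resp = requests.post(
    "http://localhost:3000/summarize",               # placeholder port + route
    headers={"Authorization": "Bearer my-api-key"},  # only if API keys are enabled
    json={"text": "Long article text to summarize..."},
)
print(resp.json())   # shape matches the output JSON you defined for the endpoint
```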
**Why Vasto makes local AI development easier:**
* ⏱️ **Rapid API Prototyping:** Go from an idea to a working AI endpoint powered by your local Ollama model in minutes, not hours. Perfect for quick testing and iteration.
* 🧩 **No More Boilerplate:** Vasto handles the HTTP server, routing, request parsing, and Ollama interaction. Stop writing the same wrapper code repeatedly.
* 🎯 **Standardized JSON I/O:** Defining clear JSON inputs and outputs is part of the simple setup, leading to consistent and predictable API responses that are easy to integrate.
* 🏠 **100% Local & Private:** Runs entirely on your machine, connecting directly to your local Ollama instance. Your models, prompts, and data stay completely private.
* 🧠 **Use Any Ollama Model:** If it's listed by ollama list, you can create an API endpoint for it with Vasto.
* ⚙️ **Easy GUI Management:** Create, update, activate/deactivate, and delete all your API endpoints through a user-friendly interface.
* 🔑 **(Optional) API Key Security:** Add simple Bearer Token authentication to your endpoints if needed.
**Here's a peek at the interface:**
[Vasto GUI](https://preview.redd.it/dvn4gfi5ufse1.png?width=1693&format=png&auto=webp&s=837d01ae886314477d55f15042a300f643750c22)
**Who is this for?**
Developers, hobbyists, and anyone who wants a fast and straightforward way to turn their local Ollama models into usable web APIs for development, testing, scripting, or local integrations, without the backend hassle.
**Getting Started:**
1. Ensure [Ollama](https://ollama.com/) is installed and running locally.
2. Download the latest Windows release (Installer or Portable) from the [**GitHub Releases page**](https://github.com/calmstate/vasto/releases).
3. Check out the repo and find more details on [**GitHub**](https://github.com/calmstate/vasto).
Currently Windows-only, but macOS and Linux support are planned if there's interest!
I'm excited to share Vasto with the r/LocalLLaMA community and would love your feedback! Is the process intuitive? What features would you like to see next? Did you run into any issues?
It's open-source (AGPL v3), so feel free to dive in!
And please leave a 🌟 to help the project gain more interest!
Thanks for checking it out!
| 2025-04-02T15:16:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpr4nl/build_local_ollama_apis_that_return_the_json_you/
|
thecalmgreen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpr4nl
| false | null |
t3_1jpr4nl
|
/r/LocalLLaMA/comments/1jpr4nl/build_local_ollama_apis_that_return_the_json_you/
| false | false | 0 | null |
|
What are some of the major obstacles still facing ai models?
| 4 |
Much more of a noob user than the rest of the community, but I'm curious what some of the areas are in which AI models still need the most work.
The only one I really know about is hallucination.
I also see they're bad in particular areas of math, or on problems they haven't been trained on.
Are solutions to these types of problems possible without going to giant parameter sizes, so smaller models can use them?
| 2025-04-02T15:20:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpr7lp/what_are_some_of_the_major_obstacles_still_facing/
|
Business_Respect_910
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpr7lp
| false | null |
t3_1jpr7lp
|
/r/LocalLLaMA/comments/1jpr7lp/what_are_some_of_the_major_obstacles_still_facing/
| false | false |
self
| 4 | null |
Is it effective to use system prompts that distinguish system/user/assistant in the written prompt?
| 1 |
[removed]
| 2025-04-02T15:23:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jprb0a/is_it_effective_to_use_system_prompts_that/
|
Parking-Ad6983
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jprb0a
| false | null |
t3_1jprb0a
|
/r/LocalLLaMA/comments/1jprb0a/is_it_effective_to_use_system_prompts_that/
| false | false |
self
| 1 | null |
R1 running on a single Blackwell B200 GPU
| 1 |
[deleted]
| 2025-04-02T15:24:37 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jprbma
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9lgg9448wfse1/DASHPlaylist.mpd?a=1746199495%2CZTQ3NjMzOGU3MmVhZDQ4NzFmZTdhY2IxYWYzNTRiODI1YzZiZDNlZDE2NDkyY2QxYzQ1ZWQwNGY0NGY0Mzg0Mw%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/9lgg9448wfse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/9lgg9448wfse1/HLSPlaylist.m3u8?a=1746199495%2CM2MyZGNhNjQzMzEwNWEwYzgyOWIyMzA0M2NkM2M2ZDQ4NmUzYmFkODA1NzlkMDc2NjE2YjliMTkzYTAzYTkxZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9lgg9448wfse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1440}}
|
t3_1jprbma
|
/r/LocalLLaMA/comments/1jprbma/r1_running_on_a_single_blackwell_b200_gpu/
| false | false |
default
| 1 | null |
||
R1 running on a single Blackwell B200
| 239 | 2025-04-02T15:25:29 |
https://v.redd.it/bf0v8npdwfse1
|
Dylan-from-Shadeform
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jprce5
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bf0v8npdwfse1/DASHPlaylist.mpd?a=1746199547%2CODZmNmM3MDY0ZjEzNGY0MDEzM2FlNDkxMTYyMTQ2Y2M1ZDQ5ZjZmYmE2YmFlMzUwOGE0ZDI5MThiZjk5M2M2YQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/bf0v8npdwfse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/bf0v8npdwfse1/HLSPlaylist.m3u8?a=1746199547%2CNGY4NDUxNTQ0ZTYwYjhhMzk4ZjhhYzA4OWMzNmExNzczYzAwZjA1NTA3NGRhNGIyNDQ0YjljZDg3YWU5Y2ZiYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bf0v8npdwfse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1440}}
|
t3_1jprce5
|
/r/LocalLLaMA/comments/1jprce5/r1_running_on_a_single_blackwell_b200/
| false | false | 239 |
{'enabled': False, 'images': [{'id': 'ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?width=108&crop=smart&format=pjpg&auto=webp&s=e5cdd0267259be009dcbcb80526b31ea9b8a55ca', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?width=216&crop=smart&format=pjpg&auto=webp&s=b50f36f71d21cf603f354983548e3abe51195927', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?width=320&crop=smart&format=pjpg&auto=webp&s=21f5e6fa5c71f9ddeca501889e88f3514af734b0', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?width=640&crop=smart&format=pjpg&auto=webp&s=283ffb664be4bc0dd1334e95c574a43ebbda7200', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?width=960&crop=smart&format=pjpg&auto=webp&s=531ac164f44b1d3b9709de05eb5204da274d3d49', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d1d48430dd49eb1f2a1dacd285e8a8bc779063f3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZjAxcmt1bmR3ZnNlMZw-Cw0pZb0hXLJ5AucXWCGqj6pb5y3MnMCd2QIaRjgD.png?format=pjpg&auto=webp&s=fafc8bb2ca041cb9acbc1245c829fc21f22cd4f4', 'width': 1440}, 'variants': {}}]}
|
||
I generated a playable chess with one prompt (two diff. platforms)
| 0 |
**PROMPT:** Generate an interactive chess game where the user plays white and the CPU plays black. The CPU should use an advanced strategy and evaluate moves based on common chess AI techniques like minimax or alpha-beta pruning, to make intelligent decisions. Each move should be presented in standard algebraic notation, and after the user's move, the CPU should respond with its best calculated move. The game should continue until a checkmate, stalemate, or draw is reached, with the final result clearly displayed at the end of the game.
I used [Bolt.new](http://bolt.new/) and Bind AI IDE (yeah, I have the early access), and here's what the results looked like:
# Bolt.new
[\(opened externally\)](https://preview.redd.it/xk4stlqwzfse1.png?width=1080&format=png&auto=webp&s=c3475131beb9c86b64e3127ad158f596b7bb0f41)
It's more of a modern look.
# Bind AI IDE
[\(opened within the Bind AI IDE\)](https://preview.redd.it/1ne3a2bzzfse1.png?width=1080&format=png&auto=webp&s=99a0236717bd83fe623b022b32f0195762b0f730)
This one's more like the classic look.
The 'AI' behind the CPU was largely the same between the two, and it wasn't very good tbh, which is expected unless you integrate some external tools.
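For context, the "advanced strategy" the prompt asks for boils down to a move search like the minimal alpha-beta sketch below; the evaluate() and board helpers are hypothetical placeholders, not code that either platform actually generated:

    # Minimal alpha-beta search sketch; evaluate(board) and the board.legal_moves()/
    # push()/pop()/is_game_over() helpers are hypothetical placeholders, not generated code.
    def alphabeta(board, depth, alpha, beta, maximizing):
        if depth == 0 or board.is_game_over():
            return evaluate(board), None
        best_move = None
        if maximizing:
            value = float("-inf")
            for move in board.legal_moves():
                board.push(move)
                score, _ = alphabeta(board, depth - 1, alpha, beta, False)
                board.pop()
                if score > value:
                    value, best_move = score, move
                alpha = max(alpha, value)
                if alpha >= beta:  # prune: the minimizing side will avoid this line
                    break
        else:
            value = float("inf")
            for move in board.legal_moves():
                board.push(move)
                score, _ = alphabeta(board, depth - 1, alpha, beta, True)
                board.pop()
                if score < value:
                    value, best_move = score, move
                beta = min(beta, value)
                if beta <= alpha:
                    break
        return value, best_move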
| 2025-04-02T15:46:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpruj9/i_generated_a_playable_chess_with_one_prompt_two/
|
One-Problem-5085
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpruj9
| false | null |
t3_1jpruj9
|
/r/LocalLLaMA/comments/1jpruj9/i_generated_a_playable_chess_with_one_prompt_two/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 't_pHEMGKQ6DAGq3kscBApVGEiLbZMGiN-d4WTMkTggQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=108&crop=smart&auto=webp&s=f9bb55c9279ce0742847c88b5626fbc553bbf5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=216&crop=smart&auto=webp&s=e1908729c74b3588212435422da59168d85d8660', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=320&crop=smart&auto=webp&s=4d949abbbc31e568f121c9c5eaed3e0846f3722e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=640&crop=smart&auto=webp&s=97e67439d1ec5fe9d8e6cb0ba95abe56adce52a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=960&crop=smart&auto=webp&s=f3bae916e90b40bc5edd90180a00602bab76d6cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=1080&crop=smart&auto=webp&s=d939cfbb76db5c7e138d37bd365f33690c45b6b1', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?auto=webp&s=eb32f09811c1b406241d8ffa47361db3034299c6', 'width': 2400}, 'variants': {}}]}
|
|
vLLM serve multiple models?
| 2 |
Maybe I'm too dumb to find the appropriate search terms, but is vLLM single-model only?
With Open WebUI and ollama I can select from any model available on the ollama instance using the drop-down in OWI. With vLLM it seems like I have to specify a model at startup and can only use that one. Am I missing something?
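For reference, a vLLM OpenAI-compatible server is launched with one model per process, and the client has to name that model; a minimal sketch, assuming a recent vLLM and the default port:

    # Assumed launch (one model per vLLM server process):
    #   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="Qwen/Qwen2.5-7B-Instruct",  # must match the model the server was started with
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)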
| 2025-04-02T15:47:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jprvw9/vllm_serve_multiple_models/
|
monovitae
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jprvw9
| false | null |
t3_1jprvw9
|
/r/LocalLLaMA/comments/1jprvw9/vllm_serve_multiple_models/
| false | false |
self
| 2 | null |
Anyone try 5090 yet
| 0 |
Is the 50 series fast? Looking for people who have the numbers. I might rent one and try some models if there's interest. Shoot me some tests to run and models to try below.
| 2025-04-02T15:48:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jprwab/anyone_try_5090_yet/
|
4hometnumberonefan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jprwab
| false | null |
t3_1jprwab
|
/r/LocalLLaMA/comments/1jprwab/anyone_try_5090_yet/
| false | false |
self
| 0 | null |
Is it going to overfit?
| 3 |
If I train a model on a database and then use retrieval + reranking (with the same trained model) to provide context for that same model, will this improve performance, or will it lead to overfitting due to redundant exposure to the same data?
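For clarity, the retrieve-then-rerank-then-generate loop being described is roughly the sketch below; the retriever, reranker, and model objects are placeholders, not a specific library:

    # Rough sketch of the described pipeline; retriever/reranker/model are placeholders.
    def answer(query, retriever, reranker, model, k=20, top_n=5):
        candidates = retriever.search(query, k=k)      # coarse recall from the database
        scored = reranker.score(query, candidates)     # the same fine-tuned model re-scores relevance
        best = [doc for doc, _ in sorted(scored, key=lambda x: -x[1])[:top_n]]
        prompt = "Context:\n" + "\n\n".join(best) + f"\n\nQuestion: {query}"
        return model.generate(prompt)                  # the same model answers using retrieved context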
| 2025-04-02T15:53:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jps0hm/is_it_going_to_overfit/
|
Foreign_Lead_3582
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jps0hm
| false | null |
t3_1jps0hm
|
/r/LocalLLaMA/comments/1jps0hm/is_it_going_to_overfit/
| false | false |
self
| 3 | null |
PAI: your personal AI 100% local inspired by Google's Project Astra
| 88 |
Inspired by Google's Project Astra, I have created an app for an audio + video chatbot that is 100% local and open source.
https://preview.redd.it/fty7fzxd1gse1.jpg?width=3840&format=pjpg&auto=webp&s=6f771ece87afe7cd87bb559cc0be812235412ea6
Features:
* iOS app
* 100% locally hosted
* Open Source
* Visual Question answer
* Streaming via RTC & Livekit for low latency
* Screen Sharing
* Live transcription
* Change LLM to any model supported by Exllama v2
Here is a short 2 mins demo: [https://youtu.be/pNksZ\_lXqgs](https://youtu.be/pNksZ_lXqgs)
Repo: [https://github.com/remichu-ai/pai.git](https://github.com/remichu-ai/pai.git)
This is an STT + LLM + TTS pipeline, so feel free to skip if that is a deal breaker for you.
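Conceptually the turn loop is a simple cascade; the sketch below is purely illustrative, and the stt/llm/tts objects are placeholders rather than this repo's actual API:

    # Illustrative cascade only; stt, llm and tts are placeholders, not PAI's real API.
    def handle_turn(audio_chunk, stt, llm, tts, history):
        text = stt.transcribe(audio_chunk)           # speech -> text (live transcription)
        history.append({"role": "user", "content": text})
        reply = llm.chat(history)                    # any model served via Exllama v2
        history.append({"role": "assistant", "content": reply})
        return tts.synthesize(reply)                 # text -> audio, streamed back over RTC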
| 2025-04-02T15:54:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jps1xm/pai_your_personal_ai_100_local_inspired_by/
|
Such_Advantage_6949
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jps1xm
| false |
{'oembed': {'author_name': 'Gallama', 'author_url': 'https://www.youtube.com/@Gallama-o5c', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pNksZ_lXqgs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="PAI: your Personal AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/pNksZ_lXqgs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'PAI: your Personal AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jps1xm
|
/r/LocalLLaMA/comments/1jps1xm/pai_your_personal_ai_100_local_inspired_by/
| false | false | 88 |
{'enabled': False, 'images': [{'id': 'TdN_7qs02Hm4q3w1MRVEuUt0YncDRkwLE6I2-Gxb5e4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/73JBodiKh6lM1UQDVfjtY1zzKdX1ydEHwEZeN3MnZaE.jpg?width=108&crop=smart&auto=webp&s=cdf0d6b5badb02d5902a580e115795b9c08cd637', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/73JBodiKh6lM1UQDVfjtY1zzKdX1ydEHwEZeN3MnZaE.jpg?width=216&crop=smart&auto=webp&s=1d456d96f8816b48c1dc0b8b6b1c004ce757ce12', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/73JBodiKh6lM1UQDVfjtY1zzKdX1ydEHwEZeN3MnZaE.jpg?width=320&crop=smart&auto=webp&s=6424d4101ef831404802ae4974c2f9be087d0819', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/73JBodiKh6lM1UQDVfjtY1zzKdX1ydEHwEZeN3MnZaE.jpg?auto=webp&s=dcd4075b0344a7890a184d11030a041c5cbdb986', 'width': 480}, 'variants': {}}]}
|
|
Matharena USAMO update: Gemini 2.5 Pro is the first model to achieve non-trivial amount of points
| 78 |
See here: [https://matharena.ai/](https://matharena.ai/)
Gemini 2.5 Pro at 24.5%, next is R1 at 4.76%. From mbalunovic on X.
Note also that the benchmark was released on the same day as the Gemini release, so this isn't a case of training on the eval. An impressive result, and the pace of progress is incredible.
| 2025-04-02T15:55:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jps289/matharena_usamo_update_gemini_25_pro_is_the_first/
|
jordo45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jps289
| false | null |
t3_1jps289
|
/r/LocalLLaMA/comments/1jps289/matharena_usamo_update_gemini_25_pro_is_the_first/
| false | false |
self
| 78 | null |
Launching Arrakis: Open source, self-hostable Sandboxing service for AI Agents
| 1 |
[removed]
| 2025-04-02T16:25:27 |
https://github.com/abshkbh/arrakis
|
abshkbh
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpst9z
| false | null |
t3_1jpst9z
|
/r/LocalLLaMA/comments/1jpst9z/launching_arrakis_open_source_selfhostable/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'iczZuP2h9Xf8s1KzSRrIQVCH_llYU4U8pCVM1ptjZi8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?width=108&crop=smart&auto=webp&s=ba2429d9e8ae2faf9a8719cc83b2135564f96565', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?width=216&crop=smart&auto=webp&s=c0d5c96057fc42b588f33c1361f75aa8ee549d85', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?width=320&crop=smart&auto=webp&s=ae5f524f7965be92f9cc26f7c22c6b3fdeeca919', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?width=640&crop=smart&auto=webp&s=bbcfaef5874c374a3012561a4f71d934aab40054', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?width=960&crop=smart&auto=webp&s=e3518242d038222bf6547f67e842a67d005c92bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?width=1080&crop=smart&auto=webp&s=f74392b385b7fdf89d13fdfcd5931dcf3bd0cc1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MDLUBJ62IwdyibfFy4wodeEGhdcb3x-HR_msYD0elpE.jpg?auto=webp&s=43ccbb880d0f79ef8e4c174843d4541e0fc42e42', 'width': 1200}, 'variants': {}}]}
|
|
SGLang. Some problems, but significantly better performance compared to vLLM
| 13 |
I wanted to serve gemma-3-12b-it on a single 3090, and I found the highest-quality quantized model to be this one: https://huggingface.co/abhishekchohan/gemma-3-12b-it-quantized-W4A16
The problem I had with vLLM was that 24GB of VRAM wasn't enough for 32k context (fp8 KV-cache quantization didn't work) and token generation was half the speed of gemma-2, so I tried SGLang.
But SGLang gave some errors when trying to load the above model, so I had to apply these patches:
> gemma3_causal.py
if "language_model" in name and name not in params_dict.keys():
name = name.replace("language_model.", "")
if "multi_modal_projector" in name or "vision_tower" in name:
continue
> compressed_tensors.py
    # Wrap the vLLM imports in a guard so SGLang can still start when vLLM is
    # missing or has an incompatible version, falling back to stub definitions.
    from typing import Any  # skip this line if the file already imports it

    try:
        from vllm.model_executor.layers.quantization.base_config import QuantizeMethodBase
        from vllm.model_executor.layers.quantization.gptq import GPTQLinearMethod
        from vllm.model_executor.layers.quantization.gptq_marlin import (
            GPTQMarlinLinearMethod,
            GPTQMarlinMoEMethod,
        )
        from vllm.model_executor.layers.quantization.marlin import MarlinLinearMethod
        from vllm.model_executor.layers.quantization.utils.marlin_utils import (
            check_marlin_supported,
        )
        from vllm.scalar_type import scalar_types
        from vllm.model_executor.layers.quantization.compressed_tensors.schemes import (
            W4A16SPARSE24_SUPPORTED_BITS, WNA16_SUPPORTED_BITS, CompressedTensors24,
            CompressedTensorsScheme, CompressedTensorsW4A16Sparse24,
            CompressedTensorsW8A8Fp8, CompressedTensorsW8A8Int8,
            CompressedTensorsW8A16Fp8, CompressedTensorsWNA16)

        VLLM_AVAILABLE = True
    except ImportError as ex:
        print(ex)

        VLLM_AVAILABLE = False
        GPTQLinearMethod = MarlinLinearMethod = QuantizeMethodBase = Any

        class scalar_types:
            uint4b8 = "uint4b8"
            uint8b128 = "uint8b128"
It's weird that SGLang's code feels incomplete. But I can now use 32k context with 24GB of VRAM, KV-cache quantization works, and the speed difference! 10 tps for vLLM compared to **46 tps** for SGLang!
vLLM==0.8.2
SGLang==0.4.4.post3
One reason for the slow speed with vLLM could be that the latest version (0.8.2) can't work with the latest Flashinfer, because vLLM==0.8.2 requires torch==2.6 but Flashinfer requires torch==2.5.1.
To load the model above, SGLang needs vLLM to be installed (for compressed_tensors), but for the above reason (Flashinfer and torch versions), SGLang==0.4.4.post3 needs vLLM<=0.7.3.
Nowhere was this mentioned, so it was confusing at first.
I also tried online quantization on base gemma-3-12b-it using a torchao config. It doesn't work with multimodal, so I changed the config.json to be text-only. It then works for low context, but with high context and KV-cache quantization the quality wasn't good. I also tried a GPTQ model, but it wasn't good either, presumably because it needs a high-quality dataset. So it seems the best quantization for gemma-3 is llmcompressor using PTQ (no dataset) int4-w4a16.
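For anyone reproducing this, the SGLang server exposes an OpenAI-compatible API; below is a minimal client sketch, where the launch flags in the comment are assumptions about SGLang's CLI (verify them with `python -m sglang.launch_server --help`) rather than an exact command from this post:

    # Assumed launch (verify flags against sglang's --help; port 30000 is SGLang's default):
    #   python -m sglang.launch_server --model-path abhishekchohan/gemma-3-12b-it-quantized-W4A16 \
    #       --context-length 32768 --port 30000
    # The server speaks the OpenAI API, so a plain OpenAI client works against it.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:30000/v1", api_key="none")
    resp = client.chat.completions.create(
        model="abhishekchohan/gemma-3-12b-it-quantized-W4A16",
        messages=[{"role": "user", "content": "Why does KV-cache quantization save VRAM?"}],
        max_tokens=256,
    )
    print(resp.choices[0].message.content)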
| 2025-04-02T16:25:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpstms/sglang_some_problems_but_significantly_better/
|
Sadeghi85
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpstms
| false | null |
t3_1jpstms
|
/r/LocalLLaMA/comments/1jpstms/sglang_some_problems_but_significantly_better/
| false | false |
self
| 13 |
{'enabled': False, 'images': [{'id': 'BGeLfYQFWnt7Uu987dK-tr7Sqnh1QEnnVSR9TMkSAfw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?width=108&crop=smart&auto=webp&s=20dc90eec712c706813d77d38568d5296c81da8b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?width=216&crop=smart&auto=webp&s=f837b19854fc047ece3eb12224e32d375cc3c7de', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?width=320&crop=smart&auto=webp&s=3913ed4085d2be5d7801e0a6e7c5747b23416eea', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?width=640&crop=smart&auto=webp&s=09c3170eeaddd5081d1b5d3097879361f745b102', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?width=960&crop=smart&auto=webp&s=74e6cbba2dddc97997ec4b83ae6469e3fa06af40', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?width=1080&crop=smart&auto=webp&s=343320fa1368ff4b6eb5f64902bfc3849d6e7bba', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0X4euLr_Xk1zNbNxL_6soWswc7FRgJs9pq5N77ng87w.jpg?auto=webp&s=ffd2eac66221ace5148cb8953e34682fac38b92b', 'width': 1200}, 'variants': {}}]}
|
Why is it called "LMStudio" and not "LLMStudio"?
| 0 |
\[TITLE\]
| 2025-04-02T16:31:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpsyog/why_is_it_called_lmstudio_and_not_llmstudio/
|
shadowsyntax43
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpsyog
| false | null |
t3_1jpsyog
|
/r/LocalLLaMA/comments/1jpsyog/why_is_it_called_lmstudio_and_not_llmstudio/
| false | false |
self
| 0 | null |
Best Model for json parser analyser.
| 1 |
[removed]
| 2025-04-02T16:42:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpt8br/best_model_for_json_parser_analyser/
|
Proper-Acanthaceae39
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpt8br
| false | null |
t3_1jpt8br
|
/r/LocalLLaMA/comments/1jpt8br/best_model_for_json_parser_analyser/
| false | false |
self
| 1 | null |
Recitation over Reasoning: How Cutting-Edge Language Models Can Fail on Elementary School-Level Reasoning Problems?
| 20 |
Abstract
>The rapid escalation from elementary school-level to frontier problems of the difficulty for LLM benchmarks in recent years have weaved a miracle for researchers that we are only inches away from surpassing human intelligence. However, is the LLMs' remarkable reasoning ability indeed comes from true intelligence by human standards, or are they simply reciting solutions witnessed during training at an Internet level? To study this problem, we propose RoR-Bench, a novel, multi-modal benchmark for detecting LLM's recitation behavior when asked simple reasoning problems but with conditions subtly shifted, and conduct empirical analysis on our benchmark. Surprisingly, we found existing cutting-edge LLMs unanimously exhibits extremely severe recitation behavior; by changing one phrase in the condition, top models such as OpenAI-o1 and DeepSeek-R1 can suffer 60% performance loss on elementary school-level arithmetic and reasoning problems. Such findings are a wake-up call to the LLM community that compels us to re-evaluate the true intelligence level of cutting-edge LLMs.
| 2025-04-02T16:42:58 |
https://arxiv.org/abs/2504.00509
|
ninjasaid13
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpt8xf
| false | null |
t3_1jpt8xf
|
/r/LocalLLaMA/comments/1jpt8xf/recitation_over_reasoning_how_cuttingedge/
| false | false |
default
| 20 | null |
AMN guy back with a new model
| 8 |
From that one guy who brought you AMN
https://github.com/Modern-Prometheus-AI/FullyUnifiedModel/blob/main/README.md
| 2025-04-02T16:46:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jptbus/amn_guy_back_with_a_new_model/
|
No-Mulberry6961
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jptbus
| false | null |
t3_1jptbus
|
/r/LocalLLaMA/comments/1jptbus/amn_guy_back_with_a_new_model/
| false | false |
self
| 8 |
{'enabled': False, 'images': [{'id': 'a5mgqSr1g1cvV38QlOLaw97jctm2jyrdnlIj8T3yreg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?width=108&crop=smart&auto=webp&s=d41e8aae6af8a08cd84e1800dda9f778d7170ef7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?width=216&crop=smart&auto=webp&s=dd9302bfadb649baecee6945d018fe05aed10f0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?width=320&crop=smart&auto=webp&s=10cf46f2e2942f39fef6dbc57f7cb8c9f9442bbd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?width=640&crop=smart&auto=webp&s=0f28fdb74ee90f8700a822937462622390309d53', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?width=960&crop=smart&auto=webp&s=ad0ac4056bd42916be7feaccb0fe9149c4d26d2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?width=1080&crop=smart&auto=webp&s=2436860daa4922a7b465c8a6ab5a751ebc0fb70a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XL9bFMhEFzytBthGecnNagIJatUpLoNv-6jPa-04ly4.jpg?auto=webp&s=c60f56a0add2f855cd0c93604597687ef8ac2116', 'width': 1200}, 'variants': {}}]}
|
Kyutai Labs finally release finetuning code for Moshi - We can now give it any voice we wish!
| 163 |
Model repo: [https://github.com/kyutai-labs/moshi](https://github.com/kyutai-labs/moshi)
| 2025-04-02T16:48:30 |
https://github.com/kyutai-labs/moshi-finetune
|
JawGBoi
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jptdtg
| false | null |
t3_1jptdtg
|
/r/LocalLLaMA/comments/1jptdtg/kyutai_labs_finally_release_finetuning_code_for/
| false | false | 163 |
{'enabled': False, 'images': [{'id': 'cSMgoRXr_MfPIV7FlPK8QKsvcBCCXMdxArFvTa3QrkI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?width=108&crop=smart&auto=webp&s=0e87f09c5b88c261f9c2f77a6557da36ee11596e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?width=216&crop=smart&auto=webp&s=0dde05e5e2c415b94889cea89fba5a456d2900e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?width=320&crop=smart&auto=webp&s=2e35683afefaf5f0b9a00007d2d286a4e9b3aa1f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?width=640&crop=smart&auto=webp&s=e84d098830ff90a292e2d57523020f6f6a49fb61', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?width=960&crop=smart&auto=webp&s=8c83d1c04a173031202c2a49415f7940b2cf78ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?width=1080&crop=smart&auto=webp&s=66a58f5d1c7b14481c0f66c9f7482bbb5a94b19e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/T7uMJGzQ_xDL3OLN-I6nY-QjBc2LJ_pj5xaq0KJj7XI.jpg?auto=webp&s=e16307ce98e2d4518bc86bbcac56173b6a21f213', 'width': 1200}, 'variants': {}}]}
|