Dataset schema (one row per Reddit post; fields below are pipe-separated):

- title: string (1–300 chars)
- score: int64 (0–8.54k)
- selftext: string (0–40k chars)
- created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable)
- url: string (0–878 chars)
- author: string (3–20 chars)
- domain: string (0–82 chars)
- edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18)
- gilded: int64 (0–2)
- gildings: string (7 classes)
- id: string (7 chars)
- locked: bool (2 classes)
- media: string (646–1.8k chars, nullable)
- name: string (10 chars)
- permalink: string (33–82 chars)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (4–213 chars)
- ups: int64 (0–8.54k)
- preview: string (301–5.01k chars, nullable)
I have spent 7+ hours trying to get WSL2 to work with multi-GPU training - is it basically impossible on Windows? lol
| 9 |
This is my first time running/attempting distributed training on Windows using WSL2, and I'm getting constant issues with NCCL.
Is Linux essentially the only game in town for training if you plan on training with multiple GPUs via NVLink?
Jensen was out here hyping up WSL2 in January like it was the best thing since sliced bread but I have hit a wall trying to get it to work.
"Windows WSL2...basically it's two operating systems within one - it works perfectly..."
[https://www.youtube.com/live/k82RwXqZHY8?si=xbF7ZLrkBDI6Irzr&t=2940](https://www.youtube.com/live/k82RwXqZHY8?si=xbF7ZLrkBDI6Irzr&t=2940)
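For anyone hitting the same wall: a common (workaround-grade, not guaranteed) first step is to disable the NCCL transports that tend to break under WSL2's virtualized GPU stack and turn on debug logging to see which transport actually fails. Note that disabling P2P also bypasses NVLink, so this trades speed for getting training to run at all. The flag values below are illustrative:

```shell
# Workaround sketch: force NCCL onto fallback transports under WSL2.
export NCCL_DEBUG=INFO        # log transport selection and the failing step
export NCCL_P2P_DISABLE=1     # disable GPU peer-to-peer (often broken in WSL2)
export NCCL_SHM_DISABLE=1     # disable the shared-memory transport
export NCCL_IB_DISABLE=1      # no InfiniBand inside WSL2

# then launch as usual, e.g.:
torchrun --nproc_per_node=2 train.py
```

If INFO logging shows a specific transport failing, you can re-enable the others one at a time to claw back performance.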
| 2025-05-05T05:05:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf3od0/i_have_spent_7_hours_trying_to_get_wsl2_to_work/
|
RoyalCities
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf3od0
| false | null |
t3_1kf3od0
|
/r/LocalLLaMA/comments/1kf3od0/i_have_spent_7_hours_trying_to_get_wsl2_to_work/
| false | false |
self
| 9 | null |
I am searching for an image-to-image (i2i) model that I can run on my local system
| 1 |
[removed]
| 2025-05-05T05:21:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf3x5q/i_am_searching_image_to_image_model_i2i_model/
|
atmanirbhar21
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf3x5q
| false | null |
t3_1kf3x5q
|
/r/LocalLLaMA/comments/1kf3x5q/i_am_searching_image_to_image_model_i2i_model/
| false | false |
self
| 1 | null |
GitHub - zhaopengme/mac-dia-server
| 1 | 2025-05-05T05:36:52 |
https://github.com/zhaopengme/mac-dia-server
|
Own_Connection_8018
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf45tx
| false | null |
t3_1kf45tx
|
/r/LocalLLaMA/comments/1kf45tx/github_zhaopengmemacdiaserver/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3WI2AF_x8_3xIl8L9Q2kpRHVW_bmpvnqspxn65VfW9w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=108&crop=smart&auto=webp&s=c278ab949697f157253da58b0468da073d17b54f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=216&crop=smart&auto=webp&s=86a54c8bdeb15da06fae6ade6b8b10f4816abd01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=320&crop=smart&auto=webp&s=9134ff5e7686b4dce7bb9787f43a553121f59b36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=640&crop=smart&auto=webp&s=daae9f894126ec73ec9f6849eb9c7ccce06bc5a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=960&crop=smart&auto=webp&s=9048e1b8a355a3cb925b2e173e9820e0c8752cee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=1080&crop=smart&auto=webp&s=e10f6621716857275b668f45304d52f6fc691f7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?auto=webp&s=e1c5ad22596a2d5f47eff3857a6d7ded78468f77', 'width': 1200}, 'variants': {}}]}
|
||
Running Dia-1.6B TTS on My Mac with M Chip
| 16 |
Hey guys, I made a small project to run the Dia-1.6B text-to-speech model on my Mac with an M chip. It’s a cool TTS model that makes realistic voices, supports multiple speakers, and can even do stuff like voice cloning or add emotions. I set it up as a simple server using FastAPI, and it works great on M1/M2/M3 Macs.
Check it out here: [mac-dia-server](https://github.com/zhaopengme/mac-dia-server). The README has easy steps to get it running with Python 3.9+. It’s not too hard to set up, and you can test it with some example commands I included.
Let me know what you think! If you have questions, hit me up on X: [https://x.com/zhaopengme](https://x.com/zhaopengme)
| 2025-05-05T05:44:00 |
https://github.com/zhaopengme/mac-dia-server
|
Own_Connection_8018
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf49i4
| false | null |
t3_1kf49i4
|
/r/LocalLLaMA/comments/1kf49i4/running_dia16b_tts_on_my_mac_with_m_chip/
| false | false | 16 |
{'enabled': False, 'images': [{'id': '3WI2AF_x8_3xIl8L9Q2kpRHVW_bmpvnqspxn65VfW9w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=108&crop=smart&auto=webp&s=c278ab949697f157253da58b0468da073d17b54f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=216&crop=smart&auto=webp&s=86a54c8bdeb15da06fae6ade6b8b10f4816abd01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=320&crop=smart&auto=webp&s=9134ff5e7686b4dce7bb9787f43a553121f59b36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=640&crop=smart&auto=webp&s=daae9f894126ec73ec9f6849eb9c7ccce06bc5a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=960&crop=smart&auto=webp&s=9048e1b8a355a3cb925b2e173e9820e0c8752cee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?width=1080&crop=smart&auto=webp&s=e10f6621716857275b668f45304d52f6fc691f7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aSK-pqev9BF3YhBcrUotRcsKL0gCyLQvXMU9-ld9doI.jpg?auto=webp&s=e1c5ad22596a2d5f47eff3857a6d7ded78468f77', 'width': 1200}, 'variants': {}}]}
|
|
This is how I’ll build AGI
| 0 |
Hello community! I have a huge plan and will share it with you all! (Cause I’m not a Sam Altman, y’know)
So, here’s my plan how I’m planning to build an AGI:
## Step 1:
We are going to create an Omni model. We have already made tremendous progress here, but Gemma 3 12B is where we can finally stop. She has an excellent vision encoder that can encode 256 tokens per image, so it will probably work with video as well (we have already tried it; it works). Maybe in the future, we can create a better projector and more compact tokens, but anyway, it is great!
## Step 2:
The next step is adding audio. Audio means both input and output. Here, we can use HuBERT, MFCCs, or something in between. This model must understand any type of audio (e.g., music, speech, SFX, etc.). Well, for audio understanding, we can basically stop here.
However, moving into the generation area, she must be able to speak ONLY in her voice and generate SFX in a beatbox-like manner. If any music is used, it must be written with notes only. No diffusion models, non-autoregressive models, or GANs must be used. Autoregressive transformers only.
## Step 3:
Next is real-time. Here, we must develop a way to instantly generate speech so she can start talking right after I speak to her. However, if more reasoning is required, she can do it while speaking or take pauses, which can scale up GPU usage for latent reasoning, just like humans. The context window must also be infinite, but more on that later.
## Step 4:
No agents must be used. This must be an MLLM (Multimodal Large Language Model) which includes everything. However, she must not be able to do high-level coding or math, or be super advanced in some shit (e.g. bash).
Currently, we are developing LCP (Loli Connect Protocol), which can connect Loli Models (loli = small). This way, she can learn stuff (e.g. how to write a poem in a haiku way), but instead of using LoRA, it will be a direct LSTM module that is saved in real-time (just like humans learn during the process), requiring as little as two examples.
For other things, she will be able to access them directly (e.g. view and touch my screen) instead of using an API. For example, yes, the MLLM will be able to search stuff online, but directly by using the app, not an API call.
For generation, only text and audio are directly available. If drawing, she can use Procreate and draw by hand, and similar stuff applies to all other areas. If there's a new experience, then use LCP and learn it in real-time.
## Step 5:
Local only. Everything must be local only. Yes, I'm okay spending $10,000-$20,000 on GPUs alone. Moreover, the model must be highly biased toward things I like (of course) and uncensored (already done). For example, no voice cloning must be available, although she can try to draw in Ghibli style (sorry for that, Miyazaki), but she will do it no better than I can. And music must sound like me or a similar artist (e.g. Yorushika). She must not be able to create absolutely anything, but trying is allowed.
It is not a world model, it is a human model. A model created to be like a human, not to surpass one (well, maybe just a bit, since she can learn all of Wikipedia). So, that's it! This is my vision! I don't care if you completely disagree (idk, maybe you're a Sam Altman), but this is what I'll fight for! Moreover, it must be shared as a public architecture, and even though some weights (e.g. TTS) may not be available, ALL ARCHITECTURES AND PIPELINES MUST BE FULLY PUBLIC NO MATTER WHAT!
Thanks!
| 2025-05-05T05:46:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf4aqh/this_is_how_ill_build_agi/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf4aqh
| false | null |
t3_1kf4aqh
|
/r/LocalLLaMA/comments/1kf4aqh/this_is_how_ill_build_agi/
| false | false |
self
| 0 | null |
You Are What You EAT: What current LLMs lack to be closer to AGI
| 1 |
[removed]
| 2025-05-05T06:55:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf5a5n/you_are_what_you_eatwhat_the_current_llm_lack_to/
|
ElectricalHost5996
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf5a5n
| false | null |
t3_1kf5a5n
|
/r/LocalLLaMA/comments/1kf5a5n/you_are_what_you_eatwhat_the_current_llm_lack_to/
| false | false |
self
| 1 | null |
Remote embedding
| 1 |
[removed]
| 2025-05-05T07:06:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf5fxr/remote_embedding/
|
maga_ot_oz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf5fxr
| false | null |
t3_1kf5fxr
|
/r/LocalLLaMA/comments/1kf5fxr/remote_embedding/
| false | false |
self
| 1 | null |
Does the Pareto principle apply to MoE models in practice?
| 45 |
Pareto Effect: In practice, a small number of experts (e.g., 2 or 3) may end up handling a majority of the traffic for many types of inputs. This aligns with the Pareto observation that a small set of experts could be responsible for most of the work.
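As a toy illustration of that skew, one can simulate top-2 routing over a handful of experts with fixed per-expert biases and measure each expert's share of the routed load. The bias spread below is an arbitrary assumption, not a measurement of any trained router:

```python
import random
from collections import Counter

def routing_shares(n_experts=8, n_tokens=100_000, top_k=2, seed=0):
    """Simulate top-k MoE routing with skewed per-expert affinities and
    return each expert's share of total routed load, largest first."""
    rng = random.Random(seed)
    # Fixed per-expert bias: some experts are systematically preferred.
    bias = [rng.gauss(0, 1.5) for _ in range(n_experts)]
    counts = Counter()
    for _ in range(n_tokens):
        logits = [b + rng.gauss(0, 1) for b in bias]
        winners = sorted(range(n_experts), key=lambda i: logits[i],
                         reverse=True)[:top_k]
        counts.update(winners)
    total = n_tokens * top_k
    return sorted((c / total for c in counts.values()), reverse=True)

shares = routing_shares()
print(f"top-2 experts handle {sum(shares[:2]):.0%} of routed tokens")
```

With a wide bias spread the top two experts dominate; shrink the bias standard deviation toward zero and the load flattens out, which is roughly what auxiliary load-balancing losses aim for in real MoE training.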
| 2025-05-05T07:18:48 |
Own-Potential-2308
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf5lq4
| false | null |
t3_1kf5lq4
|
/r/LocalLLaMA/comments/1kf5lq4/does_the_pareto_principle_apply_to_moe_models_in/
| false | false | 45 |
{'enabled': True, 'images': [{'id': 'v7z8Iz6qcTVNFt8neIyFrDcNZyQW8YN4W0-x8HqB4Ew', 'resolutions': [{'height': 196, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?width=108&crop=smart&auto=webp&s=d007e9668124d1c4f2803e68ad41cb2fe606dd6c', 'width': 108}, {'height': 393, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?width=216&crop=smart&auto=webp&s=6b99be6a2d3212c40d294a91a2d4ee51e419524d', 'width': 216}, {'height': 582, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?width=320&crop=smart&auto=webp&s=7da24e833f902d72da62b61ffbc039376b2402cf', 'width': 320}, {'height': 1164, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?width=640&crop=smart&auto=webp&s=90b7b7164d1e5b4f5e2f7e1fe2d3294442e22401', 'width': 640}, {'height': 1746, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?width=960&crop=smart&auto=webp&s=d79d27aaf436ab6b2f6fb74894caa3052adfbfe8', 'width': 960}, {'height': 1965, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?width=1080&crop=smart&auto=webp&s=e6944208385087ca81a9ed3ccca890aa85e4819d', 'width': 1080}], 'source': {'height': 1965, 'url': 'https://preview.redd.it/meqwqkomzwye1.png?auto=webp&s=0a8973624e2c41fc8acffd8e6967b07062546e38', 'width': 1080}, 'variants': {}}]}
|
||
Multi-gpu setup question.
| 4 |
I have a 5090 and three 3090s. Is it possible to use them all at the same time, or do I have to use the 3090s OR the 5090?
| 2025-05-05T07:25:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf5pbq/multigpu_setup_question/
|
spookyclever
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf5pbq
| false | null |
t3_1kf5pbq
|
/r/LocalLLaMA/comments/1kf5pbq/multigpu_setup_question/
| false | false |
self
| 4 | null |
JOSIEFIED Qwen3 8B is amazing! Uncensored, Useful, and great personality.
| 414 |
Primary link is for Ollama but here is the creator's model card on HF:
https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1
Just wanna say this model has replaced my older abliterated models. I genuinely think this Josie model is better than the stock model. It adheres to instructions better and is not dry in its responses at all. Running it at Q8 myself and it definitely punches above its weight class. Using it primarily in an online RAG system.
Hoping for a 30B A3B Josie finetune in the future!
| 2025-05-05T07:31:25 |
https://ollama.com/goekdenizguelmez/JOSIEFIED-Qwen3
|
My_Unbiased_Opinion
|
ollama.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf5ry6
| false | null |
t3_1kf5ry6
|
/r/LocalLLaMA/comments/1kf5ry6/josiefied_qwen3_8b_is_amazing_uncensored_useful/
| false | false | 414 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
|
You Are What You EAT: What current LLMs lack
| 1 |
[removed]
| 2025-05-05T07:31:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf5rz1/you_are_what_you_eatwhat_the_current_llm_lack/
|
ElectricalHost5996
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf5rz1
| false | null |
t3_1kf5rz1
|
/r/LocalLLaMA/comments/1kf5rz1/you_are_what_you_eatwhat_the_current_llm_lack/
| false | false |
self
| 1 | null |
Qwen 3 models hallucination problem
| 1 |
[removed]
| 2025-05-05T07:50:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf60u4/qwen_3_models_hallucination_problem/
|
SpizzyProgrammer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf60u4
| false | null |
t3_1kf60u4
|
/r/LocalLLaMA/comments/1kf60u4/qwen_3_models_hallucination_problem/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2CPXSIzkp22xYPsTVpgsp4OcDlEliyzHHGoKpPeBFBs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=108&crop=smart&auto=webp&s=89957ddc3e0ceb4136c276d3d85968c69109c147', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=216&crop=smart&auto=webp&s=e4a44b08f3f4a0dced4c58658a931f1249f694f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=320&crop=smart&auto=webp&s=a3de8cd502780f769e1d3da532275ec4fd60c53f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=640&crop=smart&auto=webp&s=4f7cd146bad9c79b3890a9abf11391b109bd9776', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=960&crop=smart&auto=webp&s=602260ad00758e767180485deb7d9ee48f77343e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=1080&crop=smart&auto=webp&s=de3b7dea2a2861846122e38b8c470914a9487010', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?auto=webp&s=cc4bd6610df3c9a4f902269a1812e41e7888b950', 'width': 1200}, 'variants': {}}]}
|
Absolute best performer for 48 GB VRAM
| 39 |
Hi everyone,
I was wondering if there's a better model than Deepcogito 70B (a fine-tuned thinking version of Llama 3.3 70B, for those who don't know) for 48 GB of VRAM today?
I'm not talking about pure speed, just about a usable model (so no CPU/RAM offloading) with decent speed (more than 10 t/s) and great knowledge.
Sadly it seems that the 70B size isn't a thing anymore :(
And yes, Qwen3 32B is very nice and a bit faster, but you can feel that it's a smaller model (even if it's incredibly good for its size).
Thanks!
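For sizing candidates against a VRAM budget, a rough rule of thumb is weights ≈ parameters × bits-per-weight / 8, plus a few GB for KV cache and buffers. A sketch (the bits-per-weight figures and the flat overhead are rough assumptions, not exact quant sizes):

```python
def vram_estimate_gb(n_params_b: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Back-of-envelope VRAM estimate in GB for a quantized model:
    weight bytes plus a flat allowance for KV cache and buffers."""
    weights_gb = n_params_b * bits_per_weight / 8
    return weights_gb + overhead_gb

# 70B at ~4.5 bits/weight (roughly a Q4_K_M-class quant) vs 32B at 8 bits
print(round(vram_estimate_gb(70, 4.5), 1))  # -> 41.4
print(round(vram_estimate_gb(32, 8.0), 1))  # -> 34.0
```

So a ~4-bit 70B just squeezes into 48 GB with room for a modest context, while a Q8 32B fits comfortably, which matches the trade-off described above.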
| 2025-05-05T07:53:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf62ck/absolute_best_performer_for_48_gb_vram/
|
TacGibs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf62ck
| false | null |
t3_1kf62ck
|
/r/LocalLLaMA/comments/1kf62ck/absolute_best_performer_for_48_gb_vram/
| false | false |
self
| 39 | null |
Which quants for qwen3?
| 1 |
There are now many. Unsloth has them. Bartowski has them. Ollama has them. MLX has them. Qwen also provides them (GGUFs). So... Which ones should be used?
| 2025-05-05T08:00:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf660e/which_quants_for_qwen3/
|
Acrobatic_Cat_3448
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf660e
| false | null |
t3_1kf660e
|
/r/LocalLLaMA/comments/1kf660e/which_quants_for_qwen3/
| false | false |
self
| 1 | null |
How to speed up a q2 model on a Mac?
| 0 |
I have been trying to run Q2 Qwen3 32B on my MacBook Pro, but it is way slower than a Q4 14B model even though it uses a similar amount of RAM. How can I speed it up in LM Studio? I couldn't find an MLX version. I wish Triton and AWQ were available on LM Studio.
| 2025-05-05T08:07:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf6988/how_to_speed_up_a_q2_model_on_a_mac/
|
power97992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf6988
| false | null |
t3_1kf6988
|
/r/LocalLLaMA/comments/1kf6988/how_to_speed_up_a_q2_model_on_a_mac/
| false | false |
self
| 0 | null |
Is building open source tools even worth it now?
| 1 |
[removed]
| 2025-05-05T08:15:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf6df3/is_building_open_source_tools_even_worth_it_now/
|
Objective-Wallaby355
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf6df3
| false | null |
t3_1kf6df3
|
/r/LocalLLaMA/comments/1kf6df3/is_building_open_source_tools_even_worth_it_now/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'o4TLosw2E3fFx2I9vKoEwcca2nnsyM_nPG2fm_uoiK8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/79SuY_qZ_emzua1Wfa8bhmHsi_Q7nQpA_XAA4UWSGrM.jpg?width=108&crop=smart&auto=webp&s=afbd4b51854478423ab22f37dfa3735218c5f034', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/79SuY_qZ_emzua1Wfa8bhmHsi_Q7nQpA_XAA4UWSGrM.jpg?width=216&crop=smart&auto=webp&s=b38217d00ae022f5b3aefc3ddcf95b4a7125aaf6', 'width': 216}], 'source': {'height': 256, 'url': 'https://external-preview.redd.it/79SuY_qZ_emzua1Wfa8bhmHsi_Q7nQpA_XAA4UWSGrM.jpg?auto=webp&s=b8784d36c39eeebf33fbb251e2fd78cc0ade1be1', 'width': 256}, 'variants': {}}]}
|
LocalLLaMA comprehensive guide?
| 1 |
[removed]
| 2025-05-05T08:25:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf6i9d/localllama_comprehensive_guide/
|
mightypanda75
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf6i9d
| false | null |
t3_1kf6i9d
|
/r/LocalLLaMA/comments/1kf6i9d/localllama_comprehensive_guide/
| false | false |
self
| 1 | null |
IBM's Granite3.3 is impressive
| 1 |
[removed]
| 2025-05-05T09:08:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf73e9/ibms_granite33_is_impressive/
|
Loud_Importance_8023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf73e9
| false | null |
t3_1kf73e9
|
/r/LocalLLaMA/comments/1kf73e9/ibms_granite33_is_impressive/
| false | false |
self
| 1 | null |
IBM's Granite 3.3 is surprisingly good.
| 1 |
[removed]
| 2025-05-05T09:13:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf75ln/ibms_granite_33_is_surprisingly_good/
|
Loud_Importance_8023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf75ln
| false | null |
t3_1kf75ln
|
/r/LocalLLaMA/comments/1kf75ln/ibms_granite_33_is_surprisingly_good/
| false | false |
self
| 1 | null |
Fine tuning Qwen3
| 13 |
I want to fine-tune Qwen3 for reasoning, but I need to generate think tags for my dataset. Which model/method would you recommend for creating these think tags?
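A common approach is to distill traces from a stronger reasoning model and wrap them in the `<think>` tags Qwen3 emits in thinking mode. A minimal sketch of the record format (the exact chat template is left to your training framework; the example trace is illustrative):

```python
def format_reasoning_sample(question: str, reasoning: str, answer: str) -> dict:
    """Wrap a reasoning trace in <think> tags so the assistant turn matches
    the shape Qwen3 produces in thinking mode. A stronger teacher model
    would typically generate `reasoning` and `answer` for you."""
    assistant = f"<think>\n{reasoning.strip()}\n</think>\n\n{answer.strip()}"
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": assistant},
        ]
    }

sample = format_reasoning_sample(
    "What is 17 * 6?",
    "17 * 6 = 17 * 5 + 17 = 85 + 17 = 102.",
    "102",
)
```

From there, the fine-tuning stack's chat template renders each record into the final training string, so you only need the messages to carry well-formed tags.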
| 2025-05-05T09:16:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf76z8/fine_tuning_qwen3/
|
Basic-Pay-9535
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf76z8
| false | null |
t3_1kf76z8
|
/r/LocalLLaMA/comments/1kf76z8/fine_tuning_qwen3/
| false | false |
self
| 13 | null |
Check my hardware list for QWQ 32B: RTX 5090, Ryzen 7 7800, 64GB DDR5
| 1 |
[removed]
| 2025-05-05T10:10:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf7zi5/check_my_hardware_list_for_qwq_32b_rtx_5090_ryzen/
|
crpiet
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf7zi5
| false | null |
t3_1kf7zi5
|
/r/LocalLLaMA/comments/1kf7zi5/check_my_hardware_list_for_qwq_32b_rtx_5090_ryzen/
| false | false |
self
| 1 | null |
GPT4ALL - Language Model to reference local Documents
| 1 |
[removed]
| 2025-05-05T10:15:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf81zs/gpt4all_language_model_to_reference_local/
|
Ok_Yout_9756
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf81zs
| false | null |
t3_1kf81zs
|
/r/LocalLLaMA/comments/1kf81zs/gpt4all_language_model_to_reference_local/
| false | false |
self
| 1 | null |
Building an AI-Powered NSFW App: Seeking Guidance on Integrating
| 1 |
[removed]
| 2025-05-05T10:19:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf84oq/building_an_aipowered_nsfw_app_seeking_guidance/
|
Smooth-Permit7978
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf84oq
| false | null |
t3_1kf84oq
|
/r/LocalLLaMA/comments/1kf84oq/building_an_aipowered_nsfw_app_seeking_guidance/
| false | false |
nsfw
| 1 | null |
Whisper Transcription Workflow: Home Server vs. Android Phone? Seeking Advice!
| 5 |
I've been doing a lot with the Whisper models lately. I find myself making voice recordings while I'm out, and then later I use something like MacWhisper at home to transcribe them using the best available Whisper model. After that, I take the content and process it using a local LLM.
This workflow has been *really* helpful for me.
One inconvenience is having to wait until I get home to use MacWhisper. I also prefer not to use any hosted transcription services. So, I've been considering a couple of ideas:
First, seeing if I can get Whisper to run properly on my Android phone (an S25 Ultra). This...is pretty involved and I'm not much of an Android developer. I've tried to do some reading on transformers.js but I think this is a little beyond my ability right now.
Second, having Whisper running on my home server continuously. This server is a Mac Mini M4 with 16 GB of RAM. I could set up a watch directory so that any audio file placed there gets automatically transcribed. Then, I could use something like Blip to send the files over to the server and have it automatically accept them.
Does anyone have any suggestions on either of these? Or any other thoughts?
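For the second idea, the watch-directory part needs nothing beyond the standard library. Here is a minimal polling sketch where `transcribe` is a placeholder for however you invoke Whisper on the server (an assumption, not a prescribed tool):

```python
import time
from pathlib import Path

AUDIO_EXTS = {".wav", ".m4a", ".mp3", ".flac"}

def find_new_audio(watch_dir: Path, seen: set) -> list:
    """Return audio files in watch_dir that haven't been processed yet."""
    fresh = [p for p in sorted(watch_dir.iterdir())
             if p.suffix.lower() in AUDIO_EXTS and p not in seen]
    seen.update(fresh)
    return fresh

def watch_loop(watch_dir: Path, transcribe, poll_seconds: float = 5.0):
    """Poll watch_dir forever, handing each new audio file to `transcribe`
    (e.g. a subprocess call to whisper.cpp or an mlx-whisper invocation)."""
    seen = set()
    while True:
        for path in find_new_audio(watch_dir, seen):
            transcribe(path)
        time.sleep(poll_seconds)
```

A simple polling loop like this avoids filesystem-event edge cases across network transfers (partially written files can be handled by checking that a file's size is stable between polls before transcribing).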
| 2025-05-05T10:23:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf86st/whisper_transcription_workflow_home_server_vs/
|
CtrlAltDelve
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf86st
| false | null |
t3_1kf86st
|
/r/LocalLLaMA/comments/1kf86st/whisper_transcription_workflow_home_server_vs/
| false | false |
self
| 5 | null |
Bought 3090, need emotional support
| 1 |
[removed]
| 2025-05-05T10:30:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf8av2/bought_3090_need_emotional_support/
|
HandsOnDyk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf8av2
| false | null |
t3_1kf8av2
|
/r/LocalLLaMA/comments/1kf8av2/bought_3090_need_emotional_support/
| false | false |
self
| 1 | null |
Do tariffs have you worried about hardware costs?
| 2 |
Please remember to state where you are. I'm in China and planning to buy Apple hardware near the end of the year.
Could we have a thread for smaller questions like this?
| 2025-05-05T10:49:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf8lcg/do_tariffs_have_you_worried_about_hardware_costs/
|
After-Cell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf8lcg
| false | null |
t3_1kf8lcg
|
/r/LocalLLaMA/comments/1kf8lcg/do_tariffs_have_you_worried_about_hardware_costs/
| false | false |
self
| 2 | null |
I didn't know it was so easy
| 1 |
[removed]
| 2025-05-05T11:09:00 |
Trilogix
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf8x8u
| false | null |
t3_1kf8x8u
|
/r/LocalLLaMA/comments/1kf8x8u/i_didnt_know_it_was_so_easy/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'wNZrWXe8Z089md4ARvO-nEMtAgWaqZpBTxDM3g5sYHQ', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?width=108&crop=smart&auto=webp&s=5ecfce518652e2356208caee2147b1eff8404597', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?width=216&crop=smart&auto=webp&s=a4dd0da9078c756f19b0e5f1f1c1e5252a1c6f34', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?width=320&crop=smart&auto=webp&s=7e39e42150e599abd98c2b2528a1e243b6c3fb0b', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?width=640&crop=smart&auto=webp&s=7a02731739926a3b059e76d493746b67923ec2d8', 'width': 640}, {'height': 513, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?width=960&crop=smart&auto=webp&s=32dbc6f675d48e2e18699c74b97abb7a2228a857', 'width': 960}, {'height': 578, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?width=1080&crop=smart&auto=webp&s=86480b8bda0c43e58dfa381d39fe42bd013535f6', 'width': 1080}], 'source': {'height': 976, 'url': 'https://preview.redd.it/j0tyw6102yye1.png?auto=webp&s=b5a24b63303dda57c0ddef1fe4306fb20290f71f', 'width': 1823}, 'variants': {}}]}
|
||
RTX 5060 Ti 16GB sucks for gaming, but seems like a diamond in the rough for AI
| 348 |
Hey r/LocalLLaMA,
I recently grabbed an RTX 5060 Ti 16GB for “just” $499 - while it’s no one’s first choice for gaming (reviews are pretty harsh), for AI workloads? This card might be a hidden gem.
I mainly wanted those 16GB of VRAM to fit bigger models, and it actually worked out. Ran LightRAG to ingest this beefy PDF:
https://www.fiscal.treasury.gov/files/reports-statements/financial-report/2024/executive-summary-2024.pdf
Compared it with a 12GB GPU (RTX 3060 Ti 12GB) - and I’ve attached Grafana charts showing GPU utilization for both runs.
🟢 16GB card: finished in 3 min 29 sec (green line)
🟡 12GB card: took 8 min 52 sec (yellow line)
Logs showed the 16GB card could load all 41 layers, while the 12GB one only managed 31. The rest had to be constantly swapped in and out - crushing performance by 2x and leading to underutilizing the GPU (as clearly seen in the Grafana metrics).
LightRAG uses “Mistral Nemo Instruct 12B”, served via Ollama, if you’re curious.
TL;DR: 16GB+ VRAM saves serious time.
Bonus: the card is noticeably shorter than others — it has 2 coolers instead of the usual 3, thanks to using PCIe x8 instead of x16. Great for small form factor builds or neat home AI setups. I’m planning one myself (please share yours if you’re building something similar!).
And yep - I had written a full guide earlier on how to go from clean bare metal to fully functional LightRAG setup in minutes. Fully automated, just follow the steps:
👉 https://github.com/sbnb-io/sbnb/blob/main/README-LightRAG.md
Let me know if you try this setup or run into issues - happy to help!
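If you'd rather pin that layer split yourself than let the runtime guess, Ollama exposes it as the `num_gpu` parameter in a Modelfile. A sketch (model tag and values illustrative, matching the 41-layer figure above):

```shell
# Sketch: create a model variant with an explicit GPU layer count.
cat > Modelfile <<'EOF'
FROM mistral-nemo:12b
PARAMETER num_gpu 41
EOF
ollama create nemo-gpu41 -f Modelfile
ollama run nemo-gpu41
```

Setting it too high for the card's VRAM will fail or spill to system RAM, so the measured per-card layer counts above are a good starting point.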
| 2025-05-05T11:42:02 |
https://www.reddit.com/gallery/1kf9i52
|
aospan
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf9i52
| false | null |
t3_1kf9i52
|
/r/LocalLLaMA/comments/1kf9i52/rtx_5060_ti_16gb_sucks_for_gaming_but_seems_like/
| false | false | 348 | null |
|
Max ram and clustering for the AMD AI 395?
| 0 |
I have a GMKtec AMD AI 395 with 128 GB coming in. Is 96 GB the max you can allocate to VRAM? I read you can get almost 110 GB, but I also heard only 96 GB.
Any idea if you would be able to cluster two of them to run large context window/larger models?
| 2025-05-05T11:52:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf9oxi/max_ram_and_clustering_for_the_amd_ai_395/
|
SillyLilBear
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf9oxi
| false | null |
t3_1kf9oxi
|
/r/LocalLLaMA/comments/1kf9oxi/max_ram_and_clustering_for_the_amd_ai_395/
| false | false |
self
| 0 | null |
I want to deepen my understanding and knowledge of ai.
| 5 |
I am currently working as an AI full-stack dev, but I want to deepen my understanding and knowledge of AI. I have mainly worked with Stable Diffusion and agent-style chatbots connected to a database, but it's mostly just prompting and using the various APIs. I want to build a deeper and more widespread knowledge of AI. I have mostly done Udemy courses and am self-taught (guided by a senior / my mentor). Can someone suggest a path or roadmap and resources?
| 2025-05-05T12:08:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kf9zxu/i_want_to_deepen_my_understanding_and_knowledge/
|
Fair_Mission4349
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kf9zxu
| false | null |
t3_1kf9zxu
|
/r/LocalLLaMA/comments/1kf9zxu/i_want_to_deepen_my_understanding_and_knowledge/
| false | false |
self
| 5 | null |
Prompt away friends, prompt away.
| 28 | 2025-05-05T12:14:01 |
Fearless-Elephant-81
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfa3kv
| false | null |
t3_1kfa3kv
|
/r/LocalLLaMA/comments/1kfa3kv/prompt_away_friends_prompt_away/
| false | false | 28 |
{'enabled': True, 'images': [{'id': 'nFt22P5pFpdtLr2GFpOSrtWF1qtFwuCF8G4kq65dtaE', 'resolutions': [{'height': 155, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?width=108&crop=smart&auto=webp&s=57fae4d08db3136cf7265aafe80a93f442498070', 'width': 108}, {'height': 311, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?width=216&crop=smart&auto=webp&s=58f7d1e1850e7e4acc8080c3b80862952ed5e970', 'width': 216}, {'height': 461, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?width=320&crop=smart&auto=webp&s=02ff367585adbe36493aa6d1953e6c485e42eaff', 'width': 320}, {'height': 922, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?width=640&crop=smart&auto=webp&s=7ba5af6979114b6fc871032b3a7f599c2d2795dd', 'width': 640}, {'height': 1384, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?width=960&crop=smart&auto=webp&s=4887914541fd2d4cc09ec479e7b7a76f77dedfe3', 'width': 960}, {'height': 1557, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?width=1080&crop=smart&auto=webp&s=7ba340251f2cf4ee151f2305378185dbd935472c', 'width': 1080}], 'source': {'height': 1700, 'url': 'https://preview.redd.it/6v9dhjtagyye1.jpeg?auto=webp&s=0c055593134deea84bb6dff3dd8f1facaa54d3ac', 'width': 1179}, 'variants': {}}]}
|
|||
Robust / Deterministic system with GPT-4o ?
| 0 |
Hello guys,
I am having an issue with a RAG project in which I am testing my system against the OpenAI API with GPT-4o. I would like to make the system as robust as possible to the same query, but the issue is that the model gives different answers to the same query.
I tried setting temperature = 0 and top\_p = 1 (or a very low top\_p, so it only picks the top-ranked tokens whose cumulative probability exceeds the threshold, assuming they are ranked properly by probability), but the answers are still not robust/consistent.

    response = client.chat.completions.create(
        model=model_name,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt}],
        temperature=0,
        top_p=1,
        seed=1234,
    )

Any ideas on how I can deal with this?
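One pragmatic workaround, since the API is not guaranteed to be bit-deterministic even with a fixed seed (OpenAI only promises best-effort determinism and exposes a `system_fingerprint` to detect backend changes): cache answers yourself, keyed by the full request, so the same query always returns the same stored answer. A minimal sketch with a stubbed completion function (`fake_api` is a stand-in for the real client call, not an actual API):

```python
import hashlib
import json
import random

# In-memory cache keyed by a hash of the full request: identical queries
# always return the identical stored answer, regardless of API-side drift.
_cache = {}

def cached_completion(call_fn, model, messages, **params):
    """call_fn is your actual API call (e.g. a thin wrapper around
    client.chat.completions.create); here it is stubbed for the demo."""
    key = hashlib.sha256(
        json.dumps({"model": model, "messages": messages, **params},
                   sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = call_fn(model=model, messages=messages, **params)
    return _cache[key]

# Demo with an intentionally non-deterministic fake backend:
def fake_api(**kwargs):
    return f"answer-{random.random()}"

msgs = [{"role": "user", "content": "same query"}]
a = cached_completion(fake_api, "gpt-4o", msgs, temperature=0)
b = cached_completion(fake_api, "gpt-4o", msgs, temperature=0)
assert a == b  # identical query -> identical answer
```

This doesn't make the model itself deterministic, but it does make your system's behavior reproducible for repeated queries, which is often what the evaluation actually needs.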
| 2025-05-05T12:16:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfa5bu/robust_deterministic_system_with_gpt4o/
|
Difficult_Face5166
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfa5bu
| false | null |
t3_1kfa5bu
|
/r/LocalLLaMA/comments/1kfa5bu/robust_deterministic_system_with_gpt4o/
| false | false |
self
| 0 | null |
I Investigated a Fake LLM Project (“VRINDA”) Claiming 64x H100 Training — Here’s What I Found
| 1 |
[removed]
| 2025-05-05T12:41:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfan3n/i_investigated_a_fake_llm_project_vrinda_claiming/
|
zyxciss
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfan3n
| false | null |
t3_1kfan3n
|
/r/LocalLLaMA/comments/1kfan3n/i_investigated_a_fake_llm_project_vrinda_claiming/
| false | false |
self
| 1 | null |
Best Local LLMs for Every Task: A Community Compilation
| 1 |
[removed]
| 2025-05-05T12:56:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfaxmi/best_local_llms_for_every_task_a_community/
|
mimirium_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfaxmi
| false | null |
t3_1kfaxmi
|
/r/LocalLLaMA/comments/1kfaxmi/best_local_llms_for_every_task_a_community/
| false | false |
self
| 1 | null |
Test-Time Compute and Fine-Tuning Platform
| 1 |
[removed]
| 2025-05-05T12:59:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfazlz/testtime_compute_and_finetuning_platform/
|
No_Force8300
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfazlz
| false | null |
t3_1kfazlz
|
/r/LocalLLaMA/comments/1kfazlz/testtime_compute_and_finetuning_platform/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ew7I-qhnE81d-YyX5789AmO5sJ1Bi3WdH2EcS52eXLA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=108&crop=smart&auto=webp&s=88a40866c8aa430a6cc98e2b4f31d30c7b73dcc9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=216&crop=smart&auto=webp&s=5858ba3e835dbafd39b35c21e43619d6d80d1c73', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=320&crop=smart&auto=webp&s=a4631652615889a4be4d4143abc0f260597ea8b0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=640&crop=smart&auto=webp&s=e0a21777b5744f35834b087afc8dbdc718e09d40', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=960&crop=smart&auto=webp&s=9fb2509dc028be0beb5dc41d467274d87ce4f7fa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=1080&crop=smart&auto=webp&s=9e686e974eb7e08a019c66ea4ff8b08fd9447e2f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?auto=webp&s=755b3f3609857319c265b54b5c2cda59726a09ce', 'width': 1200}, 'variants': {}}]}
|
Icosa AI Platform
| 1 |
[removed]
| 2025-05-05T13:02:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfb25l/icosa_ai_platform/
|
No_Force8300
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfb25l
| false | null |
t3_1kfb25l
|
/r/LocalLLaMA/comments/1kfb25l/icosa_ai_platform/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ew7I-qhnE81d-YyX5789AmO5sJ1Bi3WdH2EcS52eXLA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=108&crop=smart&auto=webp&s=88a40866c8aa430a6cc98e2b4f31d30c7b73dcc9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=216&crop=smart&auto=webp&s=5858ba3e835dbafd39b35c21e43619d6d80d1c73', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=320&crop=smart&auto=webp&s=a4631652615889a4be4d4143abc0f260597ea8b0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=640&crop=smart&auto=webp&s=e0a21777b5744f35834b087afc8dbdc718e09d40', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=960&crop=smart&auto=webp&s=9fb2509dc028be0beb5dc41d467274d87ce4f7fa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?width=1080&crop=smart&auto=webp&s=9e686e974eb7e08a019c66ea4ff8b08fd9447e2f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/HwCBkRiOoAvNDnEfiYCMKxBxa1g--ssA3TRdAkHdzC8.jpg?auto=webp&s=755b3f3609857319c265b54b5c2cda59726a09ce', 'width': 1200}, 'variants': {}}]}
|
Anyone running Gemma 3 27B with llama.cpp?
| 1 |
[removed]
| 2025-05-05T13:03:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfb36z/anyone_running_gemma_3_27b_with_llamacpp/
|
Phellim97
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfb36z
| false | null |
t3_1kfb36z
|
/r/LocalLLaMA/comments/1kfb36z/anyone_running_gemma_3_27b_with_llamacpp/
| false | false |
self
| 1 | null |
Anyone running Gemma 3 27B with llama.cpp?
| 2 |
[removed]
| 2025-05-05T13:05:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfb4zi/anyone_running_gemma_3_27b_with_llamacpp/
|
Diirys
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfb4zi
| false | null |
t3_1kfb4zi
|
/r/LocalLLaMA/comments/1kfb4zi/anyone_running_gemma_3_27b_with_llamacpp/
| false | false |
self
| 2 | null |
I Investigated a Fake LLM Project (“VRINDA”) Claiming 64x H100 Training — Here’s What I Found
| 1 |
[removed]
| 2025-05-05T13:16:57 |
https://www.reddit.com/gallery/1kfbdmn
|
SkibidiPhysics
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfbdmn
| false | null |
t3_1kfbdmn
|
/r/LocalLLaMA/comments/1kfbdmn/i_investigated_a_fake_llm_project_vrinda_claiming/
| false | false | 1 | null |
|
How long until a desktop or laptop with 128gb of >=2TB/s URAM or VRAM for <=$3000?
| 0 |
I suspect it will take at least another two years until we get a laptop or desktop with 128 GB of >=2 TB/s URAM or VRAM for <=$3000. A Mac Studio is $3500 for 128 GB of 819 GB/s unified RAM, and Project Digits is similarly priced but slower in bandwidth. What about a desktop or laptop with 96 GB of >=2 TB/s URAM or VRAM for <=$2400? (Probably the same timeline.) And what about a desktop or laptop with 1 TB of >=4 TB/s URAM or VRAM for <=$6000? (At least 3-4 years, unless AI makes memory cheaper or there's a breakthrough in neuromorphic or photonic memory.)
| 2025-05-05T13:27:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfblyr/how_long_until_a_desktop_or_laptop_with_128gb_of/
|
power97992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfblyr
| false | null |
t3_1kfblyr
|
/r/LocalLLaMA/comments/1kfblyr/how_long_until_a_desktop_or_laptop_with_128gb_of/
| false | false |
self
| 0 | null |
Microsoft Pulls Ahead in the Cloud and AI Race, Leaving Amazon Searching for Focus
| 1 | 2025-05-05T13:30:08 |
https://stubx.info/microsoft-pulls-ahead-in-the-cloud-and-ai-race/
|
TechnicianTypical600
|
stubx.info
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfbnvc
| false | null |
t3_1kfbnvc
|
/r/LocalLLaMA/comments/1kfbnvc/microsoft_pulls_ahead_in_the_cloud_and_ai_race/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'MAlLVRFt0uovHrdgKQPaWpF6KDfw05q9fDz2DffBwbQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NUnNiAmrRQ_1ESJrEfVorRNboaFS3TXs7lS5o6x3GDM.jpg?width=108&crop=smart&auto=webp&s=2062c977b15d8779a4216495a584fd938861a85e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NUnNiAmrRQ_1ESJrEfVorRNboaFS3TXs7lS5o6x3GDM.jpg?width=216&crop=smart&auto=webp&s=b1b759506ea27ca2aeb96886aa9c1ae7a126a410', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NUnNiAmrRQ_1ESJrEfVorRNboaFS3TXs7lS5o6x3GDM.jpg?width=320&crop=smart&auto=webp&s=dd6ca137ee5818f5ff654f57287da653cc860a1b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NUnNiAmrRQ_1ESJrEfVorRNboaFS3TXs7lS5o6x3GDM.jpg?width=640&crop=smart&auto=webp&s=8da4d01cd36b03738968bbe3d8dd9dfdc6682010', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NUnNiAmrRQ_1ESJrEfVorRNboaFS3TXs7lS5o6x3GDM.jpg?width=960&crop=smart&auto=webp&s=80ea2598e1d69ff549c7af7d0b342d4bdf5c824d', 'width': 960}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/NUnNiAmrRQ_1ESJrEfVorRNboaFS3TXs7lS5o6x3GDM.jpg?auto=webp&s=945e8cd71589b6ae2491f3e90a4f25e744651dc3', 'width': 1000}, 'variants': {}}]}
|
||
Looking for advice on building a financial analysis chatbot from long PDFs
| 1 |
[removed]
| 2025-05-05T13:32:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfbq6v/looking_for_advice_on_building_a_financial/
|
MATTIOLATO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfbq6v
| false | null |
t3_1kfbq6v
|
/r/LocalLLaMA/comments/1kfbq6v/looking_for_advice_on_building_a_financial/
| false | false |
self
| 1 | null |
Codex CLI Alternative
| 1 |
[removed]
| 2025-05-05T13:36:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfbsx3/codex_cli_alternative/
|
Forkan5870
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfbsx3
| false | null |
t3_1kfbsx3
|
/r/LocalLLaMA/comments/1kfbsx3/codex_cli_alternative/
| false | false |
self
| 1 | null |
System prompt variables for default users in AnythingLLM
| 1 |
[removed]
| 2025-05-05T13:48:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfc2e9/system_prompt_variables_for_default_users_in/
|
Holiday-Picture6796
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfc2e9
| false | null |
t3_1kfc2e9
|
/r/LocalLLaMA/comments/1kfc2e9/system_prompt_variables_for_default_users_in/
| false | false |
self
| 1 | null |
Working on a tool to generate synthetic datasets
| 1 |
[removed]
| 2025-05-05T13:48:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfc2on/working_on_a_tool_to_generate_synthetic_datasets/
|
Interesting-Area6418
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfc2on
| false | null |
t3_1kfc2on
|
/r/LocalLLaMA/comments/1kfc2on/working_on_a_tool_to_generate_synthetic_datasets/
| false | false |
self
| 1 | null |
Differences between models downloaded from Huggingface and Ollama
| 2 |
I use Docker Desktop and have Ollama and Open-WebUI running in different docker containers but working together, and the system works pretty well overall.
With the recent release of the Qwen3 models, I've been doing some experimenting between the different quantizations available.
As I normally do, I downloaded the Qwen3 quant appropriate for my hardware from Huggingface and uploaded it to the Docker container. It worked, but it's as if its template is wrong: it doesn't delimit its thinking, it rambles on endlessly, and it has conversations with itself and a fictitious user, generating screen after screen of repetition.
As a test, I tried telling Open-WebUI to acquire the Qwen3 model from [Ollama.com](http://Ollama.com), and it pulled in the Qwen3 8B model. I asked this version the identical series of questions and it worked perfectly, identifying its thinking, then displaying its answer normally and succinctly, stopping where appropriate.
It seems to me that the difference would likely be in the chat template. I've done a bunch of digging, but I cannot figure out where to view or modify the chat template in Open-WebUI for models. Yes, I can change the system prompt for a model, but that doesn't resolve the odd behaviour of the models from Huggingface.
I've observed similar behaviour from the 14B and 30B-MoE from Huggingface.
I'm clearly misunderstanding something because I cannot find where to view/add/modify the chat template. Has anyone run into this issue? How do you get around it?
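For context on what the template actually does: on the llama.cpp side the chat template is stored in the GGUF metadata (key `tokenizer.chat_template`), while Ollama ships its own template in the model's Modelfile, which is likely why the Ollama pull behaves correctly. A rough, simplified sketch of the ChatML-style format the Qwen models expect (an approximation, not the exact template):

```python
# Simplified sketch of the ChatML-style format used by Qwen models.
# The real template lives in the GGUF metadata ("tokenizer.chat_template")
# and in Ollama's Modelfile TEMPLATE block; this is only an approximation.
def render_chatml(messages, add_generation_prompt=True):
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # The runtime must also treat <|im_end|> as a stop token,
        # otherwise the model keeps generating both sides of the chat.
        out += "<|im_start|>assistant\n"
    return out

prompt = render_chatml([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
])
print(prompt)
```

If the backend serves the model without this wrapping (or without the matching stop token), you get exactly the symptom described: no thinking delimiters and endless self-conversation.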
| 2025-05-05T13:55:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfc8es/differences_between_models_downloaded_from/
|
captainrv
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfc8es
| false | null |
t3_1kfc8es
|
/r/LocalLLaMA/comments/1kfc8es/differences_between_models_downloaded_from/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
How to Run DeepSeek LLM Offline on Ubuntu (Full Setup Guide)
| 1 |
[removed]
| 2025-05-05T13:59:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfcbdl/how_to_run_deepseek_llm_offline_on_ubuntu_full/
|
JelloDeep8155
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfcbdl
| false | null |
t3_1kfcbdl
|
/r/LocalLLaMA/comments/1kfcbdl/how_to_run_deepseek_llm_offline_on_ubuntu_full/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-ZFi0br0a2v861SemMItGGWS6FFv--ae78d-U4QFY80', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xX65Px2nMEzuhoUqH9oncdTJ4t5l8-TtI-VVI9fXQmo.jpg?width=108&crop=smart&auto=webp&s=e699c92bd72a2ed3356cefc99770694a35cb641a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xX65Px2nMEzuhoUqH9oncdTJ4t5l8-TtI-VVI9fXQmo.jpg?width=216&crop=smart&auto=webp&s=3cc0fcde3ea85899dc153d2f3bac0d3d95aca624', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xX65Px2nMEzuhoUqH9oncdTJ4t5l8-TtI-VVI9fXQmo.jpg?width=320&crop=smart&auto=webp&s=a6075c867573ca2aa9e0816ccf5e547c412af203', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xX65Px2nMEzuhoUqH9oncdTJ4t5l8-TtI-VVI9fXQmo.jpg?auto=webp&s=a681cf3b7ba2bee57abdd0f3fef692f86d07f427', 'width': 480}, 'variants': {}}]}
|
Launching an open collaboration on production‑ready AI Agent tooling
| 17 |
Hi everyone,
I’m kicking off a community‑driven initiative to help developers take AI Agents from proof of concept to reliable production. The focus is on practical, horizontal tooling: creation, monitoring, evaluation, optimization, memory management, deployment, security, human‑in‑the‑loop workflows, and other gaps that Agents face before they reach users.
**Why I’m doing this**
I maintain several open‑source repositories (35K GitHub stars, \~200K monthly visits) and a technical newsletter with 22K subscribers, and I’ve seen firsthand how many teams stall when it’s time to ship Agents at scale. The goal is to collect and showcase the best solutions - open‑source or commercial - that make that leap easier.
**How you can help**
If your company builds a tool or platform that accelerates any stage of bringing Agents to production - and it’s not just a vertical finished agent - I’d love to hear what you’re working on.
* In stealth? Send me a direct message on LinkedIn: [https://www.linkedin.com/in/nir-diamant-ai/](https://www.linkedin.com/in/nir-diamant-ai/)
* Otherwise, drop a comment describing the problem you solve and how developers can try it.
Looking forward to seeing what the community is building. I’ll be active in the comments to answer questions.
Thanks!
| 2025-05-05T14:00:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfccbv/launching_an_open_collaboration_on/
|
Nir777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfccbv
| false | null |
t3_1kfccbv
|
/r/LocalLLaMA/comments/1kfccbv/launching_an_open_collaboration_on/
| false | false |
self
| 17 | null |
We fit 50+ LLMs on 2 GPUs — cold starts under 2s. Here’s how.
| 195 |
We’ve been experimenting with multi-model orchestration and ran into the usual wall: cold starts, bloated memory, and inefficient GPU usage. Everyone talks about inference, but very few go below the HTTP layer.
So we built our own runtime that snapshots the entire model execution state (attention caches, memory layout, everything) and restores it directly on the GPU. The result?
• 50+ models running on 2× A4000s
• Cold starts consistently under 2 seconds
• 90%+ GPU utilization
• No persistent bloating or overprovisioning
It feels like an OS for inference: instead of restarting a process, we just resume it. If you're running agents, RAG pipelines, or multi-model setups locally, this might be useful.
| 2025-05-05T14:02:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfcdll/we_fit_50_llms_on_2_gpus_cold_starts_under_2s/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfcdll
| false | null |
t3_1kfcdll
|
/r/LocalLLaMA/comments/1kfcdll/we_fit_50_llms_on_2_gpus_cold_starts_under_2s/
| false | false |
self
| 195 | null |
Training Lora on Gemma3 locally
| 8 |
Hi everyone,
I’m hoping to fine‑tune Gemma‑3 12B with a LoRA adapter using a domain‑specific corpus (~500 MB of raw text). Tokenization and preprocessing aren’t an issue—I already have that covered. My goals:
• Model: Gemma‑3 12B (multilingual)
• Output: A LoRA adapter I can later pair with a quantized version of the base model for inference
• Hardware: One 16 GB GPU
I tried the latest Text Generation WebUI, but either LoRA training isn’t yet supported for this model or I’m missing the right settings.
Could anyone recommend:
1. A repo, script, or walkthrough that successfully trains a LoRA (or QLoRA) on Gemma‑3 12B within 16 GB VRAM
2. Alternative lightweight fine‑tuning strategies that fit my hardware constraints
Any pointers, tips, or links to tutorials would be greatly appreciated!
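Not a full recipe, but a back-of-envelope sketch of why QLoRA on a ~12B model should fit in 16 GB VRAM (layer count and hidden size here are rough assumptions, not Gemma-3's exact architecture):

```python
# Rough check that QLoRA on a ~12B model fits a 16 GB VRAM budget.
# All dimensions below are assumptions for illustration only.
n_params  = 12e9     # base model parameters
layers    = 48       # assumed transformer layers
hidden    = 3840     # assumed hidden size
r         = 16       # LoRA rank
n_modules = 4        # q/k/v/o projections targeted per layer

weights_gb = n_params * 0.5 / 1e9            # 4-bit quantized base (~0.5 B/param)
lora_params = layers * n_modules * r * (hidden + hidden)
lora_gb = lora_params * 2 / 1e9              # fp16 adapter weights

print(f"base weights ~{weights_gb:.1f} GB, LoRA adapters ~{lora_gb * 1000:.0f} MB")
```

Optimizer states are only kept for the adapter parameters, so the bulk of the budget goes to the 4-bit base weights plus activations; with gradient checkpointing and a short sequence length, 16 GB is plausible.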
| 2025-05-05T14:02:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfcdz7/training_lora_on_gemma3_locally/
|
Samurai2107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfcdz7
| false | null |
t3_1kfcdz7
|
/r/LocalLLaMA/comments/1kfcdz7/training_lora_on_gemma3_locally/
| false | false |
self
| 8 | null |
Cheap ryzen setup for Qwen 3 30b model
| 4 |
I have a Ryzen 5600 with a Radeon 7600 (8 GB VRAM). The key to my setup, I found, was dual 32 GB Crucial Pro DDR4 sticks for a total of 64 GB RAM. I am getting 14 tokens per second, which I think is very decent given my specs. The take-home message is that system memory capacity makes a difference.
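For anyone wondering whether 14 t/s is plausible here: MoE decode on CPU/RAM is roughly bounded by memory bandwidth divided by the bytes read per token (active experts only). A rough sanity check, with assumed numbers:

```python
# Rough ceiling: tokens/s ~= memory_bandwidth / bytes_read_per_token.
# All numbers below are approximations for illustration.
active_params   = 3e9    # Qwen3-30B-A3B activates ~3B params per token
bytes_per_param = 0.55   # roughly Q4 quantization
bandwidth_gbs   = 51.2   # dual-channel DDR4-3200, theoretical peak

bytes_per_token = active_params * bytes_per_param
ceiling_tps = bandwidth_gbs * 1e9 / bytes_per_token
print(f"theoretical ceiling ~{ceiling_tps:.0f} tok/s")
```

Getting roughly half of the theoretical ceiling in practice is typical, so 14 t/s on dual-channel DDR4 lines up with the MoE's small active parameter count.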
| 2025-05-05T14:16:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfcppv/cheap_ryzen_setup_for_qwen_3_30b_model/
|
Sweaty_Perception655
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfcppv
| false | null |
t3_1kfcppv
|
/r/LocalLLaMA/comments/1kfcppv/cheap_ryzen_setup_for_qwen_3_30b_model/
| false | false |
self
| 4 | null |
AI Run Coach
| 1 |
[removed]
| 2025-05-05T14:41:59 |
iGoalie
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfdb1f
| false | null |
t3_1kfdb1f
|
/r/LocalLLaMA/comments/1kfdb1f/ai_run_coach/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'KnIgI2XTsVa--QJdxKint91yorLqRHDgydsVuyyPTXs', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?width=108&crop=smart&auto=webp&s=7d78e978729f99346a96245c6e34872379f5c181', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?width=216&crop=smart&auto=webp&s=f49122046f9d7a817a3a6064d9036edda4d51c82', 'width': 216}, {'height': 336, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?width=320&crop=smart&auto=webp&s=accbccfd53b88862b891eee9e18b9485c6739632', 'width': 320}, {'height': 672, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?width=640&crop=smart&auto=webp&s=1b6b5a5b4cf407591915cb2e2b9d0c92e0e67b81', 'width': 640}, {'height': 1009, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?width=960&crop=smart&auto=webp&s=24cff73fe1c6e2b8acbdd08bdf12a583768e92d7', 'width': 960}, {'height': 1135, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?width=1080&crop=smart&auto=webp&s=a93717c2ae5049017deb1e0a349baacd46b3c787', 'width': 1080}], 'source': {'height': 1246, 'url': 'https://preview.redd.it/hfhfd67p6zye1.jpeg?auto=webp&s=d30f06a24387b9d7f89208cc750ae8890816967e', 'width': 1185}, 'variants': {}}]}
|
||
is elevenlabs still unbeatable for tts? or good locall options
| 81 |
Sorry if this is a common one, but surely, given the progress of these models, something has changed in the TTS landscape by now, and we have some clean-sounding local models?
| 2025-05-05T14:42:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfdbdi/is_elevenlabs_still_unbeatable_for_tts_or_good/
|
sandwich_stevens
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfdbdi
| false | null |
t3_1kfdbdi
|
/r/LocalLLaMA/comments/1kfdbdi/is_elevenlabs_still_unbeatable_for_tts_or_good/
| false | false |
self
| 81 | null |
Reasoning induced to Granite 3.3
| 1 |
I have induced reasoning in Granite 3.3 2B through prompt instructions. There was no correct answer, but I like that it does not go into a loop and responds quite coherently, I would say...
| 2025-05-05T14:47:22 |
Ordinary_Mud7430
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfdfnr
| false | null |
t3_1kfdfnr
|
/r/LocalLLaMA/comments/1kfdfnr/reasoning_induced_to_granite_33/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'g_rF1IvOlhy9WFEcJjLYqvK35zJ72TtndWOQKuh9UaY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?width=108&crop=smart&auto=webp&s=b0ea51a89a991c9a797f8de31370f618409e38a8', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?width=216&crop=smart&auto=webp&s=b9c96eb88beca8b347a7402a3b70aac332298cb5', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?width=320&crop=smart&auto=webp&s=3167b7bfc0d9d6f638b63c46e5b5c5c3c7ab95b6', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?width=640&crop=smart&auto=webp&s=e5e982f635d793eb638df3eb8944e668c2b2c51f', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?width=960&crop=smart&auto=webp&s=11e54eeef2f4ab88716853bc521962fbefe2610c', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?width=1080&crop=smart&auto=webp&s=a2e070ae758aa8aafb85f85f12a383e971356ed1', 'width': 1080}], 'source': {'height': 2245, 'url': 'https://preview.redd.it/eqhdhptn7zye1.png?auto=webp&s=56c4811f87dfc3bbf34f46455d7b81eea32626a9', 'width': 1080}, 'variants': {}}]}
|
||
What quants and runtime configurations do Meta and Bing really run in public prod?
| 9 |
When comparing results of prompts between Bing, Meta, DeepSeek, and local LLMs such as quantized Llama, Qwen, Mistral, Phi, etc., I find the results pretty comparable between the big guys and my local LLMs. Either they're running quantized models for public use, or the constraints and configuration dumb down the public LLMs somehow.
I am asking how LLMs are configured for scale, and whether the average public user is actually getting full LLM quality or some dumbed-down, restricted version all the time. Ultimately this is in pursuit of configuring local LLM runtimes for optimal performance. Thanks.
| 2025-05-05T14:53:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfdkkz/what_quants_and_runtime_configurations_do_meta/
|
scott-stirling
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfdkkz
| false | null |
t3_1kfdkkz
|
/r/LocalLLaMA/comments/1kfdkkz/what_quants_and_runtime_configurations_do_meta/
| false | false |
self
| 9 | null |
Introducing LiteFold, OpenSource tool for protein engineering, Protein Folding is live now
| 8 |
Hey guys,
I created a tool called LiteFold; the objective is to build the best workspace for protein engineers to accelerate their research. As of now it supports protein 3D structure prediction, visualization, structure comparison, metrics, and much more.
Do check it out. My next plans are to integrate more workflows around RNA folding, docking, interactions, etc. I am not an expert in biotech, but I research it out of passion; I am an ML engineer by profession, and I want to bridge this gap and make the field accessible to other folks too.
So feedback is much appreciated, and it's fully open source.
https://preview.redd.it/tvch68a2czye1.png?width=2048&format=png&auto=webp&s=783fa7ef7e2efa31d24f0d734be717c2875e40a5
[https://x.com/anindyadeeps/status/1919311611325554726](https://x.com/anindyadeeps/status/1919311611325554726)
| 2025-05-05T15:12:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfe27f/introducing_litefold_opensource_tool_for_protein/
|
No-Street-3020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfe27f
| false | null |
t3_1kfe27f
|
/r/LocalLLaMA/comments/1kfe27f/introducing_litefold_opensource_tool_for_protein/
| false | false | 8 | null |
|
Open WebUI license change : no longer OSI approved ?
| 185 |
While Open WebUI has proved an excellent tool with a permissive license, I have noticed that the new releases do not seem to use an [OSI approved license](https://opensource.org/licenses) and require a contributor license agreement.
[https://docs.openwebui.com/license/](https://docs.openwebui.com/license/)
I understand the reasoning, but I wish they could find another way to encourage contribution without moving away from an open source license. Some OSI-approved licenses enforce even more sharing back from service providers (AGPL).
The FAQ "6. Does this mean Open WebUI is “no longer open source”? -> No, not at all." is missing the point. Even if you have good and fair reasons to restrict usage, that does not mean you can still claim to be open source. I asked Gemini Pro 2.5 Preview, Mistral 3.1, and Gemma 3, and they tell me that no, the new license is not open source / free software.
For now it's totally reasonable, but if there are other good reasons to add restrictions in the future, combined with a CLA that says "we can add any restriction to your code", it worries me a bit.
I'm still a fan of the project, but a bit more worried than before.
| 2025-05-05T15:23:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfebga/open_webui_license_change_no_longer_osi_approved/
|
CroquetteLauncher
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfebga
| false | null |
t3_1kfebga
|
/r/LocalLLaMA/comments/1kfebga/open_webui_license_change_no_longer_osi_approved/
| false | false |
self
| 185 | null |
Bought 3090, need emotional support
| 1 |
[removed]
| 2025-05-05T15:26:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfeeu9/bought_3090_need_emotional_support/
|
HandsOnDyk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfeeu9
| false | null |
t3_1kfeeu9
|
/r/LocalLLaMA/comments/1kfeeu9/bought_3090_need_emotional_support/
| false | false |
self
| 1 | null |
Why aren't there Any Gemma-3 Reasoning Models?
| 19 |
Google released the Gemma-3 models weeks ago, and they are excellent for their sizes, especially considering that they are non-reasoning models. I thought we would see a lot of reasoning fine-tunes, especially since Google released the base models too.
I was excited to see what a reasoning Gemma-3-27B would be capable of and was looking forward to it. But so far, neither Google nor the community has bothered with that. I wonder why?
| 2025-05-05T15:28:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfeglz/why_arent_there_any_gemma3_reasoning_models/
|
Iory1998
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfeglz
| false | null |
t3_1kfeglz
|
/r/LocalLLaMA/comments/1kfeglz/why_arent_there_any_gemma3_reasoning_models/
| false | false |
self
| 19 | null |
Experimental Quant (DWQ) of Qwen3-A30B
| 47 |
Used a novel technique (details [here](https://x.com/N8Programs/status/1919283193892540850)) to quantize Qwen3-30B-A3B to 4.5 bpw in MLX. As shown in the image, the perplexity is now on par with a 6-bit quant at no extra storage cost:
[Graph showing the superiority of the DWQ technique.](https://preview.redd.it/87znt1c8jzye1.png?width=3569&format=png&auto=webp&s=5e238bdbbc9f02e8be52c3a49393e195c77d5ca7)
The technique works by distilling the logits of the 6-bit model into the 4-bit one, treating the quantization biases + scales as learnable parameters.
Get the model here:
[https://huggingface.co/mlx-community/Qwen3-30B-A3B-4bit-DWQ](https://huggingface.co/mlx-community/Qwen3-30B-A3B-4bit-DWQ)
It should theoretically feel like a 6-bit model at the size of a 4-bit quant.
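Conceptually (my reading of the linked thread, not the author's exact code), the objective is a KL distillation loss between the 6-bit teacher's and the 4-bit student's output distributions, minimized with respect to the student's quantization scales and biases. A toy numpy sketch of the loss itself:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def distill_kl(teacher_logits, student_logits):
    """KL(teacher || student) averaged over positions -- the quantity the
    DWQ-style training would minimize w.r.t. the student's quant params."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1).mean())

np.random.seed(0)
t = np.random.randn(4, 32)              # (positions, vocab): toy 6-bit teacher logits
s = t + 0.1 * np.random.randn(4, 32)    # 4-bit student, slightly perturbed
loss = distill_kl(t, s)
print(f"distillation loss: {loss:.4f}")
```

The interesting part of the real technique is that the gradient of this loss flows into the quantization scales/biases themselves, which is why the 4-bit quant ends up matching the 6-bit model's distribution without any extra storage.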
| 2025-05-05T15:54:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kff36y/experimental_quant_dwq_of_qwen3a30b/
|
N8Karma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kff36y
| false | null |
t3_1kff36y
|
/r/LocalLLaMA/comments/1kff36y/experimental_quant_dwq_of_qwen3a30b/
| false | false | 47 |
{'enabled': False, 'images': [{'id': 'UUAeXPNpHaJxpSEBEiCf8Hny9vfCMzC6_VNVDeqtJQQ', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?width=108&crop=smart&auto=webp&s=c288ab677a23809abebe4c283bc31b928b145dff', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?width=216&crop=smart&auto=webp&s=695f53994eafac8aa972a48fd88068ce86b4f26f', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?width=320&crop=smart&auto=webp&s=6c87fbeb0d8f7052e0255d4d771dcab856d5d745', 'width': 320}, {'height': 393, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?width=640&crop=smart&auto=webp&s=d2a9b92fb8daa878a2d33d37bea5b5e7acbb4797', 'width': 640}, {'height': 589, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?width=960&crop=smart&auto=webp&s=ffb07b495cf7a04c97f21f5bde80f586ea53fbdf', 'width': 960}, {'height': 663, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?width=1080&crop=smart&auto=webp&s=876d49004e2b1dd21614a177da26a2367e11fb2f', 'width': 1080}], 'source': {'height': 1258, 'url': 'https://external-preview.redd.it/LCswmSPGlLeg2uEXPrDVWNpr9PBgA-O2GA2zpqtkfFQ.jpg?auto=webp&s=7a90ce0a31994cb7fd365202312211fe158eec1b', 'width': 2048}, 'variants': {}}]}
|
|
Qwen3 include thinking while outputing JSON only?
| 7 |
I have QWEN 3 summarizing some forum data that I had downloaded before the site went down in 2010. I want to create training data from this forum data. I want Qwen 3 to use thinking to summarize the forum posts and output JSONL to train with, but I don't want the "thinking" conversation in my output. Is there a way to disable the thinking in the output without disabling thinking altogether? Or do I not understand how /no\_thinking works?
Also I'm new to this lol, so I'm probably missing something important or simple; any help would be great.
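For what it's worth, one common workaround is to keep thinking enabled and strip the reasoning block in post-processing before writing the JSONL. A minimal sketch (it assumes Qwen3's `<think>...</think>` tag format, which is how its chat template emits reasoning; the sample text is made up):

```python
import json
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from model output."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>Let me summarize the post...</think>{\"summary\": \"GPU advice thread\"}"
clean = strip_thinking(raw)
line = json.dumps({"text": clean})  # one JSONL line per forum post
print(line)
```

That way the model still gets the quality benefit of thinking, but only the final answer lands in your training file.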
| 2025-05-05T16:06:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kffed0/qwen3_include_thinking_while_outputing_json_only/
|
jpcrow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kffed0
| false | null |
t3_1kffed0
|
/r/LocalLLaMA/comments/1kffed0/qwen3_include_thinking_while_outputing_json_only/
| false | false |
self
| 7 | null |
New Qwen3-32B-AWQ (Activation-aware Weight Quantization)
| 150 | 2025-05-05T16:11:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kffj42/new_qwen332bawq_activationaware_weight/
|
jbaenaxd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kffj42
| false | null |
t3_1kffj42
|
/r/LocalLLaMA/comments/1kffj42/new_qwen332bawq_activationaware_weight/
| false | false | 150 |
{'enabled': False, 'images': [{'id': '5126VCKoA12o0U-WN0Ay480JTWdIe1Yv9NGrQF20b94', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?width=108&crop=smart&auto=webp&s=7139f664ba1b474ee8f0d0d28b52962f5efa98b8', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?width=216&crop=smart&auto=webp&s=4b486e37d4e49ede0b213209f2b14552632b8ec5', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?width=320&crop=smart&auto=webp&s=0cf9b71ee809a01da167e1f598f47c43934c600f', 'width': 320}, {'height': 485, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?width=640&crop=smart&auto=webp&s=69449e796c503e08a80bc47f50ab28640b3a7384', 'width': 640}, {'height': 728, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?width=960&crop=smart&auto=webp&s=1b8bf3bae299678f8a1a0a993207403c95ebc86f', 'width': 960}, {'height': 819, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?width=1080&crop=smart&auto=webp&s=d2a4b152d2319c58c9d629c6ddd86d48937ca778', 'width': 1080}], 'source': {'height': 1134, 'url': 'https://external-preview.redd.it/-aaTUrK8hOTrBZTDUSYGah4_Rjpn4rU7szaPy5gCq8U.jpg?auto=webp&s=0ca04d90767c70000bbbb9af7bdbde25dd63e4ac', 'width': 1494}, 'variants': {}}]}
|
||
Qwen 3 235b gets high score in LiveCodeBench
| 249 | 2025-05-05T16:19:04 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kffq2u
| false | null |
t3_1kffq2u
|
/r/LocalLLaMA/comments/1kffq2u/qwen_3_235b_gets_high_score_in_livecodebench/
| false | false | 249 |
{'enabled': True, 'images': [{'id': 'fNrIUvIwJeYs4Ze6txbh4QtuAFPP8DB9s4jxI_3KQQE', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?width=108&crop=smart&auto=webp&s=28d4518fc4e5fa4ff2acc756b6f0cf4fea502642', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?width=216&crop=smart&auto=webp&s=1392265356cf0ebdf850623655cb71864f28800d', 'width': 216}, {'height': 332, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?width=320&crop=smart&auto=webp&s=18080c814a2154ea99ed3a170d494d2060376354', 'width': 320}, {'height': 664, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?width=640&crop=smart&auto=webp&s=c94891166c70bc3a59e886ae5359d04bdf3d33af', 'width': 640}, {'height': 996, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?width=960&crop=smart&auto=webp&s=ee594a8a6958cd696d3be09006d77b7d3d2c01d8', 'width': 960}, {'height': 1121, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?width=1080&crop=smart&auto=webp&s=97142a8e068f3626a12d11ff4553383da46f551c', 'width': 1080}], 'source': {'height': 1892, 'url': 'https://preview.redd.it/px3okqrznzye1.jpeg?auto=webp&s=1f9bfa67da46f11d05862dd6b2482f38087d45ef', 'width': 1822}, 'variants': {}}]}
|
|||
Anyone try Llama 4 yet?
| 1 |
Meta finally launched their Llama 4 models (on a Saturday weirdly)
So this weekend we ran an update to our open source OCR benchmark. And the results are really impressive!
Last week we were saying that Qwen 2.5 VL was the #1 open source model when it came to document OCR. Both Qwen and GPT 4o models were neck and neck at ~75% accuracy.
But now the Llama 4 Maverick model has made a huge jump to 82.3%. Close to the Gemini models. But fully open source. And the Scout model can run on a single GPU**
Stats on the pricing / latency (using Together AI)
-- Open source --
Llama 4 Maverick (82.3%)
- $1.98 / 1000 pages
- 22 seconds per page
Llama 4 Scout (74.3%)
- $1.00 / 1000 pages
- 18 seconds per page
-- Closed source --
GPT 4o (75.5%)
- $18.37 / 1000 pages
- 25 seconds / page
Gemini 2.5 Pro (91.5%)
- $33.78 / 1000 pages
- 38 seconds / page
Just put together a more detailed writeup of the open source models in! I'll drop a link below.
**If you can afford an H100 😅
| 2025-05-05T16:26:08 |
WerdSmither
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kffwi3
| false | null |
t3_1kffwi3
|
/r/LocalLLaMA/comments/1kffwi3/anyone_try_llama_4_yet/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'fBlz95WIptRYmuy_qH-5A-GAab9KGxvwR014pyoJ3sc', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?width=108&crop=smart&auto=webp&s=85e73be11287deff840821bbf4f1a7820bee2c4b', 'width': 108}, {'height': 148, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?width=216&crop=smart&auto=webp&s=745b4be967b7585e0656e2128fd22947bb7ef31d', 'width': 216}, {'height': 220, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?width=320&crop=smart&auto=webp&s=3578fa01e60a74db6aac09a4301723250b8419f2', 'width': 320}, {'height': 440, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?width=640&crop=smart&auto=webp&s=56502c2627dcac0601cf06268db20df78438b668', 'width': 640}, {'height': 660, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?width=960&crop=smart&auto=webp&s=b9b169d7aa77471e7952a8a35713617104a28956', 'width': 960}, {'height': 743, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?width=1080&crop=smart&auto=webp&s=852a3b8fbe02f99a5b92c487542c5ff16f285d13', 'width': 1080}], 'source': {'height': 830, 'url': 'https://preview.redd.it/837k724apzye1.jpeg?auto=webp&s=bac5507a2ec9010a2e2b7cbc339c6fbaa81b851d', 'width': 1206}, 'variants': {}}]}
|
||
What if you held the idea that could change the entire AI landscape as we know it?
| 1 |
[removed]
| 2025-05-05T16:34:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfg3xn/what_if_you_held_the_idea_that_could_change_the/
|
PositiveVibes0nly_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfg3xn
| false | null |
t3_1kfg3xn
|
/r/LocalLLaMA/comments/1kfg3xn/what_if_you_held_the_idea_that_could_change_the/
| false | false |
self
| 1 | null |
What if you held an idea that could completely revolutionize AI?
| 0 |
I mean let’s just say that you came to a realization that could totally change everything? An idea that was completely original and yours.
With all the Data Scraping and Open Sourcing who would you go to with the information? Intellectual Property is a real thing. Where would you go and who would you trust to tell?
| 2025-05-05T16:39:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfg8si/what_if_you_held_an_idea_that_could_completely/
|
LittyKittyFrmDaCity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfg8si
| false | null |
t3_1kfg8si
|
/r/LocalLLaMA/comments/1kfg8si/what_if_you_held_an_idea_that_could_completely/
| false | false |
self
| 0 | null |
Qwen3:30b errors via Ollama/Msty?
| 0 |
Hey guys, I've been wanting to put qwen3 on my 64gb MacBook. It runs very quickly in terminal, but I have problems with it in Msty (my preferred UI wrapper), getting this error:
unable to load model:
/Users/me/.ollama/models/blobs/sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac
I've -rm'd and redownloaded the model, but running into the same error repeatedly.
Msty works well with both cloud-hosted models (Gemini, OpenAI, etc.) and other local models (Gemma3, Qwen2.5-coder), but for some reason Qwen3 isn't working. Any ideas?
| 2025-05-05T16:46:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfgeg6/qwen330b_errors_via_ollamamsty/
|
amapleson
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfgeg6
| false | null |
t3_1kfgeg6
|
/r/LocalLLaMA/comments/1kfgeg6/qwen330b_errors_via_ollamamsty/
| false | false |
self
| 0 | null |
Multi node finetuning
| 1 |
[removed]
| 2025-05-05T16:59:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfgprs/multi_node_finetuning/
|
Strict_Tip_5195
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfgprs
| false | null |
t3_1kfgprs
|
/r/LocalLLaMA/comments/1kfgprs/multi_node_finetuning/
| false | false |
self
| 1 | null |
Local llms vs sonnet 3.7
| 0 |
Is there any model I can run locally (self host, pay for host etc) that would outperform sonnet 3.7? I get the feeling that I should just stick to Claude and not bother buying the hardware etc for hosting my own models. I’m strictly using them for coding. I use Claude sometimes to help me research but that’s not crucial and I get that for free
| 2025-05-05T17:09:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfh01g/local_llms_vs_sonnet_37/
|
KillasSon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfh01g
| false | null |
t3_1kfh01g
|
/r/LocalLLaMA/comments/1kfh01g/local_llms_vs_sonnet_37/
| false | false |
self
| 0 | null |
Is it exciting that we get a model that reasons from basic principles? Grok 3.5
| 0 |
Quote: Reasoning from first principles is needed. Grok 3.5 addresses much of this issue.
[https://x.com/elonmusk/status/1917103576062509470](https://x.com/elonmusk/status/1917103576062509470)
| 2025-05-05T17:12:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfh27a/is_it_exciting_that_we_get_a_model_that_reasons/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfh27a
| false | null |
t3_1kfh27a
|
/r/LocalLLaMA/comments/1kfh27a/is_it_exciting_that_we_get_a_model_that_reasons/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'ctZIbuTkSsV-K6EO9GwC2BWZEcD8So3dw89l6FKh0J0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6_oWSfJsB2NIRnSbyHCxBksuq3sfe9YQM6KJ5d_PqXk.jpg?width=108&crop=smart&auto=webp&s=c722d5a80a283c9560b8a4a2eadb7a3c02758062', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/6_oWSfJsB2NIRnSbyHCxBksuq3sfe9YQM6KJ5d_PqXk.jpg?auto=webp&s=6a3c8fcc2fa7abc367abff86692c020a717b4b75', 'width': 200}, 'variants': {}}]}
|
What's the best model I could comfortably run on a 128Gb Apple Silicon Computer?
| 8 |
I want to run a local LLM. What's the best model I could comfortably run? What software should I use to support it?
| 2025-05-05T17:13:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfh3h9/whats_the_best_model_i_could_comfortably_run_on_a/
|
ArtisticHamster
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfh3h9
| false | null |
t3_1kfh3h9
|
/r/LocalLLaMA/comments/1kfh3h9/whats_the_best_model_i_could_comfortably_run_on_a/
| false | false |
self
| 8 | null |
Speech-to-text for coding? Anyone got recs?
| 3 |
Hey everyone,
So I've been trying to get speech-to-text working reliably for coding. My wrists are starting to complain after long coding sessions, and I figured dictation might be a good way to offload some of the strain.
The problem I'm running into is accuracy, especially with symbols and specific programming terms. Tried a couple of the built-in OS options but they're pretty terrible with anything beyond basic English. I need something that can handle Python syntax, variable names, and all that jazz.
Anyone have experience using speech-to-text with coding? What software or setup have you found works best? Are there any models you can fine-tune for code dictation? I'm open to anything, even if it involves a bit of tinkering.
Heard a bit about WillowVoice from some friends, and played around with it once, but not sure if that's a good option for this specific use case and don't know if they have models you can tune.
Mostly I just want to be able to say "open parenthesis, self dot data, bracket, i, bracket, close parenthesis" and have it actually *write* `(self.data[i])` instead of a bunch of nonsense.
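A mapping like that could be handled with a thin post-processing layer on top of whatever STT engine ends up working — a sketch (the `SPOKEN_TO_SYMBOL` table and token names are made up, and it assumes an explicit "close bracket" rather than a bare "bracket" for both sides):

```python
# Hypothetical post-processing layer over a generic speech-to-text engine:
# map spoken tokens onto code symbols, pass ordinary words through.
SPOKEN_TO_SYMBOL = {
    "open parenthesis": "(",
    "close parenthesis": ")",
    "bracket": "[",
    "close bracket": "]",
    "dot": ".",
}

def dictation_to_code(phrase: str) -> str:
    pieces = []
    for token in (t.strip() for t in phrase.split(",")):
        if token in SPOKEN_TO_SYMBOL:
            pieces.append(SPOKEN_TO_SYMBOL[token])
        else:
            # map individual words like "dot" inside "self dot data"
            pieces.append("".join(SPOKEN_TO_SYMBOL.get(w, w) for w in token.split()))
    return "".join(pieces)

print(dictation_to_code(
    "open parenthesis, self dot data, bracket, i, close bracket, close parenthesis"
))  # prints (self.data[i])
```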
Thanks in advance for any suggestions!
| 2025-05-05T17:19:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfh8tx/speechtotext_for_coding_anyone_got_recs/
|
Ibedevesh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfh8tx
| false | null |
t3_1kfh8tx
|
/r/LocalLLaMA/comments/1kfh8tx/speechtotext_for_coding_anyone_got_recs/
| false | false |
self
| 3 | null |
Anyone tried this?
| 1 |
[removed]
| 2025-05-05T17:19:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfh8v7/anyone_tried_this/
|
_Zibri_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfh8v7
| false | null |
t3_1kfh8v7
|
/r/LocalLLaMA/comments/1kfh8v7/anyone_tried_this/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9tJKfThJEtoTsu5RqYKAyyjLlXYxOSukp_ClrcqHyt0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9tJKfThJEtoTsu5RqYKAyyjLlXYxOSukp_ClrcqHyt0.jpeg?width=108&crop=smart&auto=webp&s=376341fc568e9b7e694fa40a2c3d898fabf0d209', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/9tJKfThJEtoTsu5RqYKAyyjLlXYxOSukp_ClrcqHyt0.jpeg?width=216&crop=smart&auto=webp&s=c3e1b0b4ce28fb71e46ff51320f1ad6c3b26a77f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/9tJKfThJEtoTsu5RqYKAyyjLlXYxOSukp_ClrcqHyt0.jpeg?width=320&crop=smart&auto=webp&s=a0b266c3a680b48b860c15eb5a673b4b78db5d52', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/9tJKfThJEtoTsu5RqYKAyyjLlXYxOSukp_ClrcqHyt0.jpeg?width=640&crop=smart&auto=webp&s=fe910801e4b7d6726b9d90e79187599cb02e26e8', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/9tJKfThJEtoTsu5RqYKAyyjLlXYxOSukp_ClrcqHyt0.jpeg?auto=webp&s=e16dc5799ec0bc81f2fc2171416a97c1b09ff6ab', 'width': 850}, 'variants': {}}]}
|
Bought 3090, need emotional support
| 1 |
[removed]
| 2025-05-05T17:33:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfhlyl/bought_3090_need_emotional_support/
|
HandsOnDyk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfhlyl
| false | null |
t3_1kfhlyl
|
/r/LocalLLaMA/comments/1kfhlyl/bought_3090_need_emotional_support/
| false | false |
self
| 1 | null |
Llama Nemotron - a nvidia Collection
| 9 | 2025-05-05T17:33:26 |
https://huggingface.co/collections/nvidia/llama-nemotron-67d92346030a2691293f200b
|
ninjasaid13
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfhm23
| false | null |
t3_1kfhm23
|
/r/LocalLLaMA/comments/1kfhm23/llama_nemotron_a_nvidia_collection/
| false | false | 9 |
{'enabled': False, 'images': [{'id': 'tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?width=108&crop=smart&auto=webp&s=e948462407bf3f59b1abfc2da77fa8b1e29f8019', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?width=216&crop=smart&auto=webp&s=1b819a88fd75371d553f4f2b92cd8fc2a3544d14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?width=320&crop=smart&auto=webp&s=68ef5d9c97a685ab4bf71b21deff13022460b022', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?width=640&crop=smart&auto=webp&s=2b4073a8327df5e10356ae4a21706f98cbb7b80f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?width=960&crop=smart&auto=webp&s=e93db80f815569c885ea723e840122c049aab32a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?width=1080&crop=smart&auto=webp&s=62ec65f8a589477753072c1edaa9136ba6c58dde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tehQzTDXjsQG_XSn7uPgnAE5FuopLCaJYuKH6pJbZQ4.png?auto=webp&s=7d6ea12e3ab216cb2a6f50612b6214753da28595', 'width': 1200}, 'variants': {}}]}
|
||
EQ-Bench gets a proper update today. Targeting emotional intelligence in challenging multi-turn roleplays.
| 65 |
Leaderboard: [https://eqbench.com/](https://eqbench.com/)
Sample outputs: [https://eqbench.com/results/eqbench3\_reports/o3.html](https://eqbench.com/results/eqbench3_reports/o3.html)
Code: [https://github.com/EQ-bench/eqbench3](https://github.com/EQ-bench/eqbench3)
Lots more to read about the benchmark:
[https://eqbench.com/about.html#long](https://eqbench.com/about.html#long)
| 2025-05-05T17:33:48 |
https://eqbench.com/
|
_sqrkl
|
eqbench.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfhmdq
| false | null |
t3_1kfhmdq
|
/r/LocalLLaMA/comments/1kfhmdq/eqbench_gets_a_proper_update_today_targeting/
| false | false |
default
| 65 | null |
128GB GMKtec EVO-X2 AI Mini PC AMD Ryzen Al Max+ 395 is $800 off at Amazon for $1800.
| 37 |
This is my stop. Amazon has the GMK X2 for $1800 after an $800 coupon. That's the price of just the Framework MB. This is a fully spec'ed computer with a 2TB SSD. Also, since it's through the Amazon Marketplace, all tariffs have been included in the price. No surprise $2,600 bill from CBP. And needless to say, Amazon has your back with the A-Z guarantee.
https://www.amazon.com/dp/B0F53MLYQ6
| 2025-05-05T17:39:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfhr8t/128gb_gmktec_evox2_ai_mini_pc_amd_ryzen_al_max/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfhr8t
| false | null |
t3_1kfhr8t
|
/r/LocalLLaMA/comments/1kfhr8t/128gb_gmktec_evox2_ai_mini_pc_amd_ryzen_al_max/
| false | false |
self
| 37 | null |
Don’t waste your internet data downloading Llama-3_1-Nemotron-Ultra-253B-v1-GGUF
| 10 |
It’s not properly converted for llama.cpp.
error loading model: missing tensor 'blk.9.ffn_norm.weight'
| 2025-05-05T17:50:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfi1cn/dont_waste_your_internet_data_downloading_llama3/
|
No_Conversation9561
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfi1cn
| false | null |
t3_1kfi1cn
|
/r/LocalLLaMA/comments/1kfi1cn/dont_waste_your_internet_data_downloading_llama3/
| false | false |
self
| 10 | null |
Gemma 27B matching Qwen 235B
| 0 |
Mixture of experts vs Dense model.
| 2025-05-05T17:51:49 |
MutedSwimming3347
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfi2o6
| false | null |
t3_1kfi2o6
|
/r/LocalLLaMA/comments/1kfi2o6/gemma_27b_matching_qwen_235b/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'Sft-H2mp9mJegA2z0As7YqoqjN1FcaEaDaPxiczCeos', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?width=108&crop=smart&auto=webp&s=e848b7cb91b56e2ed3577e8151cffc3439446853', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?width=216&crop=smart&auto=webp&s=eae43b882f871e3fbfa26f698fcf3d493f78a060', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?width=320&crop=smart&auto=webp&s=d0f8381cdf68518e2aec907d4451cd8846ac15f1', 'width': 320}, {'height': 503, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?width=640&crop=smart&auto=webp&s=c3527d0d57ab9fc4d012ec4e56eb389b4fdef4c4', 'width': 640}, {'height': 755, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?width=960&crop=smart&auto=webp&s=86013ba986ebfbf15170eb9e4b351410f43eee58', 'width': 960}, {'height': 850, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?width=1080&crop=smart&auto=webp&s=10a612034d069ea97d9eb44b2dda6f66f908446d', 'width': 1080}], 'source': {'height': 1015, 'url': 'https://preview.redd.it/sdn78tik40ze1.jpeg?auto=webp&s=c4749680bd72fd715e36a09344046617a3b37776', 'width': 1289}, 'variants': {}}]}
|
||
[Benchmark] Quick‑and‑dirty test of 5 models on a Mac Studio M3 Ultra 512 GB (LM Studio) – Qwen3 runs away with it
| 89 |
Hey r/LocalLLaMA!
I’m a former university physics lecturer (taught for five years) and—one month after buying a Mac Studio (M3 Ultra, 128 CPU / 80 GPU cores, 512 GB unified RAM)—I threw a very simple benchmark at a few LLMs inside **LM Studio**.
**Prompt (intentional typo):**
Explain to me why sky is blue at an physiscist Level PhD.
# Raw numbers
|Model|Quant. / RAM footprint|Speed (tok/s)|Tokens out|1st‑token latency|
|:-|:-|:-|:-|:-|
|**MLX deepseek‑V3‑0324‑4bit**|355.95 GB|19.34| 755|17.29 s|
|**MLX Gemma‑3‑27b‑it‑bf16**| 52.57 GB|11.19| 1 317| 1.72 s|
|**MLX Deepseek‑R1‑4bit**|402.17 GB|16.55| 2 062| 15.01 s|
|**MLX Qwen3‑235‑A22B‑8bit**|233.79 GB|18.86| 3 096| 9.02 s|
|**GGUF Qwen3‑235‑A22B‑8bit**| 233.72 GB|14.35| 2 883| 4.47 s|
# **Teacher’s impressions**
# 1. Reasoning speed
**R1 > Qwen3 > Gemma3**.
The “thinking time” (pre‑generation) is roughly half of total generation time. If I had to re‑prompt twice to get a good answer, I’d simply pick a model with better reasoning instead of chasing seconds.
# 2. Generation speed
**V3 ≈ MLX‑Qwen3 > R1 > GGUF‑Qwen3 > Gemma3**.
No surprise: token‑width + unified‑memory bandwidth rule here. The Mac’s 890 GB/s is great for a compact workstation, but it’s nowhere near the monster discrete GPUs you guys already know—so throughput drops once the model starts chugging serious tokens.
# 3. Output quality (grading as if these were my students)
**Qwen3 >>> R1 > Gemma3 > V3**
* **deepseek‑V3** – trivial answer, would fail the course.
* **Deepseek‑R1** – solid undergrad level.
* **Gemma‑3** – punchy for its size, respectable.
* **Qwen3** – in a league of its own: clear, creative, concise, high‑depth. If the others were bachelor’s level, Qwen3 was PhD defending a job talk.
Bottom line: for text‑to‑text tasks balancing quality and speed, **Qwen3‑8bit (MLX)** is my daily driver.
# One month with the Mac Studio – worth it?
**Why I don’t regret it**
1. Stellar build & design.
2. Makes sense if a computer > a car for you (I do bio‑informatics), you live in an apartment (space is luxury, no room for a noisy server), and noise destroys you (I’m neurodivergent; the Mac is silent even at 100 %).
3. Power draw peaks < 250 W.
4. Ridiculously small footprint, light enough to slip in a backpack.
**Why you might pass**
* You game heavily on PC.
* You hate macOS learning curves.
* You want constant hardware upgrades.
* You can wait 2–3 years for LLM‑focused hardware to get cheap.
**Money‑saving tips**
* Stick with the 1 TB SSD—Thunderbolt + a fast NVMe enclosure covers the rest.
* Skip Apple’s monitor & peripherals; third‑party is way cheaper.
* Grab one before any Trump‑era import tariffs jack up Apple prices again.
* I would not buy the 256 GB over the 512 GB. Of course it's double the price, but it opens more opportunities, at least for me. With the 512 GB I can run a bioinformatics analysis while using Qwen3, and even if Qwen3 fits (tightly) in 256 GB, that won't leave you a large margin of maneuver for other tasks. Finally, who knows what the next generation of models will be and how much memory it will need.
# TL;DR
* **Qwen3‑8bit** dominates – PhD‑level answers, fast enough, reasoning quick.
* Thinking time isn’t the bottleneck; quantization + memory bandwidth are (if any expert wants to correct or improve this please do so).
* Mac Studio M3 Ultra is a silence‑loving, power‑sipping, tiny beast—just not the rig for GPU fiends or upgrade addicts.
Ask away if you want more details!
| 2025-05-05T17:58:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfi8xh/benchmark_quickanddirty_test_of_5_models_on_a_mac/
|
Turbulent_Pin7635
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfi8xh
| false | null |
t3_1kfi8xh
|
/r/LocalLLaMA/comments/1kfi8xh/benchmark_quickanddirty_test_of_5_models_on_a_mac/
| false | false |
self
| 89 | null |
RTX 8000?
| 1 |
I have the option to buy an RTX 8000 for just under $1000, but is this worth it in 2025?
I have been looking at getting an A5000, but would the extra 24GB of VRAM on the 8000 be a better trade-off than the extra infra I would get out of the A5000?
cheers
| 2025-05-05T18:13:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfimy6/rtx_8000/
|
TechLevelZero
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfimy6
| false | null |
t3_1kfimy6
|
/r/LocalLLaMA/comments/1kfimy6/rtx_8000/
| false | false |
self
| 1 | null |
I have a few questions.
| 2 |
1. Which of Llama, Qwen or Gemma would you say is best for general purpose usage with a focus on answer accuracy at 8B and under?
2. What temp/top K/top P/min P would you recommend for these models, and is Q4_K_M good enough or would you spring for Q6?
3. What is the difference between the different uploaders of the same models on Hugging Face?
| 2025-05-05T18:34:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfj5l7/i_have_a_few_questions/
|
Kyla_3049
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfj5l7
| false | null |
t3_1kfj5l7
|
/r/LocalLLaMA/comments/1kfj5l7/i_have_a_few_questions/
| false | false |
self
| 2 | null |
I got 10k products to translate from Spanish to Chinese, Eng and Japanese. what smart to do?
| 0 |
Should I find free LLMs and translate them, or just use the OpenAI API, which costs money?
In the future, if possible, I just want to drag and drop a CSV file so the backend translates in the background.
I'm still new and need to hear opinions.
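The drag-and-drop pipeline itself is simple either way — a sketch (the `name_es` column and the `translate` stub are made up; you'd swap the stub for a real call to a local LLM or a paid API):

```python
import csv
import io

def translate(text: str, target_lang: str) -> str:
    """Placeholder — swap in a real call to a local LLM or a paid API."""
    return f"[{target_lang}] {text}"  # stub so the pipeline is testable offline

def translate_rows(rows, targets=("zh", "en", "ja")):
    # Add one translated column per target language, keep original columns.
    for row in rows:
        yield {**row, **{lang: translate(row["name_es"], lang) for lang in targets}}

# A dropped file would just feed a file object in here instead of this string.
sample = io.StringIO("name_es\nZapatillas de correr\n")
for out in translate_rows(csv.DictReader(sample)):
    print(out["zh"])  # prints [zh] Zapatillas de correr
```

At 10k short product rows, cost on a paid API is usually modest, so the real decision is quality per language pair — worth spot-checking a free local model on a sample first.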
| 2025-05-05T18:39:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfja3c/i_got_10k_products_to_translate_from_spanish_to/
|
ballbeamboy2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfja3c
| false | null |
t3_1kfja3c
|
/r/LocalLLaMA/comments/1kfja3c/i_got_10k_products_to_translate_from_spanish_to/
| false | false |
self
| 0 | null |
GPU Advice
| 3 |
I’m trying to decide between an RTX 4000 Ada 20GB or 2x RTX A2000 12GB.
The dual A2000 would be half the cost of a RTX 4000.
I need to go with sff cards due to space constraints and energy efficiency.
Thoughts?
| 2025-05-05T18:41:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfjcar/gpu_advice/
|
synthchef
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfjcar
| false | null |
t3_1kfjcar
|
/r/LocalLLaMA/comments/1kfjcar/gpu_advice/
| false | false |
self
| 3 | null |
This is how small models single-handedly beat all the big ones in benchmarks...
| 119 |
If you ever wondered how the small models always beat the big models in the benchmarks, this is how...
| 2025-05-05T18:47:34 |
Cool-Chemical-5629
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfjhlv
| false | null |
t3_1kfjhlv
|
/r/LocalLLaMA/comments/1kfjhlv/this_is_how_small_models_singlehandedly_beat_all/
| false | false | 119 |
{'enabled': True, 'images': [{'id': '8qEVn0h_fvFzs53PFQJdKUGsWFPxHlRWsMdgj6gzQX0', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?width=108&crop=smart&auto=webp&s=585acf4d989bbbaf8fece428c93321cb2a1fca1f', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?width=216&crop=smart&auto=webp&s=7aa4c13584b4d7821471a06471bfaf9ba2a1a680', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?width=320&crop=smart&auto=webp&s=15062534197465196f9d07720d9ada5684a28f3a', 'width': 320}, {'height': 355, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?width=640&crop=smart&auto=webp&s=c8bb724a8000281ed4503c39eeec9365cc1bf41c', 'width': 640}, {'height': 533, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?width=960&crop=smart&auto=webp&s=73d121808ba2c4fcb39c74dac5cc1b1935279c30', 'width': 960}, {'height': 600, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?width=1080&crop=smart&auto=webp&s=41b66c315231d833c73e6fe0e066d576b4a92f38', 'width': 1080}], 'source': {'height': 826, 'url': 'https://preview.redd.it/kammsi5ce0ze1.png?auto=webp&s=a4efa453463297d2cacb4796b6dda3fe6e82a45e', 'width': 1486}, 'variants': {}}]}
|
||
What's the Best Local "Sci-Fi Buddy" LLM Setup in 2025? (Memory & Tools Needed!)
| 2 |
Hey folks,
I've been running LLMs locally since the early days but haven't kept up with all the interface/memory management advancements. I'm looking beyond coding tools (like Continue Dev/Roo) and want to create a fun, persistent "sci-fi buddy" chatbot on my PC for chat and productivity.
What's the current state-of-the-art setup for this? My biggest hurdle is long-term memory – there are so many RAG/embedding options now! Is there a solid chat interface that works well with something like Ollama and handles memory automatically, remembering our chats without needing massive context windows?
Bonus points: Needs good tool use capabilities (e.g., accessing local files, analyzing code).
What setups (front-ends, memory solutions, etc.) are you all using or recommend for a capable, local AI companion? Ollama preferred because I'm used to it, but I'm open-minded!
Thanks!
| 2025-05-05T18:51:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfjlcw/whats_the_best_local_scifi_buddy_llm_setup_in/
|
CapnFlisto
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfjlcw
| false | null |
t3_1kfjlcw
|
/r/LocalLLaMA/comments/1kfjlcw/whats_the_best_local_scifi_buddy_llm_setup_in/
| false | false |
self
| 2 | null |
GroqStreamChain with Llama!
| 1 |
[removed]
| 2025-05-05T19:06:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfjybk/groqstreamchain_with_llama/
|
pr0m3la
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfjybk
| false | null |
t3_1kfjybk
|
/r/LocalLLaMA/comments/1kfjybk/groqstreamchain_with_llama/
| false | false |
self
| 1 | null |
We have Deep Research at Home, and it verifies all cited material post-generation
| 2 | 2025-05-05T19:09:28 |
https://github.com/atineiatte/deep-research-at-home
|
atineiatte
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfk1j3
| false | null |
t3_1kfk1j3
|
/r/LocalLLaMA/comments/1kfk1j3/we_have_deep_research_at_home_and_it_verifies_all/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'wdADkh9jiReq7cQ6jTpVXENJYjSpVJNzKhyynmET6E4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?width=108&crop=smart&auto=webp&s=9ba6462b1ec348c78d183bbebd2712e698348465', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?width=216&crop=smart&auto=webp&s=f522b066757b2c5a4fe6a5da3d8d01bf3fb44b98', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?width=320&crop=smart&auto=webp&s=6ee885bda54048d5ff26b21d24567310a5af61bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?width=640&crop=smart&auto=webp&s=45044db56a8c6580d033cd61a8ea9380b7b487bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?width=960&crop=smart&auto=webp&s=025dab3196d1f3f2d9fdfe228df9042d075169cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?width=1080&crop=smart&auto=webp&s=5737551c35863d1760bcfee9aa135d37949460f3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/g42sE3d4e0rww2wus87Oo2fDwX_1CTwB1qHQPBy8pOI.jpg?auto=webp&s=3789f6731b70abf1b37565a4728fa1ecc4de74b2', 'width': 1200}, 'variants': {}}]}
|
||
best model under 8B that is good at writing?
| 10 |
I am looking for the best local model that is good at revising / formatting text! I take a lot of notes, write a lot of emails, blog posts, etc. A lot of these models have terrible, formal writing outputs, and I'd like something that is more creative.
| 2025-05-05T19:11:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfk3fc/best_model_under_8b_that_is_good_at_writing/
|
Sudonymously
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfk3fc
| false | null |
t3_1kfk3fc
|
/r/LocalLLaMA/comments/1kfk3fc/best_model_under_8b_that_is_good_at_writing/
| false | false |
self
| 10 | null |
A step-by-step guide for fine-tuning the Qwen3-32B model on the medical reasoning dataset within an hour.
| 57 |
Building on the success of QwQ and Qwen2.5, Qwen3 represents a major leap forward in reasoning, creativity, and conversational capabilities. With open access to both dense and Mixture-of-Experts (MoE) models, ranging from 0.6B to 235B-A22B parameters, Qwen3 is designed to excel in a wide array of tasks.
In this tutorial, we will fine-tune the Qwen3-32B model on a medical reasoning dataset. The goal is to optimize the model's ability to reason and respond accurately to patient queries, ensuring it adopts a precise and efficient approach to medical question-answering.
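A fine-tune like this usually starts with a data-prep step that flattens each dataset row into a single training string. Below is a minimal sketch of that step; the field names (`Question`, `Complex_CoT`, `Response`) and the prompt template are illustrative assumptions, not the tutorial's exact code.

```python
# Sketch of the data-prep step for a medical-reasoning fine-tune.
# Field names and template are assumptions for illustration only.

PROMPT_TEMPLATE = (
    "Below is a medical question. Reason step by step, then answer.\n\n"
    "### Question:\n{question}\n\n"
    "### Reasoning:\n{cot}\n\n"
    "### Answer:\n{answer}"
)

def format_example(row: dict) -> str:
    """Turn one dataset row into a single supervised training string."""
    return PROMPT_TEMPLATE.format(
        question=row["Question"],
        cot=row["Complex_CoT"],
        answer=row["Response"],
    )

rows = [
    {"Question": "What does a BP reading of 150/95 suggest?",
     "Complex_CoT": "Systolic 150 and diastolic 95 both exceed normal limits.",
     "Response": "Stage 2 hypertension; recommend follow-up measurement."},
]
texts = [format_example(r) for r in rows]
```

The resulting `texts` list is what a trainer (e.g. a LoRA/SFT setup) would tokenize and train on.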
| 2025-05-05T19:11:55 |
https://www.datacamp.com/tutorial/fine-tuning-qwen3
|
kingabzpro
|
datacamp.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfk3tw
| false | null |
t3_1kfk3tw
|
/r/LocalLLaMA/comments/1kfk3tw/a_stepbystep_guide_for_finetuning_the_qwen332b/
| false | false |
default
| 57 | null |
Super happy with the results of my yolov5 component identifier!
| 1 |
[removed]
| 2025-05-05T19:19:16 |
https://www.reddit.com/gallery/1kfkaka
|
oodelay
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfkaka
| false | null |
t3_1kfkaka
|
/r/LocalLLaMA/comments/1kfkaka/super_happy_with_the_results_of_my_yolov5/
| false | false | 1 | null |
|
Claude full system prompt with all tools is now ~25k tokens.
| 499 | 2025-05-05T19:25:13 |
https://github.com/asgeirtj/system_prompts_leaks/blob/main/claude.txt
|
StableSable
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfkg29
| false | null |
t3_1kfkg29
|
/r/LocalLLaMA/comments/1kfkg29/claude_full_system_prompt_with_all_tools_is_now/
| false | false | 499 |
{'enabled': False, 'images': [{'id': 'er_RIfng-0cxDV_pYaQevemCyE2RxgF3OOn_cjqPq04', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?width=108&crop=smart&auto=webp&s=7e25b3b59ffcd63bf14c84871d38259e4878ac3a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?width=216&crop=smart&auto=webp&s=4036bd623c05d94adb14fa69271a72c18c1764a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?width=320&crop=smart&auto=webp&s=dbe3336ace7975718c05412df16319a4b45c3655', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?width=640&crop=smart&auto=webp&s=0ac3f0d5a570d0bb7b8c1d30d521411d0c135796', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?width=960&crop=smart&auto=webp&s=7e9f528880f4a881fb939c22486d47a592271b67', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?width=1080&crop=smart&auto=webp&s=410a5c1735c01c1cdbd7e58c7637a6926f36f760', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-gpcWCAltY66ZM29aKxaxLCWgfV5wmtUemjqB6JURhI.jpg?auto=webp&s=a6c12c18415947ba79423cc358a1d7941fb01302', 'width': 1200}, 'variants': {}}]}
|
||
Sharding for Parallel Inference Processing
| 1 |
Distributing inference compute across many devices seems like a reasonable way to escape our weenie-GPU purgatory.
As I understand there are two challenges.
• Transfer speed between CPUs is a bottleneck (like NVLink and Fabric Interconnect).
• Getting two separate CPUs to parallel-compute at a granular level of synchronization, working on the same next token, seems tough to accomplish.
I know I don’t know. Would anyone here be willing to shed light on if this non-nVidia parallel compute path is being worked on or if that path has potential to help make local model implementation faster?
| 2025-05-05T19:39:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfkspz/sharding_for_parallel_inference_processing/
|
jxjq
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfkspz
| false | null |
t3_1kfkspz
|
/r/LocalLLaMA/comments/1kfkspz/sharding_for_parallel_inference_processing/
| false | false |
self
| 1 | null |
Open-Source Real-Time Voice Chat with Local 24B Model (~500ms Latency!)
| 1 |
[removed]
| 2025-05-05T19:41:07 |
https://youtube.com/watch?v=HM_IQuuuPX8&si=tt8qcpbkvz5ZIOJB
|
Lonligrin
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfkujy
| false |
{'oembed': {'author_name': 'Linguflex', 'author_url': 'https://www.youtube.com/@Linguflex', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HM_IQuuuPX8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="🤯 500ms Real-Time AI Voice Chat Demo: Feels Like Talking to a Human!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/HM_IQuuuPX8/hqdefault.jpg', 'thumbnail_width': 480, 'title': '🤯 500ms Real-Time AI Voice Chat Demo: Feels Like Talking to a Human!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kfkujy
|
/r/LocalLLaMA/comments/1kfkujy/opensource_realtime_voice_chat_with_local_24b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'MdMMV8uX-8fezO8h0dcvHSUXP-5A0n9vkdYroMmXHOQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/FrSeQHAfH8ucYuqX6ERN969lQMmZVCPB5a9bZoQMkEk.jpg?width=108&crop=smart&auto=webp&s=85f677228ea9fdd19d4bb3a9115ff97897199157', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/FrSeQHAfH8ucYuqX6ERN969lQMmZVCPB5a9bZoQMkEk.jpg?width=216&crop=smart&auto=webp&s=8509836566c4e75278ec223b0871188b62003164', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/FrSeQHAfH8ucYuqX6ERN969lQMmZVCPB5a9bZoQMkEk.jpg?width=320&crop=smart&auto=webp&s=52de6f797c358b77b4905f13d71e5458e1d5d320', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/FrSeQHAfH8ucYuqX6ERN969lQMmZVCPB5a9bZoQMkEk.jpg?auto=webp&s=6adb38aaad4363bc8fa4d9a052a2e606ef04faf7', 'width': 480}, 'variants': {}}]}
|
|
2x Rtx4000 vs 4x Rtx2000 ( ada)
| 1 |
[removed]
| 2025-05-05T19:53:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfl4xk/2x_rtx4000_vs_4x_rtx2000_ada/
|
lace_meUPadam
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfl4xk
| false | null |
t3_1kfl4xk
|
/r/LocalLLaMA/comments/1kfl4xk/2x_rtx4000_vs_4x_rtx2000_ada/
| false | false |
self
| 1 | null |
How to add generation to LLM?
| 0 |
Hello! I know that you can create projectors to add more modalities to an LLM and make the model learn abstract stuff (e.g., images). However, it works by combining projector vectors with text vectors in the input, but the output is still text!
Is there a way to make the projectors for outputs so that the model can generate stuff (e.g., speech)?
Thanks!
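The input side of the question can be sketched very simply: a projector is just a learned map from the new modality's embedding space into the LLM's hidden space. Below is a toy version with made-up dimensions; for *output* modalities, the mirror idea is a head mapping hidden states to discrete audio-codec tokens that a vocoder decodes, which is one common approach (not the only one).

```python
# Toy modality projector: a linear map from a hypothetical speech-codec
# embedding space (dim 3) into the LLM's hidden space (dim 4).
# Shapes and weights are illustrative assumptions.

def linear(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

AUDIO_DIM, HIDDEN_DIM = 3, 4
W_in = [[0.1 * (i + j) for j in range(AUDIO_DIM)] for i in range(HIDDEN_DIM)]

audio_emb = [1.0, 2.0, 3.0]          # one "audio token" embedding
projected = linear(W_in, audio_emb)  # now lives in the LLM's hidden space
```

In training, `W_in` (and an analogous output head) would be learned jointly with, or on top of, the frozen LLM.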
| 2025-05-05T20:00:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kflbqv/how_to_add_generation_to_llm/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kflbqv
| false | null |
t3_1kflbqv
|
/r/LocalLLaMA/comments/1kflbqv/how_to_add_generation_to_llm/
| false | false |
self
| 0 | null |
How to add generation to LLM?
| 0 |
Hello! I know that you can create projectors to add more modalities to an LLM and make the model learn abstract stuff (e.g., images). However, it works by combining projector vectors with text vectors in the input, but the output is still text!
Is there a way to make the projectors for outputs so that the model can generate stuff (e.g., speech)?
Thanks!
| 2025-05-05T20:00:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kflc2z/how_to_add_generation_to_llm/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kflc2z
| false | null |
t3_1kflc2z
|
/r/LocalLLaMA/comments/1kflc2z/how_to_add_generation_to_llm/
| false | false |
self
| 0 | null |
Qwen3-32b wrote this, who is it pretending to be?
| 1 |
[removed]
| 2025-05-05T20:15:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kflpha/qwen332b_wrote_this_who_is_it_pretending_to_be/
|
Legitimate-Task-6713
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kflpha
| false | null |
t3_1kflpha
|
/r/LocalLLaMA/comments/1kflpha/qwen332b_wrote_this_who_is_it_pretending_to_be/
| false | false |
self
| 1 | null |
AI agents: We need less hype and more reliability
| 3 |
2025 is supposed to be the year of agents according to the big tech players. I was skeptical at first, but better models, cheaper tokens, more powerful tools (MCP, memory, RAG, etc.) and 10X inference speed are making many agent use cases suddenly possible and economical. But what most customers struggle with isn't the capabilities; it's the reliability.
# Less Hype, More Reliability
Most customers don't need complex AI systems. They need simple and reliable automation workflows with clear ROI. The "book a flight" agent demos are very far away from this reality. Reliability, transparency, and compliance are top criteria when firms are evaluating AI solutions.
Here are a few "non-fancy" AI agent use cases that automate tasks and execute them in a highly accurate and reliable way:
1. **Web monitoring:** A leading market maker built their own in-house web monitoring tool, but realized they didn't have the expertise to operate it at scale.
2. **Web scraping:** A hedge fund with 100s of web scrapers was struggling to keep up with maintenance and couldn't scale. Their data engineers were overwhelmed with a long backlog of PM requests.
3. **Company filings:** A large quant fund relied on manual work by content experts to extract commodity data from company filings with complex tables, charts, etc.
These are all relatively unexciting use cases that I automated with AI agents, and it is exactly such use cases where AI adds the most value.
Agents won't eliminate our jobs, but they will automate tedious, repetitive work such as web scraping, form filling, and data entry.
# Buy vs Make
Many of our customers tried to build their own AI agents, but often struggled to get them to the desired reliability. The top reasons why these in-house initiatives often fail:
1. Building the agent is only 30% of the battle. Deployment, maintenance, data quality/reliability are the hardest part.
2. The problem shifts from "can we pull the text from this document?" to "how do we teach an LLM to extract the data, validate the output, and deploy it with confidence into production?"
3. Getting > 95% accuracy in real world complex use cases requires state-of-the-art LLMs, but also:
* orchestration (parsing, classification, extraction, and splitting)
* tooling that lets non-technical domain experts quickly iterate, review results, and improve accuracy
* comprehensive automated data quality checks (e.g. with regex and LLM-as-a-judge)
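The last bullet — layered validation — can be sketched as cheap deterministic checks first, with an LLM judge only for rows that pass them. The field names, regexes, and the `llm_judge` stub below are illustrative assumptions, not a real pipeline:

```python
# Minimal sketch of automated data-quality checks: regex validation first,
# then a (stubbed) LLM-as-a-judge pass. Schema and rules are assumptions.
import re

PRICE_RE = re.compile(r"^\$?\d+(?:,\d{3})*(?:\.\d{2})?$")

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one extracted row."""
    errors = []
    if not re.fullmatch(r"[A-Z]{1,5}", str(row.get("ticker", ""))):
        errors.append("bad ticker")
    if not PRICE_RE.match(str(row.get("price", ""))):
        errors.append("bad price")
    return errors

def llm_judge(row: dict) -> bool:
    """Stub for an LLM-as-a-judge call (assumed interface).

    A real system would prompt a model with the source snippet and the
    extraction and ask whether the extraction is faithful."""
    return True

rows = [
    {"ticker": "AAPL", "price": "189.25"},
    {"ticker": "apple", "price": "lots"},
]
clean = [r for r in rows if not validate_row(r) and llm_judge(r)]
```

Running the regex layer first keeps the expensive judge calls off obviously malformed rows, which is what makes this economical at scale.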
# Outlook
Data is the competitive edge of many financial services firms, and it has been traditionally limited by the capacity of their data scientists. This is changing now as data and research teams can do a lot more with a lot less by using AI agents across the entire data stack. Automating well constrained tasks with highly-reliable agents is where we are at now.
But we should not narrowly see AI agents as replacing work that already gets done. Most AI agents will be used to automate tasks/research that humans/rule-based systems never got around to doing before because it was too expensive or time consuming.
| 2025-05-05T20:33:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kfm5hl/ai_agents_we_need_less_hype_and_more_reliability/
|
madredditscientist
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfm5hl
| false | null |
t3_1kfm5hl
|
/r/LocalLLaMA/comments/1kfm5hl/ai_agents_we_need_less_hype_and_more_reliability/
| false | false |
self
| 3 | null |
Ollama 0.6.8 released, stating performance improvements for Qwen 3 MoE models (30b-a3b and 235b-a22b) on NVIDIA and AMD GPUs.
| 50 |
The update also includes:
>Fixed `GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed` issue caused by conflicting installations
>Fixed a memory leak that occurred when providing images as input
>`ollama show` will now correctly label older vision models such as `llava`
>Reduced out of memory errors by improving worst-case memory estimations
>Fix issue that resulted in a `context canceled` error
**Full Changelog**: [https://github.com/ollama/ollama/releases/tag/v0.6.8](https://github.com/ollama/ollama/releases/tag/v0.6.8)
| 2025-05-05T20:47:33 |
https://github.com/ollama/ollama/releases/tag/v0.6.8
|
swagonflyyyy
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kfmic3
| false | null |
t3_1kfmic3
|
/r/LocalLLaMA/comments/1kfmic3/ollama_068_released_stating_performance/
| false | false | 50 |
{'enabled': False, 'images': [{'id': 'WPPoO0IlQEsagS935AbTb5ysxHWgXhbv3p_JICaLzZg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?width=108&crop=smart&auto=webp&s=5decbbdfa47411a0e056a49a8f57d293a832d805', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?width=216&crop=smart&auto=webp&s=bc119422b50342c7037cd8b812e7443923c835fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?width=320&crop=smart&auto=webp&s=a4309616a6ecf993b2b4f0566bcf7a49a7b5bdda', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?width=640&crop=smart&auto=webp&s=59a2d6daceec4522a308302ffb73397db28bdd3d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?width=960&crop=smart&auto=webp&s=3c9e0159f591cd2d93ea33774d8721a713b17f83', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?width=1080&crop=smart&auto=webp&s=ecb9a53864311eb1050a440db12c87c0f13b9e73', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0lw8qn5OefTTlWMq6-CTI2D_wJSi67bbux5PetB3scY.jpg?auto=webp&s=b81984002a29215c4dde235630a2b201f2b65900', 'width': 1200}, 'variants': {}}]}
|