title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen3 on LiveBench
| 74 |
[https://livebench.ai/#/](https://livebench.ai/#/)
https://preview.redd.it/9wg6nkargxxe1.png?width=925&format=png&auto=webp&s=9d9016c13d45318a17731b376fb4e39f640251aa
https://preview.redd.it/n6nx96prgxxe1.png?width=947&format=png&auto=webp&s=4b4d9a9ac2f50f9bd95667bc088b3d388536d09b
https://preview.redd.it/bqf2671sgxxe1.png?width=940&format=png&auto=webp&s=e465c0584d3a19b3dd372b86397f737ac8d04e5c
https://preview.redd.it/3mi1zmhxgxxe1.png?width=943&format=png&auto=webp&s=069841c9dada3aaf5c340977f6e0db382e868c53
https://preview.redd.it/28rqjpuxgxxe1.png?width=1048&format=png&auto=webp&s=dacdc258ff44e6eabcb2bc94c555a7e84d36662b
| 2025-04-30T07:52:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbazrd/qwen3_on_livebench/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbazrd
| false | null |
t3_1kbazrd
|
/r/LocalLLaMA/comments/1kbazrd/qwen3_on_livebench/
| false | false | 74 | null |
|
Any M3 ultra owners tried new Qwen models?
| 2 |
How’s the performance?
| 2025-04-30T08:03:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbb55q/any_m3_ultra_owners_tried_new_qwen_models/
|
No_Conversation9561
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbb55q
| false | null |
t3_1kbb55q
|
/r/LocalLLaMA/comments/1kbb55q/any_m3_ultra_owners_tried_new_qwen_models/
| false | false |
self
| 2 | null |
uhhh.. what?
| 1 |
[removed]
| 2025-04-30T08:18:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbcmt/uhhh_what/
|
This_Interview_8571
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbcmt
| false | null |
t3_1kbbcmt
|
/r/LocalLLaMA/comments/1kbbcmt/uhhh_what/
| false | false |
self
| 1 | null |
deepseek-ai/DeepSeek-Prover-V2-671B · Hugging Face
| 288 | 2025-04-30T08:18:47 |
https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbcp8
| false | null |
t3_1kbbcp8
|
/r/LocalLLaMA/comments/1kbbcp8/deepseekaideepseekproverv2671b_hugging_face/
| false | false | 288 |
{'enabled': False, 'source': 'https://external-preview.redd.it/MfS0kbAv6ZSxumHFhRuKL1EVBrJ457E-QoycmpgMTBk.jpg', 'width': 1200, 'height': 648}
|
||
uhh.. what?
| 11 |
I have no idea what's going on with Qwen3, but I've never seen this type of hallucinating before. I also noticed that the smaller models, run locally, seem to overthink and repeat themselves infinitely.
https://chat.qwen.ai/s/49cf72ca-7852-4d99-8299-5e4827d925da?fev=0.0.86
| 2025-04-30T08:21:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbdra/uhh_what/
|
MigorRortis96
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbdra
| false | null |
t3_1kbbdra
|
/r/LocalLLaMA/comments/1kbbdra/uhh_what/
| false | false |
self
| 11 | null |
Unsloth training times?
| 0 |
Hello all, just enquiring who among us has done some Unsloth training? Following the GRPO steps against Llama 3.1 8B, 250 steps takes approx. 8 hours on my 3060. Wondering what sort of speeds others are getting; lately I'm starting to feel my 3060s are just not quite the super weapons I thought they were.
| 2025-04-30T08:33:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbjn7/unsloth_training_times/
|
oldschooldaw
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbjn7
| false | null |
t3_1kbbjn7
|
/r/LocalLLaMA/comments/1kbbjn7/unsloth_training_times/
| false | false |
self
| 0 | null |
Which qwen version should I install?
| 0 |
I just got a PC with 2 RTX 4070 Ti Supers (16 GB VRAM each, 32 GB total) and two DDR5 RAM sticks totaling 64 GB. I plan to use LLMs locally to write papers, do research, make presentations, and make reports.
I want to install LM Studio and Qwen3. Can someone explain or suggest which Qwen version and which quantization I should install? Any direction on where to learn about Q4 vs. Q6 vs. other versions?
| 2025-04-30T08:35:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbkel/which_qwen_version_should_i_install/
|
West-Guess-69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbkel
| false | null |
t3_1kbbkel
|
/r/LocalLLaMA/comments/1kbbkel/which_qwen_version_should_i_install/
| false | false |
self
| 0 | null |
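As a rough rule of thumb (an illustrative sketch; the bits-per-weight figures below are approximate averages, not exact GGUF format constants), a quantized model's file size is close to parameter count × bits per weight ÷ 8, which is usually the fastest way to see what fits in a given VRAM budget:

```python
# Approximate effective bits per weight for common GGUF quant levels
# (rough averages including block overhead -- assumptions, not exact values).
BITS_PER_WEIGHT = {"Q4_K_M": 4.8, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0}

def approx_size_gb(params_billions: float, quant: str) -> float:
    """Estimate model file size in GB for a given quant level."""
    return params_billions * BITS_PER_WEIGHT[quant] / 8

# e.g. a 32B model on 32 GB of VRAM: Q4_K_M leaves headroom for the KV cache,
# while Q6_K is already tight and Q8_0 won't fit at all.
for q in ("Q4_K_M", "Q6_K", "Q8_0"):
    print(f"32B at {q}: ~{approx_size_gb(32, q):.1f} GB")
```

The KV cache and runtime overhead come on top of the file size, so leaving a few GB free is the usual advice.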
Has unsloth fixed the qwen3 GGUFs yet?
| 4 |
I'd like an update when it happens. Seeing quite a few bugs in the initial versions.
| 2025-04-30T08:35:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbkkw/has_unsloth_fixed_the_qwen3_ggufs_yet/
|
thebadslime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbkkw
| false | null |
t3_1kbbkkw
|
/r/LocalLLaMA/comments/1kbbkkw/has_unsloth_fixed_the_qwen3_ggufs_yet/
| false | false |
self
| 4 | null |
unsloth/Qwen3-30B-A3B-GGUF not working in LM Studio? "Unknown model architecture"
| 0 |
Sorry if this is a noob question, but I keep getting this error
"llama.cpp error: 'error loading model architecture: unknown model architecture: 'qwen3moe''"
| 2025-04-30T08:48:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbqak/unslothqwen330ba3bgguf_not_working_in_lm_studio/
|
SameLotus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbqak
| false | null |
t3_1kbbqak
|
/r/LocalLLaMA/comments/1kbbqak/unslothqwen330ba3bgguf_not_working_in_lm_studio/
| false | false |
self
| 0 | null |
DeepSeek-Prover-V2-671B is released
| 166 |
[https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B)
| 2025-04-30T08:54:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbbt74/deepseekproverv2671b_is_released/
|
Thin_Ad7360
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbt74
| false | null |
t3_1kbbt74
|
/r/LocalLLaMA/comments/1kbbt74/deepseekproverv2671b_is_released/
| false | false |
self
| 166 |
{'enabled': False, 'source': 'https://external-preview.redd.it/MfS0kbAv6ZSxumHFhRuKL1EVBrJ457E-QoycmpgMTBk.jpg', 'width': 1200, 'height': 648}
|
Prime Intellect to successfully complete the first decentralized RL training of a 32B model
| 1 | 2025-04-30T08:54:31 |
https://x.com/PrimeIntellect/status/1917295731532259628
|
caligolae
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbtca
| false | null |
t3_1kbbtca
|
/r/LocalLLaMA/comments/1kbbtca/prime_intellect_to_successfully_complete_the/
| false | false | 1 |
{'enabled': False, 'source': 'https://external-preview.redd.it/NZsWg2IXKyQXSGlwfNop19KB4fbWinIVevTz8wQtVeg.jpg', 'width': 1578, 'height': 1336}
|
||
New model DeepSeek-Prover-V2-671B
| 73 |
link: [https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B/tree/main](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B/tree/main)
| 2025-04-30T09:06:09 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbbz1n
| false | null |
t3_1kbbz1n
|
/r/LocalLLaMA/comments/1kbbz1n/new_model_deepseekproverv2671b/
| false | false | 73 |
{'enabled': True, 'source': 'https://preview.redd.it/v83oaiaztxxe1.png', 'width': 556, 'height': 329}
|
||
GPT4ALL - Language Model to reference local Documents
| 1 |
[removed]
| 2025-04-30T09:12:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbc28h/gpt4all_language_model_to_reference_local/
|
Ok_Yout_9756
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbc28h
| false | null |
t3_1kbc28h
|
/r/LocalLLaMA/comments/1kbc28h/gpt4all_language_model_to_reference_local/
| false | false |
self
| 1 | null |
I built a VSCode extension to speed up Copilot prompts with reusable guidelines
| 1 |
[removed]
| 2025-04-30T09:16:05 |
Dramatic-Area7701
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbc3nh
| false | null |
t3_1kbc3nh
|
/r/LocalLLaMA/comments/1kbc3nh/i_built_a_vscode_extension_to_speed_up_copilot/
| false | false | 1 |
{'enabled': True, 'source': 'https://preview.redd.it/lqo6avtmvxxe1.gif', 'width': 1198, 'height': 784}
|
||
Language identification model
| 1 |
Can someone suggest a good language-detection model for Indian languages?
I'm exploring Facebook MMS and SpeechBrain.
Do you recommend any other model
(preferably considering low-latency requirements)?
| 2025-04-30T09:32:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbcbdj/language_identification_model/
|
Inevitable_Cold_6214
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbcbdj
| false | null |
t3_1kbcbdj
|
/r/LocalLLaMA/comments/1kbcbdj/language_identification_model/
| false | false |
self
| 1 | null |
I am looking for a recommendation: best I2V local model, with my specifications
| 1 |
[removed]
| 2025-04-30T09:35:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbcd05/i_am_looking_for_a_recommendation_best_i2v_local/
|
dorian_elgato
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbcd05
| false | null |
t3_1kbcd05
|
/r/LocalLLaMA/comments/1kbcd05/i_am_looking_for_a_recommendation_best_i2v_local/
| false | false |
self
| 1 | null |
Best frontend to access LM studio remotely (MLX support needed)
| 1 |
Hi,
I use an M3 Ultra to access different local LLMs with different system prompts. I tried Ollama + Open WebUI, but the lack of MLX support makes it very slow.
As of now, I use LM Studio locally, but I would also like to access the models remotely over a Tailscale network.
I tried to plug Open WebUI into LM Studio, but the integration with workspaces is not very good, so I'm looking for another frontend that would let me access the LM Studio backend. Alternatively, a backend that supports MLX models could replace LM Studio (ideally something that doesn't require writing code each time I want to change and configure a model).
Any idea?
Thx!
| 2025-04-30T09:38:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbcefq/best_frontend_to_access_lm_studio_remotely_mlx/
|
HappyFaithlessness70
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbcefq
| false | null |
t3_1kbcefq
|
/r/LocalLLaMA/comments/1kbcefq/best_frontend_to_access_lm_studio_remotely_mlx/
| false | false |
self
| 1 | null |
LLM selection
| 1 |
[removed]
| 2025-04-30T09:50:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbckhw/llm_selection/
|
began_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbckhw
| false | null |
t3_1kbckhw
|
/r/LocalLLaMA/comments/1kbckhw/llm_selection/
| false | false |
self
| 1 | null |
Trained model vs. train Model?
| 0 |
Hello,
So what are the advantages of pre-trained models versus models I train myself on data I curate manually?
Is it better for various tasks to take a pre-trained LLM and query it on my data, or should I train my own model solely on the data available to me?
Thanks
| 2025-04-30T09:53:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbclyw/trained_model_vs_train_model/
|
Visible-Employee-403
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbclyw
| false | null |
t3_1kbclyw
|
/r/LocalLLaMA/comments/1kbclyw/trained_model_vs_train_model/
| false | false |
self
| 0 | null |
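For most tasks the practical middle ground is neither option in its pure form: keep a pre-trained LLM and feed it your curated data at query time. A minimal sketch of that pattern (the retrieval here is a naive keyword overlap, purely illustrative; real setups use embeddings):

```python
def _words(text: str) -> set[str]:
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip("?.,!") for w in text.lower().split()}

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (illustrative only)."""
    qwords = _words(query)
    return sorted(docs, key=lambda d: len(qwords & _words(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved documents into the prompt of a pre-trained model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Our warranty lasts 24 months.", "Shipping takes 3-5 days.", "Returns need a receipt."]
print(build_prompt("How long is the warranty?", docs))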
Is qwen3 good or slop
| 1 |
[removed]
| 2025-04-30T10:09:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbcu9h/is_qwen3_good_or_slop/
|
FRENLYFROK
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbcu9h
| false | null |
t3_1kbcu9h
|
/r/LocalLLaMA/comments/1kbcu9h/is_qwen3_good_or_slop/
| false | false |
self
| 1 | null |
How did small (<8B) model evolve in the last 3 years?
| 7 |
I could not find this info (or a table) anywhere.
I'd like to know how today's small models perform compared to models from 2-3 years ago (like Mistral 7B v0.3, for example).
| 2025-04-30T10:10:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbcuvk/how_did_small_8b_model_evolve_in_the_last_3_years/
|
Robert__Sinclair
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbcuvk
| false | null |
t3_1kbcuvk
|
/r/LocalLLaMA/comments/1kbcuvk/how_did_small_8b_model_evolve_in_the_last_3_years/
| false | false |
self
| 7 | null |
The sad state of the VRAM market
| 0 |
The chart visually shows the gap in the market: above 24GB, $/GB jumps from ~40 to 80-100 for new cards.
Nvidia's newer cards also offer less than their 30 and 40 series did. Buy less, pay more.
| 2025-04-30T10:19:08 |
Aphid_red
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbczgt
| false | null |
t3_1kbczgt
|
/r/LocalLLaMA/comments/1kbczgt/the_sad_state_of_the_vram_market/
| false | false | 0 |
{'enabled': True, 'source': 'https://preview.redd.it/tixb6adp6yxe1.png', 'width': 928, 'height': 539}
|
||
How do I find out what calibration data was used for the creation of AWQ models?
| 2 |
Based on the calibration data, two different AWQ models from the same base model could perform differently. So I think it’s essential to disclose the calibration dataset used.
| 2025-04-30T10:22:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbd11a/how_do_i_find_out_what_calibration_data_was_used/
|
appakaradi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbd11a
| false | null |
t3_1kbd11a
|
/r/LocalLLaMA/comments/1kbd11a/how_do_i_find_out_what_calibration_data_was_used/
| false | false |
self
| 2 | null |
GPU Goldmine: Turning Idle Processing Power into Profit
| 0 |
Hey.
I was thinking about the future of decentralized computing and how to contribute your GPU idle time at home.
The problem I am currently facing is that I have a GPU at home but don't use it most of the time. I did some research and found out that people contribute to Stockfish or Folding@home. Those two options are non-profit.
But there are solutions for profit as well (specifically for AI, since I am not in the crypto game) like Vast, Spheron, or Prime Intellect (although they haven't launched their contributing compute feature yet).
What else is there to contribute your GPU's idle time, and what do you think about the future of this?
| 2025-04-30T10:39:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbdagv/gpu_goldmine_turning_idle_processing_power_into/
|
yachty66
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdagv
| false | null |
t3_1kbdagv
|
/r/LocalLLaMA/comments/1kbdagv/gpu_goldmine_turning_idle_processing_power_into/
| false | false |
self
| 0 | null |
Qwen3 modality. Chat vs released models
| 4 |
I'm wondering if they are using some unreleased version not yet available on HF, since they do accept images as input at [chat.qwen.ai](http://chat.qwen.ai). Should we expect a multimodality update in the coming months? What did it look like in previous releases?
| 2025-04-30T10:42:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbdc5y/qwen3_modality_chat_vs_released_models/
|
silveroff
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdc5y
| false | null |
t3_1kbdc5y
|
/r/LocalLLaMA/comments/1kbdc5y/qwen3_modality_chat_vs_released_models/
| false | false |
self
| 4 | null |
Llama-server: "Exclude thought process when sending requests to API"
| 4 |
The setting is self-explanatory: it causes the model to exclude reasoning traces from past turns of the conversation, when generating its next response.
The non-obvious effect of this, however, is that it requires the model to reprocess its own previous response after removing reasoning traces. I just ran into this when testing the new Qwen3 models and it took me a while to figure out why it took so long before responding in multi-turn conversations.
Just thought someone might find this observation useful. I'm still not sure if turning it off will affect Qwen's performance; llama-server itself, for example, advises not to turn it off for DeepSeek R1.
| 2025-04-30T10:51:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbdhj0/llamaserver_exclude_thought_process_when_sending/
|
CattailRed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdhj0
| false | null |
t3_1kbdhj0
|
/r/LocalLLaMA/comments/1kbdhj0/llamaserver_exclude_thought_process_when_sending/
| false | false |
self
| 4 | null |
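Conceptually, the setting does something like the following before each request: strip the `<think>...</think>` spans from past assistant turns. Because earlier tokens in the conversation change, the server's cached prompt prefix no longer matches and those turns must be re-evaluated, which is the delay the post describes. An illustrative sketch of that behavior (not llama-server's actual code):

```python
import re

# Non-greedy match over reasoning spans, including newlines inside them.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(messages: list[dict]) -> list[dict]:
    """Remove reasoning traces from past assistant turns before resending."""
    return [
        {**m, "content": THINK_RE.sub("", m["content"])}
        if m["role"] == "assistant" else m
        for m in messages
    ]

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "<think>Let me reason...</think>Hello!"},
]
print(strip_reasoning(history)[1]["content"])  # -> Hello!
```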
Raspberry Pi 5: a small comparison between Qwen3 0.6B and Microsoft's new BitNet model
| 24 |
I've been doing some quick tests today, and wanted to share my results. I was testing this for a local voice assistant feature. The Raspberry Pi has 4Gb of memory, and is running a [smart home controller](https://www.candlesmarthome.com) at the same time.
**Qwen 3 0.6B, Q4 GGUF** *using llama.cpp*
- 0.6 GB in size
- Uses 600 MB of memory
- About 20 tokens per second
> `./llama-cli -m qwen3_06B_Q4.gguf -c 4096 -cnv -t 4`
https://preview.redd.it/0k8pgez1cyxe1.png?width=2644&format=png&auto=webp&s=374a7543adcf213a1835a5b8cd39c4c25bf4a0f4
**BitNet-b1.58-2B-4T** *using BitNet (Microsoft's fork of llama.cpp)*
- 1.2 GB in size
- Uses 300 MB of memory (!)
- About 7 tokens per second
https://preview.redd.it/mzftb1x4cyxe1.png?width=1784&format=png&auto=webp&s=a739aae47625710b9378a37b0ffac5cc030ab11f
> `python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf -p "Hello from BitNet on Pi5!" -cnv -t 4 -c 4096`
The low memory use of the BitNet model seems pretty impressive? But what I don't understand is why the BitNet model is relatively slow. Is there a way to improve performance of the BitNet model? Or is Qwen 3 just that fast?
| 2025-04-30T10:53:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbdi9l/raspberry_pi_5_a_small_comparison_between_qwen3/
|
privacyparachute
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdi9l
| false | null |
t3_1kbdi9l
|
/r/LocalLLaMA/comments/1kbdi9l/raspberry_pi_5_a_small_comparison_between_qwen3/
| false | false | 24 | null |
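One way to sanity-check the speed gap (an editorial back-of-envelope; the ~17 GB/s effective Pi 5 memory bandwidth is an assumption): on a device where decode is memory-bandwidth-bound, an upper bound on tokens per second is bandwidth divided by the bytes streamed per token, roughly the model's weight size:

```python
def max_tokens_per_sec(model_gb_read_per_token: float, bandwidth_gbps: float) -> float:
    """Upper bound on decode speed if every token streams the weights once."""
    return bandwidth_gbps / model_gb_read_per_token

BANDWIDTH = 17.0  # GB/s -- assumed effective Pi 5 memory bandwidth
print(f"Qwen3 0.6B Q4 (~0.6 GB): <= {max_tokens_per_sec(0.6, BANDWIDTH):.0f} tok/s")
print(f"BitNet 2B i2_s (~1.2 GB): <= {max_tokens_per_sec(1.2, BANDWIDTH):.0f} tok/s")
```

Both observed speeds (20 and 7 tok/s) sit well below these bounds, which suggests the Pi's CPU is compute-bound here, and that BitNet's extra work unpacking ternary weights, rather than memory traffic, is a plausible reason it decodes more slowly despite its small working set.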
|
GitHub - XiaomiMiMo/MiMo: MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining
| 44 | 2025-04-30T10:56:08 |
https://github.com/XiaomiMiMo/MiMo
|
marcocastignoli
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdk08
| false | null |
t3_1kbdk08
|
/r/LocalLLaMA/comments/1kbdk08/github_xiaomimimomimo_mimo_unlocking_the/
| false | false | 44 |
{'enabled': False, 'source': 'https://external-preview.redd.it/U_NJ00xovJr4fKj6AJwQ35Odzc6w6jc272-TXHXNUC4.jpg', 'width': 1200, 'height': 600}
|
||
Benchmarking VL Models with an illusion image
| 3 |
I used this picture to benchmark VL models.
https://preview.redd.it/99aov2h0eyxe1.png?width=794&format=png&auto=webp&s=cb6df5e0395a1f8341aadc4e3d358aa7506e13ac
I asked Claude, ChatGPT, Gemini, and Qwen to describe this picture of a man with a calendar in one shot :-D
The prompt was: "**describe it**"
**Qwen3-235B-A22B:**
The image shows a young person sitting outdoors, using a laptop computer. Here are the details:
1. **Person**:
   - The individual appears to be focused on the laptop screen.
   - They are wearing a blue and white striped shirt.
   - Their posture suggests they are engaged in some activity on the laptop.
2. **Laptop**:
   - The laptop is open, and the screen displays text or code, indicating that the person might be programming, coding, or working on a technical task.
   - The keyboard is visible, and the person's hands are positioned as if they are typing or navigating the device.
3. **Setting**:
   - The background indicates an outdoor environment, possibly a street or public area.
   - There is a large structure resembling scaffolding or a construction framework behind the person.
   - The ground appears to be concrete or paved, suggesting an urban setting.
4. **Additional Details**:
   - The person seems to be seated on a makeshift surface, such as a plastic sheet or tarp, which is placed on the ground.
   - The overall scene conveys a sense of resourcefulness, as the person is making use of available resources to work or study despite being in an unconventional setting.
This image highlights determination and adaptability, showcasing someone who is dedicated to their work or studies even in challenging circumstances.
**ChatGPT 4o:**
The image humorously depicts a young man appearing to use a "laptop," but upon closer inspection, it's clearly a large calculator that's been creatively positioned and edited to look like a laptop. The "screen" of the calculator shows a scenic image, mimicking a computer display, and the man is seriously interacting with it as if it's a functional device. This is likely meant as a joke or meme, emphasizing creativity or a humorous take on resourcefulness.
Would you like a caption idea or meme text for this image?
**Claude 3.7 Sonnet:**
The image shows a person using a laptop computer outdoors in what appears to be a makeshift or improvised setting. They are seated and working on the laptop which is placed on what looks like a bamboo or wooden mesh structure (possibly a traditional bed frame or mat). The background suggests a modest environment with some debris or unfinished construction visible. The person is wearing what appears to be a blue striped shirt or garment and is focused on the laptop screen which displays some content that isn't clearly visible. The setting suggests someone making do with available resources to access technology, possibly in a developing region or rural area.
and the winner !!
**Gemini 2.5 pro:**
This image depicts a person from a side-back view, sitting outdoors and interacting with an object that resembles a laptop.
The person is wearing a blue shirt with black stripes and has short, dark hair. They are focused intently on the object in front of them, with their hands positioned over the lower section as if typing.
The object itself, while shaped like an open laptop, **appears to be a cleverly designed calendar or planner**. The top section, where a screen would be, displays a scenic landscape image. The bottom section features a printed representation of a keyboard, along with what looks like calendar pages or dates (the number 12 is visible near the bottom right).
The setting seems to be outdoors on rough ground, possibly dirt or concrete. A simple woven chair or structure is visible behind the person. The overall impression is one of resourcefulness or perhaps a humorous take on technology access.
| 2025-04-30T11:02:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbdo49/benchmarking_vl_models_with_an_illusion_image/
|
Specific-Tax-6700
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdo49
| false | null |
t3_1kbdo49
|
/r/LocalLLaMA/comments/1kbdo49/benchmarking_vl_models_with_an_illusion_image/
| false | false | 3 | null |
|
UIGEN-T2 7B UI Reasoning Model with Forms, Charts, Checkout, and Animation support
| 2 | 2025-04-30T11:19:03 |
https://v.redd.it/rj5kdvauhyxe1
|
United-Rush4073
|
/r/LocalLLaMA/comments/1kbdxqp/uigent2_7b_ui_reasoning_model_with_forms_charts/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdxqp
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rj5kdvauhyxe1/DASHPlaylist.mpd?a=1748733550%2CZmU5ZmQyN2Y0Y2Y3NGFmOTZkNjcyYTI3YTBiMTMzZjVjOTA4NDJiNGE1MzExMjFkNzAwNDIyY2I0MGY2ODg0MA%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/rj5kdvauhyxe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rj5kdvauhyxe1/HLSPlaylist.m3u8?a=1748733550%2COTQ1YjE3MDM1MTBmMzQ5MjkyZDUyZDY1ZDA5NDYyMWZiNWJmNWQ3M2ZmZGNkYjEwMDdmOWVkNjUyOTM1ODZhZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rj5kdvauhyxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kbdxqp
|
/r/LocalLLaMA/comments/1kbdxqp/uigent2_7b_ui_reasoning_model_with_forms_charts/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?width=108&crop=smart&format=pjpg&auto=webp&s=32c10b343ddbfbc2f607fc068493a4f92379d033', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?width=216&crop=smart&format=pjpg&auto=webp&s=a35f839c49f30540d1bd556e7ce12c17aca17b14', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?width=320&crop=smart&format=pjpg&auto=webp&s=6d6bd3ebef7024489fd17a7ca7565c469e9698d0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?width=640&crop=smart&format=pjpg&auto=webp&s=c3eaad23332409ec22d0878ce876fe5e17658aae', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?width=960&crop=smart&format=pjpg&auto=webp&s=eac60f83f667dfaaf270f1f2832c819a4a502c6f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?width=1080&crop=smart&format=pjpg&auto=webp&s=91a50d428824129f908fcc15a33b9feba9f12638', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eWI2dHh1YXVoeXhlMe_UaMURtlZk59cq4QhqIanOb8OkqO54Jklq72w3NG2w.png?format=pjpg&auto=webp&s=79d767767aeeaf58bb1415378dba08d94b5b32d9', 'width': 1920}, 'variants': {}}]}
|
||
Qwen3 32B leading LiveBench / IF / story_generation
| 75 |
[https://livebench.ai/#/?IF=as](https://livebench.ai/#/?IF=as)
| 2025-04-30T11:22:01 |
secopsml
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbdzo2
| false | null |
t3_1kbdzo2
|
/r/LocalLLaMA/comments/1kbdzo2/qwen3_32b_leading_livebench_if_story_generation/
| false | false | 75 |
{'enabled': True, 'images': [{'id': 'uWgE-Wo8PIieaLaUFHO_m--GoDJw-Y0Sbpdo2oHerDY', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/u3wxgjiaiyxe1.png?width=108&crop=smart&auto=webp&s=bb7310d753dc84e2d9dcc671ae8440100b715ec7', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/u3wxgjiaiyxe1.png?width=216&crop=smart&auto=webp&s=489da1b6ef6b03158056495a697b4d9ab3452a3d', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/u3wxgjiaiyxe1.png?width=320&crop=smart&auto=webp&s=fbf5bfb5fd85d0c7b7399e9b03623d0f5b1294b2', 'width': 320}, {'height': 543, 'url': 'https://preview.redd.it/u3wxgjiaiyxe1.png?width=640&crop=smart&auto=webp&s=db301bb03849c31d1716dcac3b89a7a8e6e05f4a', 'width': 640}], 'source': {'height': 791, 'url': 'https://preview.redd.it/u3wxgjiaiyxe1.png?auto=webp&s=417c6d9bcf5a86e255e49e43802778f893342d3b', 'width': 931}, 'variants': {}}]}
|
||
Could anyone explain what's the latest DeepSeek model for?
| 3 | 2025-04-30T11:31:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbe5cp/could_anyone_explain_whats_the_latest_deepseek/
|
Logical-Bag-3012
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbe5cp
| false | null |
t3_1kbe5cp
|
/r/LocalLLaMA/comments/1kbe5cp/could_anyone_explain_whats_the_latest_deepseek/
| false | false | 3 | null |
||
Qwen3 models dominate reasoning tasks, fail at simple facts — full benchmark results (with graph)
| 1 |
[removed]
| 2025-04-30T11:32:11 |
HauntingMoment
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbe5ze
| false | null |
t3_1kbe5ze
|
/r/LocalLLaMA/comments/1kbe5ze/qwen3_models_dominate_reasoning_tasks_fail_at/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 't5ZjGPJo8JyPJr7xKgLD9JR6b1-IO5FwItS1vdNxL3I', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?width=108&crop=smart&auto=webp&s=331344681c6deec6d79c398a840e689d478f7030', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?width=216&crop=smart&auto=webp&s=5cc738d57ad9f02982b021dbc912966fc7b98e3d', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?width=320&crop=smart&auto=webp&s=8ae249c1937abca2f112ff5c44dbd46469792890', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?width=640&crop=smart&auto=webp&s=e72eb130f6aee8a6fd01ec1b69ce47fa17b81e8a', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?width=960&crop=smart&auto=webp&s=ea07151108ee23a712616c9f96fe7d5f17dd0399', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?width=1080&crop=smart&auto=webp&s=f034e5c8562ffc17487fce26ce8ee5cfd46d2e58', 'width': 1080}], 'source': {'height': 668, 'url': 'https://preview.redd.it/nwzu5womjyxe1.png?auto=webp&s=64055e3e2a3005bcde5e6c5c8f6ab50544e9172c', 'width': 1191}, 'variants': {}}]}
|
||
Why no GPU with huge memory?
| 0 |
Why wouldn't AMD/Nvidia make a GPU with huge memory, like 128-256 or even 512 GB?
It seems that 2-3 RTX 4090s with massive memory would provide decent performance for the full-size DeepSeek model (680 GB+).
I can imagine Nvidia is greedy: they wanna sell a server with 16\*A100 instead of only 2 RTX 4090s with massive memory.
But what about AMD? They have 0 market share. Such a move could bomb Nvidia's position.
| 2025-04-30T11:44:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbee3z/why_no_gpu_with_huge_memory/
|
wedazu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbee3z
| false | null |
t3_1kbee3z
|
/r/LocalLLaMA/comments/1kbee3z/why_no_gpu_with_huge_memory/
| false | false |
self
| 0 | null |
What Fast AI Voice System Is Used?
| 5 |
In Sesame's blog post here: [https://www.sesame.com/research/crossing\_the\_uncanny\_valley\_of\_voice](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice) \- You can have a live conversation with the model in real time, like a phone call.
I know that it seems to use Llama as the brain and their own voice model for speech, but how do they make it work in real time?
| 2025-04-30T11:45:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbeez6/what_fast_ai_voice_system_is_used/
|
StrangerQuestionsOhA
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbeez6
| false | null |
t3_1kbeez6
|
/r/LocalLLaMA/comments/1kbeez6/what_fast_ai_voice_system_is_used/
| false | false |
self
| 5 |
{'enabled': False, 'images': [{'id': 'ALvO0UwZODj7Mx_z9pqYh4rE5zXNOaoNgZoy7Ex9bPM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=108&crop=smart&auto=webp&s=37506e3ac24db95dc5545b67defdf1a8d2d00c04', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=216&crop=smart&auto=webp&s=df6ebf82293a9f4e65f7d164088b16844960fd36', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=320&crop=smart&auto=webp&s=a568449f2fc06377c18158cae96b21b30ea54c6b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=640&crop=smart&auto=webp&s=1c2b382f99e013187fac6c4280a099933e4b0d47', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=960&crop=smart&auto=webp&s=b8fe6bc282b17a7831decfab7e61978156af4fc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=1080&crop=smart&auto=webp&s=092668d4239bd2181ed1011846370fbbbfb2cb20', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?auto=webp&s=6947dcbd44381523b0c1b480eac830d1e29bddbc', 'width': 1200}, 'variants': {}}]}
|
Qwen3 Finetuning Tuning Notebook
| 10 |
Qwen3 should be a great model for fine-tuning, so in this notebook I finetune it on a code dataset with TRL, LoRA, PEFT, etc.
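LoRA is what makes fine-tuning a model this size feasible on modest hardware: the base weights stay frozen and only small low-rank adapters train. A quick back-of-envelope sketch (the 4096x4096 projection shape and rank 16 are illustrative assumptions, not Qwen3's exact config) of the trainable-parameter savings:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA approximates a weight update to a frozen d_in x d_out matrix
    # with the product B @ A, where A is (rank x d_in) and B is (d_out x rank),
    # so only rank * (d_in + d_out) parameters are trained.
    return rank * d_in + d_out * rank

full = 4096 * 4096                                  # one full projection matrix
lora = lora_trainable_params(4096, 4096, rank=16)   # its LoRA adapter
print(lora, f"{lora / full:.4%}")                   # well under 1% of the full matrix
```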
| 2025-04-30T11:55:33 |
https://colab.research.google.com/drive/12v2u2IRcmda2fJTNZk88OLF246u_bfSH?usp=sharing
|
Zealousideal-Cut590
|
colab.research.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbela0
| false | null |
t3_1kbela0
|
/r/LocalLLaMA/comments/1kbela0/qwen3_finetuning_tuning_notebook/
| false | false | 10 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
|
|
I'm planning to build a pc to run LLMs and stable diffusion models locally
| 1 |
[removed]
| 2025-04-30T11:59:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbensm/im_planning_to_build_a_pc_to_run_llms_and_stable/
|
Critical-Ad-7210
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbensm
| false | null |
t3_1kbensm
|
/r/LocalLLaMA/comments/1kbensm/im_planning_to_build_a_pc_to_run_llms_and_stable/
| false | false |
self
| 1 | null |
7B UI Model that does charts and interactive elements
| 247 |
[https://huggingface.co/Tesslate/UIGEN-T2-7B-Q8\_0-GGUF](https://huggingface.co/Tesslate/UIGEN-T2-7B-Q8_0-GGUF)
| 2025-04-30T12:00:51 |
United-Rush4073
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbeoqw
| false | null |
t3_1kbeoqw
|
/r/LocalLLaMA/comments/1kbeoqw/7b_ui_model_that_does_charts_and_interactive/
| false | false | 247 |
{'enabled': True, 'images': [{'id': 'JvQQAsOhl8I2yrAsI7rDxjweozqjwN08Z1sPo85g-XI', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?width=108&crop=smart&auto=webp&s=ff305de94a44e917e9f69088cfd7a0bac057a120', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?width=216&crop=smart&auto=webp&s=07ac2532edabf03de8b84afc225515e2428c9a7c', 'width': 216}, {'height': 121, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?width=320&crop=smart&auto=webp&s=9c1545aa1e1c1247d3ff9484a474747a8eb3bc4b', 'width': 320}, {'height': 242, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?width=640&crop=smart&auto=webp&s=8f2457b3e00c6d0f8ce792b7638ea74e15edea7f', 'width': 640}, {'height': 363, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?width=960&crop=smart&auto=webp&s=cef43cd005312ca9b814235a2d922960b7f4ed62', 'width': 960}, {'height': 409, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?width=1080&crop=smart&auto=webp&s=7c1345255dfdf73f93c622557a08845d77cfe286', 'width': 1080}], 'source': {'height': 801, 'url': 'https://preview.redd.it/p1jwcst8pyxe1.png?auto=webp&s=1de6cf4c792538f42e49c3aa0043437e1573763d', 'width': 2115}, 'variants': {}}]}
|
||
What do you think about Qwen3 /think /no_think in the prompt?
| 7 |
I tried them and they work so well, I also tried similar things like
`no_think`
`<no_think>`
`/no think`
`/no-think`
However, when I explicitly ask the model "Don't think", the model thinks about not thinking.
How do you think this is implemented? Is it something in the training phase? I want to know how this works.
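From what I can tell (treat the details as an assumption), it's a learned soft switch: the tag is just text in the user turn, and post-training teaches the model to emit an empty `<think></think>` block when it sees `/no_think`, so only that exact token sequence works — a natural-language "don't think" was never trained as a switch. A minimal client-side sketch:

```python
import re

# The model still emits a (possibly empty) <think>...</think> block in
# no-think mode, so clients typically strip it before display.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def apply_soft_switch(user_message: str, thinking: bool) -> str:
    # The switch is plain text appended to the user turn; the chat
    # template / training data makes the model honor it.
    return user_message + ("" if thinking else " /no_think")

def strip_thinking(response: str) -> str:
    return THINK_BLOCK.sub("", response)

print(apply_soft_switch("What is 2+2?", thinking=False))
print(strip_thinking("<think>\n\n</think>\n\nThe answer is 4."))
```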
| 2025-04-30T12:13:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbexnh/what_do_you_think_about_qwen3_think_no_think_in/
|
0ssamaak0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbexnh
| false | null |
t3_1kbexnh
|
/r/LocalLLaMA/comments/1kbexnh/what_do_you_think_about_qwen3_think_no_think_in/
| false | false |
self
| 7 | null |
Qwen3-30B-A3B solves the o1-preview Cipher problem!
| 52 |
Qwen3-30B-A3B (4\_0 quant) solves the Cipher problem first showcased in the [OpenAI o1-preview Technical Paper](https://openai.com/index/learning-to-reason-with-llms/). Only 2 months ago [QwQ solved it in 32 minutes](https://www.reddit.com/r/LocalLLaMA/comments/1j4s0o4/qwq32b_solves_the_o1preview_cipher_problem/), while now Qwen3 solves it in 5 minutes! Obviously the MoE greatly improves performance, but it is interesting to note Qwen3 uses 20% fewer tokens. I'm impressed that I can run an o1-class model on a MacBook.
Here's the full output from llama.cpp;
[https://gist.github.com/sunpazed/f5220310f120e3fc7ea8c1fb978ee7a4](https://gist.github.com/sunpazed/f5220310f120e3fc7ea8c1fb978ee7a4)
| 2025-04-30T12:19:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbf1m1/qwen330ba3b_solves_the_o1preview_cipher_problem/
|
sunpazed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbf1m1
| false | null |
t3_1kbf1m1
|
/r/LocalLLaMA/comments/1kbf1m1/qwen330ba3b_solves_the_o1preview_cipher_problem/
| false | false |
self
| 52 | null |
Try DeepSeek Prover V2 directly on the model page powered by Novita AI - live on Hugging Face
| 6 |
[https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B?inference\_provider=novita](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B?inference_provider=novita)
| 2025-04-30T12:25:49 |
https://v.redd.it/ul76nmcntyxe1
|
Nunki08
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbf60y
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ul76nmcntyxe1/DASHPlaylist.mpd?a=1748607967%2CMjYwNDBlNjU4MGRjNGIyMjAxMmZiMGFhMTZlNzcyM2IzYmZlMTc0ZjJlZjJiYjgwNTU4YzYwYzcyN2VkYzNiZA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/ul76nmcntyxe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ul76nmcntyxe1/HLSPlaylist.m3u8?a=1748607967%2CYTRjOTExYWI4MTg1ZTEwNDZkYmRkZGE3YmI0MThiY2YxNzIzOWUxOGNiNmY5ZjIyMGJhZmI4YTY0MmY5MGY5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ul76nmcntyxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1598}}
|
t3_1kbf60y
|
/r/LocalLLaMA/comments/1kbf60y/try_deepseek_prover_v2_directly_on_the_model_page/
| false | false | 6 |
{'enabled': False, 'images': [{'id': 'djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b95d2a2569206cf953c072ac8c34c64afe113c8', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?width=216&crop=smart&format=pjpg&auto=webp&s=a3452c7ccb072ba5502f4b75213d1c0197b560f8', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?width=320&crop=smart&format=pjpg&auto=webp&s=f5595d4407d5cdea5d37fc98de1eeef6cd2a62ef', 'width': 320}, {'height': 432, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?width=640&crop=smart&format=pjpg&auto=webp&s=5fc6ee5d186873731da9de3e29e1ccd2150ab9a5', 'width': 640}, {'height': 648, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?width=960&crop=smart&format=pjpg&auto=webp&s=b8706761c10f7883fe142cadb2105dc84af117f0', 'width': 960}, {'height': 729, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=765e06f6f9d3590ea60d08e1816a89b58860f5be', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/djdkb3BmZW50eXhlMVvEHpa6tJQIyg3tm9IV2JtjwH4tNMFTiOtpwC4XxyRX.png?format=pjpg&auto=webp&s=21688f1d5befefe14dee0f8232c4a17d522fc594', 'width': 1598}, 'variants': {}}]}
|
|
We can now test prover v2 model in hugging face by inference providers
| 8 | 2025-04-30T12:31:23 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbfa4b
| false | null |
t3_1kbfa4b
|
/r/LocalLLaMA/comments/1kbfa4b/we_can_now_test_prover_v2_model_in_hugging_face/
| false | false | 8 |
{'enabled': True, 'images': [{'id': 'k4n6HQ1TgiNC78gKeYH9z2aBlB4o-ZxEIGwAchzS-gI', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?width=108&crop=smart&auto=webp&s=c0e92bf174cc7a9208d25b6625858efb20a9f0b4', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?width=216&crop=smart&auto=webp&s=da9c273524f5d815b02b910f81d68571ede1e7a1', 'width': 216}, {'height': 329, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?width=320&crop=smart&auto=webp&s=56a21cdef413a4244d5426459d681254454e0658', 'width': 320}, {'height': 658, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?width=640&crop=smart&auto=webp&s=602ea2dc0a3ad66b0c36c0c57609b2af3aff7411', 'width': 640}, {'height': 988, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?width=960&crop=smart&auto=webp&s=7f97160ac3c8c0efae67efafd9895337bb1147ee', 'width': 960}, {'height': 1112, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?width=1080&crop=smart&auto=webp&s=5d5a4c023406ae36d194fb034907c5ff4371be4a', 'width': 1080}], 'source': {'height': 1112, 'url': 'https://preview.redd.it/32bcm17uuyxe1.jpeg?auto=webp&s=85051382c08338bddf6f620dffd31339ebb77ecd', 'width': 1080}, 'variants': {}}]}
|
|||
NVIDIA and AMD are capitalist duo
| 1 |
[removed]
| 2025-04-30T12:33:06 |
Original_Slice9243
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbfbdx
| false | null |
t3_1kbfbdx
|
/r/LocalLLaMA/comments/1kbfbdx/nvidia_and_amd_are_capitalist_duo/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'fThpj9ZtxSMp_dJXyCgfSk435GZ2e93fiv93jwHKzQc', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/mzaw0xuhuyxe1.png?width=108&crop=smart&auto=webp&s=9a99ef915fa2de747e69d8b9d33bb0cf49f10a51', 'width': 108}, {'height': 60, 'url': 'https://preview.redd.it/mzaw0xuhuyxe1.png?width=216&crop=smart&auto=webp&s=6ed33fe172e12abb71063805797c9748f8d74103', 'width': 216}, {'height': 89, 'url': 'https://preview.redd.it/mzaw0xuhuyxe1.png?width=320&crop=smart&auto=webp&s=d03d962c60e15c8ee88323bf3e0ffae0a578ea34', 'width': 320}, {'height': 178, 'url': 'https://preview.redd.it/mzaw0xuhuyxe1.png?width=640&crop=smart&auto=webp&s=b2be5e5a75858916d25fad2799caa7343874478c', 'width': 640}], 'source': {'height': 244, 'url': 'https://preview.redd.it/mzaw0xuhuyxe1.png?auto=webp&s=15612d398b09714ea3905e2c94357c930dc67f8a', 'width': 876}, 'variants': {}}]}
|
||
Determining Overall Speed with VLLM?
| 1 |
I'm trying to benchmark the speed of 2x RTX 4090 on RunPod with vLLM.
I feed one prompt at a time via the OpenAI API and wait for a complete response before submitting the next request. However, I get multiple speed readings for a long prompt. I guess it's splitting it into multiple batches? Is there a way to configure it so that it also reports the overall speed for the entire request?
I'm running vLLM like this:
vllm serve Qwen/Qwen3-30B-A3B-FP8 --max-model-len 34100 --tensor-parallel-size 2 --max-log-len 200 --disable-uvicorn-access-log --no-enable-prefix-caching > log.txt
I disabled prefix-caching to make sure every request gets processed fresh.
Here's the log for one request:
INFO 04-30 12:14:21 [logger.py:39] Received request chatcmpl-eb86ff143abf4dbb91c69374aacea6a2: prompt: '<|im_start|>system\nYou are a helpful assistant. /no_think<|im_end|>\n<|im_start|>user\nProvide a summary as well as a detail analysis of the following:\nPortugal (Portuguese pronunciation: [puɾtuˈɣal] ),', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=2000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 04-30 12:14:21 [async_llm.py:252] Added request chatcmpl-eb86ff143abf4dbb91c69374aacea6a2.
INFO 04-30 12:14:26 [loggers.py:111] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 41.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 14.0%, Prefix cache hit rate: 0.0%
INFO 04-30 12:14:36 [loggers.py:111] Engine 000: Avg prompt throughput: 3206.6 tokens/s, Avg generation throughput: 19.8 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 31.6%, Prefix cache hit rate: 0.0%
INFO 04-30 12:14:46 [loggers.py:111] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 77.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 32.3%, Prefix cache hit rate: 0.0%
INFO 04-30 12:14:56 [loggers.py:111] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 47.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 04-30 12:15:06 [loggers.py:111] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
Thanks so much!
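One workaround (a client-side sketch, not a built-in vLLM option): since each stats line covers roughly a 10-second window, you can aggregate the interval readings into one overall figure — or, more directly, time the request yourself and divide the API-reported `completion_tokens` by wall time. The aggregation approach:

```python
import re

# Matches vLLM's periodic "loggers.py" stats lines.
LINE = re.compile(
    r"Avg prompt throughput: ([\d.]+) tokens/s, "
    r"Avg generation throughput: ([\d.]+) tokens/s"
)

def overall_generation_tps(log_lines, interval_s=10.0):
    """Weight each interval's generation throughput by its duration and
    average over the intervals where tokens were actually produced."""
    total_tokens = 0.0
    active_time = 0.0
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        gen_tps = float(m.group(2))
        if gen_tps > 0.0:
            total_tokens += gen_tps * interval_s
            active_time += interval_s
    return total_tokens / active_time if active_time else 0.0

# The four non-idle readings from the log above:
log = [
    "Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 41.1 tokens/s, ...",
    "Avg prompt throughput: 3206.6 tokens/s, Avg generation throughput: 19.8 tokens/s, ...",
    "Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 77.6 tokens/s, ...",
    "Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 47.6 tokens/s, ...",
]
print(overall_generation_tps(log))
```

Note the dip to 19.8 tok/s coincides with prompt processing (3206.6 tok/s prompt throughput in the same window), which is why per-interval numbers alone are misleading for long prompts.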
| 2025-04-30T12:37:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbfeg1/determining_overall_speed_with_vllm/
|
chibop1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbfeg1
| false | null |
t3_1kbfeg1
|
/r/LocalLLaMA/comments/1kbfeg1/determining_overall_speed_with_vllm/
| false | false |
self
| 1 | null |
Jetbrains opensourced their Mellum model
| 168 |
It's now on Hugging Face: [https://huggingface.co/JetBrains/Mellum-4b-base](https://huggingface.co/JetBrains/Mellum-4b-base)
Their announcement: [https://blog.jetbrains.com/ai/2025/04/mellum-goes-open-source-a-purpose-built-llm-for-developers-now-on-hugging-face/](https://blog.jetbrains.com/ai/2025/04/mellum-goes-open-source-a-purpose-built-llm-for-developers-now-on-hugging-face/)
| 2025-04-30T12:42:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbfhxx/jetbrains_opensourced_their_mellum_model/
|
stark-light
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbfhxx
| false | null |
t3_1kbfhxx
|
/r/LocalLLaMA/comments/1kbfhxx/jetbrains_opensourced_their_mellum_model/
| false | false |
self
| 168 |
{'enabled': False, 'images': [{'id': 'x0mOfOyCKF94r4h5GNSXtcppBbllTXAeIU4YQmnmXIM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?width=108&crop=smart&auto=webp&s=dc0ebddba5cce1e29ee58afb8af899f443834830', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?width=216&crop=smart&auto=webp&s=f30693381480e5c5c0c0e30cbd9a2c868ceb1547', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?width=320&crop=smart&auto=webp&s=88d441b71e53cf819b41370049c9d6610babbb50', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?width=640&crop=smart&auto=webp&s=133e8f27ddb975034322798f4d421ec80bf9a372', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?width=960&crop=smart&auto=webp&s=01ea17c87137da7cfb583453c71628f977b79440', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?width=1080&crop=smart&auto=webp&s=052ecc2ce05d0cd9f6afe74bbf6ab9858c6423fb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/McEw9TAjZ9TD_O-m16P3Oi3cB_eAQzFQPDgvCzeX0NE.jpg?auto=webp&s=1d2dc51cfe2295f97914990cd0b5d353464c286f', 'width': 1200}, 'variants': {}}]}
|
Granite 4 Pull requests submitted to vllm and transformers
| 53 | 2025-04-30T12:45:11 |
https://github.com/vllm-project/vllm/pull/17461
|
a_slay_nub
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbfkad
| false | null |
t3_1kbfkad
|
/r/LocalLLaMA/comments/1kbfkad/granite_4_pull_requests_submitted_to_vllm_and/
| false | false | 53 |
{'enabled': False, 'images': [{'id': 'dYmC6nIfXf-qR6KL3PntIubdWt-ZP-z90sCI3i4DGbo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?width=108&crop=smart&auto=webp&s=4f7a5faa3cc71f596a9e0dd104798f56a6ee225a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?width=216&crop=smart&auto=webp&s=6ad3584f3dcc3a63ddea8f8a06ffafd523d2a68c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?width=320&crop=smart&auto=webp&s=881c92d5f41257ffe273283b074d39949a2bd1a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?width=640&crop=smart&auto=webp&s=7c22013b44e73f2300cd86dcf4a2c506feb28337', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?width=960&crop=smart&auto=webp&s=182497dfdcf4aa80715ce78ef4ee2990407a9d6d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?width=1080&crop=smart&auto=webp&s=689f3dec360114e223434564c4a56a1da821900f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lmY6SZWhi1CjBygy9eRmfiGrXYfX6mezsuQz9ssU3Sk.jpg?auto=webp&s=70386ecda7552eb48705cd85b0371f466b7cc631', 'width': 1200}, 'variants': {}}]}
|
||
Waiting for Qwen3 32b coder :) Speculative decoding disappointing
| 27 |
I find that Qwen3 32b (non-coder, obviously) does not get the \~2.5x speedup when launched with a draft model for speculative decoding (llama.cpp).
I tested with the exact same series of coding questions, which run very fast on my current Qwen2.5 32b coder setup. Replacing the draft model `Qwen3-0.6B-Q4_0` with `Qwen3-0.6B-Q8_0` makes no difference. Same for `Qwen3-1.7B-Q4_0`.
I also find that llama.cpp needs \~3.5GB for my 0.6b draft's KV buffer, while that was only \~384MB with my Qwen 2.5 coder configuration (0.5b draft). This forces me to scale back context considerably with Qwen3 32b. Anyhow, no sense running speculative decoding at the moment.
Conclusion: waiting for Qwen3 32b coder :)
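The speedup hinges almost entirely on the draft's acceptance rate, which is why a general-purpose 0.6b draft buys little on code. A rough back-of-envelope using the standard speculative-decoding approximation (acceptance rates here are made-up illustrations, not measurements):

```python
def expected_tokens_per_target_pass(acceptance_rate: float, draft_len: int) -> float:
    """Expected tokens committed per target-model forward pass when the
    draft proposes `draft_len` tokens, each accepted independently with
    probability `acceptance_rate`: sum of a^i for i = 0..draft_len."""
    a = acceptance_rate
    if a == 1.0:
        return draft_len + 1.0
    return (1.0 - a ** (draft_len + 1)) / (1.0 - a)

# A well-matched coder draft vs. a poorly matched generic draft:
for a in (0.8, 0.4):
    print(a, round(expected_tokens_per_target_pass(a, draft_len=5), 2))
```

At 80% acceptance you commit \~3.7 tokens per expensive pass; at 40% barely \~1.6, which the draft's own overhead (and that bloated KV buffer) can easily eat.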
| 2025-04-30T13:09:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbg2rl/waiting_for_qwen3_32b_coder_speculative_decoding/
|
AdamDhahabi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbg2rl
| false | null |
t3_1kbg2rl
|
/r/LocalLLaMA/comments/1kbg2rl/waiting_for_qwen3_32b_coder_speculative_decoding/
| false | false |
self
| 27 | null |
What's the difference between Memory and context in Llamaindex? No clear doc explanation
| 1 |
[removed]
| 2025-04-30T13:16:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbg8sa/whats_the_difference_between_memory_and_context/
|
ProfessionalDress259
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbg8sa
| false | null |
t3_1kbg8sa
|
/r/LocalLLaMA/comments/1kbg8sa/whats_the_difference_between_memory_and_context/
| false | false |
self
| 1 | null |
What's the difference between Memory and context in Llamaindex? No clear doc explanation
| 1 |
[removed]
| 2025-04-30T13:18:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbg9u3/whats_the_difference_between_memory_and_context/
|
ProfessionalDress259
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbg9u3
| false | null |
t3_1kbg9u3
|
/r/LocalLLaMA/comments/1kbg9u3/whats_the_difference_between_memory_and_context/
| false | false |
self
| 1 | null |
Mellum Goes Open Source: A Purpose-Built LLM for Developers, Now on Hugging Face
| 37 | 2025-04-30T13:29:29 |
https://blog.jetbrains.com/ai/2025/04/mellum-goes-open-source-a-purpose-built-llm-for-developers-now-on-hugging-face/
|
BarracudaPff
|
blog.jetbrains.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgj16
| false | null |
t3_1kbgj16
|
/r/LocalLLaMA/comments/1kbgj16/mellum_goes_open_source_a_purposebuilt_llm_for/
| false | false | 37 |
{'enabled': False, 'images': [{'id': 'S4MNmAcrZsNNLwduI30d5V0lVTaryZNSw2qfg2sGfxg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?width=108&crop=smart&auto=webp&s=0488d35b8365ef998d2d461a8b8cb8eeead371d6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?width=216&crop=smart&auto=webp&s=d44471a98ce9914e7b5d47b4afe036a5ff8fdcdb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?width=320&crop=smart&auto=webp&s=11f932390f576f93dbaa725a7409086c818a078d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?width=640&crop=smart&auto=webp&s=09644e50897d70591f5620c1e80209e63a1d7658', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?width=960&crop=smart&auto=webp&s=5c333bfcf238dc0e29d6a0d48683ed603cf7b75a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?width=1080&crop=smart&auto=webp&s=5d931a063109d65c47939d51168666b8d20cf9ea', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cO_ycPzjF9y-xz9dkp2pJWilKCTUD8M8MWZVSX3SYlQ.jpg?auto=webp&s=5715115976ab7f3ddc7136c62d04ec75005a3e85', 'width': 1280}, 'variants': {}}]}
|
||
RAG or Fine-tuning for code review?
| 2 |
I’m currently using a 16GB MacBook Pro and have compiled a list of good and bad code review examples. While it’s possible to rely on prompt engineering to get an LLM to review my git diff, I understand that this is a fairly naive approach.
To generate high-quality, context-aware review comments, would it be more effective to use RAG or go down the fine-tuning path?
Appreciate any insights or experiences shared!
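For the RAG route on a 16GB machine, the core loop is small: embed the stored good/bad review examples, retrieve the few most similar to the incoming diff, and prepend them to the prompt. A toy sketch with bag-of-words cosine similarity standing in for a real embedding model (the example strings are made up):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(diff: str, examples: list[str], k: int = 2) -> list[str]:
    # Score every stored review example against the diff and keep the top k.
    q = Counter(diff.lower().split())
    scored = sorted(examples,
                    key=lambda e: cosine(q, Counter(e.lower().split())),
                    reverse=True)
    return scored[:k]

examples = [
    "Bad: swallowing exceptions with a bare except",
    "Good: guard clauses instead of deep nesting",
    "Bad: SQL built by string concatenation",
]
print(retrieve("try: ... except: pass  # bare except swallows errors", examples, k=1))
```

Swap the word counts for sentence embeddings (and a vector store once the corpus grows) and this becomes the usual RAG pipeline; fine-tuning only starts to pay off once you have hundreds of consistent examples.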
| 2025-04-30T13:33:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbgma1/rag_or_finetuning_for_code_review/
|
OysterD3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgma1
| false | null |
t3_1kbgma1
|
/r/LocalLLaMA/comments/1kbgma1/rag_or_finetuning_for_code_review/
| false | false |
self
| 2 | null |
OpenAI wants its 'open' AI model to call models in the cloud for help | TechCrunch
| 0 |
I don't think anyone has posted this here yet. I could be wrong, but I believe the implication of the model handoff is that you won't even be able to use their definitely-for-sure-going-to-happen-soon-trust-us-bro "open-source" model without an OpenAI API key.
| 2025-04-30T13:33:35 |
https://techcrunch.com/2025/04/24/openai-wants-its-open-ai-model-to-call-models-in-the-cloud-for-help/
|
pkseeg
|
techcrunch.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgmjx
| false | null |
t3_1kbgmjx
|
/r/LocalLLaMA/comments/1kbgmjx/openai_wants_its_open_ai_model_to_call_models_in/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'J7RdfNTskfbFDBIi9p8391hxyd94IW-51lp7KKa0f2c', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?width=108&crop=smart&auto=webp&s=8b589f76e3c238df3e9dbc04e978d78207916f6a', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?width=216&crop=smart&auto=webp&s=ff8ac671c87dc897c79ccad577fd69e0afb1a1cc', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?width=320&crop=smart&auto=webp&s=9e970eb054b603259e91093d32a6b63f4a798912', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?width=640&crop=smart&auto=webp&s=c15d5318800afa4aa83606f0be4c01010cac2c6a', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?width=960&crop=smart&auto=webp&s=ea56e39796a51cc62c58d1669062d58fabb0ad59', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?width=1080&crop=smart&auto=webp&s=7d74eca297667afb676c8b620a0ac3485a84ae9f', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/FxQZTR7VoCPI3rCl0zWUCSCLZfKvHkGCsFO4Klxu4ww.jpg?auto=webp&s=b6c12d99801ec6e5c6b16ca8f5bc0190169fce4d', 'width': 1200}, 'variants': {}}]}
|
|
What's the difference between Memory and context in Llamaindex? No clear doc explanation
| 1 |
[removed]
| 2025-04-30T13:36:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbgpaz/whats_the_difference_between_memory_and_context/
|
ProfessionalDress259
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgpaz
| false | null |
t3_1kbgpaz
|
/r/LocalLLaMA/comments/1kbgpaz/whats_the_difference_between_memory_and_context/
| false | false |
self
| 1 | null |
MNN Chat App now support run Qwen3 locally on devices with enable/disable thinking mode and dark mode
| 14 |
release note: [https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-040](https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-040)
apk download: [https://meta.alicdn.com/data/mnn/mnn\_chat\_d\_0\_4\_0.apk](https://meta.alicdn.com/data/mnn/mnn_chat_d_0_4_0.apk)
https://i.redd.it/ki9vnyme5zxe1.gif
https://preview.redd.it/n5f9w3717zxe1.jpg?width=581&format=pjpg&auto=webp&s=5b57112d3f0ecd70c6d6ca3886047829943739d8
| 2025-04-30T13:40:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbgsie/mnn_chat_app_now_support_run_qwen3_locally_on/
|
Juude89
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgsie
| false | null |
t3_1kbgsie
|
/r/LocalLLaMA/comments/1kbgsie/mnn_chat_app_now_support_run_qwen3_locally_on/
| false | false | 14 | null |
|
Qwen/Qwen2.5-Omni-3B · Hugging Face
| 129 | 2025-04-30T13:43:06 |
https://huggingface.co/Qwen/Qwen2.5-Omni-3B
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgug8
| false | null |
t3_1kbgug8
|
/r/LocalLLaMA/comments/1kbgug8/qwenqwen25omni3b_hugging_face/
| false | false | 129 |
{'enabled': False, 'images': [{'id': 'cs18oGXDVAThIb-T0LRZsl5rbNGcNc5_L8CTnu18M7s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=108&crop=smart&auto=webp&s=20de1cfb66b22fc2ae7600eb0150afc1f57bd7ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=216&crop=smart&auto=webp&s=edb8cde9210f434cbd429bc1361929ac303b523e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=320&crop=smart&auto=webp&s=456127215ab110b8f8095c560f64dfad1712d0f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=640&crop=smart&auto=webp&s=b355cd4af64e469bec4d086fb697e376fa807fe9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=960&crop=smart&auto=webp&s=302375dd30b2f7e5f8fa25ba8947969bcbba8239', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=1080&crop=smart&auto=webp&s=1b1be284001cf85511a80e1417513b72ad72e368', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?auto=webp&s=e6e1dc29f4c97dbb4a96d6a01c94dc1a7bdbc590', 'width': 1200}, 'variants': {}}]}
|
||
https://www.nature.com/articles/s41467-025-58848-6
| 0 |
Efficient coding for humans to create principles of generalization; seems to work when applied to RL as well.
Thoughts?
| 2025-04-30T13:46:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbgxim/httpswwwnaturecomarticless41467025588486/
|
False_Grit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgxim
| false | null |
t3_1kbgxim
|
/r/LocalLLaMA/comments/1kbgxim/httpswwwnaturecomarticless41467025588486/
| false | false |
self
| 0 | null |
I built an open-source AI chat app – what are your thoughts on it?
| 1 |
[removed]
| 2025-04-30T13:46:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbgxjh/i_built_an_opensource_ai_chat_app_what_are_your/
|
One_Study_4420
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbgxjh
| false | null |
t3_1kbgxjh
|
/r/LocalLLaMA/comments/1kbgxjh/i_built_an_opensource_ai_chat_app_what_are_your/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IqCG25xqMFrVz-M58QMgYRCdBJ19k83w3bO86brJlp0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?width=108&crop=smart&auto=webp&s=2ac533d4f67cba4634703e3bb033c0a1b87ff585', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?width=216&crop=smart&auto=webp&s=d5d0d1069b63f5d1a4b34f6c5a8baffedee4d97d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?width=320&crop=smart&auto=webp&s=a392959c528f009792e2e076e5b74fca3738c939', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?width=640&crop=smart&auto=webp&s=7976366e36fbc2fb0a2bb6272d640073afa38a3d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?width=960&crop=smart&auto=webp&s=ae9c68d60bef40d6b5591904940cf9e87adf5bff', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?width=1080&crop=smart&auto=webp&s=a2ef63d1aba07497b1a85abcf3d4e5ba9797159a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OopLGsSIPh309t3U2RHncjBBaWWP_SP8_cP4xXYaxiU.jpg?auto=webp&s=0eb06c7a711cce8981fd8b3a6e092261a3dca4bd', 'width': 1200}, 'variants': {}}]}
|
Help moving away from chatgpt+gemini
| 4 |
Hi,
I'm starting to move away from ChatGPT + Gemini and would like to run local models only. I need some help setting this up in terms of software. For serving, is SGLang better or vLLM? I have Ollama too. Never used LM Studio.
I like the ChatGPT app and its chat interface allowing me to group projects in a single folder. For Gemini, I basically like Deep Research. I'd like to move to local models only now, primarily to save costs and also because of recent news and constant changes.
Are there any good chat interfaces that compare to ChatGPT? How do you use these models as coding assistants, as I primarily still use the ChatGPT extension in VS Code or autocomplete in the code itself? For example, I find Continue on VS Code still a bit buggy.
Is anyone serving their local models for personal app use when going mobile?
| 2025-04-30T13:56:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbh5r7/help_moving_away_from_chatgptgemini/
|
Studyr3ddit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbh5r7
| false | null |
t3_1kbh5r7
|
/r/LocalLLaMA/comments/1kbh5r7/help_moving_away_from_chatgptgemini/
| false | false |
self
| 4 | null |
Qwen3-32B 8-bit (MLX) vs GLM-4-32B Q8 (0414) comparison - Coding of weather cards
| 1 |
[removed]
| 2025-04-30T13:57:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbh6qr/qwen332b_8bit_mlx_vs_glm432b_q8_0414_comparison/
|
Gregory-Wolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbh6qr
| false | null |
t3_1kbh6qr
|
/r/LocalLLaMA/comments/1kbh6qr/qwen332b_8bit_mlx_vs_glm432b_q8_0414_comparison/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'gAoNoPyD_UVgFeu80LoSA_vdnrzd0fA0ZbzOkySB3_M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=108&crop=smart&auto=webp&s=91c9077630ce9fa358a40cc9a7ef5ab0e2be695b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=216&crop=smart&auto=webp&s=f009ed8dbddd4a2fdbceaaa50f6ce17efdf27a8a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=320&crop=smart&auto=webp&s=d6f864826db5217201d6b78b07e6beb037508d1b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=640&crop=smart&auto=webp&s=1370377ab7f9f1eac03a33523352b10ef14482e6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=960&crop=smart&auto=webp&s=83b5130b46d3a7925ab5c4d967aff0de38b2fe1c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?width=1080&crop=smart&auto=webp&s=e9e36a398959e8bb4b2207ca7c968c8a6ff96a92', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JVU3CWwV4ZHc9r855EE5DIDxSnzG5dAgs7ph4JNeenI.jpg?auto=webp&s=38efccb40b836737610b159fd2d4b468db9d9bcf', 'width': 1200}, 'variants': {}}]}
|
|
I built an open-source AI chat app – thoughts on blazing-fast responses and custom AI models?
| 1 |
[removed]
| 2025-04-30T14:03:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbhbvo/i_built_an_opensource_ai_chat_app_thoughts_on/
|
dakxoero_4u
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbhbvo
| false | null |
t3_1kbhbvo
|
/r/LocalLLaMA/comments/1kbhbvo/i_built_an_opensource_ai_chat_app_thoughts_on/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '40g1OG9l3YTyobQSNF2IuPDL-259K-W4ebcLOsNbDrk', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?width=108&crop=smart&auto=webp&s=7dcb3b28391d572adab6f6229b8d01cc224ae73f', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?width=216&crop=smart&auto=webp&s=503fcac1e49f997778c26ee1953e4a1e81f8df3c', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?width=320&crop=smart&auto=webp&s=d334518724952fe785aca426ca6c79da5d83e299', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?width=640&crop=smart&auto=webp&s=5a230ff0b352355b5958a57493b462fb101460cb', 'width': 640}, {'height': 478, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?width=960&crop=smart&auto=webp&s=102c8aa66a439215611eb5714bea717296e0090b', 'width': 960}, {'height': 538, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?width=1080&crop=smart&auto=webp&s=57a20bf10f5dc85d37896aba36edbcc873c6655b', 'width': 1080}], 'source': {'height': 901, 'url': 'https://external-preview.redd.it/PhgxXrCbogQP29TC2uh_aXy6rfV22QPFZ9FfNJDQk0g.jpg?auto=webp&s=57e83d12f0225ffec361852678f9c74ea945ba4d', 'width': 1806}, 'variants': {}}]}
|
Qwen 3 times out or can't complete tiny task on laptop?
| 2 |
Hi,
I've installed n8n with Ollama and pulled:
* qwen3:4b
* qwen3:8b
* llama3.2
When I ask any of those models:
"Hello"
It replies without any issues after a few seconds.
If I ask a question like:
"How can an AI help with day to day business tasks?" (I ask this in English and German)
llama is responding within some time and the results are ok.
Both Qwen models will swallow close to 90% CPU for minutes, and then I interrupt the Docker container / kill Ollama.
What other model can I use on an AMD laptop with 32GB RAM, Ryzen 7 (16 × AMD Ryzen 7 PRO 6850U with Radeon Graphics) and no dedicated graphics, which might even give some better answers than Llama?
| 2025-04-30T14:06:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbhdr7/qwen_3_times_out_or_cant_complete_tiny_task_on/
|
Chris8080
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbhdr7
| false | null |
t3_1kbhdr7
|
/r/LocalLLaMA/comments/1kbhdr7/qwen_3_times_out_or_cant_complete_tiny_task_on/
| false | false |
self
| 2 | null |
NEW deepseek just dropped [ deepseek-ai/DeepSeek-Prover-V1 ]
| 1 |
[removed]
| 2025-04-30T14:24:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbhsiu/new_deepseek_just_droped/
|
Rare-Programmer-1747
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbhsiu
| false | null |
t3_1kbhsiu
|
/r/LocalLLaMA/comments/1kbhsiu/new_deepseek_just_droped/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '48gy_SWLx1b949upbqxz3dSEcrOFIujRU-9nYXmeE9k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=108&crop=smart&auto=webp&s=5ddeaeaf76eadba3a89e8dedb0b33c123f685118', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=216&crop=smart&auto=webp&s=5bfd7de77c2f9065ec4cef0e772760cdcacc6bed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=320&crop=smart&auto=webp&s=784e57997cc432b6d18be50add8ad28d78dddb2c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=640&crop=smart&auto=webp&s=7efb2eec4809014715ab9c70b659d68f62fd9227', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=960&crop=smart&auto=webp&s=0f89ecedffbf0a68b5dd3be941a4b65c06ac0e55', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=1080&crop=smart&auto=webp&s=81e2853de0e743d6e68d12f63cbda8a5bd4a9f0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?auto=webp&s=358717937726abeaff4039d93f959ea3f1c4cbd4', 'width': 1200}, 'variants': {}}]}
|
NEW [ Deepseek ] just dropped [ deepseek-ai/DeepSeek-Prover-V1 ]
| 1 |
[removed]
| 2025-04-30T14:25:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbhu47/new_deepseek_just_droped/
|
DistributionOk2434
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbhu47
| false | null |
t3_1kbhu47
|
/r/LocalLLaMA/comments/1kbhu47/new_deepseek_just_droped/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '48gy_SWLx1b949upbqxz3dSEcrOFIujRU-9nYXmeE9k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=108&crop=smart&auto=webp&s=5ddeaeaf76eadba3a89e8dedb0b33c123f685118', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=216&crop=smart&auto=webp&s=5bfd7de77c2f9065ec4cef0e772760cdcacc6bed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=320&crop=smart&auto=webp&s=784e57997cc432b6d18be50add8ad28d78dddb2c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=640&crop=smart&auto=webp&s=7efb2eec4809014715ab9c70b659d68f62fd9227', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=960&crop=smart&auto=webp&s=0f89ecedffbf0a68b5dd3be941a4b65c06ac0e55', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?width=1080&crop=smart&auto=webp&s=81e2853de0e743d6e68d12f63cbda8a5bd4a9f0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VndUC6N6OMtx5vF6Lpg2QSlRL9OcrliJmMga5XqVl-E.jpg?auto=webp&s=358717937726abeaff4039d93f959ea3f1c4cbd4', 'width': 1200}, 'variants': {}}]}
|
Qwen3:4b runs on my 3.5 years old Pixel 6 phone
| 491 |
It is a bit slow, but still I'm surprised that this is even possible.
Imagine being stuck somewhere with no network connectivity; running a model like this gives you a compressed knowledge base that can help you survive whatever crazy situation you might find yourself in.
Managed to run 8b too, but it was even slower to the point of being impractical.
Truly exciting time to be alive!
| 2025-04-30T14:38:27 |
osherz5
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbi47j
| false | null |
t3_1kbi47j
|
/r/LocalLLaMA/comments/1kbi47j/qwen34b_runs_on_my_35_years_old_pixel_6_phone/
| false | false | 491 |
{'enabled': True, 'images': [{'id': 'Ifh5f9poZhO7IupxAtQjD-_ziOaqvRZ68fjuwev6IhY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?width=108&crop=smart&auto=webp&s=884454ed6e59797d270fc64c442779999db64a91', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?width=216&crop=smart&auto=webp&s=41a5e36cd2c3c244458dc557f74ecfbab9632278', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?width=320&crop=smart&auto=webp&s=4a95cd591dc901fa6bb499f7b71367e6407048ea', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?width=640&crop=smart&auto=webp&s=cd015487d2280215d98a5300b8f7e6e99b586939', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?width=960&crop=smart&auto=webp&s=0a4c9268a7c9ca79523dbd245ade68d23b9f172c', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?width=1080&crop=smart&auto=webp&s=d22e2d33b441b064645411e7f70dae3644ca46c4', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/zbzcq79ihzxe1.jpeg?auto=webp&s=4b2a8dc5a674dbd237e8ff36a7ebdf43c6bd59c1', 'width': 1080}, 'variants': {}}]}
|
||
NEW deepseek just dropped [ deepseek-ai/DeepSeek-Prover-V1 ]
| 0 |
A new DeepSeek model has recently been released. You can find information about it on Hugging Face.
glhf . chat is a website that allows you to install and use open-source language models. They may offer free credits to get started.
If you're unsure how to use the website, here's a simple guide:
1. Sign up normally.
2. Navigate to the page for selecting a model.
3. Go to Hugging Face, find the model you want, and copy its identifier (usually the part after the main website address).
4. Return to the website and paste the model identifier into the designated area, which might look like a message input field.
This method should work for many language models available on Hugging Face.
| 2025-04-30T14:39:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbi54q/new_deepseek_just_droped/
|
DistributionOk2434
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbi54q
| false | null |
t3_1kbi54q
|
/r/LocalLLaMA/comments/1kbi54q/new_deepseek_just_droped/
| false | false |
self
| 0 | null |
deepseek-ai/DeepSeek-Prover-V2-7B · Hugging Face
| 30 | 2025-04-30T15:02:12 |
https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbiokq
| false | null |
t3_1kbiokq
|
/r/LocalLLaMA/comments/1kbiokq/deepseekaideepseekproverv27b_hugging_face/
| false | false | 30 |
{'enabled': False, 'images': [{'id': '5f9OQVbD9A8UU0Fi-yD5kn20Ri7eSNZJnFQx_Dozuv0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?width=108&crop=smart&auto=webp&s=fff4a4694ad70c4ca7319420a4464e6dedc46484', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?width=216&crop=smart&auto=webp&s=000abe9ac61781d3ba18dd2d0b0fcda03b3d4abc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?width=320&crop=smart&auto=webp&s=e441b98263ab3e4faa45486980d35fad5bd9a77d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?width=640&crop=smart&auto=webp&s=525cd6be303a752c1086be826c8df28f01e94af1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?width=960&crop=smart&auto=webp&s=fdd1e54a8b40d8e2334595df38c0427e0cb90ccc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?width=1080&crop=smart&auto=webp&s=7f3caba75aca372b51715917b7e88609b8fadf38', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3dM4nULZ8IZaICRt-gDS2MMGpKEYwP5nklHZaBtAHkY.jpg?auto=webp&s=c10d80dfc5370cc58cf41a32f9a17365b8dc2a52', 'width': 1200}, 'variants': {}}]}
|
||
One more thing! Qwen2.5-Omni-3B released!
| 1 |
[removed]
| 2025-04-30T15:15:33 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbj08q
| false | null |
t3_1kbj08q
|
/r/LocalLLaMA/comments/1kbj08q/one_more_thing_qwen25omni3b_released/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'TFc8WSfdx8ycpZZMtL2AbHxam25-AIXlghJEAubff3E', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?width=108&crop=smart&auto=webp&s=bc31c7885feb7e4946f357df8d5bf72256c533f7', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?width=216&crop=smart&auto=webp&s=ac90afc44916415893c5d573a48f887d45bd6cf7', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?width=320&crop=smart&auto=webp&s=0a38edac7b6dd3c45a29466fb5ea5ee1e675a779', 'width': 320}, {'height': 542, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?width=640&crop=smart&auto=webp&s=b2946818e0c5a08d2d04004ec7a5d3e44ef5315e', 'width': 640}, {'height': 813, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?width=960&crop=smart&auto=webp&s=cb5ce54babb92771eb1531202c4abd32a5115871', 'width': 960}, {'height': 915, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?width=1080&crop=smart&auto=webp&s=59e482d373a5b3bcd5152ce2f98e1583a8155f99', 'width': 1080}], 'source': {'height': 3003, 'url': 'https://preview.redd.it/sa3kjjavnzxe1.jpeg?auto=webp&s=cbc144ea8a1b68ee4b954aaad12507ccd8a92eec', 'width': 3544}, 'variants': {}}]}
|
||
Another Qwen model, Qwen2.5-Omni-3B released!
| 46 |
It's an end-to-end multimodal model that can take text, images, audio, and video as input and generate text and audio streams.
| 2025-04-30T15:16:57 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbj1hd
| false | null |
t3_1kbj1hd
|
/r/LocalLLaMA/comments/1kbj1hd/another_qwen_model_qwen25omni3b_released/
| false | false | 46 |
{'enabled': True, 'images': [{'id': 'XdVKBBmG3_Og03-jVeZN6csjU6EWxbF9QdjwGwmuTHQ', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?width=108&crop=smart&auto=webp&s=f5a884cb7336090a19a0c1dc5f19db42bba059dd', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?width=216&crop=smart&auto=webp&s=00bd96a46970fb340e2a76e1abdf68e58152ec51', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?width=320&crop=smart&auto=webp&s=b7d8bc5f467a4993205254cb7f5b92fff90dc982', 'width': 320}, {'height': 542, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?width=640&crop=smart&auto=webp&s=c47932d0e5e8749da5aa3ce239b5d20b20c79fbb', 'width': 640}, {'height': 813, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?width=960&crop=smart&auto=webp&s=5e34d9409aad35d4a3b83049c4b45072a60d4a50', 'width': 960}, {'height': 915, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?width=1080&crop=smart&auto=webp&s=b6890a9435d9f71d0a5dd1855edf78e8d9944c6c', 'width': 1080}], 'source': {'height': 3003, 'url': 'https://preview.redd.it/drua8oq7ozxe1.jpeg?auto=webp&s=ed59f7012196b24184fc637e47a12207bb8f5c6a', 'width': 3544}, 'variants': {}}]}
|
||
Amazed by llamacon
| 0 |
24h later I'm amazed by LlamaCon; it seems like nothing has happened except for some llama-guard/llama-firewall things. Am I right?
Not to say it's worthless, just that... meh
| 2025-04-30T15:21:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbj5ib/amazed_by_llamacon/
|
No_Afternoon_4260
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbj5ib
| false | null |
t3_1kbj5ib
|
/r/LocalLLaMA/comments/1kbj5ib/amazed_by_llamacon/
| false | false |
self
| 0 | null |
A new DeepSeek just released [ deepseek-ai/DeepSeek-Prover-V2-671B ]
| 51 |
A new DeepSeek model has recently been released. You can find information about it on Hugging Face.
https://preview.redd.it/g1m3lns4pzxe1.png?width=3572&format=png&auto=webp&s=412b93e8ccd03433e05c114ad27200f484ab3ec1
A new language model has been released: DeepSeek-Prover-V2.
This model is designed specifically for formal theorem proving in Lean 4. It uses advanced techniques involving recursive proof search and learning from both informal and formal mathematical reasoning.
The model, DeepSeek-Prover-V2-671B, shows strong performance on theorem proving benchmarks like MiniF2F-test and PutnamBench. A new benchmark called ProverBench, featuring problems from AIME and textbooks, was also introduced alongside the model.
This represents a significant step in using AI for mathematical theorem proving.
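For context, the kind of statement such a prover targets is an ordinary Lean 4 theorem. A trivial illustrative example (the theorem name here is made up; `Nat.add_comm` is from the Lean 4 core library, and real benchmark problems like MiniF2F are far harder):

```lean
-- Toy Lean 4 theorem: commutativity of addition on naturals.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A prover model is asked to fill in the proof term (or tactic script) given only the statement.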
| 2025-04-30T15:23:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbj6q3/a_new_deepseek_just_released/
|
Rare-Programmer-1747
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbj6q3
| false | null |
t3_1kbj6q3
|
/r/LocalLLaMA/comments/1kbj6q3/a_new_deepseek_just_released/
| false | false | 51 | null |
|
Local / Private voice agent via Ollama, Kokoro, Whisper, LiveKit
| 26 |
I built a totally local Speech-to-Speech agent that runs completely on CPU (mostly because I'm a mac user) with a combo of the following:
\- Whisper via Vox-box for STT: [https://github.com/gpustack/vox-box](https://github.com/gpustack/vox-box)
\- Ollama w/ Gemma3:4b for LLM: [https://ollama.com](https://ollama.com)
\- Kokoro via FastAPI by remsky for TTS: [https://github.com/remsky/Kokoro-FastAPI](https://github.com/remsky/Kokoro-FastAPI)
\- LiveKit Server for agent orchestration and transport: [https://github.com/livekit/livekit](https://github.com/livekit/livekit)
\- LiveKit Agents for all of the agent logic and gluing together the STT / LLM / TTS pipeline: [https://github.com/livekit/agents](https://github.com/livekit/agents)
\- The Web Voice Assistant template in Next.js: [https://github.com/livekit-examples/voice-assistant-frontend](https://github.com/livekit-examples/voice-assistant-frontend)
I used \`all-MiniLM-L6-v2\` as the embedding model and FAISS for efficient similarity search, both to optimize performance and minimize RAM usage.
Ollama tends to reload the model when switching between embedding and completion endpoints, so this approach avoids that issue. If anyone knows how to fix this, I might switch back to Ollama for embeddings, but I legit could not find the answer anywhere.
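For anyone curious what the embedding-plus-similarity-search step looks like in principle, here is a minimal stand-in sketch in plain Python (the document names, toy 3-dimensional vectors, and `search` helper are all hypothetical; the actual project uses 384-dimensional all-MiniLM-L6-v2 vectors and FAISS for the nearest-neighbour search):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" standing in for all-MiniLM-L6-v2 output (384 dims in reality).
index = {
    "ollama setup notes":  [1.0, 0.0, 0.0],
    "kokoro voice config": [0.0, 1.0, 0.0],
    "whisper stt guide":   [0.0, 0.0, 1.0],
}

def search(query_vec, index, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]), reverse=True)
    return ranked[:k]

print(search([0.9, 0.1, 0.0], index))  # -> ['ollama setup notes']
```

FAISS does the same nearest-neighbour lookup, just over many high-dimensional vectors with an optimized index instead of a linear scan.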
If you want, you could modify the project to use GPU as well—which would dramatically improve response speed, but then it will only run on Linux machines. Will probably ship some changes soon to make it easier.
There's some issues with WSL audio and network connections via Docker, so it doesn't work on Windows yet, but I'm hoping to get it working at some point (or I'm always happy to see PRs <3)
The repo: [https://github.com/ShayneP/local-voice-ai](https://github.com/ShayneP/local-voice-ai)
Run the project with \`./test.sh\`
If you run into any issues either drop a note on the repo or let me know here and I'll try to fix it!
| 2025-04-30T15:26:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbj97u/local_private_voice_agent_via_ollama_kokoro/
|
Shayps
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbj97u
| false | null |
t3_1kbj97u
|
/r/LocalLLaMA/comments/1kbj97u/local_private_voice_agent_via_ollama_kokoro/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': '5RBu-a35PVVyOWSGncUdV-swPyaSMIRVTtnzjl5jGn0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?width=108&crop=smart&auto=webp&s=36eb197f8fac220a2184d73c7c00dd26e1a71386', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?width=216&crop=smart&auto=webp&s=2dead942bd20ca1d4e3c5a97772f82768b1d2689', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?width=320&crop=smart&auto=webp&s=2ab50d671cf8fa0c7dbf6dc91471133dd1c1ee77', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?width=640&crop=smart&auto=webp&s=b147ad8453266ba8a140636e80eb02c2540616da', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?width=960&crop=smart&auto=webp&s=df5e876d7ae149986d13e163110d86c813acd30f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?width=1080&crop=smart&auto=webp&s=831bcd4fb7175f7109641c23701f0befba9daa43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F-Mzg9AGZf7KWBDEY1bQRaLHXnyPmyT5gpkTy1vNJMc.jpg?auto=webp&s=6f53b7a26f9d26fb5721e853a7fe70350c2f9bfb', 'width': 1200}, 'variants': {}}]}
|
Helium 1 2b - a kyutai Collection
| 29 |
Helium-1 is a lightweight language model with 2B parameters, targeting edge and mobile devices. It supports the 24 official languages of the European Union.
| 2025-04-30T15:32:02 |
https://huggingface.co/collections/kyutai/helium-1-2b-681237bbba8c1cf18a02e4bd
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbjej4
| false | null |
t3_1kbjej4
|
/r/LocalLLaMA/comments/1kbjej4/helium_1_2b_a_kyutai_collection/
| false | false | 29 |
{'enabled': False, 'images': [{'id': 'QGSNMQpf-8j3ipiGmF7rX526JsE7yvO0QTGBB-ou-AU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?width=108&crop=smart&auto=webp&s=51fb973e5bf7c8e41949bce9b4967ca01a71a8c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?width=216&crop=smart&auto=webp&s=e4cb3f1f976d3f32c2c2273209380e58f806212d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?width=320&crop=smart&auto=webp&s=87d6f865eda3c33e16bce05330f116d9ffef7ccb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?width=640&crop=smart&auto=webp&s=a10489c77442f1f8fb5f23e743700bff31393ef3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?width=960&crop=smart&auto=webp&s=c74cd1153fbbbf180c6c9462de5e7c8344641927', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?width=1080&crop=smart&auto=webp&s=170a7ec568ef5b049fb730e4739e91172cb1f60d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ePtxj9PkUESI5d-3TvU1YUDXSJQpvFzSn_KMe_ZLJM8.jpg?auto=webp&s=9dc8520157530bc6dce0611788caf768f6bd1286', 'width': 1200}, 'variants': {}}]}
|
|
Using Qwen3 for fill in the middle autocomplete (LM Studio/VS Code), best <fim> model?
| 1 |
[removed]
| 2025-04-30T15:38:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbjk7r/using_qwen3_for_fill_in_the_middle_autocomplete/
|
SureDevise
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbjk7r
| false | null |
t3_1kbjk7r
|
/r/LocalLLaMA/comments/1kbjk7r/using_qwen3_for_fill_in_the_middle_autocomplete/
| false | false |
self
| 1 | null |
5 Common Mistakes When Scaling AI Agents
| 1 |
[removed]
| 2025-04-30T16:28:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbkrxc/5_common_mistakes_when_scaling_ai_agents/
|
Nir777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbkrxc
| false | null |
t3_1kbkrxc
|
/r/LocalLLaMA/comments/1kbkrxc/5_common_mistakes_when_scaling_ai_agents/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '0kfVPCBA_rATawulsJkNtB40Zin1w_dD_pNFXqt4u_g', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/exQlu-ZuOi14EVbVfktG5Yes84nr42wQ2SxwG0lxUeY.jpg?width=108&crop=smart&auto=webp&s=a2ea54853e890a747359fc26daf993d405b0375f', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/exQlu-ZuOi14EVbVfktG5Yes84nr42wQ2SxwG0lxUeY.jpg?width=216&crop=smart&auto=webp&s=2b8adc1017416d8e7dcf828aead42704b2dfe540', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/exQlu-ZuOi14EVbVfktG5Yes84nr42wQ2SxwG0lxUeY.jpg?width=320&crop=smart&auto=webp&s=a8893dbaa22e747f1fc079e33ffe5f3df8ca0951', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/exQlu-ZuOi14EVbVfktG5Yes84nr42wQ2SxwG0lxUeY.jpg?width=640&crop=smart&auto=webp&s=e9e7f703d9a90b4b5c72ca30ef19ca4e0ed76e71', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/exQlu-ZuOi14EVbVfktG5Yes84nr42wQ2SxwG0lxUeY.jpg?width=960&crop=smart&auto=webp&s=0b37e277f217e75589a9ee0dc6af8ef4599b2faf', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/exQlu-ZuOi14EVbVfktG5Yes84nr42wQ2SxwG0lxUeY.jpg?auto=webp&s=e1ece29ee7afe6a7d99fe3dfab08ee1cd6af30f3', 'width': 1024}, 'variants': {}}]}
|
Qwen3-30B-A3B is on another level (Appreciation Post)
| 528 |
Model: Qwen3-30B-A3B-UD-Q4\_K\_XL.gguf | 32K Context (Max Output 8K) | 95 Tokens/sec
PC: Ryzen 7 7700 | 32GB DDR5 6000Mhz | RTX 3090 24GB VRAM | Windows 11 Pro x64
Okay, I just wanted to share my extreme satisfaction with this model. It is lightning fast and I can keep it on 24/7 (while using my PC normally - aside from gaming of course). There's no need for me to bring up ChatGPT or Gemini anymore for general inquiries, since it's always up and I don't need to load it up every time I want to use it. I have deleted all other LLMs from my PC as well. This is now the standard for me and I won't settle for anything less.
For anyone just starting to use it, it took a few variants of the model to find the right one. The Q4\_K\_M one was bugged and would stay in an infinite loop. The UD-Q4\_K\_XL variant didn't have that issue and works as intended.
There isn't any point to this post other than to give credit and voice my satisfaction to all the people involved that made this model and variant. Kudos to you. I no longer feel FOMO either of wanting to upgrade my PC (GPU, RAM, architecture, etc.). This model is fantastic and I can't wait to see how it is improved upon.
| 2025-04-30T16:32:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbkv2d/qwen330ba3b_is_on_another_level_appreciation_post/
|
Prestigious-Use5483
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbkv2d
| false | null |
t3_1kbkv2d
|
/r/LocalLLaMA/comments/1kbkv2d/qwen330ba3b_is_on_another_level_appreciation_post/
| false | false |
self
| 528 | null |
How can I run Qwen3-8b without thinking?
| 1 |
[removed]
| 2025-04-30T16:33:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbkvlq/how_can_i_run_qwen38b_without_thinking/
|
Verskop
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbkvlq
| false | null |
t3_1kbkvlq
|
/r/LocalLLaMA/comments/1kbkvlq/how_can_i_run_qwen38b_without_thinking/
| false | false |
self
| 1 | null |
JS/TS version of Google's ADK?
| 3 |
Has anyone ported [Google's Agent Development Kit](https://github.com/google/adk-python) to js/ts?
| 2025-04-30T16:33:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbkvwp/jsts_version_of_googles_adk/
|
behradkhodayar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbkvwp
| false | null |
t3_1kbkvwp
|
/r/LocalLLaMA/comments/1kbkvwp/jsts_version_of_googles_adk/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'fob3YvtnGKCw520W88HJwtJxctivzrQzRyTjJ6VdgSw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?width=108&crop=smart&auto=webp&s=7f98b6a496a5b81a90de837fbccdcd8969b8dcf1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?width=216&crop=smart&auto=webp&s=96710c2b72bfbfe33259e264caebf1ab5eb51bbd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?width=320&crop=smart&auto=webp&s=1b1b758f6673a0671b521b863c1e73277824374a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?width=640&crop=smart&auto=webp&s=e6dfa22d6df32b96c33f245a4da5ad64e0400262', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?width=960&crop=smart&auto=webp&s=9c2100770faf4bcdf386333ef0bd2215063583cf', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?width=1080&crop=smart&auto=webp&s=4854544e1aff35a98844d577e371e78afe0f609c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/OWqMzHMGcNw5z5BVoHE90l1cFvLya5JgkjyJkAmhxwY.jpg?auto=webp&s=7312db684dd7d34f37bf9c9b070b76b3afe6da59', 'width': 1200}, 'variants': {}}]}
|
Qwen just dropped an omnimodal model
| 215 |
>Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.
There are [3B](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) and [7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) variants.
| 2025-04-30T16:42:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbl3vv/qwen_just_dropped_an_omnimodal_model/
|
numinouslymusing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbl3vv
| false | null |
t3_1kbl3vv
|
/r/LocalLLaMA/comments/1kbl3vv/qwen_just_dropped_an_omnimodal_model/
| false | false |
self
| 215 |
{'enabled': False, 'images': [{'id': 'cs18oGXDVAThIb-T0LRZsl5rbNGcNc5_L8CTnu18M7s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=108&crop=smart&auto=webp&s=20de1cfb66b22fc2ae7600eb0150afc1f57bd7ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=216&crop=smart&auto=webp&s=edb8cde9210f434cbd429bc1361929ac303b523e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=320&crop=smart&auto=webp&s=456127215ab110b8f8095c560f64dfad1712d0f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=640&crop=smart&auto=webp&s=b355cd4af64e469bec4d086fb697e376fa807fe9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=960&crop=smart&auto=webp&s=302375dd30b2f7e5f8fa25ba8947969bcbba8239', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?width=1080&crop=smart&auto=webp&s=1b1be284001cf85511a80e1417513b72ad72e368', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VrKoYH0rSUXnh7ZB3s06wxaBwwhqJf3tLxecGQXrrRk.jpg?auto=webp&s=e6e1dc29f4c97dbb4a96d6a01c94dc1a7bdbc590', 'width': 1200}, 'variants': {}}]}
|
How can I run Qwen3-8b without thinking?
| 1 |
[removed]
| 2025-04-30T16:44:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbl55b/how_can_i_run_qwen38b_without_thinking/
|
Verskop
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbl55b
| false | null |
t3_1kbl55b
|
/r/LocalLLaMA/comments/1kbl55b/how_can_i_run_qwen38b_without_thinking/
| false | false |
self
| 1 | null |
GH200 vs RTX PRO 6000
| 6 |
How does the GH200 superchip compare to the RTX Pro 6000 series? How much VRAM is actually available for the GPU?
I found this website (https://gptshop.ai/config/indexus.html) offering a desktop workstation with the GH200 series for a bit over 40k, which for 624GB of VRAM seems great. A system with 4x RTX Pro 6000 is over 50k and has only a total of 384GB of VRAM. If I understood correctly, memory bandwidth is slower, so I'm guessing the 4x RTX Pro will be significantly faster. But I'm wondering what the actual performance difference will be.
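One rough way to compare the two for single-stream decoding is a memory-bandwidth back-of-envelope: generation speed for dense models is roughly bounded by usable bandwidth divided by the bytes streamed per token. The bandwidth and model-size figures below are illustrative assumptions only — check them against actual specs:

```python
# Decode speed for single-stream inference is roughly memory-bandwidth-bound:
# tokens/s ≈ usable_bandwidth / bytes_read_per_token (≈ model size for dense models).
def decode_ceiling_tps(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on tokens/sec if every token streams the full model from memory."""
    return bandwidth_gb_s / model_size_gb

# Illustrative numbers only (assumptions, not verified specs):
model_gb = 120  # e.g. a large model quantized to ~120 GB
print(decode_ceiling_tps(1790, model_gb))  # a single RTX PRO 6000-class GPU (~1.8 TB/s)
print(decode_ceiling_tps(500, model_gb))   # LPDDR5X-class CPU memory on a GH200
```

The takeaway is that where the weights live matters more than the total capacity: a model spilling into the GH200's LPDDR5X will decode far slower than one held entirely in GDDR7/HBM.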
Thanks!
| 2025-04-30T17:00:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kblite/gh200_vs_rtx_pro_6000/
|
Virtual-Ducks
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kblite
| false | null |
t3_1kblite
|
/r/LocalLLaMA/comments/1kblite/gh200_vs_rtx_pro_6000/
| false | false |
self
| 6 | null |
Applying chat template in finetuning
| 1 |
[removed]
| 2025-04-30T17:06:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kblo9s/applying_chat_template_in_finetuning/
|
commander-trex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kblo9s
| false | null |
t3_1kblo9s
|
/r/LocalLLaMA/comments/1kblo9s/applying_chat_template_in_finetuning/
| false | false |
self
| 1 | null |
Prompt eval speed of Qwen 30b moe slow
| 2 |
I don't know if it is actually a bug or something else, but the prompt eval speed in llama.cpp (newest version) for the MoE seems very low. I get about 500 tk/s in prompt eval time, which is approximately the same as for the dense 32b model. Before opening a bug report I wanted to check whether it's true that the eval speed should be much higher than for the dense model, or if I don't understand why it's lower.
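For context, here is a back-of-envelope sketch (my own assumption, not from llama.cpp docs) of why the MoE *could* be expected to be faster: per-token compute scales roughly with active parameters, and Qwen3-30B-A3B activates only about 3B of its 30B parameters per token:

```python
# Rough FLOPs-per-token estimate: compute scales with *active* parameters,
# so an MoE with few active params has a much lower theoretical cost per token.
dense_active = 32e9   # Qwen3-32B (dense): all parameters active
moe_active = 3e9      # Qwen3-30B-A3B: ~3B active parameters per token

theoretical_speedup = dense_active / moe_active
print(f"theoretical compute ceiling: ~{theoretical_speedup:.1f}x")
```

In practice prompt processing is batched and can bottleneck elsewhere (expert routing, memory traffic), so seeing far less than this ceiling is plausible — but parity with the dense model does look surprisingly low.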
| 2025-04-30T17:07:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kblpsj/prompt_eval_speed_of_qwen_30b_moe_slow/
|
Flashy_Management962
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kblpsj
| false | null |
t3_1kblpsj
|
/r/LocalLLaMA/comments/1kblpsj/prompt_eval_speed_of_qwen_30b_moe_slow/
| false | false |
self
| 2 | null |
Qwen 3 outputs reasoning instead of reply in LMStudio
| 0 |
How to fix that?
| 2025-04-30T17:20:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbm0j3/qwen_3_outputs_reasoning_instead_of_reply_in/
|
ImaginaryRea1ity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbm0j3
| false | null |
t3_1kbm0j3
|
/r/LocalLLaMA/comments/1kbm0j3/qwen_3_outputs_reasoning_instead_of_reply_in/
| false | false |
self
| 0 | null |
Building LocalLlama machine – Episode 1: Ancient 2008 Motherboard Meets Qwen 3
| 1 |
[removed]
| 2025-04-30T17:27:22 |
https://www.reddit.com/gallery/1kbm73t
|
jacek2023
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbm73t
| false | null |
t3_1kbm73t
|
/r/LocalLLaMA/comments/1kbm73t/building_localllama_machine_episode_1_ancient/
| false | false | 1 | null |
|
Building LocalLlama machine – Episode 1: Ancient 2008 Motherboard Meets Qwen 3
| 1 |
[removed]
| 2025-04-30T17:30:05 |
https://www.reddit.com/gallery/1kbm9h4
|
jacek2023
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbm9h4
| false | null |
t3_1kbm9h4
|
/r/LocalLLaMA/comments/1kbm9h4/building_localllama_machine_episode_1_ancient/
| false | false | 1 | null |
|
Building LocalLlama machine – Episode 1: Ancient 2008 Motherboard Meets Qwen 3
| 1 |
[removed]
| 2025-04-30T17:30:54 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbma8u
| false | null |
t3_1kbma8u
|
/r/LocalLLaMA/comments/1kbma8u/building_localllama_machine_episode_1_ancient/
| false | false |
default
| 1 | null |
||
Modular GPU Kernel Hackathon
| 1 | 2025-04-30T17:34:54 |
https://app.agihouse.org/events/modular-hackathon-20250510
|
carolinedfrasca
|
app.agihouse.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbmdmu
| false | null |
t3_1kbmdmu
|
/r/LocalLLaMA/comments/1kbmdmu/modular_gpu_kernel_hackathon/
| false | false |
default
| 1 | null |
|
Qwen 3 14B seems incredibly solid at coding.
| 369 |
"make pygame script of a hexagon rotating with balls inside it that are a bouncing around and interacting with hexagon and each other and are affected by gravity, ensure proper collisions"
| 2025-04-30T17:35:13 |
https://v.redd.it/mlsv22wyc0ye1
|
onil_gova
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbmdwx
| false |
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/mlsv22wyc0ye1/DASHPlaylist.mpd?a=1748626526%2CZDA0NTRlM2Q5N2UyN2Y3NGZjOTliMWE1NzM2MzI0OTRlZmY4NGM5Y2M1NzVmOTNlZGYyZDY0NjY1NjJiNjZlMA%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/mlsv22wyc0ye1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/mlsv22wyc0ye1/HLSPlaylist.m3u8?a=1748626526%2COTNjNTc3NzY1YzdhOWVhYmNjZjNhZWM3MDAwOGMzNjZlNjg0ZDM5YmI3N2U0YTY3MDU1MzcyMWFkODAxMjQ4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mlsv22wyc0ye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 662}}
|
t3_1kbmdwx
|
/r/LocalLLaMA/comments/1kbmdwx/qwen_3_14b_seems_incredibly_solid_at_coding/
| false | false | 369 |
{'enabled': False, 'images': [{'id': 'NDBlMG82d3ljMHllMZR9iOr37oSx87-h3PnfyveNYzdEy8XBRNglZvBXFTBN', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/NDBlMG82d3ljMHllMZR9iOr37oSx87-h3PnfyveNYzdEy8XBRNglZvBXFTBN.png?width=108&crop=smart&format=pjpg&auto=webp&s=d61c6f96dea8a5170220ba79cb9fd0c849ab75a3', 'width': 108}, {'height': 156, 'url': 'https://external-preview.redd.it/NDBlMG82d3ljMHllMZR9iOr37oSx87-h3PnfyveNYzdEy8XBRNglZvBXFTBN.png?width=216&crop=smart&format=pjpg&auto=webp&s=65edcb92847c4ee6a1ae3bed3ae4b3b1b6189e29', 'width': 216}, {'height': 232, 'url': 'https://external-preview.redd.it/NDBlMG82d3ljMHllMZR9iOr37oSx87-h3PnfyveNYzdEy8XBRNglZvBXFTBN.png?width=320&crop=smart&format=pjpg&auto=webp&s=558c86d8335383623416857a18139b3fd6e2be56', 'width': 320}, {'height': 464, 'url': 'https://external-preview.redd.it/NDBlMG82d3ljMHllMZR9iOr37oSx87-h3PnfyveNYzdEy8XBRNglZvBXFTBN.png?width=640&crop=smart&format=pjpg&auto=webp&s=89a1c2c502a8012312efa9676730b8357c7d1caa', 'width': 640}], 'source': {'height': 544, 'url': 'https://external-preview.redd.it/NDBlMG82d3ljMHllMZR9iOr37oSx87-h3PnfyveNYzdEy8XBRNglZvBXFTBN.png?format=pjpg&auto=webp&s=d501f5c238c5ae2d4370bf63ae1c68ee54821419', 'width': 750}, 'variants': {}}]}
|
|
Muyan-TTS: We built an open-source, low-latency, highly customizable TTS model for developers
| 96 |
Hi everyone, I'm a developer from the ChatPods team. Over the past year working on audio applications, we often ran into the same problem: open-source TTS models were either low quality or not fully open, making it hard to retrain and adapt. So we built [Muyan-TTS](https://github.com/MYZY-AI/Muyan-TTS), a fully open-source, low-cost model designed for easy fine-tuning and secondary development. The current version supports English best, as the training data is still relatively small. But we have open-sourced the entire training and data processing pipeline, so teams can easily adapt or expand it based on their needs. We also welcome feedback, discussions, and contributions.
# You can find the project here:
* arXiv paper: [https://arxiv.org/abs/2504.19146](https://arxiv.org/abs/2504.19146)
* GitHub: [https://github.com/MYZY-AI/Muyan-TTS](https://github.com/MYZY-AI/Muyan-TTS)
* HuggingFace weights:
* [https://huggingface.co/MYZY-AI/Muyan-TTS](https://huggingface.co/MYZY-AI/Muyan-TTS)
* [https://huggingface.co/MYZY-AI/Muyan-TTS-SFT](https://huggingface.co/MYZY-AI/Muyan-TTS-SFT)
Muyan-TTS provides full access to model weights, training scripts, and data workflows. There are two model versions: a Base model trained on multi-speaker audio data for zero-shot TTS, and an SFT model fine-tuned on single-speaker data for better voice cloning. We also release the training code from the base model to the SFT model for speaker adaptation. It runs efficiently, generating one second of audio in about 0.33 seconds on standard GPUs, and supports lightweight fine-tuning without needing large compute resources.
We focused on solving practical issues like long-form stability, easy retrainability, and efficient deployment. The model uses a fine-tuned LLaMA-3.2-3B as the semantic encoder and an optimized SoVITS-based decoder. Data cleaning is handled through pipelines built on Whisper, FunASR, and NISQA filtering.
https://preview.redd.it/69xh6uzvd0ye1.png?width=2670&format=png&auto=webp&s=f9cdf7f7a7620807a6283bd30f02ae39e7a984a9
https://preview.redd.it/it0ikfiwd0ye1.png?width=5490&format=png&auto=webp&s=af663748e8d0be6740f382a67fa17fec552df67d
Full code for each component is available in the [GitHub repo](https://github.com/MYZY-AI/Muyan-TTS).
# Performance Metrics
We benchmarked Muyan-TTS against popular open-source models on standard datasets (LibriSpeech, SEED):
https://preview.redd.it/4b2h4dn1e0ye1.png?width=1280&format=png&auto=webp&s=d9399772d4f80dd7fd8e2a352d21df7b26cf6633
# Demo
https://reddit.com/link/1kbmjh4/video/zffbozb4e0ye1/player
# Why Open-source This?
We believe that, just like Samantha in *Her*, voice will become a core way for humans to interact with AI — making it possible for everyone to have an AI companion they can talk to anytime. Muyan-TTS is only a small step in that direction. There's still a lot of room for improvement in model design, data preparation, and training methods. We hope that others who are passionate about speech technology, TTS, or real-time voice interaction will join us on this journey.
We’re looking forward to your feedback, ideas, and contributions. Feel free to open an issue, send a PR, or simply leave a comment.
| 2025-04-30T17:41:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbmjh4/muyantts_we_built_an_opensource_lowlatency_highly/
|
Ok-Sir-8964
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbmjh4
| false | null |
t3_1kbmjh4
|
/r/LocalLLaMA/comments/1kbmjh4/muyantts_we_built_an_opensource_lowlatency_highly/
| false | false | 96 |
{'enabled': False, 'images': [{'id': 'DNFERKsxUSFkdWJsmL4acMqe9mNzWDtH-u0Rta7nXlE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?width=108&crop=smart&auto=webp&s=187fe95dab95dce10117b3197131a0838111fb8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?width=216&crop=smart&auto=webp&s=9280df39113e22a6f1c9ce57f6b0cf0457943a40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?width=320&crop=smart&auto=webp&s=dedbb34a69d4148a900b0719d497a354e72245b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?width=640&crop=smart&auto=webp&s=30f8f15f1e5cd79c3a1b7a4ffea9de8c546a6cb8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?width=960&crop=smart&auto=webp&s=a037873cccb5c935a71534ddde5d0f574685dee1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?width=1080&crop=smart&auto=webp&s=438a4ade6b96fcbd073f4b8a68fae75292a86640', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Mo66MZ-iXVOMBQaXa2wqFwH6cSZDffH94n-_hKZPuCU.jpg?auto=webp&s=82ad7382fb9fb601cd2048131decb2918037f619', 'width': 1200}, 'variants': {}}]}
|
|
Integrating Agents with a chatbot over A2A
| 0 |
Doing some more experimentation with A2A and local LLMs by integrating multiple external agents into a demo chatbot to add specific capabilities.
Using a local LLM with tool calling to determine if an external agent should be called.
Does anyone have any good experiences to share around A2A? Do you think it will catch on as a complement to MCP?
Blog: [https://www.teachmecoolstuff.com/viewarticle/using-a2a-with-multiple-agents](https://www.teachmecoolstuff.com/viewarticle/using-a2a-with-multiple-agents)
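To make the routing idea concrete, here is a minimal sketch (hypothetical names, not the blog's actual code): each external A2A agent is exposed to the local LLM as a tool, and the chatbot delegates only when the model emits a tool call:

```python
# Minimal sketch of tool-call-based agent routing (illustrative only).
from typing import Any

# Each external A2A agent is described to the LLM as a tool (OpenAI-style schema).
AGENT_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "weather_agent",
            "description": "Answer questions about current weather.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

def dispatch(response: dict[str, Any]) -> str:
    """Route to an external agent if the model chose a tool, else reply directly."""
    calls = response.get("tool_calls") or []
    if calls:
        name = calls[0]["function"]["name"]
        return f"delegating to external agent: {name}"
    return response.get("content", "")

# A canned model response, standing in for an actual local-LLM call:
print(dispatch({"tool_calls": [{"function": {"name": "weather_agent"}}]}))
```

The nice property of this pattern is that the chatbot stays model-agnostic: adding a capability means registering one more tool schema, not retraining anything.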
| 2025-04-30T17:58:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbmy48/integrating_agents_with_a_chatbot_over_a2a/
|
funJS
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbmy48
| false | null |
t3_1kbmy48
|
/r/LocalLLaMA/comments/1kbmy48/integrating_agents_with_a_chatbot_over_a2a/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'FWfN3I_aSWWeBvz0kSnI6WbqDHPesaFYU-RKBgH0afY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=108&crop=smart&auto=webp&s=da9997765ffdce6ca201796ab450ea42756d1d0c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=216&crop=smart&auto=webp&s=ea41b38aedd79c58a6813685f21291c176101d31', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=320&crop=smart&auto=webp&s=b5500025b201a69d0e48c6deb445e29703975960', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?auto=webp&s=97bc29501ad66cbcae3ca84e14a5a8d3ddbca691', 'width': 400}, 'variants': {}}]}
|
OAuth for AI memories
| 0 |
Hey everyone, I worked on a fun weekend project.
I tried to build an OAuth layer that can extract memories from ChatGPT in a scoped way and offer those memories to 3rd parties for personalization.
This is just a PoC for now and it's not a product. I mainly worked on that because I wanted to spark a discussion around that topic.
Would love to know what you think!
[https://dudulasry.substack.com/p/oauth-for-ai-memories](https://dudulasry.substack.com/p/oauth-for-ai-memories)
| 2025-04-30T18:00:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbn0c0/oauth_for_ai_memories/
|
Old_Cauliflower6316
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbn0c0
| false | null |
t3_1kbn0c0
|
/r/LocalLLaMA/comments/1kbn0c0/oauth_for_ai_memories/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'ja_S_lXGoLNMHDQHZ_uGjxdHrz0xXbxKsnQ1DTr5Wro', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?width=108&crop=smart&auto=webp&s=bb16a6818dcd0b778b236aa9aaefab99993ba0c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?width=216&crop=smart&auto=webp&s=2a365be3677e830da77597c3242c2030a667b49c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?width=320&crop=smart&auto=webp&s=2418fed11a50aaceeb59741566f54c6755caf16a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?width=640&crop=smart&auto=webp&s=3f8e21e39b5899f3571a32070d67d9f3f3a2fdf8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?width=960&crop=smart&auto=webp&s=7951b4f1c20df9001474ed8a08e99b343457686c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?width=1080&crop=smart&auto=webp&s=150ec14f0d497a7c8929d616cd2697ba1956b8f9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2Vx2tTKCtJ_ZMBSoGGKUDrPR98n5kY5vA0TsuA0Jr9k.jpg?auto=webp&s=0656cbff1983663012e6de1035aac982a39d04d4', 'width': 1200}, 'variants': {}}]}
|
Self-Hosted Linux Assistant (Ollama) NLP to Shell commands. Analysis with of shell output, files and folders.
| 1 | 2025-04-30T18:06:52 |
https://github.com/Abyss-c0re/deepshell
|
Agreeable_Net6716
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbn5n3
| false | null |
t3_1kbn5n3
|
/r/LocalLLaMA/comments/1kbn5n3/selfhosted_linux_assistant_ollama_nlp_to_shell/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '2veFhX4MKY3cwrABkI3mw3fxbPqeir0ZcIJZIPOVNxA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?width=108&crop=smart&auto=webp&s=f7f057634bc98682ef6e80551f44023f11c2c778', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?width=216&crop=smart&auto=webp&s=a5fa51384e6fb08fab604a5186b4933518d0271c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?width=320&crop=smart&auto=webp&s=41982c57e158f5bf79102b21fa92137e235aff1c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?width=640&crop=smart&auto=webp&s=8ac84088e46ca37cc444c38d497b71b798960f0f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?width=960&crop=smart&auto=webp&s=2aa73883d1035709c383e5661d6afa5cb33f9589', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?width=1080&crop=smart&auto=webp&s=86f0d65e26078158e4f003eb7b503e7ffd08d863', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rQAyDagXDjCqlJgqwyrreu3cSAsAKrIt1zDLzNSL8aU.jpg?auto=webp&s=eb6ec0f92b0afa49077064d74fa20655f682e64c', 'width': 1200}, 'variants': {}}]}
|
||
Building LocalLlama machine – Episode 1: Ancient 2008 Motherboard Meets Qwen 3
| 1 |
[removed]
| 2025-04-30T18:15:35 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbnd7i
| false | null |
t3_1kbnd7i
|
/r/LocalLLaMA/comments/1kbnd7i/building_localllama_machine_episode_1_ancient/
| false | false |
default
| 1 | null |
||
Qwen3 on 2008 Motherboard
| 2 | 2025-04-30T18:16:32 |
https://www.reddit.com/gallery/1kbne2s
|
jacek2023
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbne2s
| false | null |
t3_1kbne2s
|
/r/LocalLLaMA/comments/1kbne2s/qwen3_on_2008_motherboard/
| false | false | 2 | null |
||
China has delivered , yet again
| 784 | 2025-04-30T18:17:15 |
TheLogiqueViper
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbneq2
| false | null |
t3_1kbneq2
|
/r/LocalLLaMA/comments/1kbneq2/china_has_delivered_yet_again/
| false | false | 784 |
{'enabled': True, 'images': [{'id': 'Gx8yw3bzZ4AFhHfuAW-6KOBlj9pOO_PSjC06KRe3oAY', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/e93wo9ljk0ye1.jpeg?width=108&crop=smart&auto=webp&s=386bbd92435326133c0a1608144bb005e6578cb4', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/e93wo9ljk0ye1.jpeg?width=216&crop=smart&auto=webp&s=d0532ad5bba8e13640823a0f9cbc680fa3884b43', 'width': 216}, {'height': 248, 'url': 'https://preview.redd.it/e93wo9ljk0ye1.jpeg?width=320&crop=smart&auto=webp&s=4a392ec5bcbbd8375856cf06d73abdd7d5938a4a', 'width': 320}, {'height': 496, 'url': 'https://preview.redd.it/e93wo9ljk0ye1.jpeg?width=640&crop=smart&auto=webp&s=cc424cefb35ca7f93a4d04370e1f1726a462c266', 'width': 640}], 'source': {'height': 674, 'url': 'https://preview.redd.it/e93wo9ljk0ye1.jpeg?auto=webp&s=b6ff08001b1ba8dd59a408412bf22a1618060e12', 'width': 869}, 'variants': {}}]}
|
|||
Qwen3-30B-A3B requirements question
| 1 |
[removed]
| 2025-04-30T18:22:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbnj0m/qwen330ba3b_requirements_question/
|
Outrageous-Source-49
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbnj0m
| false | null |
t3_1kbnj0m
|
/r/LocalLLaMA/comments/1kbnj0m/qwen330ba3b_requirements_question/
| false | false |
self
| 1 | null |
Qwen3 on 2008 Motherboard
| 56 |
Building LocalLlama machine – Episode 1: Ancient 2008 Motherboard Meets Qwen 3
My desktop is an i7-13700, RTX 3090, and 128GB of RAM. Models up to 24GB run well for me, but I feel like trying something bigger. I already tried connecting a second GPU (a 2070) to see if I could run larger models, but the problem turned out to be the case: my Define 7 doesn’t fit two large graphics cards. I could probably jam them in somehow, but why bother? I bought an **open-frame** case and started building the "LocalLlama supercomputer"!
I already ordered motherboard with 4x PCI-E 16x but first **let's have some fun**.
I was looking for information on how components other than the GPU affect LLMs. There’s a lot of theoretical info out there, but very few practical results. Since I'm a huge fan of Richard Feynman, instead of trusting the theory, I decided to test it myself.
The oldest computer I own was bought in **2008** (what were you doing in 2008?). It turns out the motherboard has two PCI-E x16 slots. I installed the latest Ubuntu on it, plugged **two 3060s** into the slots, and compiled `llama.cpp`. What happens when you connect GPUs to a very old motherboard and try to run the latest models on it? Let’s find out!
First, let’s see what kind of hardware we’re dealing with:
Machine: Type: Desktop System: MICRO-STAR product: MS-7345 v: 1.0 BIOS: American Megatrends v: 1.9 date: 07/07/2008
Memory: System RAM: total: 6 GiB available: 5.29 GiB used: 2.04 GiB (38.5%) CPU: Info: dual core model: Intel Core2 Duo E8400 bits: 64 type: MCP cache: L2: 6 MiB Speed (MHz): avg: 3006 min/max: N/A cores: 1: 3006 2: 3006
So we have a **dual-core processor from 2008 and 6GB of RAM.** A major issue with this motherboard is the **lack of an M.2 slot**. That means I have to load models via SATA — which results in the model taking **several minutes just to load**!
Since I’ve read a lot about issues with PCI lanes and how weak motherboards communicate with GPUs, I decided to run all tests using both cards — even for models that would fit on a single one.
The processor is **passively cooled**. The whole setup is very quiet, even though it’s an open-frame build. The only fans are in the power supply and the 3060 — but they barely spin at all.
So what are the results? (see screenshots)
Qwen\_Qwen3-8B-Q8\_0.gguf - **33 t/s**
Qwen\_Qwen3-14B-Q8\_0.gguf - **19 t/s**
Qwen\_Qwen3-30B-A3B-Q5\_K\_M.gguf - **47 t/s**
Qwen\_Qwen3-32B-Q4\_K\_M.gguf - **14 t/s**
Yes, it's slower than the RTX 3090 on the i7-13700 — but not as much as I expected. Remember, this is a motherboard from 2008, **17 years ago**.
I hope this is useful! I doubt anyone has a slower motherboard than mine ;)
In the next episode, it'll probably be an **X399 board with a 3090 + 3060 + 3060** (I need to test it before ordering a second 3090)
(I tried to post this 3 times; something went wrong, probably because of the post title.)
| 2025-04-30T18:29:08 |
https://www.reddit.com/gallery/1kbnoyj
|
jacek2023
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbnoyj
| false | null |
t3_1kbnoyj
|
/r/LocalLLaMA/comments/1kbnoyj/qwen3_on_2008_motherboard/
| false | false | 56 | null |
|
Any pit falls to Langchain to know before trying it?
| 0 |
I need a quick prototype and seemed very easy to learn. Of course there are limits with abstractions, but it seemed very useful from what they claim they provide.
A colleague complained about setting it up and running into issues, though I suspect that was more a developer issue than a tool issue, since I ran into no problems with it personally.
What should I know about using LangChain? My main questions are
1. Is it easy to work with custom models. Specifically things like Unsloth and my own fine tuned models.
2. Are the abstractions composable, or monolithic untamable beasts?
3. Is it good for agents?
4. Is using the computer vision part a thing in LangChain?
5. Is it a rug pull like Anaconda vibe?
(For those curious I need it to help automate tasks that I feel I always run out of time to do in the day doing it myself.)
| 2025-04-30T18:39:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbnxzn/any_pit_falls_to_langchain_to_know_before_trying/
|
I_will_delete_myself
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbnxzn
| false | null |
t3_1kbnxzn
|
/r/LocalLLaMA/comments/1kbnxzn/any_pit_falls_to_langchain_to_know_before_trying/
| false | false |
self
| 0 | null |
🔧 Designing a self-hosted, agent-first ChatCRM — looking for brain-dump on system design, vector DB vs Postgres, RAG strategy, agents & UI
| 1 |
[removed]
| 2025-04-30T18:44:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbo1z7/designing_a_selfhosted_agentfirst_chatcrm_looking/
|
Aggravating-Gap7783
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbo1z7
| false | null |
t3_1kbo1z7
|
/r/LocalLLaMA/comments/1kbo1z7/designing_a_selfhosted_agentfirst_chatcrm_looking/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ZmPVKnz3T1YTeYVtLHFZXRkNBUJNDjbee2GDi9mcGWU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?width=108&crop=smart&auto=webp&s=9e61a53fc6626e91d3927f6e1b82f7d2a2345895', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?width=216&crop=smart&auto=webp&s=36314d72f60a913c8babf12ef7913e431aa2b851', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?width=320&crop=smart&auto=webp&s=07fb74af1a0893727ecd75521a79a1dc08317de1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?width=640&crop=smart&auto=webp&s=1e48501828ebfb8a47fedb137b6c04c0c915a8b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?width=960&crop=smart&auto=webp&s=c1a1655367c423743a85a0c998b07e5e5a66219b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?width=1080&crop=smart&auto=webp&s=06cc6c7d85f9d66844471bc1804eba06ab49602b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JRgN8NqnCFF9DiH6WgGE1zKEsWRA4fttkY-sQbC6Miw.jpg?auto=webp&s=aedad8a52f1c62c7d5d4d40f655e9e30c3798570', 'width': 1200}, 'variants': {}}]}
|
🔧 Designing a self-hosted, agent-first ChatCRM — looking for brain-dump on system design, vector DB vs Postgres, RAG strategy, agents & UI
| 1 |
[removed]
| 2025-04-30T18:46:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbo489/designing_a_selfhosted_agentfirst_chatcrm_looking/
|
Aggravating-Gap7783
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbo489
| false | null |
t3_1kbo489
|
/r/LocalLLaMA/comments/1kbo489/designing_a_selfhosted_agentfirst_chatcrm_looking/
| false | false |
self
| 1 | null |